| Unnamed: 0 (int64, 0–15.9k) | cleaned_code (string, lengths 67–124k, ⌀) | cleaned_prompt (string, lengths 168–30.3k, ⌀) |
|---|---|---|
7,700
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
def trapz(f, a, b, N):
"""Integrate the function f(x) over the range [a,b] with N points."""
pts = np.linspace(a, b, N + 1)
vals = f(pts)
h = (b - a) / (1.0 * N)
area = .5 * h * sum(vals[0:N] + vals[1:(N+1)])
return area
#raise NotImplementedError()
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
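# Why the asserted values are not exact: the composite trapezoid rule has error
# (b - a) * h**2 / 12 * f''(xi). For f(x) = x**2 on [0, 1] with N = 1000 this is
# 1 * (1e-3)**2 / 12 * 2 ~ 1.67e-7, so trapz returns ~ 1/3 + 1.67e-7 = 0.3333335,
# matching the asserted 0.33333349999999995.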
def compare(f, a, b, N):
trapint = trapz(f, a, b, N)
quadint = integrate.quad(f, a, b)[0]
print("Trapezoid Rule: %f" % trapint)
print("Scipy: %f" % quadint)
print("Error: %f" % (quadint - trapint))
compare(f, 0, 1, 1000)
compare(g, 0, 1, 1000)
#raise NotImplementedError()
assert True # leave this cell to grade the previous one
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Trapezoidal rule
Step3: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
|
7,701
|
<ASSISTANT_TASK:>
Python Code:
class Base1:
pass
class Base2:
pass
class MultiDerived(Base1, Base2):
pass
class Base1:
def test(self):
print("in Base1 -> test")
class Base2:
def test(self):
print("in Base2 -> test")
class MultiDerived(Base1, Base2):
def test2(self):
super().test()
Base2.test(Base2)
class MultiDerived2(Base2, Base1):
def test2(self):
super().test()
Base2.test(Base2)
print("Please check the result of test()")
# d = Base2()
# print(type(d))
md = MultiDerived()
md.test2()
md.test()
print("*"*10)
md2 = MultiDerived2()
md2.test2()
md2.test()
class Base:
pass
class Derived1(Base):
pass
class Derived2(Derived1):
pass
class Base:
def test(self):
print("In Base test")
def test_alone(self):
print("In Base test: test_alone")
class Derived1(Base):
def test(self):
print("In Derived1 test")
super().test()
def test_alone(self, val):
print("In Derived1 test: test_alone ", val)
# def test_alone(self):
# print("In Base test: test_alone")
class Derived2(Derived1):
def test2(self):
print("in Derived2 test2")
obj = Derived2()
obj.test()
obj.test2()
obj.test_alone()
Base.test(Base)
class Base:
def test(self):
print("In Base test")
def test_base(self):
print("test_base")
def test_alone(self):
print("In Base test: test_alone")
class Derived1(Base):
def test(self):
print("In Derived1 test")
super().test()
self.test_base()
def test_alone(self, val):
print("In Derived1 test: test_alone ", val)
self.test()
obj = Derived1()
obj.test()
obj.test_alone("test")
Base.test(Base)
class Base:
def test(self):
print("In Base test")
def test_base(self):
print("test_base")
def test_alone(self):
print("In Base test: test_alone")
class Derived1(Base):
def test(self):
print("In Derived1 test")
super().test()
Base.test_base(Base)
def test_alone(self, val):
print("In Derived1 test: test_alone ", val)
self.test()
obj = Derived1()
obj.test()
obj.test_alone("test")
Base.test(Base)
class Base:
def test(self):
print("In Base test")
def test_alone(self):
print("In Base test: test_alone")
class Derived1(Base):
def test(self):
print("In Derived1 test")
super().test()
def test_alone(self, val):
print("In Derived1 test: test_alone ", val)
def test_alone(self):
print("In Base test: test_alone")
class Derived2(Derived1):
def test2(self):
print("in Derived2 test2")
obj = Derived2()
obj.test()
obj.test2()
obj.test_alone()
Base.test(Base)
class Base():
def test1(self):
print("In Base test")
class Derived1(Base):
def test(self):
print("In Derived1 test")
class Derived3(Derived1):
pass
d = Derived3()
d.test()
d.test1()
class Base():
def test(self):
print("In Base test")
class Derived1(Base):
def test(self):
print("In Derived1 test", end=", ")
return "Golu"
class Derived3(Derived1):
pass
d = Derived3()
print(d.test())
#### Explicitly calling function
class Base:
def test(self):
print("In Base test")
class Derived1(Base):
def test(self):
print("In Derived1 test")
class Derived2(Derived1):
pass
obj = Derived2
obj.test(obj)
Derived2.test(Derived2)
#### Explicitly calling function
class Base:
def test(self):
print("In Base test")
class Derived1(Base):
def test(self):
print("In Derived1 test")
print(type(self))
class Derived2(Derived1):
pass
obj = Derived2
obj.test(obj)
Derived2.test(Derived2)
#### Explicitly calling function
class Base:
def test(self):
print("In Base test")
class Derived1(Base):
def test(self):
print("In Derived1 test")
print(type(self))
class Derived2(Derived1):
pass
obj = Derived2()
obj.test()
Derived2.test(Derived2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multilevel Inheritance
Step2: In the multiple-inheritance scenario, any requested attribute is searched for first in the current class. If it is not found there, the search continues into the parent classes in depth-first, left-to-right fashion, without searching the same class twice.
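For example, a quick way to inspect the actual lookup order is to print the method resolution order of the classes defined above:
print(MultiDerived.__mro__)    # (MultiDerived, Base1, Base2, object)
print(MultiDerived2.__mro__)   # (MultiDerived2, Base2, Base1, object)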
|
7,702
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -v -d -u
import pyprind
n = 150000
bar = pyprind.ProgBar(n)
for i in range(n):
# do some computation
bar.update()
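# Each call to update() advances the bar by one iteration; once the loop
# finishes, pyprind reports the total time elapsed.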
n = 1500
perc = pyprind.ProgPercent(n)
for i in range(n):
# do some computation
perc.update()
for i in pyprind.prog_bar(range(n)):
# do something
pass
for i in pyprind.prog_percent(range(n)):
# do something
pass
n = 150000
bar = pyprind.ProgBar(n)
for i in range(n):
# do some computation
bar.update()
print(bar)
n = 150000
bar = pyprind.ProgBar(n, monitor=True, title='Job_1')
for i in range(n):
# do some computation
bar.update()
# print report for future reference
print(bar)
n = 150000
bar = pyprind.ProgBar(n, monitor=True)
for i in range(n):
# do some computation
bar.update()
print(bar)
n = 1500
perc = pyprind.ProgPercent(n, monitor=True)
for i in range(n):
# do some computation
perc.update()
print(perc)
n = 150000
bar = pyprind.ProgBar(n, title='My 1st Progress Bar')
for i in range(n):
# do some computation
bar.update()
n = 1500
perc = pyprind.ProgPercent(n, title='My 1st Percent Tracker')
for i in range(n):
# do some computation
perc.update()
n = 150000
bar = pyprind.ProgBar(n, width=10)
for i in range(n):
# do some computation
bar.update()
bar = pyprind.ProgBar(n, width=70)
for i in range(n):
# do some computation
bar.update()
n = 150000
bar = pyprind.ProgBar(n, stream=1)
for i in range(n):
# do some computation
bar.update()
bar = pyprind.ProgBar(n, stream=2)
for i in range(n):
# do some computation
bar.update()
import sys
bar = pyprind.ProgBar(n, stream=sys.stdout)
for i in range(n):
# do some computation
bar.update()
n = 150000
bar = pyprind.ProgBar(n)
for i in range(n):
# do some computation
if i == 10000:
bar.stop()
bar.update()
import time
items = ['file_%s.csv' %i for i in range(1,21)]
bar = pyprind.ProgBar(len(items))
for i in items:
time.sleep(0.5) # do some computation
bar.update(item_id = i)
import sys
sys.path = ['/Users/sebastian/github/pyprind/'] + sys.path
import pyprind
import time
n = 100
bar = pyprind.ProgBar(n, bar_char='█')
for i in range(n):
time.sleep(0.1) # do some computation
bar.update()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PyPrind demo
Step2: <a name='bar_basic'></a>
Step3: <a name='percent_basic'></a>
Step4: <a name='percent_basic'></a>
Step5: <a name='info'></a>
Step6: <a name='percent_cpu'></a>
Step7: <a name='percent_cpu'></a>
Step8: <a name='title'></a>
Step9: <a name='width'></a>
Step10: <a name='streams'></a>
Step11: <br>
Step12: <br>
Step13: <br>
|
7,703
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
import cirq
! [ -z "$(which qvm)" ] &&\
apt update &&\
apt install jq &&\
export LATEST_FOREST_SDK_VERSION=$(curl -s https://downloads.rigetti.com/qcs-sdk/versions | jq -r '.versions[0].sdk') &&\
curl -f https://downloads.rigetti.com/qcs-sdk/forest-sdk-$LATEST_FOREST_SDK_VERSION-linux-deb.tar.bz2 -o $PWD/forest-sdk-$LATEST_FOREST_SDK_VERSION-linux-deb.tar.bz2 &&\
tar -xf forest-sdk-$LATEST_FOREST_SDK_VERSION-linux-deb.tar.bz2 &&\
./forest-sdk-$LATEST_FOREST_SDK_VERSION-linux-deb/forest-sdk-$LATEST_FOREST_SDK_VERSION-linux-deb.run &&\
quilc --version &&\
qvm --version
import subprocess
subprocess.Popen(["qvm", "--quiet", "-S"])
subprocess.Popen(["quilc", "--quiet", "-R"])
bell_circuit = cirq.Circuit()
qubits = cirq.LineQubit.range(2)
bell_circuit.append(cirq.H(qubits[0]))
bell_circuit.append(cirq.CNOT(qubits[0], qubits[1]))
bell_circuit.append(cirq.measure(qubits[0], qubits[1], key='m'))
print(bell_circuit)
from cirq_rigetti import RigettiQCSService
quantum_processors = RigettiQCSService.list_quantum_processors().quantum_processors
processors_list = [quantum_processor.id for quantum_processor in quantum_processors]
print(processors_list)
from cirq_rigetti import circuit_transformers, circuit_sweep_executors, get_rigetti_qcs_service
SERVICE_NAME = processors_list[0]
print(SERVICE_NAME)
service = get_rigetti_qcs_service(SERVICE_NAME, as_qvm=True, noisy=False)
result = service.run(bell_circuit, repetitions=1000)
print(result.histogram(key='m'))
cirq.plot_state_histogram(result.histogram(key='m'))
from pyquil.quilbase import Reset, Pragma
def hook(program, measurement_id_map):
program._instructions.insert(0, Reset())
program._instructions.insert(1, Pragma('INITIAL_REWIRING', freeform_string='GREEDY'))
print(program)
return program, measurement_id_map
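# Note: the hook above prepends an active RESET and an INITIAL_REWIRING pragma to
# the compiled Quil program; it relies on the private _instructions list, whose
# layout may change between pyQuil versions.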
# assign qubits explicitly to hardware or virtual machine qubits.
qubit_id_map = {
qubits[0]: 4,
qubits[1]: 5,
}
executor = circuit_sweep_executors.with_quilc_compilation_and_cirq_parameter_resolution
transformer = circuit_transformers.build(qubit_id_map=qubit_id_map, qubits=qubits, post_transformation_hooks=[hook])
service = get_rigetti_qcs_service(SERVICE_NAME, as_qvm=True, executor=executor, transformer=transformer)
result = service.run(bell_circuit, repetitions=1000)
cirq.plot_state_histogram(result.histogram(key='m'))
from cirq_rigetti import get_rigetti_qcs_sampler
executor = circuit_sweep_executors.with_quilc_parametric_compilation
sampler = get_rigetti_qcs_sampler(SERVICE_NAME, as_qvm=True, executor=executor)
import sympy
qubit = cirq.LineQubit.range(1)[0]
circuit = cirq.Circuit(
cirq.H(qubit)**sympy.Symbol('t'),
cirq.measure(qubit, key='m'))
params = cirq.Linspace('t', start=0, stop=4, length=5)
print(circuit)
import pandas as pd
results = sampler.run_sweep(circuit, params, repetitions=10)
data = {f't={result.params["t"]}': [measurement[0] for measurement in result.measurements['m']] for result in results}
pd.DataFrame(data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Quantum circuits on Rigetti devices
Step2: Running this notebook requires the pyQuil QVM and Compiler. If you are running on Google Colab or a Linux Debian machine, you can run the cell below to install them if necessary. If you are on a non-Linux-Debian machine, see these instructions.
Step3: Next, run the pyQuil QVM and Compiler if they are not already running on their default ports.
Step4: Running a Bell state circuit
Step5: Next, we'll import RigettiQCSService and list available quantum processors.
Step6: For now, we'll instantiate the RigettiQCSService as a pyQuil Quantum Virtual Machine.
Step7: We'll use the built-in cirq.plot_state_histogram to visually verify the results of our Bell state.
Step8: As expected, we see states 0 (ie '00') and 3 (ie '11') as the dominant results.
Step9: Running a parameterized circuit
Step10: Next, we will initialize a parameterized circuit in Cirq along with a set of parameter values.
Step11: In much the same way that we invoke RigettiQCSService.run, we invoke RigettiQCSSampler.run_sweep with our parameters here.
|
7,704
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -v -m -a "Lilian Besson (Naereen)" -p numpy,numba -g
import numpy as np
eps = 1e-15 #: Threshold value: everything in [0, 1] is truncated to [eps, 1 - eps]
def klBern(x, y):
r"""Kullback-Leibler divergence for Bernoulli distributions. https://en.wikipedia.org/wiki/Bernoulli_distribution#Kullback.E2.80.93Leibler_divergence
.. math:: \mathrm{KL}(\mathcal{B}(x), \mathcal{B}(y)) = x \log(\frac{x}{y}) + (1-x) \log(\frac{1-x}{1-y}).
"""
x = min(max(x, eps), 1 - eps)
y = min(max(y, eps), 1 - eps)
return x * np.log(x / y) + (1 - x) * np.log((1 - x) / (1 - y))
klBern(0.5, 0.5)
klBern(0.1, 0.9)
klBern(0.9, 0.1)
klBern(0.4, 0.5)
klBern(0.01, 0.99)
klBern(0, 1)
def klBin(x, y, n):
r"""Kullback-Leibler divergence for Binomial distributions. https://math.stackexchange.com/questions/320399/kullback-leibner-divergence-of-binomial-distributions
- It is simply the n times :func:`klBern` on x and y.
.. math:: \mathrm{KL}(\mathrm{Bin}(x, n), \mathrm{Bin}(y, n)) = n \times \left(x \log(\frac{x}{y}) + (1-x) \log(\frac{1-x}{1-y}) \right).
.. warning:: The two distributions must have the same parameter n, and x, y are p, q in (0, 1).
"""
x = min(max(x, eps), 1 - eps)
y = min(max(y, eps), 1 - eps)
return n * (x * np.log(x / y) + (1 - x) * np.log((1 - x) / (1 - y)))
klBin(0.5, 0.5, 10)
klBin(0.1, 0.9, 10)
klBin(0.9, 0.1, 10)
klBin(0.4, 0.5, 10)
klBin(0.01, 0.99, 10)
klBin(0, 1, 10)
def klPoisson(x, y):
r"""Kullback-Leibler divergence for Poisson distributions. https://en.wikipedia.org/wiki/Poisson_distribution#Kullback.E2.80.93Leibler_divergence
.. math:: \mathrm{KL}(\mathrm{Poisson}(x), \mathrm{Poisson}(y)) = y - x + x \times \log(\frac{x}{y}).
"""
x = max(x, eps)
y = max(y, eps)
return y - x + x * np.log(x / y)
klPoisson(3, 3)
klPoisson(2, 1)
klPoisson(1, 2)
klPoisson(3, 6)
klPoisson(6, 8)
klPoisson(1, 0)
klPoisson(0, 0)
def klExp(x, y):
r"""Kullback-Leibler divergence for exponential distributions. https://en.wikipedia.org/wiki/Exponential_distribution#Kullback.E2.80.93Leibler_divergence
.. math::
\mathrm{KL}(\mathrm{Exp}(x), \mathrm{Exp}(y)) = \begin{cases}
\frac{x}{y} - 1 - \log(\frac{x}{y}) & \text{if} x > 0, y > 0\\
+\infty & \text{otherwise}
\end{cases}
"""
if x <= 0 or y <= 0:
return float('+inf')
else:
x = max(x, eps)
y = max(y, eps)
return x / y - 1 - np.log(x / y)
klExp(3, 3)
klExp(3, 6)
klExp(1, 2)
klExp(2, 1)
klExp(4, 2)
klExp(6, 8)
klExp(-3, 2)
klExp(3, -2)
klExp(-3, -2)
def klGamma(x, y, a=1):
r"""Kullback-Leibler divergence for gamma distributions. https://en.wikipedia.org/wiki/Gamma_distribution#Kullback.E2.80.93Leibler_divergence
- It is simply the a times :func:`klExp` on x and y.
.. math::
\mathrm{KL}(\Gamma(x, a), \Gamma(y, a)) = \begin{cases}
a \times \left( \frac{x}{y} - 1 - \log(\frac{x}{y}) \right) & \text{if} x > 0, y > 0\\
+\infty & \text{otherwise}
\end{cases}
.. warning:: The two distributions must have the same parameter a.
"""
if x <= 0 or y <= 0:
return float('+inf')
else:
x = max(x, eps)
y = max(y, eps)
return a * (x / y - 1 - np.log(x / y))
klGamma(3, 3)
klGamma(3, 6)
klGamma(1, 2)
klGamma(2, 1)
klGamma(4, 2)
klGamma(6, 8)
klGamma(-3, 2)
klGamma(3, -2)
klGamma(-3, -2)
def klNegBin(x, y, r=1):
r"""Kullback-Leibler divergence for negative binomial distributions. https://en.wikipedia.org/wiki/Negative_binomial_distribution
.. math:: \mathrm{KL}(\mathrm{NegBin}(x, r), \mathrm{NegBin}(y, r)) = r \times \log((r + x) / (r + y)) - x \times \log(y \times (r + x) / (x \times (r + y))).
.. warning:: The two distributions must have the same parameter r.
"""
x = max(x, eps)
y = max(y, eps)
return r * np.log((r + x) / (r + y)) - x * np.log(y * (r + x) / (x * (r + y)))
klNegBin(0.5, 0.5)
klNegBin(0.1, 0.9)
klNegBin(0.9, 0.1)
klNegBin(0.4, 0.5)
klNegBin(0.01, 0.99)
klBern(0, 1)
klNegBin(0.5, 0.5, r=2)
klNegBin(0.1, 0.9, r=2)
klNegBin(0.1, 0.9, r=4)
klNegBin(0.9, 0.1, r=2)
klNegBin(0.4, 0.5, r=2)
klNegBin(0.01, 0.99, r=2)
def klGauss(x, y, sig2x=0.25, sig2y=None):
r"""Kullback-Leibler divergence for Gaussian distributions of means ``x`` and ``y`` and variances ``sig2x`` and ``sig2y``, :math:`\nu_1 = \mathcal{N}(x, \sigma_x^2)` and :math:`\nu_2 = \mathcal{N}(y, \sigma_y^2)`:
.. math:: \mathrm{KL}(\nu_1, \nu_2) = \frac{(x - y)^2}{2 \sigma_y^2} + \frac{1}{2}\left( \frac{\sigma_x^2}{\sigma_y^2} - 1 - \log\left(\frac{\sigma_x^2}{\sigma_y^2}\right) \right).
See https://en.wikipedia.org/wiki/Normal_distribution#Other_properties
- By default, sig2y is assumed to be sig2x (same variance).
"""
if sig2y is None or - eps < (sig2y - sig2x) < eps:
return (x - y) ** 2 / (2. * sig2x)
else:
return (x - y) ** 2 / (2. * sig2y) + 0.5 * ((sig2x/sig2y)**2 - 1 - np.log(sig2x/sig2y))
klGauss(3, 3)
klGauss(3, 6)
klGauss(1, 2)
klGauss(2, 1)
klGauss(4, 2)
klGauss(6, 8)
klGauss(-3, 2)
klGauss(3, -2)
klGauss(-3, -2)
klGauss(3, 2)
klGauss(3, 3, sig2x=10)
klGauss(3, 6, sig2x=10)
klGauss(1, 2, sig2x=10)
klGauss(2, 1, sig2x=10)
klGauss(4, 2, sig2x=10)
klGauss(6, 8, sig2x=10)
klGauss(0, 0, sig2x=0.25, sig2y=0.5)
klGauss(0, 0, sig2x=0.25, sig2y=1.0)
klGauss(0, 0, sig2x=0.5, sig2y=0.25)
klGauss(0, 1, sig2x=0.25, sig2y=0.5)
klGauss(0, 1, sig2x=0.25, sig2y=1.0)
klGauss(0, 1, sig2x=0.5, sig2y=0.25)
klGauss(1, 0, sig2x=0.25, sig2y=0.5)
klGauss(1, 0, sig2x=0.25, sig2y=1.0)
klGauss(1, 0, sig2x=0.5, sig2y=0.25)
def klucb(x, d, kl, upperbound, lowerbound=float('-inf'), precision=1e-6, max_iterations=50):
"""The generic KL-UCB index computation.
- x: value of the cum reward,
- d: upper bound on the divergence,
- kl: the KL divergence to be used (:func:`klBern`, :func:`klGauss`, etc),
- upperbound, lowerbound=float('-inf'): the known bounds of the values x,
- precision=1e-6: the threshold from where to stop the search,
- max_iterations: max number of iterations of the loop (safer to bound it to reduce time complexity).
.. note:: It uses a **bisection search**, and one call to ``kl`` for each step of the bisection search.
"""
value = max(x, lowerbound)
u = upperbound
_count_iteration = 0
while _count_iteration < max_iterations and u - value > precision:
_count_iteration += 1
m = (value + u) / 2.
if kl(x, m) > d:
u = m
else:
value = m
return (value + u) / 2.
x, d = 0.9, 0.2
upperbound = 1
klucb(x, d, klBern, upperbound, lowerbound=0, precision=1e-3, max_iterations=10)
klucb(x, d, klBern, upperbound, lowerbound=0, precision=1e-6, max_iterations=10)
klucb(x, d, klBern, upperbound, lowerbound=0, precision=1e-3, max_iterations=50)
klucb(x, d, klBern, upperbound, lowerbound=0, precision=1e-6, max_iterations=100)
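# Interpretation: klucb(x, d, kl, ...) approximates the largest q <= upperbound
# such that kl(x, q) <= d, i.e. the upper confidence bound used by KL-UCB policies.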
def klucbGauss(x, d, sig2x=0.25, precision=0.):
"""KL-UCB index computation for Gaussian distributions.
- Note that it does not require any search.
.. warning:: it works only if the good variance constant is given.
"""
return x + np.sqrt(2 * sig2x * d)
klucbGauss(0.1, 0.2)
klucbGauss(0.5, 0.2)
klucbGauss(0.9, 0.2)
klucbGauss(0.1, 0.4)
klucbGauss(0.1, 0.9)
klucbGauss(0.5, 0.4)
klucbGauss(0.5, 0.9)
klucbGauss(0.9, 0.4)
klucbGauss(0.9, 0.9)
def klucbBern(x, d, precision=1e-6):
"""KL-UCB index computation for Bernoulli distributions, using :func:`klucb`."""
upperbound = min(1., klucbGauss(x, d, sig2x=0.25)) # variance 1/4 for [0,1] bounded distributions
# upperbound = min(1., klucbPoisson(x, d)) # also safe, and better ?
return klucb(x, d, klBern, upperbound, precision)
klucbBern(0.1, 0.2)
klucbBern(0.5, 0.2)
klucbBern(0.9, 0.2)
klucbBern(0.1, 0.4)
klucbBern(0.1, 0.9)
klucbBern(0.5, 0.4)
klucbBern(0.5, 0.9)
klucbBern(0.9, 0.4)
klucbBern(0.9, 0.9)
def klucbPoisson(x, d, precision=1e-6):
"""KL-UCB index computation for Poisson distributions, using :func:`klucb`."""
upperbound = x + d + np.sqrt(d * d + 2 * x * d) # looks safe, to check: left (Gaussian) tail of Poisson dev
return klucb(x, d, klPoisson, upperbound, precision)
klucbPoisson(0.1, 0.2)
klucbPoisson(0.5, 0.2)
klucbPoisson(0.9, 0.2)
klucbPoisson(0.1, 0.4)
klucbPoisson(0.1, 0.9)
klucbPoisson(0.5, 0.4)
klucbPoisson(0.5, 0.9)
klucbPoisson(0.9, 0.4)
klucbPoisson(0.9, 0.9)
def klucbExp(x, d, precision=1e-6):
"""KL-UCB index computation for exponential distributions, using :func:`klucb`."""
if d < 0.77: # XXX where does this value come from?
upperbound = x / (1 + 2. / 3 * d - np.sqrt(4. / 9 * d * d + 2 * d))
# safe, klexp(x,y) >= e^2/(2*(1-2e/3)) if x=y(1-e)
else:
upperbound = x * np.exp(d + 1)
if d > 1.61: # XXX where does this value come from?
lowerbound = x * np.exp(d)
else:
lowerbound = x / (1 + d - np.sqrt(d * d + 2 * d))
return klucb(x, d, klGamma, upperbound, lowerbound, precision)
klucbExp(0.1, 0.2)
klucbExp(0.5, 0.2)
klucbExp(0.9, 0.2)
klucbExp(0.1, 0.4)
klucbExp(0.1, 0.9)
klucbExp(0.5, 0.4)
klucbExp(0.5, 0.9)
klucbExp(0.9, 0.4)
klucbExp(0.9, 0.9)
from numba import jit
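# nopython=True asks numba to compile the whole function to machine code and to
# raise an error (instead of silently falling back to slow object mode) if it
# cannot, which is why it is requested for all the kl* functions below.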
@jit(nopython=True)
def klBern_numba(x, y):
x = min(max(x, eps), 1 - eps)
y = min(max(y, eps), 1 - eps)
return x * np.log(x / y) + (1 - x) * np.log((1 - x) / (1 - y))
@jit(nopython=True)
def klBin_numba(x, y, n):
x = min(max(x, eps), 1 - eps)
y = min(max(y, eps), 1 - eps)
return n * (x * np.log(x / y) + (1 - x) * np.log((1 - x) / (1 - y)))
@jit(nopython=True)
def klPoisson_numba(x, y):
x = max(x, eps)
y = max(y, eps)
return y - x + x * np.log(x / y)
@jit(nopython=True)
def klExp_numba(x, y):
if x <= 0 or y <= 0:
return np.inf  # bare 'inf' was undefined here
else:
x = max(x, eps)
y = max(y, eps)
return x / y - 1 - np.log(x / y)
@jit(nopython=True)
def klGamma_numba(x, y, a=1):
if x <= 0 or y <= 0:
return np.inf  # bare 'inf' was undefined here
else:
x = max(x, eps)
y = max(y, eps)
return a * (x / y - 1 - np.log(x / y))
@jit(nopython=True)
def klNegBin_numba(x, y, r=1):
x = max(x, eps)
y = max(y, eps)
return r * np.log((r + x) / (r + y)) - x * np.log(y * (r + x) / (x * (r + y)))
@jit(nopython=True)
def klGauss_numba(x, y, sig2x=0.25, sig2y=0.25):
if - eps < (sig2y - sig2x) and (sig2y - sig2x) < eps:
return (x - y) ** 2 / (2. * sig2x)
else:
return (x - y) ** 2 / (2. * sig2y) + 0.5 * ((sig2x/sig2y)**2 - 1 - np.log(sig2x/sig2y))
@jit
def klucb_numba(x, d, kl, upperbound,
lowerbound=float('-inf'), precision=1e-6, max_iterations=50):
value = max(x, lowerbound)
u = upperbound
_count_iteration = 0
while _count_iteration < max_iterations and u - value > precision:
_count_iteration += 1
m = (value + u) / 2.
if kl(x, m) > d:
u = m
else:
value = m
return (value + u) / 2.
x, d = 0.9, 0.2
upperbound = 1
klucb_numba(x, d, klBern_numba, upperbound, lowerbound=0, precision=1e-3, max_iterations=10)
klucb_numba(x, d, klBern_numba, upperbound, lowerbound=0, precision=1e-6, max_iterations=10)
klucb_numba(x, d, klBern_numba, upperbound, lowerbound=0, precision=1e-3, max_iterations=50)
klucb_numba(x, d, klBern_numba, upperbound, lowerbound=0, precision=1e-6, max_iterations=100)
@jit(nopython=True)
def klucbGauss_numba(x, d, sig2x=0.25, precision=0.):
return x + np.sqrt(2 * sig2x * d)
@jit
def klucbBern_numba(x, d, precision=1e-6):
upperbound = min(1., klucbGauss_numba(x, d, sig2x=0.25)) # variance 1/4 for [0,1] bounded distributions
# upperbound = min(1., klucbPoisson(x, d)) # also safe, and better ?
return klucb_numba(x, d, klBern_numba, upperbound, precision)
@jit
def klucbPoisson_numba(x, d, precision=1e-6):
upperbound = x + d + np.sqrt(d * d + 2 * x * d) # looks safe, to check: left (Gaussian) tail of Poisson dev
return klucb_numba(x, d, klPoisson_numba, upperbound, precision)
@jit
def klucbExp_numba(x, d, precision=1e-6):
if d < 0.77: # XXX where does this value come from?
upperbound = x / (1 + 2. / 3 * d - np.sqrt(4. / 9 * d * d + 2 * d))
# safe, klexp(x,y) >= e^2/(2*(1-2e/3)) if x=y(1-e)
else:
upperbound = x * np.exp(d + 1)
if d > 1.61: # XXX where does this value come from?
lowerbound = x * np.exp(d)
else:
lowerbound = x / (1 + d - np.sqrt(d * d + 2 * d))
return klucb_numba(x, d, klGamma_numba, upperbound, lowerbound, precision)
%load_ext cython
def some_loop(n: int) -> int:
s = 0
for i in range(0, n, 2):
s += i
return s
%%cython
def some_loop_cython(int n) -> int:
cdef int s = 0
cdef int i = 0
for i in range(0, n, 2):
s += i
return s
%timeit np.random.randint(1000)
%timeit some_loop(np.random.randint(1000))
%timeit some_loop_cython(np.random.randint(1000))
14.6 / 2.21
(14.6 - 1.95) / (2.21 - 1.95)
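# The raw ratio 14.6 / 2.21 ~ 6.6 still contains the ~1.95 µs that
# np.random.randint costs in *both* timings; subtracting that shared overhead,
# (14.6 - 1.95) / (2.21 - 1.95) ~ 48.7, gives the speed-up of the loop itself
# (numbers read off the %timeit cells above).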
%%cython
from libc.math cimport log
eps = 1e-15 #: Threshold value: everything in [0, 1] is truncated to [eps, 1 - eps]
def klBern_cython(float x, float y) -> float:
x = min(max(x, eps), 1 - eps)
y = min(max(y, eps), 1 - eps)
return x * log(x / y) + (1 - x) * log((1 - x) / (1 - y))
%%cython
from libc.math cimport log
eps = 1e-15 #: Threshold value: everything in [0, 1] is truncated to [eps, 1 - eps]
def klBin_cython(float x, float y, int n) -> float:
x = min(max(x, eps), 1 - eps)
y = min(max(y, eps), 1 - eps)
return n * (x * log(x / y) + (1 - x) * log((1 - x) / (1 - y)))
%%cython
from libc.math cimport log
eps = 1e-15 #: Threshold value: everything in [0, 1] is truncated to [eps, 1 - eps]
def klPoisson_cython(float x, float y) -> float:
x = max(x, eps)
y = max(y, eps)
return y - x + x * log(x / y)
%%cython
from libc.math cimport log
eps = 1e-15 #: Threshold value: everything in [0, 1] is truncated to [eps, 1 - eps]
def klExp_cython(float x, float y) -> float:
if x <= 0 or y <= 0:
return float('+inf')
else:
x = max(x, eps)
y = max(y, eps)
return x / y - 1 - log(x / y)
%%cython
from libc.math cimport log
eps = 1e-15 #: Threshold value: everything in [0, 1] is truncated to [eps, 1 - eps]
def klGamma_cython(float x, float y, float a=1) -> float:
if x <= 0 or y <= 0:
return float('+inf')
else:
x = max(x, eps)
y = max(y, eps)
return a * (x / y - 1 - log(x / y))
%%cython
from libc.math cimport log
eps = 1e-15 #: Threshold value: everything in [0, 1] is truncated to [eps, 1 - eps]
def klNegBin_cython(float x, float y, float r=1) -> float:
x = max(x, eps)
y = max(y, eps)
return r * log((r + x) / (r + y)) - x * log(y * (r + x) / (x * (r + y)))
%%cython
from libc.math cimport log
eps = 1e-15 #: Threshold value: everything in [0, 1] is truncated to [eps, 1 - eps]
def klGauss_cython(float x, float y, float sig2x=0.25, float sig2y=0.25) -> float:
if - eps < (sig2y - sig2x) < eps:
return (x - y) ** 2 / (2. * sig2x)
else:
return (x - y) ** 2 / (2. * sig2y) + 0.5 * ((sig2x/sig2y)**2 - 1 - log(sig2x/sig2y))
%%cython
from libc.math cimport sqrt, log, exp
eps = 1e-15 #: Threshold value: everything in [0, 1] is truncated to [eps, 1 - eps]
def klucbGauss_cython(float x, float d, float sig2x=0.25, float precision=0.) -> float:
return x + sqrt(2 * sig2x * d)
cdef float klucbGauss_cython_x(float x, float d, float sig2x=0.25, float precision=0.):
return x + sqrt(2 * sig2x * d)
def klucb_cython(float x, float d, kl, float upperbound,
float lowerbound=float('-inf'),
float precision=1e-6, int max_iterations=50) -> float:
cdef float value = max(x, lowerbound)
cdef float u = upperbound
cdef int _count_iteration = 0
cdef float m = 0
while _count_iteration < max_iterations and u - value > precision:
_count_iteration += 1
m = (value + u) / 2.
if kl(x, m) > d:
u = m
else:
value = m
return (value + u) / 2.
cdef float klBern_cython_x(float x, float y):
x = min(max(x, eps), 1 - eps)
y = min(max(y, eps), 1 - eps)
return x * log(x / y) + (1 - x) * log((1 - x) / (1 - y))
def klucbBern_cython(float x, float d, float precision=1e-6) -> float:
cdef float upperbound = min(1., klucbGauss_cython_x(x, d, sig2x=0.25)) # variance 1/4 for [0,1] bounded distributions
# upperbound = min(1., klucbPoisson(x, d)) # also safe, and better ?
return klucb_cython(x, d, klBern_cython_x, upperbound, precision)
cdef float klPoisson_cython_x(float x, float y):
x = max(x, eps)
y = max(y, eps)
return y - x + x * log(x / y)
def klucbPoisson_cython(float x, float d, float precision=1e-6) -> float:
cdef float upperbound = x + d + sqrt(d * d + 2 * x * d) # looks safe, to check: left (Gaussian) tail of Poisson dev
return klucb_cython(x, d, klPoisson_cython_x, upperbound, precision)
cdef float klGamma_cython_x(float x, float y):
if x <= 0 or y <= 0:
return float('+inf')
else:
x = max(x, eps)
y = max(y, eps)
return x / y - 1 - log(x / y)
def klucbExp_cython(float x, float d, float precision=1e-6) -> float:
cdef float upperbound = 1
cdef float lowerbound = 0
if d < 0.77: # XXX where does this value come from?
upperbound = x / (1 + 2. / 3 * d - sqrt(4. / 9 * d * d + 2 * d))
# safe, klexp(x,y) >= e^2/(2*(1-2e/3)) if x=y(1-e)
else:
upperbound = x * exp(d + 1)
if d > 1.61: # XXX where does this value come from?
lowerbound = x * exp(d)
else:
lowerbound = x / (1 + d - sqrt(d * d + 2 * d))
return klucb_cython(x, d, klGamma_cython_x, upperbound, lowerbound, precision)
x, d = 0.9, 0.2
upperbound = 1
klucb_cython(x, d, klBern_cython, upperbound, lowerbound=0, precision=1e-3, max_iterations=10)
klucb_cython(x, d, klBern_cython, upperbound, lowerbound=0, precision=1e-6, max_iterations=10)
klucb_cython(x, d, klBern_cython, upperbound, lowerbound=0, precision=1e-3, max_iterations=50)
klucb_cython(x, d, klBern_cython, upperbound, lowerbound=0, precision=1e-6, max_iterations=100)
%%bash
ls -larth *kullback*
[ -f kullback.py ] && mv -vf kullback.py kullback.py.old
!ls -larth kullback*.so
import kullback
help(kullback.klBern)
[ s for s in dir(kullback) if not s.startswith('_') ]
klBern_c = kullback.klBern
klBin_c = kullback.klBin
klExp_c = kullback.klExp
klGamma_c = kullback.klGamma
klGauss_c = kullback.klGauss
klPoisson_c = kullback.klPoisson
klucbBern_c = kullback.klucbBern
klucbExp_c = kullback.klucbExp
klucbGamma_c = kullback.klucbGamma
klucbGauss_c = kullback.klucbGauss
klucbPoisson_c = kullback.klucbPoisson
r = np.random.random
rn = lambda: np.random.randint(1000)
%timeit (r(), r())
%timeit (r(), r(), rn())
def test_fs(fs, inputs, tolerance=1e-5, nb_tests=100):
for _ in range(nb_tests):
args = inputs()
ref_f = fs[0] # Python version
output = ref_f(*args)
for other_f in fs[1:]:
other_output = other_f(*args)
if abs(output) > 1:
rel_diff = (output - other_output) / output
else:
rel_diff = (output - other_output)
assert abs(rel_diff) <= tolerance, "Error: function {} gave {} and function {} gave {} on inputs {}, and the two outputs are too different.".format(ref_f, output, other_f, other_output, args)
test_fs([klBern, klBern_numba, klBern_cython, klBern_c], lambda: (r(), r()))
%timeit klBern(r(), r())
%timeit klBern_numba(r(), r())
%timeit klBern_cython(r(), r())
%timeit klBern_c(r(), r())
(6280 - 576) / (1000 - 576) # for Python vs numba
(6280 - 576) / (882 - 576) # for Python vs Cython
(6280 - 576) / (811 - 576) # for Python vs C
test_fs([klBin, klBin_numba, klBin_cython, klBin_c], lambda: (r(), r(), rn()))
test_fs([klBin, klBin_numba, klBin_cython, klBin_c], lambda: (r(), r(), rn()), tolerance=1e-3)
%timeit klBin(r(), r(), rn())
%timeit klBin_numba(r(), r(), rn())
%timeit klBin_cython(r(), r(), rn())
%timeit klBin_c(r(), r(), rn())
(7005 - 2350) / (3070 - 2350) # for Python vs numba
(7005 - 2350) / (3331 - 2350) # for Python vs Cython
(7005 - 2350) / (2980 - 2350) # for Python vs C
test_fs([klPoisson, klPoisson_numba, klPoisson_cython, klPoisson_c], lambda: (r(), r()))
%timeit klPoisson(r(), r())
%timeit klPoisson_numba(r(), r())
%timeit klPoisson_cython(r(), r())
%timeit klPoisson_c(r(), r())
(2350 - 576) / (935 - 576) # for Python vs numba
(2350 - 576) / (859 - 576) # for Python vs Cython
(2350 - 576) / (811 - 576) # for Python vs C
test_fs([klExp, klExp_numba, klExp_cython, klExp_c], lambda: (r(), r()))
%timeit klExp(r(), r())
%timeit klExp_numba(r(), r())
%timeit klExp_cython(r(), r())
%timeit klExp_c(r(), r())
(2210 - 576) / (1070 - 576) # for Python vs numba
(2210 - 576) / (842 - 576) # for Python vs Cython
(2210 - 576) / (869 - 576) # for Python vs C
klGamma_c = lambda x, y: kullback.klGamma(x, y, 1)
test_fs([klGamma, klGamma_numba, klGamma_cython, klGamma_c], lambda: (r(), r()))
%timeit klGamma(r(), r())
%timeit klGamma_numba(r(), r())
%timeit klGamma_cython(r(), r())
%timeit klGamma_c(r(), r())
(2700 - 576) / (1070 - 576) # for Python vs numba
(2700 - 576) / (889 - 576) # for Python vs Cython
(2700 - 576) / (997 - 576) # for Python vs C
test_fs([klNegBin, klNegBin_numba, klNegBin_cython], lambda: (r(), r()))
%timeit klNegBin(r(), r())
%timeit klNegBin_numba(r(), r())
%timeit klNegBin_cython(r(), r())
(3890 - 576) / (1160 - 576) # for Python vs numba
(3890 - 576) / (901 - 576) # for Python vs Cython
klGauss_c = lambda x, y: kullback.klGauss(x, y, 0.25)
test_fs([klGauss, klGauss_numba, klGauss_cython, klGauss_c], lambda: (r(), r()))
%timeit klGauss(r(), r())
%timeit klGauss_numba(r(), r())
%timeit klGauss_cython(r(), r())
%timeit klGauss_c(r(), r())
(852 - 576) / (1070 - 576) # for Python vs numba
(852 - 576) / (745 - 576) # for Python vs Cython
(852 - 576) / (911 - 576) # for Python vs C
klucbGauss_c = lambda x, y: kullback.klucbGauss(x, y, 0.25)
test_fs([klucbGauss, klucbGauss_numba, klucbGauss_cython, klucbGauss_c], lambda: (r(), r()))
%timeit klucbGauss(r(), r())
%timeit klucbGauss_numba(r(), r())
%timeit klucbGauss_cython(r(), r())
%timeit klucbGauss_c(r(), r())
(1960 - 576) / (31300 - 576) # for Python vs numba
(1960 - 576) / (676 - 576) # for Python vs Cython
(1960 - 576) / (939 - 576) # for Python vs C
klucbBern_c = lambda x, y: kullback.klucbBern(x, y, 1e-6)
test_fs([klucbBern, klucbBern_numba, klucbBern_cython, klucbBern_c], lambda: (r(), r()))
%timeit klucbBern(r(), r())
%timeit klucbBern_numba(r(), r())
%timeit klucbBern_cython(r(), r())
%timeit klucbBern_c(r(), r())
(91900 - 576) / (170000 - 576) # for Python vs numba
(91900 - 576) / (6930 - 576) # for Python vs Cython
(91900 - 576) / abs(314 - 576) # for Python vs C
klucbPoisson_c = lambda x, y: kullback.klucbPoisson(x, y, 1e-6)
test_fs([klucbPoisson, klucbPoisson_numba, klucbPoisson_cython, klucbPoisson_c], lambda: (r(), r()))
%timeit klucbPoisson(r(), r())
%timeit klucbPoisson_numba(r(), r())
%timeit klucbPoisson_cython(r(), r())
%timeit klucbPoisson_c(r(), r())
(72600 - 576) / (167000 - 576) # for Python vs numba
(72600 - 576) / (5330 - 576) # for Python vs Cython
(72600 - 576) / (2180 - 576) # for Python vs C
klucbExp_c = lambda x, y: kullback.klucbExp(x, y, 1e-6)
test_fs([klucbExp, klucbExp_numba, klucbExp_cython, klucbExp_c], lambda: (r(), r()))
%timeit klucbExp(r(), r())
%timeit klucbExp_numba(r(), r())
%timeit klucbExp_cython(r(), r())
%timeit klucbExp_c(r(), r())
(78700 - 576) / (156000 - 576) # for Python vs numba
(78700 - 576) / (4410 - 576) # for Python vs Cython
(78700 - 576) / (2040 - 576) # for Python vs C
%%bash
[ -f kullback.py.old ] && mv -vf kullback.py.old kullback.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: KL divergences and KL-UCB indexes, in naive Python
Step3: I will include docstrings and examples only for the naive implementation.
Step5: Binomial distributions
Step7: Poisson distributions
Step9: Exponential distributions
Step11: Gamma distributions
Step13: Negative binomial distributions
Step15: Gaussian distributions
Step17: Generic KL-UCB indexes, with a bisection search
Step18: For example, for klucbBern, the two steps are to first compute an upperbound (as precise as possible) and then compute the kl-UCB index
Step20: Distribution-specific KL-UCB indexes
Step22: Bernoulli
Step24: Poisson
Step26: Exponential
Step27: Others
Step28: As much as possible, one should call @jit(nopython=True) to be sure that numba does not fall back silently to naive Python code. With nopython=True, any call to the generated function will fail if the compilation could not succeed.
Step29: Binomial distributions
Step30: Poisson distributions
Step31: Exponential distributions
Step32: Gamma distributions
Step33: Negative binomial distributions
Step34: Gaussian distributions
Step35: Generic KL-UCB indexes, with a bisection search
Step36: For example, for klucbBern, the two steps are to first compute an upperbound (as precise as possible) and then compute the kl-UCB index
Step37: Distribution-specific KL-UCB indexes
Step38: Bernoulli
Step39: Poisson
Step40: Exponential
Step41: With Cython
Step42: A cell can now be written in Cython.
Step43: Here we observe a large speed-up. But how large? $6$ times or $50$ times?
Step44: But if we remove the time taken by the PRNG (which takes the same time for both the naive Python and the Cython function), we get a larger speed-up, closer to reality, about $50$ times and not just $6$ times faster!
Step45: KL divergences
Step46: Binomial distributions
Step47: Poisson distributions
Step48: Exponential distributions
Step49: Gamma distributions
Step50: Negative binomial distributions
Step51: Gaussian distributions
Step52: Generic KL-UCB indexes, with a bisection search
Step53: For example, for klucbBern_cython, the two steps are to first compute an upperbound (as precise as possible) and then compute the kl-UCB index
Step54: With the C API for Python
Step55: If you want to reproduce this notebook, download the kullback_py3.c and follow the build instructions.
Step56: The time to generate random numbers like this is small, but not zero!
Step57:
Step58: This is a speed-up ratio of about $12$ times faster for Numba and Cython, and $25$ times faster for the C version.
Step59: Binomial
Step60: Too much numerical difference? Let's try again with a larger tolerance
Step61: This is a speed-up ratio of about $5$ times faster for both Numba and Cython. Not so great, but still something!
Step62: Poisson
Step63: This is a speed-up ratio of about $7.5$ times faster for Numba, and about $7$ times for Cython and C.
Step64: Exponential
Step65: This is a speed-up ratio of about $3$ times faster for Numba and $6$ times faster for Cython.
Step66: Gamma
Step67: This is a speed-up ratio of about $6$ times faster for Numba, and $6$ or $5$ times faster for Cython and C.
Step68: Negative binomial
Step69: This is a speed-up ratio of about $5$ times faster for Numba and $10$ times faster for Cython.
Step70: Gaussian
Step71: This is a speed-up ratio of about $45$ times faster for Cython, but Numba fails completely here!
Step72: KL-UCB indexes
Step73: This is a speed-up ratio of about $14$ times faster for Cython and $4$ times for C, and one more failure case for Numba.
Step74: Bernoulli
Step75: This is a speed-up ratio of about $15$ times faster for Cython, and one more failure case for Numba.
Step76: Poisson
Step77: This is a speed-up ratio of about $15$ times faster for Cython, and one more failure case for Numba.
Step78: Exponential
Step79: This is a speed-up ratio of about $17$ times faster for Cython, and one more failure case for Numba.
Step80: Clean up
|
7,705
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/openai/baselines >/dev/null
!pip install gym >/dev/null
import numpy as np
import random
import gym
from gym.utils import seeding
from gym import spaces
def state_name_to_int(state):
state_name_map = {
'S': 0,
'A': 1,
'B': 2,
'C': 3,
'D': 4,
'E': 5,
'F': 6,
'G': 7,
'H': 8,
'K': 9,
'L': 10,
'M': 11,
'N': 12,
'O': 13
}
return state_name_map[state]
def int_to_state_name(state_as_int):
state_map = {
0: 'S',
1: 'A',
2: 'B',
3: 'C',
4: 'D',
5: 'E',
6: 'F',
7: 'G',
8: 'H',
9: 'K',
10: 'L',
11: 'M',
12: 'N',
13: 'O'
}
return state_map[state_as_int]
class BeraterEnv(gym.Env):
"""The Berater Problem
Actions:
There are 4 discrete deterministic actions, each choosing one direction
"""
metadata = {'render.modes': ['ansi']}
showStep = False
showDone = True
envEpisodeModulo = 100
def __init__(self):
# self.map = {
# 'S': [('A', 100), ('B', 400), ('C', 200 )],
# 'A': [('B', 250), ('C', 400), ('S', 100 )],
# 'B': [('A', 250), ('C', 250), ('S', 400 )],
# 'C': [('A', 400), ('B', 250), ('S', 200 )]
# }
self.map = {
'S': [('A', 300), ('B', 100), ('C', 200 )],
'A': [('S', 300), ('B', 100), ('E', 100 ), ('D', 100 )],
'B': [('S', 100), ('A', 100), ('C', 50 ), ('K', 200 )],
'C': [('S', 200), ('B', 50), ('M', 100 ), ('L', 200 )],
'D': [('A', 100), ('F', 50)],
'E': [('A', 100), ('F', 100), ('H', 100)],
'F': [('D', 50), ('E', 100), ('G', 200)],
'G': [('F', 200), ('O', 300)],
'H': [('E', 100), ('K', 300)],
'K': [('B', 200), ('H', 300)],
'L': [('C', 200), ('M', 50)],
'M': [('C', 100), ('L', 50), ('N', 100)],
'N': [('M', 100), ('O', 100)],
'O': [('N', 100), ('G', 300)]
}
max_paths = 4
self.action_space = spaces.Discrete(max_paths)
positions = len(self.map)
# observations: position, reward of all 4 local paths, rest reward of all locations
# non existing path is -1000 and no position change
# look at what #getObservation returns if you are confused
low = np.append(np.append([0], np.full(max_paths, -1000)), np.full(positions, 0))
high = np.append(np.append([positions - 1], np.full(max_paths, 1000)), np.full(positions, 1000))
self.observation_space = spaces.Box(low=low,
high=high,
dtype=np.float32)
self.reward_range = (-1, 1)
self.totalReward = 0
self.stepCount = 0
self.isDone = False
self.envReward = 0
self.envEpisodeCount = 0
self.envStepCount = 0
self.reset()
self.optimum = self.calculate_customers_reward()
def seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def iterate_path(self, state, action):
paths = self.map[state]
if action < len(paths):
return paths[action]
else:
# sorry, no such action, stay where you are and pay a high penalty
return (state, 1000)
def step(self, action):
destination, cost = self.iterate_path(self.state, action)
lastState = self.state
customerReward = self.customer_reward[destination]
reward = (customerReward - cost) / self.optimum
self.state = destination
self.customer_visited(destination)
done = destination == 'S' and self.all_customers_visited()
stateAsInt = state_name_to_int(self.state)
self.totalReward += reward
self.stepCount += 1
self.envReward += reward
self.envStepCount += 1
if self.showStep:
print( "Episode: " + ("%4.0f " % self.envEpisodeCount) +
" Step: " + ("%4.0f " % self.stepCount) +
lastState + ' --' + str(action) + '-> ' + self.state +
' R=' + ("% 2.2f" % reward) + ' totalR=' + ("% 3.2f" % self.totalReward) +
' cost=' + ("%4.0f" % cost) + ' customerR=' + ("%4.0f" % customerReward) + ' optimum=' + ("%4.0f" % self.optimum)
)
if done and not self.isDone:
self.envEpisodeCount += 1
if BeraterEnv.showDone:
episodes = BeraterEnv.envEpisodeModulo
if (self.envEpisodeCount % BeraterEnv.envEpisodeModulo != 0):
episodes = self.envEpisodeCount % BeraterEnv.envEpisodeModulo
print( "Done: " +
("episodes=%6.0f " % self.envEpisodeCount) +
("avgSteps=%6.2f " % (self.envStepCount/episodes)) +
("avgTotalReward=% 3.2f" % (self.envReward/episodes) )
)
if (self.envEpisodeCount%BeraterEnv.envEpisodeModulo) == 0:
self.envReward = 0
self.envStepCount = 0
self.isDone = done
observation = self.getObservation(stateAsInt)
info = {"from": self.state, "to": destination}
return observation, reward, done, info
def getObservation(self, position):
result = np.array([ position,
self.getPathObservation(position, 0),
self.getPathObservation(position, 1),
self.getPathObservation(position, 2),
self.getPathObservation(position, 3)
],
dtype=np.float32)
all_rest_rewards = list(self.customer_reward.values())
result = np.append(result, all_rest_rewards)
return result
def getPathObservation(self, position, path):
source = int_to_state_name(position)
paths = self.map[self.state]
if path < len(paths):
target, cost = paths[path]
reward = self.customer_reward[target]
result = reward - cost
else:
result = -1000
return result
def customer_visited(self, customer):
self.customer_reward[customer] = 0
def all_customers_visited(self):
return self.calculate_customers_reward() == 0
def calculate_customers_reward(self):
sum = 0
for value in self.customer_reward.values():
sum += value
return sum
def modulate_reward(self):
number_of_customers = len(self.map) - 1
number_per_consultant = int(number_of_customers/3)
self.customer_reward = {
'S': 0
}
for customer_nr in range(1, number_of_customers + 1):
self.customer_reward[int_to_state_name(customer_nr)] = 0
# every consultant only visits a few random customers
samples = random.sample(range(1, number_of_customers + 1), k=number_per_consultant)
key_list = list(self.customer_reward.keys())
for sample in samples:
self.customer_reward[key_list[sample]] = 1000
def reset(self):
self.totalReward = 0
self.stepCount = 0
self.isDone = False
self.modulate_reward()
self.state = 'S'
return self.getObservation(state_name_to_int(self.state))
def render(self):
print(self.customer_reward)
env = BeraterEnv()
print(env.reset())
print(env.customer_reward)
BeraterEnv.showStep = True
BeraterEnv.showDone = True
env = BeraterEnv()
print(env)
observation = env.reset()
print(observation)
for t in range(1000):
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
print("Episode finished after {} timesteps".format(t+1))
break
env.close()
print(observation)
from copy import deepcopy
import json
class Baseline():
def __init__(self, env, max_reward, verbose=1):
self.env = env
self.max_reward = max_reward
self.verbose = verbose
self.reset()
def reset(self):
self.map = self.env.map
self.rewards = self.env.customer_reward.copy()
def as_string(self, state):
# reward/cost does not hurt, but is useless, path obscures same state
new_state = {
'rewards': state['rewards'],
'position': state['position']
}
return json.dumps(new_state, sort_keys=True)
def is_goal(self, state):
if state['position'] != 'S': return False
for reward in state['rewards'].values():
if reward != 0: return False
return True
def expand(self, state):
states = []
for position, cost in self.map[state['position']]:
new_state = deepcopy(state)
new_state['position'] = position
new_state['rewards'][position] = 0
reward = state['rewards'][position]
new_state['reward'] += reward
new_state['cost'] += cost
new_state['path'].append(position)
states.append(new_state)
return states
def search(self, root, max_depth = 25):
closed = set()
open = [root]
while open:
state = open.pop(0)
if self.as_string(state) in closed: continue
closed.add(self.as_string(state))
depth = len(state['path'])
if depth > max_depth:
if self.verbose > 0:
print("Visited:", len(closed))
print("Reached max depth, without reaching goal")
return None
if self.is_goal(state):
scaled_reward = (state['reward'] - state['cost']) / self.max_reward
state['scaled_reward'] = scaled_reward
if self.verbose > 0:
print("Scaled reward:", scaled_reward)
print("Perfect path", state['path'])
return state
expanded = self.expand(state)
open += expanded
# make this best first
open.sort(key=lambda state: state['cost'])
def find_optimum(self):
initial_state = {
'rewards': self.rewards.copy(),
'position': 'S',
'reward': 0,
'cost': 0,
'path': ['S']
}
return self.search(initial_state)
def benchmark(self, model, sample_runs=100):
self.verbose = 0
BeraterEnv.showStep = False
BeraterEnv.showDone = False
perfect_rewards = []
model_rewards = []
for run in range(sample_runs):
observation = self.env.reset()
self.reset()
optimum_state = self.find_optimum()
perfect_rewards.append(optimum_state['scaled_reward'])
state = np.zeros((1, 2*128))
dones = np.zeros((1))
for t in range(1000):
actions, _, state, _ = model.step(observation, S=state, M=dones)
observation, reward, done, info = self.env.step(actions[0])
if done:
break
model_rewards.append(env.totalReward)
return perfect_rewards, model_rewards
def score(self, model, sample_runs=100):
perfect_rewards, model_rewards = self.benchmark(model, sample_runs=100)
perfect_score_mean, perfect_score_std = np.array(perfect_rewards).mean(), np.array(perfect_rewards).std()
test_score_mean, test_score_std = np.array(model_rewards).mean(), np.array(model_rewards).std()
return perfect_score_mean, perfect_score_std, test_score_mean, test_score_std
!rm -r logs
!mkdir logs
!mkdir logs/berater
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
# copied from https://github.com/openai/baselines/blob/master/baselines/a2c/utils.py
def ortho_init(scale=1.0):
def _ortho_init(shape, dtype, partition_info=None):
#lasagne ortho init for tf
shape = tuple(shape)
if len(shape) == 2:
flat_shape = shape
elif len(shape) == 4: # assumes NHWC
flat_shape = (np.prod(shape[:-1]), shape[-1])
else:
raise NotImplementedError
a = np.random.normal(0.0, 1.0, flat_shape)
u, _, v = np.linalg.svd(a, full_matrices=False)
q = u if u.shape == flat_shape else v # pick the one with the correct shape
q = q.reshape(shape)
return (scale * q[:shape[0], :shape[1]]).astype(np.float32)
return _ortho_init
def fc(x, scope, nh, *, init_scale=1.0, init_bias=0.0):
with tf.variable_scope(scope):
nin = x.get_shape()[1].value
w = tf.get_variable("w", [nin, nh], initializer=ortho_init(init_scale))
b = tf.get_variable("b", [nh], initializer=tf.constant_initializer(init_bias))
return tf.matmul(x, w)+b
# copied from https://github.com/openai/baselines/blob/master/baselines/common/models.py#L31
def mlp(num_layers=2, num_hidden=64, activation=tf.tanh, layer_norm=False):
"""Stack of fully-connected layers to be used in a policy / q-function approximator
Parameters:
----------
num_layers: int number of fully-connected layers (default: 2)
num_hidden: int size of fully-connected layers (default: 64)
activation: activation function (default: tf.tanh)
Returns:
-------
function that builds fully connected network with a given input tensor / placeholder
"""
def network_fn(X):
# print('network_fn called')
# Tensor("ppo2_model_4/Ob:0", shape=(1, 19), dtype=float32)
# Tensor("ppo2_model_4/Ob_1:0", shape=(512, 19), dtype=float32)
# print (X)
h = tf.layers.flatten(X)
for i in range(num_layers):
h = fc(h, 'mlp_fc{}'.format(i), nh=num_hidden, init_scale=np.sqrt(2))
if layer_norm:
h = tf.contrib.layers.layer_norm(h, center=True, scale=True)
h = activation(h)
# Tensor("ppo2_model_4/pi/Tanh_2:0", shape=(1, 500), dtype=float32)
# Tensor("ppo2_model_4/pi_2/Tanh_2:0", shape=(512, 500), dtype=float32)
# print(h)
return h
return network_fn
# first the dense layer
def mlp(num_layers=2, num_hidden=64, activation=tf.tanh, layer_norm=False):
def network_fn(X):
h = tf.layers.flatten(X)
for i in range(num_layers):
h = tf.layers.dense(h, units=num_hidden, kernel_initializer=ortho_init(np.sqrt(2)))
# h = fc(h, 'mlp_fc{}'.format(i), nh=num_hidden, init_scale=np.sqrt(2))
if layer_norm:
h = tf.contrib.layers.layer_norm(h, center=True, scale=True)
h = activation(h)
return h
return network_fn
# then initializer, relu activations
def mlp(num_layers=2, num_hidden=64, activation=tf.nn.relu, layer_norm=False):
def network_fn(X):
h = tf.layers.flatten(X)
for i in range(num_layers):
h = tf.layers.dense(h, units=num_hidden, kernel_initializer=tf.initializers.glorot_uniform(seed=13))
if layer_norm:
# h = tf.layers.batch_normalization(h, center=True, scale=True)
h = tf.contrib.layers.layer_norm(h, center=True, scale=True)
h = activation(h)
return h
return network_fn
%%time
# https://github.com/openai/baselines/blob/master/baselines/deepq/experiments/train_pong.py
# log_dir = logger.get_dir()
log_dir = '/content/logs/berater/'
import gym
from baselines import bench
from baselines import logger
from baselines.common.vec_env.dummy_vec_env import DummyVecEnv
from baselines.common.vec_env.vec_monitor import VecMonitor
from baselines.ppo2 import ppo2
BeraterEnv.showStep = False
BeraterEnv.showDone = False
env = BeraterEnv()
wrapped_env = DummyVecEnv([lambda: BeraterEnv()])
monitored_env = VecMonitor(wrapped_env, log_dir)
# https://github.com/openai/baselines/blob/master/baselines/ppo2/ppo2.py
# https://github.com/openai/baselines/blob/master/baselines/common/models.py#L30
# https://arxiv.org/abs/1607.06450 for layer_norm
# lr linear from lr=1e-2 to lr=1e-4 (default lr=3e-4)
def lr_range(frac):
# we get the remaining updates between 1 and 0
start_lr = 1e-2
end_lr = 1e-4
diff_lr = start_lr - end_lr
lr = end_lr + diff_lr * frac
return lr
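# ppo2.learn calls lr(frac) with frac decreasing from ~1 to ~0 over training,
# so the effective learning rate anneals linearly from 1e-2 down to 1e-4.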
network = mlp(num_hidden=500, num_layers=3, layer_norm=True)
model = ppo2.learn(
env=monitored_env,
network=network,
lr=lr_range,
gamma=1.0,
ent_coef=0.05,
total_timesteps=1000000)
# model = ppo2.learn(
# env=monitored_env,
# network='mlp',
# num_hidden=500,
# num_layers=3,
# layer_norm=True,
# lr=lr_range,
# gamma=1.0,
# ent_coef=0.05,
# total_timesteps=500000)
# model.save('berater-ppo-v11.pkl')
monitored_env.close()
# !ls -l $log_dir
from baselines.common import plot_util as pu
results = pu.load_results(log_dir)
import matplotlib.pyplot as plt
import numpy as np
r = results[0]
plt.ylim(0, .75)
# plt.plot(np.cumsum(r.monitor.l), r.monitor.r)
plt.plot(np.cumsum(r.monitor.l), pu.smooth(r.monitor.r, radius=100))
import numpy as np
observation = env.reset()
env.render()
baseline = Baseline(env, max_reward=4000)
state = np.zeros((1, 2*128))
dones = np.zeros((1))
BeraterEnv.showStep = True
BeraterEnv.showDone = False
for t in range(1000):
actions, _, state, _ = model.step(observation, S=state, M=dones)
observation, reward, done, info = env.step(actions[0])
if done:
print("Episode finished after {} timesteps, reward={}".format(t+1, env.totalReward))
break
env.close()
%time baseline.find_optimum()
baseline = Baseline(env, max_reward=4000)
perfect_score_mean, perfect_score_std, test_score_mean, test_score_std = baseline.score(model, sample_runs=100)
# perfect scores
perfect_score_mean, perfect_score_std
# test scores for our model
test_score_mean, test_score_std
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Environment
Step3: Try out Environment
Step4: Baseline
Step5: Train model
Step7: Step 1
Step8: Step 2
Step9: Visualizing Results
Step10: Enjoy model
Step11: Evaluation
|
7,706
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
fig = plt.figure(figsize=(10, 7))
ax = fig.add_subplot(111) # I'll explain the "111" later. Basically, 1 row and 1 column.
ax.set(xlim=[0.5, 4.5], ylim=[-2, 8], title='An Example Axes', ylabel='Y-Axis', xlabel='X-Axis')
plt.show()
# ax.set<TAB>   # try tab-completion on "ax.set" to list the available explicit setters
ax.set_xlim([0.5, 4.5])
ax.set_ylim([-2, 8])
ax.set_title('An Example Axes')
ax.set_ylabel('Y-Axis')
ax.set_xlabel('X-Axis')
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, 2, 3, 4], [10, 20, 25, 30], color='lightblue', linewidth=3)
ax.scatter([0.3, 3.8, 1.2, 2.5], [11, 25, 9, 26], color='darkgreen', marker='^')
ax.set_xlim(0.5, 4.5)
plt.show()
plt.plot([1, 2, 3, 4], [10, 20, 25, 30], color='lightblue', linewidth=3)
plt.scatter([0.3, 3.8, 1.2, 2.5], [11, 25, 9, 26], color='darkgreen', marker='^')
plt.xlim(0.5, 4.5)
plt.show()
import this
fig, axes = plt.subplots(nrows=2, ncols=2)
plt.show()
axes
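# "axes" is a 2D numpy object array of Axes instances (shape (2, 2) here), so
# each subplot can be addressed by row/column index, e.g. axes[0, 0].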
fig, axes = plt.subplots(nrows=2, ncols=2)
axes[0,0].set(title='Upper Left')
axes[0,1].set(title='Upper Right')
axes[1,0].set(title='Lower Left')
axes[1,1].set(title='Lower Right')
for ax in axes.ravel():
# Remove all xticks and yticks...
ax.set(xticks=[], yticks=[])
plt.show()
# Let's get our standard imports out of the way
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
np.random.seed(1)
x = np.arange(5)
y = np.random.randn(5)
fig, axes = plt.subplots(ncols=2, figsize=plt.figaspect(1./2))
vert_bars = axes[0].bar(x, y, color='lightblue', align='center')
horiz_bars = axes[1].barh(x, y, color='lightblue', align='center')
# I'll also introduce axhline & axvline to draw a line all the way across the axes
# This can be a quick-n-easy way to draw an axis "spine".
axes[0].axhline(0, color='gray', linewidth=2)
axes[1].axvline(0, color='gray', linewidth=2)
plt.show()
np.random.seed(1)
y = np.random.randn(100).cumsum()
x = np.linspace(0, 10, 100)
fig, ax = plt.subplots()
ax.fill_between(x, y, color='lightblue')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Figures
Step2: Nothing happened! This is because by default mpl will not show anything until told to do so.
Step3: Notice the call to set. Matplotlib's objects typically have lots of "explicit setters" -- in other words, functions that start with set_<something> and control a particular option.
Step4: For example, we could have written the third line above as
Step5: Basic Plotting
Step6: Axes methods vs. pyplot
Step7: Much cleaner, and much clearer! So, why will most of my examples not follow the pyplot approach? Because PEP20 "The Zen of Python" says
Step8: While very simple plots, with short scripts would benefit from the conciseness of the pyplot implicit approach, when doing more complicated plots, or working within larger scripts, you will want to explicitly pass around the Axes and/or Figure object to operate upon.
Step9: plt.subplots(...) created a new figure and added 4 subplots to it. The axes object that was returned is a 2D numpy object array. Each item in the array is one of the subplots. They're laid out as you see them on the figure.
Step10: One really nice thing about plt.subplots() is that when it's called with no arguments, it creates a new figure with a single subplot (see the sketch after this list).
Step11: Barplots
Step12: Filled Regions
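A quick sketch of the Step10 point — the return value of plt.subplots() depends on how it is called:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()                    # no arguments: one Figure, one Axes
fig, axes = plt.subplots(nrows=2, ncols=2)  # grid: a 2D numpy array of Axes
print(type(ax).__name__, axes.shape)        # e.g. AxesSubplot (2, 2)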
|
7,707
|
<ASSISTANT_TASK:>
Python Code:
def mysum(a, b):
return a + b
def mysum(a, b):
    """This is my own addition.
    Takes two numbers via the arguments a and b and returns their sum."""
    return a + b
help(mysum)
x = 2
y = 3
z = mysum(x,y)
def print42():
print(42)
def return42():
return 42
b = return42()
b
a = print42()
print(a)
return42()
print42()
import operator
help(operator.add)
dir(operator)
import math
math.sin(10)
import math as m
m.pi
from math import sin, pi
sin(10)
pi
math.cos(10)
from math import *
exp(1)
dir(math)
help(math.sqrt)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Documenting a function definition
Step3: Function-related terminology
Step4: In mysum(x,y), x and y are the "arguments" used when calling the mysum function.
Step5: Caution (see the sketch after this list)
Step6: Modules
Step7: The dir function lets you list the functions defined in the operator module.
Step8: How to use modules
Step9: You will often need to look up the functions in the math module; it is one of the modules used most in this course.
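A small sketch of the Step5 caution: a function that only prints returns None, so its result cannot be reused, unlike a function that returns a value:
def print42():
    print(42)

def return42():
    return 42

a = print42()    # prints 42, but a is None
b = return42()   # b is 42 and can be reused
print(a, b + 1)  # -> None 43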
|
7,708
|
<ASSISTANT_TASK:>
Python Code:
##force not printing
%%capture
%matplotlib inline
!pip install h5py
import numpy as np
import numpy.ma as ma
import h5py
from scipy import sparse
import IPython.display as ipd
import matplotlib.pyplot as plt
import matplotlib.colors as col
from mpl_toolkits.mplot3d import Axes3D
import re
import networkx as nx
import scipy as sp
import scipy.sparse as sps
from scipy.sparse import coo_matrix, hstack, vstack, csr_matrix
from sklearn.cluster import KMeans, DBSCAN,MiniBatchKMeans
import sklearn.metrics as skm
##read .h5 file format containing the information about the microcolumn of the average individual.
file_name='cons_locs_pathways_mc2_Column.h5'
h5=h5py.File(file_name,'r')
##load already computed weighted matrix
file_name='fail_weight.h5'
h5_dist=h5py.File(file_name,'r')
conn_fail=h5_dist['weight']
##scipy sparse matrix of conn_fail
conn_fail=csr_matrix(conn_fail)
N = conn_fail.shape[0]
m_values=list(h5['populations'].keys())
##Normal Laplacian
laplacian_norm=sp.sparse.csgraph.laplacian(conn_fail,normed=True)
##Combinatorial Laplacian
laplacian=sp.sparse.csgraph.laplacian(conn_fail,normed=False)
##Compute the first 4 eigenvalues and eigenvectors of the Laplacian
evalues, evect =sp.sparse.linalg.eigsh(laplacian,k=4,which="SM")
evalues_norm, evect_norm =sp.sparse.linalg.eigsh(laplacian_norm,k=4,which="SM")
plt.figure(figsize=(15,4))
plt.subplot(1, 2, 1)
plt.title('First 4 eigenvalues normalized Laplacian')
plt.plot(evalues_norm, '.-', markersize=15)
plt.subplot(1, 2, 2)
plt.title('First 4 eigenvalues combinatorial Laplacian')
plt.plot(evalues, '.-', markersize=15);
plt.show()
## set the coordinates for the embedding
##Normalized Laplacian
x_norm = evect_norm[:,1]
y_norm = evect_norm[:,2]
z_norm = evect_norm[:,3]
##combinatorial Laplacian
x = evect[:,1]
y = evect[:,2]
z = evect[:,3]
##Plot embeddings with color label
def singleplt2dembeddings(x,y,col1,colmap=None,size=0.3,title1='Embedding with normalized Laplacian eigenmaps'):
plt.figure(figsize=(15,5))
plt.title(title1)
plt.scatter(x, y,c=col1,cmap=colmap, alpha=0.6,s=size)
plt.xlabel('Eigenvector 1')
plt.ylabel('Eigenvector 2')
plt.colorbar()
def plt2dembeddings(x,y, x1,y1,col1,col2,colmap=None,size=0.3,title1='Embedding with normalized Laplacian eigenmaps',title2='Embedding with combinatorial Laplacian eigenmaps'):
plt.figure(figsize=(18,4))
plt.subplot(1, 2, 1)
plt.title(title1)
plt.scatter(x, y,c=col1,cmap=colmap, alpha=0.6,s=size)
plt.xlabel('Eigenvector 1')
plt.ylabel('Eigenvector 2')
if colmap !=None:
plt.colorbar()
plt.subplot(1, 2, 2)
plt.title(title2)
#plt.axis([-0.0001,0.0001,0.00001,0.000071])
plt.axis([-0.0000711,0.000071,-0.00008,0.000025])
plt.xlabel('Eigenvector 1')
plt.ylabel('Eigenvector 2')
plt.scatter(x1, y1, c=col2, cmap=colmap, alpha=0.5, s=24)
# colorbar needs a mappable, so draw it after the scatter
if colmap != None:
    plt.colorbar()
def plt3dembeddings(col=None,colmap=None):
fig = plt.figure(figsize=(18,4))
ax = fig.add_subplot(121, projection='3d')
ax.scatter(x_norm,y_norm, z_norm, s=0.5,c=col,cmap=colmap)
ax = fig.add_subplot(122, projection='3d')
ax.scatter(x,y, z, s=5,c=col,cmap=colmap)
plt.axis([-0.0001,0.0001,0.00001,0.000071])
#plt.axis([-0.0000511,-0.000011,-0.00003,0.000015])
ax.set_zlim3d(-0.0003,0.0002)
def plthist(value,face='g'):
plt.figure(figsize=(10,4))
plt.hist(value, bins=50 , facecolor=face,edgecolor='black', linewidth=1.2)
plt.show()
def twoplthist(value1,value2, face='g',tit1=' ',tit2=' '):
plt.figure(figsize=(15,3))
plt.subplot(1,2,1)
plt.hist(value1, bins=50 , facecolor=face,edgecolor='black', linewidth=1.2)
plt.title(tit1)
plt.subplot(1,2,2)
plt.hist(value2, bins=50 , facecolor=face,edgecolor='black', linewidth=1.2)
plt.title(tit2)
plt.show()
plt2dembeddings(x_norm,y_norm,x,y,col1=None, col2=None,colmap=None)
plt3dembeddings(col=None)
##Delete points too far away from the 'core' of the embedding of the combinatorial Laplacian
X=np.column_stack((x,y))
X1=np.array([w for w in X if np.abs(w[0])<0.02 and np.abs(w[1])<0.002])
x1 = X1[:,0]
y1 = X1[:,1]
##k-mean algorithm (k=2) for the combinatorial Laplacian
kmeans = KMeans(n_clusters=2, init='random').fit(X1)
lab=kmeans.labels_
##K-mean algorithm for the normalized Laplacian
Xnorm=np.column_stack((x_norm,y_norm))
kmeans = KMeans(n_clusters=2, init='random').fit(Xnorm)
labnorm=kmeans.labels_
#db=DBSCAN(eps=0.0005, min_samples=9000).fit(Xnorm)
#labnorm=db.labels_
##Function for creating labels
def labelcreate(a,val=0):
col=[]
for i in range(N):
if a[i]>val:
col.append(1)
else:
col.append(-1)
return col
plt2dembeddings(x_norm,y_norm,x1,y1, col1=labnorm, col2=lab,colmap='PiYG',title1='Clustering using K-mean',title2='Clustering using K-mean')
sign=np.sign(evect_norm[:,1])
labels_norm = [sign]
labels = [np.sign(evect[:,1])]
plt2dembeddings(x_norm,y_norm,x,y, col1=labels_norm[0], col2=labels[0],colmap='Spectral',title1='Clustering using Fiedler vector',title2='Clustering using Fiedler vector')
for i in range(len(lab)):
if labnorm[i]==0:
labnorm[i]=-1
err=np.count_nonzero(labnorm - sign)
err = min(err, N - err)
print('The partition given by the K-mean algorithm and the sign of the Fiedler vector differ for {} points'.format(err))
## preparing layers labels
m_type=dict()
for i in range(0, len(m_values)):
m_type[i] =m_values[i]
num_neuron=dict()
for i in range(0, len(m_values)):
num_neuron[i]=len(list(h5['populations'][m_type[i]]['locations']))
label_layer=[]
for i in m_type.keys():
label_layer.extend(num_neuron[i]*list(map(int, re.findall(r'^\D*(\d+)', m_type[i]))))
label_layer = [v if v!= 23 else 2 for v in label_layer]
colors = ['r','y','y','g','b','purple']
plt2dembeddings(x_norm,y_norm,x,y, col1=label_layer, col2=label_layer,colmap=col.ListedColormap(colors),size=0.5, title1= '6 Layers', title2='6 Layers')
plt3dembeddings(col=label_layer,colmap=col.ListedColormap(colors))
colors = ['b','b','b','b','b','purple']
singleplt2dembeddings(x_norm,y_norm, col1=label_layer,colmap=col.ListedColormap(colors),size=0.8, title1= '6 Layers', )
lab_6 = labelcreate(label_layer, val=5)  ## note: the sign of the Fiedler vector is arbitrary, so check it before comparing
acc6= (1-float(np.count_nonzero(sign-np.array(lab_6)))/N)*100
acc6 = max(acc6,100-acc6)
print('The accuracy in using the sign of the Fiedler vector to distinguish neurons in layer 6 is {0:.2f}%'.format(acc6))
acc6KM= (1-float(np.count_nonzero(labnorm-np.array(lab_6)))/N)*100
acc6KM = max(acc6KM, 100- acc6KM)
print('The accuracy in using the K-mean partition to distinguish neurons in layer 6 is {0:.2f}%'.format(acc6KM))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2.1.2) Weights with the average failure of connection
Step2: 2.1) Laplacian Matrix and its spectrum
Step3: 2.2) 2D and 3D Embeddings of the Microcircuit using Laplacian eigenmaps of the weighted connectivity matrix
Step4: 2.3) Spectral Clustering of the Microcircuit Network
Step5: The accuracy of the correlation between the Fiedler vector and the total flow is very similar to the one computed in notebook 2 and is therefore not repeated here.
Step6: Looks like the Fiedler vector of the normalized Laplacian can distinguish neurons in layer 6 from all the other neurons, and can also distinguish layers 5 and 2/3! (See the sketch after this list.)
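A toy sketch of the Fiedler-vector bipartition used above, on a small hand-built graph (two triangles joined by one edge); the sign of the eigenvector for the second-smallest Laplacian eigenvalue separates the two communities:
import numpy as np
import scipy.sparse as sps
from scipy.sparse.csgraph import laplacian

# adjacency of two triangles {0,1,2} and {3,4,5} bridged by the edge (2,3)
A = np.zeros((6, 6))
for i, j in [(0,1), (1,2), (0,2), (3,4), (4,5), (3,5), (2,3)]:
    A[i, j] = A[j, i] = 1
L = laplacian(sps.csr_matrix(A), normed=True)
evals, evecs = np.linalg.eigh(L.toarray())
print(np.sign(evecs[:, 1]))  # one sign per triangle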
|
7,709
|
<ASSISTANT_TASK:>
Python Code:
!pip install unidecode
# Import TensorFlow >= 1.10 and enable eager execution
import tensorflow as tf
# Note: Once you enable eager execution, it cannot be disabled.
tf.enable_eager_execution()
import numpy as np
import os
import re
import random
import unidecode
import time
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
text = unidecode.unidecode(open(path_to_file).read())
# length of text is the number of characters in it
print (len(text))
# unique contains all the unique characters in the file
unique = sorted(set(text))
# creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(unique)}
idx2char = {i:u for i, u in enumerate(unique)}
# setting the maximum length sentence we want for a single input in characters
max_length = 100
# length of the vocabulary in chars
vocab_size = len(unique)
# the embedding dimension
embedding_dim = 256
# number of RNN (here GRU) units
units = 1024
# batch size
BATCH_SIZE = 64
# buffer size to shuffle our dataset
BUFFER_SIZE = 10000
input_text = []
target_text = []
for f in range(0, len(text)-max_length, max_length):
inps = text[f:f+max_length]
targ = text[f+1:f+1+max_length]
input_text.append([char2idx[i] for i in inps])
target_text.append([char2idx[t] for t in targ])
print (np.array(input_text).shape)
print (np.array(target_text).shape)
dataset = tf.data.Dataset.from_tensor_slices((input_text, target_text)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
class Model(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, units, batch_size):
super(Model, self).__init__()
self.units = units
self.batch_sz = batch_size
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
if tf.test.is_gpu_available():
self.gru = tf.keras.layers.CuDNNGRU(self.units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
else:
self.gru = tf.keras.layers.GRU(self.units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
def call(self, x, hidden):
x = self.embedding(x)
# output shape == (batch_size, max_length, hidden_size)
# states shape == (batch_size, hidden_size)
# states variable to preserve the state of the model
# this will be used to pass at every step to the model while training
output, states = self.gru(x, initial_state=hidden)
# reshaping the output so that we can pass it to the Dense layer
# after reshaping the shape is (batch_size * max_length, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# The dense layer will output predictions for every time_steps(max_length)
# output shape after the dense layer == (max_length * batch_size, vocab_size)
x = self.fc(output)
return x, states
model = Model(vocab_size, embedding_dim, units, BATCH_SIZE)
optimizer = tf.train.AdamOptimizer()
# using sparse_softmax_cross_entropy so that we don't have to create one-hot vectors
def loss_function(real, preds):
return tf.losses.sparse_softmax_cross_entropy(labels=real, logits=preds)
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
model=model)
# Training step
EPOCHS = 20
for epoch in range(EPOCHS):
start = time.time()
# initializing the hidden state at the start of every epoch
hidden = model.reset_states()
for (batch, (inp, target)) in enumerate(dataset):
with tf.GradientTape() as tape:
# feeding the hidden state back into the model
# This is the interesting step
predictions, hidden = model(inp, hidden)
# reshaping the target because that's how the
# loss function expects it
target = tf.reshape(target, (-1,))
loss = loss_function(target, predictions)
grads = tape.gradient(loss, model.variables)
optimizer.apply_gradients(zip(grads, model.variables))
if batch % 100 == 0:
print ('Epoch {} Batch {} Loss {:.4f}'.format(epoch+1,
batch,
loss))
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# Evaluation step(generating text using the model learned)
# number of characters to generate
num_generate = 1000
# You can change the start string to experiment
start_string = 'Q'
# converting our start string to numbers(vectorizing!)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# empty string to store our results
text_generated = ''
# hidden state shape == (batch_size, number of rnn units); here batch size == 1
hidden = [tf.zeros((1, units))]
for i in range(num_generate):
predictions, hidden = model(input_eval, hidden)
# using argmax to predict the character returned by the model
predicted_id = tf.argmax(predictions[-1]).numpy()
# We pass the predicted character as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated += idx2char[predicted_id]
print (start_string + text_generated)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import tensorflow and enable eager execution.
Step2: Download the dataset
Step3: Read the dataset
Step4: Creating dictionaries to map from characters to their indices and vice-versa, which will be used to vectorize the inputs
Step5: Creating the input and output tensors
Step6: Creating batches and shuffling them using tf.data
Step7: Creating the model
Step8: Call the model and set the optimizer and the loss function
Step9: Checkpoints (Object-based saving)
Step10: Train the model
Step11: Restore the latest checkpoint
Step12: Predicting using our trained model
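A note on the argmax decoding in Step12: greedy decoding often collapses into repetitive loops. A common alternative is temperature sampling over the logits — a framework-agnostic sketch:
import numpy as np

def sample_from_logits(logits, temperature=1.0):
    # logits: 1D array over the vocabulary; lower temperature -> greedier choice
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)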
|
7,710
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
import time
%matplotlib inline
# Import Dipy's procedures to process diffusion tensor
import dipy.reconst.dti as dti
# Import Dipy's functions that load and read CENIR data
from dipy.data import fetch_cenir_multib
from dipy.data import read_cenir_multib
# Import Dipy's brain segmentation function
from dipy.segment.mask import median_otsu
# Importing procedures to fit the free water elimination DTI model
from functions import nls_fit_tensor
# Download data using dipy's function fetch_cenir_multib
fetch_cenir_multib(with_raw=False)
# Select the b-values to read
bvals = [200, 400, 1000, 2000]
# Read data using dipy's function read_cenir_multib
img, gtab = read_cenir_multib(bvals)
# Read matrix containing data from img object
data = img.get_data()
maskdata, mask = median_otsu(data, 4, 2, False, vol_idx=[0, 1], dilate=1)
t0 = time.time()
fw_params = nls_fit_tensor(gtab, data, mask)
dt = time.time() - t0
print("This step took %f seconds to run" % dt)
# Extract tissue's diffusion tensor eigenvalues
evals = fw_params[..., :3]
# Extract free water volume fraction estimates
F = fw_params[..., 12]
# Compute the tissue's diffusion tensor fractional anisotropy
# using function fractional_anisotropy of Dipy's module dti
FA = dti.fractional_anisotropy(evals)
# Compute the tissue's diffusion tensor mean diffusivity
# using function mean_diffusivity of Dipy's module dti
MD = dti.mean_diffusivity(evals)
# Define the standard DTI model using Dipy's modules
# given the acquisition parameters that were read
dtimodel = dti.TensorModel(gtab)
# Fit the DTI model in the voxels inside the brain mask
dtifit = dtimodel.fit(data, mask=mask)
# Extract standard FA and MD from the DTI fitted parameters
dti_FA = dtifit.fa
dti_MD = dtifit.md
axial_slice = 40
fig1, ax = plt.subplots(2, 4, figsize=(12, 6),
subplot_kw={'xticks': [], 'yticks': []})
fig1.subplots_adjust(hspace=0.3, wspace=0.1)
# Plot fwDTI FA
im = ax.flat[0].imshow(FA[:, :, axial_slice].T, origin='lower',
cmap='gray', vmin=0, vmax=0.7)
ax.flat[0].set_title('A) fwDTI FA')
fig1.colorbar(im, ax=ax.flat[0])
# Plot DTI FA
im = ax.flat[1].imshow(dti_FA[:, :, axial_slice].T, origin='lower',
cmap='gray', vmin=0, vmax=0.7)
ax.flat[1].set_title('B) standard DTI FA')
fig1.colorbar(im, ax=ax.flat[1])
# Plot DTI difference
FAdiff = abs(FA[:, :, axial_slice] - dti_FA[:, :, axial_slice])
im = ax.flat[2].imshow(FAdiff.T, origin='lower',
cmap='gray', vmin=0, vmax=0.7)
ax.flat[2].set_title('C) FA difference')
fig1.colorbar(im, ax=ax.flat[2])
# Turn off the unnecessary panel
ax.flat[3].axis('off')
# Plot fwDTI MD
im = ax.flat[4].imshow(MD[:, :, axial_slice].T * 1000,
origin='lower', cmap='gray', vmin=0, vmax=2)
ax.flat[4].set_title('D) fwDTI MD')
fig1.colorbar(im, ax=ax.flat[4])
# Plot DTI MD
im = ax.flat[5].imshow(dti_MD[:, :, axial_slice].T * 1000,
origin='lower', cmap='gray', vmin=0, vmax=2)
ax.flat[5].set_title('E) standard DTI MD')
fig1.colorbar(im, ax=ax.flat[5])
# Plot MD difference
MDdiff = abs(MD[:, :, axial_slice] - dti_MD[:, :, axial_slice]) * 1000
im = ax.flat[6].imshow(MDdiff.T, origin='lower',
cmap='gray', vmin=0, vmax=2)
ax.flat[6].set_title('F) MD difference')
fig1.colorbar(im, ax=ax.flat[6])
# Plot free water volume fraction
im = ax.flat[7].imshow(F[:, :, axial_slice].T, origin='lower',
cmap='gray', vmin=0, vmax=1)
ax.flat[7].set_title('G) free water volume')
fig1.colorbar(im, ax=ax.flat[7])
plt.show()
fig1.savefig('fwdti_in_vivo.png', bbox_inches='tight')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Downloading data (note 1.7 Gb of data will be downloaded)...
Step2: Estimate a brain mask...
Step3: Fitting the free water DTI model...
Step4: Compute tensor statistic from the fitted parameters...
Step5: Compute standard DTI for comparison...
Step6: Plot data for an axial slice of the data ...
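One possible way to summarize the FA difference map from panel C within the brain mask — a sketch reusing the variables defined above:
import numpy as np

diff = np.abs(FA - dti_FA)   # whole-volume FA difference
vals = diff[mask]            # restrict to brain voxels
print("median |FA - dti_FA| = %.3f (95th percentile %.3f)"
      % (np.median(vals), np.percentile(vals, 95)))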
|
7,711
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out : the generated output tensor
'''
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)  # alternatively: tf.nn.elu
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
#out = tf.sigmoid(logits)
return out
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)  # alternatively: tf.nn.elu
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
#out = tf.sigmoid(logits)
out = tf.tanh(logits)
return out, logits
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.25
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size,z_size)
# Generator network here
g_model = generator(input_z, input_size)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels = tf.ones_like(d_logits_real) * (1 - smooth)))
# fake images get label 0 (label smoothing is applied to the real labels only)
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                                                     labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels = tf.ones_like(d_logits_fake) * (1 - smooth)))
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
plt.show()
return fig, axes
batch_size = 100
epochs = 1
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
%time train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator_my.ckpt')
_ = view_samples(-1, samples)
# Save training generator samples
with open('train_samples_my.pkl', 'wb') as f:
pkl.dump(samples, f)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
# Load samples from generator taken while training
with open('train_samples_my.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! (A latent-interpolation sketch follows this list.)
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
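A latent-interpolation sketch (for the trained generator restored above): feeding each interpolated row to the generator as input_z shows how smoothly the learned manifold morphs one digit into another.
import numpy as np

z0 = np.random.uniform(-1, 1, size=z_size)
z1 = np.random.uniform(-1, 1, size=z_size)
steps = np.linspace(0, 1, 16)[:, None]
z_path = (1 - steps) * z0 + steps * z1   # shape (16, z_size); pass as input_z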
|
7,712
|
<ASSISTANT_TASK:>
Python Code:
# Run this cell!
from IPython.core.display import HTML
styles = "<style>\n.travail {\n background-size: 30px;\n background-image: url('https://cdn.pixabay.com/photo/2018/01/04/16/53/building-3061124_960_720.png');\n background-position: left top;\n background-repeat: no-repeat;\n padding-left: 40px;\n}\n\n.bilan {\n background-size: 30px;\n background-image: url('https://cdn.pixabay.com/photo/2016/10/18/19/40/anatomy-1751201_960_720.png');\n background-position: left top;\n background-repeat: no-repeat;\n padding-left: 40px;\n}\n</style>"
HTML(styles)
maliste1 = [3, 5.2, "hop"]
type(maliste1)
maliste1 = None
maliste1 = [3, 5.2, "hop"]
maliste1[0]
maliste1[1]
maliste1[2]
maliste1[3]
maliste1[1] = "nouveau"
maliste1
maliste1
maliste1.append('42')
maliste1
print(len(maliste1))
l = [7,4]
if l[0] > l[1]:
maxi = l[0]
else:
maxi = l[1]
print("Le maximum de la liste vaut", maxi)
l = [7, 4, 12, 5]
# to complete
print("The maximum of the list is", maxi)
l = [3, 5, 1, 5]
# to complete
print("The index of the maximum of the list is", imaxi)
l = [10, 11, 12, 16, 14, 15, 13, 17]
# to complete
# example
l = [12, 5]
if l[1] < l[0]:
    # complete: swap the elements at indices 0 and 1
print(l)
# idem pour listes de longueur 3
l = [12, 15, 2]
# To complete...
print(l)
# since you're in good shape, maybe you can try the length-4 case?
l = [12, 2, 6, 4]
print(l)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Definition
Step2: A list can contain objects of any type, as in the example above where maliste1 contains three elements: in order, an int (integer), a float, and a str (character string).
Step3: the keyword [] describes a slightly special object
Step4: The element 3 is at index 0, the element 5.2 at index 1, and the element 'hop' at index 2.
Step5: <div class="alert alert-block alert-info bilan">
Step6: <div class="alert alert-block alert-danger travail">
Step7: note that nothing is displayed, because the append method returns nothing
Step8: <div class="alert alert-block alert-info bilan">
Step9: Maximum in a list of length 4
Step10: Index of the maximum in a list of length 4
Step11: Swapping two values in a list
Step12: Sorting a list of length 3
Step13: To check, try all the possible orders for the starting list (see the sketch after this list).
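To check the length-3 sort against every starting order automatically, one possible sketch uses itertools (sort3 is one correct pairwise-swap solution, shown only as an illustration):
from itertools import permutations

def sort3(l):
    # three pairwise swaps suffice for a list of length 3
    l = list(l)
    if l[1] < l[0]: l[0], l[1] = l[1], l[0]
    if l[2] < l[1]: l[1], l[2] = l[2], l[1]
    if l[1] < l[0]: l[0], l[1] = l[1], l[0]
    return l

for p in permutations([2, 12, 15]):
    assert sort3(list(p)) == [2, 12, 15], p
print("all 6 starting orders sorted correctly")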
|
7,713
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
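# The rest of this notebook repeats a single pattern: DOC.set_id(...) selects a
# property, then DOC.set_value(...) records the answer (one call per value for
# multi-valued properties). A hypothetical illustration -- these values are
# placeholders, not real model metadata:
#
# DOC.set_id('cmip6.land.key_properties.model_name')
# DOC.set_value("MyLandModel-1.0")   # free-text property
# DOC.set_id('cmip6.land.key_properties.land_cover')
# DOC.set_value("bare soil")         # enumeration: one call per selected choice
# DOC.set_value("vegetated")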
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
7,714
|
<ASSISTANT_TASK:>
Python Code:
# set up notebook to show plots within the notebook
% matplotlib inline
# Import necessary libraries:
# General utilities:
import os
import sys
from time import time
from scipy.misc import imsave
# Computation:
import numpy as np
import h5py
from skimage import measure
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist
# Visualization:
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from mpl_toolkits.axes_grid1 import make_axes_locatable
from IPython.display import display
import ipywidgets as widgets
from mpl_toolkits.axes_grid1 import ImageGrid
# Finally, pycroscopy itself
import pycroscopy as px
image_path = px.io.uiGetFile('*.png *PNG *TIFF *TIF *tif *tiff *BMP *bmp', 'Images')
#image_path = r"C:\Users\sjz\Desktop\MM workshop\Data\Image_Cleaning\16_55_31_256_256b.tif"
print('Working on: \n{}'.format(image_path))
folder_path, file_name = os.path.split(image_path)
base_name, _ = os.path.splitext(file_name)
# Check if an HDF5 file with the chosen image already exists.
# Only translate if it does not.
h5_path = os.path.join(folder_path, base_name+'.h5')
need_translation = True
if os.path.exists(h5_path):
try:
h5_file = h5py.File(h5_path, 'r+')
h5_raw = h5_file['Measurement_000']['Channel_000']['Raw_Data']
need_translation = False
print('HDF5 file with Raw_Data found. No need to translate.')
except KeyError:
print('Raw Data not found.')
else:
print('No HDF5 file found.')
if need_translation:
# Initialize the Image Translator
tl = px.ImageTranslator()
# create an H5 file that has the image information in it and get the reference to the dataset
h5_raw = tl.translate(image_path)
# create a reference to the file
h5_file = h5_raw.file
print('HDF5 file is located at {}.'.format(h5_file.filename))
print('Datasets and datagroups within the file:')
px.io.hdf_utils.print_tree(h5_file)
print('\nThe main dataset:')
print(h5_file['/Measurement_000/Channel_000/Raw_Data'])
print('\nThe ancillary datasets:')
print(h5_file['/Measurement_000/Channel_000/Position_Indices'])
print(h5_file['/Measurement_000/Channel_000/Position_Values'])
print(h5_file['/Measurement_000/Channel_000/Spectroscopic_Indices'])
print(h5_file['/Measurement_000/Channel_000/Spectroscopic_Values'])
print('\nMetadata or attributes in a datagroup')
for key in h5_file['/Measurement_000'].attrs:
print('{} : {}'.format(key, h5_file['/Measurement_000'].attrs[key]))
# Initialize the windowing class
iw = px.ImageWindow(h5_raw, max_RAM_mb=1024*4)
# grab position indices from the H5 file
h5_pos = h5_raw.parent[h5_raw.attrs['Position_Indices']]
# determine the image size:
num_x = len(np.unique(h5_pos[:,0]))
num_y = len(np.unique(h5_pos[:,1]))
# extract figure data and reshape to proper numpy array
raw_image_mat = np.reshape(h5_raw[()], [num_x,num_y]);
fig, axis = plt.subplots(figsize=(10,10))
img = axis.imshow(raw_image_mat,cmap=px.plot_utils.cmap_jet_white_center(), origin='lower');
divider = make_axes_locatable(axis)
cax = divider.append_axes("right", size="5%", pad=0.2)
plt.colorbar(img, cax=cax)
axis.set_title('Raw Image', fontsize=16);
num_peaks = 2
win_size , psf_width = iw.window_size_extract(num_peaks, save_plots=False, show_plots=True)
print('Window size = {}'.format(win_size))
# Uncomment this line if you need to manually specify a window size
# win_size = 8
# plot a single window
row_offset = int(0.5*(num_x-win_size))
col_offset = int(0.5*(num_y-win_size))
plt.imshow(raw_image_mat[row_offset:row_offset+win_size,
col_offset:col_offset+win_size],
cmap=px.plot_utils.cmap_jet_white_center(),
origin='lower');
# the result should be about the size of a unit cell
# if it is the wrong size, just choose one manually by setting the win_size
windowing_parms = {
'fft_mode': None, # Options are None, 'abs', 'data+abs', or 'complex'
'win_x': win_size,
'win_y': win_size,
'win_step_x': 1,
'win_step_y': 1,
}
win_parms_copy = windowing_parms.copy()
if windowing_parms['fft_mode'] is None:
win_parms_copy['fft_mode'] = 'data'
h5_wins_grp = px.hdf_utils.check_for_old(h5_raw, 'Windowing',
win_parms_copy)
if h5_wins_grp is None:
print('Windows either do not exist or were created with different parameters')
t0 = time()
h5_wins = iw.do_windowing(win_x=windowing_parms['win_x'],
win_y=windowing_parms['win_y'],
save_plots=False,
show_plots=False,
win_fft=windowing_parms['fft_mode'])
print( 'Windowing took {} seconds.'.format(round(time()-t0, 2)))
else:
print('Taking existing windows dataset')
h5_wins = h5_wins_grp['Image_Windows']
print('\nRaw data was of shape {} and the windows dataset is now of shape {}'.format(h5_raw.shape, h5_wins.shape))
print('Now each position (window) is described by a set of pixels')
# Peek at a few random windows
num_rand_wins = 9
rand_positions = np.random.randint(0, high=h5_wins.shape[0], size=num_rand_wins)
example_wins = np.zeros(shape=(windowing_parms['win_x'], windowing_parms['win_y'], num_rand_wins), dtype=np.float32)
for rand_ind, rand_pos in enumerate(rand_positions):
example_wins[:, :, rand_ind] = np.reshape(h5_wins[rand_pos], (windowing_parms['win_x'], windowing_parms['win_y']))
px.plot_utils.plot_map_stack(example_wins, heading='Example Windows', cmap=px.plot_utils.cmap_jet_white_center(),
title=['Window # ' + str(win_pos) for win_pos in rand_positions]);
# check to make sure number of components is correct:
num_comp = 1024
num_comp = min(num_comp,
min(h5_wins.shape)*len(h5_wins.dtype))
h5_svd = px.hdf_utils.check_for_old(h5_wins, 'SVD', {'num_components':num_comp})
if h5_svd is None:
print('SVD was either not performed or was performed with different parameters')
h5_svd = px.processing.doSVD(h5_wins, num_comps=num_comp)
else:
print('Taking existing SVD results')
h5_U = h5_svd['U']
h5_S = h5_svd['S']
h5_V = h5_svd['V']
# extract parameters of the SVD results
h5_pos = iw.hdf.file[h5_wins.attrs['Position_Indices']]
num_rows = len(np.unique(h5_pos[:, 0]))
num_cols = len(np.unique(h5_pos[:, 1]))
num_comp = h5_S.size
print("There are a total of {} components.".format(num_comp))
print('\nRaw data was of shape {} and the windows dataset is now of shape {}'.format(h5_raw.shape, h5_wins.shape))
print('Now each position (window) is described by a set of pixels')
plot_comps = 49
U_map_stack = np.reshape(h5_U[:, :plot_comps], [num_rows, num_cols, -1])
V_map_stack = np.reshape(h5_V, [num_comp, win_size, win_size])
V_map_stack = np.transpose(V_map_stack,(2,1,0))
fig_S, ax_S = px.plot_utils.plotScree(h5_S[()]);
for field in V_map_stack.dtype.names:
fig_V, ax_V = px.plot_utils.plot_map_stack(V_map_stack[:,:,:][field], heading='', title='Vector-'+field, num_comps=plot_comps,
color_bar_mode='each', cmap=px.plot_utils.cmap_jet_white_center())
fig_U, ax_U = px.plot_utils.plot_map_stack(U_map_stack[:,:,:25], heading='', title='Component', num_comps=plot_comps,
color_bar_mode='each', cmap=px.plot_utils.cmap_jet_white_center())
clean_components = range(36) # np.append(range(5,9),(17,18))
num_components=len(clean_components)
# Check if the image has been reconstructed with the same parameters:
# First, gather all groups created by this tool:
h5_clean_image = None
for item in h5_svd:
if item.startswith('Cleaned_Image_') and isinstance(h5_svd[item],h5py.Group):
grp = h5_svd[item]
old_comps = px.hdf_utils.get_attr(grp, 'components_used')
if old_comps.size == len(list(clean_components)):
if np.all(np.isclose(old_comps, np.array(clean_components))):
h5_clean_image = grp['Cleaned_Image']
print( 'Existing clean image found. No need to rebuild.')
break
if h5_clean_image is None:
t0 = time()
#h5_clean_image = iw.clean_and_build_batch(h5_win=h5_wins, components=clean_components)
h5_clean_image = iw.clean_and_build_separate_components(h5_win=h5_wins, components=clean_components)
print( 'Cleaning and rebuilding image took {} seconds.'.format(round(time()-t0, 2)))
# Building a stack of images from here:
image_vec_components = h5_clean_image[()]
# summing over the components:
for comp_ind in range(1, h5_clean_image.shape[1]):
image_vec_components[:, comp_ind] = np.sum(h5_clean_image[:, :comp_ind+1], axis=1)
# converting to 3D:
image_components = np.reshape(image_vec_components, [num_x, num_y, -1])
# calculating the removed noise:
noise_components = image_components - np.reshape(np.tile(h5_raw[()], [1, h5_clean_image.shape[1]]), image_components.shape)
# defining a helper function to get the FFTs of a stack of images
def get_fft_stack(image_stack):
blackman_window_rows = np.blackman(image_stack.shape[0])
blackman_window_cols = np.blackman(image_stack.shape[1])
fft_stack = np.zeros(image_stack.shape, dtype=np.float)
for image_ind in range(image_stack.shape[2]):
layer = image_stack[:, :, image_ind]
windowed = blackman_window_rows[:, np.newaxis] * layer * blackman_window_cols[np.newaxis, :]
fft_stack[:, :, image_ind] = np.abs(np.fft.fftshift(np.fft.fft2(windowed, axes=(0,1)), axes=(0,1)))
return fft_stack
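# Note: the Blackman window applied inside get_fft_stack tapers each image to zero at
# its edges before the FFT, which suppresses the spectral leakage (edge artifacts) that
# a hard image boundary would otherwise introduce into the frequency-domain maps.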
# get the FFT of the cleaned image and the removed noise:
fft_image_components = get_fft_stack(image_components)
fft_noise_components = get_fft_stack(noise_components)
fig_U, ax_U = px.plot_utils.plot_map_stack(image_components[:,:,:25], heading='', evenly_spaced=False,
title='Upto component', num_comps=plot_comps, color_bar_mode='single',
cmap=px.plot_utils.cmap_jet_white_center())
num_comps = min(16, image_components.shape[2])
img_stdevs = 3
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(14, 14))
axes.flat[0].loglog(h5_S[()], '*-')
axes.flat[0].set_xlim(left=1, right=h5_S[()].size)
axes.flat[0].set_ylim(bottom=np.min(h5_S[()]), top=np.max(h5_S[()]))
axes.flat[0].set_title('Variance', fontsize=16)
vert_line = axes.flat[0].axvline(x=num_comps, color='r')
clean_image_mat = image_components[:, :, num_comps]
img_clean = axes.flat[1].imshow(clean_image_mat, cmap=px.plot_utils.cmap_jet_white_center(), origin='lower')
mean_val = np.mean(clean_image_mat)
std_val = np.std(clean_image_mat)
img_clean.set_clim(vmin=mean_val-img_stdevs*std_val, vmax=mean_val+img_stdevs*std_val)
axes.flat[1].get_yaxis().set_visible(False)
axes.flat[1].get_xaxis().set_visible(False)
axes.flat[1].set_title('Cleaned Image', fontsize=16)
fft_std_dev = np.max(np.std(fft_image_components[:, :, num_comps]))
img_noise_fft = axes.flat[2].imshow(fft_noise_components[:, :, num_comps], cmap=plt.cm.jet,
vmin=0, vmax=4*fft_std_dev, origin='lower')
axes.flat[2].get_yaxis().set_visible(False)
axes.flat[2].get_xaxis().set_visible(False)
axes.flat[2].set_title('FFT of removed noise', fontsize=16)
img_clean_fft = axes.flat[3].imshow(fft_image_components[:, :, num_comps], cmap=plt.cm.jet,
vmin=0, vmax=4*fft_std_dev, origin='lower')
axes.flat[3].set_title('FFT of cleaned image', fontsize=16)
axes.flat[3].get_yaxis().set_visible(False)
axes.flat[3].get_xaxis().set_visible(False)
def move_comp_line(num_comps):
vert_line.set_xdata((num_comps, num_comps))
clean_image_mat = image_components[:, :, num_comps]
img_clean.set_data(clean_image_mat)
mean_val = np.mean(clean_image_mat)
std_val = np.std(clean_image_mat)
img_clean.set_clim(vmin=mean_val-img_stdevs*std_val, vmax=mean_val+img_stdevs*std_val)
img_noise_fft.set_data(fft_noise_components[:, :, num_comps])
img_clean_fft.set_data(fft_image_components[:, :, num_comps])
clean_components = range(num_comps)
display(fig)
widgets.interact(move_comp_line, num_comps=(1, image_components.shape[2]-1, 1));
num_comps = 28
fig, axis = plt.subplots(figsize=(7, 7))
clean_image_mat = image_components[:, :, num_comps]
img_clean = axis.imshow(clean_image_mat, cmap=px.plot_utils.cmap_jet_white_center(), origin='lower')
mean_val = np.mean(clean_image_mat)
std_val = np.std(clean_image_mat)
img_clean.set_clim(vmin=mean_val-img_stdevs*std_val, vmax=mean_val+img_stdevs*std_val)
axis.get_yaxis().set_visible(False)
axis.get_xaxis().set_visible(False)
axis.set_title('Cleaned Image', fontsize=16);
clean_components = 64
num_clusters = 32
# Check for existing Clustering results
estimator = px.Cluster(h5_U, 'KMeans', num_comps=clean_components, n_clusters=num_clusters)
do_cluster = False
# See if there are existing cluster results
try:
h5_kmeans = h5_svd['U-Cluster_000']
print( 'Clustering results loaded. Will now check parameters')
except:
print( 'Could not load Clustering results.')
do_cluster = True
# Check that the same components are used
if not do_cluster:
new_clean = estimator.data_slice[1]
if isinstance(new_clean, np.ndarray):
new_clean = new_clean.tolist()
else:
# print(new_clean)
if new_clean.step is None:
new_clean = range(new_clean.start, new_clean.stop)
else:
new_clean = range(new_clean.start, new_clean.stop, new_clean.step)
if all(h5_kmeans.attrs['components_used']==new_clean):
print( 'Clustering results used the same components as those requested.')
else:
do_cluster = True
print( 'Clustering results used the different components from those requested.')
# Check that the same number of clusters were used
if not do_cluster:
old_clusters = len(np.unique(h5_kmeans['Cluster_Indices']))
if old_clusters==num_clusters:
print( 'Clustering results used the same number of clusters as requested.')
else:
do_cluster = True
print( 'Clustering results used a different number of clusters from those requested.')
# Perform k-means clustering on the U matrix now using the list of components only if needed:
if do_cluster:
t0 = time()
h5_kmeans = estimator.do_cluster()
print( 'kMeans took {} seconds.'.format(round(time()-t0, 2)))
else:
print( 'Using existing results.')
print( 'Clustering results in {}.'.format(h5_kmeans.name))
half_wind = int(win_size*0.5)
# generate a cropped image that was effectively the area that was used for pattern searching
# Need to get the math right on the counting
cropped_clean_image = clean_image_mat[half_wind:-half_wind + 1, half_wind:-half_wind + 1]
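# Note: cluster labels only exist where a complete window fits inside the image, so the
# cleaned image is trimmed by half a window on each side to stay aligned with the label
# map that is built from the windowed data below.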
# Plot cluster results: get the labels dataset
labels_mat = np.reshape(h5_kmeans['Labels'][()], [num_rows, num_cols])
fig, axes = plt.subplots(ncols=2, figsize=(14,7))
axes[0].imshow(cropped_clean_image,cmap=px.plot_utils.cmap_jet_white_center(), origin='lower')
axes[0].set_title('Cleaned Image', fontsize=16)
axes[1].imshow(labels_mat, aspect=1, interpolation='none',cmap=px.plot_utils.cmap_jet_white_center(), origin='lower')
axes[1].set_title('K-means cluster labels', fontsize=16);
for axis in axes:
axis.get_yaxis().set_visible(False)
axis.get_xaxis().set_visible(False)
# Plot dendrogram here
# Get the distance between cluster means
distance_mat = pdist(h5_kmeans['Mean_Response'][()])
# get hierarchical pairings of clusters
linkage_pairing = linkage(distance_mat,'weighted')
# Normalize the pairwise distance with the maximum distance
linkage_pairing[:,2] = linkage_pairing[:,2]/max(linkage_pairing[:,2])
# Visualize dendrogram
fig = plt.figure(figsize=(10,3))
retval = dendrogram(linkage_pairing, count_sort=True,
distance_sort=True, leaf_rotation=90)
#fig.axes[0].set_title('Dendrogram')
fig.axes[0].set_xlabel('Cluster number', fontsize=20)
fig.axes[0].set_ylabel('Cluster separation', fontsize=20)
px.plot_utils.set_tick_font_size(fig.axes[0], 12)
motif_win_size = win_size
half_wind = int(motif_win_size*0.5)
current_center = [int(0.5*cropped_clean_image.shape[0]), int(0.5*cropped_clean_image.shape[1])]
fig, axes = plt.subplots(ncols=2, figsize=(14,7))
axes[0].hold(True)
axes[0].imshow(cropped_clean_image,cmap=px.plot_utils.cmap_jet_white_center(), origin='lower')
axes[0].set_title('Cleaned Image', fontsize=16)
axes[1].set_title('Zoomed area', fontsize=16)
vert_line = axes[0].axvline(x=current_center[1], color='k')
hor_line = axes[0].axhline(y=current_center[0], color='k')
motif_box = axes[0].add_patch(patches.Rectangle((current_center[1] - half_wind, current_center[0] - half_wind),
motif_win_size, motif_win_size, fill=False,
color='black', linewidth=2))
add_motif_button = widgets.Button(description="Set as motif")
display(add_motif_button)
def move_zoom_box(coarse_row, coarse_col, fine_row, fine_col):
row = coarse_row + fine_row
col = coarse_col + fine_col
vert_line.set_xdata((col, col))
hor_line.set_ydata((row, row))
current_center[0] = row
current_center[1] = col
indices = (slice(row - half_wind, row + half_wind),
slice(col - half_wind, col + half_wind))
motif_box.set_x(col - half_wind)
motif_box.set_y(row - half_wind)
axes[1].imshow(cropped_clean_image[indices],cmap=px.plot_utils.cmap_jet_white_center(),
vmax=np.max(cropped_clean_image), vmin=np.min(cropped_clean_image), origin='lower')
axes[1].axvline(x=half_wind, color='k')
axes[1].axhline(y=half_wind, color='k')
display(fig)
motif_win_centers = list()
def add_motif(butt):
#print("Setting motif with coordinates ({}, {})".format(current_center[0], current_center[1]))
axes[0].add_patch(patches.Rectangle((current_center[1] - int(0.5*motif_win_size),
current_center[0] - int(0.5*motif_win_size)),
motif_win_size, motif_win_size, fill=False,
color='black', linewidth=2))
motif_win_centers.append((current_center[0], current_center[1]))
# print motif_win_centers
widgets.interact(move_zoom_box, coarse_row=(motif_win_size, cropped_clean_image.shape[0] - motif_win_size, 1),
coarse_col=(motif_win_size, cropped_clean_image.shape[1] - motif_win_size, 1),
fine_row=(-half_wind,half_wind,1), fine_col=(-half_wind,half_wind,1));
add_motif_button.on_click(add_motif)
# select motifs from the cluster labels using the component list:
# motif_win_centers = [(135, 128), (106, 125), (62, 204), (33, 206)]
print('Coordinates of the centers of the chosen motifs:')
print(motif_win_centers)
motif_win_size = win_size
half_wind = int(motif_win_size*0.5)
# Effectively, we end up cropping the image again by the window size while matching patterns so:
double_cropped_image = cropped_clean_image[half_wind:-half_wind, half_wind:-half_wind]
# motif_win_size = 15 # Perhaps the motif should be smaller than the original window
num_motifs = len(motif_win_centers)
motifs = list()
fig, axes = plt.subplots(ncols=3, nrows=num_motifs, figsize=(14,6 * num_motifs))
for window_center, ax_row in zip(motif_win_centers, np.atleast_2d(axes)):
indices = (slice(window_center[0] - half_wind, window_center[0] + half_wind),
slice(window_center[1] - half_wind, window_center[1] + half_wind))
motifs.append(labels_mat[indices])
ax_row[0].hold(True)
ax_row[0].imshow(cropped_clean_image, interpolation='none',cmap=px.plot_utils.cmap_jet_white_center(), origin='lower')
ax_row[0].add_patch(patches.Rectangle((window_center[1] - int(0.5*motif_win_size),
window_center[0] - int(0.5*motif_win_size)),
motif_win_size, motif_win_size, fill=False,
color='black', linewidth=2))
ax_row[0].hold(False)
ax_row[1].hold(True)
ax_row[1].imshow(cropped_clean_image[indices], interpolation='none',cmap=px.plot_utils.cmap_jet_white_center(),
vmax=np.max(cropped_clean_image), vmin=np.min(cropped_clean_image), origin='lower')
ax_row[1].plot([0, motif_win_size-2],[int(0.5*motif_win_size), int(0.5*motif_win_size)], 'k--')
ax_row[1].plot([int(0.5*motif_win_size), int(0.5*motif_win_size)], [0, motif_win_size-2], 'k--')
# ax_row[1].axis('tight')
ax_row[1].set_title('Selected window for motif around (row {}, col {})'.format(window_center[0], window_center[1]))
ax_row[1].hold(False)
ax_row[2].imshow(labels_mat[indices], interpolation='none',cmap=px.plot_utils.cmap_jet_white_center(),
vmax=num_clusters-1, vmin=0, origin='lower')
ax_row[2].set_title('Motif from K-means labels');
motif_match_coeffs = list()
for motif_mat in motifs:
match_mat = np.zeros(shape=(num_rows-motif_win_size, num_cols-motif_win_size))
for row_count, row_pos in enumerate(range(half_wind, num_rows - half_wind - 1, 1)):
for col_count, col_pos in enumerate(range(half_wind, num_cols - half_wind - 1, 1)):
local_cluster_mat = labels_mat[row_pos-half_wind : row_pos+half_wind,
col_pos-half_wind : col_pos+half_wind]
match_mat[row_count, col_count] = np.sum(local_cluster_mat == motif_mat)
# Normalize the dataset:
match_mat = match_mat/np.max(match_mat)
motif_match_coeffs.append(match_mat)
show_legend = True
base_color_map = plt.cm.get_cmap('jet')
fig = plt.figure(figsize=(8, 8))
im = plt.imshow(double_cropped_image, cmap="gray", origin='lower')
if num_motifs > 1:
motif_colors = [base_color_map(int(255 * motif_ind / (num_motifs - 1))) for motif_ind in range(num_motifs)]
else:
motif_colors = [base_color_map(0)]
handles = list()
for motif_ind, current_solid_color, match_mat in zip(range(num_motifs), motif_colors, motif_match_coeffs):
my_cmap = px.plot_utils.make_linear_alpha_cmap('fdfd', current_solid_color, 1)
im = plt.imshow(match_mat, cmap=my_cmap, origin='lower');
current_solid_color = list(current_solid_color)
current_solid_color[3] = 0.5 # maximum alpha value
handles.append(patches.Patch(color=current_solid_color, label='Motif {}'.format(motif_ind)))
if show_legend:
plt.legend(handles=handles, bbox_to_anchor=(1.01, 1), loc=2, borderaxespad=0., fontsize=14)
axis = fig.get_axes()[0]
axis.set_title('Pattern matching scores', fontsize=22)
axis.set_xticklabels([])
axis.set_yticklabels([])
axis.get_xaxis().set_visible(False)
axis.get_yaxis().set_visible(False)
plt.show()
thresholds = [0.25 for x in range(num_motifs)]
thresholded_maps = list()
motif_imgs = list()
base_color_map = plt.cm.jet
fig, axis = plt.subplots(figsize=(10, 10))
axis.hold(True)
axis.imshow(double_cropped_image, cmap="gray")
handles = list()
if num_motifs > 1:
motif_colors = [base_color_map(int(255 * motif_ind / (num_motifs - 1))) for motif_ind in range(num_motifs)]
else:
motif_colors = [base_color_map(0)]
for motif_ind, match_mat, t_hold, current_solid_color in zip(range(num_motifs), motif_match_coeffs,
thresholds, motif_colors):
my_cmap = px.plot_utils.make_linear_alpha_cmap('fdfd', current_solid_color, 1, max_alpha=0.5)
bin_map = np.where(match_mat > t_hold,
np.ones(shape=match_mat.shape, dtype=np.uint8),
np.zeros(shape=match_mat.shape, dtype=np.uint8))
thresholded_maps.append(bin_map)
motif_imgs.append(axis.imshow(bin_map, interpolation='none', cmap=my_cmap))
current_solid_color = list(current_solid_color)
current_solid_color[3] = 0.5
handles.append(patches.Patch(color=current_solid_color,label='Motif {}'.format(motif_ind)))
axis.set_xticklabels([])
axis.set_yticklabels([])
axis.get_xaxis().set_visible(False)
axis.get_yaxis().set_visible(False)
plt.legend(handles=handles, bbox_to_anchor=(1.01, 1), loc=2, borderaxespad=0.)
plt.hold(False)
def threshold_images(thresholds):
# thresholded_maps = list()
# empty the thresholded maps:
del thresholded_maps[:]
for motif_ind, match_mat, t_hold, current_solid_color in zip(range(num_motifs), motif_match_coeffs, thresholds, motif_colors):
my_cmap = px.plot_utils.make_linear_alpha_cmap('fdfd', current_solid_color, 1, max_alpha=0.5)
bin_map = np.where(match_mat > t_hold,
np.ones(shape=match_mat.shape, dtype=np.uint8),
np.zeros(shape=match_mat.shape, dtype=np.uint8))
thresholded_maps.append(bin_map)
def interaction_unpacker(**kwargs):
#threshs = range(num_motifs)
for motif_ind in range(num_motifs):
thresholds[motif_ind] = kwargs['Motif ' + str(motif_ind)]
threshold_images(thresholds)
for img_handle, th_image in zip(motif_imgs, thresholded_maps):
img_handle.set_data(th_image)
display(fig)
temp_thresh = dict()
for motif_ind in range(num_motifs):
temp_thresh['Motif ' + str(motif_ind)] = (0,1,0.025)
widgets.interact(interaction_unpacker, **temp_thresh);
print(thresholds)
atom_labels = list()
for thresh_map in thresholded_maps:
labled_atoms = measure.label(thresh_map, background=0)
map_props = measure.regionprops(labled_atoms)
atom_centroids = np.zeros(shape=(len(map_props),2))
for atom_ind, atom in enumerate(map_props):
atom_centroids[atom_ind] = np.array(atom.centroid)
atom_labels.append(atom_centroids)
# overlay atom positions on original image
fig, axis = plt.subplots(figsize=(8,8))
axis.hold(True)
col_map = plt.cm.jet
axis.imshow(double_cropped_image, interpolation='none',cmap="gray")
legend_handles = list()
for atom_type_ind, atom_centroids in enumerate(atom_labels):
axis.scatter(atom_centroids[:,1], atom_centroids[:,0], color=col_map(int(255 * atom_type_ind / (num_motifs-1))),
label='Motif {}'.format(atom_type_ind), s=30)
axis.set_xlim(0, double_cropped_image.shape[0])
axis.set_ylim(0, double_cropped_image.shape[1]);
axis.invert_yaxis()
axis.set_xticklabels([])
axis.set_yticklabels([])
axis.get_xaxis().set_visible(False)
axis.get_yaxis().set_visible(False)
axis.legend(loc='center left', bbox_to_anchor=(1, 0.5), fontsize=14)
axis.set_title('Atom Positions', fontsize=22)
fig.tight_layout()
#plt.show()
h5_file.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the image that will be cleaned
Step2: Make the image file pycroscopy compatible
Step3: Inspect the contents of this h5 data file
Step4: Initialize an object that will perform image windowing on the .h5 file
Step5: Visualize the source image
Step6: Extract the optimal window size from the image
Step7: Now break the image into a sequence of small windows
Step8: Performing Singular Value Decompostion (SVD) on the windowed data
Step9: Visualize the SVD results
Step10: V (Eigenvectors or end-members)
Step11: U (Abundance maps)
Step12: Reconstruct image (while removing noise)
Step13: Reconstruct the image with the first N components
Step14: Check the cleaned image now
Step15: Atom Finding
Step16: Visualize the hierarchical clustering
Step17: Identifying the principal patterns
Step18: Visualize the motifs that were selected above
Step19: Calculate matching scores for each motif
Step20: Visualize the matching scores
Step21: Convert matching scores to binary
Step22: Find the atom centers from the binary maps
Step23: Visualize the atom positions
Step24: Save and close
|
7,715
|
<ASSISTANT_TASK:>
Python Code:
def hara(t, c, a, b, **params):
    """Hyperbolic Absolute Risk Aversion (HARA).

    Notes
    -----
    For Constant Absolute Risk Aversion (CARA), set a=0; for
    Constant Relative Risk Aversion (CRRA), set b=0.
    """
return 1 / (a * c + b)
def cobb_douglas_output(k_tilde, alpha, l, **params):
return k_tilde**alpha * l**(1 - alpha)
def cobb_douglas_mpk(k_tilde, alpha, l, **params):
return alpha * k_tilde**(alpha - 1) * l**(1 - alpha)
def c_tilde_dot(t, k_tilde, c_tilde, A0, delta, g, rho, **params):
A = A0 * np.exp(g * t)
r = cobb_douglas_mpk(k_tilde, **params) - delta
return ((r - rho) / (A * hara(t, A * c_tilde, **params))) - g * c_tilde
def k_tilde_dot(t, k_tilde, c_tilde, delta, g, n, **params):
return cobb_douglas_output(k_tilde, **params) - c_tilde - (g + n + delta) * k_tilde
def standard_ramsey_model(t, k_tilde, c_tilde, A0, delta, g, n, rho, **params):
out = [k_tilde_dot(t, k_tilde, c_tilde, delta, g, n, **params),
c_tilde_dot(t, k_tilde, c_tilde, A0, delta, g, rho, **params)]
return out
def initial_condition(t, k_tilde, c_tilde, A0, K0, N0, **params):
return [k_tilde - (K0 / (A0 * N0))]
def terminal_condition(t, k_tilde, c_tilde, **params):
return [c_tilde - equilibrium_consumption(**params)]
def equilibrium_capital(a, alpha, b, delta, g, l, n, rho, **params):
return ((a * alpha * l**(1 - alpha)) / (a * (delta + rho) + g))**(1 / (1 - alpha))
def equilibrium_consumption(a, alpha, b, delta, g, l, n, rho, **params):
kss = equilibrium_capital(a, alpha, b, delta, g, l, n, rho)
return cobb_douglas_output(kss, alpha, l) - (g + n + delta) * kss
# set b=0 for CRRA...
params = {'a': 1.0, 'b': 0.0, 'g': 0.02, 'n': 0.02, 'alpha': 0.15,
'delta': 0.04, 'l': 1.0, 'K0': 1.0, 'A0': 1.0, 'N0': 1.0,
'rho': 0.02}
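# Quick sanity check: at the steady state both time derivatives should be (approximately)
# zero, so we can plug the equilibrium values back into the functions defined above.
kss_check = equilibrium_capital(**params)
css_check = equilibrium_consumption(**params)
print('steady-state capital (per unit effective labor): {:.4f}'.format(kss_check))
print('steady-state consumption (per unit effective labor): {:.4f}'.format(css_check))
print('k_tilde_dot at the steady state: {:.2e}'.format(k_tilde_dot(0.0, kss_check, css_check, **params)))
print('c_tilde_dot at the steady state: {:.2e}'.format(c_tilde_dot(0.0, kss_check, css_check, **params)))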
pycollocation.problems.TwoPointBVP?
standard_ramsey_bvp = pycollocation.problems.TwoPointBVP(bcs_lower=initial_condition,
bcs_upper=terminal_condition,
number_bcs_lower=1,
number_odes=2,
params=params,
rhs=standard_ramsey_model,
)
def initial_mesh(t, T, num, problem):
# compute equilibrium values
cstar = equilibrium_consumption(**problem.params)
kstar = equilibrium_capital(**problem.params)
ystar = cobb_douglas_output(kstar, **problem.params)
# create the mesh for capital
ts = np.linspace(t, T, num)
k0 = problem.params['K0'] / (problem.params['A0'] * problem.params['N0'])
ks = kstar - (kstar - k0) * np.exp(-ts)
# create the mesh for consumption
s = 1 - (cstar / ystar)
y0 = cobb_douglas_output(k0, **problem.params)
c0 = (1 - s) * y0
cs = cstar - (cstar - c0) * np.exp(-ts)
return ts, ks, cs
pycollocation.solvers.Solver?
polynomial_basis = pycollocation.basis_functions.PolynomialBasis()
solver = pycollocation.solvers.Solver(polynomial_basis)
boundary_points = (0, 200)
ts, ks, cs = initial_mesh(*boundary_points, num=1000, problem=standard_ramsey_bvp)
basis_kwargs = {'kind': 'Chebyshev', 'domain': boundary_points, 'degree': 25}
k_poly = polynomial_basis.fit(ts, ks, **basis_kwargs)
c_poly = polynomial_basis.fit(ts, cs, **basis_kwargs)
initial_coefs = np.hstack([k_poly.coef, c_poly.coef])
nodes = polynomial_basis.roots(**basis_kwargs)
solution = solver.solve(basis_kwargs, boundary_points, initial_coefs,
nodes, standard_ramsey_bvp)
ts, _, _ = initial_mesh(*boundary_points, 1000, standard_ramsey_bvp)
k_soln, c_soln = solution.evaluate_solution(ts)
plt.plot(ts, k_soln)
plt.plot(ts, c_soln)
plt.show()
k_resids, c_resids = solution.evaluate_residual(ts)
plt.plot(ts, k_resids)
plt.plot(ts, c_resids)
plt.show()
k_normalized_resids, c_normalized_resids = solution.normalize_residuals(ts)
plt.plot(ts, np.abs(k_normalized_resids))
plt.plot(ts, np.abs(c_normalized_resids))
plt.yscale('log')
plt.show()
bspline_basis = pycollocation.basis_functions.BSplineBasis()
solver = pycollocation.solvers.Solver(bspline_basis)
boundary_points = (0, 200)
ts, ks, cs = initial_mesh(*boundary_points, num=250, problem=standard_ramsey_bvp)
tck, u = bspline_basis.fit([ks, cs], u=ts, k=5, s=0)
knots, coefs, k = tck
initial_coefs = np.hstack(coefs)
basis_kwargs = {'knots': knots, 'degree': k, 'ext': 2}
nodes = np.linspace(*boundary_points, num=249)
solution = solver.solve(basis_kwargs, boundary_points, initial_coefs,
nodes, standard_ramsey_bvp)
ts, _, _ = initial_mesh(*boundary_points, num=1000, problem=standard_ramsey_bvp)
k_soln, c_soln = solution.evaluate_solution(ts)
plt.plot(ts, k_soln)
plt.plot(ts, c_soln)
plt.show()
k_resids, c_resids = solution.evaluate_residual(ts)
plt.plot(ts, k_resids)
plt.plot(ts, c_resids)
plt.show()
k_normalized_resids, c_normalized_resids = solution.normalize_residuals(ts)
plt.plot(ts, np.abs(k_normalized_resids))
plt.plot(ts, np.abs(c_normalized_resids))
plt.yscale('log')
plt.show()
from pycollocation.tests import models
def ces_output(k_tilde, alpha, l, sigma, **params):
gamma = (sigma - 1) / sigma
if gamma == 0:
y = k_tilde**alpha * l**(1 - alpha)
else:
y = (alpha * k_tilde**gamma + (1 - alpha) * l**gamma)**(1 / gamma)
return y
def ces_mpk(k_tilde, alpha, l, sigma, **params):
y = ces_output(k_tilde, alpha, l, sigma)
gamma = (sigma - 1) / sigma
if gamma == 0:
mpk = alpha * (y / k_tilde)
else:
mpk = alpha * k_tilde**(gamma - 1) * (y / (alpha * k_tilde**gamma + (1 - alpha) * l**gamma))
return mpk
def ces_equilibrium_capital(a, alpha, b, delta, g, l, n, rho, sigma, **params):
    """Steady state value for capital stock (per unit effective labor)."""
gamma = (sigma - 1) / sigma
if gamma == 1:
kss = ((a * alpha * l**(1 - alpha)) / (a * (delta + rho) + g))**(1 / (1 - alpha))
else:
kss = l * ((1 / (1 - alpha)) * (((a * (delta + rho) + g) / (a * alpha))**(gamma / (1 - gamma)) - alpha))**(-1 / gamma)
return kss
ces_params = {'a': 0.5, 'b': 1.0, 'g': 0.02, 'n': 0.02, 'alpha': 0.15,
'delta': 0.04, 'l': 5.0, 'K0': 2.0, 'A0': 1.0, 'N0': 1.0,
'rho': 0.02, 'sigma': 2.0}
generic_ramsey_bvp = models.RamseyCassKoopmansModel(hara,
ces_output,
ces_equilibrium_capital,
ces_mpk,
ces_params)
polynomial_basis = pycollocation.basis_functions.PolynomialBasis()
solver = pycollocation.solvers.Solver(polynomial_basis)
boundary_points = (0, 500)
ts, ks, cs = initial_mesh(*boundary_points, num=1000, problem=generic_ramsey_bvp)
basis_kwargs = {'kind': 'Chebyshev', 'domain': boundary_points, 'degree': 30}
k_poly = polynomial_basis.fit(ts, ks, **basis_kwargs)
c_poly = polynomial_basis.fit(ts, cs, **basis_kwargs)
initial_coefs = np.hstack([k_poly.coef, c_poly.coef])
nodes = polynomial_basis.roots(**basis_kwargs)
solution = solver.solve(basis_kwargs, boundary_points, initial_coefs,
nodes, generic_ramsey_bvp)
solution.result.success
k_soln, c_soln = solution.evaluate_solution(ts)
plt.plot(ts, k_soln)
plt.plot(ts, c_soln)
plt.show()
k_normalized_resids, c_normalized_resids = solution.normalize_residuals(ts)
plt.plot(ts, np.abs(k_normalized_resids))
plt.plot(ts, np.abs(c_normalized_resids))
plt.yscale('log')
plt.show()
css = generic_ramsey_bvp.equilibrium_consumption(**generic_ramsey_bvp.params)
kss = ces_equilibrium_capital(**generic_ramsey_bvp.params)
plt.plot(k_soln / kss, c_soln / css)
plt.xlabel(r'$\frac{\tilde{k}}{\tilde{k}^*}$', fontsize=20)
plt.ylabel(r'$\frac{\tilde{c}}{\tilde{c}^*}$', fontsize=20, rotation='horizontal')
plt.title("Phase space for the Ramsey-Cass-Koopmans model")
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(k_soln / kss, c_soln / css, ts, label='Ramsey model trajectory')
plt.xlabel(r'$\frac{\tilde{k}}{\tilde{k}^*}$', fontsize=20)
plt.ylabel(r'$\frac{\tilde{c}}{\tilde{c}^*}$', fontsize=20)
ax.set_zlabel('$t$')
ax.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h1>Textbook example
Step2: To complete the model we need to define some parameter values.
Step3: <h2>Solving the model with pyCollocation</h2>
Step4: Finding a good initial guess for $\tilde{k}(t)$
Step5: Solving the model
Step6: <h3> Polynomial basis functions </h3>
Step7: <h3> B-spline basis functions </h3>
Step8: <h1> Generic Ramsey-Cass-Koopmans model</h1>
Step10: Example usage...
Step11: <h3> Phase space plots </h3>
Step12: <h4> 3D phase space </h4>
|
7,716
|
<ASSISTANT_TASK:>
Python Code:
# library to store and manipulate neural-network input and output data
import numpy as np
# library to graphically display any data
import matplotlib.pyplot as plt
# library to manipulate neural-network models
import torch
import torch.nn as nn
import torch.optim as optim
# check which version of PyTorch is installed
print("Pytorch version:", torch.__version__)
# To check whether you code will use a GPU or not, uncomment the following two
# lines of code. You should either see:
# * an "XLA_GPU",
# * or better a "K80" GPU
# * or even better a "T100" GPU
if torch.cuda.is_available():
print('GPU support (%s)' % torch.cuda.get_device_name(0))
else:
print('no GPU support')
import time
# trivial "debug" function to display the duration between time_1 and time_2
def get_duration(time_1, time_2):
duration_time = time_2 - time_1
m, s = divmod(duration_time, 60)
h, m = divmod(m, 60)
s,m,h = int(round(s, 0)), int(round(m, 0)), int(round(h, 0))
duration = "duration: " + "{0:02d}:{1:02d}:{2:02d}".format(h, m, s)
return duration
# DO NOT MODIFY THIS CODE
# IT HAS JUST BEEN WRITTEN TO GENERATE THE DATA
# library for generating random numbers
#import random
# secret relationship between X data and Y data
#def generate_random_output_data_correlated_from_input_data(nb_samples):
# generate nb_samples random x between 0 and 1
# X = np.array( [random.random() for i in range(nb_samples)] )
# generate nb_samples y correlated with x
# Y = np.tan(np.sin(X) + np.cos(X))
# return X, Y
#def get_new_X_Y(nb_samples, debug=False):
# X, Y = generate_random_output_data_correlated_from_input_data(nb_samples)
# if debug:
# print("generate %d X and Y samples:" % nb_samples)
# X_Y = zip(X, Y)
# for i, x_y in enumerate(X_Y):
# print("data sample %d: x=%.3f, y=%.3f" % (i, x_y[0], x_y[1]))
# return X, Y
# Number of samples for the training dataset and the test dataset
#nb_samples=50
# Get some data for training the future neural-network model
#X_train, Y_train = get_new_X_Y(nb_samples)
# Get some other data for evaluating the future neural-network model
#X_test, Y_test = get_new_X_Y(nb_samples)
# In most cases, it will be necessary to normalize X and Y data with code like:
# X_centered -= X.mean(axis=0)
# X_normalized /= X_centered.std(axis=0)
#def mstr(X):
# my_str ='['
# for x in X:
# my_str += str(float(int(x*1000)/1000)) + ','
# my_str += ']'
# return my_str
## Call get_new_X_Y to get an idea of what it returns
#generate_data = False
#if generate_data:
# nb_samples = 50
# X_train, Y_train = get_new_X_Y(nb_samples)
# print('X_train = np.array(%s)' % mstr(X_train))
# print('Y_train = np.array(%s)' % mstr(Y_train))
# X_test, Y_test = get_new_X_Y(nb_samples)
# print('X_test = np.array(%s)' % mstr(X_test))
# print('Y_test = np.array(%s)' % mstr(Y_test))
X_train = np.array([0.765,0.838,0.329,0.277,0.45,0.833,0.44,0.634,0.351,0.784,0.589,0.816,0.352,0.591,0.04,0.38,0.816,0.732,0.32,0.597,0.908,0.146,0.691,0.75,0.568,0.866,0.705,0.027,0.607,0.793,0.864,0.057,0.877,0.164,0.729,0.291,0.324,0.745,0.158,0.098,0.113,0.794,0.452,0.765,0.983,0.001,0.474,0.773,0.155,0.875,])
Y_train = np.array([6.322,6.254,3.224,2.87,4.177,6.267,4.088,5.737,3.379,6.334,5.381,6.306,3.389,5.4,1.704,3.602,6.306,6.254,3.157,5.446,5.918,2.147,6.088,6.298,5.204,6.147,6.153,1.653,5.527,6.332,6.156,1.766,6.098,2.236,6.244,2.96,3.183,6.287,2.205,1.934,1.996,6.331,4.188,6.322,5.368,1.561,4.383,6.33,2.192,6.108,])
X_test = np.array([0.329,0.528,0.323,0.952,0.868,0.931,0.69,0.112,0.574,0.421,0.972,0.715,0.7,0.58,0.69,0.163,0.093,0.695,0.493,0.243,0.928,0.409,0.619,0.011,0.218,0.647,0.499,0.354,0.064,0.571,0.836,0.068,0.451,0.074,0.158,0.571,0.754,0.259,0.035,0.595,0.245,0.929,0.546,0.901,0.822,0.797,0.089,0.924,0.903,0.334,])
Y_test = np.array([3.221,4.858,3.176,5.617,6.141,5.769,6.081,1.995,5.259,3.932,5.458,6.193,6.129,5.305,6.081,2.228,1.912,6.106,4.547,2.665,5.791,3.829,5.619,1.598,2.518,5.826,4.603,3.405,1.794,5.23,6.26,1.81,4.18,1.832,2.208,5.234,6.306,2.759,1.684,5.432,2.673,5.781,5.019,5.965,6.295,6.329,1.894,5.816,5.951,3.258,])
print('X_train contains %d samples' % X_train.shape)
print('Y_train contains %d samples' % Y_train.shape)
print('')
print('X_test contains %d samples' % X_test.shape)
print('Y_test contains %d samples' % Y_test.shape)
# Graphically display our training data
plt.scatter(X_train, Y_train, color='green', alpha=0.5)
plt.title('Scatter plot of the training data')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# Graphically display our test data
plt.scatter(X_test, Y_test, color='blue', alpha=0.5)
plt.title('Scatter plot of the testing data')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# THIS IS THE ONLY CELL WHERE YOU HAVE TO ADD AND/OR MODIFY CODE
from collections import OrderedDict
def create_model():
    # Build and return a small fully-connected regression network
model = nn.Sequential(OrderedDict([
('hidden_layer_1', nn.Linear(1,128)), ('hidden_layer_1_act', nn.ReLU()),
('hidden_layer_2', nn.Linear(128,32)), ('hidden_layer_2_act', nn.ReLU()),
('hidden_layer_3', nn.Linear(32,4)), ('hidden_layer_3_act', nn.ReLU()),
('output_layer', nn.Linear(4,1))
]))
# NO COMPILATION AS IN TENSORFLOW
#model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),
# loss='mean_squared_error',
# metrics=['mean_absolute_error', 'mean_squared_error'])
return model
ann_model = create_model()
# Display a textual summary of the newly created model
# Pay attention to size (a.k.a. total parameters) of the network
print(ann_model)
print('params:', sum(p.numel() for p in ann_model.parameters()))
print('trainable_params:', sum(p.numel() for p in ann_model.parameters() if p.requires_grad))
%%html
As a reminder for understanding, the following ANN unit contains <b>m + 1</b> trainable parameters:<br>
<img src='https://www.degruyter.com/view/j/nanoph.2017.6.issue-3/nanoph-2016-0139/graphic/j_nanoph-2016-0139_fig_002.jpg' alt="perceptron" width="400" />
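# To tie the m + 1 rule above back to the summary printed earlier: each nn.Linear(m, n)
# layer holds m*n weights plus n biases. A small sketch of that arithmetic for the
# 1 -> 128 -> 32 -> 4 -> 1 architecture used here:
layer_sizes = [(1, 128), (128, 32), (32, 4), (4, 1)]
params_per_layer = [m * n + n for (m, n) in layer_sizes]
print('parameters per layer:', params_per_layer)
print('total parameters:', sum(params_per_layer))  # should match the trainable_params count above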
# Object for storing training results (similar to Tensorflow object)
class Results:
history = {
'train_loss': [],
'valid_loss': []
}
# There is no Pytorch model.fit() function as there is in Tensorflow/Keras,
# but we can implement it ourselves.
# validation_split=0.2 means that 20% of the X_train samples will be used
# for a validation set and that "only" 80% will be used for training
def fit(ann_model, X, Y, verbose=False,
batch_size=1, epochs=500, validation_split=0.2):
n_samples = X.shape[0]
n_samples_test = n_samples - int(n_samples * validation_split)
X = torch.from_numpy(X).unsqueeze(1).float()
Y = torch.from_numpy(Y).unsqueeze(1).float()
X_train = X[0:n_samples_test]
Y_train = Y[0:n_samples_test]
X_valid = X[n_samples_test:]
Y_valid = Y[n_samples_test:]
loss_fn = nn.MSELoss()
optimizer = optim.RMSprop(ann_model.parameters(), lr=0.01)
results = Results()
for epoch in range(0, epochs):
Ŷ_train = ann_model(X_train)
train_loss = loss_fn(Ŷ_train, Y_train)
Ŷ_valid = ann_model(X_valid)
valid_loss = loss_fn(Ŷ_valid, Y_valid)
optimizer.zero_grad()
train_loss.backward()
optimizer.step()
results.history['train_loss'].append(float(train_loss))
results.history['valid_loss'].append(float(valid_loss))
if verbose:
if epoch % 1000 == 0:
print('epoch:%d, train_loss:%.3f, valid_loss:%.3f' \
% (epoch, float(train_loss), float(valid_loss)))
return results
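# Note on the fit() helper above: each epoch performs a single full-batch gradient step
# on the training split; the batch_size argument is kept for API symmetry with Keras-style
# fit() but is not actually used to form mini-batches.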
# Train the model with the input data and the output_values
# validation_split=0.2 means that 20% of the X_train samples will be used
# for a validation set and that "only" 80% will be used for training
t0 = time.time()
results = fit(ann_model, X_train, Y_train, verbose=True,
batch_size=1, epochs=10000, validation_split=0.2)
t1 = time.time()
print('training_%s' % get_duration(t0, t1))
plt.plot(results.history['train_loss'], label = 'train_loss')
plt.plot(results.history['valid_loss'], label = 'validation_loss')
plt.legend()
plt.show()
# If you can write a file locally (i.e. if Google Drive is available in the Colab environment)
# then you can save your model in a file for future reuse.
# Only uncomment the following line if you can write a file
#torch.save(ann_model.state_dict(), 'ann_101.pt')
# There is no Pytorch model.evaluate() function as there is in Tensorflow/Keras,
# but we can implement it ourselves.
def evaluate(ann_model, X_, Y_, verbose=False):
X = torch.from_numpy(X_).unsqueeze(1).float()
Y = torch.from_numpy(Y_).unsqueeze(1).float()
Ŷ = ann_model(X)
# let's calculate the mean square error
# (could also be calculated with sklearn.metrics.mean_squared_error()
# or we could also calculate other errors like in 5% ok
mean_squared_error = torch.sum((Ŷ - Y) ** 2)/Y.shape[0]
if verbose:
print("mean_squared_error:%.3f" % mean_squared_error)
return mean_squared_error
test_loss = evaluate(ann_model, X_test, Y_test, verbose=True)
X_new_values = torch.Tensor([0., 0.2, 0.4, 0.6, 0.8, 1.0]).unsqueeze(1).float()
Y_predicted_values = ann_model(X_new_values).detach().numpy()
Y_predicted_values
# Display training data and predicted data graphically
plt.title('Training data (green color) + Predicted data (red color)')
# training data in green color
plt.scatter(X_train, Y_train, color='green', alpha=0.5)
# test data in blue color (left disabled)
#plt.scatter(X_test, Y_test, color='blue', alpha=0.5)
# predicted data in red color
plt.scatter(X_new_values, Y_predicted_values, color='red', alpha=0.5)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the data
Step2: Build the artificial neural-network
Step3: Train the artificial neural-network model
Step4: Evaluate the model
Step5: Predict new output data
|
7,717
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries
import tensorflow as tf
import numpy as np
import time
import collections
import os
# Import MNIST data with TensorFlow
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets(os.path.join('datasets', 'mnist'), one_hot=True) # load data in local folder
train_data = mnist.train.images.astype(np.float32)
train_labels = mnist.train.labels
test_data = mnist.test.images.astype(np.float32)
test_labels = mnist.test.labels
print(train_data.shape)
print(train_labels.shape)
print(test_data.shape)
print(test_labels.shape)
# computational graph inputs
batch_size = 100
d = train_data.shape[1]
nc = 10
x = tf.placeholder(tf.float32,[batch_size,d]); print('x=',x,x.get_shape())
y_label = tf.placeholder(tf.float32,[batch_size,nc]); print('y_label=',y_label,y_label.get_shape())
# computational graph variables
initial = tf.truncated_normal([d,nc], stddev=0.1); W = tf.Variable(initial); print('W=',W.get_shape())
b = tf.Variable(tf.zeros([nc],tf.float32)); print('b=',b.get_shape())
# Construct CG / output value
y = tf.matmul(x, W); print('y1=',y,y.get_shape())
y += b; print('y2=',y,y.get_shape())
y = tf.nn.softmax(y); print('y3=',y,y.get_shape())
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
reg_loss = tf.nn.l2_loss(W)
reg_loss += tf.nn.l2_loss(b)
reg_par = 1e-3
total_loss = cross_entropy + reg_par* reg_loss
# Update CG variables / backward pass
train_step = tf.train.GradientDescentOptimizer(0.25).minimize(total_loss)
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Create test set
idx = np.random.permutation(test_data.shape[0]) # rand permutation
idx = idx[:batch_size]
test_x, test_y = test_data[idx,:], test_labels[idx]
n = train_data.shape[0]
indices = collections.deque()
# Running Computational Graph
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(50):
# Batch extraction
if len(indices) < batch_size:
indices.extend(np.random.permutation(n)) # rand permutation
idx = [indices.popleft() for i in range(batch_size)] # extract n_batch data
batch_x, batch_y = train_data[idx,:], train_labels[idx]
# Run CG for variable training
_,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={x: batch_x, y_label: batch_y})
print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)
# Run CG for testset
acc_test = sess.run(accuracy, feed_dict={x: test_x, y_label: test_y})
print('test accuracy=',acc_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1st Step
Step2: Question 2
Step3: Question 3
Step4: Question 4
Step5: Question 5
Step6: Question 6
Step7: Question 7
Step8: Question 8
Step9: 2nd Step
|
7,718
|
<ASSISTANT_TASK:>
Python Code:
#record atom_name chain x y z occupancy atom_type
# | | | | | | | |
#ATOM 1086 CG LYS A 141 -4.812 9.683 2.584 1.00 26.78 N0
# | | | |
# atom_num amino_acid resid_num bfactor
line_frompdb = "ATOM 1086 N SER A 141 -4.812 9.683 2.584 1.00 26.78 N0"
print(line_frompdb[2:4])
#record atom_name chain x y z occupancy atom_type
# | | | | | | | |
#ATOM 1086 CG LYS A 141 -4.812 9.683 2.584 1.00 26.78 N0
# | | | |
# atom_num amino_acid resid_num bfactor
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Predict what the following will do
Step2: Write a program that
|
7,719
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.__version__
x = [1,2]
y = [[4, 1], [2, 2]]
#print np.dot(x, y)
#print np.dot(y, x)
#print np.matmul(x, y)
#print np.inner(x, y)
#print np.inner(y, x)
x = [[1, 0], [0, 1]]
y = [[4, 1], [2, 2], [1, 1]]
#print np.dot(y, x)
#print np.matmul(y, x)
x = np.array([[1, 4], [5, 6]])
y = np.array([[4, 1], [2, 2]])
#print np.vdot(x, y)
#print np.vdot(y, x)
#print np.dot(x.flatten(), y.flatten())
#print np.inner(x.flatten(), y.flatten())
#print (x*y).sum()
x = np.array(['a', 'b'], dtype=object)
y = np.array([1, 2])
#print np.inner(x, y)
#print np.inner(y, x)
#print np.outer(x, y)
#print np.outer(y, x)
x = np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]], dtype=np.int32)
x = np.array([[12, -51, 4], [6, 167, -68], [-4, 24, -41]], dtype=np.float32)
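# A possible answer sketch for the QR question: factor x and verify that
# q (orthonormal columns) times r (upper triangular) reconstructs x.
q, r = np.linalg.qr(x)
assert np.allclose(np.dot(q, r), x)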
x = np.array([[1, 0, 0, 0, 2], [0, 0, 3, 0, 0], [0, 0, 0, 0, 0], [0, 2, 0, 0, 0]], dtype=np.float32)
x = np.diag((1, 2, 3))
#print np.array_equal(np.dot(x, eigenvecs), eigenvals * eigenvecs)
x = np.arange(1, 10).reshape((3, 3))
x = np.arange(1, 5).reshape((2, 2))
x = np.eye(4)
x = np.arange(1, 5).reshape((2, 2))
x = np.eye(4)
x = np.array([[1., 2.], [3., 4.]])
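# A possible answer sketch for the final solving/inverting questions:
# invert x, check the inverse, then solve x . s = b for a right-hand side b.
x_inv = np.linalg.inv(x)
assert np.allclose(np.dot(x, x_inv), np.eye(2))
b = np.array([1., 0.])
s = np.linalg.solve(x, b)
assert np.allclose(np.dot(x, s), b)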
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Matrix and vector products
Step2: Q2. Predict the results of the following code.
Step3: Q3. Predict the results of the following code.
Step4: Q4. Predict the results of the following code.
Step5: Decompositions
Step6: Q6. Compute the qr factorization of x and verify it.
Step7: Q7. Factor x by Singular Value Decomposition and verify it.
Step8: Matrix eigenvalues
Step9: Q9. Predict the results of the following code.
Step10: Norms and other numbers
Step11: Q11. Calculate the determinant of x.
Step12: Q12. Calculate the rank of x.
Step13: Q13. Compute the sign and natural logarithm of the determinant of x.
Step14: Q14. Return the sum along the diagonal of x.
Step15: Solving equations and inverting matrices
|
7,720
|
<ASSISTANT_TASK:>
Python Code:
from sympy import symbols, integrate, simplify, solve, Rational
a, b = symbols("a b")
r, r0 = symbols("r r0")
f = a*(r-r0) + b
f
dr = symbols(r"\Delta") # raw string so the backslash is not treated as an escape
f0, fp = symbols("f_i f_{i+1}")
rm12 = r0 - Rational(1,2)*dr
rp12 = r0 + Rational(1,2)*dr
rp32 = r0 + Rational(3,2)*dr
r1 = r0 + dr
rm12, rp12, rp32
r0, r1
A = simplify(integrate(f*r/(r0*dr), (r, rm12, rp12)))
B = simplify(integrate(f*r/(r1*dr), (r, rp12, rp32)))
A, B
coeffs = solve([A-f0, B-fp], [a,b])
coeffs
l = f.subs(a,coeffs[a]).subs(b,coeffs[b])
l
a = l.subs(r, rp12)
simplify(a)
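# Quick consistency check (an added sketch using the symbols defined above):
# substituting the solved coefficients back, the conservative average over
# the cell [r_{i-1/2}, r_{i+1/2}] should recover f_i, so this simplifies to 0.
simplify(integrate(l*r/(r0*dr), (r, rm12, rp12)) - f0)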
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: constraints
Step2: interfaces
Step3: centers
Step4: The analytic forms of the integrals
Step5: Our linear system is now
Step6: And in pretty form, here's the polynomial
Step7: Now, evaluate it on the interface
|
7,721
|
<ASSISTANT_TASK:>
Python Code:
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
print("Downloading Colab files")
! shred -u setup_google_colab.py
! wget https://raw.githubusercontent.com/hse-aml/bayesian-methods-for-ml/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
setup_google_colab.load_data_final_project()
! pip install GPy gpyopt
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import clear_output
import tensorflow as tf
import GPy
import GPyOpt
import keras
from keras.layers import Input, Dense, Lambda, InputLayer, concatenate, Activation, Flatten, Reshape
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Conv2D, Deconv2D
from keras.losses import MSE
from keras.models import Model, Sequential
from keras import backend as K
from keras import metrics
from keras.datasets import mnist
from keras.utils import np_utils
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
import utils
import os
%matplotlib inline
sess = tf.InteractiveSession()
K.set_session(sess)
latent_size = 8
! shred -u CelebA_VAE_small_8.h5; wget https://github.com/hse-aml/bayesian-methods-for-ml/releases/download/v0.1/CelebA_VAE_small_8.h5 -O CelebA_VAE_small_8.h5
vae, encoder, decoder = utils.create_vae(batch_size=128, latent=latent_size)
sess.run(tf.global_variables_initializer())
vae.load_weights('CelebA_VAE_small_8.h5')
K.set_learning_phase(False)
latent_placeholder = tf.placeholder(tf.float32, (1, latent_size))
decode = decoder(latent_placeholder)
### TODO: Draw 25 samples from VAE here
plt.figure(figsize=(10, 10))
for i in range(25):
plt.subplot(5, 5, i+1)
z = np.random.normal(size=(1, latent_size))
image = sess.run(decode, feed_dict={latent_placeholder : z})[0] ### YOUR CODE HERE
plt.imshow(np.clip(image, 0, 1))
plt.axis('off')
class FacialComposit:
def __init__(self, decoder, latent_size):
self.latent_size = latent_size
self.latent_placeholder = tf.placeholder(tf.float32, (1, latent_size))
self.decode = decoder(self.latent_placeholder)
self.samples = None
self.images = None
self.rating = None
def _get_image(self, latent):
img = sess.run(self.decode,
feed_dict={self.latent_placeholder: latent[None, :]})[0]
img = np.clip(img, 0, 1)
return img
@staticmethod
def _show_images(images, titles):
assert len(images) == len(titles)
clear_output()
plt.figure(figsize=(3*len(images), 3))
n = len(titles)
for i in range(n):
plt.subplot(1, n, i+1)
plt.imshow(images[i])
plt.title(str(titles[i]))
plt.axis('off')
plt.show()
@staticmethod
def _draw_border(image, w=2):
bordred_image = image.copy()
bordred_image[:, :w] = [1, 0, 0]
bordred_image[:, -w:] = [1, 0, 0]
bordred_image[:w, :] = [1, 0, 0]
bordred_image[-w:, :] = [1, 0, 0]
return bordred_image
def query_initial(self, n_start=10, select_top=10):
'''
Creates initial points for Bayesian optimization
Generate *n_start* random images and asks user to rank them.
Gives maximum score to the best image and minimum to the worst.
:param n_start: number of images to rank initialy.
:param select_top: number of images to keep
'''
samples = np.random.normal(size=(n_start, self.latent_size))
images = np.array([self._get_image(samples[i]) for i in range(n_start)])
self._show_images(images, range(n_start))
rating = []
idx = []
print('Please rank images from best to worst: 0 = worst')
for i in range(select_top):
idx.append(i)
msg = 'Ranking for image {0}: '.format(i)
rank = int(input(msg))
rating.append(rank)
self.samples = np.array(samples[idx]) ### YOUR CODE HERE (size: select_top x latent_size)
self.images = np.array(images[idx]) ### YOUR CODE HERE (size: select_top x 64 x 64 x 3)
self.rating = np.array(rating) ### YOUR CODE HERE (size: select_top)
print(self.samples.shape,
self.images.shape,
self.rating.shape
)
# Check that tensor sizes are correct
np.testing.assert_equal(self.rating.shape, [select_top])
np.testing.assert_equal(self.images.shape, [select_top, 64, 64, 3])
np.testing.assert_equal(self.samples.shape, [select_top, self.latent_size])
def evaluate(self, candidate):
'''
Queries candidate vs known image set.
Adds candidate into images pool.
:param candidate: latent vector of size 1xlatent_size
'''
initial_size = len(self.images)
### YOUR CODE HERE
## Show user an image and ask to assign score to it.
## You may want to show some images to user along with their scores
## You should also save candidate, corresponding image and rating
candidate_image = self._get_image(candidate[0]).reshape((1, 64, 64, 3))
worst_image = self.images[np.argmin(self.rating)]
best_image = self.images[np.argmax(self.rating)]
images = [candidate_image[0], worst_image, best_image]
titles = ['Candidate', 'Worst image', 'Best image']
self._show_images(images, titles)
candidate_rating = int(input('Ranking for candidate image')) ### YOUR CODE HERE
print(candidate_rating)
self.images = np.append(self.images, candidate_image, axis=0)
self.rating = np.append(self.rating, candidate_rating)
self.samples = np.append(self.samples, candidate, axis=0)
assert len(self.images) == initial_size + 1
assert len(self.rating) == initial_size + 1
assert len(self.samples) == initial_size + 1
return candidate_rating
def optimize(self, n_iter=10, w=4, acquisition_type='MPI', acquisition_par=0.3):
if self.samples is None:
self.query_initial()
bounds = [{'name': 'z_{0:03d}'.format(i),
'type': 'continuous',
'domain': (-w, w)}
for i in range(self.latent_size)]
optimizer = GPyOpt.methods.BayesianOptimization(f=self.evaluate, domain=bounds,
acquisition_type = acquisition_type,
acquisition_par = acquisition_par,
exact_eval=False, # Since we are not sure
model_type='GP',
X=self.samples,
Y=self.rating[:, None],
maximize=True)
optimizer.run_optimization(max_iter=n_iter, eps=-1)
def get_best(self):
index_best = np.argmax(self.rating)
return self.images[index_best]
def draw_best(self, title=''):
index_best = np.argmax(self.rating)
image = self.images[index_best]
plt.imshow(image)
plt.title(title)
plt.axis('off')
plt.show()
composit = FacialComposit(decoder, 8)
composit.optimize()
composit.draw_best('Darkest hair')
composit = FacialComposit(decoder, 8)
composit.optimize()
composit.draw_best('Widest smile')
composit = FacialComposit(decoder, 8)
composit.optimize()
composit.draw_best('Lecturer')
### Your code here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Grading
Step2: GRADED 1 (3 points)
Step3: Search procedure
Step4: GRADED 2 (3 points)
Step5: Task 3.2. Finding person with the widest smile (3 points)
Step6: Task 3.3. Finding Daniil Polykovskiy or Alexander Novikov — lecturers of this course (3 points)
Step7: <small>Don't forget to post resulting image of lecturers on the forum ;)</small>
|
7,722
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles
from sklearn.svm import SVC
X, y = make_circles(noise=0.2, factor=0.5, random_state=1)
from sklearn.preprocessing import StandardScaler
X = StandardScaler().fit_transform(X)
from matplotlib.colors import ListedColormap
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
ax = plt.subplot()
ax.set_title("Input data")
# Plot the training points
ax.scatter(X[:, 0], X[:, 1], c=y, cmap=cm_bright)
ax.set_xticks(())
ax.set_yticks(())
plt.tight_layout()
plt.show()
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(SVC(), param_grid={"C":[0.1, 1, 10], "gamma": [1, 0.1, 0.01]}, cv=4)
grid.fit(X, y)
print("The best parameters are %s with a score of %0.2f"
% (grid.best_params_, grid.best_score_))
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max,0.02),
np.arange(y_min, y_max, 0.02))
for i, C in enumerate((0.1, 1, 10)):
for j, gamma in enumerate((1, 0.1, 0.01)):
plt.subplot(3, 3, i * 3 + j + 1) # one panel per (C, gamma) combination
clf = SVC(C=C, gamma=gamma)
clf.fit(X,y)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.xlabel(" gamma=" + str(gamma) + " C=" + str(C))
plt.show()
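# A short follow-up sketch: GridSearchCV refits the winning model on the full
# data by default, so it is directly available via best_estimator_.
best_clf = grid.best_estimator_
print("Training accuracy of the best model: %0.2f" % best_clf.score(X, y))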
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's first look at what the data looks like, with a quick visualization as follows:
Step2: Now we run an SVM RBF classification on this dataset. We use a grid search over the 9 combinations formed by C=(0.1, 1, 10) and gamma=(1, 0.1, 0.01) to select the best hyperparameters, with 4-fold cross-validation. This is only an example; in practice you may need many more parameter combinations for tuning.
Step3: In other words, among the 9 hyperparameter combinations we supplied, the grid search scores C=10, gamma=0.1 highest, and this becomes our final parameter candidate.
|
7,723
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
digits.images.shape
print(digits.images[0])
import matplotlib.pyplot as plt
%matplotlib notebook
plt.matshow(digits.images[0], cmap=plt.cm.Greys)
digits.data.shape
digits.target.shape
digits.target
from sklearn.model_selection import train_test_split # cross_validation is the deprecated module name
X_train, X_test, y_train, y_test = train_test_split(digits.data,
digits.target)
# %load solutions/load_iris.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data is always a numpy array (or sparse matrix) of shape (n_samples, n_features)
Step2: Exercises
|
7,724
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname, baseline=(None, 0), proj=True)
print(evoked)
evoked_l_aud = evoked[0]
evoked_r_aud = evoked[1]
evoked_l_vis = evoked[2]
evoked_r_vis = evoked[3]
fig = evoked_l_aud.plot(exclude=(), time_unit='s')
fig.tight_layout()
evoked_l_aud.plot(spatial_colors=True, gfp=True, picks='meg')
evoked_l_aud.plot_topomap(time_unit='s')
times = np.arange(0.05, 0.151, 0.05)
evoked_r_aud.plot_topomap(times=times, ch_type='mag', time_unit='s')
evoked_r_aud.plot_topomap(times='peaks', ch_type='mag', time_unit='s')
fig, ax = plt.subplots(1, 5, figsize=(8, 2))
kwargs = dict(times=0.1, show=False, vmin=-300, vmax=300, time_unit='s')
evoked_l_aud.plot_topomap(axes=ax[0], colorbar=True, **kwargs)
evoked_r_aud.plot_topomap(axes=ax[1], colorbar=False, **kwargs)
evoked_l_vis.plot_topomap(axes=ax[2], colorbar=False, **kwargs)
evoked_r_vis.plot_topomap(axes=ax[3], colorbar=False, **kwargs)
for ax, title in zip(ax[:4], ['Aud/L', 'Aud/R', 'Vis/L', 'Vis/R']):
ax.set_title(title)
plt.show()
ts_args = dict(gfp=True, time_unit='s')
topomap_args = dict(sensors=False, time_unit='s')
evoked_r_aud.plot_joint(title='right auditory', times=[.09, .20],
ts_args=ts_args, topomap_args=topomap_args)
conditions = ["Left Auditory", "Right Auditory", "Left visual", "Right visual"]
evoked_dict = dict()
for condition in conditions:
evoked_dict[condition.replace(" ", "/")] = mne.read_evokeds(
fname, baseline=(None, 0), proj=True, condition=condition)
print(evoked_dict)
colors = dict(Left="Crimson", Right="CornFlowerBlue")
linestyles = dict(Auditory='-', visual='--')
pick = evoked_dict["Left/Auditory"].ch_names.index('MEG 1811')
mne.viz.plot_compare_evokeds(evoked_dict, picks=pick, colors=colors,
linestyles=linestyles, split_legend=True)
evoked_r_aud.plot_image(picks='meg')
title = 'MNE sample data\n(condition : %s)'
evoked_l_aud.plot_topo(title=title % evoked_l_aud.comment,
background_color='k', color=['white'])
mne.viz.plot_evoked_topo(evoked, title=title % 'Left/Right Auditory/Visual',
background_color='w')
mne.viz.plot_compare_evokeds(evoked_dict, picks="eeg", colors=colors,
linestyles=linestyles, split_legend=True,
axes="topo")
evoked_l_aud_mag = evoked_l_aud.copy().pick_types(meg='mag')
mne.viz.plot_arrowmap(evoked_l_aud_mag.data[:, 175], evoked_l_aud_mag.info)
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
maps = mne.make_field_map(evoked_l_aud, trans=trans_fname, subject='sample',
subjects_dir=subjects_dir, n_jobs=1)
# Finally, explore several points in time
field_map = evoked_l_aud.plot_field(maps, time=.1)
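# A small illustrative sketch (relies on the MNE convention that evoked.data
# is channels x times): global field power is simply the standard deviation
# across channels at each time point.
gfp = evoked_l_aud.data.std(axis=0)
plt.figure()
plt.plot(evoked_l_aud.times, gfp)
plt.xlabel('time (s)')
plt.ylabel('GFP')
plt.show()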
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we read the evoked object from a file. Check out
Step2: Notice that evoked is a list of
Step3: Let's start with a simple one. We plot event related potentials / fields
Step4: All plotting functions of MNE-python return a handle to the figure instance.
Step5: Now we will make it a bit fancier and only use MEG channels. Many of the
Step6: Notice the legend on the left. The colors would suggest that there may be two
Step7: By default the topomaps are drawn from evenly spread out points of time over
Step8: Or we can select automatically the peaks.
Step9: See ex-evoked-topomap for
Step10: Notice that we created five axes, but had only four categories. The fifth
Step11: Sometimes, you may want to compare two or more conditions at a selection of
Step12: We can also plot the activations as images. The time runs along the x-axis
Step13: Finally we plot the sensor data as a topographical view. In the simple case
Step14: For small numbers of sensors, it is also possible to create a more refined
Step15: We can also plot the activations as arrow maps on top of the topoplot.
Step16: Visualizing field lines in 3D
|
7,725
|
<ASSISTANT_TASK:>
Python Code:
from ipyparallel import Client
cl = Client()
cl.ids
cl[:].targets
%%px --noblock
# run a whole cell in non-blocking mode, by default on all engines
# note: the magic has to be at the top of the cell
import time
time.sleep(1)
time.time()
%pxresult
# get the result from the AsyncResult object
%%px --local
# run whole cell on all engines a well as in the local IPython session
import sys
sys.path.insert(0, '/home/claudius/Downloads/dadi')
sys.path[0]
%%px --local
import dadi
%whos
%px %who
%ll
% ll dadiExercises/
! cat dadiExercises/ERY.FOLDED.sfs.dadi_format
! cat dadiExercises/PAR.FOLDED.sfs.dadi_format
%%px --local
# import 1D spectrum of ery on all engines:
fs_ery = dadi.Spectrum.from_file('dadiExercises/ERY.FOLDED.sfs.dadi_format')
# import 1D spectrum of par on all engines:
fs_par = dadi.Spectrum.from_file('dadiExercises/PAR.FOLDED.sfs.dadi_format')
%px %who
%who
%%px --local
ns = fs_ery.sample_sizes # both populations have the same sample size
fs_ery.pop_ids = ['ery']
fs_par.pop_ids = ['par']
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [50, 60, 70]
%psource dadi.Demographics1D.growth
%%px --local
# create link to function that specifies a simple growth or decline model
func = dadi.Demographics1D.growth
# create extrapolating version of the function
func_ex = dadi.Numerics.make_extrap_log_func(func)
# set lower and upper bounds to nu and T
upper_bound = [100, 5]
lower_bound = [1e-3, 0]
%px %who
from itertools import product
import numpy as np
from __future__ import print_function
# create range of starting values for nu and T, evenly distributed in log space
p0_nu = np.logspace(-3, np.log10(50), base=10.0, num=10)
p0_T = np.logspace(-3, np.log10(2), base=10, num=10)
# create all possible pairwise combinations of nu and T
i = 0
num_comb = len(p0_nu)*len(p0_T)
for p0 in product(p0_nu, p0_T):
i+=1
if i>20 and i<=(num_comb-20):
continue # continue with the next iteration of the loop
print( i , p0 )
print("The total number of parameter combinations is ", i)
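# sanity check (an added assertion, not in the original run): the loop above
# should have visited exactly len(p0_nu) * len(p0_T) combinations
assert i == num_comb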
# create load balanced view of engines
lbview = cl.load_balanced_view()
%pdoc dadi.Inference.optimize_log
# create parallel function for ery spectrum
@lbview.parallel(block=False)
def run_dadi_on_ery(p_init): # for the function to be called with map, it needs to have one input variable
"""p_init: initial parameter values to run optimisation from"""
# run optimisation of parameters
popt = dadi.Inference.optimize_log(p0=p_init, data=fs_ery, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
return (p_init, popt) # returns a tuple with starting values and optimal nu and T
# run dadi in parallel on all combinations of starting values
ar = run_dadi_on_ery( product(p0_nu, p0_T) )
# wait for results to come in and print progress report
#ar.wait_interactive()
ar.progress
# run after optimisations have finished
ar.wall_time/60
# save optimisation results to file
# can be loaded back into another IPython session with: ar = pickle.load(filehandle)
# unfortunately, I cannot save the AsyncResult object with pickle (dill can save it, but emits a
# "maximum recursion depth exceeded" error when I try to load the object from file again).
# I need to save the result as a plain list
import dill
with open("exp_growth_optim_res_ERY.pickle", "w") as fhandle:
dill.dump(list(ar.get()), fhandle)
# this can now be loaded back from file into memory:
with open("exp_growth_optim_res_ERY.pickle", "r") as fhandle:
ra = dill.load(fhandle)
?pickle
ar.get()[:10]
?print # print_function from __future__
# print result of optimisations
for i, p in enumerate(ar):
if (i+1) % 2:
#print(i, p)
print("starting values: nu = %.4f, T = %.6f" % (p[0][0], p[0][1]), end=" ==> ")
else:
#print(i, p)
print("optim. values: nu = %.4f, T = %.6f" % (p[0], p[1]))
# get the optimized parameter values
popt = [p for i, p in enumerate(ar) if not (i+1) % 2]
print( popt[:10] )
len(popt)
# get the starting values
pstart = [p for i, p in enumerate(ar) if (i+1) % 2]
pstart[:10]
# get log likelihood values for parameter combinations
@lbview.parallel(block=True)
def get_ll(p):
# calculate the best-fit model SFS given paramter combination
expected_sfs = func_ex(p, ns, pts_l)
# calculate the log likelihood of the expected SFS given the observed SFS
return dadi.Inference.ll_multinom(expected_sfs, fs_ery)
# run get_ll on all sets of parameters in parallel
ll = get_ll.map(popt)
sorted(ll, reverse=True)[:20]
# get arrays of individual parameters from each optimisation run
nu = [p_set[0] for p_set in popt]
T = [p_set[1] for p_set in popt]
nu_start = [p_set[0][0] for p_set in pstart]
T_start = [p_set[0][1] for p_set in pstart]
print(len(ll), len(nu), len(T), len(nu_start), len(T_start))
import pandas as pd
# create pandas data frame
df = pd.DataFrame({
'a_nu_start' : nu_start,
'b_T_start' : T_start,
'c_nu' : nu,
'd_T' : T,
'e_log ll' : ll
})
df.head()
df.dtypes
# get statistics for each column
df.describe()
df.sort_values(by='e_log ll', ascending=False).head(50)
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['font.size'] = 20.0
plt.figure(figsize=[12.0, 10.0])
plt.scatter(np.log(nu), np.log(T), c=ll, s=75, alpha=0.5)
plt.xlabel(r'log $\nu$')
plt.ylabel("log T")
plt.title("log likelihoods for parameter combinations")
plt.grid()
plt.colorbar()
p0 = df.sort_values(by='e_log ll', ascending=False).iloc[0, 2:4]
p_init = np.array(p0)
p_init
# create direct view of engines
dview = cl[:]
# define function to run dadi with perturbed starting values
# note the high maxiter
def run_dadi_on_ery(p_init): # for the function to be called with map, it needs to have one input variable
"""p_init: initial parameter values to run optimisation from"""
# perturb starting values by up to a factor of 2 times 'fold'
p0 = dadi.Misc.perturb_params(p_init, fold=2, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of parameters
popt = dadi.Inference.optimize_log(p0=p0, data=fs_ery, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=1e9, full_output=False)
return popt # returns a tuple with starting values and optimal nu and T
%pdoc run_dadi_on_ery
# run optimisation with perturbed initial values on all 11 engines
fine_opt_res = dview.apply_async( run_dadi_on_ery, p_init )
# get running time
fine_opt_res.wall_time
# get optimised parameters
fine_opt_res.get()
# run get_ll on all sets of parameters in parallel
ll = get_ll.map(fine_opt_res.get())
ll
nu_opt = [p[0] for p in fine_opt_res.get()]
T_opt = [p[1] for p in fine_opt_res.get()]
df = pd.DataFrame({
'a_nu' : nu_opt,
'b_T' : T_opt,
'c_ll' : ll
})
df
p_init
# calculate the best-fit model SFS given the parameter combination with the highest likelihood
expected_sfs = func_ex(p_init, ns, pts_l)
expected_sfs.fold()
plt.rcParams['figure.figsize'] = [12.0, 10.0]
print(p_init)
dadi.Plotting.plot_1d_comp_multinom(expected_sfs.fold()[:19], fs_ery[:19], residual='linear')
p = np.array(df.iloc[0, 0:2])
print(p)
expected_sfs = func_ex(p, ns, pts_l) # use the parameter set selected just above, not p_init
dadi.Plotting.plot_1d_comp_multinom(expected_sfs.fold()[:19], fs_ery[:19], residual='linear')
df
p = np.array(df.iloc[6, 0:2])
print(p)
expected_sfs = func_ex(p, ns, pts_l) # use the parameter set selected just above, not p_init
dadi.Plotting.plot_1d_comp_multinom(expected_sfs.fold()[:19], fs_ery[:19], residual='linear')
# define function to run dadi with perturbed starting values
def run_dadi_on_ery(p_init, dadi_opt_func, maxiter=100): # for the function to be called with map, it needs to have one input variable
"""
p_init: initial parameter values to perturb and then run optimisation from
dadi_opt_func:
+ optimize_log (BFGS based)
+ optimize_log_fmin (Nelder-Mead)
maxiter: maximum number of iterations
"""
# perturb starting values by up to a factor of 2 times 'fold'
p0 = dadi.Misc.perturb_params(p_init, fold=2, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of parameters
popt = dadi_opt_func(p0=p0, data=fs_ery, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=maxiter, full_output=False)
return popt # returns a tuple with starting values and optimal nu and T
%pdoc run_dadi_on_ery
# run optimisation with perturbed initial values on all 11 engines
fine_opt_res = dview.apply_async( run_dadi_on_ery, p_init, dadi.Inference.optimize_log_fmin, maxiter=1e9 )
fine_opt_res.wall_time
ll = get_ll.map( fine_opt_res.get() )
nu = [p[0] for p in fine_opt_res.get()]
T = [p[1] for p in fine_opt_res.get()]
df = pd.DataFrame({
'a_nu' : nu,
'b_T' : T,
'c_ll' : ll
})
df
# create parallel function for par spectrum
@lbview.parallel(block=False)
def run_dadi_on_par(p_init): # for the function to be called with map, it needs to have one input variable
"""p_init: initial parameter values to run optimisation from"""
# run optimisation of parameters
popt = dadi.Inference.optimize_log(p0=p_init, data=fs_par, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
return (p_init, popt) # returns a tuple with starting values and optimal nu and T
# run dadi in parallel on all combinations of starting values
#ar = run_dadi_on_par( product(p0_nu, p0_T) )
# wait for results to come in and print progress report
#ar.wait_interactive()
ar.wall_time/60
%psource dadi.Inference.optimize_log
# save optimisation results to file
# can be loaded back into another IPython session with: ar = pickle.load(filehandle)
# unfortunately, I cannot save the AsyncResult object with pickle (dill can save it, but emits a
# "maximum recursion depth exceeded" error when I try to load the object from file again).
# I need to save the result as a plain list
import dill
with open("exp_growth_optim_res_PAR.pickle", "w") as fhandle:
dill.dump(list(ar.get()), fhandle)
# get the optimized parameter values
popt = [p for i, p in enumerate(ar) if not (i+1) % 2]
# get the starting values
pstart = [p for i, p in enumerate(ar) if (i+1) % 2]
# get arrays of individual parameters from each optimisation run
nu = [p_set[0] for p_set in popt]
T = [p_set[1] for p_set in popt]
nu_start = [p_set[0][0] for p_set in pstart]
T_start = [p_set[0][1] for p_set in pstart]
# run get_ll on all sets of parameters in parallel
ll = get_ll.map(popt)
# create pandas data frame
df = pd.DataFrame({
'a_nu_start' : nu_start,
'b_T_start' : T_start,
'c_nu' : nu,
'd_T' : T,
'e_log ll' : ll
})
df.head()
df.sort_values(by='e_log ll', ascending=False).head(30)
plt.rcParams['font.size'] = 20.0
plt.figure(figsize=[12.0, 10.0])
plt.scatter(np.log(nu), np.log(T), c=ll, s=75, alpha=0.5)
plt.xlabel(r'log $\nu$')
plt.ylabel("log T")
plt.title("log likelihoods for parameter combinations")
plt.grid()
plt.colorbar()
# define function to run dadi with perturbed starting values
# note the high maxiter
def run_dadi_on_par(p_init): # for the function to be called with map, it needs to have one input variable
"""p_init: initial parameter values to run optimisation from"""
# perturb starting values by up to a factor of 2 times 'fold'
p0 = dadi.Misc.perturb_params(p_init, fold=2, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of parameters
popt = dadi.Inference.optimize_log(p0=p0, data=fs_par, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=1e9, full_output=False)
return popt # returns optimal nu and T
p_init
# get parameter combination with highest likelihood as starting values
p_init = np.array( df.sort_values(by='e_log ll', ascending=False).head().iloc[0, 2:4] )
# run optimisation with perturbed initial values on all 11 engines
fine_opt_res = dview.apply_async( run_dadi_on_par, p_init )
fine_opt_res.wall_time
ll = get_ll.map(fine_opt_res.get())
nu_opt = [p[0] for p in fine_opt_res.get()]
T_opt = [p[1] for p in fine_opt_res.get()]
df = pd.DataFrame({
'a_nu' : nu_opt,
'b_T' : T_opt,
'c_ll' : ll
})
df
# define function to run dadi with perturbed starting values
def run_dadi_optimisation(p_init, dadi_opt_func, sfs, maxiter=100): # for the function to be called with map, it needs to have one input variable
"""
p_init: initial parameter values to perturb and then run optimisation from
dadi_opt_func:
+ optimize_log (BFGS based)
+ optimize_log_fmin (Nelder-Mead)
sfs: ipyparallel Reference to dadi Spectrum object (previously imported into all engines)
maxiter: maximum number of iterations
"""
# perturb starting values by up to a factor of 2 times 'fold'
p0 = dadi.Misc.perturb_params(p_init, fold=2, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of parameters
popt = dadi_opt_func(p0=p0, data=sfs, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=maxiter, full_output=False)
return popt # returns a tuple with starting values and optimal nu and T
from ipyparallel import Reference
sfs = Reference('fs_par')
# run optimisation with perturbed initial values on all 11 engines
# use Nelder-Mead algorithm this time
fine_opt_res = dview.apply_async( run_dadi_optimisation, \
p_init, dadi.Inference.optimize_log_fmin, sfs, maxiter=100 )
fine_opt_res.wall_time
ll = get_ll.map(fine_opt_res.get())
nu_opt = [p[0] for p in fine_opt_res.get()]
T_opt = [p[1] for p in fine_opt_res.get()]
df = pd.DataFrame({
'a_nu' : nu_opt,
'b_T' : T_opt,
'c_ll' : ll
})
df
# calculate the best-fit model SFS given the parameter combination with the highest likelihood
expected_sfs = func_ex(p_init, ns, pts_l)
print(p_init)
dadi.Plotting.plot_1d_comp_multinom(expected_sfs.fold()[:19], fs_par[:19], residual='linear')
# get optimally scaled (by theta) expected SFS for given data
model = dadi.Inference.optimally_scaled_sfs(expected_sfs.fold(), fs_par)
print(p_init)
plt.plot(model, 'ro-', label="model")
plt.plot(fs_par[:19], 'bo-', label="data")
plt.legend()
plt.xlabel('minor allele frequency')
plt.ylabel('SNP count')
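# A possible follow-up sketch: the theta implied by this optimal scaling can
# be recovered directly (assuming this dadi version exposes
# dadi.Inference.optimal_sfs_scaling, the helper behind optimally_scaled_sfs)
theta = dadi.Inference.optimal_sfs_scaling(expected_sfs.fold(), fs_par)
print('optimal theta:', theta)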
%time run_dadi.map(range(20))
%%time
# import 1D spectrum of ery on all engines:
fs_ery = dadi.Spectrum.from_file('dadiExercises/ERY.FOLDED.sfs.dadi_format')
fs_ery.pop_ids = ['ery']
ns = fs_ery.sample_sizes
# create link to function that specifies a simple growth or decline model
func = dadi.Demographics1D.growth
# create extrapolating version of the function
func_ex = dadi.Numerics.make_extrap_log_func(func)
# set lower and upper bounds to nu and T
upper_bound = [100, 3]
lower_bound = [1e-2, 0]
# set starting value
p0 = [1, 1] # corresponds to constant population size
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
# perturb starting values by up to a factor of 2 times fold:
p1 = dadi.Misc.perturb_params(p0, fold=1.5, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of parameters
popt = dadi.Inference.optimize_log(p0=p1, data=fs_ery, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
popt
%%time
# perturb starting values by up to a factor of 2 times fold:
p1 = dadi.Misc.perturb_params(p0, fold=1.5, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of parameters
popt = dadi.Inference.optimize_log(p0=p1, data=fs_ery, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
popt
%px %who
?run_dadi.map
lbview.block
cl.metadata
# create serial function
def serial_run_dadi(x): # for the function to be called with map, it needs to have one input variable
# perturb starting values by up to a factor of 2 times fold:
p1 = dadi.Misc.perturb_params(p0, fold=1.5, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of parameters
popt = dadi.Inference.optimize_log(p0=p1, data=fs_ery, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
return popt # returns a numpy array of optimal nu and T
# run dadi optimisation in non blocking mode
ar = lbview.map(serial_run_dadi, range(20))
ar.wait_interactive()
# the sum of the computation time of all of the tasks:
ar.serial_time
# the time between the first task submitted and last result received.
# This is the actual cost of computation, including IPython overhead
ar.wall_time
parallel_speedup = ar.serial_time / ar.wall_time
parallel_speedup
ar.get()
for i, r in enumerate(ar):
print("Optimisation number %i: %.3f (nu), %.3f (T)" % (i, r[0], r[1]))
# create serial function
def serial_run_dadi_on_data(sfs):
"""sfs: Spectrum object, provided to 'data' option of dadi.Inference.optimize_log"""
# perturb starting values by up to a factor of 2 times fold:
p1 = dadi.Misc.perturb_params(p0, fold=1.5, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of parameters
popt = dadi.Inference.optimize_log(p0=p1, data=sfs, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
return popt # returns a numpy array of optimal nu and T
from ipyparallel import Reference
# create a Reference to `fs_ery` on the engines.
data = Reference('fs_ery')
ar = lbview.map(serial_run_dadi_on_data, [data] * 20)
for i, r in enumerate(ar):
print("Optimisation number %i: %.3f (nu), %.3f (T)" % (i, r[0], r[1]))
# run parallel optimisation on 1D SFS of par
ar = lbview.map(serial_run_dadi_on_data, [data] * 20)
# print result
for i, r in enumerate(ar):
print("Optimisation number %i: %.3f (nu), %.3f (T)" % (i, r[0], r[1]))
# run parallel optimisation on 1D SFS of par
# return results as they become ready (unordered)
# 3 calls of the function per task (chunksize)
ar = lbview.map(serial_run_dadi_on_data, [data] * 50, ordered=False, chunksize=3)
# print result
for i, r in enumerate(ar):
print("Optimisation number %i: %.3f (nu), %.3f (T)" % (i, r[0], r[1]))
np.mean(ar[0])
%who
from ipyparallel import Client
cl = Client()
cl.ids
%%px --local
# run whole cell on all engines a well as in the local IPython session
import numpy as np
import sys
sys.path.insert(0, '/home/claudius/Downloads/dadi')
import dadi
%%px --local
# import 1D spectrum of ery on all engines:
fs_ery = dadi.Spectrum.from_file('ERY.unfolded.sfs').fold()
# import 1D spectrum of par on all engines:
fs_par = dadi.Spectrum.from_file('PAR.unfolded.sfs').fold()
cl[0]['fs_ery']
%%px --local
ns = fs_ery.sample_sizes # both populations have the same sample size
fs_ery.pop_ids = ['ery']
fs_par.pop_ids = ['par']
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
%%px --local
# create link to function that specifies the model
func = dadi.Demographics1D.growth
# create extrapolating version of the model function
func_ex = dadi.Numerics.make_extrap_log_func(func)
%pdoc dadi.Demographics1D.growth
# create load balanced view of engines
lbview = cl.load_balanced_view()
def run_dadi(p_init): # for the function to be called with map, it needs to have one input variable
"""p_init: initial parameter values to run optimisation from"""
if perturb:
p_init = dadi.Misc.perturb_params(p_init, fold=fold,
upper_bound=upper_bound, lower_bound=lower_bound)
# note upper_bound and lower_bound variables are expected to be in the namespace of each engine
# run optimisation of parameters
popt = dadi_opt_func(p0=p_init, data=sfs, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=verbose, maxiter=maxiter, full_output=full_output)
return p_init, popt
%%px
# set up global variables on engines required for run_dadi function call
dadi_opt_func = dadi.Inference.optimize_log # uses BFGS algorithm
sfs = fs_ery # use ERY spectrum
perturb = False
fold = 1
maxiter = 10 # run a maximum of 10 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
%%px
# set lower and upper bounds to nu and T
upper_bound = [1e4, 4]
lower_bound = [1e-4, 0]
p0_nu = np.logspace(-3, 3, base=10.0, num=15)
p0_T = np.logspace(-3, np.log10(3), base=10, num=15)
from itertools import product
# DO NOT RUN
#ar_ery = lbview.map(run_dadi, product(p0_nu, p0_T), block=False, order=True)
ar_ery.progress
ar_ery.get()[0]
%%px --local
import dill # better version of pickle
%%px --local
# set outname stub on all engines
outname = "OUT_exp_growth_model/ERY"
print(outname)
i=0
for p_init in product(p0_nu, p0_T):
i+=1
if i > 3: break
name = outname[:]
for p in p_init:
if p < 1:
name += "_%.4f" % (p)
else:
name += "_%4d" % (p)
print(name)
def run_dadi(p_init): # for the function to be called with map, it needs to have one input variable
"""p_init: initial parameter values to run optimisation from"""
if perturb:
p_init = dadi.Misc.perturb_params(p_init, fold=fold,
upper_bound=upper_bound, lower_bound=lower_bound)
# note upper_bound and lower_bound variables are expected to be in the namespace of each engine
# run optimisation of parameters
popt = dadi_opt_func(p0=p_init, data=sfs, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=verbose, maxiter=maxiter, full_output=full_output)
# pickle to file
import dill
name = outname[:] # make copy of file name stub!
for p in p_init:
name += "_%.4f" % (p)
with open(name + ".dill", "w") as fh:
dill.dump((p_init, popt), fh)
return p_init, popt
%ll
ar_ery = lbview.map(run_dadi, product(p0_nu, p0_T), block=False, order=True)
ar_ery.progress
import glob
len( glob.glob('OUT_exp_growth_model/*dill') )
ar_ery = []
for filename in glob.glob('OUT_exp_growth_model/*dill'):
ar_ery.append( dill.load(open(filename)) )
ar_ery[1]
def get_flag_count(out, NM=True):
"""
out: list of tuples, each containing p_init and popt + additional info, including warnflags
as produced by run_dadi.py
"""
from collections import defaultdict
if NM: # if ar from Nelder-Mead
i = 4 # the warnflag is reported at index position 4 in the output array
else: # ar from BFGS optimisation
i = 6
warnflag = defaultdict(int)
for res in out:
if res[1][i] == 1: # notice the change in indexing
warnflag[1] +=1
elif res[1][i] == 2:
warnflag[2] += 1
elif res[1][i] == 0:
warnflag[0] += 1
else:
warnflag[999] +=1
if NM:
print("success", warnflag[0])
print("Maximum number of function evaluations made.", warnflag[1])
print("Maximum number of iterations reached.", warnflag[2])
print("unknown flag", warnflag[999])
else:
print("success", warnflag[0])
print("Maximum number of iterations exceeded.", warnflag[1])
print("Gradient and/or function calls not changing.", warnflag[2])
print("unknown flag", warnflag[999])
get_flag_count(ar_ery, NM=False)
def flatten(array):
"""
Returns a list of the flattened elements of all inner lists (or tuples)
****RECURSIVE****
"""
import numpy
res = []
for el in array:
if isinstance(el, (list, tuple, numpy.ndarray)):
res.extend(flatten(el))
continue
res.append(el)
return list( res )
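# tiny usage example of the recursive helper above:
# flatten([(1, (2, 3)), [4, np.array([5])]]) returns [1, 2, 3, 4, 5]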
import pandas as pd
i = 6 # where to find flag, 6 for BFGS, 4 for Nelder-Mead
successful_popt_ery = [flatten(out)[:5] for out in ar_ery if out[1][i] == 0]
df = pd.DataFrame(data=successful_popt_ery, \
columns=['nu_0', 'T_0', 'nu_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True) # smaller is better
%%px
lower_bound = [1e-4, 1e-6]
cl[0]['lower_bound']
%%px
# set up global variables on engines required for run_dadi function call
dadi_opt_func = dadi.Inference.optimize_log # uses BFGS algorithm
sfs = fs_ery # use ERY spectrum
perturb = True
fold = 1
maxiter = 10 # run a maximum of 10 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "OUT_exp_growth_model/ERY_perturb"
p_init = [1, 1] # this should a neutral model, i. e. constant population size
from itertools import repeat
ar_ery = lbview.map(run_dadi, repeat(p_init, 20), block=False, order=True)
ar_ery.progress
get_flag_count(ar_ery, NM=False)
%ll OUT_exp_growth_model/ERY_perturb*
! ls -l OUT_exp_growth_model/ERY_perturb* | wc -l
%%px
# use Nelder-Mead algorithm
dadi_opt_func = dadi.Inference.optimize_log_fmin
! rm -f OUT_exp_growth_model/ERY_perturb*
! ls -l OUT_exp_growth_model/ERY_perturb* | wc -l
ar_ery = lbview.map(run_dadi, repeat(p_init, 20), block=False, order=False)
ar_ery.elapsed
get_flag_count(ar_ery, NM=True)
%%px
# increase perturbation
fold = 3
ar_ery = lbview.map(run_dadi, repeat(p_init, 20), block=False, order=False)
ar_ery.progress
ar_ery.elapsed
get_flag_count(ar_ery, NM=True)
! rm -f OUT_exp_growth_model/ERY_perturb*
%%px
maxiter = 50
ar_ery = lbview.map(run_dadi, repeat(p_init, 20), block=False, order=False)
ar_ery.progress
ar_ery.elapsed
get_flag_count(ar_ery, NM=True)
i = 4 # where to find flag, 6 for BFGS, 4 for Nelder-Mead
successful_popt_ery = [flatten(out)[:5] for out in ar_ery if out[1][i] == 0]
df = pd.DataFrame(data=successful_popt_ery, \
columns=['nu_0', 'T_0', 'nu_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True) # smaller is better
p_init = np.array( df.sort_values(by='-logL').iloc[0, 2:4] )
p_init
ar_ery = lbview.map(run_dadi, repeat(p_init, 20), block=False, order=False)
ar_ery.elapsed
get_flag_count(ar_ery, NM=True)
i = 4 # where to find flag, 6 for BFGS, 4 for Nelder-Mead
successful_popt_ery = [flatten(out)[:5] for out in ar_ery if out[1][i] == 0]
df = pd.DataFrame(data=successful_popt_ery, \
columns=['nu_0', 'T_0', 'nu_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True) # smaller is better
%%px
maxiter = 100
ar_ery = lbview.map(run_dadi, repeat(p_init, 20), block=False, order=False)
ar_ery.elapsed
get_flag_count(ar_ery, NM=True)
i = 4 # where to find flag, 6 for BFGS, 4 for Nelder-Mead
successful_popt_ery = [flatten(out)[:5] for out in ar_ery if out[1][i] == 0]
df = pd.DataFrame(data=successful_popt_ery, \
columns=['nu_0', 'T_0', 'nu_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True) # smaller is better
# safe optimisation result to file
# dill can dump but not load an AsyncResult object
dill.dump(list(ar_ery.get()), open("OUT_exp_growth_model/ERY_perturb_ar_ery.dill", "w"))
test = dill.load(open("OUT_exp_growth_model/ERY_perturb_ar_ery.dill"))
get_flag_count(test, NM=True)
%%px
# set up global variables on engines required for run_dadi function call
dadi_opt_func = dadi.Inference.optimize_log_fmin # uses Nelder-Mead algorithm
sfs = fs_par # use PAR spectrum
perturb = True
fold = 3
maxiter = 50 # run a maximum of 50 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "OUT_exp_growth_model/PAR_perturb"
p_init = [1, 1]
ar_par = lbview.map(run_dadi, repeat(p_init, 20), block=False, order=False)
ar_par.elapsed/60
ar_par.progress
ar_par = []
for filename in glob.glob('OUT_exp_growth_model/PAR_perturb*'):
ar_par.append( dill.load(open(filename)) )
get_flag_count(ar_par, NM=True)
i = 4 # where to find flag, 6 for BFGS, 4 for Nelder-Mead
successful_popt_par = [flatten(out)[:5] for out in ar_par if out[1][i] == 0]
df = pd.DataFrame(data=successful_popt_par, \
columns=['nu_0', 'T_0', 'nu_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True) # smaller is better
%%px
# set up global variables on engines required for run_dadi function call
dadi_opt_func = dadi.Inference.optimize_log # uses BFGS algorithm
sfs = fs_par # use PAR spectrum
perturb = False
fold = 3
maxiter = 10 # run a maximum of 10 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "OUT_exp_growth_model/PAR"
ar_par = lbview.map(run_dadi, product(p0_nu, p0_T), block=False, order=False)
ar_par.progress
ar_par.elapsed/60
import glob, dill
ar_par = []
for filename in glob.glob('OUT_exp_growth_model/PAR_[0123456789]*'):
try:
ar_par.append( dill.load( open(filename) ) )
except:
pass
get_flag_count(ar_par, NM=False)
%who
! rm -f OUT_exp_growth_model/PAR_[0123456789]*
from ipyparallel import Client
cl = Client()
cl.ids
%%px --local
# run whole cell on all engines a well as in the local IPython session
import numpy as np
import sys
sys.path.insert(0, '/home/claudius/Downloads/dadi')
import dadi
%%px --local
# import 1D spectrum of ery on all engines:
fs_ery = dadi.Spectrum.from_file('ERY.unfolded.sfs').fold()
# import 1D spectrum of par on all engines:
fs_par = dadi.Spectrum.from_file('PAR.unfolded.sfs').fold()
%%px --local
ns = fs_ery.sample_sizes # both populations have the same sample size
fs_ery.pop_ids = ['ery']
fs_par.pop_ids = ['par']
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
%%px --local
# create link to function that specifies the model
func = dadi.Demographics1D.growth
# create extrapolating version of the model function
func_ex = dadi.Numerics.make_extrap_log_func(func)
# create load balanced view of engines
lbview = cl.load_balanced_view()
%%px --local
import dill # better version of pickle
from itertools import product
def run_dadi(p_init): # for the function to be called with map, it needs to have one input variable
"""p_init: initial parameter values to run optimisation from"""
if perturb:
p_init = dadi.Misc.perturb_params(p_init, fold=fold,
upper_bound=upper_bound, lower_bound=lower_bound)
# note upper_bound and lower_bound variables are expected to be in the namespace of each engine
# run optimisation of parameters
popt = dadi_opt_func(p0=p_init, data=sfs, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=verbose, maxiter=maxiter, full_output=full_output)
import dill
# pickle to file
name = outname[:] # make copy of file name stub!
for p in p_init:
name += "_%.4f" % (p)
with open(name + ".dill", "w") as fh:
dill.dump((p_init, popt), fh)
return p_init, popt
%%px
# set lower and upper bounds to nu and T
upper_bound = [1e4, 4]
lower_bound = [1e-4, 1e-6]
p0_nu = np.logspace(-2, 2, base=10.0, num=10)
p0_T = np.logspace(-2, np.log10(2), base=10, num=10)
%%px
# set up global variables on engines required for run_dadi function call
dadi_opt_func = dadi.Inference.optimize_log # uses BFGS algorithm
sfs = fs_par # use PAR spectrum
perturb = False
fold = 3
maxiter = 10 # run a maximum of 10 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "OUT_exp_growth_model/PAR"
ar_par = lbview.map(run_dadi, product(p0_nu, p0_T), block=False, order=False)
def get_flag_count(out, NM=True):
"""
out: list of tuples, each containing p_init and popt + additional info, including warnflags
as produced by run_dadi.py
"""
from collections import defaultdict
if NM: # if ar from Nelder-Mead
i = 4 # the warnflag is reported at index position 4 in the output array
else: # ar from BFGS optimisation
i = 6
warnflag = defaultdict(int)
for res in out:
if res[1][i] == 1: # notice the change in indexing
warnflag[1] +=1
elif res[1][i] == 2:
warnflag[2] += 1
elif res[1][i] == 0:
warnflag[0] += 1
else:
warnflag[999] +=1
if NM:
print("success", warnflag[0])
print("Maximum number of function evaluations made.", warnflag[1])
print("Maximum number of iterations reached.", warnflag[2])
print("unknown flag", warnflag[999])
else:
print("success", warnflag[0])
print("Maximum number of iterations exceeded.", warnflag[1])
print("Gradient and/or function calls not changing.", warnflag[2])
print("unknown flag", warnflag[999])
ar_par = []
for filename in glob.glob('OUT_exp_growth_model/PAR_[0123456789]*'):
try:
ar_par.append( dill.load( open(filename) ) )
except:
pass
get_flag_count(ar_par, NM=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dadi should now be imported on all remote engines as well as locally.
Step2: The remote namespace can be checked with
Step3: As can be seen, dadi is imported on all engines.
Step4: Exponential growth model
Step5: This model assumes that exponential growth (or decline) started some time $T$ in the past and the current effective population size is a multiple $\nu$ of the ancient populations size, i. e. before exponential growth began. So this model just estimates two parameters. If $\nu$ is estimated to be 1, this indicates that the population size hasn't changed (although see Myers2008). If it is below one, this indicates exponential population decline (how realistic is that?). If it is above 1, this indicates exponential population growth.
Step6: create combinations of starting values for optimisation
Step7: These starting values cover a very wide range of possible parameter combinations. This should be better than merely randomly perturbing a single parameter combination a certain number of times (and running the optimisation from those perturbed starting values).
Step9: dadi also offers other optimisation algorithms
Step10: It took 7 minutes to run these 100 optimisations.
Step11: The tuples returned from my parallel function have been flattened into one array. That's not nice.
Step12: Clearly, different starting parameter values lead to different optimised parameter values. There is a great variance in the returned optimised parameters. Maybe the maximum number of iterations (maxiter) is not enough?
Step13: I would like to create a data frame with pandas with these arrays and then sort the table by ll.
Step14: Given the range of starting values that result in almost exactly the same optimal parameter values, I am fairly convinced that the optimal parameters for this model are close to $\nu = 0.136$ and $T = 0.0075$. Given the model, the optimal parameters indicate that the ancestral populations to the erythropus sample started to decline exponentially (how realistic is that?) $0.0075 \times 2N_a$ generations ago to a contemporary population size of about 14% of the ancient effective population size ($N_a$).
Step16: The parameter combination $\nu = 0.001049$ and $T = 0.000023$ has only slightly smaller log likelihood. This parameter combination corresponds to a much more recent and much more drastic population reduction. I think a likelihood ratio test would be required to see whether the data can distinguish between these two scenarios.
Step17: This toke only 6 seconds to compute on 11 engines.
Step18: There are still three outlier results. Maybe I need to use a different optimisation function.
Step19: Compared to the standard neutral model prediction, it seems that only the first frequency class has a markedly reduced residual with this model (about 20 standard deviations).
Step20: I cannot make out a clear difference between this and the previous parameter set.
Step22: Again, even this very different parameter set seems to produce a very similar SFS to the ones produced by the two more likely parameter sets.
Step23: Now run dadi optimisation with Nelder-Mead algorithm.
Step25: Half of the optimisations result in a combination of parameters that do not have the highest likelihood. So it does seem as though Nelder-Mead is more robust than BFGS (but note the random perturbation of starting values).
Step26: This definitely runs for too long! run_dadi_on_par spends most of its time on the last three tasks. A maxiter of 100, therefore, does not seem to be limiting the search. Set a lower maxiter next time, maybe 50.
Step28: The parameter combination with the highest likelihood is not found several times from different starting values. This does not give me much confidence in the maximum likelihood parameter set.
Step30: None of these parameter combinations matches the most likely one determined above! I can therefore not be very confident that I have found the most likely parameter combination.
Step31: Again, none of these parameter combinations is close to the most likely ones determined above.
Step32: These logarithmic plots are not very helpful.
Step33: So, if I could believe the optimal parameter combination found here to be the global maximum, it would imply that the ancestral populations of the parallelus sample have undergone an exponential decline to about 1/1000th of the ancient population size, $N_{ref}$ (i. e. very strong bottleneck) and this happend in the very recent past, i. e. only $2.3 \times 10^{-5} \times 2N_{ref}$ generations ago.
Step34: Most of the time this just runs on a single core! I don't know why. Note the difference between the wall time and the total time.
Step35: It really shouldn't take 2 minutes to run this 20 times with different starting values.
Step36: Note that p1 and popt are not in the namespace of the remote engines. They have apparently been cleared automatically.
Step38: Jobs submitted to the load balanced view run asynchronously in non-blocking mode.
Step39: I think it should be a good idea to use load balanced views of the engines, since I think some optimisations take longer than others.
Step40: Look at max_iter from dadi.Inference.optimize_log.
Step42: Use unfolded ANGSD spectra and fold with dadi
Step43: Ery
Step44: This lets the optimisations run with the faster BFGS algorithm and stop after a maximum of 10 iterations. This is intended for a broad sweep across the parameter space. Also note that this sets the optimisations to fit the ery spectrum.
Step45: I needed to kill a single very long lasting optimisation job from the terminal (because it is not possible to do that with ipyparallel). Unfortunately that also means that the results from all 224 completed (or terminated) optimisation runs are lost as well. This is the major shortcoming of ipyparallel at the moment.
Step47: Now, define a new run_dadi function that pickles output to file
Step48: I killed the last optimisation.
Step50: Get flag count from optimisations
Step52: There were 39 successful optimisations.
Step53: These results are completely inconclusive. I think it would make sense to limit the search space for $T$ to something well above 0 and start a few optimisations from parameter values close to a neutral model.
Step54: None of these optimisations was able to converge on optimal parameter values.
Step55: The perturbation has not been very effective.
Step56: Why are there only 19 output files? There should be 20! Two output filenames happened to be identical.
Step57: OK. Three successful optimisations.
Step58: OK, that looks promising. Two quite different parameter combinations have lead to the same optimal parameter combination.
Step59: This looks like convergence.
Step60: This is still extremely fast!
Step61: Par
Step62: I needed to kill the last two processes in the terminal because they ran for more than 15 minutes. I cannot use the AsyncResult object because of that and instead need to load the pickled output from file.
Step63: Only 1 successful optimisation.
Step64: That is hitting the upper limit for the time parameter.
Step65: Unfortunately, this is too slow. I need to abort this by stopping the cluster of engines.
Step67: After restarting the cluster, it is important to be EXTREMELY careful to load all required variables and modules on all engines again.
Step69: I am trying to reduce the initial parameter sweep.
|
7,726
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(filename='images/mdgxs.png', width=350)
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import openmc
import openmc.mgxs as mgxs
# Instantiate some Nuclides
h1 = openmc.Nuclide('H1')
o16 = openmc.Nuclide('O16')
u235 = openmc.Nuclide('U235')
u238 = openmc.Nuclide('U238')
pu239 = openmc.Nuclide('Pu239')
zr90 = openmc.Nuclide('Zr90')
# Instantiate a Material and register the Nuclides
inf_medium = openmc.Material(name='moderator')
inf_medium.set_density('g/cc', 5.)
inf_medium.add_nuclide(h1, 0.03)
inf_medium.add_nuclide(o16, 0.015)
inf_medium.add_nuclide(u235 , 0.0001)
inf_medium.add_nuclide(u238 , 0.007)
inf_medium.add_nuclide(pu239, 0.00003)
inf_medium.add_nuclide(zr90, 0.002)
# Instantiate a Materials collection and export to XML
materials_file = openmc.Materials([inf_medium])
materials_file.default_xs = '71c'
materials_file.export_to_xml()
# Instantiate boundary Planes
min_x = openmc.XPlane(boundary_type='reflective', x0=-0.63)
max_x = openmc.XPlane(boundary_type='reflective', x0=0.63)
min_y = openmc.YPlane(boundary_type='reflective', y0=-0.63)
max_y = openmc.YPlane(boundary_type='reflective', y0=0.63)
# Instantiate a Cell
cell = openmc.Cell(cell_id=1, name='cell')
# Register bounding Surfaces with the Cell
cell.region = +min_x & -max_x & +min_y & -max_y
# Fill the Cell with the Material
cell.fill = inf_medium
# Instantiate Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(cell)
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry()
openmc_geometry.root_universe = root_universe
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 5000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
# Instantiate a 100-group EnergyGroups object
energy_groups = mgxs.EnergyGroups()
energy_groups.group_edges = np.logspace(-3, 7.3, 101)
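# Note: 101 edges define 100 energy groups from 1e-3 eV up to 10**7.3 (~2e7) eV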
# Instantiate a 1-group EnergyGroups object
one_group = mgxs.EnergyGroups()
one_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])
delayed_groups = list(range(1,7))
# Instantiate a few different multi-group cross sections (prompt, delayed, beta, decay rate)
chi_prompt = mgxs.Chi(domain=cell, groups=energy_groups, by_nuclide=True, prompt=True)
prompt_nu_fission = mgxs.FissionXS(domain=cell, groups=energy_groups, by_nuclide=True, nu=True, prompt=True)
chi_delayed = mgxs.ChiDelayed(domain=cell, energy_groups=energy_groups, by_nuclide=True)
delayed_nu_fission = mgxs.DelayedNuFissionXS(domain=cell, energy_groups=energy_groups, delayed_groups=delayed_groups, by_nuclide=True)
beta = mgxs.Beta(domain=cell, energy_groups=energy_groups, delayed_groups=delayed_groups, by_nuclide=True)
decay_rate = mgxs.DecayRate(domain=cell, energy_groups=one_group, delayed_groups=delayed_groups, by_nuclide=True)
chi_prompt.nuclides = ['U235', 'Pu239']
prompt_nu_fission.nuclides = ['U235', 'Pu239']
chi_delayed.nuclides = ['U235', 'Pu239']
delayed_nu_fission.nuclides = ['U235', 'Pu239']
beta.nuclides = ['U235', 'Pu239']
decay_rate.nuclides = ['U235', 'Pu239']
decay_rate.tallies
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Add chi-prompt tallies to the tallies file
tallies_file += chi_prompt.tallies.values()
# Add prompt-nu-fission tallies to the tallies file
tallies_file += prompt_nu_fission.tallies.values()
# Add chi-delayed tallies to the tallies file
tallies_file += chi_delayed.tallies.values()
# Add delayed-nu-fission tallies to the tallies file
tallies_file += delayed_nu_fission.tallies.values()
# Add beta tallies to the tallies file
tallies_file += beta.tallies.values()
# Add decay rate tallies to the tallies file
tallies_file += decay_rate.tallies.values()
# Export to "tallies.xml"
tallies_file.export_to_xml()
# Run OpenMC
openmc.run()
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
# Load the tallies from the statepoint into each MGXS object
chi_prompt.load_from_statepoint(sp)
prompt_nu_fission.load_from_statepoint(sp)
chi_delayed.load_from_statepoint(sp)
delayed_nu_fission.load_from_statepoint(sp)
beta.load_from_statepoint(sp)
decay_rate.load_from_statepoint(sp)
delayed_nu_fission.get_condensed_xs(one_group).get_xs()
df = delayed_nu_fission.get_pandas_dataframe()
df.head(10)
df = decay_rate.get_pandas_dataframe()
df.head(12)
beta.export_xs_data(filename='beta', format='excel')
chi_prompt.build_hdf5_store(filename='mdgxs', append=True)
chi_delayed.build_hdf5_store(filename='mdgxs', append=True)
# Get the decay rate data
dr_tally = decay_rate.xs_tally
dr_u235 = dr_tally.get_values(nuclides=['U235']).flatten()
dr_pu239 = dr_tally.get_values(nuclides=['Pu239']).flatten()
# Compute the exponential decay of the precursors
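# Each precursor group decays as N(t)/N(0) = exp(-lambda * t); the legend
# below converts decay rates to half-lives via t_1/2 = ln(2) / lambda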
time = np.logspace(-3,3)
dr_u235_points = np.exp(-np.outer(dr_u235, time))
dr_pu239_points = np.exp(-np.outer(dr_pu239, time))
# Create a plot of the fraction of the precursors remaining as a f(time)
colors = ['b', 'g', 'r', 'c', 'm', 'k']
legend = []
fig = plt.figure(figsize=(8,6))
for g,c in enumerate(colors):
plt.semilogx(time, dr_u235_points [g,:], color=c, linestyle='--', linewidth=3)
plt.semilogx(time, dr_pu239_points[g,:], color=c, linestyle=':' , linewidth=3)
legend.append('U-235 $t_{1/2}$ = ' + '{0:1.2f} seconds'.format(np.log(2) / dr_u235[g]))
legend.append('Pu-239 $t_{1/2}$ = ' + '{0:1.2f} seconds'.format(np.log(2) / dr_pu239[g]))
plt.title('Delayed Neutron Precursor Decay Rates')
plt.xlabel('Time (s)')
plt.ylabel('Fraction Remaining')
plt.legend(legend, loc=1, bbox_to_anchor=(1.55, 0.95))
# Use tally arithmetic to compute the precursor concentrations
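# Reading of the arithmetic (not an OpenMC-documented identity): at steady
# state dC/dt = S - lambda*C = 0, so C = S/lambda, with beta weighting the
# delayed source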
precursor_conc = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \
delayed_nu_fission.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / \
decay_rate.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True)
# Get the Pandas DataFrames for inspection
precursor_conc.get_pandas_dataframe()
energy_filter = [f for f in beta.xs_tally.filters if type(f) is openmc.EnergyFilter]
beta_integrated = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True)
beta_u235 = beta_integrated.get_values(nuclides=['U235'])
beta_pu239 = beta_integrated.get_values(nuclides=['Pu239'])
# Reshape the betas
beta_u235.shape = (beta_u235.shape[0])
beta_pu239.shape = (beta_pu239.shape[0])
df = beta_integrated.summation(filter_type=openmc.DelayedGroupFilter, remove_filter=True).get_pandas_dataframe()
print('Beta (U-235) : {:.6f} +/- {:.6f}'.format(df[df['nuclide'] == 'U235']['mean'][0], df[df['nuclide'] == 'U235']['std. dev.'][0]))
print('Beta (Pu-239): {:.6f} +/- {:.6f}'.format(df[df['nuclide'] == 'Pu239']['mean'][1], df[df['nuclide'] == 'Pu239']['std. dev.'][1]))
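# Prepend the first value so drawstyle='steps' also renders delayed group 1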
beta_u235 = np.append(beta_u235[0], beta_u235)
beta_pu239 = np.append(beta_pu239[0], beta_pu239)
# Create a step plot for the MGXS
plt.plot(np.arange(0.5, 7.5, 1), beta_u235, drawstyle='steps', color='b', linewidth=3)
plt.plot(np.arange(0.5, 7.5, 1), beta_pu239, drawstyle='steps', color='g', linewidth=3)
plt.title('Delayed Neutron Fraction (beta)')
plt.xlabel('Delayed Group')
plt.ylabel('Beta (fraction of total neutrons)')
plt.legend(['U-235', 'Pu-239'])
plt.xlim([0,7])
chi_d_u235 = np.squeeze(chi_delayed.get_xs(nuclides=['U235'], order_groups='decreasing'))
chi_d_pu239 = np.squeeze(chi_delayed.get_xs(nuclides=['Pu239'], order_groups='decreasing'))
chi_p_u235 = np.squeeze(chi_prompt.get_xs(nuclides=['U235'], order_groups='decreasing'))
chi_p_pu239 = np.squeeze(chi_prompt.get_xs(nuclides=['Pu239'], order_groups='decreasing'))
chi_d_u235 = np.append(chi_d_u235 , chi_d_u235[0])
chi_d_pu239 = np.append(chi_d_pu239, chi_d_pu239[0])
chi_p_u235 = np.append(chi_p_u235 , chi_p_u235[0])
chi_p_pu239 = np.append(chi_p_pu239, chi_p_pu239[0])
# Create a step plot for the MGXS
plt.semilogx(energy_groups.group_edges, chi_d_u235 , drawstyle='steps', color='b', linestyle='--', linewidth=3)
plt.semilogx(energy_groups.group_edges, chi_d_pu239, drawstyle='steps', color='g', linestyle='--', linewidth=3)
plt.semilogx(energy_groups.group_edges, chi_p_u235 , drawstyle='steps', color='b', linestyle=':', linewidth=3)
plt.semilogx(energy_groups.group_edges, chi_p_pu239, drawstyle='steps', color='g', linestyle=':', linewidth=3)
plt.title('Energy Spectrum for Fission Neutrons')
plt.xlabel('Energy (eV)')
plt.ylabel('Fraction of emitted neutrons')
plt.legend(['U-235 delayed', 'Pu-239 delayed', 'U-235 prompt', 'Pu-239 prompt'],loc=2)
plt.xlim(1.0e3, 20.0e6)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations and different delayed group models (e.g. 6, 7, or 8 delayed group models) for fine-mesh heterogeneous deterministic neutron transport applications.
Step2: First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
Step3: With the nuclides we defined, we will now create a material for the homogeneous medium.
Step4: With our material, we can now create a Materials object that can be exported to an actual XML file.
Step5: Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. The first step is to create the outer bounding surfaces of the problem.
Step6: With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.
Step7: OpenMC requires that there is a "root" universe. Let us create a root universe and add our square cell to it.
Step8: We now must create a geometry that is assigned a root universe and export it to XML.
Step9: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches, each with 5000 particles.
Step10: Now we are ready to generate multi-group cross sections! First, let's define a 100-energy-group structure and 1-energy-group structure using the built-in EnergyGroups class. We will also create a 6-delayed-group list.
Step11: We can now use the EnergyGroups object and delayed group list, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class
Step12: Each multi-group cross section object stores its tallies in a Python dictionary called tallies. We can inspect the tallies in the dictionary for our Decay Rate object as follows.
Step13: The Beta object includes tracklength tallies for the 'nu-fission' and 'delayed-nu-fission' scores in the 100-energy-group and 6-delayed-group structure in cell 1. Now that each MGXS and MDGXS object contains the tallies that it needs, we must add these tallies to a Tallies object to generate the "tallies.xml" input file for OpenMC.
Step14: Now we have a complete set of inputs, so we can go ahead and run our simulation.
Step15: Tally Data Processing
Step16: In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. By default, a Summary object is automatically linked when a StatePoint is loaded. This is necessary for the openmc.mgxs module to properly process the tally data.
Step17: Voila! Our multi-group cross sections are now ready to rock 'n roll!
Step18: Since the openmc.mgxs module uses tally arithmetic under-the-hood, the cross section is stored as a "derived" Tally object. This means that it can be queried and manipulated using all of the same methods supported for the Tally class in the OpenMC Python API. For example, we can construct a Pandas DataFrame of the multi-group cross section data.
Step19: Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.
Step20: The following code snippet shows how to export the chi-prompt and chi-delayed MGXS to the same HDF5 binary data store.
Step21: Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations
Step22: Now let's compute the initial concentration of the delayed neutron precursors
Step23: We can plot the delayed neutron fractions for each nuclide.
Step24: We can also plot the energy spectrum for fission emission of prompt and delayed neutrons.
|
7,727
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
plt.style.use("seaborn-pastel")
%%capture
%pip install -qq --upgrade git+https://github.com/lawrennd/ods
%pip install -qq --upgrade git+https://github.com/SheffieldML/GPy.git
try:
import GPy, pods
except ModuleNotFoundError:
%pip install -qq GPy pods
import GPy, pods
import numpy as np
np.random.seed(42)
subject = "16"
motion = ["02", "21"]
data = pods.datasets.cmu_mocap(subject, motion)
data["Y"].shape
print(data["citation"])
print(data["info"])
print(data["details"])
# Make figure move in place.
data["Y"][:, 0:3] = 0.0
Y = data["Y"]
model = GPy.models.GPLVM(Y, 2, init="PCA", normalizer=True)
model.optimize(optimizer="lbfgs", messages=True, max_f_eval=1e4, max_iters=1e4)
%matplotlib inline
def plot_skeleton(ax, Y_vec):
Z = data["skel"].to_xyz(Y_vec)
ax.scatter(Z[:, 0], Z[:, 2], Z[:, 1], marker=".", color="b")
connect = data["skel"].connection_matrix() # Get the connectivity matrix.
I, J = np.nonzero(connect)
xyz = np.zeros((len(I) * 3, 3))
idx = 0
for i, j in zip(I, J):
xyz[idx] = Z[i, :]
xyz[idx + 1] = Z[j, :]
xyz[idx + 2] = [np.nan] * 3
idx += 3
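# The NaN row after each bone breaks the polyline, so the single plot call
# below draws all skeleton segments as one line object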
line_handle = ax.plot(xyz[:, 0], xyz[:, 2], xyz[:, 1], "-", color="b")
ax.set_xlim(-15, 15)
ax.set_ylim(-15, 15)
ax.set_zlim(-15, 15)
ax.set_yticks([])
ax.set_xticks([])
ax.set_zticks([])
plt.tight_layout()
# fig = plt.figure(figsize=(7,2.5))
fig = plt.figure(figsize=(14, 5))
ax_a = fig.add_subplot(131)
ax_a.set_title("Latent Space")
n = len(Y)
idx_a = 51 # jumping
idx_b = 180 # standing
other_indices = np.arange(n)[~np.isin(range(n), [idx_a, idx_b])]
jump = np.arange(n)[data["lbls"][:, 0] == 1]
walk = np.arange(n)[data["lbls"][:, 0] == 0]
jump = jump[jump != idx_a]
walk = walk[walk != idx_b]
ax_a.scatter(model.X[jump, 0], model.X[jump, 1], label="jumping motion")
ax_a.scatter(model.X[walk, 0], model.X[walk, 1], label="walking motion")
ax_a.scatter(model.X[idx_a, 0], model.X[idx_a, 1], label="Pose A", marker="^", s=150, c="red")
ax_a.scatter(model.X[idx_b, 0], model.X[idx_b, 1], label="Pose B", marker="+", s=150, c="red")
ax_a.legend(loc="lower left") # , fontsize='x-small')
plt.tight_layout()
ax_b = fig.add_subplot(132, projection="3d")
plot_skeleton(ax_b, Y[idx_a, :])
ax_b.set_title("Pose A")
ax_c = fig.add_subplot(133, projection="3d")
plot_skeleton(ax_c, Y[idx_b, :])
ax_c.set_title("Pose B")
# print(fig)
plt.savefig("gplvm-mocap.pdf")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CMU Mocap Database
Step2: The data dictionary contains the keys 'Y' and 'skel', which represent the motion-capture channel matrix and the skeleton structure, respectively.
Step3: And extra information about the data is included, as standard, under the 'citation', 'info' and 'details' keys.
Step4: Fit GP-LVM
Step5: We can also remove the mean of the data.
Step6: Now we create the GP-LVM model.
Step7: Now we optimize the model.
Step8: Plotting the skeleton
|
7,728
|
<ASSISTANT_TASK:>
Python Code:
from pipeline import meso
source_scan = dict(animal_id=25133, session=3, scan_idx=11)
target_scan = dict(animal_id=25133, session=4, scan_idx=13)
pairing = (meso.ScanInfo & source_scan).proj(src_session='session', src_scan_idx='scan_idx') * (meso.ScanInfo & target_scan).proj()
meso.ScansToMatch.insert(pairing)
# 150 units from the source scan
unit_ids = [ 46, 75, 117, 272, 342, 381, 395, 408, 414, 463, 537,
568, 581, 633, 670, 800, 801, 842, 873, 1042, 1078, 1085,
1175, 1193, 1246, 1420, 1440, 1443, 1451, 1464, 1719, 1755, 1823,
1863, 2107, 2128, 2161, 2199, 2231, 2371, 2438, 2522, 2572, 2585,
2644, 2764, 2809, 2810, 2873, 2924, 2973, 2989, 3028, 3035, 3083,
3107, 3129, 3131, 3139, 3189, 3192, 3214, 3318, 3513, 3551, 3613,
3618, 3671, 3680, 3742, 3810, 3945, 3973, 4065, 4069, 4085, 4123,
4131, 4134, 4184, 4221, 4353, 4369, 4426, 4490, 4512, 4532, 4865,
4971, 5140, 5171, 5227, 5276, 5694, 5746, 5810, 5817, 5856, 5910,
6013, 6061, 6078, 6108, 6216, 6254, 6273, 6292, 6301, 6368, 6486,
6497, 6558, 6569, 6618, 6620, 6825, 6887, 6911, 6984, 7091, 7199,
7205, 7242, 7331, 7372, 7415, 7429, 7433, 7659, 7715, 7927, 7946,
8085, 8096, 8181, 8317, 8391, 8392, 8395, 8396, 8415, 8472, 8478,
8572, 8580, 8610, 8663, 8681, 8683, 8700]
# create list of entries
src_units = [dict(source_scan, unit_id=unit) for unit in unit_ids]
meso.SourceUnitsToMatch.insert(meso.ScanSet.Unit.proj() & src_units)
meso.ProximityCellMatch.populate(display_progress=True)
meso.BestProximityCellMatch().populate()
meso.BestProximityCellMatch()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following demonstrates how to find matches between a source scan and a target scan.
Step2: Designate the pairing as what needs to be matched
Step3: Now also specify which units from the source should be matched
Step4: Now we have specified scans to match and source scan units, we can populate ProximityCellMatch
Step5: Now find the best proximity match
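As an optional follow-up (a sketch assuming DataJoint's standard fetch interface, not part of the original walkthrough):
# Pull the best matches for the target scan into a pandas DataFrame
matches = (meso.BestProximityCellMatch & target_scan).fetch(format='frame')
print(matches)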
|
7,729
|
<ASSISTANT_TASK:>
Python Code:
%%capture --no-stderr
!pip3 install kfp --upgrade
import kfp.components as comp
dataflow_template_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_template/component.yaml')
help(dataflow_template_op)
!gsutil cat gs://dataflow-samples/shakespeare/kinglear.txt
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash
# Optional Parameters
EXPERIMENT_NAME = 'Dataflow - Launch Template'
OUTPUT_PATH = '{}/out/wc'.format(GCS_WORKING_DIR)
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataflow launch template pipeline',
description='Dataflow launch template pipeline'
)
def pipeline(
project_id = PROJECT_ID,
gcs_path = 'gs://dataflow-templates/latest/Word_Count',
launch_parameters = json.dumps({
'parameters': {
'inputFile': 'gs://dataflow-samples/shakespeare/kinglear.txt',
'output': OUTPUT_PATH
}
}),
location = '',
validate_only = 'False',
staging_dir = GCS_WORKING_DIR,
wait_interval = 30):
dataflow_template_op(
project_id = project_id,
gcs_path = gcs_path,
launch_parameters = launch_parameters,
location = location,
validate_only = validate_only,
staging_dir = staging_dir,
wait_interval = wait_interval)
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
!gsutil cat $OUTPUT_PATH*
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the component using KFP SDK
Step2: Sample
Step3: Set sample parameters
Step4: Example pipeline that uses the component
Step5: Compile the pipeline
Step6: Submit the pipeline for execution
Step7: Inspect the output
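As an optional follow-up (a sketch; wait_for_run_completion is the KFP SDK call, and the 1800-second timeout is an assumption):
# Block until the launched run finishes before inspecting OUTPUT_PATH
result = client.wait_for_run_completion(run_result.id, timeout=1800)
print(result.run.status)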
|
7,730
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from threeML import *
# we will need XSPEC models for extinction
from astromodels.xspec import *
# The filter library takes a while to load so you must import it explicitly..
from threeML.plugins.photometry.filter_library import threeML_filter_library
import speclite.filters as spec_filters
my_backyard_telescope_filter = spec_filters.load_filter('bessell-r')
# NOTE:
my_backyard_telescope_filter.name
my_backyard_telescope = PhotometryLike('backyard_astronomy',
filters=my_backyard_telescope_filter, # the filter
R=(20,.1) ) # the magnitude and error
my_backyard_telescope.display_filters()
threeML_filter_library.SLOAN
spec_filters.plot_filters(threeML_filter_library.SLOAN.SDSS)
spec_filters.plot_filters(threeML_filter_library.Herschel.SPIRE)
spec_filters.plot_filters(threeML_filter_library.Keck.NIRC2)
fangs_g = spec_filters.FilterResponse(
wavelength = [3800, 4500, 5200] * u.Angstrom,
response = [0, 0.5, 0], meta=dict(group_name='fangs', band_name='g'))
fangs_r = spec_filters.FilterResponse(
wavelength = [4800, 5500, 6200] * u.Angstrom,
response = [0, 0.5, 0], meta=dict(group_name='fangs', band_name='r'))
fangs = spec_filters.load_filters('fangs-g', 'fangs-r')
fangslike = PhotometryLike('fangs',filters=fangs,g=(20,.1),r=(18,.1))
fangslike.display_filters()
grond = PhotometryLike('GROND',
filters=threeML_filter_library.ESO.GROND,
#g=(21.5.93,.23), # we exclude these filters
#r=(22.,0.12),
i=(21.8,.01),
z=(21.2,.01),
J=(19.6,.01),
H=(18.6,.01),
K=(18.,.01))
grond.display_filters()
spec = Powerlaw() * XS_zdust() * XS_zdust()
data_list = DataList(grond)
model = Model(PointSource('grb',0,0,spectral_shape=spec))
spec.piv_1 = 1E-2
spec.index_1.fix=False
spec.redshift_2 = 0.347
spec.redshift_2.fix = True
spec.e_bmv_2 = 5./2.93
spec.e_bmv_2.fix = True
spec.rv_2 = 2.93
spec.rv_2.fix = True
spec.method_2 = 3
spec.method_2.fix=True
spec.e_bmv_3 = .002/3.08
spec.e_bmv_3.fix = True
spec.rv_3= 3.08
spec.rv_3.fix=True
spec.redshift_3 = 0
spec.redshift_3.fix=True
spec.method_3 = 1
spec.method_3.fix=True
jl = JointLikelihood(model,data_list)
_ = jl.fit()
_=display_photometry_model_magnitudes(jl)
_ = plot_point_source_spectra(jl.results,flux_unit='erg/(cm2 s keV)',
xscale='linear',
energy_unit='nm',ene_min=1E3, ene_max=1E5, num_ene=200 )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step2: NOTE
Step3: 3ML filter library
Step4: Build your own filters
Step5: GROND Example
Step6: Model specification
Step7: We compute $m_{\rm AB}$ from astromodels photon fluxes. This is done by convolving the differential flux over the filter response (a schematic expression follows this list).
Step8: We can now look at the fit in magnitude space or model space as with any plugin.
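Schematically, and hedged (this is the standard AB-magnitude convention; 3ML's internal implementation may differ in detail), the convolution over a filter response $R(\nu)$ reads
$m_{\rm AB} = -2.5\,\log_{10}\!\left(\int F_{\nu}\,R(\nu)\,\frac{d\nu}{\nu} \Big/ \int R(\nu)\,\frac{d\nu}{\nu}\right) - 48.6$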
|
7,731
|
<ASSISTANT_TASK:>
Python Code:
import nltk
from nltk.book import text7 as text
letters = ' '.join(text)
letters = [letter.lower() for letter in letters] # Get the lowercase
symbols = set(letters)
Nletters = len(letters)
Nsymbols = len(symbols)
symbols
print('Number of letters', Nletters)
print('Nsymbols', Nsymbols)
freq_letters = nltk.FreqDist(letters) # Get the most frequent letters
most_common_letter = freq_letters.most_common(1)[0][0]
print('most common letter', most_common_letter)
freq_letters.plot()
bigrams = nltk.bigrams(letters)
freq_bigrams = nltk.FreqDist(bigrams)
master_dictionary = {}
next_letters = {}
for symbol in symbols:
master_dictionary[symbol] = [(key[1], value) for key,value in freq_bigrams.items() if key[0]==symbol]
for symbol in symbols:
aux = max(master_dictionary[symbol], key=lambda x:x[1]) # Maximize over the second element of the tuple
next_letters[symbol] = aux[0]
prediction = 0
for letter in letters:
if letter == most_common_letter:
prediction += 1 # Gets the prediction right
prediction /= Nletters
print('Predictions using the most common letter', prediction * 100.0)
# Now we make use of the temporal information
prediction_temp = 0
last_letter = None
for index, letter in enumerate(letters):
if last_letter: # If last_letter is not None
if next_letters[last_letter] == letter:
prediction_temp += 1
# Save the last letter
last_letter = letter
prediction_temp /= Nletters
print('Prediction using bigram information', prediction_temp * 100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we extract the information from the text.
Step2: We get the frequency for all the letters and the most common one, which turns out to be a space. Later we will analyze how the result changes when we remove spaces from the whole analysis.
Step3: We will get the bigram frequencies as well.
Step4: Now we want to extract the next most probable letter for every letter. From the bigram frequencies we first build a dictionary (master_dictionary) holding, for each letter (each symbol here), all the next letters and their frequencies. In particular, we use a dictionary where the key is the symbol and the value is a list of (next letter, frequency) tuples. With this in hand we take, for every list, the tuple with the maximum frequency using a lambda function and build the next-letter dictionary from it. A one-pass alternative is sketched after this list.
Step5: Predictions
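One-pass alternative mentioned in Step4 (a sketch; assumes letters from the code above):
# ConditionalFreqDist accumulates successor counts for every symbol in a
# single pass, instead of rescanning freq_bigrams once per symbol
cfd = nltk.ConditionalFreqDist(nltk.bigrams(letters))
next_letters_fast = {sym: cfd[sym].max() for sym in cfd}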
|
7,732
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm4-8', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
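# Example of filling a later property (hypothetical values, using the model
# name from the DOC constructor above):
# DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# DOC.set_value('INM-CM4-8')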
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Fluorinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Fluorinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Representation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
7,733
|
<ASSISTANT_TASK:>
Python Code:
import ibis
import os
hdfs_port = os.environ.get('IBIS_WEBHDFS_PORT', 50070)
hdfs = ibis.hdfs_connect(host='quickstart.cloudera', port=hdfs_port)
con = ibis.impala.connect(host='quickstart.cloudera', database='ibis_testing',
hdfs_client=hdfs)
ibis.options.interactive = True
table = con.table('functional_alltypes')
table.limit(5)
table.double_col.mean()
cond = table.bigint_col > table.double_col.mean()
expr = table[cond & table.bool_col].limit(5)
expr
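# Hedged note (not in the original notebook): ibis expressions are lazy --
# `expr` above is only compiled to SQL and run against Impala when it is
# evaluated (here, via interactive mode when the cell prints it).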
region = con.table('tpch_region')
nation = con.table('tpch_nation')
customer = con.table('tpch_customer')
orders = con.table('tpch_orders')
fields_of_interest = [customer,
region.r_name.name('region'),
orders.o_totalprice,
orders.o_orderdate.cast('timestamp').name('odate')]
tpch = (region.join(nation, region.r_regionkey == nation.n_regionkey)
.join(customer, customer.c_nationkey == nation.n_nationkey)
.join(orders, orders.o_custkey == customer.c_custkey)
[fields_of_interest])
tpch.limit(5)
t2 = tpch.view()
conditional_avg = t2[(t2.region == tpch.region)].o_totalprice.mean()
amount_filter = tpch.o_totalprice > conditional_avg
tpch[amount_filter].limit(10)
tpch.count()
tpch[amount_filter].count()
tpch.schema()
year = tpch.odate.year().name('year')
pre_sizes = tpch.group_by(year).size()
post_sizes = tpch[amount_filter].group_by(year).size().view()
percent = ((post_sizes['count'] / pre_sizes['count'].cast('double'))
.name('fraction'))
expr = (pre_sizes.join(post_sizes, pre_sizes.year == post_sizes.year)
[pre_sizes.year,
pre_sizes['count'].name('pre_count'),
post_sizes['count'].name('post_count'),
percent])
expr
customer = con.table('tpch_customer')
orders = con.table('tpch_orders')
orders.limit(5)
has_open_orders = ((orders.o_orderstatus == 'O') &
(customer.c_custkey == orders.o_custkey)).any()
customer[has_open_orders].limit(10)
has_open_urgent_orders = ((orders.o_orderstatus == 'O') &
(orders.o_orderpriority == '1-URGENT') &
(customer.c_custkey == orders.o_custkey)).any()
customer[-has_open_urgent_orders].count()
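# Hedged aside (not in the original notebook): the same "existence" pattern
# doubles as a general anti-join, using only the API already shown above --
# for example, customers with no orders at all:
never_ordered = customer[-(customer.c_custkey == orders.o_custkey).any()]
never_ordered.count()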
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using scalar aggregates in filters
Step2: We could always compute some aggregate value from the table and use that in another expression, or we can use a data-derived aggregate in the filter. Take the average of a column for example
Step3: You can use this expression as a substitute for a scalar value in a filter, and the execution engine will combine everything into a single query rather than having to access Impala multiple times
Step4: Conditional aggregates
Step5: In this particular case, filtering based on the conditional average o_totalprice by region requires creating a table view (similar to the self-join examples from earlier) that can be treated as a distinct table entity in the expression. This would not be required if we were computing a conditional statistic from some other table. So this is a little more complicated than some other cases would be
Step6: Once you've done this, you can use the conditional average in a filter expression
Step7: By looking at the table sizes before and after applying the filter you can see the relative size of the subset taken.
Step8: Or even group by year and compare before and after
Step9: "Existence" filters
Step10: We introduce the any reduction
Step11: This is now a valid filter
Step12: For the second example, in which we want to find customers not having any open urgent orders, we write down the condition that they do have some first
Step13: Now, we can negate this condition and use it as a filter
|
7,734
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
from functools import partial
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # noqa, analysis:ignore
import mne
from mne.stats import (ttest_1samp_no_p, bonferroni_correction, fdr_correction,
permutation_t_test, permutation_cluster_1samp_test)
print(__doc__)
width = 40
n_subjects = 10
signal_mean = 100
signal_sd = 100
noise_sd = 0.01
gaussian_sd = 5
sigma = 1e-3 # sigma for the "hat" method
n_permutations = 'all' # run an exact test
n_src = width * width
# For each "subject", make a smoothed noisy signal with a centered peak
rng = np.random.RandomState(2)
X = noise_sd * rng.randn(n_subjects, width, width)
# Add a signal at the center
X[:, width // 2, width // 2] = signal_mean + rng.randn(n_subjects) * signal_sd
# Spatially smooth with a 2D Gaussian kernel
size = width // 2 - 1
gaussian = np.exp(-(np.arange(-size, size + 1) ** 2 / float(gaussian_sd ** 2)))
for si in range(X.shape[0]):
for ri in range(X.shape[1]):
X[si, ri, :] = np.convolve(X[si, ri, :], gaussian, 'same')
for ci in range(X.shape[2]):
X[si, :, ci] = np.convolve(X[si, :, ci], gaussian, 'same')
fig, ax = plt.subplots()
ax.imshow(X.mean(0), cmap='inferno')
ax.set(xticks=[], yticks=[], title="Data averaged over subjects")
titles = ['t']
out = stats.ttest_1samp(X, 0, axis=0)
ts = [out[0]]
ps = [out[1]]
mccs = [False] # these are not multiple-comparisons corrected
def plot_t_p(t, p, title, mcc, axes=None):
if axes is None:
fig = plt.figure(figsize=(6, 3))
axes = [fig.add_subplot(121, projection='3d'), fig.add_subplot(122)]
show = True
else:
show = False
p_lims = [0.1, 0.001]
t_lims = -stats.distributions.t.ppf(p_lims, n_subjects - 1)
p_lims = [-np.log10(p) for p in p_lims]
# t plot
x, y = np.mgrid[0:width, 0:width]
surf = axes[0].plot_surface(x, y, np.reshape(t, (width, width)),
rstride=1, cstride=1, linewidth=0,
vmin=t_lims[0], vmax=t_lims[1], cmap='viridis')
axes[0].set(xticks=[], yticks=[], zticks=[],
xlim=[0, width - 1], ylim=[0, width - 1])
axes[0].view_init(30, 15)
cbar = plt.colorbar(ax=axes[0], shrink=0.75, orientation='horizontal',
fraction=0.1, pad=0.025, mappable=surf)
cbar.set_ticks(t_lims)
cbar.set_ticklabels(['%0.1f' % t_lim for t_lim in t_lims])
cbar.set_label('t-value')
cbar.ax.get_xaxis().set_label_coords(0.5, -0.3)
if not show:
axes[0].set(title=title)
if mcc:
axes[0].title.set_weight('bold')
# p plot
use_p = -np.log10(np.reshape(np.maximum(p, 1e-5), (width, width)))
img = axes[1].imshow(use_p, cmap='inferno', vmin=p_lims[0], vmax=p_lims[1],
interpolation='nearest')
axes[1].set(xticks=[], yticks=[])
cbar = plt.colorbar(ax=axes[1], shrink=0.75, orientation='horizontal',
fraction=0.1, pad=0.025, mappable=img)
cbar.set_ticks(p_lims)
cbar.set_ticklabels(['%0.1f' % p_lim for p_lim in p_lims])
cbar.set_label(r'$-\log_{10}(p)$')
cbar.ax.get_xaxis().set_label_coords(0.5, -0.3)
if show:
text = fig.suptitle(title)
if mcc:
text.set_weight('bold')
plt.subplots_adjust(0, 0.05, 1, 0.9, wspace=0, hspace=0)
mne.viz.utils.plt_show()
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
ts.append(ttest_1samp_no_p(X, sigma=sigma))
ps.append(stats.distributions.t.sf(np.abs(ts[-1]), len(X) - 1) * 2)
titles.append(r'$\mathrm{t_{hat}}$')
mccs.append(False)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
# Here we have to do a bit of gymnastics to get our function to do
# a permutation test without correcting for multiple comparisons:
X.shape = (n_subjects, n_src) # flatten the array for simplicity
titles.append('Permutation')
ts.append(np.zeros(width * width))
ps.append(np.zeros(width * width))
mccs.append(False)
for ii in range(n_src):
ts[-1][ii], ps[-1][ii] = permutation_t_test(X[:, [ii]], verbose=False)[:2]
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
N = np.arange(1, 80)
alpha = 0.05
p_type_I = 1 - (1 - alpha) ** N
fig, ax = plt.subplots(figsize=(4, 3))
ax.scatter(N, p_type_I, 3)
ax.set(xlim=N[[0, -1]], ylim=[0, 1], xlabel=r'$N_{\mathrm{test}}$',
ylabel=u'Probability of at least\none type I error')
ax.grid(True)
fig.tight_layout()
fig.show()
titles.append('Bonferroni')
ts.append(ts[-1])
ps.append(bonferroni_correction(ps[0])[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
titles.append('FDR')
ts.append(ts[-1])
ps.append(fdr_correction(ps[0])[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
titles.append(r'$\mathbf{Perm_{max}}$')
out = permutation_t_test(X, verbose=False)[:2]
ts.append(out[0])
ps.append(out[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
from sklearn.feature_extraction.image import grid_to_graph # noqa: E402
mini_adjacency = grid_to_graph(3, 3).toarray()
assert mini_adjacency.shape == (9, 9)
print(mini_adjacency[0])
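# Hedged note: row i of this matrix marks the grid neighbours of voxel i
# (4-connectivity plus the self-connection on the diagonal); e.g. voxel 0,
# the top-left corner of the 3x3 grid, connects to voxels 1 and 3.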
titles.append('Clustering')
# Reshape data to what is equivalent to (n_samples, n_space, n_time)
X.shape = (n_subjects, width, width)
# Compute threshold from t distribution (this is also the default)
threshold = stats.distributions.t.ppf(1 - alpha, n_subjects - 1)
t_clust, clusters, p_values, H0 = permutation_cluster_1samp_test(
X, n_jobs=1, threshold=threshold, adjacency=None,
n_permutations=n_permutations, out_type='mask')
# Put the cluster data in a viewable format
p_clust = np.ones((width, width))
for cl, p in zip(clusters, p_values):
p_clust[cl] = p
ts.append(t_clust)
ps.append(p_clust)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
titles.append(r'$\mathbf{C_{hat}}$')
stat_fun_hat = partial(ttest_1samp_no_p, sigma=sigma)
t_hat, clusters, p_values, H0 = permutation_cluster_1samp_test(
X, n_jobs=1, threshold=threshold, adjacency=None, out_type='mask',
n_permutations=n_permutations, stat_fun=stat_fun_hat, buffer_size=None)
p_hat = np.ones((width, width))
for cl, p in zip(clusters, p_values):
p_hat[cl] = p
ts.append(t_hat)
ps.append(p_hat)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
titles.append(r'$\mathbf{C_{TFCE}}$')
threshold_tfce = dict(start=0, step=0.2)
t_tfce, _, p_tfce, H0 = permutation_cluster_1samp_test(
X, n_jobs=1, threshold=threshold_tfce, adjacency=None,
n_permutations=n_permutations, out_type='mask')
ts.append(t_tfce)
ps.append(p_tfce)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
titles.append(r'$\mathbf{C_{hat,TFCE}}$')
t_tfce_hat, _, p_tfce_hat, H0 = permutation_cluster_1samp_test(
X, n_jobs=1, threshold=threshold_tfce, adjacency=None, out_type='mask',
n_permutations=n_permutations, stat_fun=stat_fun_hat, buffer_size=None)
ts.append(t_tfce_hat)
ps.append(p_tfce_hat)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
fig = plt.figure(facecolor='w', figsize=(14, 3))
assert len(ts) == len(titles) == len(ps)
for ii in range(len(ts)):
ax = [fig.add_subplot(2, 10, ii + 1, projection='3d'),
fig.add_subplot(2, 10, 11 + ii)]
plot_t_p(ts[ii], ps[ii], titles[ii], mccs[ii], ax)
fig.tight_layout(pad=0, w_pad=0.05, h_pad=0.1)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hypothesis testing
Step2: The data averaged over all subjects looks like this
Step3: In this case, a null hypothesis we could test for each voxel is
Step4: "Hat" variance adjustment
Step5: Non-parametric tests
Step6: Multiple comparisons
Step7: To combat this problem, several methods exist. Typically these
Step8: False discovery rate (FDR) correction
Step9: Non-parametric resampling test with a maximum statistic
Step10: Clustering
Step11: In general the adjacency between voxels can be more complex, such as
Step12: "Hat" variance adjustment
Step13: Threshold-free cluster enhancement (TFCE)
Step14: We can also combine TFCE and the "hat" correction
Step15: Visualize and compare methods
|
7,735
|
<ASSISTANT_TASK:>
Python Code:
import json
from mdf_forge.forge import Forge
mdf = Forge()
# First, let's aggregate all the nist_xps_db data.
all_entries = mdf.aggregate_sources("nist_xps_db")
print(len(all_entries))
# Now, let's parse out the enery_uncertainty_ev and print the results for analysis.
uncertainties = {}
for record in all_entries:
if record["mdf"]["resource_type"] == "record":
unc = record.get("nist_xps_db_v1", {}).get("energy_uncertainty_ev", 0)
if not uncertainties.get(unc):
uncertainties[unc] = 1
else:
uncertainties[unc] += 1
print(json.dumps(uncertainties, sort_keys=True, indent=4, separators=(',', ': ')))
# First, let's aggregate everything that has "Ga" in the list of elements.
all_results = mdf.aggregate("material.elements:Ga")
print(len(all_results))
# Now, let's parse out the other elements in each record and keep a running tally to print out.
elements = {}
for record in all_results:
if record["mdf"]["resource_type"] == "record":
elems = record["material"]["elements"]
for elem in elems:
if elem in elements.keys():
elements[elem] += 1
else:
elements[elem] = 1
print(json.dumps(elements, sort_keys=True, indent=4, separators=(',', ': ')))
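# Hedged aside (plain Python, nothing MDF-specific): rank the tally to see
# which elements co-occur with Ga most often:
for elem, count in sorted(elements.items(), key=lambda kv: -kv[1])[:10]:
    print(elem, count)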
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: aggregate_sources - NIST XPS DB
Step2: aggregate - Multiple Datasets
|
7,736
|
<ASSISTANT_TASK:>
Python Code:
print("Hello, world")
# Python 2 version (its print statement also accepts the parenthesized form)
print("Hello, world")
print(“Hello world“)  # note the curly “smart” quotes -- this line raises SyntaxError
print("Hello world")
import unicodedata
# Good double quote:
unicodedata.category('"')
# Good single quote
unicodedata.category("'")
# BAD...double quote
unicodedata.category('“')
# BAD...single quote
unicodedata.category("‘")
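# Hedged aside: one pragmatic fix is to map "smart" punctuation back to
# ASCII quotes before running pasted code (plain str.translate, no extra
# library assumed):
bad_line = 'print(“Hello world“)'
print(bad_line.translate(str.maketrans('“”‘’', '""\'\'')))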
name = input()
print("Hello" + name)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Not working
Step2: Troubleshooting
Step3: SOLUTION
|
7,737
|
<ASSISTANT_TASK:>
Python Code:
from fastai.text import TextLMDataBunch as lmdb
from fastai.text.transform import Tokenizer
import pandas as pd
from pathlib import Path
# note: download the data and place in right directory before running this code!
valid_df = pd.read_hdf(Path('../data/2_partitioned_df/valid_df.hdf'))
train_df = pd.read_hdf(Path('../data/2_partitioned_df/train_df.hdf'))
print(f'rows in train_df:, {train_df.shape[0]:,}')
print(f'rows in valid_df:, {valid_df.shape[0]:,}')
train_df.head(3)
def pass_through(x):
return x
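# Hedged note: pass_through is a named function rather than a lambda because
# the tokenizer fans work out over n_cpus worker processes, so every
# pre_rule must be picklable.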
# only thing is we are changing pre_rules to be pass through since we have already done all of the pre-rules.
# you don't want to accidentally apply pre-rules again otherwhise it will corrupt the data.
tokenizer = Tokenizer(pre_rules=[pass_through], n_cpus=31)
path = Path('../model/lang_model/')
# Note you want your own tokenizer, without pre-rules
data_lm = lmdb.from_df(path=path,
train_df=train_df,
valid_df=valid_df,
text_cols='text',
tokenizer=tokenizer,
chunksize=6000000)
data_lm.save() # saves to self.path/data_save.hdf
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in Data
Step2: Create The DataBunch
Step3: Specify path for saving language model artifacts
Step4: Create The Language Model Data Bunch
|
7,738
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
# load and split data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
# pipeline setup
cls = SVC(C=10.0, kernel='rbf', gamma=0.1, decision_function_shape='ovr')
kernel_svm = Pipeline([('std', StandardScaler()),
('svc', cls)])
# gridsearch setup
param_grid = [
{'svc__C': [1, 10, 100, 1000],
'svc__gamma': [0.001, 0.0001],
'svc__kernel': ['rbf']},
]
# setup multiple GridSearchCV objects, 1 for each algorithm
gs_svm = GridSearchCV(estimator=kernel_svm,
param_grid=param_grid,
scoring='accuracy',
n_jobs=-1,
cv=5,
verbose=0,
refit=True,
pre_dispatch='2*n_jobs')
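# Hedged note: handing this GridSearchCV estimator to cross_val_score below
# is what makes the procedure *nested* -- the outer folds estimate
# generalization error while each inner fit re-tunes the hyperparameters.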
import numpy as np
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(gs_svm, X_train, y_train, scoring='accuracy', cv=5)
print('\nAverage Accuracy %.2f +/- %.2f' % (np.mean(scores), np.std(scores)))
from sklearn.cross_validation import StratifiedKFold
from sklearn.metrics import accuracy_score
import numpy as np
params = []
scores = []
skfold = StratifiedKFold(y=y_train, n_folds=5, shuffle=False, random_state=1)
for train_idx, test_idx in skfold:
gs_svm.fit(X_train[train_idx], y_train[train_idx])
y_pred = gs_svm.predict(X_train[test_idx])
acc = accuracy_score(y_true=y_train[test_idx], y_pred=y_pred)
params.append(gs_svm.best_params_)
scores.append(acc)
print('SVM models:')
for idx, m in enumerate(zip(params, scores)):
print('%s. Acc: %.2f Params: %s' % (idx+1, m[1], m[0]))
print('\nAverage Accuracy %.2f +/- %.2f' % (np.mean(scores), np.std(scores)))
gs_svm.fit(X_train, y_train)
print('Best parameters %s' % gs_svm.best_params_)
train_acc = accuracy_score(y_true=y_train, y_pred=gs_svm.predict(X_train))
test_acc = accuracy_score(y_true=y_test, y_pred=gs_svm.predict(X_test))
print('Training accuracy: %.2f' % train_acc)
print('Test accuracy: %.2f' % test_acc)
print('Parameters: %s' % gs_svm.best_params_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset and Estimator Setup
Step2: A. Nested Crossvalidation - Quick Version
Step3: B. Nested Crossvalidation - Manual Approach Printing the Model Parameters
Step4: Regular K-fold CV to Optimize the Model on the Complete Training Set
|
7,739
|
<ASSISTANT_TASK:>
Python Code:
PROJECT_ID = "YOUR PROJECT ID"
BUCKET_NAME = "gs://YOUR BUCKET NAME"
REGION = "YOUR REGION"
SERVICE_ACCOUNT = "YOUR SERVICE ACCOUNT"
content_name = "tf-keras-img-cls-dist-multi-worker-cpu-cust-cont"
hostname = "gcr.io"
image_name = content_name
tag = "latest"
custom_container_image_uri = f"{hostname}/{PROJECT_ID}/{image_name}:{tag}"
! cd trainer && docker build -t $custom_container_image_uri -f cpu.Dockerfile .
! docker run --rm $custom_container_image_uri --epochs 2 --local-mode
! docker push $custom_container_image_uri
! gcloud container images list --repository $hostname/$PROJECT_ID
! pip install -r requirements.txt
from google.cloud import aiplatform
aiplatform.init(
project=PROJECT_ID,
staging_bucket=BUCKET_NAME,
location=REGION,
)
tensorboard = aiplatform.Tensorboard.create(
display_name=content_name,
)
display_name = content_name
gcs_output_uri_prefix = f"{BUCKET_NAME}/{display_name}"
replica_count = 4
machine_type = "n1-standard-4"
container_args = [
"--epochs",
"50",
"--batch-size",
"32",
]
custom_container_training_job = aiplatform.CustomContainerTrainingJob(
display_name=display_name,
container_uri=custom_container_image_uri,
)
custom_container_training_job.run(
args=container_args,
base_output_dir=gcs_output_uri_prefix,
replica_count=replica_count,
machine_type=machine_type,
tensorboard=tensorboard.resource_name,
service_account=SERVICE_ACCOUNT,
)
print(f"Custom Training Job Name: {custom_container_training_job.resource_name}")
print(f"GCS Output URI Prefix: {gcs_output_uri_prefix}")
! gsutil ls $gcs_output_uri_prefix
! gsutil rm -rf $gcs_output_uri_prefix
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Vertex Training using Vertex SDK and Custom Container
Step2: Initialize Vertex SDK
Step3: Create a Vertex Tensorboard Instance
Step4: Option
Step5: Training Output Artifact
Step6: Clean Up Artifact
|
7,740
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pylab as plt
import padasip as pa
%matplotlib inline
plt.style.use('ggplot') # nicer plots
np.random.seed(52102) # always use the same random seed to make results comparable
def measure_x():
# input vector of size 3
x = np.random.random(3)
return x
def measure_d(x):
# meausure system output
d = 2*x[0] + 1*x[1] - 1.5*x[2]
return d
filt = pa.filters.FilterNLMS(3, mu=1.)
N = 100
log_d = np.zeros(N)
log_y = np.zeros(N)
for k in range(N):
# measure input
x = measure_x()
# predict new value
y = filt.predict(x)
# do the important stuff with prediction output
pass
# measure output
d = measure_d(x)
# update filter
filt.adapt(d, x)
# log values
log_d[k] = d
log_y[k] = y
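# Hedged aside: a quick numeric convergence check over the second half of
# the run (plain NumPy, nothing padasip-specific):
print("tail MSE:", np.mean((log_d[N//2:] - log_y[N//2:])**2))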
plt.figure(figsize=(12.5,6))
plt.plot(log_d, "b", label="target")
plt.plot(log_y, "g", label="prediction")
plt.xlabel("discrete time index [k]")
plt.legend()
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: One Sample Ahead Prediction Example with the NLMS Filter
Step2: For prediction of the variable $d(k)$ it is possible to use any implemented filter (LMS, RLS, NLMS). In this case the NLMS filter is used. The filter (of size 3 in this example) can be created as follows
Step3: Now the created filter can be used in the loop in real time as
Step4: Now, according to logged values it is possible to display the learning process of the filter.
|
7,741
|
<ASSISTANT_TASK:>
Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display a sample training image (index 4)
show_digit(4)
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 400, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=20)
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicated labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrieving training and test data
Step2: Visualize the training data
Step3: Building the network
Step4: Training the network
Step5: Testing
|
7,742
|
<ASSISTANT_TASK:>
Python Code:
import pymc3 as pm
with pm.Model() as model:
parameter = pm.Exponential("poisson_param", 1)
data_generator = pm.Poisson("data_generator", parameter)
with model:
data_plus_one = data_generator + 1
parameter.tag.test_value
with pm.Model() as model:
theta = pm.Exponential("theta", 2)
data_generator = pm.Poisson("data_generator", theta)
with pm.Model() as ab_testing:
p_A = pm.Uniform("P(A)", 0, 1)
p_B = pm.Uniform("P(B)", 0, 1)
print("parameter.tag.test_value =", parameter.tag.test_value)
print("data_generator.tag.test_value =", data_generator.tag.test_value)
print("data_plus_one.tag.test_value =", data_plus_one.tag.test_value)
with pm.Model() as model:
parameter = pm.Exponential("poisson_param", 1, testval=0.5)
print("\nparameter.tag.test_value =", parameter.tag.test_value)
with pm.Model() as model:
lambda_1 = pm.Exponential("lambda_1", 1)
lambda_2 = pm.Exponential("lambda_2", 1)
tau = pm.DiscreteUniform("tau", lower=0, upper=10)
new_deterministic_variable = lambda_1 + lambda_2
import numpy as np
n_data_points = 5 # in CH1 we had ~70 data points
idx = np.arange(n_data_points)
with model:
lambda_ = pm.math.switch(tau >= idx, lambda_1, lambda_2)
import theano.tensor as tt
with pm.Model() as theano_test:
p1 = pm.Uniform("p", 0, 1)
p2 = 1 - p1
p = tt.stack([p1, p2])
assignment = pm.Categorical("assignment", p)
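# Hedged note: tt.stack packs the two scalars into a length-2 Theano vector,
# giving the Categorical a proper probability vector (p1 + p2 == 1).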
%matplotlib inline
from IPython.core.pylabtools import figsize
import matplotlib.pyplot as plt
import scipy.stats as stats
figsize(12.5, 4)
samples = [lambda_1.random()[0] for i in range(20000)]
plt.hist(samples, bins=70, normed=True, histtype="stepfilled")
plt.title("Prior distribution for $\lambda_1$")
plt.xlim(0, 8);
data = np.array([10, 5])
with model:
fixed_variable = pm.Poisson("fxd", 1, observed=data)
print("value: ", fixed_variable.tag.test_value)
# We're using some fake data here
data = np.array([10, 25, 15, 20, 35])
with model:
obs = pm.Poisson("obs", lambda_, observed=data)
print(obs.tag.test_value)
tau = np.random.randint(0, 80)
print(tau)
alpha = 1./20.
lambda_1, lambda_2 = np.random.exponential(scale=1/alpha, size=2)
print(lambda_1, lambda_2)
data = np.r_[stats.poisson.rvs(mu=lambda_1, size=tau), stats.poisson.rvs(mu=lambda_2, size = 80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau-1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend();
def plot_artificial_sms_dataset():
tau = stats.randint.rvs(0, 80)
alpha = 1./20.
lambda_1, lambda_2 = stats.expon.rvs(scale=1/alpha, size=2)
data = np.r_[stats.poisson.rvs(mu=lambda_1, size=tau), stats.poisson.rvs(mu=lambda_2, size=80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau-1], color="r", label="user behaviour changed")
plt.xlim(0, 80);
figsize(12.5, 5)
plt.title("More example of artificial datasets")
for i in range(4):
plt.subplot(4, 1, i+1)
plot_artificial_sms_dataset()
import pymc3 as pm
# The parameters are the bounds of the Uniform.
with pm.Model() as model:
p = pm.Uniform('p', lower=0, upper=1)
#set constants
p_true = 0.05 # remember, this is unknown.
N = 1500
# sample N Bernoulli random variables from Ber(0.05).
# each random variable has a 0.05 chance of being a 1.
# this is the data-generation step
occurrences = stats.bernoulli.rvs(p_true, size=N)
print(occurrences) # Remember: Python treats True == 1, and False == 0
print(np.sum(occurrences))
# Occurrences.mean is equal to n/N.
print("What is the observed frequency in Group A? %.4f" % np.mean(occurrences))
print("Does this equal the true frequency? %s" % (np.mean(occurrences) == p_true))
#include the observations, which are Bernoulli
with model:
obs = pm.Bernoulli("obs", p, observed=occurrences)
# To be explained in chapter 3
step = pm.Metropolis()
trace = pm.sample(18000, step=step)
burned_trace = trace[1000:]
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 90, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(burned_trace["p"], bins=25, histtype="stepfilled", normed=True)
plt.legend();
import pymc3 as pm
figsize(12, 4)
#these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04
#notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750
#generate some observations
observations_A = stats.bernoulli.rvs(true_p_A, size=N_A)
observations_B = stats.bernoulli.rvs(true_p_B, size=N_B)
print("Obs from Site A: ", observations_A[:30], "...")
print("Obs from Site B: ", observations_B[:30], "...")
print(np.mean(observations_A))
print(np.mean(observations_B))
# Set up the pymc3 model. Again assume Uniform priors for p_A and p_B.
with pm.Model() as model:
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)
# Define the deterministic delta function. This is our unknown of interest.
delta = pm.Deterministic("delta", p_A - p_B)
# Set of observations, in this case we have two observation datasets.
obs_A = pm.Bernoulli("obs_A", p_A, observed=observations_A)
obs_B = pm.Bernoulli("obs_B", p_B, observed=observations_B)
# To be explained in chapter 3.
step = pm.Metropolis()
trace = pm.sample(20000, step=step)
burned_trace=trace[1000:]
p_A_samples = burned_trace["p_A"]
p_B_samples = burned_trace["p_B"]
delta_samples = burned_trace["delta"]
figsize(12.5, 10)
#histogram of posteriors
ax = plt.subplot(311)
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_A$", color="#A60628", normed=True)
plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)")
plt.legend(loc="upper right")
plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns")
ax = plt.subplot(312)
plt.xlim(0, .1)
plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_B$", color="#467821", normed=True)
plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)")
plt.legend(loc="upper right")
ax = plt.subplot(313)
plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,
label="posterior of delta", color="#7A68A6", normed=True)
plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--",
label="true delta (unknown)")
plt.vlines(0, 0, 60, color="black", alpha=0.2)
plt.legend(loc="upper right");
# Count the number of samples less than 0, i.e. the area under the curve
# before 0, represent the probability that site A is worse than site B.
print("Probability site A is WORSE than site B: %.3f" % \
np.mean(delta_samples < 0))
print("Probability site A is BETTER than site B: %.3f" % \
np.mean(delta_samples > 0))
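# Hedged aside: a point estimate and 95% credible interval for the lift,
# computed with plain NumPy over the posterior samples:
print("delta mean: %.4f" % delta_samples.mean())
print("95% credible interval:", np.percentile(delta_samples, [2.5, 97.5]))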
figsize(12.5, 4)
import scipy.stats as stats
binomial = stats.binom
parameters = [(10, .4), (10, .9)]
colors = ["#348ABD", "#A60628"]
for i in range(2):
N, p = parameters[i]
_x = np.arange(N + 1)
plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],
edgecolor=colors[i],
alpha=0.6,
label="$N$: %d, $p$: %.1f" % (N, p),
linewidth=3)
plt.legend(loc="upper left")
plt.xlim(0, 10.5)
plt.xlabel("$k$")
plt.ylabel("$P(X = k)$")
plt.title("Probability mass distributions of binomial random variables");
import pymc3 as pm
N = 100
with pm.Model() as model:
p = pm.Uniform("freq_cheating", 0, 1)
with model:
true_answers = pm.Bernoulli("truths", p, shape=N, testval=np.random.binomial(1, 0.5, N))
with model:
first_coin_flips = pm.Bernoulli("first_flips", 0.5, shape=N, testval=np.random.binomial(1, 0.5, N))
print(first_coin_flips.tag.test_value)
with model:
second_coin_flips = pm.Bernoulli("second_flips", 0.5, shape=N, testval=np.random.binomial(1, 0.5, N))
import theano.tensor as tt
with model:
val = first_coin_flips*true_answers + (1 - first_coin_flips)*second_coin_flips
observed_proportion = pm.Deterministic("observed_proportion", tt.sum(val)/float(N))
observed_proportion.tag.test_value
X = 35
with model:
observations = pm.Binomial("obs", N, observed_proportion, observed=X)
# To be explained in Chapter 3!
with model:
step = pm.Metropolis(vars=[p])
trace = pm.sample(40000, step=step)
burned_trace = trace[15000:]
figsize(12.5, 3)
p_trace = burned_trace["freq_cheating"][15000:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)
plt.xlim(0, 1)
plt.legend();
with pm.Model() as model:
p = pm.Uniform("freq_cheating", 0, 1)
p_skewed = pm.Deterministic("p_skewed", 0.5*p + 0.25)
with model:
yes_responses = pm.Binomial("number_cheaters", 100, p_skewed, observed=35)
with model:
# To Be Explained in Chapter 3!
step = pm.Metropolis()
trace = pm.sample(25000, step=step)
burned_trace = trace[2500:]
figsize(12.5, 3)
p_trace = burned_trace["freq_cheating"]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)
plt.xlim(0, 1)
plt.legend();
N = 10
x = np.ones(N, dtype=object)
with pm.Model() as model:
for i in range(0, N):
x[i] = pm.Exponential('x_%i' % i, (i+1)**2)
figsize(12.5, 3.5)
np.set_printoptions(precision=3, suppress=True)
challenger_data = np.genfromtxt("data/challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
#drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
#plot it, as a function of tempature (the first column)
print("Temp (F), O-Ring failure?")
print(challenger_data)
plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature");
figsize(12, 3)
def logistic(x, beta):
return 1.0 / (1.0 + np.exp(beta * x))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$")
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$")
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$")
plt.legend();
def logistic(x, beta, alpha=0):
return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1)
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1)
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1)
plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$",
color="#348ABD")
plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$",
color="#A60628")
plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$",
color="#7A68A6")
plt.legend(loc="lower left");
import scipy.stats as stats
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
plt.plot(x, nor.pdf(x, _mu, scale=1./_tau),
label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
plt.fill_between(x, nor.pdf(x, _mu, scale=1./_tau), color=_color,
alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables");
import pymc3 as pm
temperature = challenger_data[:, 0]
D = challenger_data[:, 1] # defect or not?
#notice the`value` here. We explain why below.
with pm.Model() as model:
beta = pm.Normal("beta", mu=0, tau=0.001, testval=0)
alpha = pm.Normal("alpha", mu=0, tau=0.001, testval=0)
p = pm.Deterministic("p", 1.0/(1. + tt.exp(beta*temperature + alpha)))
# connect the probabilities in `p` with our observations through a
# Bernoulli random variable.
with model:
observed = pm.Bernoulli("bernoulli_obs", p, observed=D)
# Mysterious code to be explained in Chapter 3
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(120000, step=step, start=start)
burned_trace = trace[100000::2]
alpha_samples = burned_trace["alpha"][:, None] # best to make them 1d
beta_samples = burned_trace["beta"][:, None]
figsize(12.5, 6)
#histogram of the samples:
plt.subplot(211)
plt.title(r"Posterior distributions of the variables $\alpha, \beta$")
plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\beta$", color="#7A68A6", normed=True)
plt.legend()
plt.subplot(212)
plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\alpha$", color="#A60628", normed=True)
plt.legend();
t = np.linspace(temperature.min() - 5, temperature.max()+5, 50)[:, None]
p_t = logistic(t.T, beta_samples, alpha_samples)
mean_prob_t = p_t.mean(axis=0)
figsize(12.5, 4)
plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \
of defect")
plt.plot(t, p_t[0, :], ls="--", label="realization from posterior")
plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.title("Posterior expected value of probability of defect; \
plus realizations")
plt.legend(loc="lower left")
plt.ylim(-0.1, 1.1)
plt.xlim(t.min(), t.max())
plt.ylabel("probability")
plt.xlabel("temperature");
from scipy.stats.mstats import mquantiles
# vectorized bottom and top 2.5% quantiles for "confidence interval"
qs = mquantiles(p_t, [0.025, 0.975], axis=0)
plt.fill_between(t[:, 0], *qs, alpha=0.7,
color="#7A68A6")
plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7)
plt.plot(t, mean_prob_t, lw=1, ls="--", color="k",
label="average posterior \nprobability of defect")
plt.xlim(t.min(), t.max())
plt.ylim(-0.02, 1.02)
plt.legend(loc="lower left")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.xlabel("temp, $t$")
plt.ylabel("probability estimate")
plt.title("Posterior probability estimates given temp. $t$");
figsize(12.5, 2.5)
prob_31 = logistic(31, beta_samples, alpha_samples)
plt.xlim(0.995, 1)
plt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')
plt.title("Posterior distribution of probability of defect, given $t = 31$")
plt.xlabel("probability of defect occurring in O-ring");
N = 10000
with pm.Model() as model:
beta = pm.Normal("beta", mu=0, tau=0.001, testval=0)
alpha = pm.Normal("alpha", mu=0, tau=0.001, testval=0)
p = pm.Deterministic("p", 1.0/(1. + tt.exp(beta*temperature + alpha)))
observed = pm.Bernoulli("bernoulli_obs", p, observed=D)
simulated = pm.Bernoulli("bernoulli_sim", p, shape=p.tag.test_value.shape)
step = pm.Metropolis(vars=[p])
trace = pm.sample(N, step=step)
figsize(12.5, 5)
simulations = trace["bernoulli_sim"]
print(simulations.shape)
plt.title("Simulated dataset using posterior parameters")
figsize(12.5, 6)
for i in range(4):
ax = plt.subplot(4, 1, i+1)
plt.scatter(temperature, simulations[1000*i, :], color="k",
s=50, alpha=0.6)
posterior_probability = simulations.mean(axis=0)
print("posterior prob of defect | realized defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[i], D[i]))
ix = np.argsort(posterior_probability)
print("probb | defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]]))
from separation_plot import separation_plot
figsize(11., 1.5)
separation_plot(posterior_probability, D)
figsize(11., 1.25)
# Our temperature-dependent model
separation_plot(posterior_probability, D)
plt.title("Temperature-dependent model")
# Perfect model
# i.e. the probability of defect is equal to if a defect occurred or not.
p = D
separation_plot(p, D)
plt.title("Perfect model")
# random predictions
p = np.random.rand(23)
separation_plot(p, D)
plt.title("Random model")
# constant model
constant_prob = 7./23*np.ones(23)
separation_plot(constant_prob, D)
plt.title("Constant-prediction model");
#type your code here.
figsize(12.5, 4)
plt.scatter(alpha_samples, beta_samples, alpha=0.1)
plt.title("Why does the plot look like this?")
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$");
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is an extra layer of convenience compared to PyMC. Any variables created within a given Model's context will be automatically assigned to that model. If you try to define a variable outside of the context of a model, you will get an error.
Step2: We can examine the same variables outside of the model context once they have been defined, but to define more variables that the model will recognize they have to be within the context.
Step3: Each variable assigned to a model will be defined with its own name, the first string parameter (we will cover this further in the variables section). To create a different model object with the same name as one we have used previously, we need only run the first block of code again.
Step4: We can also define an entirely separate model. Note that we are free to name our models whatever we like, so if we do not want to overwrite an old model we need only make another.
Step5: You probably noticed that PyMC3 will often give you notifications about transformations when you add variables to your model. These transformations are done internally by PyMC3 to modify the space that the variable is sampled in (when we get to actually sampling the model). This is an internal feature which helps with the convergence of our samples to the posterior distribution and serves to improve the results.
Step6: The test_value is used only for the model, as the starting point for sampling if no other start is specified. It will not change as a result of sampling. This initial state can be changed at variable creation by specifying a value for the testval parameter.
Step7: This can be helpful if you are using a more unstable prior that may require a better starting point.
Step8: If we want a deterministic variable to actually be tracked by our sampling, however, we need to define it explicitly as a named deterministic variable with the constructor.
Step9: Clearly, if $\tau, \lambda_1$ and $\lambda_2$ are known, then $\lambda$ is known completely, hence it is a deterministic variable. We use the switch function here to change from $\lambda_1$ to $\lambda_2$ at the appropriate time. This function is directly from the theano package, which we will discuss in the next section.
Step10: Here we use theano's stack() function in the same way we would use one of NumPy's stacking functions
Step11: To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model.
Step12: This is how we include data into our models
Step13: Modeling approaches
Step14: 2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution
Step15: 3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example
Step16: 4. Plot the artificial dataset
Step17: It is okay that our fictional dataset does not look like our observed dataset
Step18: Later we will see how we use this to make predictions and test the appropriateness of our models.
Step19: Had we had stronger beliefs, we could have expressed them in the prior above.
Step20: The observed frequency is
Step21: We combine the observations into the PyMC3 observed variable, and run our inference algorithm
Step22: We plot the posterior distribution of the unknown $p_A$ below
Step23: Our posterior distribution puts most weight near the true value of $p_A$, but also some weights in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.
Step24: Below we plot the posterior distributions for the three unknowns
Step25: Notice that as a result of N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$.
Step26: If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has less samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A).
Step27: The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.
Step28: Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students
Step29: If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$
Step30: Although not everyone flips a second time, we can still model the possible realization of second coin-flips
Step31: Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC3 deterministic variable
Step32: The line fc*t_a + (1-fc)*sc contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails and the second is heads, and are 0 otherwise. Finally, the last line sums this vector and divides by float(N), producing a proportion.
Step33: Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be "Yes".
Step34: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
Step35: With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 to 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a .3 length window the true value most likely lives in. Have we even gained anything, or are we still too uncertain about the true frequency?
Step36: I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake.
Step37: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
Step38: More PyMC3 Tricks
Step39: The remainder of this chapter examines some practical examples of PyMC3 and PyMC3 modeling
Step40: It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
Step41: But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function
Step42: Adding a constant term $\alpha$ amounts to shifting the curve left or right (which is why it is called a bias).
Step43: A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter
Step44: We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like
Step45: We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$
Step46: All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect.
Step47: Above we also plotted two possible realizations of what the actual underlying system might be. Both are equally likely as any other draw. The blue line is what occurs when we average all the 20000 possible dotted lines together.
Step48: The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.
Step49: Is our model appropriate?
Step50: Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!).
Step51: Next we sort each column by the posterior probabilities
Step52: We can present the above data better in a figure
Step53: The snaking-line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions.
Step54: In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.
Step55: References
|
7,743
|
<ASSISTANT_TASK:>
Python Code:
import pandas
titanic = pandas.read_csv('data/titanic.csv')
titanic.head()
# basic statistics
titanic[['Age', 'Fare']].describe()
# fraction of missing values per column
titanic[['Age', 'Fare']].isna().mean()
# imputing missing values with the median
titanic['Age'].fillna(titanic['Age'].median(), inplace=True)
titanic[['Age', 'Fare']].isna().mean()
# plot distributions
titanic[['Age', 'Fare']].boxplot();
# drop the rows in the top 5% of fares
titanic = titanic[titanic['Fare'] < titanic['Fare'].quantile(.95)]
titanic[['Age', 'Fare']].boxplot();
# plotting correlation
titanic.plot(kind='scatter', x='Age', y='Fare');
# compute correlation
titanic[['Age', 'Fare']].corr()
# get dummies for categories
dummies = pandas.get_dummies(titanic[['Sex', 'Embarked']])
dummies.head()
# prepare data for machine learning models
x = titanic[['Pclass', 'Age', 'SibSp', 'Parch', 'Fare']].join(dummies)
y = titanic['Survived']
x.head()
import sklearn.ensemble
clf = sklearn.ensemble.RandomForestClassifier()
clf.fit(x, y)
pandas.Series(clf.feature_importances_, index=x.columns).sort_values(ascending=False)
titanic.plot(kind='scatter', x='Age', y='Fare', color=y.replace({0: 'red', 1: 'blue'}));
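# illustrative addition: estimate out-of-sample accuracy with 5-fold cross-validation
import sklearn.model_selection
scores = sklearn.model_selection.cross_val_score(clf, x, y, cv=5)
print("mean CV accuracy: {:.3f}".format(scores.mean()))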
# some code resembling pandas to make the traceback look nice
import numpy
cols_from_csv = rows_from_csv = lambda fname: 10_000_000
def read_csv(fname, *args, **kwargs):
cols = cols_from_csv(fname)
rows = rows_from_csv(fname)
create_block_manager_from_arrays(rows, cols)
def create_block_manager_from_arrays(rows, cols):
float_blocks = _multi_blockify((rows, cols), numpy.float64)
def _multi_blockify(shape, dtype):
_stack_arrays(shape, dtype)
def _stack_arrays(shape, dtype):
stacked = numpy.empty(shape, dtype)
original_read_csv = pandas.read_csv
pandas.read_csv = read_csv
import pandas
df = pandas.read_csv('data/huge_file.csv')
pandas.read_csv = original_read_csv
original_read_csv = pandas.read_csv
class DataFrame:
def __init__(self, *args, **kwargs):
pass
@staticmethod
def sum():
import IPython.display
img = IPython.display.Image(filename='img/waiting.jpg')
IPython.display.display(img)
pandas.read_csv = DataFrame
import pandas
df = pandas.read_csv('data/huge_file.csv')
total = df.sum()
pandas.read_csv = original_read_csv
import pandas
titanic = pandas.read_csv('data/titanic.csv')
titanic_male = titanic[titanic['Sex'] == 'male']
titanic_male['Age'].fillna(titanic_male['Age'].median(), inplace=True)
import warnings
import pandas
warnings.filterwarnings('ignore')
titanic = pandas.read_csv('data/titanic.csv')
titanic_male = titanic[titanic['Sex'] == 'male']
titanic_male['Age'].fillna(titanic_male['Age'].median(), inplace=True)
# pandas.read_csv('data/titanic.csv')
import csv
data = []
with open('data/titanic.csv') as f:
reader = csv.reader(f)
columns = next(reader)
for row in reader:
data.append(row)
columns = ('Name', 'Sex', 'Age', 'Fare', 'Survived')
data = [('Montvila, Rev. Juozas', 'male', 27., 13., 0),
('Graham, Miss. Margaret Edith', 'female', 19., 30., 1),
('Johnston, Miss. Catherine Helen "Carrie"', 'female', None, 23.45, 0),
('Behr, Mr. Karl Howell', 'male', 26., 30., 1),
('Dooley, Mr. Patrick', 'male', 32., 7.75, 0)]
# df.iloc[3]
data[3]
# df['Fare']
fares = list(zip(*data))[columns.index('Fare')]
fares
# df['Fare'].mean()
mean_fare = sum(fares) / len(fares)
mean_fare
# df[df.Sex == 'male']
male = list(filter(lambda x: x[columns.index('Sex')] == 'male', data))
male
0.1 + 0.2
import numpy
numpy.array([256], dtype=numpy.uint8)
import pandas
s = pandas.Series([1, 2])
print(s)
s.loc[0] = float('NaN')
print(s)
import random
size = 10_000_000
list1 = [random.random() for i in range(size)]
list2 = [random.random() for i in range(size)]
import numpy
import pandas
series1 = pandas.Series(list1, dtype=numpy.uint8)
series2 = pandas.Series(list2, dtype=numpy.uint8)
print('Memory consumed: {:.2f} Mb'.format(series1.memory_usage(index=False) / 1024 / 1024))
%timeit (series1 > series2).mean()
import numpy
import pandas
series1 = pandas.Series(list1, dtype=numpy.float64)
series2 = pandas.Series(list2, dtype=numpy.float64)
print('Memory consumed: {:.2f} Mb'.format(series1.memory_usage(index=False) / 1024 / 1024))
%timeit (series1 > series2).mean()
import random
import functools
import operator
size = 10_000_000
list1 = [random.random() for i in range(size)]
%timeit functools.reduce(operator.add, list1)
import pandas
s1 = pandas.Series(list1)
%timeit s1.sum()
%timeit sum(item1 > item2 for item1, item2 in zip(list1, list2)) / size
import pandas
series1 = pandas.Series(list1)
series2 = pandas.Series(list2)
%timeit (series1 > series2).mean()
data = [1, 'foo', 3.141592, ['Alan', 'Dennis', 'Linus'],
{'black': '#000000', 'white': '#ffffff'}]
data
import pandas
pandas.Series(data)
import pandas
series1 = pandas.Series(list1, dtype='object')
series2 = pandas.Series(list2, dtype='object')
%timeit (series1 > series2).mean()
import numpy
import pandas
series1 = pandas.Series(list1, dtype=numpy.float64)
series2 = pandas.Series(list2, dtype=numpy.float64)
%timeit (series1 > series2).mean()
import pandas
df = pandas.DataFrame({'foo': [1, 3, 7],
'bar': [.55, 1.76, 3.33],
'foobar': [109, 60, 13]},
columns=['foo', 'bar', 'foobar'])
print(df.dtypes)
df
df._data
df._data.blocks
df._data.blocks[1].values
type(df._data.blocks[1].values)
df._data.blocks[1].values.data
df._data.blocks[1].values.nbytes
bytes_ = df._data.blocks[1].values.tobytes()
print(''.join('{:08b}'.format(byte) for byte in bytes_))
df._data.blocks[1].values.strides
import numpy
data = numpy.empty((1000, 1000), numpy.float64)
%timeit data.sum(axis=0)
%timeit data.sum(axis=1)
import pandas
df = pandas.DataFrame({'foo': [1, 2, 3], 'bar': [5, 10, 15]})
df_view_or_copy = df[df.foo > 2]
df_view_or_copy['bar'] = 0 # are we modifying `df`? <- Chained indexing with unknown result: WARNING
df[df.foo > 2]['bar'] = 0 # same as before
df.loc[df.foo > 2, 'bar'] = 0 # we are modifying `df` (pandas manages __setitem__)
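# illustrative addition: to modify a filtered subset without ambiguity, take an explicit copy
df_subset = df[df.foo > 2].copy()
df_subset['bar'] = 0  # modifies only the copy; `df` stays untouched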
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is pandas, and it is super cool
Step2: ...you know you did something wrong
Step3: Why pandas? Why not pure Python?
Step4: Special numbers exist in floating point but not integer representation
Step5: Example 1
Step6: Computer architecture
Step7: Example 3
Step8: The numpy and pandas implementations are somewhat similar to the machine code we've seen.
Step9: Address | PyObject |
Step10: Pandas internal structure
Step11: Source
Step12: Memory copy
|
7,744
|
<ASSISTANT_TASK:>
Python Code:
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column. Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
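# Illustrative addition (not part of the original lab): peek at one batch to
# verify that the input pipeline yields (features, label) pairs.
for feats, label in load_dataset('train*', batch_size=2).take(1):
    print({k: v.numpy() for k, v in feats.items()})
    print(label.numpy())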
## Build a Keras wide-and-deep model using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Helper function to handle categorical columns
def categorical_fc(name, values):
orig = tf.feature_column.categorical_column_with_vocabulary_list(name, values)
wrapped = tf.feature_column.indicator_column(orig)
return orig, wrapped
def build_wd_model(dnn_hidden_units = [64, 32], nembeds = 3):
# input layer
deep_inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in ['mother_age', 'gestation_weeks']
}
wide_inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in ['is_male', 'plurality']
}
inputs = {**wide_inputs, **deep_inputs}
# feature columns from inputs
deep_fc = {
colname : tf.feature_column.numeric_column(colname)
for colname in ['mother_age', 'gestation_weeks']
}
wide_fc = {}
is_male, wide_fc['is_male'] = categorical_fc('is_male', ['True', 'False', 'Unknown'])
plurality, wide_fc['plurality'] = categorical_fc('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)'])
    # Bucketize the float fields. This makes them wide.
    # https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column
    # (The boundaries below are one reasonable choice for the lab's TODO:
    # one-year buckets over plausible ranges for each field.)
    age_buckets = tf.feature_column.bucketized_column(
        deep_fc['mother_age'], boundaries=np.arange(15, 45, 1).tolist())
    wide_fc['age_buckets'] = tf.feature_column.indicator_column(age_buckets)
    gestation_buckets = tf.feature_column.bucketized_column(
        deep_fc['gestation_weeks'], boundaries=np.arange(17, 47, 1).tolist())
    wide_fc['gestation_buckets'] = tf.feature_column.indicator_column(gestation_buckets)
# cross all the wide columns. We have to do the crossing before we one-hot encode
crossed = tf.feature_column.crossed_column(
[is_male, plurality, age_buckets, gestation_buckets], hash_bucket_size=20000)
deep_fc['crossed_embeds'] = tf.feature_column.embedding_column(crossed, nembeds)
    # The constructor for DenseFeatures takes a list of feature columns.
    # The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
    wide_inputs = tf.keras.layers.DenseFeatures(
        list(wide_fc.values()), name='wide_inputs')(inputs)
    deep_inputs = tf.keras.layers.DenseFeatures(
        list(deep_fc.values()), name='deep_inputs')(inputs)
# hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units]
deep = deep_inputs
for layerno, numnodes in enumerate(layers):
deep = tf.keras.layers.Dense(numnodes, activation='relu', name='dnn_{}'.format(layerno+1))(deep)
deep_out = deep
# linear model for the wide side
wide_out = tf.keras.layers.Dense(10, activation='relu', name='linear')(wide_inputs)
# concatenate the two sides
both = tf.keras.layers.concatenate([deep_out, wide_out], name='both')
# final output is a linear activation because this is regression
output = tf.keras.layers.Dense(1, activation='linear', name='weight')(both)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our Wide-and-Deep architecture so far:\n")
model = build_wd_model()
print(model.summary())
tf.keras.utils.plot_model(model, 'wd_model.png', show_shapes=False, rankdir='LR')
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 5 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('eval*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
import shutil, os, datetime
OUTPUT_DIR = 'babyweight_trained'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
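# Illustrative addition (a sketch, not in the original lab): reload the
# exported SavedModel and list its serving signatures as a sanity check.
loaded = tf.saved_model.load(EXPORT_PATH)
print(list(loaded.signatures.keys()))  # typically ['serving_default']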
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Keras model
Step2: Next, define the feature columns. mother_age and gestation_weeks should be numeric.
Step3: We can visualize the DNN using the Keras plot_model utility.
Step4: Train and evaluate
Step5: Visualize loss curve
Step6: Save the model
|
7,745
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-vol', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
7,746
|
<ASSISTANT_TASK:>
Python Code:
# set the midpoint
midpoint = 5
# make two empty lists
lower = []; upper = []
# split the numbers into lower and upper
for i in range(10):
if (i < midpoint):
lower.append(i)
else:
upper.append(i)
print("lower:", lower)
print("upper:", upper)
x = 1 + 2 + 3 + 4 +\
5 + 6 + 7 + 8
x = (1 + 2 + 3 + 4 +
5 + 6 + 7 + 8)
x = 5
if x < 4:
y = x * 2
print(x)
x = 5
if x < 4:
y = x * 2
print(x)
x=1+2
x = 1 + 2
x             =        1    +                2
2 * (3 + 4)
# example
L = [4,2,3,1]
L.sort()
print(L)
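# added example: the built-in sorted() returns a new list and leaves the original intact
L = [4, 2, 3, 1]
print(sorted(L))  # [1, 2, 3, 4]
print(L)          # unchanged: [4, 2, 3, 1]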
x = "4"
print("String: ", type(x), x)
x = int(x)
print("Integer: ", type(x), x)
x = float(x)
print("Float: ", type(x), x)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This script is a bit silly, but it compactly illustrates several of the important aspects of Python syntax.
Step2: It is also possible to continue expressions on the next line within parentheses (or brackets), without using the "\" marker
Step3: The best way is to use the line continuation (within parentheses).
Step4: Python's use of meaningful whitespace leads to much more consistent and readable code than languages that do not enforce indentation of code blocks.
Step5: Abusing this flexibility can lead to issues with code readability – in fact, abusing white space is often one of the primary means of intentionally obfuscating code (which some people do for sport).
Step6: They can also be used to indicate that a function is being called.
Step7: The "()" after sort indicates that the function should be executed, and is required even if no arguments are necessary.
|
7,747
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import numpy.linalg as la
A = np.array(range(1,5)).reshape(2,2)
determinant_A = la.det(A)
print(A)
print("Determinant is: {}".format(determinant_A)) # Notice the rounding error.
# Let's check its eigenvalues.
print("The Matrix A has eigenvalues: {}".format([x for x in la.eigvals(A)]))
print("And their product is: {}".format(np.product(la.eigvals(A))))
abs(determinant_A - np.product(la.eigvals(A))) < 10**-5
# Let's define an epsilon and check if they are the same.
eps = 10 ** -5
def check_floats_equal(float_1, float_2, eps=eps):
    # compare the absolute difference, so the argument order does not matter
    return abs(float_1 - float_2) < eps
check_floats_equal(np.product(la.eigvals(A)), la.det(A))
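# Added example: the determinant is the product of the eigenvalues,
# and similarly the trace is their sum.
check_floats_equal(np.trace(A), la.eigvals(A).sum())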
vals, vecs = la.eig(A)
print(vecs)
# Let's define a distance function for the column vectors of a matrix.
def l2_distance_cols(matrix):
    norms = []
    for row in matrix.transpose():
        dist = 0
        for val in row:
            dist += val ** 2
        norms.append(dist ** 0.5)  # square root of the sum of squares = L2 norm
    return np.array(norms)
l2_distance_cols(vecs)
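# Added example: numpy can compute the same column norms in a single call.
la.norm(vecs, axis=0)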
A = np.array([0, -1, 1, 0]).reshape(2, 2)
A
np.linalg.det(A)
vals, vecs = np.linalg.eig(A)
print("Values:\n{}".format(vals))
print("Vectors:\n{}".format(vecs))
# First a quick vector making helper function.
def list_to_vector(items):
return np.array(items).reshape(len(items), 1)
np.matmul(A, list_to_vector([1,0]))
np.matmul(A, list_to_vector([0,1]))
A_inv = np.linalg.inv(A)
print("Here is A^-1\n{}".format(A_inv))
np.matmul(A_inv, list_to_vector([1,0]))
np.matmul(A_inv, list_to_vector([0,1]))
I_2 = np.matmul(A, A_inv)
print("{} \n* \n{} \n=\n{}".format(A, A_inv, I_2))
A = np.eye(2) * 2
print(A)
vals, vecs = np.linalg.eig(A)
print("Vals:\n{}".format(vals))
print("Vecs:\n{}".format(vecs))
A_inv = np.linalg.inv(A)
print("The inverse is:\n{}".format(A_inv))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hmm.... is this related to anything??
Step2: Hmm... Interesting.
Step3: It looks like the eigenvectors are normalized to length 1.
Step4: Hmm.... what is happening there, why does it have imaginary eigenvectors??
Step5: This looks like a twist to me. It's sending
Step6: Hmm...
Step7: We just get the identity matrix back -- just like we wanted!!
Step8: What should the inverse of $A$ be?
|
7,748
|
<ASSISTANT_TASK:>
Python Code:
# import os
# from scripts.hpc05 import HPC05Client
# os.environ['SSH_AUTH_SOCK'] = os.path.join(os.path.expanduser('~'), 'ssh-agent.socket')
# cluster = HPC05Client()
from ipyparallel import Client
cluster = Client()
v = cluster[:]
lview = cluster.load_balanced_view()
len(v)
%%px --local
import numpy as np
import sys
import os
sys.path.append(os.path.join(os.path.expanduser('~'), 'orbitalfield'))
from fun import *
from fun2 import *
def evs(lead, p):
h0 = lead.cell_hamiltonian(args=[p])
t0 = lead.inter_cell_hopping(args=[p])
ham = h0 + t0 + t0.conj().T
ev = np.linalg.eigvalsh(ham)
return np.abs(ev).min()
import holoviews as hv
from itertools import product
hv.notebook_extension()
%opts Contours (linewidth=1 color='k') [aspect='square']
%output size=100 dpi=100
%%px --local
constants.a = 10
p = make_params(t_interface=7/8*constants.t, Delta=68.4, r1=50, r2=70,
phi=135, orbital=True, A_correction=False) #r2=70
p.V = lambda x, y, z: 2 / 50 * z
# p.angle = 0 # WIRE WITH SC ON TOP
p.angle = 45 # WIRE WITH SC ON SIDE
lead = make_3d_wire_external_sc(a=constants.a, r1=p.r1, r2=p.r2, angle=p.angle, phi=p.phi)
mus = np.linspace(0, 20, 100)
deltas = np.linspace(60, 80, 50)
vals = list(product(deltas, mus))
v.scatter('vals', vals)
%px res = [evs(lead, p) for p.Delta, p.mu in vals]
res = np.reshape(v.gather('res', block=True), (len(deltas), len(mus)))
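# Illustrative addition: sanity-check the shape of the gathered grid.
print(res.shape)  # expected: (len(deltas), len(mus)) == (50, 100)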
Delta_ind = {delta: hv.Contours((mus, evs), kdims=[dimensions.mu, dimensions.delta_ind])
for evs, delta in zip(res, deltas)}
hm = hv.HoloMap(Delta_ind, kdims=[dimensions.delta])
hm.select(mu=(0, 20), **{'$\Delta_{ind}$': (0.2, 10)}) * hv.HLine(0.25)
delta = 68.4
chosen_curve = Delta_ind[nearest(deltas, delta)].select(mu=(4, 6), **{'$\Delta_{ind}$': (0.23, 0.28)})
(chosen_curve * hv.HLine(0.25)).relabel(r'$\Delta={}$ meV, $t_s=7/8t$'.format(delta))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make sure to add the correct path like
Step2: Uncomment the lines for the wire that you want to use.
Step3: You can specify the intervals of $\mu$ and $\Delta$ that you want to check.
Step4: Run the next cell and use the slider to find the correct $\Delta$.
Step5: As you can see $\Delta=68.4$ meV is the parameter we need with the superconductor on the side if we want $\Delta_{ind} \approx 0.25$ meV (the level marked by the horizontal line).
|
7,749
|
<ASSISTANT_TASK:>
Python Code:
# Two general packages
import os
import sys
an_integer = 3
print(type(an_integer))
an_integer
# type casting: converting the integer to a float type
float(an_integer)
a_float = 0.2
type(a_float)
a_complex = 1.5 + 0.5j
# get the real or imaginary part of the complex number by using the functions
# real and imag on the variable
print(type(a_complex), a_complex.real, a_complex.imag)
a_boolean = (3 > 4)
a_boolean
print (7 * 3.)
print (2**10)
print (8 % 3)
print(3/2)
print(3/2.)
print(3.//2.) #integer division
a_list = [2.,'aa', 0.2]
a_list
# accessing individual object in the list
a_list[1]
# negative indices are used to count from the back
a_list[-1]
another_list = ['first', 'second', 'third', 'fourth', 'fifth']
print(another_list[3:])
print(another_list[:2])
print(another_list[::2])
another_list[3] = 'newFourth'
print(another_list)
another_list[1:3] = ['newSecond', 'newThird']
print(another_list)
a = ['a', 'b']
b = a
b[0] = 1
print(a)
#dir(list)
a_third_list = ['red', 'blue', 'green', 'black', 'white']
# Appending
a_third_list.append('pink')
a_third_list
# Removes and returns the last element
a_third_list.pop()
a_third_list
# Extends the list in-place
a_third_list.extend(['pink', 'purple'])
a_third_list
# Reverse the list
a_third_list.reverse()
a_third_list
# Remove the first occurence of an element
a_third_list.remove('white')
a_third_list
# Sort list
a_third_list.sort()
a_third_list
a_third_list.count?
a_third_list.index??
a_third_list = ['red', 'blue', 'green', 'black', 'white']
# remove the last two elements
a_third_list = a_third_list[:-2]
a_third_list
a_third_list[::-1]
a_list = ['pink', 'orange']
a_concatenated_list = a_third_list + a_list
a_concatenated_list
reverted = a_third_list.reverse()
## uncomment the next lines to test the error:
#a_concatenated_list = a_third_list + reverted
#a_concatenated_list
# Repeating lists
a_repeated_list = a_concatenated_list*10
print(a_repeated_list)
number_list = [1, 2, 3, 4]
[i**2 for i in number_list]
[i**2 for i in number_list if i>1]
[i**2 for i in number_list if i>1]
# Let's try multiplying with two on a list of strings:
print([i*2 for i in a_repeated_list])
s = 'Never gonna give you up'
print(s)
s = "never gonna let you down"
print(s)
s = '''Never gonna run around
and desert you'''
print(s)
## string literals need quotes - uncomment the next lines to see the SyntaxError:
#s = Never gonna make you cry,
#never gonna say goodbye
#print(s)
## pay attention when using apostrophes! - test out the next two lines one at a time
#print('Hi, what's up?')
#print("Hi, what's up?")
print('''Never gonna tell a lie and hurt you.
Never gonna give you up,\tnever gonna let you down
Never \ngonna\n run around and\t desert\t you''')
a_string = "hello"
print(a_string[0])
print(a_string[1:5])
print(a_string[-4:-1:2])
print(u'Hello\u0020World !')
#a_string[3] = 'q' # uncomment this cell
#dir(str) # uncomment this cell
another_string = "Strawberry-raspBerry pAstry package party"
another_string.lower().replace('r', 'l', 7)
print('An integer: %i; a float: %f; another string: %s' % (1, 0.1, 'string'))
print('An integer: {}; a float: {}; another string: {}'.format(1, 0.1, 'string'))
n_dataset_number = 20
sFilename = 'processing_of_dataset_%d.txt' % n_dataset_number
print(sFilename)
[el for el in dir(list) if not el[0]=='_']
sentence = "the quick brown fox jumps over the lazy dog"
#split in words and get word lengths
[len(word) for word in sentence.split()]
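# added example: a comprehension with a condition, applied to the words themselves
[word.upper() for word in sentence.split() if len(word) > 4]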
# Always key : value combinations, datatypes can be mixed
hourly_wage = {'Jos':10, 'Frida': 9, 'Gaspard': '13', 23 : 3}
hourly_wage
hourly_wage['Jos']
hourly_wage['Antoinette'] = 15
hourly_wage
hourly_wage.keys()
hourly_wage.values()
hourly_wage.items() # all combinations in a list
# ignore this loop for now, this will be explained later
for key, value in hourly_wage.items():
print(key,' earns ', value, '€/hour')
hourly_wage = {'Jos':10, 'Frida': 9, 'Gaspard': '13', 23 : 3}
str_key = []
for key in hourly_wage.keys():
str_key.append(str(key))
str_key
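# the same conversion written as a dict comprehension, keeping the values (added example)
{str(key): value for key, value in hourly_wage.items()}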
a_tuple = (2, 3, 'aa', [1, 2])
a_tuple
a_second_tuple = 2, 3, 'aa', [1,2]
a_second_tuple
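# Tuples are immutable - uncomment the next line to see the TypeError:
#a_tuple[0] = 5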
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic python datatypes
Step2: A Python shell can therefore replace your pocket calculator: the basic arithmetic operations (addition, subtraction, division, ...) are natively implemented
Step3: Attention !
Step4: Containers
Step5: Slicing
Step6: Note that L[start:stop] contains the elements with indices i such that start <= i < stop
Step7: Warning: assigning a list to a new name does not copy it; both names point to the same object in memory, so changing one of them also changes the other!!
Step8: List methods
Step9: <div class="alert alert-success">
Step10:
Step11: <div class="alert alert-success">
Step12: Concatenating lists is just the same as summing both lists
Step13: <div class="alert alert-danger">
Step14: The list itself is reversed and no output is returned, so reverted is None, which cannot be added to a list
Step15: List comprehensions
Step16: and with conditional options
Step18: Cool, this works! Let's check more about strings
Step19: The newline character is \n, and the tab character is \t.
Step20: Strings are collections like lists. Hence they can be indexed and sliced, using the same syntax and rules.
Step21: Accents and special characters can also be handled in Unicode strings (see http
Step22: A string is an immutable object and it is not possible to modify its contents. One may however create new strings from the original one.
Step23: We won't introduce all methods on strings, but let's check the namespace and apply a few of them
Step24: Formatting strings to produce the desired output can be done as follows
Step25: The format-string print options in Python 3 can infer the conversions themselves
Step26: <div class="alert alert-success">
Step27: <div class="alert alert-success">
Step28: Dictionaries
Step29: Adding an extra element
Step30: You can get the keys and values separately
Step31: <div class="alert alert alert-success">
Step32: Tuples
|
7,750
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tempfile
import zipfile
import os
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3),
activation=tf.nn.relu),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
# Train the digit classification model
model.compile(optimizer=opt,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
validation_split=0.1,
epochs=10
)
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0, frequency=100)
}
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep()
]
pruned_model = prune_low_magnitude(model, **pruning_params)
# Use smaller learning rate for fine-tuning
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
pruned_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=opt,
metrics=['accuracy'])
# Fine-tune model
pruned_model.fit(
train_images,
train_labels,
epochs=3,
validation_split=0.1,
callbacks=callbacks)
def print_model_weights_sparsity(model):
for layer in model.layers:
if isinstance(layer, tf.keras.layers.Wrapper):
weights = layer.trainable_weights
else:
weights = layer.weights
for weight in weights:
if "kernel" not in weight.name or "centroid" in weight.name:
continue
weight_size = weight.numpy().size
zero_num = np.count_nonzero(weight == 0)
print(
f"{weight.name}: {zero_num/weight_size:.2%} sparsity ",
f"({zero_num}/{weight_size})",
)
def print_model_weight_clusters(model):
for layer in model.layers:
if isinstance(layer, tf.keras.layers.Wrapper):
weights = layer.trainable_weights
else:
weights = layer.weights
for weight in weights:
# ignore auxiliary quantization weights
if "quantize_layer" in weight.name:
continue
if "kernel" in weight.name:
unique_count = len(np.unique(weight))
print(
f"{layer.name}/{weight.name}: {unique_count} clusters "
)
stripped_pruned_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
print_model_weights_sparsity(stripped_pruned_model)
import tensorflow_model_optimization as tfmot
from tensorflow_model_optimization.python.core.clustering.keras.experimental import (
cluster,
)
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization
cluster_weights = cluster.cluster_weights  # override with the experimental API, which supports preserve_sparsity
clustering_params = {
'number_of_clusters': 8,
'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS,
'preserve_sparsity': True
}
sparsity_clustered_model = cluster_weights(stripped_pruned_model, **clustering_params)
sparsity_clustered_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print('Train sparsity preserving clustering model:')
sparsity_clustered_model.fit(train_images, train_labels,epochs=3, validation_split=0.1)
stripped_clustered_model = tfmot.clustering.keras.strip_clustering(sparsity_clustered_model)
print("Model sparsity:\n")
print_model_weights_sparsity(stripped_clustered_model)
print("\nModel clusters:\n")
print_model_weight_clusters(stripped_clustered_model)
# QAT
qat_model = tfmot.quantization.keras.quantize_model(stripped_clustered_model)
qat_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print('Train qat model:')
qat_model.fit(train_images, train_labels, batch_size=128, epochs=1, validation_split=0.1)
# PCQAT
quant_aware_annotate_model = tfmot.quantization.keras.quantize_annotate_model(
stripped_clustered_model)
pcqat_model = tfmot.quantization.keras.quantize_apply(
quant_aware_annotate_model,
tfmot.experimental.combine.Default8BitClusterPreserveQuantizeScheme(preserve_sparsity=True))
pcqat_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print('Train pcqat model:')
pcqat_model.fit(train_images, train_labels, batch_size=128, epochs=1, validation_split=0.1)
print("QAT Model clusters:")
print_model_weight_clusters(qat_model)
print("\nQAT Model sparsity:")
print_model_weights_sparsity(qat_model)
print("\nPCQAT Model clusters:")
print_model_weight_clusters(pcqat_model)
print("\nPCQAT Model sparsity:")
print_model_weights_sparsity(pcqat_model)
def get_gzipped_model_size(file):
# It returns the size of the gzipped model in kilobytes.
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)/1000
# QAT model
converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
qat_tflite_model = converter.convert()
qat_model_file = 'qat_model.tflite'
# Save the model.
with open(qat_model_file, 'wb') as f:
f.write(qat_tflite_model)
# PCQAT model
converter = tf.lite.TFLiteConverter.from_keras_model(pcqat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
pcqat_tflite_model = converter.convert()
pcqat_model_file = 'pcqat_model.tflite'
# Save the model.
with open(pcqat_model_file, 'wb') as f:
f.write(pcqat_tflite_model)
print("QAT model size: ", get_gzipped_model_size(qat_model_file), ' KB')
print("PCQAT model size: ", get_gzipped_model_size(pcqat_model_file), ' KB')
def eval_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print(f"Evaluated on {i} results so far.")
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
interpreter = tf.lite.Interpreter(pcqat_model_file)
interpreter.allocate_tensors()
pcqat_test_accuracy = eval_model(interpreter)
print('Pruned, clustered and quantized TFLite test_accuracy:', pcqat_test_accuracy)
print('Baseline TF test accuracy:', baseline_model_accuracy)
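# A hedged extra check (reusing eval_model and qat_model_file from above):
# evaluate the QAT-only TFLite model the same way, for comparison with PCQAT
interpreter = tf.lite.Interpreter(qat_model_file)
interpreter.allocate_tensors()
qat_test_accuracy = eval_model(interpreter)
print('QAT TFLite test_accuracy:', qat_test_accuracy)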
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <table class="tfo-notebook-buttons" align="left">
Step2: Train a tf.keras model for MNIST to be pruned and clustered
Step3: Evaluate the baseline model and save it for later usage
Step4: Prune and fine-tune the model to 50% sparsity
Step5: Fine-tune the model, check sparsity, and evaluate the accuracy against baseline
Step6: Define helper functions to calculate and print the sparsity and clusters of the model.
Step7: Let's strip the pruning wrapper first, then check that the model kernels were correctly pruned.
Step8: Apply sparsity preserving clustering and check its effect on model sparsity in both cases
Step9: Strip the clustering wrapper first, then check that the model is correctly pruned and clustered.
Step10: Apply QAT and PCQAT and check effect on model clusters and sparsity
Step11: See compression benefits of PCQAT model
Step12: Observe that applying sparsity, clustering and PCQAT to a model yields significant compression benefits.
Step13: See the persistence of accuracy from TF to TFLite
Step14: Evaluate the model, which has been pruned, clustered and quantized, and then see that the accuracy from TensorFlow persists in the TFLite backend.
|
7,751
|
<ASSISTANT_TASK:>
Python Code:
import os
from tinydb import TinyDB
import pandas as pd
import time
from DashPykpi.kpistats import KpiStats, GitURLs, GraphKPIs
# or... use a list of URLS fetched from the GitURLs class
url_fetch = GitURLs()
urls = url_fetch.urls
print("Retrieved {0} urls.".format(len(urls)))
# These projects don't exist anymore and break get_repo_stats
urls.remove('https://github.com/UCL/ucl')
urls.remove('https://github.com/UCL-RITS/ucl-rits')
urls
# needed = [
# "https://github.com/UCL-RITS",
# "https://github.com/astro-informatics",
# "https://github.com/bempp",
# "https://github.com/DCPROGS",
# "https://github.com/OPTIMET",
# "https://github.com/UCL-Biochemical-Engineering",
# "https://github.com/UCL-CSS",
# "https://github.com/UCLProgrammingHub",
# "https://github.com/UCL-HIC",
# "https://github.com/EIT-team",
# "https://github.com/Astrophysics-UCL",
# "https://github.com/Bahler-Lab",
# "https://github.com/CCPPETMR",
# "https://github.com/Euclid-OULE3",
# "https://github.com/FieldingChemistryUCL",
# "https://github.com/PancreaticProject",
# "https://github.com/ShapsUcl",
# "https://github.com/msslsolar",
# "https://github.com/mssl-plasma-group",
# "https://github.com/UCL-ShippingGroup"]
# need_accounts = [n.split('/')[3] for n in needed]
# got_accounts = [url.split('/')[-2] for url in urls]
# from github3 import GitHub
# fn = open("secret_key")
# g = GitHub(token=fn.read().split()[0])
# for thing in g.iter_user_repos(login='benlaken'):
# print(thing)
# for need in need_accounts:
# if not need in got_accounts:
# print("Missing {0} repos ".format(need, need in got_accounts))
# print(" Attempting to access - will work if public")
# for thing in g.iter_user_repos(login=need):
# print(" {0}".format(thing))
#for url in urls:
# user_str, repo_str = url.split('/')[-2:]
# print(url, user_str, repo_str)
# Run the code to generate a dababase if needed...
test = KpiStats(urls=urls)
test.work(verbose=False, debug=False, status=True)
db = TinyDB('tinydb_for_KPI.json')
df = pd.DataFrame(db.all())
df.head()
#for n in df['repo_owner']:
# print(n)
running = 0
num_repos = 0
tmp_hold = {}
for n, weekly in enumerate(df['weekly_commits']):
if sum(weekly) > 1:
tmp_hold[df['repo_name'][n]] = weekly
running += sum(weekly)
num_repos += 1
print("{0:3,} commits, in {1} active repos (out of {2} total repos), during past 52 weeks".format(
running, num_repos, len(df)))
import numpy as np
tmp = []
for n, weekly in enumerate(df['weekly_commits']):
if sum(weekly) > 1:
tmp.append(weekly)
tmp = np.array(tmp)
tmp = tmp.sum(axis=0)
all_weekly_commits = {"All repos": tmp}
width=4
data = tmp
result = data[:(data.size // width) * width].reshape(-1, width).sum(axis=1)
result
print(52/4)
all_weekly_commits
#from bokeh.charts import Area, show, output_notebook, defaults
#import numpy as np
from bokeh.charts import show, output_notebook
output_notebook()
bk = GraphKPIs()
show(bk.weekly_activity(per_repo=False, verbose=True, bin=4))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create DB
Step2: Plotting section
Step3: Stacked area chart
|
7,752
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from io import StringIO
commits_raw = pd.read_csv(StringIO(log),
sep="#",
header=None,
names=['file_stats','sha', 'date', 'author'])
commits_raw.head()
commit_metadata = commits_raw[['sha', 'date', 'author']].fillna(method='ffill')
commit_metadata.head(5)
file_info = commits_raw['file_stats'].dropna().str.split("\t", expand=True)
file_info.columns = ['additions', "deletions", "filename"]
file_info.head()
file_info['additions'] = pd.to_numeric(file_info['additions'], errors='coerce')
file_info['deletions'] = pd.to_numeric(file_info['deletions'], errors='coerce')
file_info.dtypes
commits = commit_metadata.join(file_info, how='right')
commits = commits.dropna()
commits.head()
commits.groupby('author').sum()[['additions']].sort_values(by='additions', ascending=False)
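# A hedged extra sketch: net line change per author (additions minus deletions)
author_stats = commits.groupby('author')[['additions', 'deletions']].sum()
author_stats['net'] = author_stats['additions'] - author_stats['deletions']
author_stats.sort_values(by='net', ascending=False).head()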
%matplotlib inline
timed_commits = commits.set_index(pd.DatetimeIndex(commits['date']))[['additions', 'deletions']].resample('1D').sum()
(timed_commits['additions'] - timed_commits['deletions']).cumsum().fillna(method='ffill').plot()
%matplotlib inline
commits['author'].value_counts().plot(kind='pie', figsize=(10,10))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Wrangling
Step2: With this, we can focus on extracting the information of a commit info row. The next command may look a little frightening, but don't worry: we'll go through it step by step.
Step3: OK, this part is ready, let's have a look at the file statistics!
|
7,753
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow_datasets as tfds
import tensorflow as tf
mirrored_strategy = tf.distribute.MirroredStrategy()
def get_data():
datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * mirrored_strategy.num_replicas_in_sync
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
return train_dataset, eval_dataset
def get_model():
with mirrored_strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=[tf.metrics.SparseCategoricalAccuracy()])
return model
model = get_model()
train_dataset, eval_dataset = get_data()
model.fit(train_dataset, epochs=2)
keras_model_path = "/tmp/keras_save"
model.save(keras_model_path)
restored_keras_model = tf.keras.models.load_model(keras_model_path)
restored_keras_model.fit(train_dataset, epochs=2)
another_strategy = tf.distribute.OneDeviceStrategy("/cpu:0")
with another_strategy.scope():
restored_keras_model_ds = tf.keras.models.load_model(keras_model_path)
restored_keras_model_ds.fit(train_dataset, epochs=2)
model = get_model() # get a fresh model
saved_model_path = "/tmp/tf_save"
tf.saved_model.save(model, saved_model_path)
DEFAULT_FUNCTION_KEY = "serving_default"
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
predict_dataset = eval_dataset.map(lambda image, label: image)
for batch in predict_dataset.take(1):
print(inference_func(batch))
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
dist_predict_dataset = another_strategy.experimental_distribute_dataset(
predict_dataset)
# Calling the function in a distributed manner
for batch in dist_predict_dataset:
another_strategy.run(inference_func, args=(batch,))
import tensorflow_hub as hub
def build_model(loaded):
x = tf.keras.layers.Input(shape=(28, 28, 1), name='input_x')
# Wrap what's loaded to a KerasLayer
keras_layer = hub.KerasLayer(loaded, trainable=True)(x)
model = tf.keras.Model(x, keras_layer)
return model
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
model = build_model(loaded)
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=[tf.metrics.SparseCategoricalAccuracy()])
model.fit(train_dataset, epochs=2)
model = get_model()
# Saving the model using Keras's save() API
model.save(keras_model_path)
another_strategy = tf.distribute.MirroredStrategy()
# Loading the model using lower level API
with another_strategy.scope():
loaded = tf.saved_model.load(keras_model_path)
model = get_model()
# Saving the model to a path on localhost.
saved_model_path = "/tmp/tf_save"
save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
model.save(saved_model_path, options=save_options)
# Loading the model from a path on localhost.
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
load_options = tf.saved_model.LoadOptions(experimental_io_device='/job:localhost')
loaded = tf.keras.models.load_model(saved_model_path, options=load_options)
class SubclassedModel(tf.keras.Model):
output_name = 'output_layer'
def __init__(self):
super(SubclassedModel, self).__init__()
self._dense_layer = tf.keras.layers.Dense(
5, dtype=tf.dtypes.float32, name=self.output_name)
def call(self, inputs):
return self._dense_layer(inputs)
my_model = SubclassedModel()
# my_model.save(keras_model_path) # ERROR!
tf.saved_model.save(my_model, saved_model_path)
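# A hedged follow-up sketch: the SavedModel written above can still be loaded back
# with the lower-level API, which returns a generic trackable object rather than
# a Keras model
restored = tf.saved_model.load(saved_model_path)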
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Saving and loading a model with a distribution strategy
Step2: Prepare the data and model using tf.distribute.Strategy:
Step3: Train the model:
Step4: Save and load the model
Step5: Restore the model without tf.distribute.Strategy:
Step6: After restoring the model, you can continue training on it, even without calling compile() again, since it was compiled before saving. The model is saved in TensorFlow's standard SavedModel proto format. For more information, see the saved_model format guide.
Step7: As you can see, the model loads as expected with tf.distribute.Strategy. The strategy used here does not have to be the same as the one used before saving.
Step8: Loading can be done with tf.saved_model.load(). However, since this API is lower level (and therefore has a wider range of use cases), it does not return a Keras model. Instead, it returns an object containing functions that can be used for inference. For example:
Step9: The loaded object may contain multiple functions, each associated with a key. "serving_default" is the default key for the inference function of a saved Keras model. To run inference with this function, execute the following code:
Step10: You can also load and run inference in a distributed manner:
Step11: Calling the restored function is just a forward pass (prediction) on the saved model. What if you want to continue training the loaded function, or embed it into a larger model? A common practice is to wrap this loaded object into a Keras layer. Fortunately, TF Hub provides hub.KerasLayer for this purpose, as shown below:
Step12: As you can see, hub.KerasLayer wraps the result loaded back from tf.saved_model.load() into a Keras layer that can be used to build other models. This is very useful for transfer learning.
Step13: Saving/loading from a local device
Step14: Caveats
|
7,754
|
<ASSISTANT_TASK:>
Python Code:
from functools import reduce
find_my_sum = [5, 3, 19, 48, 2, 31, 29]
def sum_func(x, y):
return x + y
total = reduce(sum_func, find_my_sum)
print(total)
word_lst = ["hello", "there", "martha", "how", "are", "you", "doing"]
sentence = reduce(lambda x,y: x + " " + y, word_lst)
nums = [5, 5, 39, 29, 48, 98, 23, 48]
max_num = reduce(lambda num1,num2: num1 if num1 > num2 else num2, nums)
print(max_num)
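# A hedged extra sketch: reduce also accepts an optional initializer as a third
# argument, which seeds the accumulator (and handles empty sequences gracefully)
total_with_seed = reduce(lambda acc, x: acc + x, nums, 0)
print(total_with_seed)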
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's look at how it worked. First, we made a list and defined the function. After we supplied the arguments, the reduce function worked its way across the list as described above.
Step2: Here, we used a lambda expression to unpack two variables (both strings), and then continued to build a sentence as reduce repeatedly applies the function at hand.
|
7,755
|
<ASSISTANT_TASK:>
Python Code:
# The keyword categories to help parse website text:
mission = ['mission',' vision ', 'vision:', 'mission:', 'our purpose', 'our ideals', 'ideals:', 'our cause', 'cause:', 'goals', 'objective']
curriculum = ['curriculum', 'curricular', 'program', 'method', 'pedagogy', 'pedagogical', 'approach', 'model', 'system', 'structure']
philosophy = ['philosophy', 'philosophical', 'beliefs', 'believe', 'principles', 'creed', 'credo', 'value', 'moral']
history = ['history', 'our story', 'the story', 'school story', 'background', 'founding', 'founded', 'established', 'establishment', 'our school began', 'we began', 'doors opened', 'school opened']
general = ['about us', 'our school', 'who we are', 'overview', 'general information', 'our identity', 'profile', 'highlights']
#!/usr/bin/env python
# -*- coding: UTF-8
# IMPORTING KEY PACKAGES
import csv # for reading in CSVs and turning them into dictionaries
import re # for regular expressions
import os # for navigating file trees
import nltk # for natural language processing tools
import pandas # for working with dataframes
import numpy as np # for working with numbers
# FOR CLEANING, TOKENIZING, AND STEMMING THE TEXT
from nltk import word_tokenize, sent_tokenize # widely used text tokenizer
from nltk.stem.porter import PorterStemmer # an approximate method of stemming words (it just cuts off the ends)
from nltk.corpus import stopwords # for one method of eliminating stop words, to clean the text
stopenglish = list(stopwords.words("english")) # assign the string of english stopwords to a variable and turn it into a list
import string # for one method of eliminating punctuation
punctuations = list(string.punctuation) # assign the string of common punctuation symbols to a variable and turn it into a list
# FOR ANALYZING WITH THE TEXT
from sklearn.feature_extraction.text import CountVectorizer # to work with document-term matrices, especially
countvec = CountVectorizer(tokenizer=nltk.word_tokenize)
from sklearn.feature_extraction.text import TfidfVectorizer # for creating TF-IDFs
tfidfvec = TfidfVectorizer()
from sklearn.decomposition import LatentDirichletAllocation # for topic modeling
import gensim # for word embedding models
from scipy.spatial.distance import cosine # for cosine similarity
from sklearn.metrics import pairwise # for pairwise similarity
from sklearn.manifold import MDS, TSNE # for multi-dimensional scaling
# FOR VISUALIZATIONS
import matplotlib
import matplotlib.pyplot as plt
# Visualization parameters
%pylab inline
%matplotlib inline
matplotlib.style.use('ggplot')
sample = [] # make empty list
with open('../data_URAP_etc/mission_data_prelim.csv', 'r', encoding = 'Latin-1')\
as csvfile: # open file
reader = csv.DictReader(csvfile) # create a reader
for row in reader: # loop through rows
sample.append(row) # append each row to the list
sample[0]
# Take a look at the most important contents and the variables list
# in our sample (a list of dictionaries)--let's look at just the first entry
print(sample[1]["SCHNAM"], "\n", sample[1]["URL"], "\n", sample[1]["WEBTEXT"], "\n")
print(sample[1].keys()) # look at all the variables!
# Read the data in as a pandas dataframe
df = pandas.read_csv("../data_URAP_etc/mission_data_prelim.csv", encoding = 'Latin-1')
df = df.dropna(subset=["WEBTEXT"]) # drop any schools with no webtext that might have snuck in (none currently)
# Add additional variables for analysis:
# PCTETH = percentage of enrolled students belonging to a racial minority
# this includes American Indian, Asian, Hispanic, Black, Hawaiian, or Pacific Islander
df["PCTETH"] = (df["AM"] + df["ASIAN"] + df["HISP"] + df["BLACK"] + df["PACIFIC"]) / df["MEMBER"]
df["STR"] = df["MEMBER"] / df["FTE"] # Student/teacher ratio
df["PCTFRPL"] = df["TOTFRL"] / df["MEMBER"] # Percent of students receiving FRPL
# Another interesting variable:
# TYPE = type of school, where 1 = regular, 2 = special ed, 3 = vocational, 4 = other/alternative, 5 = reportable program
## Print the webtext from the first school in the dataframe
print(df.iloc[0]["WEBTEXT"])
print(df.describe()) # get descriptive statistics for all numerical columns
print()
print(df['ULOCAL'].value_counts()) # frequency counts for categorical data
print()
print(df['LEVEL'].value_counts()) # treat grade range served as categorical
# Codes for level/ grade range served: 3 = High school, 2 = Middle school, 1 = Elementary, 4 = Other)
print()
print(df['LSTATE'].mode()) # find the most common state represented in these data
print(df['ULOCAL'].mode()) # find the most urbanicity represented in these data
# print(df['FTE'].mean()) # What's the average number of full-time employees by school?
# print(df['STR'].mean()) # And the average student-teacher ratio?
# here's the number of schools from each state, in a graph:
grouped_state = df.groupby('LSTATE')
grouped_state['WEBTEXT'].count().sort_values(ascending=True).plot(kind = 'bar', title='Schools mostly in CA, TX, AZ, FL--similar to national trend')
plt.show()
# and here's the number of schools in each urban category, in a graph:
grouped_urban = df.groupby('ULOCAL')
grouped_urban['WEBTEXT'].count().sort_values(ascending=True).plot(kind = 'bar', title='Most schools are in large cities or large suburbs')
plt.show()
# Now we clean the webtext by rendering each word lower-case then removing punctuation.
df['webtext_lc'] = df['WEBTEXT'].str.lower() # make the webtext lower case
df['webtokens'] = df['webtext_lc'].apply(nltk.word_tokenize) # tokenize the lower-case webtext by word
df['webtokens_nopunct'] = df['webtokens'].apply(lambda x: [word for word in x if word not in list(string.punctuation)]) # remove punctuation
print(df.iloc[0]["webtokens"]) # the tokenized text without punctuation
# Now we remove stopwords and stem. This will improve the results
df['webtokens_clean'] = df['webtokens_nopunct'].apply(lambda x: [word for word in x if word not in list(stopenglish)]) # remove stopwords
df['webtokens_stemmed'] = df['webtokens_clean'].apply(lambda x: [PorterStemmer().stem(word) for word in x])
# Some analyses require a string version of the webtext without punctuation or numbers.
# To get this, we join together the cleaned and stemmed tokens created above, and then remove numbers and punctuation:
df['webtext_stemmed'] = df['webtokens_stemmed'].apply(lambda x: ' '.join(char for char in x))
df['webtext_stemmed'] = df['webtext_stemmed'].apply(lambda x: ''.join(char for char in x if char not in punctuations))
df['webtext_stemmed'] = df['webtext_stemmed'].apply(lambda x: ''.join(char for char in x if not char.isdigit()))
df['webtext_stemmed'][0]
# Some analyses require tokenized sentences. I'll do this with the list of dictionaries.
# I'll use cleaned, tokenized sentences (with stopwords) to create both a dictionary variable and a separate list for word2vec
words_by_sentence = [] # initialize the list of tokenized sentences as an empty list
for school in sample:
school["sent_toksclean"] = []
school["sent_tokens"] = [word_tokenize(sentence) for sentence in sent_tokenize(school["WEBTEXT"])]
for sent in school["sent_tokens"]:
school["sent_toksclean"].append([PorterStemmer().stem(word.lower()) for word in sent if (word not in punctuations)]) # for each word: stem, lower-case, and remove punctuations
words_by_sentence.append([PorterStemmer().stem(word.lower()) for word in sent if (word not in punctuations)])
words_by_sentence[:2]
# We can also count document lengths. I'll mostly use the version with punctuation removed but including stopwords,
# because stopwords are also part of these schools' public image/ self-presentation to potential parents, regulators, etc.
df['webstem_count'] = df['webtokens_stemmed'].apply(len) # find word count without stopwords or punctuation
df['webpunct_count'] = df['webtokens_nopunct'].apply(len) # find length with stopwords still in there (but no punctuation)
df['webclean_count'] = df['webtokens_clean'].apply(len) # find word count without stopwords or punctuation
# For which urban status are website self-description the longest?
print(grouped_urban['webpunct_count'].mean().sort_values(ascending=False))
# here's the mean website self-description word count for schools grouped by urban proximity, in a graph:
grouped_urban['webpunct_count'].mean().sort_values(ascending=True).plot(kind = 'bar', title='Schools in mid-sized cities and suburbs have longer self-descriptions than in fringe areas', yerr = grouped_urban["webpunct_count"].std())
plt.show()
# Look at 'FTE' (proxy for # administrators) clustered by urban proximity and whether it explains this
grouped_urban['FTE'].mean().sort_values(ascending=True).plot(kind = 'bar', title='Mean full-time employees (FTE) by urban proximity', yerr = grouped_urban["FTE"].std())
plt.show()
# Now let's calculate the type-token ratio (TTR) for each school, which compares
# the number of types (unique words used) with the number of words (including repetitions of words).
df['numtypes'] = df['webtokens_nopunct'].apply(lambda x: len(set(x))) # this is the number of unique words per site
df['TTR'] = df['numtypes'] / df['webpunct_count'] # calculate TTR
# here's the mean TTR for schools grouped by urban category:
grouped_urban = df.groupby('ULOCAL')
grouped_urban['TTR'].mean().sort_values(ascending=True).plot(kind = 'bar', title='Type-token ratio (lexical diversity) of self-descriptions by urban proximity', yerr = grouped_urban["TTR"].std())
plt.show()
# First, aggregate all the cleaned webtext:
webtext_all = []
df['webtokens_clean'].apply(lambda x: [webtext_all.append(word) for word in x])
webtext_all[:20]
# Now apply the nltk function FreqDist to count the number of times each token occurs.
word_frequency = nltk.FreqDist(webtext_all)
#print out the 50 most frequent words using the function most_common
print(word_frequency.most_common(50))
sklearn_dtm = countvec.fit_transform(df['webtext_stemmed'])
print(sklearn_dtm)
# What are some of the words in the DTM?
print(countvec.get_feature_names()[:10])
# now we can create the dtm, but with cells weigthed by the tf-idf score.
dtm_tfidf_df = pandas.DataFrame(tfidfvec.fit_transform(df.webtext_stemmed).toarray(), columns=tfidfvec.get_feature_names(), index = df.index)
dtm_tfidf_df[:20] # let's take a look!
# What are the 20 words with the highest TF-IDF scores?
print(dtm_tfidf_df.max().sort_values(ascending=False)[:20])
# train the model, keeping only words that appear at least twice (min_count=2)
model = gensim.models.Word2Vec(words_by_sentence, size=100, window=5, \
min_count=2, sg=1, alpha=0.025, iter=5, batch_words=10000, workers=1)
# dictionary of words in model (may not work for old gensim)
# print(len(model.vocab))
# model.vocab
# Find cosine distance between two given word vectors
print(model.similarity('college-prep','align')) # these two are close to essentialism
print(model.similarity('emot', 'curios')) # these two are close to progressivism
# create some rough dictionaries for our contrasting educational philosophies
essentialism = ['excel', 'perform', 'prep', 'rigor', 'standard', 'align', 'comprehens', 'content', \
'data-driven', 'market', 'research', 'research-bas', 'program', 'standards-bas']
progressivism = ['inquir', 'curios', 'project', 'teamwork', 'social', 'emot', 'reflect', 'creat',\
'ethic', 'independ', 'discov', 'deep', 'problem-solv', 'natur']
# Let's look at two vectors that demonstrate the binary between these philosophies: align and emot
print(model.most_similar('align')) # words core to essentialism
print()
print(model.most_similar('emot')) # words core to progressivism
# Let's work with the binary between progressivism vs. essentialism
# first let's find the 50 words closest to each philosophy using the two 14-term dictionaries defined above
prog_words = model.most_similar(progressivism, topn=50)
prog_words = [word for word, similarity in prog_words]
for word in progressivism:
prog_words.append(word)
print(prog_words[:20])
ess_words = model.most_similar(essentialism, topn=50) # now let's get the 50 most similar words for our essentialist dictionary
ess_words = [word for word, similarity in ess_words]
for word in essentialism:
ess_words.append(word)
print(ess_words[:20])
# construct an combined dictionary
phil_words = ess_words + prog_words
# preparing for visualizing this binary with word2vec
x = [model.similarity('emot', word) for word in phil_words]
y = [model.similarity('align', word) for word in phil_words]
# here's a visual of the progressivism/essentialism binary:
# top-left half is essentialism, bottom-right half is progressivism
_, ax = plt.subplots(figsize=(20,20))
ax.scatter(x, y, alpha=1, color='b')
for i in range(len(phil_words)):
ax.annotate(phil_words[i], (x[i], y[i]))
ax.set_xlim(.635, 1.005)
ax.set_ylim(.635, 1.005)
plt.plot([0, 1], [0, 1], linestyle='--');
####Adopted From:
#Author: Olivier Grisel <olivier.grisel@ensta.org>
# Lars Buitinck
# Chyi-Kwei Yau <chyikwei.yau@gmail.com>
# License: BSD 3 clause
# Initialize the variables needed for the topic models
n_samples = 2000
n_topics = 3
n_top_words = 50
# Create helper function that prints out the top words for each topic in a pretty way
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("\nTopic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
# Vectorize our text using CountVectorizer
print("Extracting tf features for LDA...")
tf_vectorizer = CountVectorizer(max_df=70, min_df=4,
max_features=None,
stop_words=stopenglish, lowercase=1
)
tf = tf_vectorizer.fit_transform(df.WEBTEXT)
print("Fitting LDA models with tf features, "
"n_samples=%d and n_topics=%d..."
% (n_samples, n_topics))
# define the lda function, with desired options
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=20,
learning_method='online',
learning_offset=80.,
total_samples=n_samples,
random_state=0)
#fit the model
lda.fit(tf)
# print the top words per topic, using the function defined above.
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, n_top_words)
# Preparation for looking at distribution of topics over schools
topic_dist = lda.transform(tf) # transpose topic distribution
topic_dist_df = pandas.DataFrame(topic_dist) # turn into a df
df_w_topics = topic_dist_df.join(df) # merge with charter MS dataframe
df_w_topics[:20] # check out the merged df with topics!
topic_columns = range(0,n_topics) # Set numerical range of topic columns for use in analyses, using n_topics from above
# Which schools are weighted highest for topic 0? How do they trend with regard to urban proximity and student class?
print(df_w_topics[['LSTATE', 'ULOCAL', 'PCTETH', 'PCTFRPL', 0, 1, 2]].sort_values(by=[0], ascending=False))
# Preparation for comparing total number of words aligned with each topic
# To weight each topic by its prevalenced in the corpus, multiply each topic by the word count from above
col_list = []
for num in topic_columns:
col = "%d_wc" % num
col_list.append(col)
df_w_topics[col] = df_w_topics[num] * df_w_topics['webpunct_count']
df_w_topics[:20]
# Now we can see the prevalence of each topic over words for each urban category and state
grouped_urban = df_w_topics.groupby('ULOCAL')
for e in col_list:
print(e)
print(grouped_urban[e].sum()/grouped_urban['webpunct_count'].sum())
grouped_state = df_w_topics.groupby('LSTATE')
for e in col_list:
print(e)
print(grouped_state[e].sum()/grouped_state['webpunct_count'].sum())
# Here's the distribution of urban proximity over the three topics:
fig1 = plt.figure()
chrt = 0
for num in topic_columns:
chrt += 1
ax = fig1.add_subplot(2,3, chrt)
grouped_urban[num].mean().plot(kind = 'bar', yerr = grouped_urban[num].std(), ylim=0, ax=ax, title=num)
fig1.tight_layout()
plt.show()
# Here's the distribution of each topic over words, for each urban category:
fig2 = plt.figure()
chrt = 0
for e in col_list:
chrt += 1
ax2 = fig2.add_subplot(2,3, chrt)
(grouped_urban[e].sum()/grouped_urban['webpunct_count'].sum()).plot(kind = 'bar', ylim=0, ax=ax2, title=e)
fig2.tight_layout()
plt.show()
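# A hedged extra sketch: tag each school with its single highest-weighted topic
# (topic_columns holds the integer topic column labels defined above)
df_w_topics['top_topic'] = df_w_topics[list(topic_columns)].idxmax(axis=1)
df_w_topics['top_topic'].value_counts()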
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initializing Python
Step2: Reading in preliminary data
Step3: Descriptive statistics
Step4: What these numbers say about the charter schools in the sample
Step5: Counting document lengths
Step6: (Excessively) Frequent words
Step7: ### These are prolific, ritual, empty words and will be excluded from topic models!
Step8: Like the frequent words above, these highly "unique" words are empty of meaning and will be excluded from topic models!
Step9: Binary of essentialist (top-left) and progressivist (bottom-right) word vectors
Step10: These topics seem to mean
|
7,756
|
<ASSISTANT_TASK:>
Python Code:
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.legacy.datasets import Multi30k
from torchtext.legacy.data import Field, BucketIterator
import spacy
import numpy as np
import random
import math
import time
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
spacy_de = spacy.load('de_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')
def tokenize_de(text):
    """Tokenizes German text from a string into a list of strings (tokens) and reverses it"""
return [tok.text for tok in spacy_de.tokenizer(text)][::-1]
def tokenize_en(text):
    """Tokenizes English text from a string into a list of strings (tokens)"""
return [tok.text for tok in spacy_en.tokenizer(text)]
SRC = Field(tokenize = tokenize_de,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
TRG = Field(tokenize = tokenize_en,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
fields = (SRC, TRG))
print(f"Number of training examples: {len(train_data.examples)}")
print(f"Number of validation examples: {len(valid_data.examples)}")
print(f"Number of testing examples: {len(test_data.examples)}")
print(vars(train_data.examples[0]))
SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)
print(f"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}")
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
#src = [src len, batch size]
embedded = self.dropout(self.embedding(src))
#embedded = [src len, batch size, emb dim]
outputs, (hidden, cell) = self.rnn(embedded)
#outputs = [src len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#outputs are always from the top hidden layer
return hidden, cell
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.output_dim = output_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.fc_out = nn.Linear(hid_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, cell):
#input = [batch size]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#n directions in the decoder will both always be 1, therefore:
#hidden = [n layers, batch size, hid dim]
#context = [n layers, batch size, hid dim]
input = input.unsqueeze(0)
#input = [1, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = [1, batch size, emb dim]
output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
#output = [seq len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#seq len and n directions will always be 1 in the decoder, therefore:
#output = [1, batch size, hid dim]
#hidden = [n layers, batch size, hid dim]
#cell = [n layers, batch size, hid dim]
prediction = self.fc_out(output.squeeze(0))
#prediction = [batch size, output dim]
return prediction, hidden, cell
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
assert encoder.n_layers == decoder.n_layers, \
"Encoder and decoder must have equal number of layers!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
#src = [src len, batch size]
#trg = [trg len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
batch_size = trg.shape[1]
trg_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
#last hidden state of the encoder is used as the initial hidden state of the decoder
hidden, cell = self.encoder(src)
#first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, trg_len):
#insert input token embedding, previous hidden and previous cell states
#receive output tensor (predictions) and new hidden and cell states
output, hidden, cell = self.decoder(input, hidden, cell)
#place predictions in a tensor holding predictions for each token
outputs[t] = output
#decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
#get the highest predicted token from our predictions
top1 = output.argmax(1)
#if teacher forcing, use actual next token as next input
#if not, use predicted token
input = trg[t] if teacher_force else top1
return outputs
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
N_LAYERS = 2
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)
model = Seq2Seq(enc, dec, device).to(device)
def init_weights(m):
for name, param in m.named_parameters():
nn.init.uniform_(param.data, -0.08, 0.08)
model.apply(init_weights)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
optimizer = optim.Adam(model.parameters())
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
optimizer.zero_grad()
output = model(src, trg)
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut1-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
model.load_state_dict(torch.load('tut1-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
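# A hedged extra check: the same evaluate() helper can confirm the restored
# best model's validation loss matches the best epoch reported above
best_valid = evaluate(model, valid_iterator, criterion)
print(f'| Best Val. Loss: {best_valid:.3f} | Best Val. PPL: {math.exp(best_valid):7.3f} |')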
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll set the random seeds for deterministic results.
Step2: Next, we'll create the tokenizers. A tokenizer is used to turn a string containing a sentence into a list of individual tokens that make up that string, e.g. "good morning!" becomes ["good", "morning", "!"]. We'll start talking about the sentences being a sequence of tokens from now, instead of saying they're a sequence of words. What's the difference? Well, "good" and "morning" are both words and tokens, but "!" is a token, not a word.
Step5: Next, we create the tokenizer functions. These can be passed to torchtext and will take in the sentence as a string and return the sentence as a list of tokens.
Step6: torchtext's Fields handle how data should be processed. All of the possible arguments are detailed here.
Step7: Next, we download and load the train, validation and test data.
Step8: We can double check that we've loaded the right number of examples
Step9: We can also print out an example, making sure the source sentence is reversed
Step10: The period is at the beginning of the German (src) sentence, so it looks like the sentence has been correctly reversed.
Step11: The final step of preparing the data is to create the iterators. These can be iterated on to return a batch of data which will have a src attribute (the PyTorch tensors containing a batch of numericalized source sentences) and a trg attribute (the PyTorch tensors containing a batch of numericalized target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of readable tokens to a sequence of corresponding indexes, using the vocabulary.
Step12: Building the Seq2Seq Model
Step13: Decoder
Step14: Seq2Seq
Step15: Training the Seq2Seq Model
Step16: Next up is initializing the weights of our model. In the paper they state they initialize all weights from a uniform distribution between -0.08 and +0.08, i.e. $\mathcal{U}(-0.08, 0.08)$.
Step17: We also define a function that will calculate the number of trainable parameters in the model.
Step18: We define our optimizer, which we use to update our parameters in the training loop. Check out this post for information about different optimizers. Here, we'll use Adam.
Step19: Next, we define our loss function. The CrossEntropyLoss function calculates both the log softmax as well as the negative log-likelihood of our predictions.
Step20: Next, we'll define our training loop.
Step21: Our evaluation loop is similar to our training loop, however as we aren't updating any parameters we don't need to pass an optimizer or a clip value.
Step22: Next, we'll create a function that we'll use to tell us how long an epoch takes.
Step23: We can finally start training our model!
Step24: We'll load the parameters (state_dict) that gave our model the best validation loss and run it the model on the test set.
|
7,757
|
<ASSISTANT_TASK:>
Python Code:
data = [i for i in range(10000)]
data[:10]
def binary_search(data, item):
    """takes in a sorted list of items, and item to find, and returns item number if item found, -1 if not found"""
low = 0
high = len(data) - 1
while low <= high:
mid = (low + high) // 2
if data[mid] == item:
return mid
elif data[mid] < item:
low = mid + 1
else:
high = mid - 1
return -1
binary_search(data, 10)
def binary_search_recursive(data, item):
    """takes in a sorted list and item to find
    returns index number if found, -1 if not"""
def recur(low, high):
mid = (low + high) // 2
if low > high:
return -1
elif item < data[mid]:
return recur(low, mid-1)
elif item > data[mid]:
return recur(mid+1, high)
else:
#print('found', item, 'in position', mid)
return mid
return recur(0, len(data) - 1)
binary_search_recursive(data, 2)
# non-recursive
%timeit(binary_search(data, 2))
# recursive
%timeit(binary_search_recursive(data, 2))
for i in range(0,100):
assert i == binary_search(data, i)
assert i == binary_search_recursive(data,i)
binary_search(data,-1)
binary_search_recursive(data,-1)
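# A hedged comparison sketch: the standard library's bisect module performs the
# same search in optimised C, so it is usually faster than both versions above
from bisect import bisect_left
def binary_search_bisect(data, item):
    i = bisect_left(data, item)
    return i if i < len(data) and data[i] == item else -1
assert binary_search_bisect(data, 2) == 2
assert binary_search_bisect(data, -1) == -1
%timeit(binary_search_bisect(data, 2))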
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Straightforward binary search
Step4: Recursive attempt at binary search
Step5: Comparing the two
Step6: So a straightforward algo is faster.
|
7,758
|
<ASSISTANT_TASK:>
Python Code:
import exatomic
exatomic.__version__
exatomic.Universe?
uni = exatomic.Universe()
uni
atom = exatomic.Atom.from_dict({'x': [0.0, 0.0], 'y': [0.0, 0.0], 'z': [-0.34, 0.34],
'symbol': ["H", "H"], 'frame': [0, 0]})
uni = exatomic.Universe(atom=atom)
uni.atom
uni.frame # This was computed on-the-fly as we didn't instantiate it above
exatomic.UniverseWidget(uni)
uni.atom_two
uni.molecule
uni.compute_molecule_com()
uni.molecule
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting help in the Jupyter notebook is easy: just put a "?" after a class or function.
Step2: The Universe object contains all of the information about a simulation, nuclear coordinates, orbitals, etc.
Step3: Empty universes can be useful...but it is more interesting with data
Step4: The frame column is how we track state (e.g. time, theory, etc.)
Step5: Visualization of this simple universe can be accomplished directly in the notebook
Step6: In building the visualization, bonds were automatically computed
Step7: Note again that distances are in atomic units
Step8: Center of masses can also be computed
|
7,759
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import math
import matplotlib.pyplot as plt
import scipy
from scipy import optimize, integrate
import pints
# Defining variables for use later
k = 1.5 # from equation 3
y0 = 1
times = np.linspace(0,10,50)
# A one-compartment PK model is basically an ODE for an exponential decay curve
def onecomp(y, t, k):
    """A one-compartment PK model, aka simple exponential decay

    Parameters
    ----------
    y: float
        y at time t
    t: float
        time
    k: float
        model parameter
    """
dydt = -k * y
return dydt
# You can solve ODEs with the Forward Euler method
def ForwardEuler(func, k, y0, times):
    """Numerically calculate an ODE (forward Euler technique)

    Parameters
    ----------
    func: function
        function giving the derivative
    k: float
        parameter the function requires as input
    y0: float
        y at t=0
    times: list
        array of times at which to calculate y
    """
y = y0
y_solution = [y0]
h = times[2] - times[1]
for n in times[:-1]:
dy = func(y, n, k)
y = y + h*dy
y_solution.append(y)
return y_solution
# You can also solve ODEs with scipy.integrate.odeint
def simulate(func, parameters, y0, times):
    """Numerically calculate an ODE

    Parameters
    ----------
    func: function
        function giving the derivative
    parameters: list
        parameters the function requires as input
    y0: float
        y at t=0
    times: list
        array of times at which to calculate y
    """
l = scipy.integrate.odeint(func, y0, times, (parameters,)) # returns a list of lists
flatlist = [item for sublist in l for item in sublist] # to single list of 'y's
return flatlist
# This is what the system output looks like. We don't actually know these values
actual_values_euler = ForwardEuler(onecomp, k, y0, times)
actual_values = simulate(onecomp, k, y0, times)
# Plot the model results
plt.figure()
plt.xlabel('Time')
plt.ylabel('Concentration, y')
plt.plot(times, actual_values, '--', label='scipy')
plt.plot(times, actual_values_euler, label='forward Euler')
plt.legend()
plt.show()
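# A hedged sanity check: quantify how far the forward-Euler solution drifts
# from the scipy solution (root-mean-square difference over all time points)
rms_diff = np.sqrt(np.mean((np.array(actual_values) - np.array(actual_values_euler))**2))
print('RMS difference between solvers:', rms_diff)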
# Make noisy data that we're inferring from. noisy_data is known to us.
noise = np.random.normal(0, 0.03, len(actual_values))
noisy_data = actual_values + noise
plt.figure()
plt.plot(times, noisy_data, '.', label='Measured values (we know these)')
plt.plot(times, actual_values, label='Actual values (we don\'t know these)')
plt.xlabel('Time')
plt.ylabel('Concentration')
plt.legend()
plt.show()
# So what do we want to minimise?
def sumofsquares(y_model, y_data):
    """Gives the sum of the square of all errors between model and experimental data.

    In:
        y_model: list of output values from model
        y_data: list of experimental (i.e. noisy) values
    Out: the sum of square error
    """
sq_error = []
for t in range(len(y_model)):
sq_error.append((y_data[t] - y_model[t])**2)
return sum(sq_error)
# Optimise it with scipy
def scalar_to_minimise(parameters):
    """For a one compartment model & sum of squares this is what's minimised"""
y_model = simulate(onecomp, parameters, 1, times)
y_data = noisy_data
return sumofsquares(y_model, y_data) / len(y_model)
start_params = 11
result = scipy.optimize.minimize_scalar(scalar_to_minimise)
print('Calculated k: \t'+str(result.x))
print('Real k: \t'+str(k))
# What does that look like?
recon_model = simulate(onecomp, result.x, 1, times)
plt.figure()
plt.plot(times, noisy_data, '.', label='Measured values')
plt.plot(times, actual_values, label='Actual values')
plt.plot(times, recon_model, '--', label='Inferred values')
plt.xlabel('Time')
plt.ylabel('Concentration')
plt.legend()
plt.show()
# Need to make the model into a Pints one
class PintsOneComp(pints.ForwardModel):
def n_parameters(self):
return 1
def simulate(self, parameter, times):
return simulate(onecomp, parameter, 1, times)
problem = pints.SingleOutputProblem(PintsOneComp(), times, noisy_data) # Create a model instance with measured data
error_measure = pints.SumOfSquaresError(problem) # Define the error measure to be used (as sumofsquares function)
optimisation = pints.OptimisationController(error_measure, [1], method=pints.XNES) # Define a statistical problem
optimisation.set_log_to_screen(False) # Suppress log output
parameters, error = optimisation.run() # Run the statistical model
# How does it look?
print('Custom calculation result: \t'+str(result.x))
print('Pints calculation result: \t'+str(parameters[0]))
plt.figure()
plt.plot(times, noisy_data, '.', label='Measured values')
plt.plot(times, recon_model, '--', label='Custom inferred values')
plt.plot(times, PintsOneComp().simulate(parameters, times), '--', lw=2, label='Pints inferred values')
plt.legend()
plt.show()
# This model stores all the model-related variables, for simplicity.
class OdeModel():
def __init__(self, thetas, covariates, prior, likelihood, modeltype):
self.thetas = thetas
self.covariates = covariates
self.modeltype = modeltype
self.prior = prior
self.likelihood = likelihood
def uniform_prior(theta):
Returns 0.1 if entire input list is between 0 & 10, else 0
prior = []
for key, param in theta.items():
if param > 0 and param < 10:
prior.append(0.1)
else:
prior.append(0)
return min(prior)
def likelihood_k(theta, y_data):
Returns the likelihood, P(theta|y)
k = theta['k']
sigma = 0.03
pdf = []
y_model = simulate(onecomp, k, 1, times)
other_bit = 1/(2*math.pi*sigma**2)
for t in range(len(y_data)): # this loop gives a normally distributed pdf
square_error = (y_data[t] - y_model[t])**2
exponential = math.exp(-square_error/(2*sigma**2))
pdf.append(exponential*other_bit)
return np.prod(pdf)
def propose_new_theta(model, y_data, theta):
Randomly proposes a new theta and decides whether to accept or not
In
model: instance of OdeModel class
y_data: list with experimental data
theta: parameters, in a list
Out: new parameters, either the same (if proposed not accepted) or different
numerator = model.prior(theta) * model.likelihood(theta, y_data)
# randomly get a proposed theta & calculate its numerator
proposed_theta = {}
for key, value in theta.items():
proposed_k = np.random.normal(value, model.covariates[key])
proposed_theta[key] = proposed_k
proposed_numerator = model.prior(proposed_theta) * model.likelihood(proposed_theta, y_data)
# if the new numerator should be accepted (metropolis hastings criteria), replace theta
if proposed_numerator == 0:
pass
elif proposed_numerator > numerator:
theta = proposed_theta
numerator = proposed_numerator
elif np.random.rand() < proposed_numerator/numerator:
theta = proposed_theta
numerator = proposed_numerator
return theta
# This just runs propose_new_theta repeatedly
def metropolis_singlethread(model, y_data, threadnum, max_iters):
iters = 0
while iters < max_iters:
theta = propose_new_theta(model, y_data, model.thetas[threadnum][-1])
model.thetas[threadnum].append(theta)
iters = iters + 1
def metropolishastings(model, y_data, blocksize, number_of_blocks):
n = 0
while n < number_of_blocks:
for threadnum, thetas_onelot in enumerate(model.thetas):
metropolis_singlethread(model, y_data, threadnum, blocksize)
n = n+1
ks = np.linspace(0,10,100)
likelihoods = []
for n in ks:
likelihoods.append(likelihood_k({'k':n}, noisy_data))
plt.figure()
plt.plot(ks, likelihoods)
plt.xlabel('input parameter, k')
plt.ylabel('likelihood')
plt.axvline(1.5, color='k', label='True value of k')
plt.show()
# Run the metropolis hastings algorithm
thetas_k = [[{'k':5}], [{'k':3}], [{'k':1}]] # Three initial guesses for k
covariates_k = {'k':0.05} # Step size (SD of normal distribution for choosing next proposed theta)
model = OdeModel(thetas_k, covariates_k, uniform_prior, likelihood_k, onecomp)
metropolishastings(model, noisy_data, 10, 100)
# This is how k looks (from all start-points) as the algorithm progresses
plt.figure()
for n in range(len(model.thetas)):
ks_list= [theta['k'] for theta in model.thetas[n]]
plt.plot(ks_list[:500]) # only first 500
plt.xlabel('iteration #')
plt.ylabel('k')
plt.show()
# Here are the occurrences of all k estimates throughout the algorithm
all_ks = []
for n in range(len(model.thetas)):
ks_list = [theta['k'] for theta in model.thetas[n]]
all_ks.append(ks_list)
plt.figure()
plt.hist(all_ks, bins=100, stacked=True)
plt.xlabel('k')
plt.ylabel('occurrence')
plt.show()
import pints
log_likelihood = pints.GaussianKnownSigmaLogLikelihood(problem, sigma=0.05) # Define & wrap a physical model
startpoints = [[1],[3],[5]] # Start 3 Markov chains from arbitrary points
mcmc = pints.MCMCController(log_likelihood, 3, startpoints, method=pints.HaarioBardenetACMC) # Define a statistical problem
mcmc.set_max_iterations(2000) # Set number of iterations to attempt
mcmc.set_log_to_screen(False) # Suppress log output
samples = mcmc.run() # Run the statistical model
# Use a diagnostic plot to check if the chains have converged
import pints.plot
pints.plot.trace(samples)
plt.show()
# Plot several predictions that are all likely sources of the experimental data
pints.plot.series(np.vstack(samples[:,1000:]), problem)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Table of Contents
Step5: Now that we have a model and an initial value $y0$, we want to estimate, for any given parameter $k$, $y$ for all values of $t$ (a small analytic-solution check is sketched after this list).
Step6: Bayesian optimisation <a class="anchor" id="optimisation"></a>
Step8: We're trying to find which value of $k$ makes our model output the closest to the real system output, assuming that the input dose for each is the same. To do this we define a function, scalar_to_minimise, whose output shrinks as the model output approaches the measured system output.
Step10: By modelling many values of $k$ and finding the one that gives the lowest value from scalar_to_minimise, we can make a good estimate of $k$ in the real-life system. As you can see, this is inherently imperfect, but can be quite accurate.
Step11: Using PINTS
Step12: There is an important conceptual change in the code above. Previously, we have dealt with a physical model of the real-life system of interest (the function onecomp). In the example described, the physical model takes $y(t_n, param)$ as input and gives $\frac{d}{dt}y(t_n, param)$ as output. With PINTS, the physical model is provided by the user as a simulate function.
Step15: The functions below provide the prior and likelihood for the numerator of Bayes' rule.
Step17: The function propose_new_theta does steps 2 & 3 in the instructions above. It takes the model, the current $\theta$ value, and measured data.
Step18: Let's see this work. First, let's see what the likelihood is when we fix the measured data and take a variety of different proposed $k$ parameters.
Step19: Using PINTS
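As a sanity check on the numerical solvers used above, the one-compartment model dy/dt = -k*y has the closed-form solution y(t) = y0 * exp(-k*t). A minimal sketch follows; k, y0 and times simply mirror the values defined in the code above.
import numpy as np
k, y0 = 1.5, 1.0
times = np.linspace(0, 10, 50)
analytic = y0 * np.exp(-k * times)          # exact solution of dy/dt = -k*y
# any numerical solution on the same grid should satisfy something like:
# np.allclose(numerical_solution, analytic, atol=1e-2)
print(analytic[:5])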
|
7,760
|
<ASSISTANT_TASK:>
Python Code:
#grade (enter your code in this cell - DO NOT DELETE THIS LINE)
labels = ['Lose', '$1', '$2', '$3 (Win)']
graph.draw_matrix(G, labels)
x0 = np.array([0.0, 1.0, 0.0, 0.0])
# define xstar1
# Print out the probability
print(np.round(xstar1 * 100,2))
#grade (enter your code in this cell - DO NOT DELETE THIS LINE)
print(np.round(xstar2 * 100, 2))
x = np.random.rand(4)
x = x / la.norm(x,1)
print(np.round(hf.power_iteration(G, x),2))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can display your matrix as a graph to check your work.
Step2: Suppose the gambler starts with $\$1$ ($100\%$ probability of being in the $\$1$ state). Write the initial state as the array x0
Step3: We will use the function power_iteration that you defined previously (and which is now available to you as a helper function) to get the probability of losing and winning from the initial state defined as x0, and store your result in xstar1; a rough sketch of such a routine appears after this list.
Step4: Check your answers!
Step5: Print out xstar2. Is this result different from xstar1?
Step6: Because we can no longer reach every state from every other state, we no longer have a unique steady state. A Markov chain of this type is said to be "reducible".
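The helper hf.power_iteration itself is not reproduced in this record. Purely as an illustration, a power iteration for a column-stochastic transition matrix G (columns sum to 1, so the state distribution evolves as x_new = G @ x) might look like the sketch below; the stochasticity convention, tolerance and iteration cap are assumptions, not the helper's actual implementation.
import numpy as np

def power_iteration_sketch(G, x0, tol=1e-10, max_iter=10000):
    # repeatedly apply the transition matrix until the state distribution stops changing
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = G @ x
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x

# toy reducible chain: states 0 and 2 are absorbing, state 1 splits evenly between them
G_toy = np.array([[1.0, 0.5, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.5, 1.0]])
print(power_iteration_sketch(G_toy, [0.0, 1.0, 0.0]))    # -> [0.5, 0.0, 0.5]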
|
7,761
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import xarray as xr
import cartopy.crs as ccrs
from matplotlib import pyplot as plt
ds = xr.tutorial.open_dataset('rasm').load()
ds
print(ds.xc.attrs)
print(ds.yc.attrs)
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(14,4))
ds.xc.plot(ax=ax1)
ds.yc.plot(ax=ax2)
ds.Tair[0].plot()
plt.figure(figsize=(14,6))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_global()
ds.Tair[0].plot.pcolormesh(ax=ax, transform=ccrs.PlateCarree(), x='xc', y='yc', add_colorbar=False)
ax.coastlines()
ax.set_ylim([0,90]);
# define two-degree wide latitude bins
lat_bins = np.arange(0,91,2)
# define a label for each bin corresponding to the central latitude
lat_center = np.arange(1,90,2)
# group according to those bins and take the mean
Tair_lat_mean = ds.Tair.groupby_bins('yc', lat_bins, labels=lat_center).mean(dim=xr.ALL_DIMS)
# plot the result
Tair_lat_mean.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As an example, consider this dataset from the xarray-data repository.
Step2: In this example, the logical coordinates are x and y, while the physical coordinates are xc and yc, which represent the longitude and latitude of the data.
Step3: Plotting
Step4: Note that the variables xc (longitude) and yc (latitude) are two-dimensional scalar fields.
Step5: In order to visualize the data on a conventional latitude-longitude grid, we can take advantage of xarray's ability to apply cartopy map projections.
Step6: Multidimensional Groupby (a small self-contained sketch of grouping over a 2-D coordinate follows this list)
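To make the multidimensional groupby concrete without fetching the RASM data, here is a small self-contained sketch that bins a synthetic DataArray by a 2-D latitude coordinate; every name and value in it is made up for illustration.
import numpy as np
import xarray as xr

ny, nx = 4, 6
lat2d = np.linspace(10, 50, ny)[:, None] + np.zeros((ny, nx))   # 2-D latitude field
da = xr.DataArray(np.random.rand(ny, nx), dims=("y", "x"),
                  coords={"lat": (("y", "x"), lat2d)})
bins = np.arange(0, 61, 20)
# group by the non-dimension 2-D coordinate and average everything in each bin
print(da.groupby_bins("lat", bins).mean(...))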
|
7,762
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pygem as pg
params = pg.params.FFDParameters()
params.read_parameters(filename='../tests/test_datasets/parameters_test_ffd_sphere.prm')
stl_handler = pg.stlhandler.StlHandler()
mesh_points = stl_handler.parse('../tests/test_datasets/test_sphere.stl')
stl_handler.plot(plot_file='../tests/test_datasets/test_sphere.stl')
free_form = pg.freeform.FFD(params, mesh_points)
free_form.perform()
new_mesh_points = free_form.modified_mesh_points
stl_handler.write(new_mesh_points, 'test_sphere_mod.stl')
stl_handler.plot(plot_file='test_sphere_mod.stl')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need to read a parameters file. If it does not exist, the FFDParameters() class creates a default prm file that you have to edit for your problem specifications.
Step2: The following is the parameters file for the case at hand. In particular, if you look at the Box info section, there is a 3-by-2-by-2 lattice around a hemisphere. Then we rotate it by 10 deg around the y axis and 15 deg around the z axis. In Parameters weights we just change the weights of the control point with indices 2, 0, 0.
Step3: This is the original geometry
Step4: Finally we can actually perform the FFD with the freeform module. The inputs are the parameters and the point positions we extracted before (the underlying trivariate Bernstein form is sketched after this list).
Step5: Now we can write the modified stl file with the new point coordinates.
Step6: And here we have the final plot
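For background, classical FFD moves each point through a trivariate Bernstein expansion of the lattice control points, P'(s,t,u) = sum_ijk B_i^l(s) B_j^m(t) B_k^n(u) P_ijk. The sketch below is only the textbook formula for a single point in the unit lattice cube, not PyGeM's actual implementation; the array shapes and names are assumptions.
import numpy as np
from scipy.special import comb

def bernstein(i, n, t):
    # i-th Bernstein basis polynomial of degree n evaluated at t in [0, 1]
    return comb(n, i) * t**i * (1 - t)**(n - i)

def ffd_point(stu, control_points):
    # control_points has shape (l+1, m+1, n+1, 3): a lattice of 3-D control points
    s, t, u = stu
    l, m, n = [d - 1 for d in control_points.shape[:3]]
    out = np.zeros(3)
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                w = bernstein(i, l, s) * bernstein(j, m, t) * bernstein(k, n, u)
                out += w * control_points[i, j, k]
    return out

# identity lattice (2x2x2 control points at the cube corners) maps a point to itself
cp = np.array(np.meshgrid([0., 1.], [0., 1.], [0., 1.], indexing="ij")).transpose(1, 2, 3, 0)
print(ffd_point((0.3, 0.7, 0.5), cp))   # -> [0.3, 0.7, 0.5]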
|
7,763
|
<ASSISTANT_TASK:>
Python Code:
df_N2 = pd.read_csv("N2.csv", skiprows=1)
N2_isotherm = pyiast.ModelIsotherm(df_N2, loading_key="Loading(mmol/g)",
pressure_key="P(bar)", model='Henry')
pyiast.plot_isotherm(N2_isotherm)
N2_isotherm.print_params()
df_CO2 = pd.read_csv("CO2.csv", skiprows=1)
CO2_isotherm = pyiast.ModelIsotherm(df_CO2, loading_key="Loading(mmol/g)",
pressure_key="P(bar)", model="Langmuir")
CO2_isotherm.print_params()
pyiast.plot_isotherm(CO2_isotherm)
df_H2O = pd.read_csv("H2O.csv", skiprows=1)
H2O_isotherm = pyiast.InterpolatorIsotherm(df_H2O, loading_key="Loading(mmol/g)",
pressure_key="P(bar)",
fill_value=df_H2O["Loading(mmol/g)"].max())
pyiast.plot_isotherm(H2O_isotherm)
p = np.array([.166, .679]) # mbar
print("total P = ", np.sum(p))
q = pyiast.iast(p, [CO2_isotherm, N2_isotherm])
print(q)
p3 = np.array([.166, .679, .02]) # mbar
print("total P = ", np.sum(p3))
q3 = pyiast.iast(p3, [CO2_isotherm, N2_isotherm, H2O_isotherm], verboseflag=True)
q3
p_mix = np.array([.16562, .67912])
q_mix = np.array([.34, .27])
yerr = np.array([.1, .14])
fig = plt.figure(facecolor='w')
plt.plot(df_CO2['P(bar)'], df_CO2['Loading(mmol/g)'], marker='o', color='g', label='pure CO$_2$', markersize=10)
plt.plot(df_N2['P(bar)'], df_N2['Loading(mmol/g)'], marker='*', color='b', label='pure N$_2$', markersize=12)
plt.xlabel("Pressure (bar)")
# plt.scatter(p3[:-1], q3[:-1], color='orange', marker='s',s=50,zorder=110, label='IAST')
plt.scatter(p, q, color='orange', marker='s',s=45,zorder=110, label='IAST')
plt.scatter(p_mix, q_mix, marker='x', zorder=200, color='k', s=50, label='Expt')
plt.errorbar(p_mix, q_mix,color='k', yerr=yerr, linestyle='none', markersize=50)
plt.xlim([0, 1.0])
plt.ylim([0, 2.0])
plt.ylabel("Gas uptake (mmol/g)")
plt.legend(loc='upper left')
plt.savefig('JaradMason.pdf', format='pdf', facecolor=fig.get_facecolor())
fig = plt.figure(facecolor='w')
p_plot = np.linspace(0, 1)
plt.scatter(df_CO2['P(bar)'], df_CO2['Loading(mmol/g)'],
marker='o', color='g', label='pure CO$_2$', s=60)
plt.plot(p_plot, CO2_isotherm.loading(p_plot), color='g')
plt.scatter(df_N2['P(bar)'], df_N2['Loading(mmol/g)'],
marker='s', color='b', label='pure N$_2$', s=60)
plt.plot(p_plot, N2_isotherm.loading(p_plot), color='b')
plt.xlabel("Pressure (bar)")
plt.axvline(x=0.679, linewidth=2, color='b', linestyle='--')
plt.axvline(x=0.166, linewidth=2, color='g', linestyle='--')
# plt.scatter(p3[:-1], q3[:-1], color='orange', marker='s',s=45,zorder=110, label='IAST')
# plt.scatter(p, q, color='orange', marker='s',s=45,zorder=110, label='IAST')
# plt.scatter(p_mix, q_mix, marker='x', zorder=200, color='k', s=56, label='Expt')
# plt.errorbar(p_mix, q_mix,color='k', yerr=yerr, linestyle='none')
plt.xlim([-.05, 1.0])
plt.ylim([-.1, 2.0])
plt.ylabel("Gas uptake (mmol/g)")
plt.legend(loc='upper left')
plt.tight_layout()
plt.savefig('JaradMason_N2_and_CO2.pdf', format='pdf', facecolor=fig.get_facecolor())
fig = plt.figure(facecolor='w')
plt.scatter(df_H2O['P(bar)'], df_H2O['Loading(mmol/g)'], marker='o',
color='r', label='pure H$_2$O', s=60)
plt.plot(np.linspace(0, .07), H2O_isotherm.loading(np.linspace(0, 0.07)), color='r')
plt.axvline(x=.02, linewidth=2, color='r', linestyle='--')
plt.xlabel("Pressure (bar)")
plt.xlim([-.05*.07, 0.07])
plt.ylim([-.05*45, 45.])
plt.ylabel("Water uptake (mmol/g)")
plt.legend(loc='upper center')
plt.tight_layout()
plt.savefig('JaradMason_H2O.pdf', format='pdf', facecolor=fig.get_facecolor())
ind = np.arange(3) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots(facecolor='w')
rects1 = ax.bar(ind, q3, width, color=['g', 'b', 'r'], hatch='*')
# rects1 = ax.bar(np.arange(2), q, width, color=['g', 'b'], hatch='//')
rects2 = ax.bar(np.arange(2)+width, q_mix, width, color=['g', 'b', 'r'], yerr=yerr, ecolor='k')
# add some text for labels, title and axes ticks
ax.set_ylabel('Gas uptake (mmol/g)')
# ax.set_title('Sc')
ax.set_xticks(ind+width)
ax.set_xticklabels(('CO$_2$', 'N$_2$', r'H$_2$O') )
#x.legend( (rects1[0], rects2[0]), ('Exp\'t', 'IAST') , loc='upper center')
def autolabel(rects):
# attach some text labels
for rect in rects:
height = rect.get_height()
#ax.text(rect.get_x()+rect.get_width()/2., 1.05*height, '%.2f'% height,
ax.text(rect.get_x()+rect.get_width()/2., .05, '%.2f'% height,
ha='center', va='bottom', fontsize=15, weight='bold',
backgroundcolor='w')
bbox_props = dict(boxstyle="round", fc="w", ec="0.5", alpha=0.9)
ax.text(2.35+rects1[0].get_width()/2., .05, 'N/A',
ha='center', va='bottom',
fontsize=15, weight='bold',
backgroundcolor='w')
plt.xlim([-.05,3-width+.05])
autolabel(rects1)
autolabel(rects2)
plt.tight_layout()
plt.savefig('JaradMason_IAST.pdf', format='pdf', facecolor=fig.get_facecolor())
plt.savefig('JaradMason_IAST.png', format='png', facecolor=fig.get_facecolor(), dpi=250)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: binary (CO$_2$/N$_2$ adsorption)
Step2: ternary (CO$_2$/N$_2$/H$_2$O adsorption)
Step3: compare to experiment
|
7,764
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from astropy.table import Table, join
from astropy import units as u
from astropy.coordinates import SkyCoord, search_around_sky
from IPython.display import clear_output
import pickle
import os
from mltier1 import (get_center, get_n_m, estimate_q_m, Field, MultiMLEstimatorU,
parallel_process, get_sigma_all, get_q_m)
%load_ext autoreload
%autoreload
from IPython.display import clear_output
%pylab inline
save_intermediate = True
plot_intermediate = True
idp = "idata/final"
if not os.path.isdir(idp):
os.makedirs(idp)
# Busy week Edinburgh 2017
ra_down = 172.09
ra_up = 187.5833
dec_down = 46.106
dec_up = 56.1611
# Busy week Hatfield 2017
ra_down = 170.
ra_up = 190.
dec_down = 46.8
dec_up = 55.9
# Full field July 2017
ra_down = 160.
ra_up = 232.
dec_down = 42.
dec_up = 62.
field = Field(170.0, 190.0, 46.8, 55.9)
field_full = Field(160.0, 232.0, 42.0, 62.0)
combined_all = Table.read("pw.fits")
lofar_all = Table.read("data/LOFAR_HBA_T1_DR1_catalog_v0.9.srl.fits")
np.array(combined_all.colnames)
np.array(lofar_all.colnames)
lofar = field_full.filter_catalogue(lofar_all, colnames=("RA", "DEC"))
combined = field_full.filter_catalogue(combined_all,
colnames=("ra", "dec"))
combined["colour"] = combined["i"] - combined["W1mag"]
combined_aux_index = np.arange(len(combined))
coords_combined = SkyCoord(combined['ra'],
combined['dec'],
unit=(u.deg, u.deg),
frame='icrs')
coords_lofar = SkyCoord(lofar['RA'],
lofar['DEC'],
unit=(u.deg, u.deg),
frame='icrs')
combined_matched = (~np.isnan(combined["i"]) & ~np.isnan(combined["W1mag"])) # Matched i-W1 sources
combined_panstarrs = (~np.isnan(combined["i"]) & np.isnan(combined["W1mag"])) # Sources with only i-band
combined_wise =(np.isnan(combined["i"]) & ~np.isnan(combined["W1mag"])) # Sources with only W1-band
combined_i = combined_matched | combined_panstarrs
combined_w1 = combined_matched | combined_wise
#combined_only_i = combined_panstarrs & ~combined_matched
#combined_only_w1 = combined_wise & ~combined_matched
print("Total - ", len(combined))
print("i and W1 - ", np.sum(combined_matched))
print("Only i - ", np.sum(combined_panstarrs))
print("With i - ", np.sum(combined_i))
print("Only W1 - ", np.sum(combined_wise))
print("With W1 - ", np.sum(combined_w1))
colour_limits = [-0.5, 0.0, 0.5, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.5, 4.0]
# Start with the W1-only, i-only and "less than lower colour" bins
colour_bin_def = [{"name":"only W1", "condition": combined_wise},
{"name":"only i", "condition": combined_panstarrs},
{"name":"-inf to {}".format(colour_limits[0]),
"condition": (combined["colour"] < colour_limits[0])}]
# Get the colour bins
for i in range(len(colour_limits)-1):
name = "{} to {}".format(colour_limits[i], colour_limits[i+1])
condition = ((combined["colour"] >= colour_limits[i]) &
(combined["colour"] < colour_limits[i+1]))
colour_bin_def.append({"name":name, "condition":condition})
# Add the "more than higher colour" bin
colour_bin_def.append({"name":"{} to inf".format(colour_limits[-1]),
"condition": (combined["colour"] >= colour_limits[-1])})
combined["category"] = np.nan
for i in range(len(colour_bin_def)):
combined["category"][colour_bin_def[i]["condition"]] = i
np.sum(np.isnan(combined["category"]))
numbers_combined_bins = np.array([np.sum(a["condition"]) for a in colour_bin_def])
numbers_combined_bins
bin_list, centers, Q_0_colour, n_m, q_m = pickle.load(open("lofar_params.pckl", "rb"))
likelihood_ratio_function = MultiMLEstimatorU(Q_0_colour, n_m, q_m, centers)
radius = 15
lr_threshold = 0.36
selection = ~np.isnan(combined["category"]) # Avoid the dreaded sources with no actual data
catalogue = combined[selection]
def apply_ml(i, likelihood_ratio_function):
idx_0 = idx_i[idx_lofar == i]
d2d_0 = d2d[idx_lofar == i]
category = catalogue["category"][idx_0].astype(int)
mag = catalogue["i"][idx_0]
mag[category == 0] = catalogue["W1mag"][idx_0][category == 0]
lofar_ra = lofar[i]["RA"]
lofar_dec = lofar[i]["DEC"]
lofar_pa = lofar[i]["PA"]
lofar_maj_err = lofar[i]["E_Maj"]
lofar_min_err = lofar[i]["E_Min"]
c_ra = catalogue["ra"][idx_0]
c_dec = catalogue["dec"][idx_0]
c_ra_err = catalogue["raErr"][idx_0]
c_dec_err = catalogue["decErr"][idx_0]
sigma, sigma_maj, sigma_min = get_sigma_all(lofar_maj_err, lofar_min_err, lofar_pa,
lofar_ra, lofar_dec,
c_ra, c_dec, c_ra_err, c_dec_err)
lr_0 = likelihood_ratio_function(mag, d2d_0.arcsec, sigma, sigma_maj, sigma_min, category)
chosen_index = (lr_0 >= lr_threshold)
result = [combined_aux_index[selection][idx_0[chosen_index]], # Index
(d2d_0.arcsec)[chosen_index], # distance
lr_0[chosen_index]] # LR
return result
idx_lofar, idx_i, d2d, d3d = search_around_sky(
coords_lofar, coords_combined[selection], radius*u.arcsec)
idx_lofar_unique = np.unique(idx_lofar)
total_sources = len(idx_lofar_unique)
def ml(i):
return apply_ml(i, likelihood_ratio_function)
res = parallel_process(idx_lofar_unique, ml, n_jobs=8)
lofar_aux_index = np.arange(len(lofar))
lr = []
lr_dist = []
lr_index = []
lr_order = []
lr_lofar_index = []
for i, idx in enumerate(idx_lofar_unique):
result = res[i]
n = len(result[0])
lofar_index = lofar_aux_index[idx]
if n > 0:
order = np.argsort(result[2])[::-1]
lr.extend(result[2][order])
lr_dist.extend(result[1][order])
lr_index.extend(result[0][order])
lr_order.extend(np.arange(n, dtype=int) + 1)
lr_lofar_index.extend(np.ones(n, dtype=int)*lofar_index)
else:
lr.append(np.nan)
lr_dist.append(np.nan)
lr_index.append(np.nan)
lr_order.append(np.nan)
lr_lofar_index.append(lofar_index)
aux_table = Table()
aux_table['aux_index'] = lr_lofar_index
aux_table['lr'] = lr
aux_table['lr_dist'] = lr_dist
aux_table['lr_index'] = lr_index
aux_table['lr_order'] = lr_order
aux_table
lofar["aux_index"] = lofar_aux_index
lofar_lr = join(lofar, aux_table, join_type='outer', keys='aux_index')
lofar_lr
combined["lr_index"] = combined_aux_index.astype(float)
for col in ['lr', 'lr_dist', 'lr_index', 'lr_order']:
lofar_lr[col].fill_value = np.nan
pwl = join(lofar_lr.filled(), combined, join_type='left', keys='lr_index')
len(pwl)
pwl_columns = pwl.colnames
for col in pwl_columns:
fv = pwl[col].fill_value
#print(col, fv)
if (isinstance(fv, np.float64) and (fv != 1e+20)):
print(col, fv)
pwl[col].fill_value = 1e+20
columns_save = ['Source_Name', 'RA', 'E_RA', 'E_RA_tot', 'DEC', 'E_DEC', 'E_DEC_tot',
'Peak_flux', 'E_Peak_flux', 'E_Peak_flux_tot', 'Total_flux', 'E_Total_flux', 'E_Total_flux_tot',
'Maj', 'E_Maj', 'Min', 'E_Min', 'PA', 'E_PA', 'Isl_rms', 'S_Code', 'Mosaic_ID', 'Isl_id',
'AllWISE', 'objID', 'ra', 'dec', 'raErr', 'decErr',
'W1mag', 'W1magErr', 'i', 'iErr', 'colour', 'category',
'lr', 'lr_dist', 'lr_order']
pwl[columns_save].filled().write('lofar_multi_lr_pw.fits', format="fits")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: General configuration
Step2: Area limits
Step3: Load data
Step4: Filter catalogues
Step5: Additional data
Step6: Sky coordinates
Step7: Class of sources in the combined catalogue
Step8: Colour categories
Step9: We get the number of sources of the combined catalogue in each colour category. It will be used at a later stage to compute the $Q_0$ values
Step10: Maximum Likelihood (the likelihood-ratio definition used for matching is sketched after this list)
Step11: ML match
Step12: Run the cross-match
Step13: Run the ML matching
Step14: Selection and match
Step15: Save combined catalogue
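For reference, the quantity being thresholded above is the Sutherland & Saunders (1992) likelihood ratio LR(m, r) = q(m) f(r) / n(m), where q(m) is the expected magnitude distribution of true counterparts, n(m) the background surface density of objects of magnitude m, and f(r) the probability distribution of the positional offset (a circular Gaussian here). The single-band sketch below is only illustrative; the multi-category, per-source-error estimator in mltier1 is more involved, and the numbers are placeholders.
import numpy as np

def likelihood_ratio(q_m, n_m, r_arcsec, sigma_arcsec):
    # f(r): circular 2-D Gaussian in the radio-optical separation r
    f_r = np.exp(-0.5 * (r_arcsec / sigma_arcsec) ** 2) / (2 * np.pi * sigma_arcsec ** 2)
    return q_m * f_r / n_m

# placeholders: q(m)=0.3, n(m)=1e-3 arcsec^-2, 1 arcsec offset, 0.5 arcsec positional error
print(likelihood_ratio(0.3, 1e-3, 1.0, 0.5))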
|
7,765
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'awi-cm-1-0-mr', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
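# Hypothetical example only (not this model's actual author):
# DOC.set_author("Jane Doe", "jane.doe@example.org")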
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Representation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
7,766
|
<ASSISTANT_TASK:>
Python Code:
# remove comment to use latest development version
import sys; sys.path.insert(0, '../')
# import libraries
import raccoon as rc
# empty DataFrame
srs = rc.Series()
srs
# with indexes but no data
srs = rc.Series(index=[1, 2, 3])
srs
# with data
srs = rc.Series(data=[4, 5, 6], index=[10, 11, 12])
srs
srs.print()
print(srs)
# data_name
srs.data_name
srs.data_name = 'new_data'
print(srs)
# index
srs.index
#indexes can be any non-repeating unique values
srs.index = ['apple', 'pear', 7.7]
srs.print()
srs.index = [10, 11, 12]
print(srs)
# the index can also have a name; by default it is "index"
srs.index_name
srs.index_name = 'units'
srs.index_name
# data is a shallow copy, so be careful about how it is used
srs.index_name = 'index'
srs.data
srs.select_index(11)
# set a single cell
srs.set(10, 100)
print(srs)
# setting a value outside the current range creates a new row. [] can also be used for setting
srs[13] = 9
srs.print()
# set a subset of rows
srs[[10, 12]] = 66
print(srs)
# using boolean list
srs.set([True, False, True, False], [88, 99])
print(srs)
# setting with slices
srs[12:13] = 33
print(srs)
srs[10:12] = [1, 2, 3]
print(srs)
# set a location
srs.set_location(1, 22)
print(srs)
# set multiple locations
srs.set_locations([0, 2], [11, 27])
print(srs)
# append a row. DANGEROUS, as there is no validation checking, but it can be used for speed
srs.append_row(14, 99)
print(srs)
# append multiple rows, again no sort check
srs.append_rows([15, 16], [100, 110])
print(srs)
# get a single cell
srs[10]
# get subset of the index
srs[[11, 12, 13]].print()
# get using slices
srs[11:13].print()
# return as a list
srs.get([11, 12, 13], as_list=True)
print(srs.get_location(2))
srs.get_location(-1)
srs.get_locations(locations=[0, 2]).print()
srs.get_locations(locations=[0, 2], as_list=True)
srs.set_locations([-1, -2], values=[10, 9])
print(srs)
srs.head(2).print()
srs.tail(2).print()
srs.delete([10, 13])
print(srs)
# return a dict
srs.to_dict()
# exclude the index
srs.to_dict(index=False)
# return an OrderedDict()
srs.to_dict(ordered=True)
srs = rc.Series([6, 7, 8, 9], index=[25, 24, 23, 22])
print(srs)
# sort by index. Sorts are inplace
srs.sort_index()
print(srs)
srs = rc.Series([1, 2, 3])
# test for equality
srs.equality(value=3)
# all math methods can operate on a subset of the index
srs.equality(indexes=[1, 2], value=2)
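# a small usage sketch (assumption: equality() returns a plain list of booleans
# aligned with the index, as the outputs above suggest), consumed with vanilla Python
mask = srs.equality(value=2)
print(sum(mask))                                   # how many rows equal 2
print([i for i, m in zip(srs.index, mask) if m])   # the index labels of those rows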
tuples = [('a', 1, 3), ('a', 1, 4), ('a', 2, 3), ('b', 1, 4), ('b', 2, 1), ('b', 3, 3)]
srs = rc.Series([1, 2, 3, 4, 5, 6], index=tuples)
print(srs)
compare = ('a', None, None)
srs.select_index(compare)
compare = ('a', None, 3)
srs.select_index(compare, 'boolean')
compare = (None, 2, None)
srs.select_index(compare, 'value')
compare = (None, None, 3)
srs.select_index(compare, 'value')
compare = (None, None, None)
srs.select_index(compare)
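# a follow-up sketch grounded in the calls above: feed the matching index tuples
# returned by select_index(..., 'value') back into get() to pull the matching values
matches = srs.select_index(('b', None, None), 'value')  # every index tuple starting with 'b'
srs.get(matches, as_list=True)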
srs = rc.Series([1, 2, 3], index=[9, 10, 11])
print(srs)
srs.reset_index()
srs
srs = rc.Series([1, 2, 3], index=[9, 10, 11], index_name='new name')
print(srs)
srs.reset_index()
print(srs)
srs = rc.Series([3, 5, 4], index=[12, 15, 14], sort=True)
srs.print()
srs[16] = 9
print(srs)
srs.set(indexes=13, values=3.5)
print(srs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize
Step2: Print
Step3: Setters and Getters
Step4: Select Index
Step5: Set Values
Step6: Get Values
Step7: Set and Get by Location
Step8: Head and Tail
Step9: Delete rows
Step10: Convert
Step11: Sort by Index
Step12: Math Methods
Step13: Multi-Index
Step14: The select_index method works with tuples by allowing None to act as a wild card for matching.
Step15: Reset Index
Step16: Sorted Series
Step17: When sorted=True on initialization the data will be sorted by index to start
|
7,767
|
<ASSISTANT_TASK:>
Python Code:
# import necessary packages
from dkrz_forms import form_handler, utils, wflow_handler, checks
from datetime import datetime
from pprint import pprint
# load workflow form object
info_file = "path_to_file.json"
my_form = utils.load_workflow_form(info_file)
# show the workflow steps for this form (long-name, short-name)
# to select a specific action, you can use the long name, e.g. 'data_ingest' or the related short name e.g. 'ing'
wflow_dict = wflow_handler.get_wflow_description(my_form)
pprint(wflow_dict)
# 'start_action' updates the form with information on who is currently working on the form
# internal information on this (timestamp, status information) is automatically set ..
# the resulting 'working version' of the form is commited to the work repository
wflow_handler.start_action('data_submission_review',my_form,"stephan kindermann")
review_report = {}
review_report['comment'] = 'needed to change and correct submission form'
review_report['additional_info'] = "mail exchange with a@b with respect to question ..."
myform = wflow_handler.finish_action('data_submission_review',my_form,"stephan kindermann",review_report)
my_form.rev.entity_out.report
report = checks.check_report(my_form,"sub")
checks.display_report(report)
my_form.rev.entity_in.check_status
my_form.sub.activity.ticket_url
part = checks.check_step_form(my_form,"sub")
checks.display_check(part,"sub")
## global check
res = checks.check_generic_form(my_form)
checks.display_checks(my_form,res)
print(my_form.sub.entity_out.status)
print(my_form.rev.entity_in.form_json)
print(my_form.sub.activity.ticket_id)
pprint(my_form.workflow)
workflow_form = utils.load_workflow_form(info_file)
review = workflow_form.rev
# any additional information keys can be added,
# yet they are invisible to generic information management tools ..
workflow_form.status = "review"
review.activity.status = "1:in-review"
review.activity.start_time = str(datetime.now())
review.activity.review_comment = "data volume check to be done"
review.agent.responsible_person = "sk"
sf = form_handler.save_form(workflow_form, "sk: review started")
review.activity.status = "3:accepted"
review.activity.ticket_id = "25389"
review.activity.end_time = str(datetime.now())
review.entity_out.comment = "This submission is related to submission abc_cde"
review.entity_out.tag = "sub:abc_cde" # tags are used to relate different forms to each other
review.entity_out.report = {'x':'y'} # result of validation in a dict (self defined properties)
# ToDo: test and document save_form for data managers (config setting for repo)
sf = form_handler.save_form(workflow_form, "kindermann: form_review()")
workflow_form = utils.load_workflow_form(info_file)
ingest = workflow_form.ing
?ingest.entity_out
# agent related info
workflow_form.status = "ingest"
ingest.activity.status = "started"
ingest.agent.responsible_person = "hdh"
ingest.activity.start_time=str(datetime.now())
# activity related info
ingest.activity.comment = "data pull: credentials needed for remote site"
sf = form_handler.save_form(workflow_form, "kindermann: form_review()")
ingest.activity.status = "completed"
ingest.activity.end_time = str(datetime.now())
# report of the ingest process (entity_out of ingest workflow step)
ingest_report = ingest.entity_out
ingest_report.tag = "a:b:c" # tag structure to be defined
ingest_report.status = "completed"
# free entries for detailed report information
ingest_report.report.remote_server = "gridftp.awi.de://export/data/CMIP6/test"
ingest_report.report.server_credentials = "in server_cred.krb keypass"
ingest_report.report.target_path = ".."
sf = form_handler.save_form(workflow_form, "kindermann: form_review()")
from datetime import datetime
workflow_form = utils.load_workflow_form(info_file)
qua = workflow_form.qua
workflow_form.status = "quality assurance"
qua.agent.responsible_person = "hdh"
qua.activity.status = "starting"
qua.activity.start_time = str(datetime.now())
sf = form_handler.save_form(workflow_form, "hdh: qa start")
qua.entity_out.status = "completed"
qua.entity_out.report = {
"QA_conclusion": "PASS",
"project": "CORDEX",
"institute": "CLMcom",
"model": "CLMcom-CCLM4-8-17-CLM3-5",
"domain": "AUS-44",
"driving_experiment": [ "ICHEC-EC-EARTH"],
"experiment": [ "history", "rcp45", "rcp85"],
"ensemble_member": [ "r12i1p1" ],
"frequency": [ "day", "mon", "sem" ],
"annotation":
[
{
"scope": ["mon", "sem"],
"variable": [ "tasmax", "tasmin", "sfcWindmax" ],
"caption": "attribute <variable>:cell_methods for climatologies requires <time>:climatology instead of time_bnds",
"comment": "due to the format of the data, climatology is equivalent to time_bnds",
"severity": "note"
}
]
}
sf = form_handler.save_form(workflow_form, "hdh: qua complete")
workflow_form = utils.load_workflow_form(info_file)
workflow_form.status = "publishing"
pub = workflow_form.pub
pub.agent.responsible_person = "katharina"
pub.activity.status = "starting"
pub.activity.start_time = str(datetime.now())
sf = form_handler.save_form(workflow_form, "kb: publishing")
pub.activity.status = "completed"
pub.activity.comment = "..."
pub.activity.end_time = ".."
pub.activity.report = {'model':"MPI-M"} # activity related report information
pub.entity_out.report = {'model':"MPI-M"} # the report of the publication action - all info characterizing the publication
sf = form_handler.save_form(workflow_form, "kb: published")
sf = form_handler.save_form(workflow_form, "kindermann: form demo run 1")
sf.sub.activity.commit_hash
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: demo examples - step by step
Step2: Step 2
Step3: Step 3
Step4: interactive "help"
Step5: Display status of report
Step6: Display status of form
Step7: Appendix
Step8: add data ingest step related information
Step9: workflow step
Step10: workflow step
|
7,768
|
<ASSISTANT_TASK:>
Python Code:
import csv
import numpy as np
fichier_csv = csv.reader(open('train.csv', 'r'))
entetes = fichier_csv.__next__() # grab the first line, which contains the headers
donnees = list() # create the list that will collect the data
for ligne in fichier_csv: # for each line read from the csv file
    donnees.append(ligne) # append the values read to the donnees list
#entete = donnees[0]
#donnees[0] = []
donnees = np.array(donnees) # the donnees list is converted to a numpy array
print (donnees)
print (donnees[1:15, 5])
donnees[1:15, 5].astype(np.int)
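# a minimal sketch of a manual workaround (assumption: replacing the unparseable
# empty string '' with a placeholder before casting is acceptable for this demo)
ages = donnees[1:15, 5]
ages_clean = np.where(ages == '', '0', ages).astype(float)  # substitute '' before casting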
import pandas as pd
import numpy as np
df = pd.read_csv('train.csv')
df.head(6)
type(donnees)
type(df)
df.dtypes
df.info()
df.describe()
df['Age'][0:15]
df.Age[0:15]
df.Age.mean()
colonnes_interessantes = ['Sex', 'Pclass', 'Age']
df[ colonnes_interessantes ]
df[df['Age'] > 60]
df[df['Age'] > 60][['Pclass', 'Sex', 'Age', 'Survived']]
df[df.Age.isnull()][['Sex', 'Pclass', 'Age']]
for i in range(1, 4):
    print ("In class", i, "there are", len( df[ (df['Sex'] == 'male') & (df['Pclass'] == i) ]), "men")
    print ("In class", i, "there are", len( df[ (df['Sex'] == 'female') & (df['Pclass'] == i) ]), "women")
df.Age.hist(bins=20, range=(0,80))
df['Gender'] = 4 # add a new column in which all values are set to 4
df.head()
df['Gender'] = df['Sex'].map( {'female': 0, 'male': 1} ) # the Gender column takes 0 for women and 1 for men
df.head()
df['FamilySize'] = df.SibSp + df.Parch
df.head()
ages_medians = np.zeros((2, 3))
ages_medians
for i in range(0,2):
for j in range(0,3):
ages_medians[i,j] = df[ (df['Gender'] == i) & (df['Pclass'] == j+1) ]['Age'].median()
ages_medians
for i in range(0, 2):
for j in range (0, 3):
df.loc[ (df.Age.isnull()) & (df.Gender == i) & (df.Pclass == j+1), 'AgeFill'] = ages_medians[i,j]
# display the first 10 rows whose missing age has been filled in
df[df.Age.isnull()][['Gender', 'Pclass', 'Age', 'AgeFill']].head(10)
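# an equivalent one-pass sketch (assumption: a pandas version with the
# groupby/transform API): fill missing ages with each (Gender, Pclass) group median
df['AgeFill2'] = df.groupby(['Gender', 'Pclass'])['Age'].transform(lambda s: s.fillna(s.median()))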
import pickle
f = open('masauvegarde.pck', 'wb')
pickle.dump(df, f)
f.close()
with open('masauvegarde.pck', 'rb') as f:
dff = pickle.load(f)
ex = df[ ['Gender', 'Pclass'] ] # keep only a few features
X = ex.as_matrix() # convert to a numpy array
print(ex.head(5))
print(X[:5,:])
y = df['Survived'].as_matrix()
print (y[:5])
from sklearn import svm
clf = svm.SVC()
clf.fit(X,y)
print(clf.predict(X[:10,:]))
print (y[:10])
from sklearn import cross_validation
scores = cross_validation.cross_val_score(clf, X, y, cv=7)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
df['AgeFilled'] = df.Age # copy the Age column
df.loc[df.AgeFilled.isnull(), 'AgeFilled'] = df[df.Age.isnull()]['AgeFill'] # use the median age for the missing values
X = df[['Gender', 'Pclass', 'AgeFilled']].as_matrix()
scores = cross_validation.cross_val_score(svm.SVC(), X, y, cv=7)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's look at how the data is stored in memory
Step2: Now let's look at the age column, displaying only the first 15 values
Step3: We can see that the ages are stored as character strings. Let's convert them to floats
Step4: Numpy cannot convert the empty string '' (in 6th position in our list) to a float. To handle this data, we would have to write a small algorithm. We are going to see how pandas makes this processing much easier.
Step5: To read the csv file we will use the read_csv function
Step6: To check that it worked, let's display the first values. We can see the passenger id, whether they survived, their class, their name, their sex, their age, the number of siblings/spouses on board, the number of parents or children, the ticket number, the fare, the cabin number and the port of embarkation.
Step7: Let's compare with the type of donnees obtained earlier, which was a numpy array. The type of df is an object specific to pandas.
Step8: We saw that with numpy all imported values were character strings. Let's check what happens with pandas
Step9: We can see that pandas automatically detected the types of the data in our csv file
Step10: The age is only given for 714 of the 891 passengers. The same goes for the cabin number and the port of embarkation. We can also use describe() to compute several useful statistical indicators.
Step11: We can see that pandas automatically computed the statistical indicators using only the available data. For example, it computed the mean age using only the 714 known values. pandas left aside the non-numeric values (name, sex, ticket, cabin, port of embarkation).
Step12: We can also use the syntax
Step13: We can compute statistical measures directly on the columns
Step14: We can see that this is the same value as the one displayed by describe. This syntax makes it easy to use the mean value in computations or algorithms.
Step15: In analysis, we are often interested in filtering the data according to certain criteria. For example, the maximum age is 80. We can examine the information about the elderly passengers
Step16: Since there is too much information, we can filter it
Step17: We can see that among the elderly passengers there are mostly men, while the survivors were mostly women.
Step18: To combine filters, we can use '&'. Let's display the number of men in each class
Step19: Now let's visualize the histogram of the age distribution.
Step20: Creating and modifying columns
Step21: To create and rename new columns, we can also aggregate information from different columns. For example, let's create a column to store the number of family members aboard the Titanic.
Step22: We are going to fill in the missing age values with the median value depending on class and sex.
Step23: We will create a new column AgeFill that uses these median ages
Step24: To save your work, you can use the pickle module, which compresses and saves your data
Step25: To recover your work, we use the reverse operation, again with pickle
Step26: Back to numpy for the learning step
Step27: We want to predict survival, so we extract the useful information
Step28: The classifier has been trained, i.e. we trained an SVM on our data $X$ so that it can predict survival $y$. To check that our SVM has indeed learned to predict passenger survival, we can use the predict() method and visually compare, for the first ten passengers, the SVM's prediction with their actual survival.
Step29: The SVM has learned to predict what we showed it. This does not, however, assess its ability to generalize to cases it has not seen. To do so, a classic approach is cross-validation, i.e. training the classifier on one part of the data and testing it on another. Scikit-learn provides a very simple implementation of it.
Step30: Over the 7 partitions of our data, the SVM predicts passenger survival in 77% of cases, with a standard deviation of 0.04.
Step31: We can now create a new $X$ that includes age, in addition to sex and class, and check whether this improves the SVM's performance.
|
7,769
|
<ASSISTANT_TASK:>
Python Code:
cluster = '<qumulo-cluster>' # Qumulo cluster hostname or IP where you're setting up users
api_user = '<qumulo-user>' # Qumulo api user name
api_password = '<qumulo-password>' # Qumulo api password
base_dir = 'users'
user_name = 'tommy' # the new "user" to set up.
import os
import sys
import traceback
from qumulo.rest_client import RestClient
from qumulo.rest.nfs import NFSRestriction
full_path = '/'+ base_dir + '/' + user_name
rc = RestClient(cluster, 8000)
rc.login(api_user, api_password)
def create_dir(rc, name, dir_path='/'):
try:
rc.fs.create_directory(name = name, dir_path = dir_path)
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
print("Exception: %s" % exc_value)
# Create base user directory, if it doesn't already exist
create_dir(rc, name=base_dir, dir_path='/')
dir_res = rc.fs.create_directory(name=user_name, dir_path='/'+ base_dir)
print("Directory '%s' created with id: %s" % (full_path, dir_res['file_number']))
dir_id = dir_res['file_number']
quota_res = rc.quota.create_quota(id_ = dir_id, limit_in_bytes = 20000000000)
nfs_res = rc.nfs.nfs_add_share(export_path = '/' + user_name,
fs_path = full_path,
description = "%s home directory" % user_name,
restrictions = [NFSRestriction({
'read_only': False,
'host_restrictions': [],
'user_mapping': 'NFS_MAP_NONE',
'map_to_user_id': '0'})]
)
print("NFS export created: %s with id %s" % (full_path, nfs_res['id']))
smb_res = rc.smb.smb_add_share(share_name = user_name,
fs_path = full_path,
description = "%s home directory" % user_name
)
print("SMB share created: %s with id %s" % (full_path, smb_res['id']))
snap_res = rc.snapshot.create_policy(name = "User %s" % user_name,
schedule_info = {"creation_schedule":
{"frequency":"SCHEDULE_DAILY_OR_WEEKLY",
"hour":2,"minute":15,
"on_days":["MON","TUE","WED","THU","FRI","SAT","SUN"],
"timezone":"America/Los_Angeles"},
"expiration_time_to_live":"7days"
},
directory_id = str(dir_id),
enabled = True)
print("Snapshot policy created with id %s" % snap_res['id'])
rc.quota.delete_quota(id_ = quota_res['id'])
rc.snapshot.delete_policy(policy_id = snap_res['id'])
rc.smb.smb_delete_share(id_ = smb_res['id'])
rc.nfs.nfs_delete_share(id_ = nfs_res['id'])
if full_path != '/': # small sanity check since tree delete is rather powerful.
rc.fs.delete_tree(path = full_path)
print("Everything is cleaned up!")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create directory
Step2: Create 20GB Quota
Step3: Create NFS export
Step4: Create SMB share
Step5: Set up snapshot policy
Step6: Clean up everything
|
7,770
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-3', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
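# e.g. (hypothetical entries -- substitute the real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Roe", "john.roe@example.org")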
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
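#     For illustration only (not an actual model description), a value from the
#     list above would be recorded by adding, e.g.:
#     DOC.set_value("Online")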
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
7,771
|
<ASSISTANT_TASK:>
Python Code:
# Run the datacleaning notebook to get all the variables
%run 'Teknisk Tirsdag - Data Cleaning.ipynb'
corr = overall_set.corr()
fig = plt.figure(figsize=(20, 16))
ax = sb.heatmap(corr, xticklabels=corr.columns.values,
yticklabels=corr.columns.values,
linewidths=0.25, vmax=1.0, square=True,
linecolor='black', annot=False
)
plt.show()
overall_set['label'] = overall_set['Club'].isin(topklub_set.Club).astype(int)
y = overall_set['label']
X = overall_set.iloc[:,0:-1].select_dtypes(include=['float64', 'int64'])
overall_set.groupby('label').mean()
# fetch the necessary packages from the scikit-learn library (great in general if you want to do data science)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
model = LogisticRegression()
model = model.fit(X,y)
model.score(X,y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
print('Training set size: {} - Test set size: {}'.format(len(X_train), len(X_test)))
model2 = LogisticRegression()
model2 = model2.fit(X_train, y_train)
model2.score(X_train, y_train)
y_pred = model2.predict(X_test)
y_probs = model2.predict_proba(X_test)
# Evaluation metrics
from sklearn import metrics
print('The accuracy of the logistic regression predictions on the test set is {:.0f}'.format(100*metrics.accuracy_score(y_test, y_pred))+'%', '\n')
print('The area under the ROC curve is {:.0f}'.format(100*metrics.roc_auc_score(y_test, y_probs[:, 1]))+'%')
confusion_matrix = metrics.confusion_matrix(y_test, y_pred)
print(confusion_matrix)
print(metrics.classification_report(y_test, y_pred))
# 10-folds cross-validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=10)
print(scores,'\n')
print(scores.mean())
# Exercise cell (original note: "Remove None and FILL ME IN").
# One possible fill-in, assuming dansker_set carries the same numeric feature columns as X:
dansker_pred = model2.predict(dansker_set[X.columns])
dansker_probs = model2.predict_proba(dansker_set[X.columns])
dansker_set_df = dansker_set.copy()
dansker_set_df[['prob1','prob2']] = pd.DataFrame(dansker_probs, index=dansker_set.index)
dansker_set_df['Probabilities [0,1]'] = dansker_set_df[['prob1','prob2']].values.tolist()
dansker_set_df['Prediction'] = pd.Series(dansker_pred, index=dansker_set.index)
del dansker_set_df['prob1'], dansker_set_df['prob2']
# dansker_set_df.head()
dansker_set_df.loc[:,'pred=1'] = dansker_set_df['Probabilities [0,1]'].map(lambda x: x[1]).sort_values(ascending=False)
dansker_sorted = dansker_set_df.sort_values('pred=1', ascending=False)
dansker_sorted = dansker_sorted[['Name', 'Club', 'Overall', 'Potential', 'Probabilities [0,1]', 'Prediction']]
dansker_sorted.loc[:,'in'] = np.arange(1, len(dansker_set_df)+1)
dansker_sorted.set_index('in')
dansker_sorted[dansker_sorted['Club'].isin(top_clubs)].set_index('in')
dansker_sorted.loc[dansker_sorted.Name == 'N. Bendtner'].set_index('in')
df.loc[df.Name == 'N. Bendtner']
top_df = df[df.Club.isin(top_clubs)]
top_df[top_df.Overall < 70].sort_values('Overall', ascending=True)
bund_df = df[~df.Club.isin(top_clubs)]
bund_df[bund_df.Overall > 70]
top_clubs
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After loading our cleaned data, where we remind ourselves that we have
Step2: What we see here is a correlation matrix. The darker the colors, the higher the correlation; red for positive and blue for negative correlation. <br>
Step3: We can take a high-level look at the numbers for the 2 classes.
Step4: Observations
Step5: We now fit a logistic regression classifier to our data, fitting a model so it can recognize whether a player plays for a top club or not, and evaluate the result
Step6: So our model is right in
Step7: And we are now ready to try again!
Step8: The model now matches
Step9: That looks quite reasonable.<br>
Step10: The result tells us that we have {{confusion_matrix[0,0]}}+{{confusion_matrix[1,1]}} = {{confusion_matrix[0,0]+confusion_matrix[1,1]}} correct predictions and {{confusion_matrix[0,1]}}+{{confusion_matrix[1,0]}} = {{confusion_matrix[0,1]+confusion_matrix[1,0]}} incorrect ones (see the sketch after this list)
Step11: Logistic regression with cross-validation
Step12: So here the model performs on average
Step13: The model has found {{np.bincount(dansker_pred)[0]}} zeros and {{np.bincount(dansker_pred)[1]}} ones
Step14: And sort the list so the best Danish players are at the top, adding an index so we get a better overview
Step15: After a fine hat-trick against Ireland, there can hardly be any doubt that King Christian takes his place on the throne
Step16: One might wonder what Jacob Larsen is doing at a top club like Borussia Dortmund, but a quick googling shows that he was simply headhunted to the club at the age of 16.
Step17: Exercise
Step18: Extra play/analysis exercises
Step19: So we can see that they are betting on youth, whose future potential probably accounts for their place at a big club.<br>
Step20: Maybe the 22 clubs we have selected are not quite enough to describe top clubs
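A minimal sketch of the arithmetic behind Step10 (assuming the confusion_matrix computed in the code above); it only illustrates how the counts combine:
correct = confusion_matrix[0, 0] + confusion_matrix[1, 1]
incorrect = confusion_matrix[0, 1] + confusion_matrix[1, 0]
print(correct, incorrect, correct / confusion_matrix.sum())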
|
7,772
|
<ASSISTANT_TASK:>
Python Code:
sns.displot(data=penguins, x="flipper_length_mm", kind="ecdf")
sns.displot(data=penguins, x="flipper_length_mm", kde=True)
sns.displot(data=penguins, x="flipper_length_mm", y="bill_length_mm")
sns.displot(data=penguins, x="flipper_length_mm", y="bill_length_mm", kind="kde")
g = sns.displot(data=penguins, x="flipper_length_mm", y="bill_length_mm", kind="kde", rug=True)
sns.displot(data=penguins, x="flipper_length_mm", hue="species", kind="kde")
sns.displot(data=penguins, x="flipper_length_mm", hue="species", multiple="stack")
sns.displot(data=penguins, x="flipper_length_mm", hue="species", col="sex", kind="kde")
sns.displot(
data=penguins, y="flipper_length_mm", hue="sex", col="species",
kind="ecdf", height=4, aspect=.7,
)
g = sns.displot(
data=penguins, y="flipper_length_mm", hue="sex", col="species",
kind="kde", height=4, aspect=.7,
)
g.set_axis_labels("Density (a.u.)", "Flipper length (mm)")
g.set_titles("{col_name} penguins")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: While in histogram mode, it is also possible to add a KDE curve
Step2: To draw a bivariate plot, assign both x and y
Step3: Currently, bivariate plots are available only for histograms and KDEs
Step4: For each kind of plot, you can also show individual observations with a marginal "rug"
Step5: Additional keyword arguments are passed to the appropriate underlying plotting function, allowing for further customization
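For instance, a minimal sketch (assuming the same seaborn import and penguins DataFrame used above): fill and bw_adjust are not displot arguments themselves, they are simply forwarded to the underlying kdeplot:
sns.displot(data=penguins, x="flipper_length_mm", kind="kde", fill=True, bw_adjust=0.5)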
|
7,773
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
n=np.random.standard_normal?
n=np.random.standard_normal
n=np.random.randn
n=np.random.randn
def random_line(m, b, sigma, size=10):
Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
    N = np.random.normal(0, sigma, size)  # per-point Gaussian noise with standard deviation sigma
    x = np.linspace(-1.0, 1.0, size)
    if sigma == 0:
        y = m * x + b
    else:
        y = m * x + b + N
    return x, y
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
def ticks_out(ax):
Move the ticks to the outside of the box.
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
Plot a random line with slope m, intercept b and size points.
    x, y = random_line(m, b, sigma, size)
    plt.scatter(x, y, color=color)
plt.ylim(-10.0,10.0)
plt.vlines(0,-10,10)
plt.hlines(0,-1,1)
plt.box(False)
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
interact(plot_random_line,m=(-10.0,10.0),b=(-5.0,5.0),sigma=(0,5.0,.01),size=(10,100,10),color=('r','g','b'))
#### assert True # use this cell to grade the plot_random_line interact
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Line with Gaussian noise
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using
|
7,774
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# Filled in following the shape hints; mirrors the encoder/decoder pattern used above.
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Network Architecture
Step2: Training
Step3: Denoising
Step4: Checking out the performance
|
7,775
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import matplotlib.pyplot as plt
import numpy as np
import os
import tarfile
import urllib
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
import cPickle as pickle
url = 'http://yaroslavvb.com/upload/notMNIST/'
def maybe_download(filename, expected_bytes):
Download a file if not present, and make sure it's the right size.
if not os.path.exists(filename):
filename, _ = urllib.urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print 'Found and verified', filename
else:
raise Exception(
'Failed to verify' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
num_classes = 10
def extract(filename):
tar = tarfile.open(filename)
tar.extractall()
tar.close()
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
data_folders = [os.path.join(root, d) for d in sorted(os.listdir(root))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
        num_classes, len(data_folders)))
print data_folders
return data_folders
train_folders = extract(train_filename)
test_folders = extract(test_filename)
from IPython.display import Image
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load(data_folders, min_num_images, max_num_images):
dataset = np.ndarray(
shape=(max_num_images, image_size, image_size), dtype=np.float32)
labels = np.ndarray(shape=(max_num_images), dtype=np.int32)
label_index = 0
image_index = 0
for folder in data_folders:
print folder
for image in os.listdir(folder):
if image_index >= max_num_images:
raise Exception('More images than expected: %d >= %d' % (
            image_index, max_num_images))
image_file = os.path.join(folder, image)
#if image_index % 20000 == 0:
# display(Image(filename=image_file))
#plt.show()
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
labels[image_index] = label_index
image_index += 1
except IOError as e:
print 'Could not read:', image_file, ':', e, '- it\'s ok, skipping.'
label_index += 1
num_images = image_index
dataset = dataset[0:num_images, :, :]
labels = labels[0:num_images]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' % (
num_images, min_num_images))
print 'Full dataset tensor:', dataset.shape
print 'Mean:', np.mean(dataset)
print 'Standard deviation:', np.std(dataset)
print 'Labels:', labels.shape
return dataset, labels
train_dataset, train_labels = load(train_folders, 450000, 550000)
test_dataset, test_labels = load(test_folders, 18000, 20000)
from string import join
from matplotlib.pyplot import imshow
from time import sleep
num_png = 0
directory = "notMNIST_large/A"
for png in os.listdir(directory):
#print png
num_png += 1
#display(Image(os.path.join(directory,png)))
#sleep(2)
if num_png > 10:
break
#Image(filename="notMNIST_large/A/ISBKYW1pcm9xdWFpICEudHRm.png")
np.flatnonzero(train_labels == 0)[:10]  # example usage: indices of the first ten class-0 ('A') images
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
for label in range(0,10):
idxs = np.random.choice(np.flatnonzero(train_labels == label),10,replace=False)
for i,idx in enumerate(idxs):
pos = i*10+1+label
plt.subplot(10,10,pos)
plt.imshow(train_dataset[idx,])
plt.axis("off")
plt.show()
#plt.imshow(train_dataset[0,])
np.random.seed(133)
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
for label in range(0,10):
idxs = np.random.choice(np.flatnonzero(train_labels == label),10,replace=False)
for i,idx in enumerate(idxs):
pos = i*10+1+label
plt.subplot(10,10,pos)
plt.imshow(train_dataset[idx,])
plt.axis("off")
plt.show()
#plt.imshow(train_dataset[0,])
def numexample(labels,label):
return np.sum(labels == label)
for i in range(0,10):
print i,"\t",numexample(train_labels,i)
train_size = 200000
valid_size = 10000
valid_dataset = train_dataset[:valid_size,:,:]
valid_labels = train_labels[:valid_size]
train_dataset = train_dataset[valid_size:valid_size+train_size,:,:]
train_labels = train_labels[valid_size:valid_size+train_size]
print 'Training', train_dataset.shape, train_labels.shape
print 'Validation', valid_dataset.shape, valid_labels.shape
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print 'Unable to save data to', pickle_file, ':', e
raise
statinfo = os.stat(pickle_file)
print 'Compressed pickle size:', statinfo.st_size
train_dataset.shape,test_dataset.shape,valid_dataset.shape
def similar(dataset1,dataset2):
print dataset2.shape
nexm,nrow,ncol = dataset1.shape
dataset1 = np.reshape(dataset1[0:5000,],(5000,nrow*ncol))
nexm,nrow,ncol = dataset2.shape
dataset2 = np.reshape(dataset2[0:1000,],(1000,1,nrow*ncol))
dataset = dataset1 - dataset2
return dataset.T
def overlap(S):
#m = np.mean(S)
return np.sum(S == 0)
STrainVal = similar(train_dataset,valid_dataset)
print "train Val overlap: ",overlap(STrainVal)
SValTest = similar(valid_dataset,test_dataset)
print "Val Test overlap: ", overlap(SValTest)
train = np.reshape(train_dataset,(train_dataset.shape[0],28*28))
#val = valid_dataset
#test = test_dataset
train = train[0:30000,]
train_labels = train_labels[0:30000,]
clf = LogisticRegression()
clf.fit(train,train_labels)
train.shape,train_labels.shape
train_labels.shape
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step3: Extract the dataset from the compressed .tar.gz file.
Step4: Problem 1
Step5: Problem 2
Step6: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step7: Problem 3
Step8: Problem 4
Step9: Prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed.
Step10: Finally, let's save the data for later reuse
Step11: Problem 5
Step12: Problem 6
|
7,776
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function
%matplotlib inline
path = "data/17flowers/"
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
# check that ~/.keras/keras.json is set for Theano and includes "image_data_format": "channels_first"
from importlib import reload # Python 3
import utils; reload(utils)
from utils import plots
# As large as you can, but no larger than 64 is recommended.
batch_size = 8
#batch_size = 64
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
# please note that the vgg model is compiled inside the finetune method.
vgg.finetune(batches)
vgg.fit(batches, val_batches, batch_size, nb_epoch=1)
batches = vgg.get_batches(path+'test', batch_size=5)
imgs,labels = next(batches)
print(labels)
#plots(imgs, titles=labels)
plots(imgs)
vgg.predict(imgs, True)
vgg.classes[:20]
# make new Vgg16 instance
vgg2 = Vgg16()
# need to compile the model before using
vgg2.compile()
batches = vgg2.get_batches(path+'test', batch_size=5)
imgs,labels = next(batches)
print(labels)
plots(imgs)
vgg2.predict(imgs, True)
vgg2.classes[:20]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use a pretrained VGG model with our Vgg16 class
Step2: The original pre-trained Vgg16 class classifies images into one of the 1000 categories. This number of categories depends on the dataset which Vgg16 was trained with. (http
Step3: Use Vgg16 for basic image recognition
Step4: With finetuning, the model vgg outputs the probability of 17 classes. As you can see below, they are generally relevant to the input images.
Step5: Without finetune?
Step6: Again get some images for prediction
Step7: Without finetuning, the model vgg2 outputs the probability of 1000 classes. As you can see below, they are sometimes totally irrelevant.
|
7,777
|
<ASSISTANT_TASK:>
Python Code:
import sqlite3
import pandas as pd
from pprint import pprint
from pandas import DataFrame
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import math
import numpy as np
conn = sqlite3.connect('bicycle.db')
c=conn.cursor()
c.execute('SELECT LoTemp, Precip, Manhattan FROM bicycle')
data=c.fetchall()
Data=DataFrame(data, columns=[ 'LoTemp', 'Precip', 'Manhattan'])
n=len(Data)
newData = DataFrame( index=range(0,n-1),columns=[ 'LoTemp', 'Precip', 'Manhattan'])
Data.loc[1,'LoTemp']
newData.loc[1,'LoTemp']=1
for x in range(0,n-1):
for y in [ 'LoTemp', 'Precip', 'Manhattan']:
if (Data.loc[x,y]<Data.loc[x+1,y]):
newData.loc[x, y]=1
elif (Data.loc[x,y]>Data.loc[x+1,y]):
newData.loc[x, y]=-1
else:
newData.loc[x, y]=0
datalist=newData.values.tolist()
from pymining import seqmining
freq_seqs = seqmining.freq_seq_enum(datalist, 20)
sorted(freq_seqs)
conn.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1
Step1: Step2
Step2: Step3
|
7,778
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
# use the seaborn whitegrid style (white background)
plt.style.use('seaborn-whitegrid')
import numpy as np
x = np.linspace(0, 10, 30)
y = np.sin(x)
plt.plot(x, y, 'o', color='black');
rng=np.random.RandomState(0)
for marker in ['o', '.', ',', 'x', '+', 'v', '^', '<', '>', 's', 'd']:
plt.plot(rng.rand(5),rng.rand(5),marker,label="marker for '{0}'".format(marker))
plt.legend(numpoints=1)
plt.xlim(0, 1.8);
plt.plot(x, y, '-ok');
plt.plot(x, y, '-p', color='gray',
markersize=15, linewidth=4,
markerfacecolor='white',
markeredgecolor='gray',
markeredgewidth=2)
plt.ylim(-1.2, 1.2);
plt.scatter(x, y, marker='o');
rng = np.random.RandomState(0)
x = rng.randn(100)
y = rng.randn(100)
# random colors
colors = rng.rand(100)
sizes = 1000 * rng.rand(100)
# set the transparency
plt.scatter(x, y, c=colors, s=sizes, alpha=0.3,
cmap='viridis')
plt.colorbar(); # show color scale
from sklearn.datasets import load_iris
iris = load_iris()
features = iris.data.T
plt.scatter(features[0], features[1], alpha=0.2,
s=100*features[3], c=iris.target, cmap='viridis')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1]);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Making scatter plots with plt.plot
Step2: The third argument in the function call is a character that represents the type of symbol used for plotting. Just as you can specify options such as '-' or '--' to control the line style, the marker style has its own set of short string codes. The full list of available symbols can be found in the plt.plot documentation or in Matplotlib's online documentation. Most of the possibilities are fairly intuitive; here we show some of the more common ones:
Step3: For even more possibilities, these character codes can be combined with line and color codes to plot the points together with the lines connecting them:
Step4: Additional keyword arguments to plt.plot specify a wide range of properties of the lines and markers:
Step5: This flexibility of the plt.plot function allows for a wide variety of visualization options. For a full description of the available options, please refer to the plt.plot documentation.
Step6: The main difference between plt.scatter and plt.plot is that plt.scatter can be used to create scatter plots in which the properties of each individual point (size, face color, edge color, etc.) can be individually controlled or mapped to data.
Step7: Note that the color argument is automatically mapped to a color scale (shown here by the colorbar() command) and that the size argument is given in pixels. In this way, the color and size of the points can be used to convey information in the visualization and to visualize multidimensional data.
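As a small illustrative sketch (assuming the x, y, colors and sizes arrays defined in the code above), the colorbar can also be given a label so the mapped quantity is explicit:
plt.scatter(x, y, c=colors, s=sizes, alpha=0.3, cmap='viridis')
plt.colorbar(label='color value')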
|
7,779
|
<ASSISTANT_TASK:>
Python Code:
import quantiacsToolbox
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn import svm
%matplotlib inline
%%html
<style>
table {float:left}
</style>
F_AD = pd.read_csv('./tickerData/F_AD.txt')
CLOSE = np.array(F_AD.loc[:252-1, [' CLOSE']])
plt.plot(CLOSE)
X = np.concatenate([CLOSE[i:i+5] for i in range(252-5)], axis=1).T
y = np.sign((CLOSE[5:] - CLOSE[5-1: -1]).T[0])
clf = svm.SVC()
clf.fit(X, y)
clf.predict(CLOSE[-5:].T)
F_AD.loc[251:252, ['DATE', ' CLOSE']]
class myStrategy(object):
def myTradingSystem(self, DATE, OPEN, HIGH, LOW, CLOSE, VOL, OI, P, R, RINFO, exposure, equity, settings):
For 4 lookback days and 3 markets, CLOSE is a numpy array looks like
[[ 12798. 11537.5 9010. ]
[ 12822. 11487.5 9020. ]
[ 12774. 11462.5 8940. ]
[ 12966. 11587.5 9220. ]]
# define helper function
# use close price predict the trend of the next day
def predict(CLOSE, gap):
lookback = CLOSE.shape[0]
X = np.concatenate([CLOSE[i:i + gap] for i in range(lookback - gap)], axis=1).T
y = np.sign((CLOSE[gap:lookback] - CLOSE[gap - 1:lookback - 1]).T[0])
y[y==0] = 1
clf = svm.SVC()
clf.fit(X, y)
return clf.predict(CLOSE[-gap:].T)
nMarkets = len(settings['markets'])
gap = settings['gap']
pos = np.zeros((1, nMarkets), dtype='float')
for i in range(nMarkets):
try:
pos[0, i] = predict(CLOSE[:, i].reshape(-1, 1),
gap, )
# for NaN data set position to 0
except ValueError:
pos[0, i] = 0.
return pos, settings
def mySettings(self):
Define your trading system settings here
settings = {}
# Futures Contracts
settings['markets'] = ['CASH', 'F_AD', 'F_BO', 'F_BP', 'F_C', 'F_CC', 'F_CD',
'F_CL', 'F_CT', 'F_DX', 'F_EC', 'F_ED', 'F_ES', 'F_FC', 'F_FV', 'F_GC',
'F_HG', 'F_HO', 'F_JY', 'F_KC', 'F_LB', 'F_LC', 'F_LN', 'F_MD', 'F_MP',
'F_NG', 'F_NQ', 'F_NR', 'F_O', 'F_OJ', 'F_PA', 'F_PL', 'F_RB', 'F_RU',
'F_S', 'F_SB', 'F_SF', 'F_SI', 'F_SM', 'F_TU', 'F_TY', 'F_US', 'F_W', 'F_XX',
'F_YM']
settings['lookback'] = 252
settings['budget'] = 10 ** 6
settings['slippage'] = 0.05
settings['gap'] = 5
return settings
result = quantiacsToolbox.runts(myStrategy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For developing and testing a strategy, we will use the raw data in the tickerData folder that has been downloaded via the Toolbox's loadData() function.
Step2: Now we can create samples.
Step3: Use svm to learn and predict
Step4: 1 shows that the close price will go up tomorrow.
Step7: Hooray! Our strategy successfully predicted the trend.
|
7,780
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-2', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
7,781
|
<ASSISTANT_TASK:>
Python Code:
# Tell ipython to load the matplotlib environment.
%matplotlib inline
import itertools
import pandas
import numpy
import seaborn
import matplotlib.pyplot
import tabulate
_DATA_FILEPATH = 'datagovdatasetsviewmetrics.csv'
_ROTATION_DEGREES = 90
_BOTTOM_MARGIN = 0.35
_COLOR_THEME = 'coolwarm'
_LABEL_X = 'Organizations'
_LABEL_Y = 'Views'
_TITLE = 'Organizations with Most Views'
_ORGANIZATION_COUNT = 10
_MAX_LABEL_LENGTH = 20
def read_data():
d = pandas.read_csv(_DATA_FILEPATH)
return d
def process_data(d):
# Group by organization.
def sum_views(df):
return sum(df['Views per Month'])
g = d.groupby('Organization Name').apply(sum_views)
# Sort by views (descendingly).
g.sort(ascending=False)
# Grab the first N to plot.
items = g.iteritems()
s = itertools.islice(items, 0, _ORGANIZATION_COUNT)
s = list(s)
# Sort them in ascending order, this time, so that the larger ones are on
# the right (in red) in the chart. This has a side-effect of flattening the
# generator while we're at it.
s = sorted(s, key=lambda (n, v): v)
# Truncate the names (otherwise they're unwieldy).
distilled = []
for (name, views) in s:
if len(name) > (_MAX_LABEL_LENGTH - 3):
name = name[:17] + '...'
distilled.append((name, views))
return distilled
def print_table(distilled):
headings = ['Organization', 'Views']
print(tabulate.tabulate(distilled, headers=headings))
def plot_chart(distilled):
# Split the series into separate vectors of labels and values.
labels_raw = []
values_raw = []
for (name, views) in distilled:
labels_raw.append(name)
values_raw.append(views)
labels = numpy.array(labels_raw)
values = numpy.array(values_raw)
# Create one plot.
seaborn.set(style="white", context="talk")
(f, ax) = matplotlib.pyplot.subplots(1)
b = seaborn.barplot(
labels,
values,
ci=None,
palette=_COLOR_THEME,
hline=0,
ax=ax,
x_order=labels)
# Set labels.
ax.set_title(_TITLE)
ax.set_xlabel(_LABEL_X)
ax.set_ylabel(_LABEL_Y)
# Rotate the x-labels (otherwise they'll overlap). Seaborn also doesn't do
# very well with diagonal labels so we'll go vertical.
b.set_xticklabels(labels, rotation=_ROTATION_DEGREES)
# Add some margin to the bottom so the labels aren't cut-off.
matplotlib.pyplot.subplots_adjust(bottom=_BOTTOM_MARGIN)
df = read_data()
distilled = process_data(df)
print_table(distilled)
plot_chart(distilled)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set configurables
Step2: We use pandas to read, group, and sort our data
Step3: Use the tabulate library to render a nice table. This is one of my most favorite Python tools
Step4: Use seaborn (one of the nicest chart-libraries for Python) to make a bar-chart. Notice that pandas is just a layer around numpy.
Step5: Run the program
|
7,782
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
import re
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
cnt = Counter(text)
vocab = sorted(cnt, key=cnt.get, reverse=True)
vocab_to_int, int_to_vocab = {}, {}
for idx, word in enumerate(vocab, 1):
vocab_to_int[word] = idx
int_to_vocab[idx] = word
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
intab = " ".join(.,";!?()).split() # too lazy to write a list by hand
intab.append("--")
intab.append("\n")
outtab = "<period> <comma> <quotation_mark> <semicolon> <exclamation_mark> <quesion_mark> <l_paren> <r_paren> <dash> <return>".split()
return {sym:token for sym,token in zip(intab, outtab)}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
tf_in = tf.placeholder(tf.int32, shape=[None, None], name="input")
tf_target = tf.placeholder(tf.int32, shape=[None, None], name="target")
tf_lr = tf.placeholder(tf.float32, None, name="learning_rate")
return tf_in, tf_target, tf_lr
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm1 = tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm1])
    initial_state = cell.zero_state(batch_size, tf.float32)
return cell, tf.identity(initial_state, name="initial_state")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
return tf.contrib.layers.embed_sequence(input_data, vocab_size, embed_dim)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
return outputs, tf.identity(final_state, name="final_state")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embedding = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embedding)
logits = tf.contrib.layers.fully_connected(
outputs,
vocab_size,
activation_fn=None,
weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
biases_initializer=tf.zeros_initializer())
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
a = np.array(int_text)
n_batches = int(len(a) / batch_size / seq_length)
shp = (n_batches, 2, batch_size, seq_length)
n = a.strides[0]
s = np.lib.stride_tricks.as_strided
strides=(shp[3]*n, n, shp[0]*shp[3]*n, n)
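    # Layout note: out[b, 0] is the b-th input batch and out[b, 1] the matching target batch
    # (the same data shifted forward by one element). Each of the batch_size rows walks its own
    # contiguous n_batches*seq_length slice of the text, and batch b takes the b-th
    # seq_length-long window of every row.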
out = s(a, shp, strides=strides)
out[-1,1,-1][-1] = out[0,0,0,0] # Satisfy last value of last batch = first value of first batch constraint
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# Number of Epochs
num_epochs = 40
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 1000
# Embedding Dimension Size
embed_dim = 200
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 10
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
_input = loaded_graph.get_tensor_by_name('input:0')
initial_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return _input, initial_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
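    # Sample the next word id from the predicted distribution; the first of 30 i.i.d. draws
    # has the same distribution as a single draw, so this is equivalent to one weighted pick.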
return int_to_vocab[np.random.choice(len(probabilities), 30, p=probabilities)[0]]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step10: Tokenize Punctuation
Step12: Preprocess all the data and save it
Step14: Check Point
Step16: Build the Neural Network
Step19: Input
Step22: Build RNN Cell and Initialize
Step25: Word Embedding
Step28: Build RNN
Step31: Build the Neural Network
Step34: Batches
Step36: Neural Network Training
Step38: Build the Graph
Step40: Train
Step42: Save Parameters
Step44: Checkpoint
Step47: Implement Generate Functions
Step50: Choose Word
Step52: Generate TV Script
|
7,783
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division # Python 2 compatibility if needed
# Builtin implementation, as a reference
from itertools import permutations as itertools_permutations
itertools_permutations([1, 2])
for p in itertools_permutations([1, 2]):
print(p)
list(itertools_permutations([1, 2, 3]))
%time len(list(itertools_permutations(list(range(4)))))
%time len(list(itertools_permutations(list(range(8)))))
%time len(list(itertools_permutations(list(range(9)))))
%timeit len(list(itertools_permutations(list(range(10)))))
def ins_all_positions(x, l):
Return a list of lists obtained from l by inserting x at every possible index.
res = []
for i in range(0, len(l) + 1):
res.append(l[:i] + [x] + l[i:])
return res
from functools import reduce
# reduce(lambda acc, p: acc + f(p), l, []) is the same as the concatenation of list f(p) for p in l
# Now the main permutations generator.
def first_permutations(iterable):
Second algorithm, insert-into-all-positions solution.
if len(iterable) == 0:
return []
# we must specify this edge case
elif len(iterable) == 1:
return [[iterable[0]]]
else:
x, xs = iterable[0], iterable[1:]
# reduce is needed instead of a simple sum(...) as sum() only works for numerical values
return reduce(lambda acc, p: acc + ins_all_positions(x, p), first_permutations(xs), [])
first_permutations([1, 2, 3])
%time len(list(first_permutations(list(range(4)))))
%time len(list(first_permutations(list(range(5)))))
%time len(list(first_permutations(list(range(6)))))
%time len(list(first_permutations(list(range(7)))))
%time len(list(first_permutations(list(range(8)))))
def rm(x, l):
List l without element x.
return [y for y in l if x != y]
def head_of_all(x, l):
List of lists from l where x is the head of all the lists.
return [[x] + p for p in l]
def second_permutations(iterable):
Second algorithm, fixed-head solution.
if len(iterable) == 0:
return []
# we must specify this edge case
elif len(iterable) == 1:
return [[iterable[0]]]
else:
return reduce(lambda acc, x: acc + head_of_all(x, second_permutations(rm(x, iterable))), iterable, [])
second_permutations([1, 2, 3])
%time len(list(second_permutations(list(range(4)))))
%time len(list(second_permutations(list(range(5)))))
%time len(list(second_permutations(list(range(6)))))
%time len(list(second_permutations(list(range(7)))))
%time len(list(second_permutations(list(range(8)))))
%time len(list(second_permutations(list(range(9)))))
from math import factorial
factorial(8)
left = False
right = True
def attach_direction(t, d=left):
Attach the direction d to all elements of array t.
return [(x, d) for x in t]
def remove_direction(t):
Remove the attached direction d to all elements of array t.
return [y for y, _ in t]
def swap(t, i, j):
Swap t[i] and t[j] in array t.
t[i], t[j] = t[j], t[i]
def is_movable(a, i):
Can a[i] be moved?
x, d = a[i]
if d == left:
return i > 0 and x > a[i - 1][0]
elif d == right:
return i < len(a) - 1 and x > a[i + 1][0]
else:
raise ValueError("unknown direction d = {}".format(d))
def move(a, i):
Move it if possible.
x, d = a[i]
if is_movable(a, i):
if d == left:
swap(a, i, i - 1)
elif d == right:
swap(a, i, i + 1)
else:
raise ValueError("unknown direction d = {}".format(d))
else:
raise ValueError("not movable")
def scan_largest_movable(a):
Find the largest movable element.
def aux(acc, i):
if i >= len(a):
return acc
else:
if not is_movable(a, i):
return aux(acc, i + 1)
else:
x, _ = a[i]
if acc is None:
return aux(i, i + 1)
else:
j = acc if x < a[acc][0] else i
return aux(j, i + 1)
return aux(None, 0)
def flip(d):
Flip direction d : left -> right, right -> left
return not d
def scan_flip_larger(x, a):
Scan to flip larger.
for i, (y, d) in enumerate(a):
if y > x:
a[i] = y, flip(d)
def third_permutations(iterable):
Third algorithm, Johnson-Trotter algorithm.
i = sorted(list(iterable)) # Required by the algorithm
# We attach directions, and we will only use the array a
a = attach_direction(i)
# First permutation
r = list(iterable)[:]
while True:
yield r[:] # A copy of the current permutation is yielded
i = scan_largest_movable(a)
if i is None: # No more permutation!
return  # raising StopIteration inside a generator becomes a RuntimeError under PEP 479 (Python 3.7+)
else:
x, _ = a[i]
move(a, i)
scan_flip_larger(x, a)
# The next permutation should not have direction information attached to it
r = remove_direction(a)
third_permutations([1, 2, 3])
list(third_permutations([1, 2, 3]))
%time len(list(third_permutations(list(range(4)))))
%time len(list(third_permutations(list(range(5)))))
%time len(list(third_permutations(list(range(6)))))
%time len(list(third_permutations(list(range(7)))))
%time len(list(third_permutations(list(range(8)))))
%time len(list(third_permutations(list(range(9)))))
%timeit len(list(itertools_permutations([1, 2, 3])))
%timeit len(list(third_permutations([1, 2, 3])))
%timeit len(list(itertools_permutations([1, 2, 3, 4, 5])))
%timeit len(list(third_permutations([1, 2, 3, 4, 5])))
from numpy.random import choice
def random_list_of_size_n(n=5, N=1000):
return list(choice(list(range(1, N + 1)), size=n, replace=False))
random_list_of_size_n(5)
%timeit len(list(itertools_permutations(random_list_of_size_n(5))))
%timeit len(list(third_permutations(random_list_of_size_n(5))))
%timeit len(list(itertools_permutations(random_list_of_size_n(6))))
%timeit len(list(third_permutations(random_list_of_size_n(6))))
def test(list_of_f, iterable):
Test that all functions in list_of_f give the same list of permutation on this iterable.
print("Testing for the list of functions {} ...".format([f.__name__ for f in list_of_f])) # DEBUG
result = True
print("Testing for the iterable {} ...".format(iterable)) # DEBUG
i = iterable
allperms = []
for f in list_of_f:
allperms.append(sorted([list(p) for p in f(iterable)]))
for i, pi in enumerate(allperms):
for j in range(i + 1, len(allperms)):
pj = allperms[j]
if pi != pj:
print(" - Function #{} ({.__name__}) gave a different list of permutations as function #{} ({.__name__}) ...".format(i, list_of_f[i], j, list_of_f[j])) # DEBUG
result = False
else:
print(" - Function #{} ({.__name__}) gave the same list of permutations as function #{} ({.__name__}) ...".format(i, list_of_f[i], j, list_of_f[j])) # DEBUG
return result
list_of_f = [itertools_permutations, first_permutations, second_permutations, third_permutations]
iterable = [1, 2, 3]
test(list_of_f, iterable)
iterable = [1, 2, 3, 4, 5]
test(list_of_f, iterable)
iterable = [1, 2, 3, 4, 5, 6]
test(list_of_f, iterable)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Reference implementation
Step2: This will obviously be the quickest implementation, and there is no hope of beating it with pure Python (in terms of computational speed), as it is written in C and not in Python.
Step3: What's that weird result? In fact, itertools.permutations() does not return the list of all permutations, but rather an iterator.
Step4: So, what's the advantage of returning an iterator and not a list of lists? Memory and time efficiency! (A short sketch illustrating this follows at the end of this list.)
Step6: There are $n!$ permutations to generate, so any algorithm needs $\Omega(n!)$ time to generate all of them, and that is approximately the behavior observed above.
Step8: Then we write a recursive function, following the description of the algorithm
Step9: We can try it out, but only on small lists, as it is not efficient.
Step10: And let's measure its efficiency on small lists of size $4,5,6,7,8$
Step12: $\implies$ This implementation takes about $8 s$ for a list with $n = 8$ elements
Step14: Note that with comparisons on indexes instead of comparisons on values, we could handle the general case with little extra difficulty.
Step16: And finally, the fixed-head algorithm is easy to implement, as a recursive function.
Step17: Let's try it out
Step18: And let's measure its efficiency on small lists of size $4,5,6,7,8$
Step19: $\implies$ this second algorithm is more efficient, as it requires only $0.6 s$ to generate the $8! = 40320$ different permutations of the list $[0, 1, 2, 3, 4, 5, 6, 7]$.
Step20: 3. Third algorithm
Step23: We will need a first function to attach a direction to every element of an array t, and then to remove them.
Step25: This classical function swap(t, i, j) exchanges the positions of the elements t[i] and t[j]
Step27: We first need to know if the element a[i] can be moved, according to its attached direction, to the left or right.
Step29: Then the function move(a, i) simply swaps a[i] to the left or right, if it is possible.
Step31: Then we need a function to scan the array a, from its beginning, to find the largest movable element.
Step33: Directions will be flipped, alternating left and right, with flip(d)
Step35: Then we need to scan the list and flip the directions of all elements larger than some x
Step37: We finally have all the pieces needed to implement the Johnson-Trotter algorithm
Step38: Yeay, we finally have an iterator on permutations of a list, instead of generating all of them.
Step39: And let's measure its efficiency on small lists of size $4,5,6,7,8$
Step40: $\implies$ the Johnson-Trotter algorithm is, as expected, quicker than the previous naive implementations, but it's still pretty slow compared to the reference implementation itertools.permutations.
Step41: However, IPython's %timeit function warns us that itertools.permutations could use caching, and that could bias the result.
Step43: 5. Testing our $3$ implementations
Step44: We will test and compare the reference implementation, itertools.permutations, with the three other implementations given above.
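A minimal sketch of the iterator advantage mentioned in Step4 (standard library only, shown purely for illustration): only the permutations you actually consume are ever built.
from itertools import permutations, islice
first_three = list(islice(permutations(range(10)), 3))  # builds 3 permutations, not all 10! of them
print(first_three)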
|
7,784
|
<ASSISTANT_TASK:>
Python Code:
# sklearn
# classes_ : the classes (labels) of the target Y
# class_count_ : the number of training samples in each class of the target Y
# feature_count_ : 1) Bernoulli distribution ;
# 2) multinomial distribution ;
# class_prior_ (Gaussian normal distribution) : P(Y)
# class_log_prior_ (Bernoulli, multinomial distributions) : log P(Y)
# theta_, sigma_ (Gaussian normal distribution)
# feature_log_prob_ (Bernoulli, multinomial distributions)
# predict_proba(x_new) : P(Y|X_new)
import numpy as np
import scipy as sp  # sp.stats.norm is used below to generate the Gaussian NB example data
from sklearn.naive_bayes import BernoulliNB
X = np.random.randint(2, size=(10, 4))
y = np.array([0,0,0,1,1,1,1,1,1,1])
print(X)
print(y)
clf_bern = BernoulliNB().fit(X, y)
clf_bern.classes_
clf_bern.class_count_
clf_bern.feature_count_
theta = np.exp(clf_bern.feature_log_prob_)
theta
x_new = np.array([1,1,1,0])
clf_bern.predict_proba([x_new])
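# Illustrative sketch (hedged assumption about the standard Bernoulli NB formula): reproduce
# predict_proba by hand. For binary features, log P(y|x) is proportional to
# class_log_prior_[y] + sum_i [ x_i*log(theta[y,i]) + (1-x_i)*log(1-theta[y,i]) ],
# with the Laplace smoothing already folded into theta above.
jll = clf_bern.class_log_prior_ + (np.log(theta) * x_new + np.log(1 - theta) * (1 - x_new)).sum(axis=1)
manual_proba = np.exp(jll - np.logaddexp.reduce(jll))
manual_proba  # should match clf_bern.predict_proba([x_new])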
from sklearn.naive_bayes import MultinomialNB
clf_mult = MultinomialNB().fit(X, y)
clf_mult.classes_
clf_mult.class_count_
clf_mult.feature_count_
theta = np.exp(clf_mult.feature_log_prob_)
theta
x_new = np.array([10,15,29,32])
clf_mult.predict_proba([x_new])
from sklearn.naive_bayes import GaussianNB
# Exam scores of student 0
X00 = sp.stats.norm(70, 10).rvs(10)
X01 = sp.stats.norm(80, 5).rvs(10)
X02 = sp.stats.norm(100, 5).rvs(10)
X03 = sp.stats.norm(40, 5).rvs(10)
# Exam scores of student 1
X10 = sp.stats.norm(90, 10).rvs(10)
X11 = sp.stats.norm(75, 5).rvs(10)
X12 = sp.stats.norm(60, 5).rvs(10)
X13 = sp.stats.norm(50, 10).rvs(10)
# Exam scores of student 2
X20 = sp.stats.norm(60, 10).rvs(10)
X21 = sp.stats.norm(55, 5).rvs(10)
X22 = sp.stats.norm(100, 5).rvs(10)
X23 = sp.stats.norm(25, 10).rvs(10)
# Case 1)
X0 = np.vstack([X00, X01, X02, X03]).T
X1 = np.vstack([X10, X11, X12, X13]).T
X2 = np.vstack([X20, X21, X22, X23]).T
X = np.vstack([X0, X1, X2])
# Label the students (= classes)
y0 = np.zeros(10)
y1 = np.ones(10)
y2 = np.ones(10) * 2
y = np.hstack([y0,y1,y2])
np.shape(X0)
np.shape(X)
np.shape(y)
clf_norm = GaussianNB().fit(X, y)
clf_norm.classes_
clf_norm.class_count_
clf_norm.class_prior_
clf_norm.theta_
clf_norm.sigma_
x_new = np.array([70, 60, 75, 35])
clf_norm.predict_proba([x_new])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1) Bernoulli naive Bayes model
Step2: 2) Multinomial naive Bayes model
Step3: 3) Gaussian (normal) naive Bayes model
|
7,785
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([[10,50,30],[60,20,40]])
result = np.unravel_index(a.argmax(), a.shape)
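# Worked example: a.argmax() scans the flattened array [10, 50, 30, 60, 20, 40], so it returns 3
# (the value 60), and unravel_index maps that flat index back to (row, col) = (1, 0).
print(result)  # -> (1, 0)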
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
7,786
|
<ASSISTANT_TASK:>
Python Code:
from theano.sandbox import cuda
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
#path = "data/fish/sample/"
path = "data/fish/"
batch_size=64
batches = get_batches(path+'train', batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
raw_filenames = [f.split('/')[-1] for f in filenames]
raw_test_filenames = [f.split('/')[-1] for f in test_filenames]
raw_val_filenames = [f.split('/')[-1] for f in val_filenames]
%cd data/fish
%cd train
%mkdir ../valid
g = glob('*')
for d in g: os.mkdir('../valid/'+d)
g = glob('*/*.jpg')
shuf = np.random.permutation(g)
for i in range(500): os.rename(shuf[i], '../valid/' + shuf[i])
%mkdir ../sample
%mkdir ../sample/train
%mkdir ../sample/valid
from shutil import copyfile
g = glob('*')
for d in g:
os.mkdir('../sample/train/'+d)
os.mkdir('../sample/valid/'+d)
g = glob('*/*.jpg')
shuf = np.random.permutation(g)
for i in range(400): copyfile(shuf[i], '../sample/train/' + shuf[i])
%cd ../valid
g = glob('*/*.jpg')
shuf = np.random.permutation(g)
for i in range(200): copyfile(shuf[i], '../sample/valid/' + shuf[i])
%cd ..
%mkdir results
%mkdir sample/results
%cd ../..
from vgg16bn import Vgg16BN
model = vgg_ft_bn(8)
trn = get_data(path+'train')
val = get_data(path+'valid')
test = get_data(path+'test')
save_array(path+'results/trn.dat', trn)
save_array(path+'results/val.dat', val)
save_array(path+'results/test.dat', test)
trn = load_array(path+'results/trn.dat')
val = load_array(path+'results/val.dat')
test = load_array(path+'results/test.dat')
gen = image.ImageDataGenerator()
model.compile(optimizer=Adam(1e-3),
loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(trn, trn_labels, batch_size=batch_size, nb_epoch=3, validation_data=(val, val_labels))
model.save_weights(path+'results/ft1.h5')
model.load_weights(path+'results/ft1.h5')
conv_layers,fc_layers = split_at(model, Convolution2D)
conv_model = Sequential(conv_layers)
conv_feat = conv_model.predict(trn)
conv_val_feat = conv_model.predict(val)
conv_test_feat = conv_model.predict(test)
save_array(path+'results/conv_val_feat.dat', conv_val_feat)
save_array(path+'results/conv_feat.dat', conv_feat)
save_array(path+'results/conv_test_feat.dat', conv_test_feat)
conv_feat = load_array(path+'results/conv_feat.dat')
conv_val_feat = load_array(path+'results/conv_val_feat.dat')
conv_test_feat = load_array(path+'results/conv_test_feat.dat')
conv_val_feat.shape
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
BatchNormalization(axis=1),
Dropout(p/4),
Flatten(),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(p/2),
Dense(8, activation='softmax')
]
p=0.6
bn_model = Sequential(get_bn_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=3,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr = 1e-4
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=7,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path+'models/conv_512_6.h5')
bn_model.evaluate(conv_val_feat, val_labels)
bn_model.load_weights(path+'models/conv_512_6.h5')
sizes = [PIL.Image.open(path+'train/'+f).size for f in filenames]
id2size = list(set(sizes))
size2id = {o:i for i,o in enumerate(id2size)}
import collections
collections.Counter(sizes)
trn_sizes_orig = to_categorical([size2id[o] for o in sizes], len(id2size))
raw_val_sizes = [PIL.Image.open(path+'valid/'+f).size for f in val_filenames]
val_sizes = to_categorical([size2id[o] for o in raw_val_sizes], len(id2size))
trn_sizes = trn_sizes_orig-trn_sizes_orig.mean(axis=0)/trn_sizes_orig.std(axis=0)
val_sizes = val_sizes-trn_sizes_orig.mean(axis=0)/trn_sizes_orig.std(axis=0)
p=0.6
inp = Input(conv_layers[-1].output_shape[1:])
sz_inp = Input((len(id2size),))
bn_inp = BatchNormalization()(sz_inp)
x = MaxPooling2D()(inp)
x = BatchNormalization(axis=1)(x)
x = Dropout(p/4)(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(p)(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(p/2)(x)
x = merge([x,bn_inp], 'concat')
x = Dense(8, activation='softmax')(x)
model = Model([inp, sz_inp], x)
model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit([conv_feat, trn_sizes], trn_labels, batch_size=batch_size, nb_epoch=3,
validation_data=([conv_val_feat, val_sizes], val_labels))
bn_model.optimizer.lr = 1e-4
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=8,
validation_data=(conv_val_feat, val_labels))
import ujson as json
anno_classes = ['alb', 'bet', 'dol', 'lag', 'other', 'shark', 'yft']
def get_annotations():
annot_urls = {
'5458/bet_labels.json': 'bd20591439b650f44b36b72a98d3ce27',
'5459/shark_labels.json': '94b1b3110ca58ff4788fb659eda7da90',
'5460/dol_labels.json': '91a25d29a29b7e8b8d7a8770355993de',
'5461/yft_labels.json': '9ef63caad8f076457d48a21986d81ddc',
'5462/alb_labels.json': '731c74d347748b5272042f0661dad37c',
'5463/lag_labels.json': '92d75d9218c3333ac31d74125f2b380a'
}
cache_subdir = os.path.abspath(os.path.join(path, 'annos'))
url_prefix = 'https://kaggle2.blob.core.windows.net/forum-message-attachments/147157/'
if not os.path.exists(cache_subdir):
os.makedirs(cache_subdir)
for url_suffix, md5_hash in annot_urls.iteritems():
fname = url_suffix.rsplit('/', 1)[-1]
get_file(fname, url_prefix + url_suffix, cache_subdir=cache_subdir, md5_hash=md5_hash)
get_annotations()
bb_json = {}
for c in anno_classes:
if c == 'other': continue # no annotation file for "other" class
j = json.load(open('{}annos/{}_labels.json'.format(path, c), 'r'))
for l in j:
if 'annotations' in l.keys() and len(l['annotations'])>0:
bb_json[l['filename'].split('/')[-1]] = sorted(
l['annotations'], key=lambda x: x['height']*x['width'])[-1]
bb_json['img_04908.jpg']
file2idx = {o:i for i,o in enumerate(raw_filenames)}
val_file2idx = {o:i for i,o in enumerate(raw_val_filenames)}
empty_bbox = {'height': 0., 'width': 0., 'x': 0., 'y': 0.}
for f in raw_filenames:
if not f in bb_json.keys(): bb_json[f] = empty_bbox
for f in raw_val_filenames:
if not f in bb_json.keys(): bb_json[f] = empty_bbox
bb_params = ['height', 'width', 'x', 'y']
def convert_bb(bb, size):
bb = [bb[p] for p in bb_params]
conv_x = (224. / size[0])
conv_y = (224. / size[1])
bb[0] = bb[0]*conv_y
bb[1] = bb[1]*conv_x
bb[2] = max(bb[2]*conv_x, 0)
bb[3] = max(bb[3]*conv_y, 0)
return bb
trn_bbox = np.stack([convert_bb(bb_json[f], s) for f,s in zip(raw_filenames, sizes)],
).astype(np.float32)
val_bbox = np.stack([convert_bb(bb_json[f], s)
for f,s in zip(raw_val_filenames, raw_val_sizes)]).astype(np.float32)
def create_rect(bb, color='red'):
return plt.Rectangle((bb[2], bb[3]), bb[1], bb[0], color=color, fill=False, lw=3)
def show_bb(i):
bb = val_bbox[i]
plot(val[i])
plt.gca().add_patch(create_rect(bb))
show_bb(0)
p=0.6
inp = Input(conv_layers[-1].output_shape[1:])
x = MaxPooling2D()(inp)
x = BatchNormalization(axis=1)(x)
x = Dropout(p/4)(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(p)(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(p/2)(x)
x_bb = Dense(4, name='bb')(x)
x_class = Dense(8, activation='softmax', name='class')(x)
model = Model([inp], [x_bb, x_class])
model.compile(Adam(lr=0.001), loss=['mse', 'categorical_crossentropy'], metrics=['accuracy'],
loss_weights=[.001, 1.])
model.fit(conv_feat, [trn_bbox, trn_labels], batch_size=batch_size, nb_epoch=3,
validation_data=(conv_val_feat, [val_bbox, val_labels]))
model.optimizer.lr = 1e-5
model.fit(conv_feat, [trn_bbox, trn_labels], batch_size=batch_size, nb_epoch=10,
validation_data=(conv_val_feat, [val_bbox, val_labels]))
pred = model.predict(conv_val_feat[0:10])
def show_bb_pred(i):
bb = val_bbox[i]
bb_pred = pred[0][i]
plt.figure(figsize=(6,6))
plot(val[i])
ax=plt.gca()
ax.add_patch(create_rect(bb_pred, 'yellow'))
ax.add_patch(create_rect(bb))
show_bb_pred(6)
model.evaluate(conv_val_feat, [val_bbox, val_labels])
model.save_weights(path+'models/bn_anno.h5')
model.load_weights(path+'models/bn_anno.h5')
trn = get_data(path+'train', (360,640))
val = get_data(path+'valid', (360,640))
plot(trn[0])
test = get_data(path+'test', (360,640))
save_array(path+'results/trn_640.dat', trn)
save_array(path+'results/val_640.dat', val)
save_array(path+'results/test_640.dat', test)
trn = load_array(path+'results/trn_640.dat')
val = load_array(path+'results/val_640.dat')
vgg640 = Vgg16BN((360, 640)).model
vgg640.pop()
vgg640.input_shape, vgg640.output_shape
vgg640.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
conv_val_feat = vgg640.predict(val, batch_size=32, verbose=1)
conv_trn_feat = vgg640.predict(trn, batch_size=32, verbose=1)
save_array(path+'results/conv_val_640.dat', conv_val_feat)
save_array(path+'results/conv_trn_640.dat', conv_trn_feat)
conv_test_feat = vgg640.predict(test, batch_size=32, verbose=1)
save_array(path+'results/conv_test_640.dat', conv_test_feat)
conv_val_feat = load_array(path+'results/conv_val_640.dat')
conv_trn_feat = load_array(path+'results/conv_trn_640.dat')
conv_test_feat = load_array(path+'results/conv_test_640.dat')
conv_layers,_ = split_at(vgg640, Convolution2D)
nf=128; p=0.
def get_lrg_layers():
return [
BatchNormalization(axis=1, input_shape=conv_layers[-1].output_shape[1:]),
Convolution2D(nf,3,3, activation='relu', border_mode='same'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(nf,3,3, activation='relu', border_mode='same'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(nf,3,3, activation='relu', border_mode='same'),
BatchNormalization(axis=1),
MaxPooling2D((1,2)),
Convolution2D(8,3,3, border_mode='same'),
Dropout(p),
GlobalAveragePooling2D(),
Activation('softmax')
]
lrg_model = Sequential(get_lrg_layers())
lrg_model.summary()
lrg_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=2,
validation_data=(conv_val_feat, val_labels))
lrg_model.optimizer.lr=1e-5
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=6,
validation_data=(conv_val_feat, val_labels))
lrg_model.save_weights(path+'models/lrg_nmp.h5')
lrg_model.load_weights(path+'models/lrg_nmp.h5')
lrg_model.evaluate(conv_val_feat, val_labels)
l = lrg_model.layers
conv_fn = K.function([l[0].input, K.learning_phase()], l[-4].output)
def get_cm(inp, label):
conv = conv_fn([inp,0])[0, label]
return scipy.misc.imresize(conv, (360,640), interp='nearest')
inp = np.expand_dims(conv_val_feat[0], 0)
np.round(lrg_model.predict(inp)[0],2)
plt.imshow(to_plot(val[0]))
cm = get_cm(inp, 0)
plt.imshow(cm, cmap="cool")
def get_lrg_layers():
return [
BatchNormalization(axis=1, input_shape=conv_layers[-1].output_shape[1:]),
Convolution2D(nf,3,3, activation='relu', border_mode='same'),
BatchNormalization(axis=1),
Convolution2D(nf,3,3, activation='relu', border_mode='same'),
BatchNormalization(axis=1),
Convolution2D(nf,3,3, activation='relu', border_mode='same'),
BatchNormalization(axis=1),
Convolution2D(8,3,3, border_mode='same'),
GlobalAveragePooling2D(),
Activation('softmax')
]
lrg_model = Sequential(get_lrg_layers())
lrg_model.summary()
lrg_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=2,
validation_data=(conv_val_feat, val_labels))
lrg_model.optimizer.lr=1e-5
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=6,
validation_data=(conv_val_feat, val_labels))
lrg_model.save_weights(path+'models/lrg_0mp.h5')
lrg_model.load_weights(path+'models/lrg_0mp.h5')
l = lrg_model.layers
conv_fn = K.function([l[0].input, K.learning_phase()], l[-3].output)
def get_cm2(inp, label):
conv = conv_fn([inp,0])[0, label]
return scipy.misc.imresize(conv, (360,640))
inp = np.expand_dims(conv_val_feat[0], 0)
plt.imshow(to_plot(val[0]))
cm = get_cm2(inp, 0)
cm = get_cm2(inp, 4)
plt.imshow(cm, cmap="cool")
plt.figure(figsize=(10,10))
plot(val[0])
plt.imshow(cm, cmap="cool", alpha=0.5)
def conv2d_bn(x, nb_filter, nb_row, nb_col, subsample=(1, 1)):
x = Convolution2D(nb_filter, nb_row, nb_col,
subsample=subsample, activation='relu', border_mode='same')(x)
return BatchNormalization(axis=1)(x)
def incep_block(x):
branch1x1 = conv2d_bn(x, 32, 1, 1, subsample=(2, 2))
branch5x5 = conv2d_bn(x, 24, 1, 1)
branch5x5 = conv2d_bn(branch5x5, 32, 5, 5, subsample=(2, 2))
branch3x3dbl = conv2d_bn(x, 32, 1, 1)
branch3x3dbl = conv2d_bn(branch3x3dbl, 48, 3, 3)
branch3x3dbl = conv2d_bn(branch3x3dbl, 48, 3, 3, subsample=(2, 2))
branch_pool = AveragePooling2D(
(3, 3), strides=(2, 2), border_mode='same')(x)
branch_pool = conv2d_bn(branch_pool, 16, 1, 1)
return merge([branch1x1, branch5x5, branch3x3dbl, branch_pool],
mode='concat', concat_axis=1)
inp = Input(vgg640.layers[-1].output_shape[1:])
x = BatchNormalization(axis=1)(inp)
x = incep_block(x)
x = incep_block(x)
x = incep_block(x)
x = Dropout(0.75)(x)
x = Convolution2D(8,3,3, border_mode='same')(x)
x = GlobalAveragePooling2D()(x)
outp = Activation('softmax')(x)
lrg_model = Model([inp], outp)
lrg_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=2,
validation_data=(conv_val_feat, val_labels))
lrg_model.optimizer.lr=1e-5
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=6,
validation_data=(conv_val_feat, val_labels))
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=10,
validation_data=(conv_val_feat, val_labels))
lrg_model.save_weights(path+'models/lrg_nmp.h5')
lrg_model.load_weights(path+'models/lrg_nmp.h5')
preds = model.predict([conv_test_feat, test_sizes], batch_size=batch_size*2)
gen = image.ImageDataGenerator()
test_batches = gen.flow(conv_test_feat, preds, batch_size=16)
val_batches = gen.flow(conv_val_feat, val_labels, batch_size=4)
batches = gen.flow(conv_feat, trn_labels, batch_size=44)
mi = MixIterator([batches, test_batches, val_batches])
bn_model.fit_generator(mi, mi.N, nb_epoch=8, validation_data=(conv_val_feat, val_labels))
def do_clip(arr, mx): return np.clip(arr, (1-mx)/7, mx)
lrg_model.evaluate(conv_val_feat, val_labels, batch_size*2)
preds = model.predict(conv_test_feat, batch_size=batch_size)
preds = preds[1]
test = load_array(path+'results/test_640.dat')
test = load_array(path+'results/test.dat')
preds = conv_model.predict(test, batch_size=32)
subm = do_clip(preds,0.82)
subm_name = path+'results/subm_bb.gz'
# classes = sorted(batches.class_indices, key=batches.class_indices.get)
classes = ['ALB', 'BET', 'DOL', 'LAG', 'NoF', 'OTHER', 'SHARK', 'YFT']
submission = pd.DataFrame(subm, columns=classes)
submission.insert(0, 'image', raw_test_filenames)
submission.head()
submission.to_csv(subm_name, index=False, compression='gzip')
FileLink(subm_name)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sometimes it's helpful to have just the filenames, without the path.
Step2: Setup dirs
Step3: Basic VGG
Step4: Precompute convolutional output
Step5: Train model
Step6: Multi-input
Step7: Then we one-hot encode them (since we want to treat them as categorical) and normalize the data.
Step8: To use this additional "meta-data", we create a model with multiple input layers - sz_inp will be our input for the size information.
Step9: When we compile the model, we have to specify all the input layers in an array.
Step10: And when we train the model, we have to provide all the input layers' data in an array.
Step11: The model did not show an improvement by using the leakage, other than in the early epochs. This is most likely because the information about what boat the picture came from is readily identified from the image itself, so the meta-data turned out not to add any additional information.
Step12: For any images that have no annotations, we'll create an empty bounding box.
Step13: Finally, we convert the dictionary into an array, and convert the coordinates to our resized 224x224 images.
Step14: Now we can check our work by drawing one of the annotations.
Step15: Create & train model
Step16: Since we have multiple outputs, we need to provide them to the model constructor in an array, and we also need to say what loss function to use for each. We also weight the bounding box loss function down by 1000x since the scale of the cross-entropy loss and the MSE is very different. (A short worked illustration of this weighting follows this list.)
Step17: Excitingly, it turned out that the classification model is much improved by giving it this additional task. Let's see how well the bounding box model did by taking a look at its output.
Step18: The image shows that it can find fish that are tricky for us to see!
Step19: Larger size
Step20: The image shows that things are much clearer at this size.
Step21: We can now create our VGG model - we'll need to tell it we're not using the normal 224x224 images, which also means it won't include the fully connected layers (since they don't make sense for non-default sizes). We will also remove the last max pooling layer, since we don't want to throw away information yet.
Step22: We can now pre-compute the output of the convolutional part of VGG.
Step23: Fully convolutional net (FCN)
Step24: I'm not using any dropout, since I found I got better results without it.
Step25: When I submitted the results of this model to Kaggle, I got the best single model results of any shown here (ranked 22nd on the leaderboard as at Dec-6-2016.)
Step26: Another benefit of this kind of model is that the last convolutional layer has to learn to classify each part of the image (since there's only an average pooling layer after). Let's create a function that grabs the output of this layer (which is the 4th-last layer of our model).
Step27: We have to add an extra dimension to our input since the CNN expects a 'batch' (even if it's just a batch of one).
Step28: The heatmap shows that (at very low resolution) the model is finding the fish!
Step29: All convolutional net heatmap
Step30: Create heatmap
Step31: Inception mini-net
Step32: Pseudo-labeling
Step33: Submit
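A short worked illustration of the loss weighting from the bounding-box step above (assuming the usual weighted-sum combination of multi-output losses): with loss_weights=[.001, 1.] the optimizer minimizes total = 0.001 * MSE(bbox) + 1.0 * crossentropy(class), so a bounding-box error of roughly 1000 squared pixels contributes about as much to the total loss as a cross-entropy of 1.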
|
7,787
|
<ASSISTANT_TASK:>
Python Code:
from pyoptools.all import *
from numpy import pi
L1=SphericalLens(radius=25,curvature_s1=1./100.,curvature_s2=-1./100,
thickness=10,material=material.schott["N-BK7"])
S=System(complist=[(L1,(0,0,100),(0,0,0))],n=1)
R=[Ray(pos=(0,0,0),dir=(0,.2,1),wavelength=.650),
Ray(pos=(0,0,0),dir=(0,-.2,1),wavelength=.650),
Ray(pos=(0,0,0),dir=(.2,0,1),wavelength=.650),
Ray(pos=(0,0,0),dir=(-.2,0,1),wavelength=.650),
Ray(pos=(0,0,0),dir=(0,0,1),wavelength=.650)]
S.ray_add(R)
S.propagate()
Plot3D(S,center=(0,0,100),size=(300,100),scale=2,rot=[(0,-pi/2,0),(pi/20,-pi/10,0)])
L1=SphericalLens(radius=25,curvature_s1=1./100.,curvature_s2=-1./100,thickness=10,material=material.schott["N-BK7"])
SEN1=CCD(size=(10,10))
S=System(complist=[(L1,(0,0,200),(0,0,0)),
(SEN1,(0,0,400),(0,0,0))],n=1)
R=[Ray(pos=(0,0,0),dir=(0,.1,1),wavelength=.650),
Ray(pos=(0,0,0),dir=(0,-.1,1),wavelength=.650),
Ray(pos=(0,0,0),dir=(.1,0,1),wavelength=.650),
Ray(pos=(0,0,0),dir=(-.1,0,1),wavelength=.650),
Ray(pos=(0,0,0),dir=(0,0,1),wavelength=.650)]
S.ray_add(R)
S.propagate()
Plot3D(S,center=(0,0,200),size=(450,100),scale=2,rot=[(0,-pi/2,0),(pi/20,-pi/10,0)])
spot_diagram(SEN1)
for C,R in SEN1.hit_list:
print(C)
L1=SphericalLens(radius=25,curvature_s1=1./100.,curvature_s2=-1./100,thickness=10,material=material.schott["N-BK7"])
SEN1=CCD(size=(10,10))
ST1=Stop(shape=Circular(radius=20), ap_shape=Circular(radius=5))
S=System(complist=[(L1,(0,0,200),(0,0,0)),
(SEN1,(0,0,400),(0,0,0)),
(ST1,(0,0,180),(0,0,0))],n=1)
R=[Ray(pos=(0,0,0),dir=(0,.1,1),wavelength=.650),
Ray(pos=(0,0,0),dir=(0,-.1,1),wavelength=.650),
Ray(pos=(0,0,0),dir=(.1,0,1),wavelength=.650),
Ray(pos=(0,0,0),dir=(-.1,0,1),wavelength=.650),
Ray(pos=(0,0,0),dir=(0,0,1),wavelength=.650)]
S.ray_add(R)
S.propagate()
Plot3D(S,center=(0,0,200),size=(450,100),scale=2,rot=[(0,-pi/2,0),(pi/20,-pi/10,0)])
BL1=Block(size=(15,15,30), material=material.schott["N-BK7"])
S=System(complist=[(BL1,(0,0,50),(pi/8,0,0)),])
S.ray_add(Ray(pos=(0,0,0),dir=(0,0,1),wavelength=.650))
S.propagate()
Plot3D(S,center=(0,0,50), size=(100,50),scale=5,rot=[(0,-pi/2,0)])
RAP1=RightAnglePrism(width=20, height=20, material=material.schott["N-BK7"] )
S=System(complist=[(RAP1,(0,0,50),(0,-3*pi/4,0)),])
S.ray_add(Ray(pos=(10,0,0),dir=(0,0,1),wavelength=.650))
S.propagate()
Plot3D(S,center=(0,0,50), size=(100,50),scale=5,rot=[(pi/2,0,0)])
M1=RectMirror(size=(25.0, 25.0, 3.0), reflectivity=0.5, material=material.schott["N-BK7"] )
M2=RoundMirror(radius=12.5, thickness=3,reflectivity=1., material=material.schott["N-BK7"] )
S=System(complist=[(M1,(0,0,50),(pi/4,0,0)),
(M2,(0,50,50),(pi/4,0,0)),
], n=1)
S.ray_add(Ray(pos=(0,0,0),dir=(0,0,1),wavelength=.650))
S.propagate()
Plot3D(S,center=(0,25,50), size=(150,100),scale=4,rot=[(0,-pi/2,0)])
BS=BeamSplittingCube()
S=System(complist=[(BS,(0,0,50),(0,0,0)),])
S.ray_add(Ray(pos=(0,0,0),dir=(0,0,1),wavelength=.650))
S.propagate()
Plot3D(S,center=(0,0,50), size=(100,150),scale=5,rot=[(pi/2,0,0)])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Spherical lens
Step2: Besides the SphericalLens, pyOptools has classes to create the following lenses
Step3: After the rays propagation is finished, it is possible to plot the spot diagram of the CCD. In this case it can be seen that the CCD is not located close to the image plane.
Step4: The sensor has an attribute called hit_list. This is a list of tuples, each containing the coordinates of a hit point and the ray that hit at that point. In the following example, the coordinates of the hit points are printed
Step5: Stop
Step6: Block
Step7: Right Angle Prism
Step8: Besides the RightAnglePrism, pyOptools has classes to create the following prisms
Step9: BeamSplittingCube
|
7,788
|
<ASSISTANT_TASK:>
Python Code::
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
EPOCHS = 15
train_max=0
test_max=0
for epoch in range(EPOCHS):
print("EPOCH:", epoch)
train(model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
print(f"\nMaximum training accuracy: {train_max}\n")
print(f"\nMaximum test accuracy: {test_max}\n")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
7,789
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function # only necessary if using Python 2.x
import matplotlib.pyplot as plt
import numpy as np
from pyshtools.shclasses import SHCoeffs, SHGrid, SHWindow
# Spherical harmonic coefficients are stored as a numpy array of
# dimension (2, lmax+1, lmax+1). The first column corresponds to
# the cosine [0], and sine [1] components, the second column
# corresponds to the spherical harmonic degree l, and the third
# column corresponds to the angular order m.
nl = 200
coeffs = np.zeros((2, nl, nl))
coeffs[0, 5, 2] = 1.
# Initialize the coefficient class. The default assumes 4-pi
# normalized coefficients, but the normalization convention
# can be specified by setting the optional parameters
# 'normalization' and 'csphase'. The normalization parameter
# can be either '4pi', 'ortho' or 'schmidt'. The Condon-Shortely
# phase is not used by default (1) but can be used by setting
# this equal to -1.
coeffs_l5m2 = SHCoeffs.from_array(coeffs)
coeffs_l5m2.plot_powerperdegree()
grid_l5m2 = coeffs_l5m2.expand('DH2')
grid_l5m2.plot_rawdata()
nl = 100 # l = [0, 199]
lmax = nl - 1
a = 10 # scale length
ls = np.arange(nl, dtype=np.float)
power = 1. / (1. + (ls / a) ** 2) ** 0.5
coeffs_global = SHCoeffs.from_random(power)
coeffs_global.plot_powerperband()
grid_global = coeffs_global.expand('DH2')
grid_global.plot_rawdata()
nl = 200
coeffs = np.zeros((2, nl, nl))
coeffs[0, 5, 0] = 1.
coeffs_l5m0 = SHCoeffs.from_array(coeffs)
# Rotations are specified by the three Euler angles alpha, beta, and gamma.
# There are several different convenctions for specifying these angles,
# and they can either provide the angles for rotating the physical body
# or coordinate system. Please read the documentation of this method
# before proceeding!
alpha = 0. # around z-axis
beta = 90. # around x-axis (lon=0)
gamma = 10. # around z-axis again
coeffs_l5m0_rot = coeffs_l5m0.rotate(alpha, beta, gamma, degrees=True)
fig, ax = coeffs_l5m0_rot.plot_powerperdegree(show=False)
ax.set(ylim=(1e-1, 1e0))
grid_l5m0_rot = coeffs_l5m0_rot.expand('DH2')
grid_l5m0_rot.plot_rawdata()
grid_new = grid_l5m0_rot + grid_l5m2
grid_new.plot_rawdata()
coeffs = grid_new.expand()
coeffs.plot_powerperdegree();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot a single spherical harmonic function
Step2: The coefficient class provides functions and methods that stay completely in coefficient space. To plot the function that corresponds to the coefficients, we need to expand it on a spatial grid (called DH2 in this case). The expand function returns a SHGrid class. The type and resolution of the grid is determined automatically to correspond to the maximum degree of the spherical harmonics coefficients to ensure good sampling.
Step3: Initialize with a random model
Step4: Rotating the coordinate system
Step5: Addition, multiplication, and subtraction
|
7,790
|
<ASSISTANT_TASK:>
Python Code:
!pip install google-cloud-bigquery
%load_ext google.cloud.bigquery
import os
PROJECT = 'data-science-on-gcp-180606' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'data-science-on-gcp' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
os.environ['BUCKET'] = BUCKET
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import google.cloud.bigquery as bigquery
bq = bigquery.Client()
sql = """
SELECT DISTANCE, DEP_DELAY
FROM `flights.tzcorr`
WHERE RAND() < 0.001 AND dep_delay > -20 AND dep_delay < 30 AND distance < 2000
"""
df = bq.query(sql).to_dataframe()
sns.set_style("whitegrid")
g = sns.jointplot(df['DISTANCE'], df['DEP_DELAY'], kind="hex", size=10, joint_kws={'gridsize':20})
from pyspark.sql import SparkSession
spark = SparkSession \
.builder \
.appName("Bayes classification using Spark") \
.getOrCreate()
print(spark)
from pyspark.sql.types import StringType, FloatType, StructType, StructField
header = 'FL_DATE,UNIQUE_CARRIER,AIRLINE_ID,CARRIER,FL_NUM,ORIGIN_AIRPORT_ID,ORIGIN_AIRPORT_SEQ_ID,ORIGIN_CITY_MARKET_ID,ORIGIN,DEST_AIRPORT_ID,DEST_AIRPORT_SEQ_ID,DEST_CITY_MARKET_ID,DEST,CRS_DEP_TIME,DEP_TIME,DEP_DELAY,TAXI_OUT,WHEELS_OFF,WHEELS_ON,TAXI_IN,CRS_ARR_TIME,ARR_TIME,ARR_DELAY,CANCELLED,CANCELLATION_CODE,DIVERTED,DISTANCE,DEP_AIRPORT_LAT,DEP_AIRPORT_LON,DEP_AIRPORT_TZOFFSET,ARR_AIRPORT_LAT,ARR_AIRPORT_LON,ARR_AIRPORT_TZOFFSET,EVENT,NOTIFY_TIME'
def get_structfield(colname):
if colname in ['ARR_DELAY', 'DEP_DELAY', 'DISTANCE']:
return StructField(colname, FloatType(), True)
else:
return StructField(colname, StringType(), True)
schema = StructType([get_structfield(colname) for colname in header.split(',')])
print(schema)
inputs = 'gs://{}/flights/tzcorr/all_flights-00000-*'.format(BUCKET) # 1/30th
#inputs = 'gs://{}/flights/tzcorr/all_flights-*'.format(BUCKET) # FULL
flights = spark.read\
.schema(schema)\
.csv(inputs)
# this view can now be queried ...
flights.createOrReplaceTempView('flights')
results = spark.sql('SELECT COUNT(*) FROM flights WHERE dep_delay > -20 AND distance < 2000')
results.show()
sql = """
SELECT *
FROM `flights.trainday`
"""
df = bq.query(sql).to_dataframe()
df.to_csv('trainday.csv', index=False)
!head -3 trainday.csv
%%bash
gsutil cp trainday.csv gs://${BUCKET}/flights/trainday.csv
traindays = spark.read \
.option("header", "true") \
.option("inferSchema", "true") \
.csv('gs://{}/flights/trainday.csv'.format(BUCKET))
traindays.createOrReplaceTempView('traindays')
results = spark.sql('SELECT * FROM traindays')
results.head(5)
statement = """
SELECT
f.FL_DATE AS date,
distance,
dep_delay
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day AND
f.dep_delay IS NOT NULL
ORDER BY
f.dep_delay DESC
"""
flights = spark.sql(statement)
df = flights[(flights['distance'] < 2000) & (flights['dep_delay'] > -20) & (flights['dep_delay'] < 30)]
df.describe().show()
pdf = df.sample(False, 0.02, 20).toPandas() # to 100,000 rows approx on complete dataset
g = sns.jointplot(pdf['distance'], pdf['dep_delay'], kind="hex", size=10, joint_kws={'gridsize':20})
distthresh = flights.approxQuantile('distance', list(np.arange(0, 1.0, 0.1)), 0.02)
distthresh
delaythresh = flights.approxQuantile('dep_delay', list(np.arange(0, 1.0, 0.1)), 0.05)
delaythresh
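# Illustrative sketch (an assumption about how the thresholds could be used, not code from the
# notebook): the decile boundaries above can quantize a flight into a (distance, delay) cell.
dist_bucket = np.digitize(447.0, distthresh)    # bucket index for a 447-mile flight
delay_bucket = np.digitize(3.0, delaythresh)    # bucket index for a 3-minute departure delay
print(dist_bucket, delay_bucket)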
results = spark.sql('SELECT COUNT(*) FROM flights WHERE dep_delay >= 3 AND dep_delay < 8 AND distance >= 447 AND distance < 557')
results.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step3: <h3> Exploration using BigQuery </h3>
Step4: <h3> Set up views in Spark SQL </h3>
Step5: Set up the schema to read in the CSV files on GCS
Step6: Create a table definition (this is done lazily; the files won't be read until we issue a query)
Step7: Example query over the view (this will take a while; it's Spark SQL, not BigQuery)
Step9: <h2> Restrict to train days </h2>
Step11: Create dataframe of traindays, but this time because the file has a header, and is a small file, we can have Spark infer the schema
Step12: <h3> Hexbin plot </h3>
Step13: Sample the dataframe so that it fits into memory (not a problem in development, but will be on full dataset); then plot it.
Step14: <h3> Quantization </h3>
|
7,791
|
<ASSISTANT_TASK:>
Python Code:
sample = np.random.choice([1,2,3,4,5,6], 100)
# посчитаем число выпадений каждой из сторон:
from collections import Counter
c = Counter(sample)
print("Число выпадений каждой из сторон:")
print(c)
# теперь поделим на общее число подбрасываний и получим вероятности:
print("Вероятности выпадений каждой из сторон:")
print({k: v/100.0 for k, v in c.items()})
norm_rv = sts.norm(0, 1)
sample = norm_rv.rvs(100)
x = np.linspace(-4,4,100)
cdf = norm_rv.cdf(x)
plt.plot(x, cdf, label='theoretical CDF')
# для построения ECDF используем библиотеку statsmodels
from statsmodels.distributions.empirical_distribution import ECDF
ecdf = ECDF(sample)
plt.step(ecdf.x, ecdf.y, label='ECDF')
plt.ylabel('$f(x)$')
plt.xlabel('$x$')
plt.legend(loc='upper left')
plt.hist(sample, normed=True)
plt.ylabel('fraction of samples')
plt.xlabel('$x$')
plt.hist(sample, bins=3, normed=True)
plt.ylabel('fraction of samples')
plt.xlabel('$x$')
plt.hist(sample, bins=40, normed=True)
plt.ylabel('fraction of samples')
plt.xlabel('$x$')
# use the pandas library for the kernel density plot:
df = pd.DataFrame(sample, columns=['KDE'])
ax = df.plot(kind='density')
# plot the theoretical density on the same figure:
x = np.linspace(-4,4,100)
pdf = norm_rv.pdf(x)
plt.plot(x, pdf, label='theoretical pdf', alpha=0.5)
plt.legend()
plt.ylabel('$f(x)$')
plt.xlabel('$x$')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now imagine that this sample was obtained not artificially but by rolling a fair six-sided die 100 times. Let us estimate the probability of each face from the observed frequencies
Step2: This is an estimate of the probability mass function of a discrete distribution.
Step3: The empirical distribution function (ECDF) of the obtained sample (a short sketch of its definition follows at the end of this list)
Step4: Histogram of the sample
Step5: Let us set the number of histogram bins manually
Step6: An empirical density estimate built from the sample using kernel smoothing
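A minimal sketch of the ECDF definition behind Step3 (numpy only, shown for illustration): the ECDF at a point x is simply the fraction of sample values not exceeding x.
x0 = 0.5
ecdf_at_x0 = np.mean(sample <= x0)  # fraction of observations <= x0; the statsmodels ecdf(x0) gives the same value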
|
7,792
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
phoebe.devel_on() # CURRENTLY REQUIRED FOR WD-STYLE MESHING (WHICH IS EXPERIMENTAL)
logger = phoebe.logger()
b = phoebe.default_binary()
b.set_value_all('mesh_method', 'wd')
b.set_value_all('eclipse_method', 'graham')
b.add_dataset('mesh', compute_times=[0, 0.5], dataset='mesh01', columns=['visibilities'])
b.run_compute(irrad_method='none')
afig, mplfig = b['mesh01@model'].plot(time=0.5, x='us', y='vs',
show=True)
afig, mplfig = b['primary@mesh01@model'].plot(time=0.0, x='us', y='vs',
ec='blue', fc='gray',
xlim=(-0.2,0.2), ylim=(-0.2,0.2),
show=True)
afig, mplfig = b['primary@mesh01@model'].plot(time=0.0, x='us', y='ws',
ec='blue', fc='gray',
xlim=(-0.1,0.1), ylim=(-2.75,-2.55),
show=True)
afig, mplfig = b['secondary@mesh01@model'].plot(time=0.0, x='us', y='ws',
ec='face', fc='visibilities',
show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Changing Meshing Options
Step3: Adding Datasets
Step4: Running Compute
Step5: Plotting
Step6: Now let's zoom in so we can see the layout of the triangles. Note that Wilson-Devinney uses trapezoids, but since PHOEBE uses triangles, we take each of the trapezoids and split it into two triangles.
Step7: And now looking down from above. Here you can see the gaps between the surface elements (and you can also see some of the subdivision that's taking place along the limb).
Step8: And see which elements are visible at the current time. This defaults to use the 'RdYlGn' colormap which will make visible elements green, partially hidden elements yellow, and hidden elements red. Note that the observer is in the positive w-direction.
|
7,793
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pylab as pl
def gcd(a,b):
if a<b:
a,b = b,a
while b>0:
a,b=b,a%b
return a
def lcm(a,b):
return a*b//gcd(a,b)
# amplitudes
A = np.array([1,1])
# frequencies
f = np.array([3,5])
# phased difference
delta = 0*np.pi/4
# time periods
T = 1/f
# the time grid
tmin, tmax = 0, lcm(f[0], f[1])
dt=(tmax-tmin)/2000
t=np.arange(tmin, tmax+dt, dt)
# the oscillations - mutually perpendicular
x = A[0]*np.cos(2*np.pi*f[0]*t )
y = A[1]*np.sin(2*np.pi*f[1]*t+delta)
fig = pl.figure(1, figsize=(7,7))
ax = fig.add_subplot(111)
pl.plot(x,y)
pl.grid('on')
d=1.1*max(A)
pl.xlim((-d, d))
pl.ylim((-d, d))
ax.set_aspect(1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will need to calculate the LCM of the time periods for the two oscillators. Hence we define the LCM of two integers (a quick worked check follows this list).
Step2: The parameters of the oscillators. You need not change anything starting from the time periods.
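A quick worked check of the helpers defined above: gcd(3, 5) = 1, so lcm(3, 5) = 3*5//1 = 15, which is exactly the tmax used for the time grid.
print(gcd(3, 5), lcm(3, 5))  # -> 1 15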
|
7,794
|
<ASSISTANT_TASK:>
Python Code:
from openhunt.mordorutils import *
spark = get_spark()
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/persistence/host/empire_wmi_local_event_subscriptions_elevated_user.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, EventNamespace, Name, Query
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 19
'''
)
df.show(10,False)
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, Name, Type, Destination
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 20
'''
)
df.show(10,False)
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, Operation, Consumer, Filter
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 21
'''
)
df.show(10,False)
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Message
FROM mordorTable
WHERE Channel = "Microsoft-Windows-WMI-Activity/Operational"
AND EventID = 5861
'''
)
df.show(10,False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download & Process Mordor Dataset
Step2: Analytic I
Step3: Analytic II
Step4: Analytic III
Step5: Analytic IV
|
7,795
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
#target_text
#view_sentence_range = (0, 10)
view_sentence_range = (31, 40)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_sentences = source_text.split('\n')
target_sentences = target_text.split('\n')
#print(source_vocab_to_int)
source_id_text = []
for sentence in source_sentences:
words = sentence.split()
mysentence = []
for word in words:
mysentence.append(source_vocab_to_int.get(word,0)) # return 0 if not in the dd
#mysentence.append(source_vocab_to_int[word])
#print(source_vocab_to_int[word])
#print(source_vocab_to_int.get(word,0))
source_id_text.append(mysentence)
target_id_text = []
for sentence in target_sentences:
words = sentence.split()
mysentence = []
for word in words:
mysentence.append(target_vocab_to_int.get(word,0)) # return 0 if the word is not in the dictionary
mysentence.append(target_vocab_to_int['<EOS>'])
target_id_text.append(mysentence)
# print(source_id_text[0])
# print(target_id_text[0])
#
# using a list comprehension would be more efficient
#
#target_ids = [[target_vocab_to_int.get(word) for word in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
inputs = tf.placeholder(dtype = tf.int32,
shape=(None, None), name='input')
targets = tf.placeholder(dtype = tf.int32,
shape=(None, None), name='targets')
learning_rate = tf.placeholder(dtype = tf.float32,
name='learning_rate')
keep_prob = tf.placeholder(dtype = tf.float32,
name='keep_prob')
return (inputs, targets, learning_rate, keep_prob)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
newbatch = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
newtarget = tf.concat([tf.fill([batch_size, 1],
target_vocab_to_int['<GO>']),
newbatch], 1)
return newtarget
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
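# Worked example (illustrative): for a batch of two target sequences [[10, 11, 12], [13, 14, 15]]
# and target_vocab_to_int['<GO>'] == 1, the strided_slice drops the last column -> [[10, 11], [13, 14]],
# and the concat prepends <GO> -> [[1, 10, 11], [1, 13, 14]].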
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) # lstm cell
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob = keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([cell] * num_layers)
output, state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
outputs, state, context = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,
decoder_fn,
inputs = dec_embed_input,
sequence_length=sequence_length,
scope=decoding_scope)
training_logits = output_fn(outputs)
# add additional dropout
# tf.nn.dropout(training_logits, keep_prob)
return training_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn,
encoder_state,
dec_embeddings,
start_of_sequence_id,
end_of_sequence_id,
maximum_length,
num_decoder_symbols = vocab_size,
dtype = tf.int32)
dp_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob = keep_prob)
outputs, state, context = tf.contrib.seq2seq.dynamic_rnn_decoder(dp_cell,
infer_fn,
sequence_length=maximum_length,
scope=decoding_scope)
return outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
start_symb, end_symb = target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>']
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
stack_lstm = tf.contrib.rnn.MultiRNNCell([dropout] * num_layers)
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, activation_fn=None,scope = decoding_scope)
with tf.variable_scope('decoding') as decoding_scope:
training_logits = decoding_layer_train(encoder_state,
stack_lstm,
dec_embed_input,
sequence_length,
decoding_scope,
output_fn,
keep_prob)
with tf.variable_scope('decoding', reuse=True) as decoding_scope:
infer_logits = decoding_layer_infer(encoder_state,
stack_lstm,
dec_embeddings,
start_symb,
end_symb,
sequence_length,
vocab_size,
decoding_scope,
output_fn,
keep_prob)
# option 2: more concise
# decoding_scope.reuse_variables()
# infer_logits = decoding_layer_infer(encoder_state,
# stack_lstm,
# dec_embeddings,
# start_symb,
# end_symb,
# sequence_length,
# vocab_size,
# decoding_scope,
# output_fn,
# keep_prob)
return (training_logits, infer_logits)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
enc_embed = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
encode = encoding_layer(enc_embed, rnn_size, num_layers, keep_prob)
dec_process = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embed = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_input = tf.nn.embedding_lookup(dec_embed, dec_process)
train_logits, infer_logits = decoding_layer(dec_input,
dec_embed,
encode,
target_vocab_size,
sequence_length,
rnn_size,
num_layers,
target_vocab_to_int,
keep_prob)
return (train_logits, infer_logits)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 4
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.8
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
wid_list = []
for word in sentence.lower().split():
wid_list.append(vocab_to_int.get(word, vocab_to_int['<UNK>']))
return wid_list
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
|
7,796
|
<ASSISTANT_TASK:>
Python Code:
ls data/arabian_nights
f = open('data/arabian_nights/848.txt', 'r')
text = f.read()
f.close()
print(text[:500])
with open('data/arabian_nights/848.txt', 'r') as f:
text = f.read()
print(text[:500])
import os
filenames = os.listdir('data/arabian_nights')
print(len(filenames))
print(filenames[:20])
# your code goes here
os.listdir('data/belgian_nights')
print(os.path.isdir('data/arabian_nights'))
print(os.path.isdir('data/belgian_nights'))
os.mkdir('belgian_nights')
ls
print(os.path.isdir('belgian_nights'))
import shutil
shutil.rmtree('belgian_nights')
print(os.path.isdir('belgian_nights'))
os.rmdir('data/arabian_nights')
os.mkdir('belgian_nights')
f = open('belgian_nights/1001.txt', 'w')
f.write('Content')
f.close()
print(os.path.exists('belgian_nights/1001.txt'))
os.remove('belgian_nights/1001.txt')
print(os.path.exists('belgian_nights/1001.txt'))
shutil.copyfile('data/arabian_nights/66.txt', 'new_66.txt')
ls
os.remove('new_66.txt')
os.path.abspath('data/arabian_nights/848.txt')
filenames = os.listdir('data/arabian_nights')
random_filename = filenames[9]
with open(random_filename, 'r') as f:
text = f.read()
print(text[:500])
filenames = os.listdir('data/arabian_nights')
random_filename = filenames[9]
with open('data/arabian_nights/'+ random_filename, 'r') as f:
text = f.read()
print(text[:500])
import glob
filenames = glob.glob('data/arabian_nights/*')
print(filenames[:10])
filenames = glob.glob('data/arabian_nights/*.txt')
print(filenames[:10])
filenames = []
for fn in os.listdir('data/arabian_nights'):
if fn.endswith('.txt'):
filenames.append(fn)
print(filenames[:10])
filenames = [fn for fn in os.listdir('data/arabian_nights') if fn.endswith('.txt')]
filenames = glob.glob('data/arabian_nights/*.txt')
fn = filenames[10]
# simple string splitting:
print(fn.split('/')[-1])
# using os.sep:
print(fn.split(os.sep)[-1])
# using os.path:
print(os.path.basename(fn))
for root, directory, filename in os.walk("data"):
print(filename)
help(os.walk)
# your quiz code
for fn in sorted(os.listdir('data/1001')):
print(fn)
for fn in sorted(os.listdir('data/arabian_nights/')):
print(fn)
for fn in sorted(os.listdir('data/arabian_nights/'),
key=lambda nb: int(nb[:-4])):
print(fn)
import re
def preprocess(in_str):
out_str = ''
for c in in_str.lower():
if c.isalpha() or c.isspace():
out_str += c
whitespace = re.compile(r'\s+')
out_str = whitespace.sub(' ', out_str)
return out_str
old_str = 'This; is -- a very DIRTY string!'
new_str = preprocess(old_str)
print(new_str)
with open('data/1001/0007.txt', 'r') as f:
in_str = f.read()
print(preprocess(in_str))
def tokenize(in_str):
tokens = in_str.split()
tokens = [t for t in tokens if t]
return tokens
with open('data/1001/0007.txt', 'r') as f:
in_str = f.read()
tokens = tokenize(preprocess(in_str))
print(tokens[:10])
print(len(tokens))
# your quiz code
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(word_counts)
plt.plot(range(0, len(word_counts)), word_counts)
filenames = sorted(os.listdir('data/1001'))
idxs = [int(i[:-4]) for i in filenames]
print(idxs[:20])
print(min(idxs))
print(max(idxs))
plt.plot(idxs, word_counts, color='r')
plt.xlabel('Word length')
plt.ylabel('# words (absolute counts)')
plt.title('The Arabian Nights')
plt.xlim(1, 1001)
plt.plot(idxs, word_counts, color='r')
plt.xlabel('Word length')
plt.ylabel('# words (absolute counts)')
plt.title(r'The Arabian Nights')
plt.xlim(1, 1001)
plt.axvline(500, color='g')
# quiz code goes here
cnts = {}
for word in tokens:
if word in cnts:
cnts[word] += 1
else:
cnts[word] = 1
print(cnts)
from collections import Counter
cnt = Counter(tokens)
print(cnt)
print(cnt.most_common(25))
cnt = Counter()
cnt.update(tokens)
cnt.update(tokens)
print(cnt.most_common(25))
# quiz code
freqs = [f for _, f in vocab.most_common(15)]
words = [w for w, _ in vocab.most_common(15)] # note the use of underscores for 'throwaway' variables
idxs = range(1, len(freqs)+1)
plt.barh(idxs, freqs, align='center')
plt.yticks(idxs, words)
plt.xlabel('Words')
plt.ylabel('Cumulative absolute frequencies')
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As you can see, this folder holds a number of plain text files, ending in the .txt extension. Let us open a random file
Step2: Here, we use the open() function to create a file object f, which we can use to access the actual text content of the file. Make sure that you do not pass the 'w' parameter ("write") to open(), instead of 'r' ("read"), since this would overwrite and thus erase the existing file. After assigning the string returned by f.read() to the variable text, we print the 500 first characters of text to get an impression of what it contains, using simple string indexing ([
Step3: This code block does exactly the same thing as the previous one but saves you some typing. In this chapter we would like to work with all the files in the arabian_nights directory. This is where loops come in handy of course, since what we really would like to do, is iterate over the contents of the directory. Accessing these contents in Python is easy, but requires importing some extra functionality. In this case, we need to import the os module, which contains all functionality related to the 'operating system' of your machine, such as directory information
Step4: Using the dot-syntax (os.xxx), we can now access all functions that come with this module, such as listdir(), which returns a list of the items which are included under a given directory
Step5: The function os.listdir() returns a list of strings, representing the filenames contained under a directory.
Step6: With os.listdir(), you need to make sure that you pass the correct path to an existing directory
Step7: It might therefore be convenient to check whether a directory actually exists in a given location
Step8: The second directory, naturally, does not exist and isdir() evaluates to False in this case. Creating a new (and thus empty) directory is also easy using os
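A supplementary note (not part of the original notebook): when you need nested folders, or want the call to succeed even if the folder already exists, os.makedirs() with exist_ok=True is a handy variant of os.mkdir(); the path below is purely illustrative.
import os
os.makedirs('data/new_collection/notes', exist_ok=True)  # creates intermediate folders as needed; no error if it already exists
print(os.path.isdir('data/new_collection/notes'))  # True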
Step9: We can see that it now lives in the present working directory by typing ls again
Step10: Or we use Python
Step11: Removing directories is also easy, but PLEASE watch out, sometimes it is too easy
Step12: And lo and behold
Step13: Here, we use the rmtree() command to remove the entire directory in a recursive way
Step14: The folder contains things and therefore cannot be removed using this function. There are, of course, also ways to remove individual files or check whether they exist
Step15: Here, we created a directory, wrote a new file to it (1001.txt), and removed it again. Using os.path.exists() we monitored at which point the file existed. Finally, the shutil module also ships with a useful copyfile() function which allows you to copy files from one location to another, possibly with another name. To copy night 66 to the present directory, for instance, we could do
Step16: Indeed, we have added an exact copy of night 66 to our present working directory
Step17: We can safely remove it again
Step18: Paths
Step19: While absolute paths are longer to type, they have the advantage that they can be used anywhere on your computer (i.e. irrespective of where you run your code from). Paths can be tricky. Suppose that we would like to open one of our filenames
Step20: Python throws a FileNotFoundError, complaining that the file we wish to open does not exist. This situation stems from the fact that os.listdir() only returns the base name of a given file, and not an entire (absolute or relative) path to it. To properly access the file, we must therefore not forget to include the rest of the path again
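A supplementary note: instead of gluing path components together with '+', os.path.join() builds the path with the separator appropriate for your operating system, which is a sketch of a slightly more portable habit.
import os
filenames = os.listdir('data/arabian_nights')
path = os.path.join('data', 'arabian_nights', filenames[9])  # e.g. 'data/arabian_nights/848.txt'
with open(path, 'r') as f:
    text = f.read()
print(text[:100])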
Step21: Apart from os.listdir() there are a number of other common ways to obtain directory listings in Python. Using the glob module for instance, we can easily access the full relative path leading to our Arabian Nights
Step22: The asterisk (*) in the argument passed to glob.glob() is worth noting here. Just like with regular expressions, this asterisk is a sort of wildcard which will match any series of characters (i.e. the filenames under arabian_nights). When we exploit this wildcard syntax, glob.glob() offers another distinct advantage
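A supplementary note: glob can also descend into subdirectories with the double-asterisk pattern when recursive=True is passed; our data folder is flat, so this sketch is purely illustrative.
import glob
all_txt = glob.glob('data/**/*.txt', recursive=True)  # '**' matches any number of nested directories
print(len(all_txt))
print(all_txt[:5])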
Step23: Interestingly, the command in this code block will only load filenames that end in ".txt". This is interesting when we would like to ignore other sorts of junk files etc. that might be present in a directory. To replicate similar behaviour with os.listdir(), we would have needed a typical for-loop, such as
Step24: Or for you stylish coders out there, you can show off with a list comprehension
Step25: However, when using glob.glob(), you might sometimes want to be able to extract a file's base name again. There are several solutions to this
Step26: Both os.sep and os.path.basename have the advantage that they know what separator is used for paths in the operating system, so you don't need to explicitly code it like in the first solution. Separators differ between Windows (backslash) and Mac/Linux (forward slash).
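If you also want the night's number without the '.txt' extension, os.path.splitext() is a slightly more robust alternative to slicing with [:-4]; this is a supplementary sketch, not part of the original notebook.
import os
fn = 'data/arabian_nights/848.txt'
name, ext = os.path.splitext(os.path.basename(fn))  # ('848', '.txt')
print(name)  # '848'
print(ext)   # '.txt'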
Step27: As you can see, os.walk() allows you to efficiently loop over the entire tree. As always, don't forget that help is right around the corner in your notebooks. Using help(), you can quickly access the documentation of modules and their functions etc. (but only after you have imported the modules first!).
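To spell out the three values that os.walk() yields on every iteration -- the directory currently being visited, a list of its subdirectories, and a list of its files -- here is a supplementary sketch that also rebuilds full paths with os.path.join().
import os
for current_dir, subdirs, files in os.walk('data'):
    for fn in files:
        print(os.path.join(current_dir, fn))  # full relative path to each file in the tree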
Step28: Quiz
Step29: Parsing files
Step30: Using the old filenames, this was not possible directly, because of the way Python sorts strings of unequal lengths. Note that the numbers in the filenames are represented as strings, which are completely different from actual integers, and thus will be sorted differently
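To see the string-versus-integer sorting difference in isolation (a supplementary sketch): strings are compared character by character, so '10' sorts before '2' unless you convert to integers first.
print(sorted(['10.txt', '2.txt', '1.txt']))  # ['1.txt', '10.txt', '2.txt'] -- lexicographic order
print(sorted(['10.txt', '2.txt', '1.txt'], key=lambda fn: int(fn[:-4])))  # ['1.txt', '2.txt', '10.txt']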
Step31: Note
Step32: Should you be interested
Step33: This code reviews some of the materials from previous chapters, including the use of a regular expression, which converts all consecutive instances of whitespace (including line breaks, for instance) to a single space. After executing the previous code block, we can now test our function
Step34: We can now apply this function to the contents from a random night
Step35: This text looks cleaner already! We can now start to extract individual tokens from the text and count them. This process is called tokenization. Here, we make the naive assumption that words are simply space-free alphabetic strings -- which is of course wrong in the case of English words like "can't". Note that for many languages there exist better tokenizers in Python (such as the ones in the Natural Language Toolkit, nltk). We make do with a simpler approach for now
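If you do want a smarter tokenizer, nltk's word_tokenize is one option; this is a supplementary sketch, not part of the original notebook, and it assumes the nltk package and its 'punkt' tokenizer models are installed.
import nltk
# nltk.download('punkt')  # uncomment on first use to fetch the tokenizer models
from nltk.tokenize import word_tokenize
print(word_tokenize("The sultan can't sleep."))  # splits "can't" into 'ca' and "n't"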
Step36: Using the list comprehension, we make sure that we do not accidentally return empty strings as a token, for instance, at the beginning of a text which starts with a newline. Remember that anything in Python with a length of 0, will evaluate to False, which explains the if t in the comprehension
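The truthiness rule the comprehension relies on is easy to verify on its own (supplementary sketch).
print(bool(''))       # False -- empty strings are falsy
print(bool('night'))  # True
print([t for t in ['', 'scheherazade', '', 'sultan'] if t])  # ['scheherazade', 'sultan']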
Step37: We can now start analyzing our nights. A good start would be to check the length of each night in words
Step38: Quiz
Step39: We now have a list of numbers, which we can plot over time. We will cover plotting more extensively in one of the next chapters. The things below are just a teaser. Start by importing matplotlib, which is imported as follows by convention
Step40: The second line is needed to make sure that the plots will properly show up in our notebook. Let us start with a simple visualization
Step41: As you can see, this simple command can be used to quickly obtain a visualization that shows interesting trends. On the y-axis, we plot absolute word counts for each of our nights. The x-axis is figured out automatically by matplotlib and adds an index on the horizontal x-axis. Implicitly, it interprets our command as follows
Step42: When plt.plot receives two flat lists as arguments, it plots the first along the x-axis, and the second along the y-axis. If it only receives one list, it plots it along the y-axis and uses the range we now (redundantly) specified here for the x-axis. This is in fact a suboptimal plot, since the index of the first data point we plot is zero, although the name of the first night is '1.txt'. Additionally, we know that there are some nights missing in our data. To set this straight, we could pass in our own x-coordinates as follows
Step43: We can now make our plot more truthful, and add some bells and whistles
Step44: Quiz
Step45: Write code that plots the position of the missing nights using this function (and blue lines).
Step46: Right now, we are visualizing texts, but we might also be interested in the vocabulary used in the story collection. Counting how often a word appears in a text is trivial for you right now with custom code, for instance
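As a supplementary note (not in the original notebook): the manual if/else counting loop shown next can also be written with dict.get(), which returns 0 when the word has not been seen yet.
cnts = {}
for word in tokens:
    cnts[word] = cnts.get(word, 0) + 1  # default to 0 on the first occurrence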
Step47: One interesting item which you can use for counting in Python is the Counter object, which we can import as follows
Step48: This Counter makes it much easier to write code for counting. Below you can see how this counter automatically creates a dictionary-like structure
Step49: If we would like to find which items are most frequent for instance, we could simply do
Step50: We can also pass the Counter the tokens to count in multiple stages
Step51: After passing our tokens twice to the counter, we see that all the counts have doubled.
Step52: Let us now finally visualize the frequencies of the 15 most frequent items using a standard barplot in matplotlib. This can be achieved as follows. We first split out the names and frequencies, since .most_common(n) returns a list of tuples, and we create indices
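A supplementary note: the same split can be done in a single line by unpacking the (word, frequency) tuples with zip(), assuming vocab is the Counter built in the quiz above.
words, freqs = zip(*vocab.most_common(15))  # two parallel tuples of words and their counts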
Step53: Next, we simply do
Step54: Et voilà!
|
7,797
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return x / 255.0
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
def make_one_hots(n):
one_hots = {}
for i in range(n):
oh = np.zeros(n)
oh[i] = 1
one_hots[i] = oh
return one_hots
one_hots = make_one_hots(10)
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
return np.array([ one_hots[i] for i in x ])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
x = tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name="x")
return x
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
y = tf.placeholder(tf.float32, shape=(None, n_classes), name="y")
return y
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
xshape = x_tensor.get_shape().as_list()
weight = tf.Variable(tf.truncated_normal([
conv_ksize[0], conv_ksize[1], xshape[3], conv_num_outputs], stddev=0.05))
bias = tf.Variable(tf.constant(0.1, shape=[conv_num_outputs]))
padding = 'SAME'
strides = [1, conv_strides[0], conv_strides[1], 1]
conv2d = tf.nn.conv2d(x_tensor, weight, strides, padding) + bias
conv2d = tf.nn.relu(conv2d)
ksize = [1, pool_ksize[0], pool_ksize[1], 1]
strides = [1, pool_strides[0], pool_strides[1], 1]
conv2d = tf.nn.max_pool(conv2d, ksize, strides, padding)
return conv2d
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
dim = np.prod(x_tensor.get_shape().as_list()[1:])
x2 = tf.reshape(x_tensor, [-1, dim])
return x2
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
xshape = x_tensor.get_shape().as_list()
weight = tf.Variable(tf.truncated_normal([xshape[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.constant(0.1, shape=[num_outputs]))
fully = tf.nn.relu(tf.matmul(x_tensor, weight) + bias)
return fully
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
xshape = x_tensor.get_shape().as_list()
weight = tf.Variable(tf.truncated_normal([xshape[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.constant(0.1, shape=[num_outputs]))
o = tf.matmul(x_tensor, weight) + bias
return o
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
conv_num_outputs_1 = 16
conv_ksize_1 = (5,5)
conv_strides_1 = (1,1)
pool_ksize_1 = (2,2)
pool_strides_1 = (1,1)
conv_num_outputs_2 = 64
conv_ksize_2 = (5,5)
conv_strides_2 = (1,1)
pool_ksize_2 = (2,2)
pool_strides_2 = (2,2)
conv_num_outputs_3 = 96
conv_ksize_3 = (2,2)
conv_strides_3 = (2,2)
pool_ksize_3 = (2,2)
pool_strides_3 = (2,2)
fully_numouts_1 = 300
fully_numouts_2 = 100
fully_numouts_3 = 20
num_outputs = 10
print('\nMODEL:')
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
x_tensor = conv2d_maxpool(x, conv_num_outputs_1, conv_ksize_1, conv_strides_1, pool_ksize_1, pool_strides_1)
print('CONV', x_tensor.get_shape().as_list())
x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs_2, conv_ksize_2, conv_strides_2, pool_ksize_2, pool_strides_2)
print('CONV', x_tensor.get_shape().as_list())
x_tensor = conv2d_maxpool(x_tensor, conv_num_outputs_3, conv_ksize_3, conv_strides_3, pool_ksize_3, pool_strides_3)
print('CONV', x_tensor.get_shape().as_list())
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x_tensor = flatten(x_tensor)
print('FLAT', x_tensor.get_shape().as_list())
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x_tensor = fully_conn(x_tensor, fully_numouts_1)
print('FC', x_tensor.get_shape().as_list())
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
print('DROP')
x_tensor = fully_conn(x_tensor, fully_numouts_2)
print('FC', x_tensor.get_shape().as_list())
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
print('DROP')
x_tensor = fully_conn(x_tensor, fully_numouts_3)
print('FC', x_tensor.get_shape().as_list())
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
print('DROP')
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
o = output(x_tensor, num_outputs)
print('OUT:', o.get_shape().as_list())
# TODO: return output
return o
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={ x:feature_batch, y:label_batch, keep_prob:keep_probability} )
pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
cst = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss %f - Accuracy %.1f%%' % (cst, acc*100))
pass
# TODO: Tune Parameters
epochs = 50
batch_size = 64
keep_probability = .5
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
7,798
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
print(b['exptime'])
b['exptime'] = 1, 'hr'
print(b['fti_method'])
b['fti_method'] = 'oversample'
print(b['fti_oversample'])
b.run_compute(fti_method='none', irrad_method='none', model='fti_off')
b.run_compute(fti_method='oversample', irrad_method='none', model='fit_on')
afig, mplfig = b.plot(show=True, legend=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Relevant Parameters
Step3: Let's set the exposure time to 1 hr to make the convolution obvious in our 1-day default binary.
Step4: An 'fti_method' parameter exists for each set of compute options and each lc dataset. By default this is set to 'none' - meaning that the exposure times are ignored during b.run_compute().
Step5: Once we set fti_method to be 'oversample', the corresponding 'fti_oversample' parameter(s) become visible. This option defines how many different time-points PHOEBE should sample over the width of the exposure time and then average to return a single flux point. By default this is set to 5.
Step6: Influence on Light Curves
Step7: The phase-smearing (convolution) caused by the exposure time is most evident in areas of the light curve with sharp derivatives, where the flux changes significantly over the course of the single exposure. Here we can see that the 1-hr exposure time significantly changes the observed shapes of ingress and egress as well as the observed depth of the eclipse.
|
7,799
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
# Add your filename and uncomment the following line:
Image(filename='MarijuanaUsage.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Graphical excellence and integrity
|