# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Adam
# :label:`sec_adam`
#
# In the discussions leading up to this section we encountered a number of techniques for efficient optimization. Let us recap them in detail here:
#
# * We saw that stochastic gradient descent (:numref:`sec_sgd`) is more effective than full-batch gradient descent when solving optimization problems, e.g., due to its inherent resilience to redundant data.
# * We saw that minibatch stochastic gradient descent (:numref:`sec_minibatch_sgd`) affords significant additional efficiency arising from vectorization, using larger sets of observations in one minibatch. This is the key to efficient multi-machine, multi-GPU, and overall parallel processing.
# * Momentum (:numref:`sec_momentum`) added a mechanism for aggregating a history of past gradients to accelerate convergence.
# * Adagrad (:numref:`sec_adagrad`) used per-coordinate scaling to allow for a computationally efficient preconditioner.
# * RMSProp (:numref:`sec_rmsprop`) decoupled per-coordinate scaling from a learning rate adjustment.
#
# Adam :cite:`Kingma.Ba.2014` combines all these techniques into one efficient learning algorithm. As expected, this algorithm has become rather popular as one of the more robust and effective optimization algorithms to use in deep learning. It is not without issues, though. In particular, :cite:`Reddi.Kale.Kumar.2019` show that there are situations where Adam can diverge due to poor variance control. In a follow-up work, :cite:`Zaheer.Reddi.Sachan.ea.2018` proposed a hotfix to Adam called Yogi, which addresses these issues. More on this later. For now, let us review the Adam algorithm.
#
# ## The Algorithm
#
# One of the key components of Adam is that it uses exponentially weighted moving averages (also known as leaky averaging) to obtain estimates of both the momentum and the second moment of the gradient. That is, it uses the state variables
#
# $$\begin{aligned}
# \mathbf{v}_t & \leftarrow \beta_1 \mathbf{v}_{t-1} + (1 - \beta_1) \mathbf{g}_t, \\
# \mathbf{s}_t & \leftarrow \beta_2 \mathbf{s}_{t-1} + (1 - \beta_2) \mathbf{g}_t^2.
# \end{aligned}$$
#
# Here $\beta_1$ and $\beta_2$ are nonnegative weighting parameters. Common choices for them are $\beta_1 = 0.9$ and $\beta_2 = 0.999$. That is, the variance estimate moves *much more slowly* than the momentum term. Note that if we initialize $\mathbf{v}_0 = \mathbf{s}_0 = 0$ we have a significant amount of bias initially towards smaller values. This can be addressed by using the fact that $\sum_{i=0}^{t-1} \beta^i = \frac{1 - \beta^t}{1 - \beta}$ to re-normalize terms. Correspondingly, the normalized state variables are given by
#
# $$\hat{\mathbf{v}}_t = \frac{\mathbf{v}_t}{1 - \beta_1^t} \text{ and } \hat{\mathbf{s}}_t = \frac{\mathbf{s}_t}{1 - \beta_2^t}.$$
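#
# As a quick sanity check (a small plain-Python sketch of ours, separate from the MXNet implementation below): for a constant gradient $g$, the uncorrected average satisfies $\mathbf{v}_t = (1 - \beta_1^t)\, g$, so the corrected estimate recovers $g$ exactly at every step.
#
```python
beta1 = 0.9
g = 2.0  # a constant gradient, chosen for illustration
v = 0.0
for t in range(1, 6):
    v = beta1 * v + (1 - beta1) * g         # leaky average, biased towards 0
    v_hat = v / (1 - beta1 ** t)            # bias-corrected estimate
    print(t, round(v, 4), round(v_hat, 4))  # v_hat equals g at every step
```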
#
# Armed with the proper estimates we can now write out the update equations. First, we rescale the gradient in a manner very much akin to that of RMSProp to obtain
#
# $$\mathbf{g}_t' = \frac{\eta \hat{\mathbf{v}}_t}{\sqrt{\hat{\mathbf{s}}_t} + \epsilon}.$$
#
# Unlike RMSProp our update uses the momentum $\hat{\mathbf{v}}_t$ rather than the gradient itself. Moreover, there is a slight cosmetic difference as the rescaling happens using $\frac{1}{\sqrt{\hat{\mathbf{s}}_t} + \epsilon}$ instead of $\frac{1}{\sqrt{\hat{\mathbf{s}}_t + \epsilon}}$. The former works arguably slightly better in practice, hence the deviation from RMSProp. Typically we pick $\epsilon = 10^{-6}$ for a good trade-off between numerical stability and fidelity.
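#
# The difference between the two placements of $\epsilon$ matters mostly when the second moment estimate is tiny. A small illustrative computation (the values of $s$ are made up for demonstration):
#
```python
import math

eps = 1e-6
for s in [0.0, 1e-8, 1e-2]:
    outside = 1 / (math.sqrt(s) + eps)  # Adam-style rescaling factor
    inside = 1 / math.sqrt(s + eps)     # RMSProp-style rescaling factor
    print(f"s={s:g}  outside={outside:.4g}  inside={inside:.4g}")
```
#
# At $s = 0$ the former is bounded by $1/\epsilon = 10^6$ while the latter is bounded by $1/\sqrt{\epsilon} = 10^3$, so the placement changes how aggressively near-zero coordinates are rescaled.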
#
# Now we have all the pieces in place to compute updates. This is slightly anticlimactic and we have a simple update of the form
#
# $$\mathbf{x}_t \leftarrow \mathbf{x}_{t-1} - \mathbf{g}_t'.$$
#
# Reviewing the design of Adam, its inspiration is clear. First, momentum and scale are clearly visible in the state variables. Their rather peculiar definition forces us to debias terms (this could be fixed by a slightly different initialization and update condition). Second, the combination of both terms is pretty straightforward, given RMSProp. Last, the explicit learning rate $\eta$ allows us to control the step length to address issues of convergence.
#
# ## Implementation
#
# Implementing Adam from scratch is not very daunting. For convenience we store the time step counter $t$ in the `hyperparams` dictionary. Beyond that all is straightforward.
#
# + origin_pos=1 tab=["mxnet"]
# %matplotlib inline
from mxnet import np, npx
from d2l import mxnet as d2l
npx.set_np()
def init_adam_states(feature_dim):
v_w, v_b = np.zeros((feature_dim, 1)), np.zeros(1)
s_w, s_b = np.zeros((feature_dim, 1)), np.zeros(1)
return ((v_w, s_w), (v_b, s_b))
def adam(params, states, hyperparams):
beta1, beta2, eps = 0.9, 0.999, 1e-6
for p, (v, s) in zip(params, states):
v[:] = beta1 * v + (1 - beta1) * p.grad
s[:] = beta2 * s + (1 - beta2) * np.square(p.grad)
v_bias_corr = v / (1 - beta1 ** hyperparams['t'])
s_bias_corr = s / (1 - beta2 ** hyperparams['t'])
p[:] -= hyperparams['lr'] * v_bias_corr / (np.sqrt(s_bias_corr) + eps)
hyperparams['t'] += 1
# + [markdown] origin_pos=4
# We are ready to use Adam to train the model. We use a learning rate of $\eta = 0.01$.
#
# + origin_pos=5 tab=["mxnet"]
data_iter, feature_dim = d2l.get_data_ch11(batch_size=10)
d2l.train_ch11(adam, init_adam_states(feature_dim),
{'lr': 0.01, 't': 1}, data_iter, feature_dim);
# + [markdown] origin_pos=6
# A more concise implementation is straightforward since `adam` is one of the algorithms provided as part of the Gluon `trainer` optimization library. Hence we only need to pass configuration parameters for an implementation in Gluon.
#
# + origin_pos=7 tab=["mxnet"]
d2l.train_concise_ch11('adam', {'learning_rate': 0.01}, data_iter)
# + [markdown] origin_pos=10
# ## Yogi
#
# One of the problems of Adam is that it can fail to converge even in convex settings when the second moment estimate in $\mathbf{s}_t$ blows up. As a fix :cite:`Zaheer.Reddi.Sachan.ea.2018` proposed a refined update (and initialization) for $\mathbf{s}_t$. To understand what's going on, let us rewrite the Adam update as follows:
#
# $$\mathbf{s}_t \leftarrow \mathbf{s}_{t-1} + (1 - \beta_2) \left(\mathbf{g}_t^2 - \mathbf{s}_{t-1}\right).$$
#
# Whenever $\mathbf{g}_t^2$ has high variance or updates are sparse, $\mathbf{s}_t$ might forget past values too quickly. A possible fix for this is to replace $\mathbf{g}_t^2 - \mathbf{s}_{t-1}$ by $\mathbf{g}_t^2 \odot \mathop{\mathrm{sgn}}(\mathbf{g}_t^2 - \mathbf{s}_{t-1})$. Now the magnitude of the update no longer depends on the amount of deviation. This yields the Yogi updates
#
# $$\mathbf{s}_t \leftarrow \mathbf{s}_{t-1} + (1 - \beta_2) \mathbf{g}_t^2 \odot \mathop{\mathrm{sgn}}(\mathbf{g}_t^2 - \mathbf{s}_{t-1}).$$
#
# The authors furthermore advise initializing the momentum on a larger initial batch rather than just the initial pointwise estimate. We omit the details since they are not material to the discussion and since even without this correction convergence remains pretty good.
#
# + origin_pos=11 tab=["mxnet"]
def yogi(params, states, hyperparams):
beta1, beta2, eps = 0.9, 0.999, 1e-3
for p, (v, s) in zip(params, states):
v[:] = beta1 * v + (1 - beta1) * p.grad
s[:] = s + (1 - beta2) * np.sign(
np.square(p.grad) - s) * np.square(p.grad)
v_bias_corr = v / (1 - beta1 ** hyperparams['t'])
s_bias_corr = s / (1 - beta2 ** hyperparams['t'])
p[:] -= hyperparams['lr'] * v_bias_corr / (np.sqrt(s_bias_corr) + eps)
hyperparams['t'] += 1
data_iter, feature_dim = d2l.get_data_ch11(batch_size=10)
d2l.train_ch11(yogi, init_adam_states(feature_dim),
{'lr': 0.01, 't': 1}, data_iter, feature_dim);
# + [markdown] origin_pos=14
# ## Summary
#
# * Adam combines features of many optimization algorithms into a fairly robust update rule.
# * Built on the basis of RMSProp, Adam also uses an exponentially weighted moving average (EWMA) of the minibatch stochastic gradient.
# * Adam uses bias correction to adjust for a slow startup when estimating momentum and a second moment.
# * For gradients with significant variance we may encounter issues with convergence. They can be amended by using larger minibatches or by switching to an improved estimate for $\mathbf{s}_t$. Yogi offers such an alternative.
#
# ## Exercises
#
# 1. Adjust the learning rate and observe and analyze the experimental results.
# 1. Can you rewrite the momentum and second moment updates such that they do not require bias correction?
# 1. Why do you need to reduce the learning rate $\eta$ as we converge?
# 1. Try to construct a case for which Adam diverges and Yogi converges.
#
# + [markdown] origin_pos=15 tab=["mxnet"]
# [Discussions](https://discuss.d2l.ai/t/358)
#
python/d2l-en/mxnet/chapter_optimization/adam.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Import Libraries
import csv
import re
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
# ## Read Config File
import configparser
config = configparser.ConfigParser()
config.read('config.ini')
ip = config['DEFAULT']['IP']
port = config['DEFAULT']['MongoDB-Port']
db_name = config['DEFAULT']['DB-Name']
# ## Connect MongoDB
from pymongo import MongoClient
client = MongoClient(ip, int(port))
db_twitter = client[db_name]
collections_twitter = db_twitter.list_collection_names()
# ## Collection : Number of records
# +
dic_collection = {}
for i in collections_twitter:
    dic_collection[i] = "{:,}".format(db_twitter[i].count_documents({}))
for key in sorted(dic_collection):
print("%s: %s" % (key, dic_collection[key]))
# -
# ## Pipeline
pipeline = [
{"$match": { "entities.hashtags": {"$exists":True,"$ne":[]}}},
{"$match": { "lang" : "en"}},
{ "$group": {
"_id": "$entities.hashtags",
"count": { "$sum": 1 },
}
}
]
# ## Supporting Functions
def get_dic(dic_hashtag, data, h, i):
    # Accumulate the count of hashtag h from the i-th aggregation result
    dic_hashtag[h] = dic_hashtag.get(h, 0) + data[i]["count"]
    return dic_hashtag
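# The same accumulation can be sketched more compactly with `collections.Counter`. The sample `data` below is illustrative only (hypothetical documents mimicking the shape of the aggregation output, not real MongoDB results):
```python
from collections import Counter
import re

# Illustrative stand-in for the aggregation output: each document groups
# a list of hashtag entities under "_id" together with a "count".
data = [
    {"_id": [{"text": "python"}, {"text": "mongodb"}], "count": 3},
    {"_id": [{"text": "python"}], "count": 2},
    {"_id": [{"text": "大数据"}], "count": 5},  # dropped by the ASCII filter
]

dic_hashtag = Counter()
for doc in data:
    for entity in doc["_id"]:
        h = entity["text"]
        if re.match("^[a-zA-Z0-9]*$", h):  # keep ASCII-alphanumeric tags only
            dic_hashtag[h] += doc["count"]

top = dic_hashtag.most_common(100)
print(top)  # [('python', 5), ('mongodb', 3)]
```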
def write_csv(output_file, top_100_htag):
    csv_columns = ['hashtag', 'count']
    with open(output_file, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=csv_columns)
        writer.writeheader()
        for key, count in top_100_htag.items():
            writer.writerow({'hashtag': key, 'count': count})
    print(output_file + " is ready.")
# ## Get Top 100 Hashtags CSV File Per Collection
for collection in sorted(dic_collection):
print("-------------------")
print("Processing on collection: " + collection)
# get hashtag list
    dic_hashtag = {}
    data = list(db_twitter[collection].aggregate(pipeline, allowDiskUse=True))
    if len(data) > 0:
for i in range(len(data)):
for j in data[i]["_id"]:
h = j["text"]
if(re.match("^[a-zA-Z0-9]*$",h)):
dic_hashtag = get_dic(dic_hashtag, data, h, i)
print("hashtag dictionary for collection " + collection + " is finished")
# get top 100 hashtags
top_100_htag = dict(sorted(dic_hashtag.items(), key=lambda x: x[1], reverse=True)[:100])
# export to csv
output_file = collection + ".csv"
write_csv(output_file, top_100_htag)
print("-------------------")
f0014/f0014-b.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=[]
# # MBQC Quick Start Guide
#
# <em> Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. </em>
# + [markdown] tags=[]
# ## Introduction
#
# Quantum computation utilizes the peculiar laws in the quantum world and provides us with a novel and promising way of information processing. The essence of quantum computation is to evolve the initially prepared quantum state into another expected one, and then make measurements on it to obtain the required classical results. However, the approaches of quantum state evolution are varied in different computation models. The widely used **quantum circuit model** [1,2] completes the evolution by performing quantum gate operations, which can be regarded as a quantum analog of the classical computing model. In contrast, **measurement-based quantum computation (MBQC)** provides a completely different approach for quantum computing.
#
# As its name suggests, the entire evolution in MBQC is completed via quantum measurements. There are mainly two variants of measurement-based quantum computation in the literature: the **teleportation-based quantum computing (TQC)** model [3-5] and the **one-way quantum computer (1WQC)** model [6-9]. The former requires joint measurements on multiple qubits, while the latter only requires single-qubit measurements. After these two variants were proposed, they were proved to be highly correlated and to admit a one-to-one correspondence [10]. So without further declaration, **all of the following discussions about MBQC will refer to the 1WQC model.**
#
# MBQC is a unique model in quantum computation and has no classical analog. The model controls the computation by measuring part of the qubits of an entangled state, with those remaining unmeasured undergoing the evolution correspondingly. By controlling measurements, we can complete any desired evolution. The computation in MBQC is mainly divided into three steps. The first step is to prepare a resource state, which is a highly entangled many-body quantum state. This state can be prepared offline and can be independent of specific computational tasks. The second step is to sequentially perform single-qubit measurements on each qubit of the prepared resource state, where subsequent measurements can depend on previous measurement outcomes, that is, measurements can be adaptive. The third step is to perform byproduct corrections on the final state. Finally, we do classical data processing on measurement outcomes to obtain the required computation results.
#
# A typical example of MBQC algorithms is shown in Figure 1. The grid represents a commonly used quantum resource state (called a cluster state, see below for details). Each vertex on the grid represents a qubit, while the entire grid represents a highly entangled quantum state. We measure each qubit one by one in a specific measurement basis (the labels X, Y, Z, XY, etc. in the vertices indicate the corresponding measurement bases), and then perform byproduct corrections (to eliminate the effect of Pauli X and Pauli Z operators) to complete the computation.
#
# 
# <div style="text-align:center">Figure 1: A typical example of MBQC algorithm where computation is proceeded by measuring each qubit on the vertex. </div>
#
# The "three-step" process of MBQC brings us a number of benefits. For example, if the quantum state prepared in the first step is too noisy, we can simply discard this state **before computation begins** (that is, before any measurement is implemented), and prepare it again to ensure the accuracy of the computational results. Since the resource state can be prepared offline and independently of specific computing tasks, it can also be applied to secure delegated quantum computation [11,12] to protect clients' privacy. In addition, single-qubit measurements are easier to implement in practice than quantum gates. Non-adaptive quantum measurements can even be carried out simultaneously, thereby reducing the computation depth and requiring less coherence time of the quantum system. The difficulty of realizing MBQC mainly lies in the resource state preparation in the first step. Such a quantum state is highly entangled, and the number of qubits required is much larger than that of the usual circuit model. For recent progress on resource state preparation, please refer to [13,14]. Table 1 briefly summarizes the advantages and limitations of the MBQC and quantum circuit models.
#
# | | Quantum circuit model | MBQC model |
# |:---: | :---: | :---: |
# | Pros| has classical analog <br/> easy to understand <br/> and develop applications | resource state can be prepared offline <br/> easy to implement single-qubit measurement <br/> measurements can be implemented simultaneously <br/> leading to lower implementation depth |
# |Cons| implementation order fixed <br/> depth restricted by coherence time| no classical analog thus super-intuitive <br/> resource state requires a large number of qubits <br/> thus hard to prepare in practice|
#
# <div style="text-align:center">Table 1: Advantages and limitations of MBQC and quantum circuit models </div>
#
# Since MBQC does not have a classical analog, it may be difficult for beginners to understand it intuitively. However, it is this super-intuitive approach that brings a wide range of opportunities to explore the unknowns. So, let's dive into the world of MBQC and explore the mysteries together!
# + [markdown] tags=[]
# ## Prerequisites
#
# Before introducing MBQC and our module in more detail, let's briefly review the two building blocks of MBQC.
#
# ### 1. Graph and graph state
#
# Given a graph $G=(V, E)$ with vertex set $V$ and edge set $E$, we can prepare an entangled quantum state by initializing a plus state $|+\rangle = (|0\rangle + |1\rangle) / \sqrt{2}$ on each vertex of $G$ and performing a controlled-Z operation $CZ = |0\rangle\langle 0| \otimes I + |1\rangle\langle1|\otimes Z$ between each connected qubit pair. The resulting quantum state is called the **graph state** of $G$, denoted by $|G\rangle$, such that:
#
# $$
# |G\rangle = \prod_{(a,b) \in E} CZ_{ab} \left(\bigotimes_{v \in V}|+\rangle_v\right). \tag{1}
# $$
#
# The concept of a graph state is nothing special. Actually, the well-known Bell states and GHZ states are both graph states up to local unitary transformations. Besides, if the underlying graph we consider is a 2D grid, then the corresponding graph state is called a **cluster state**, depicted in Figure 2.
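#
# For concreteness, here is a minimal NumPy sketch of ours (independent of the simulation module introduced later) that builds the graph state of the two-vertex graph via Equation (1) and checks that a local Hadamard on the second qubit maps it to the Bell state $(|00\rangle + |11\rangle)/\sqrt{2}$:
#
```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Graph state of the two-vertex graph: |G> = CZ (|+> ⊗ |+>)
G = CZ @ np.kron(plus, plus)

# A local Hadamard on the second qubit turns |G> into the Bell state
bell = np.kron(np.eye(2), H) @ G
print(np.round(bell, 6))  # ≈ [0.707107, 0, 0, 0.707107]
```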
#
# 
# <div style="text-align:center">Figure 2: (i) the graph of a Bell state; (ii) the graph of a 4-qubit GHZ state; (iii) the graph of a cluster state </div>
#
# ### 2. Projective measurement
#
# Quantum measurement is one of the main concepts in quantum information processing. In the circuit model, measurements are usually performed at the end of the circuit to extract classical results from the quantum state. However, in MBQC, quantum measurements are also used to drive the computation. In the MBQC model, we use single-qubit measurements by default, mainly binary (0/1) projective measurements. According to Born's rule [17], given a projective measurement basis $\{|\psi_0\rangle, |\psi_1\rangle\}$ and a quantum state $|\phi\rangle$, the probability that the outcome $s \in \{0,1\}$ occurs is given by $p(s) = |\langle \psi_s|\phi\rangle|^2$, and the corresponding post-measurement state is $|\psi_s\rangle\langle\psi_s|\phi\rangle / \sqrt{p(s)}$. In other words, the state of the measured qubit collapses into $|\psi_s\rangle$, while the state of the other qubits evolves into $\langle\psi_s|\phi\rangle / \sqrt{p(s)}$.
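#
# Born's rule is easy to verify numerically. A minimal NumPy sketch of ours (not part of the module) for a single-qubit $X$-basis measurement:
#
```python
import numpy as np

rng = np.random.default_rng(0)

# A random single-qubit state |phi>, normalized
phi = rng.normal(size=2) + 1j * rng.normal(size=2)
phi /= np.linalg.norm(phi)

# X-measurement basis {|+>, |->}
psi0 = np.array([1, 1]) / np.sqrt(2)
psi1 = np.array([1, -1]) / np.sqrt(2)

# Born's rule: p(s) = |<psi_s|phi>|^2
p0 = abs(np.vdot(psi0, phi)) ** 2
p1 = abs(np.vdot(psi1, phi)) ** 2
assert np.isclose(p0 + p1, 1.0)  # probabilities sum to one

# Post-measurement state for outcome 0: |psi_0><psi_0|phi> / sqrt(p(0))
post0 = psi0 * np.vdot(psi0, phi) / np.sqrt(p0)
```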
#
# Single-qubit measurements are commonly used, especially the binary projective measurements on the $XY$, $YZ$ and $XZ$ planes, defined respectively by the following orthonormal bases,
#
# - XY-plane measurement: $M^{XY}(\theta) = \{R_z(\theta) |+\rangle, R_z(\theta) |-\rangle \}$, reducing to $X$ measurement if $\theta = 0$ and $Y$ measurement if $\theta = \frac{\pi}{2}$;
#
# - YZ-plane measurement: $M^{YZ}(\theta) = \{R_x(\theta)|0\rangle, R_x(\theta)|1\rangle\}$, reducing to $Z$ measurement if $\theta = 0$;
#
# - XZ-plane measurement: $M^{XZ}(\theta) = \{R_y(\theta)|0\rangle, R_y(\theta)|1\rangle\}$, reducing to $Z$ measurement if $\theta = 0$.
#
# In the above definitions, we use $|+\rangle = (|0\rangle + |1\rangle)/ \sqrt{2},|-\rangle = (|0\rangle - |1\rangle)/ \sqrt{2}$, and $R_x, R_y, R_z$ are rotation gates around $x,y,z$ axes respectively.
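#
# The $XY$-plane basis above can be written down directly. The following NumPy sketch (our own helper names, not the module's API) checks that $M^{XY}(0)$ reduces to the $X$ basis and that the basis stays orthonormal for any angle:
#
```python
import numpy as np

def rz(theta):
    # Rotation about the z-axis: Rz(theta) = exp(-i theta Z / 2)
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

def m_xy(theta):
    # XY-plane measurement basis M^XY(theta) = {Rz(theta)|+>, Rz(theta)|->}
    return rz(theta) @ plus, rz(theta) @ minus

b0, b1 = m_xy(0.0)
# theta = 0 reduces to the X measurement basis {|+>, |->}
assert np.allclose(b0, plus) and np.allclose(b1, minus)

# The two basis vectors remain orthonormal for any theta
b0, b1 = m_xy(np.pi / 5)
assert np.isclose(abs(np.vdot(b0, b1)), 0.0)
```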
# + [markdown] tags=[]
# ## MBQC Module Framework
#
#
# ### 1. Model and code implementation
#
# #### "Three-step" process
#
# As is mentioned above, MBQC is different from the quantum circuit model. The computation in MBQC is driven by measuring each qubit on a graph state. To be specific, the MBQC model consists of the following three steps.
#
# - **Graph state preparation**: that is, to prepare a many-body entangled state. Given the vertices and edges of a graph, we can prepare a graph state by initializing a plus state on each vertex and performing a controlled-Z operation between each connected qubit pair. Since a graph state and its underlying graph have a one-to-one correspondence, it suffices to work with the graph only. In addition, we can selectively replace some of the plus states in the graph with a customized input state if necessary.
#
# - **Single-qubit measurement**: that is, to perform single-qubit measurements on the prepared graph state with specific measurement bases. The measurement angles can be adaptively adjusted according to previous outcomes. Non-adaptive measurements commute with each other in simulation and can even be performed simultaneously in experiments.
#
# - **Byproduct correction**: Due to the random nature of quantum measurement, the evolution of the unmeasured quantum state cannot be uniquely determined. In other words, the unmeasured quantum state may undergo some extra evolutions, called **byproducts**. So the last step is to correct these to obtain the expected result. If the final output is not a quantum state but the measurement outcomes, it suffices to eliminate the effect of byproducts via classical data processing only.
#
# In conclusion, the "three-step" process of MBQC includes graph state preparation, single-qubit measurement, and byproduct correction. The first two steps are indispensable while the implementation of the third step depends on the form of expected results.
#
# #### Measurement pattern and "EMC" language
#
# Besides the "three-step" process, an MBQC model can also be described by the **EMC** language from the measurement calculus [18]. As is mentioned above, MBQC admits a one-to-one correspondence to the circuit model. We usually call the MBQC equivalent of a quantum circuit a measurement **pattern**, while the equivalent of a specific gate/measurement is called a **subpattern** [18]. In the "EMC" language, we call an entanglement operation "an entanglement command", denoted by "E"; a measurement operation "a measurement command", denoted by "M"; and a byproduct correction operation "a byproduct correction command", denoted by "C". Therefore, in parallel with the "three-step" process, MBQC is also characterized by an "EMC" command list. The process of computation is to execute the commands in the command list in order. However, to familiarize ourselves with MBQC quickly, we will adopt the conventional "three-step" process to describe MBQC in this tutorial.
#
# #### Code implementation
#
# In terms of code implementation, we provide a simulation module ``simulator`` that mainly consists of a class `MBQC` with attributes and methods necessary for MBQC simulation. We can instantiate an MBQC class, create and perform our MBQC-based algorithms with it.
#
# ```python
# # code implementation
# from paddle_quantum.mbqc.simulator import MBQC
#
# class MBQC:
# def __init__():
# ...
# ```
#
# After instantiation, we can call class methods step by step to complete the MBQC computation process. Here, we briefly introduce some frequently used methods and their functionalities in Table 2. Please refer to the API documentation for details.
#
# |MBQC class method|Functionality|
# |:---:|:---:|
# |set_graph|input a graph for MBQC|
# |set_pattern|input a measurement pattern for MBQC|
# |set_input_state|input initial quantum state|
# |draw_process|draw the dynamical process of MBQC computation|
# |track_progress|track the running progress of MBQC computation|
# |measure|perform single-qubit measurement|
# |sum_outcomes|sum outcomes of the measured qubits|
# |correct_byproduct|correct byproduct operators|
# |run_pattern|run the input measurement pattern|
# |get_classical_output|return classical results|
# |get_quantum_output|return quantum results|
#
# <div style="text-align:center">Table 2: Frequently used methods of the class MBQC and their functionalities </div>
# <br/>
#
# In the ``simulator`` module, we provide two simulation modes, "graph" and "pattern", corresponding to the two equivalent descriptions of the MBQC computation process respectively. If we set a graph, the whole computation needs to follow the "three-step" process. It is worth mentioning that we design a **vertex dynamic classification algorithm** to simulate the MBQC computation process efficiently. Roughly speaking, we integrate the first two steps of the process, change the execution order of entanglement and measurement operations automatically to reduce the number of effective qubits involved in the computation and thereby improve the efficiency. The outline to use the simulation module is as follows:
#
# ```python
# """
# MBQC simulation module usage (set a graph and proceed with the "three-step" process)
# """
# from paddle_quantum.mbqc.simulator import MBQC
#
# # Instantiate MBQC and create an MBQC model
# mbqc = MBQC()
#
# # First step of the "three-step" process, set a graph
# mbqc.set_graph(graph)
#
# # Set an initial input state (optional)
# mbqc.set_input_state(input_state)
#
# # Second step of the "three-step" process, perform single-qubit measurements
# mbqc.measure(which_qubit, basis)
# mbqc.measure(which_qubit, basis)
# ......
#
# # Third step of the "three-step" process, correct byproducts
# mbqc.correct_byproduct(gate, which_qubit, power)
#
# # Obtain the classical and quantum outputs
# classical_output = mbqc.get_classical_output()
# quantum_output = mbqc.get_quantum_output()
# ```
#
# If we set a pattern to the ``MBQC`` class, we need to call the `run_pattern` method to complete the simulation.
#
# ```python
# """
# MBQC simulation module usage (set a pattern and simulate by "EMC" commands)
# """
# from paddle_quantum.mbqc.simulator import MBQC
#
# # Instantiate MBQC and create an MBQC model
# mbqc = MBQC()
#
# # Set a measurement pattern
# mbqc.set_pattern(pattern)
#
# # Set an initial input state (optional)
# mbqc.set_input_state(input_state)
#
# # Run the measurement pattern
# mbqc.run_pattern()
#
# # Obtain the classical and quantum outputs
# classical_output = mbqc.get_classical_output()
# quantum_output = mbqc.get_quantum_output()
# ```
# -
# After going through the above introduction, I am sure you already have a basic understanding of MBQC and our simulation module. Now, let's do some hands-on exercises with the following two examples!
# ### 2. Example: general single-qubit unitary gate in MBQC
#
# A general single-qubit unitary gate $U$ can be decomposed as $U = R_x(\gamma)R_z(\beta)R_x(\alpha)$ up to a global phase [17]. In MBQC, this unitary gate can be realized in the following way [15]. As shown in Figure 3: prepare five qubits, with the input on the leftmost qubit and the output on the rightmost qubit; input a state $|\psi\rangle$ and initialize the other qubits with $|+\rangle$; apply a $CZ$ operation to each connected qubit pair; perform an $X$-measurement on the first qubit and adaptive measurements in the $XY$-plane on the middle three qubits, with the four measured qubits' outcomes recorded as $s_1$, $s_2$, $s_3$, $s_4$; correct the byproducts on the state of qubit $5$ after all measurements. Then, the output state on qubit 5 will be $U|\psi\rangle$.
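#
# As a quick check of the decomposition form (a plain NumPy sketch of ours, separate from the Paddle Quantum code below), the product $R_x(\gamma)R_z(\beta)R_x(\alpha)$ is indeed unitary for the angles used later in this example:
#
```python
import numpy as np

def rx(t):
    # Rotation about the x-axis: Rx(t) = exp(-i t X / 2)
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def rz(t):
    # Rotation about the z-axis: Rz(t) = exp(-i t Z / 2)
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

alpha, beta, gamma = np.pi / 6, np.pi / 4, np.pi / 3
U = rx(gamma) @ rz(beta) @ rx(alpha)
assert np.allclose(U.conj().T @ U, np.eye(2))  # U is unitary
```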
#
#
# 
# <div style="text-align:center">Figure 3: Realizing a general single-qubit unitary gate in MBQC </div>
#
# **Note**: after measuring the first four qubits, state on qubit $5$ has the form of $X^{s_2 + s_4}Z^{s_1 + s_3} U|\psi\rangle$, where $X^{s_2 + s_4}$ and $Z^{s_1 + s_3}$ are the so-called byproducts. We need to correct them according to the measurement outcomes to get the desired state of $U|\psi\rangle$.
#
# Here is the code implementation:
# #### Import relevant modules
#
# We first import two common modules `numpy` and `paddle`. Then we need to import the MBQC simulation module ``simulator`` which mainly contains the class ``MBQC``. We can instantiate this class and create an MBQC model. We also need to import the ``qobject`` module which contains quantum objects that are frequently used in quantum information processing (e.g. ``State``, ``Circuit``, ``Pattern``). Finally, we import the ``utils`` module that provides commonly used functions (e.g. ``plus_state``, ``basis`` etc.).
# Import common calculation modules
from numpy import pi
from paddle import to_tensor, matmul
# Import relevant modules for MBQC simulation
from paddle_quantum.mbqc.simulator import MBQC
from paddle_quantum.mbqc.qobject import State
from paddle_quantum.mbqc.utils import rotation_gate, basis, random_state_vector, compare_by_vector
# #### Set graph and state
#
# Then, we can set the graph on our own. For the instance in Figure 3, we need five vertices (recorded as `['1', '2', '3', '4', '5']`) and four edges (recorded as `[('1', '2'), ('2', '3'), ('3', '4'), ('4', '5')]`). We need to set an input state on vertex `'1'` and initialize the measurement angles.
# Construct the underlying graph
V = ['1', '2', '3', '4', '5']
E = [('1', '2'), ('2', '3'), ('3', '4'), ('4', '5')]
G = [V, E]
# Generate a random state vector
input_psi = random_state_vector(1)
# Construct a quantum state on vertex '1'
input_state = State(input_psi, ['1'])
# Initialize measurement angles of type Tensor
alpha = to_tensor([pi / 6], dtype='float64')
beta = to_tensor([pi / 4], dtype='float64')
gamma = to_tensor([pi / 3], dtype='float64')
# #### Instantiate an MBQC model
#
# Then we can construct our own MBQC model by instantiating the class `MBQC` and setting the graph and input state.
# Instantiate MBQC
mbqc = MBQC()
# Set the graph
mbqc.set_graph(G)
# Set the input state
mbqc.set_input_state(input_state)
# Then, we perform measurements on the first four vertices.
# #### Measure the first vertex
#
# As shown in Figure 3, we perform an $X$-measurement on the first vertex, that is, the measurement in the $XY$-plane with an angle of $\theta_1 = 0$.
# Calculate the angle for the first measurement
theta1 = to_tensor([0], dtype='float64')
# Measure the first vertex
mbqc.measure('1', basis('XY', theta1))
# Measurement on the first vertex is straightforward because it is not adaptive. However, things will be tougher for the second, third, and fourth vertices, as their measurement angles are set adaptively according to the previous measurement outcomes.
# #### Measure the second vertex
#
# As shown in Figure 3, the measurement on the second vertex has a form of $M^{XY}(\theta_2)$, where
#
# $$
# \theta_2 = (-1)^{s_1 + 1} \alpha, \tag{2}
# $$
#
# This is a measurement in the $XY$-plane with an adaptive angle $(-1)^{s_1 + 1} \alpha$, where $s_1$ is the outcome of the first vertex.
#
# There is a method `sum_outcomes` in the class `MBQC` that calculates the summation of the outcomes for the vertices in the first argument. If we want to add an extra value $x$ on top of the summation, we can set the second argument to be $x$. Otherwise, the default value of the second argument is $0$.
# Calculate the angle for the second measurement
theta2 = to_tensor((-1) ** mbqc.sum_outcomes(['1'], 1), dtype='float64') * alpha
# Measure the second vertex
mbqc.measure('2', basis('XY', theta2))
# #### Measure the third vertex
#
# As shown in Figure 3, the measurement on the third vertex has a form of $M^{XY}(\theta_3)$, where
#
# $$
# \theta_3 = (-1)^{s_2 + 1} \beta, \tag{3}
# $$
#
# This is a measurement in the $XY$-plane with an adaptive angle $(-1)^{s_2 + 1} \beta$, where $s_2$ is the outcome of the second vertex.
# Calculate the angle for the third measurement
theta3 = to_tensor((-1) ** mbqc.sum_outcomes(['2'], 1), dtype='float64') * beta
# Measure the third vertex
mbqc.measure('3', basis('XY', theta3))
# #### Measure the fourth vertex
#
# As shown in Figure 3, the measurement on the fourth vertex has a form of $M^{XY}(\theta_4)$, where
#
# $$
# \theta_4 = (-1)^{s_1 + s_3 + 1} \gamma, \tag{4}
# $$
#
# This is a measurement in the $XY$-plane with an adaptive angle $(-1)^{s_1 + s_3 + 1} \gamma$, where $s_1$ and $s_3$ are respectively the outcomes of the first and the third vertices.
# Calculate the angle for the fourth measurement
theta4 = to_tensor((-1) ** mbqc.sum_outcomes(['1', '3'], 1), dtype='float64') * gamma
# Measure the fourth vertex
mbqc.measure('4', basis('XY', theta4))
# #### Correct byproducts on the fifth vertex
#
# After measurements on the first four vertices, the state on the fifth vertex is not exactly $U|\psi\rangle$, but a state with byproducts $X^{s_2 + s_4}Z^{s_1 + s_3} U|\psi\rangle$. To obtain the desired $U|\psi\rangle$, we must correct byproducts on the fifth vertex.
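# To see what these corrections do, here is a single-qubit NumPy sketch of the Pauli algebra involved (an illustration, not the `correct_byproduct` implementation): since $X$ and $Z$ are self-inverse, applying $Z^{s_z} X^{s_x}$ conditioned on the outcome parities exactly inverts the byproduct $X^{s_x} Z^{s_z}$.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def correct(state, s_x, s_z):
    """Undo the byproduct X^{s_x} Z^{s_z} on a single-qubit state vector."""
    op_x = X if s_x % 2 else I2
    op_z = Z if s_z % 2 else I2
    return op_z @ op_x @ state  # (X^a Z^b)^{-1} = Z^b X^a

psi = np.array([0.6, 0.8], dtype=complex)
corrupted = X @ Z @ psi          # state carrying byproducts with s_x = s_z = 1
recovered = correct(corrupted, 1, 1)
```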
# Correct byproducts on the fifth vertex
mbqc.correct_byproduct('X', '5', mbqc.sum_outcomes(['2', '4']))
mbqc.correct_byproduct('Z', '5', mbqc.sum_outcomes(['1', '3']))
# #### Obtain the final output state and compare it with the expected one
#
# We can call `get_classical_output` and `get_quantum_output` to obtain the classical and quantum outputs after simulation. The module ``utils`` also provides two functions, `compare_by_vector` and `compare_by_density`, to check whether two given quantum states are identical. The former compares two states by their state vectors, while the latter compares their density matrices. If the two states are identical, both functions return the norm difference of the states and print the statement: "They are exactly the same states." (Note: we regard two states as the same if their norm difference is on the order of 1e-14 to 1e-16.)
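# The norm-difference check can be sketched as follows. This is a toy stand-in for illustration only (the actual `compare_by_vector` lives in `paddle_quantum.mbqc.utils`, and a complete comparison would also mod out a global phase, which is omitted here):

```python
import numpy as np

def norm_difference(vec1, vec2):
    """Norm of the difference between two normalized state vectors."""
    v1 = np.asarray(vec1, dtype=complex).ravel()
    v2 = np.asarray(vec2, dtype=complex).ravel()
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    return np.linalg.norm(v1 - v2)

same = norm_difference([1, 0], [1, 0])   # identical states -> ~0
diff = norm_difference([1, 0], [0, 1])   # orthogonal states -> sqrt(2)
```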
# +
# Obtain the classical result
classical_output = mbqc.get_classical_output()
# Obtain the quantum result
quantum_output = mbqc.get_quantum_output()
# Compute the expected state vector
vector_std = matmul(rotation_gate('x', gamma),
matmul(rotation_gate('z', beta),
matmul(rotation_gate('x', alpha), input_psi)))
# Construct the expected state on vertex '5'
state_std = State(vector_std, ['5'])
# Compare with the expected state
compare_by_vector(quantum_output, state_std)
# -
# ### 3. Example: CNOT gate in MBQC
#
# The CNOT gate is one of the most frequently used gates in the circuit model. In MBQC, the realization of a CNOT gate is shown in Figure 4 [7]: prepare $15$ qubits, with $1$, $9$ being the input qubits and $7$, $15$ being the output qubits; input a state $|\psi\rangle$ and initialize the other vertices to $|+\rangle$; apply a CZ operator to each connected qubit pair; perform $X$-measurements on the vertices $1, 9, 10, 11, 13, 14$ and $Y$-measurements on the vertices $2, 3, 4, 5, 6, 8, 12$ (Note: all of these measurements are non-adaptive, so their execution order can be permuted); correct byproducts on $7$ and $15$ to obtain the output state $\text{CNOT}|\psi\rangle$.
#
# 
# <div style="text-align:center">Figure 4: Realization of CNOT gate in MBQC </div>
#
# **Note**: Similar to the first example, byproduct corrections are necessary to get the desired $\text{CNOT}|\psi\rangle$.
#
# Here is a complete code implementation:
# +
# Import common calculation modules
from paddle import to_tensor, matmul
# Import relevant modules for MBQC simulation
from paddle_quantum.mbqc.simulator import MBQC
from paddle_quantum.mbqc.qobject import State
from paddle_quantum.mbqc.utils import pauli_gate, cnot_gate, basis, random_state_vector, compare_by_vector
# Define Pauli X and Pauli Z gates and X, Y measurement bases
X = pauli_gate('X')
Z = pauli_gate('Z')
X_basis = basis('X')
Y_basis = basis('Y')
# Define the underlying graph for computation
V = [str(i) for i in range(1, 16)]
E = [('1', '2'), ('2', '3'), ('3', '4'), ('4', '5'),
('5', '6'), ('6', '7'), ('4', '8'), ('8', '12'),
('9', '10'), ('10', '11'), ('11', '12'),
('12', '13'), ('13', '14'), ('14', '15')]
G = [V, E]
# Generate a random state vector
input_psi = random_state_vector(2)
# Construct a quantum state on vertices '1' and '9'
input_state = State(input_psi, ['1','9'])
# Instantiate a MBQC class
mbqc = MBQC()
# Set the graph state
mbqc.set_graph(G)
# Set the input state
mbqc.set_input_state(input_state)
# Measure each qubit step by step
mbqc.measure('1', X_basis)
mbqc.measure('2', Y_basis)
mbqc.measure('3', Y_basis)
mbqc.measure('4', Y_basis)
mbqc.measure('5', Y_basis)
mbqc.measure('6', Y_basis)
mbqc.measure('8', Y_basis)
mbqc.measure('9', X_basis)
mbqc.measure('10', X_basis)
mbqc.measure('11', X_basis)
mbqc.measure('12', Y_basis)
mbqc.measure('13', X_basis)
mbqc.measure('14', X_basis)
# Compute the power of byproduct operators
cx = mbqc.sum_outcomes(['2', '3', '5', '6'])
tx = mbqc.sum_outcomes(['2', '3', '8', '10', '12', '14'])
cz = mbqc.sum_outcomes(['1', '3', '4', '5', '8', '9', '11'], 1)
tz = mbqc.sum_outcomes(['9', '11', '13'])
# Correct the byproduct operators
mbqc.correct_byproduct('X', '7', cx)
mbqc.correct_byproduct('X', '15', tx)
mbqc.correct_byproduct('Z', '7', cz)
mbqc.correct_byproduct('Z', '15', tz)
# Obtain the classical result
classical_output = mbqc.get_classical_output()
# Obtain the quantum result
quantum_output = mbqc.get_quantum_output()
# Construct the expected result
vector_std = matmul(to_tensor(cnot_gate()), input_psi)
state_std = State(vector_std, ['7', '15'])
# Compare with the expected result
compare_by_vector(quantum_output, state_std)
# -
# ## Welcome Aboard!
#
# After this tutorial, we highly recommend learning the following ones for further exploration:
#
# - [Measurement-based Quantum Approximate Optimization Algorithm](QAOA_EN.ipynb)
# - [Polynomial Unconstrained Boolean Optimization Problem in MBQC](PUBO_EN.ipynb)
#
# Our MBQC module provides all the essential building blocks for the implementation of a general MBQC algorithm. It can do much more than what we have listed above. We sincerely encourage you to explore more potential applications of MBQC with our module! If you are interested in a more detailed study of MBQC itself, please refer to [15,16].
# ---
#
# ## References
#
# [1] Deutsch, <NAME>. "Quantum computational networks." [Proceedings of the Royal Society of London. A. 425.1868 (1989): 73-90.](https://royalsocietypublishing.org/doi/abs/10.1098/rspa.1989.0099)
#
# [2] Barenco, Adriano, et al. "Elementary gates for quantum computation." [Physical review A 52.5 (1995): 3457.](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.52.3457)
#
# [3] <NAME>, and <NAME>. "Demonstrating the viability of universal quantum computation using teleportation and single-qubit operations." [Nature 402.6760 (1999): 390-393.](https://www.nature.com/articles/46503?__hstc=13887208.d9c6f9c40e1956d463f0af8da73a29a7.1475020800048.1475020800050.1475020800051.2&__hssc=13887208.1.1475020800051&__hsfp=1773666937)
#
# [4] Nielsen, <NAME>. "Quantum computation by measurement and quantum memory." [Physics Letters A 308.2-3 (2003): 96-100.](https://www.sciencedirect.com/science/article/abs/pii/S0375960102018030)
#
# [5] Leung, <NAME>. "Quantum computation by measurements." [International Journal of Quantum Information 2.01 (2004): 33-43.](https://www.worldscientific.com/doi/abs/10.1142/S0219749904000055)
#
# [6] <NAME>, et al. "A one-way quantum computer." [Physical Review Letters 86.22 (2001): 5188.](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.86.5188)
#
# [7] Raussendorf, Robert, and <NAME>. "Computational model underlying the one-way quantum computer." [Quantum Information & Computation 2.6 (2002): 443-486.](https://dl.acm.org/doi/abs/10.5555/2011492.2011495)
#
# [8] <NAME>, et al. "Measurement-based quantum computation on cluster states." [Physical Review A 68.2 (2003): 022312.](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.68.022312)
#
# [9] Briegel, <NAME>., et al. "Measurement-based quantum computation." [Nature Physics 5.1 (2009): 19-26.](https://www.nature.com/articles/nphys1157)
#
# [10] Aliferis, Panos, and <NAME>. "Computation by measurements: a unifying picture." [Physical Review A 70.6 (2004): 062314.](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.70.062314)
#
# [11] Broadbent, Anne, et al. "Universal blind quantum computation." [2009 50th Annual IEEE Symposium on Foundations of Computer Science. IEEE, 2009.](https://arxiv.org/abs/0807.4154)
#
# [12] <NAME>. "Verification for measurement-only blind quantum computing." [Physical Review A 89.6 (2014): 060302.](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.89.060302)
#
# [13] Larsen, <NAME>., et al. "Deterministic generation of a two-dimensional cluster state." [Science 366.6463 (2019): 369-372.](https://science.sciencemag.org/content/366/6463/369)
#
# [14] <NAME>, et al. "Generation of time-domain-multiplexed two-dimensional cluster state." [Science 366.6463 (2019): 373-376.](https://science.sciencemag.org/content/366/6463/373)
#
# [15] <NAME>, et al. "An introduction to measurement based quantum computation." [arXiv:quant-ph/0508124](https://arxiv.org/abs/quant-ph/0508124v2)
#
# [16] <NAME>. "Cluster-state quantum computation." [Reports on Mathematical Physics 57.1 (2006): 147-161.](https://www.sciencedirect.com/science/article/abs/pii/S0034487706800145)
#
# [17] Nielsen, <NAME>., and <NAME>. "Quantum computation and quantum information."[Cambridge university press (2010).](https://www.cambridge.org/core/books/quantum-computation-and-quantum-information/01E10196D0A682A6AEFFEA52D53BE9AE)
#
# [18] <NAME>, et al. "The measurement calculus." [Journal of the ACM (JACM) 54.2 (2007): 8-es.](https://dl.acm.org/doi/abs/10.1145/1219092.1219096)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Name: <NAME>
#
# Date: 7/3/2018
#
# Version: 3.0
#
# Environment: Python 3.6.0 and Jupyter notebook
#
# Libraries used:
# * pandas (for dataframe, included in Anaconda Python 3.6)
#
# ## 1. Introduction
# This assignment extracts data from an Excel file and saves the result into a CSV file. The extracted result has 202 rows and 14 columns. The requirements can be explained as follows:
# 1. The first column contains 202 country names.
# 2. The columns after 'Country Name' are named with a sequence of numbers starting from 0.
# 3. All null values and dashes are replaced with spaces.
# 4. All data values are rounded to integers, and the result is stored in a csv file.
#
# The details of each section are shown below.
# ## 2. Import Libraries
# Import the libraries needed in this assignment.
import pandas as pd
# ## 3. Parse Excel File
# The first step is to load the data stored in the Excel file, skipping some rows at the head and at the end. Then set the column names to a sequence of numbers.
# The loading skips the first 5 rows and the last 39 rows.
df = pd.read_excel('basic_indicators.xlsx', skipfooter=39, skiprows=5)
df.columns = list(range(len(df.columns)))
df
# Check that 202 rows and 24 columns have been loaded.
df.shape
# Process the data in columns 2 to 14: convert the values to rounded integers and replace the dash values with spaces. The dashes cannot be converted to integers directly, so we first coerce all the values to a numeric type.
# Change all the data to a numeric type; invalid values are set to null. Save the result in df2.
df2 = df.iloc[0:202,2:15].apply(pd.to_numeric, errors='coerce')
# Fill all the null values with "88888888". Using a sentinel value avoids clashing with original data that contains 0.
df3 = df2.fillna(88888888)
# Round the values before casting to integer, because the int cast always truncates. Save the changes in df4.
df4 = round(df3).astype(int)
# Change the "88888888" values back to spaces. Save them in df5.
df5 = df4.replace(88888888,' ')
# Write the new values back over the original data.
df.iloc[0:202,2:15] = df5
df
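# The sentinel-value round-trip used above (coerce to numeric, fill with a sentinel, round and cast, then restore the blanks) can be seen on a toy Series:

```python
import pandas as pd

s = pd.Series(['3.6', '-', '0'])                 # a dash among numbers
num = pd.to_numeric(s, errors='coerce')          # the dash becomes NaN
ints = round(num.fillna(88888888)).astype(int)   # the sentinel survives the cast
cleaned = ints.replace(88888888, ' ')            # restore the blank
```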
# Use the same method to process the data in column 16 and save all the changes in the DataFrame.
# +
df6 = df.iloc[0:202,16].apply(pd.to_numeric, errors='coerce')
df7 = df6.fillna(88888888)
df8 = round(df7).astype(int)
df9 = df8.replace(88888888,' ')
df.iloc[0:202,16] = df9
df
# -
# Keep the columns that contain more than 101 non-null values. This step also drops the columns that only contain 'x'.
# The columns that have more than 101 non-null values are kept in a new DataFrame df10.
# The "x" values, which are combined with the values in the column to their left, are also dropped.
df10 = df.dropna(thresh = 101, axis = 1)
df10
# Rename the first column to "Country Name".
# The data after renaming is stored in df11 for future use.
df11 = df10.rename(columns = {df10.columns.values[0]:'Country Name'})
df11
# Rename the columns after "Country Name" to start from 0.
# Use a for loop to reset the column names from 0 to 13.
for i in range(0,len(df11.columns.values)-1):
df11 = df11.rename(columns = {df11.columns.values[i+1]:i})
df11
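# For reference, the same renaming can be done in a single `rename` call with a dictionary comprehension (sketched here on a toy frame with made-up column names):

```python
import pandas as pd

toy = pd.DataFrame({'Country Name': ['A'], 'col_a': [1], 'col_b': [2]})
# Map every column after the first to its position index, starting from 0
mapping = {old: i for i, old in enumerate(toy.columns[1:])}
toy = toy.rename(columns=mapping)
```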
# Set the "Country Name" column as the index column of the table.
# Set the index column in a new DataFrame and output the final result.
df_result = df11.set_index('Country Name', inplace = False)
df_result
# Save the extracted data in df_result to a new csv file named "basic_indicators.csv".
df_result.to_csv('basic_indicators.csv')
# ## 4. Summary
# After finishing this assignment, what I have learned is summarized as follows:
# 1. When dropping the columns/rows that have null values, using "how = all/any" does not always work because, in this assignment, there are some "x" values between the null values, which prevents some columns from being dropped. Using "thresh" sets a proper threshold for dropping columns/rows.
# 2. A for loop combined with rename() can reset the column names one by one.
# 3. The order of execution matters: first convert the values (including the dashes) to numeric, then round them to integers, using a sentinel value like "88888888" for the replacement.
# 4. Before starting the assignment, it is necessary to examine the whole dataset in the Excel file.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
# # Using the Landlab ListricKinematicExtender component
#
# *(<NAME>, University of Colorado Boulder, March 2021)*
#
# <hr>
# <small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
# <hr>
#
# This tutorial demonstrates how to use the `ListricKinematicExtender` component. `ListricKinematicExtender` models the vertical subsidence and lateral tectonic motion associated with a listric detachment fault. A listric fault is one that shallows with depth, such that the fault plane has a concave-upward profile. The word "kinematic" indicates that this component does not calculate the mechanics of stress and strain involved in an extensional fault; it simply aims to mimic them geometrically. The basic concept, described in detail below, is to divide the resulting tectonics into a vertical component and a horizontal component. The vertical component is modeled by imposing a subsidence rate that decays exponentially with distance from the fault's initial surface location. The horizontal component is modeled by shifting elevation values (and optionally other fields) by one cell at regular time intervals, based on a given extension rate.
# ## Theory
#
# ### Describing a listric fault plane
#
# Consider a fault plane with dip angle $\alpha$ relative to the horizontal. The fault plane has a listric shape, in which the dip angle at the surface is $\alpha_0$, and it becomes increasingly shallow with depth, ultimately asymptoting to horizontal at depth $h$ (we'll refer to $h$ as the detachment depth). We can express the dip angle in terms of gradient $G = \tan\alpha$, and $G_0 = \tan\alpha_0$. Let the gradient decay exponentially with distance from its surface trace, $x$, starting from the surface value $G_0$:
#
# $$G(x) = G_0 e^{-x/\lambda}$$
#
# where $\lambda$ is a length scale that we'll define in a moment. Because $G$ is the rate of change of fault plane elevation, $z$ with distance $x$, we can write:
#
# $$\frac{dz}{dx} = -G_0 e^{-x/\lambda}\hskip1em\mbox{(1)}$$
#
# Integrating,
#
# $$z(x) = G_0\lambda e^{-x/\lambda} + C$$
#
# Evaluate constant of integration by noting that $z = z_0$ (the elevation of the initial surface trace) at $x = 0$,
#
# $$z_0 = G_0\lambda + C$$
#
# so
#
# $$z(x) = z_0 - G_0\lambda (1 - e^{-x/\lambda})$$
#
# Note that the fault elevation asymptotes to a detachment depth $h = G_0\lambda$. This gives us a physical basis for $\lambda$, and means we can express our fault plane geometry by $h$ instead of $\lambda$:
#
# $$\boxed{z(x) = z_0 - h \left(1 - e^{-x G_0 / h}\right)}$$
#
# Let's plot it:
# +
import numpy as np
import matplotlib.pyplot as plt
alpha0 = 60.0 # fault dip at surface, degrees
z0 = 0.0 # elevation of surface trace
h = 10.0 # detachment depth, km
G0 = np.tan(np.deg2rad(alpha0))
x = np.arange(0, 41.0)
z = z0 - h * (1.0 - np.exp(-x * G0 / h))
plt.plot(x, z, "k")
plt.xlabel("Distance (km)")
plt.ylabel("Fault plane elevation (km)")
# -
# ### Describing subsidence due to fault motion
#
# From here, we can think about the subsidence rate of the hangingwall as a function of horizontal extension velocity, $u$. We can think of the hangingwall as an enormous, floppy sled that glides down the slope of the fault plane. Consider a point on the hangingwall. In the reference frame of the footwall, the thickness of the underlying hangingwall block shrinks over time as the hangingwall moves to the "right". If the fault plane is fixed, then the vertical rate of change of surface elevation, $v$, in a reference frame fixed to the footwall, is equal to the rate of change of local hangingwall thickness. The time rate of change of hangingwall thickness, $H_h$, is the product of the *spatial* gradient in thickness times the extension rate, $u$,
#
# $$v = \frac{dH_h}{dt} = -u \frac{dH_h}{dx}$$
#
# If the footwall is rigid (which we'll assume for now), the time rate of change of surface elevation due to hangingwall motion---again, in the reference frame of the footwall---equals the rate of change of hangingwall thickness.
#
# The hangingwall thickness equals its surface elevation, $\eta(x,t)$, minus the fault-plane elevation, $z(x)$:
#
# $$H_h(x,t) = \eta(x,t) - (z_0 - h (1 - e^{-x G_0 / h}))$$
#
# where again $x$ is the initial location of the fault's surface trace. Suppose that there were no erosion or sedimentation. We can rewrite the above as
#
# $$H_h(x,t) = \eta(x-ut, 0) - (z_0 - h (1 - e^{-(x-ut) G_0 / h}))$$
#
# As an illustration, suppose the topographic surface is initially level and equal to zero. In that case,
#
# $$H_h(x,t) = h \left(1 - e^{-(x-ut) G_0 / h}\right)$$
#
# The corresponding height of the topographic surface at a given position and time is
#
# $$\boxed{\eta(x,t) = z(x) + H_h(x,t) = h e^{-x G_0 / h} - h e^{-(x-ut) G_0 / h}}$$
#
# Our implementation trick will be to apply this subsidence to grid cells in an Eulerian frame, but also capture the horizontal component of motion by shifting hangingwall grid cells every time the cumulative horizontal displacement equals or exceeds one grid cell width.
#
# The block of code below shows an example of an initially level topographic surface that has accumulated subsidence over time according to the above equation. Note how the subsidence profile reflects the "rightward" motion of the hangingwall relative to the (fixed) footwall.
# +
dt = 100000.0 # time span, y
xf = 10000.0 # initial location of surface trace of fault, m
u = 0.01 # extension rate, m/y
h = 10000.0 # detachment depth, m
nprofiles = 5
x = np.arange(0.0, 40100.0, 100.0)
dist_from_fault = np.maximum(x - xf, 0.0)
z = z0 - h * (1.0 - np.exp(-dist_from_fault * G0 / h))
plt.plot(x, z, "r", label="Fault plane")
for i in range(nprofiles):
t = i * dt
shifted_dist_from_fault = np.maximum(dist_from_fault - u * t, 0.0)
# Calculate the surface topography
eta = h * (
np.exp(-dist_from_fault * G0 / h) - np.exp(-shifted_dist_from_fault * G0 / h)
)
# Calculate thickness
# thickness = h * (1.0 - np.exp(-shifted_dist_from_fault * G0 / h))
# eta won't be less than the fault-plane elevation
eta[eta < z] = z[eta < z]
plt.plot(x, eta, "k", label="Surface elevation " + str(i))
# plt.plot(x, thickness, 'b', label='Thickness' + str(i))
plt.xlabel("Distance (km)")
plt.ylabel("Elevation (km)")
plt.legend()
# -
# ## Numerical implementation
#
# The numerical approach is to divide the problem into two parts: subsidence that results from the descent of the hangingwall as it moves along the fault plane, and lateral translation of topography. The mathematical basis for this starts with expressing the hangingwall thickness, $H_h$, in terms of surface topography, $\eta$, and fault plane elevation, $z$:
#
# $$H_h = \eta - z$$
#
# We can therefore decompose the local rate of hangingwall subsidence (in the footwall frame of reference) into two components:
#
# $$v = -u \left( \frac{d\eta}{dx} - \frac{dz}{dx}\right)$$
#
# The second term represents subsidence of hangingwall rock that occurs because of downward motion along the fault plane. Substituting equation (1), this component is:
#
# $$v_s = -u G_0 \exp(-x G_0 / h)$$
#
# where $x$ is defined as distance from the original position of the surface fault trace. However, it only applies where the hangingwall is still present, and not to those locations where the hangingwall has slipped off to reveal the fault plane at the surface. Therefore, we will track the $x$ coordinate of the "left" edge of the hangingwall, and only apply this component of subsidence to those locations. The subsidence rate component $v_s$ is applied continuously to the topography, i.e., at every time step.
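# As a quick numerical check of this expression: the magnitude of $v_s$ is largest at the fault trace (at $x = 0$, $|v_s| = u G_0$) and decays exponentially away from it.

```python
import numpy as np

u = 0.01                       # extension rate, m/y
h = 10000.0                    # detachment depth, m
G0 = np.tan(np.deg2rad(60.0))  # fault-plane gradient at the surface
x = np.array([0.0, 5000.0, 10000.0, 20000.0])  # distance from fault trace, m
vs = -u * G0 * np.exp(-x * G0 / h)             # subsidence rate, m/y
```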
#
# The second component, represented by $-u d\eta / dx$, represents the local subsidence that occurs because the topography is translating laterally with respect to the footwall. This component we do *not* want to apply continuously, because it would result in artificial diffusion of the topography. Instead, the algorithm periodically shifts the topography in the entire hangingwall portion of the grid by one cell to the "right". To accomplish this, the algorithm keeps track of cumulative lateral motion since the last shift, executing a new shift whenever that value exceeds one grid-cell width, and decrementing the cumulative lateral motion by one cell width. This method preserves the hangingwall topography (and any other associated fields), at the expense of introducing episodic lateral tectonic motion. However, because of the direct translation, the *relative* change in topography between adjacent cells is minimized.
#
# ### Fields
#
# The `ListricKinematicExtender` requires `topographic__elevation` as a field; it applies subsidence to this field. It creates one output field: `subsidence_rate` records the latest subsidence rate at grid nodes.
#
# There are also two optional fields that are used only if the user selects the `track_crustal_thickness` option, which is designed to support combining this component with lithosphere flexure by also tracking changes in crustal thickness that result from extension. `upper_crust_thickness` is an input-and-output field that contains the current thickness of the upper crust (however defined), and the `cumulative_subsidence_depth` field records the accumulated subsidence since the most recent horizontal shift (see below).
#
# ### Vertical subsidence
#
# The `run_one_step()` method calculates the subsidence rate field at nodes using the exponential function above, then multiplies this by the given time-step duration `dt` and subtracts this value from the node elevations.
#
# Alternatively, a user may wish to calculate the subsidence rates without having the component actually apply them to the elevation field. To accomplish this, the component provides a public function `update_subsidence_rate`. This function updates the subsidence rate field without changing elevations.
#
# ### Horizontal motion
#
# To represent horizontal motion of the hangingwall relative to the footwall (which is the fixed datum), the component keeps track of cumulative horizontal motion, updating it each time `run_one_step` is called. When the cumulative motion equals or exceeds one grid-cell width, the component shifts the elevation values in the hangingwall portion of the domain to the "right", representing offset of one cell width. The cumulative horizontal offset is then decremented by one grid cell. The position of the "left" edge of the hangingwall is also increased by one cell width (its initial position is the user-specified fault position). This means that the boundary between the footwall and hangingwall also migrates to the "right" at the specified extension rate, and that the area of active subsidence gradually shrinks over time. However, the subsidence rate profile is still calculated using fault position. Mathematically, this can be expressed as:
#
# $$v(x, t) = \begin{cases}
# -u G_0 \exp ( -(x - x_f) G_0 / h ) & \mbox{if } x > x_h(t) \\
# 0 & \mbox{otherwise}
# \end{cases}$$
#
# $$x_h(t) = x_f + u t$$
#
# where $x_f$ is the initial $x$ position of the surface fault trace, and $x_h$ represents the "left" edge of the hangingwall.
#
# In addition to "shifting" elevation values, the user may pass a list of node field names in the `fields_to_shift` parameter, and these will also be shifted.
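# The shift bookkeeping described above can be sketched in 1-D as follows (the names here are illustrative, not the component's internals): accumulate horizontal motion each step, and when it reaches one cell width, shift the hangingwall values one cell to the "right" and advance the hangingwall edge.

```python
import numpy as np

def accumulate_and_shift(field, cum_offset, u, dt, dx, hw_edge):
    """Advance cumulative offset by u*dt; shift hangingwall cells on overflow."""
    cum_offset += u * dt
    if cum_offset >= dx:
        field[hw_edge + 1:] = field[hw_edge:-1].copy()  # shift right one cell
        cum_offset -= dx                                # carry over the remainder
        hw_edge += 1                                    # hangingwall edge migrates
    return field, cum_offset, hw_edge

elev = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
elev, rem, edge = accumulate_and_shift(elev, 0.0, u=0.001, dt=1.2e6,
                                       dx=1000.0, hw_edge=1)
```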
#
# ### Integrating with flexure
#
# By itself, `ListricKinematicExtender` does not include rift-shoulder uplift, which in nature (at least in the author's understanding) occurs as a result of flexural isostatic uplift in response to extensional thinning of the crust, and also possibly as a result of thermal isostatic uplift in the underlying mantle. To handle the first of these, `ListricKinematicExtender` is designed to work together with a flexural isostasy component. The basic idea is to calculate explicitly the thinning of the crustal column that results from extension, so that this reduction in crustal thickness can be used by an isostasy component such as `Flexure`.
#
# The basic concept behind `ListricKinematicExtender` is that thinning occurs when the hangingwall block is dragged away from the footwall, in effect sliding down the fault plane, as illustrated in the plot of topography and fault plane above. In order to combine with a flexural isostasy component, we need to keep track of the progressive reduction in crustal thickness. This tracking is activated when the `track_crustal_thickness` option is set to `True` (the default is `False`). The user must provide an `upper_crust_thickness` node field. As noted above, the algorithm separates the vertical and horizontal components of motion, with horizontal motion only explicitly implemented when the cumulative displacement equals or exceeds a full grid-cell width. In keeping with this approach, the thickness field is only modified when a cell-shift occurs. But that approach could cause a problem if one wishes to incorporate flexural isostasy: a natural approach to flexural isostasy is to keep track of an evolving crustal thickness field (which thins under erosion and thickens under deposition), and calculate surface topography as the sum of a crustal datum, flexural offset, and crustal thickness above the datum. To enable this approach, we somehow need to keep track of the extensional subsidence that occurs *between* horizontal offsets. To do this, the `ListricKinematicExtender` keeps track of cumulative subsidence since the last horizontal shift. This quantity is tracked by the optional output field `cumulative_subsidence_depth` (the field is created only if the user sets `track_crustal_thickness` to `True`). One can then calculate elevation at any time step by summing a crustal datum elevation, the thickness of crust above this datum, the isostatic deflection, and the cumulative extensional subsidence.
#
# Whenever a shift occurs, the thickness field is included in the shift: those crustal columns to the "right" of the hangingwall edge are shifted by one cell, along with the topography. The cumulative subsidence since the last shift is then subtracted from the thickness field to record the accumulated thinning associated with that shift. This method effectively captures the thinning of crust along a listric fault plane without needing to explicitly track the fault plane or separate hangingwall and footwall columns.
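# In pseudo-form, the elevation reconstruction described above might look like the sketch below. The field names and sign conventions here are assumptions for illustration (subsidence is treated as a positive depth that reduces elevation), not the component's actual bookkeeping:

```python
import numpy as np

datum = -10000.0                                  # crustal reference elevation, m
thickness = np.array([10000.0, 9500.0, 9000.0])   # upper_crust_thickness, m
deflection = np.array([0.0, -50.0, -100.0])       # isostatic offset, m
cum_subsidence = np.array([0.0, 100.0, 200.0])    # cumulative_subsidence_depth, m

# Elevation = datum + crust above datum + flexural offset - subsidence since last shift
elev = datum + thickness + deflection - cum_subsidence
```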
# ## Examples
#
# ### Example 1: Quasi-1D
#
# The first example uses a quasi-1D setup to represent an initially level topography on which subsidence progressively accumulates.
import numpy as np
import matplotlib.pyplot as plt
from landlab import RasterModelGrid, imshow_grid
from landlab.components import ListricKinematicExtender
# parameters
nrows = 3
ncols = 51
dx = 1000.0 # grid spacing, m
nsteps = 20 # number of iterations
dt = 2.5e5 # time step, y
extension_rate = 0.001 # m/y
detachment_depth = 10000.0 # m
fault_dip = 60.0 # fault dip angle, degrees
fault_loc = 10000.0 # m from left side of model
# +
# Create grid and elevation field
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
elev = grid.add_zeros("topographic__elevation", at="node")
# Instantiate component
extender = ListricKinematicExtender(
grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
)
# +
# Plot the starting elevations, in cross-section (middle row)
midrow = np.arange(ncols, 2 * ncols, dtype=int)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], "k")
plt.xlabel("Distance (km)")
plt.ylabel("Elevation (m)")
plt.xlim([10.0, 40.0])
# Add a plot of the fault plane
dist_from_fault = grid.x_of_node - fault_loc
dist_from_fault[dist_from_fault < 0.0] = 0.0
x0 = detachment_depth / np.tan(np.deg2rad(fault_dip))
fault_plane = -(detachment_depth * (1.0 - np.exp(-dist_from_fault / x0)))
plt.plot(grid.x_of_node[midrow] / 1000.0, fault_plane[midrow], "r")
for i in range(nsteps):
extender.run_one_step(dt)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], "k")
# Add the analytic solution
total_time = nsteps * dt
G0 = np.tan(np.deg2rad(fault_dip))
shifted_dist_from_fault = np.maximum(dist_from_fault - extension_rate * total_time, 0.0)
elev_pred = detachment_depth * (
    np.exp(-dist_from_fault * G0 / detachment_depth)
    - np.exp(-shifted_dist_from_fault * G0 / detachment_depth)
)
elev_pred = np.maximum(elev_pred, fault_plane)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev_pred[midrow], "b:")
# -
# ### Example 2: quasi-1D with topography
# +
period = 15000.0 # period of sinusoidal variations in initial topography, m
ampl = 500.0 # amplitude of variations, m
# Create grid and elevation field
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
elev = grid.add_zeros("topographic__elevation", at="node")
elev[:] = ampl * np.sin(2 * np.pi * grid.x_of_node / period)
# Instantiate component
extender = ListricKinematicExtender(
grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
)
# +
# Plot the starting elevations, in cross-section (middle row)
midrow = np.arange(ncols, 2 * ncols, dtype=int)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], "k")
plt.xlabel("Distance (km)")
plt.ylabel("Elevation (m)")
plt.grid(True)
# Add a plot of the fault plane
dist_from_fault = grid.x_of_node - fault_loc
dist_from_fault[dist_from_fault < 0.0] = 0.0
x0 = detachment_depth / np.tan(np.deg2rad(fault_dip))
fault_plane = -(detachment_depth * (1.0 - np.exp(-dist_from_fault / x0)))
plt.plot(grid.x_of_node[midrow] / 1000.0, fault_plane[midrow], "r")
for i in range(nsteps):
extender.run_one_step(dt)
c = 1.0 - i / nsteps
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], color=[c, c, c])
# -
# ### Example 3: extending to 2D
# +
from landlab import imshow_grid
# parameters
nrows = 31
ncols = 51
dx = 1000.0 # grid spacing, m
nsteps = 20 # number of iterations
dt = 2.5e5 # time step, y
extension_rate = 0.001 # m/y
detachment_depth = 10000.0 # m
fault_dip = 60.0 # fault dip angle, degrees
fault_loc = 10000.0 # m from left side of model
period = 15000.0 # period of sinusoidal variations in initial topography, m
ampl = 500.0 # amplitude of variations, m
# +
# Create grid and elevation field
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
elev = grid.add_zeros("topographic__elevation", at="node")
elev[:] = (
ampl
* np.sin(2 * np.pi * grid.x_of_node / period)
* np.sin(2 * np.pi * grid.y_of_node / period)
)
# Instantiate component
extender = ListricKinematicExtender(
grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
)
# -
# Plot the starting topography
imshow_grid(grid, elev)
for i in range(nsteps // 2):
extender.run_one_step(dt)
imshow_grid(grid, elev)
for i in range(nsteps // 2):
extender.run_one_step(dt)
imshow_grid(grid, elev)
imshow_grid(grid, extender._fault_normal_coord)
# +
# Plot a cross-section
start_node = 6 * ncols
end_node = start_node + ncols
midrow = np.arange(start_node, end_node, dtype=int)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], "k")
plt.xlabel("Distance (km)")
plt.ylabel("Elevation (m)")
plt.grid(True)
# Add a plot of the fault plane
dist_from_fault = grid.x_of_node - fault_loc
dist_from_fault[dist_from_fault < 0.0] = 0.0
x0 = detachment_depth / np.tan(np.deg2rad(fault_dip))
fault_plane = -(detachment_depth * (1.0 - np.exp(-dist_from_fault / x0)))
plt.plot(grid.x_of_node[midrow] / 1000.0, fault_plane[midrow], "r")
# -
# ### Example 4: hex grid
# +
from landlab import HexModelGrid
# parameters
nrows = 31
ncols = 51
dx = 1000.0 # grid spacing, m
nsteps = 20 # number of iterations
dt = 2.5e5 # time step, y
extension_rate = 0.001 # m/y
detachment_depth = 10000.0 # m
fault_dip = 60.0 # fault dip angle, degrees
fault_loc = 10000.0 # m from left side of model
period = 15000.0 # period of sinusoidal variations in initial topography, m
ampl = 500.0 # amplitude of variations, m
# +
# Create grid and elevation field
grid = HexModelGrid((nrows, ncols), spacing=dx, node_layout="rect")
elev = grid.add_zeros("topographic__elevation", at="node")
elev[:] = (
ampl
* np.sin(2 * np.pi * grid.x_of_node / period)
* np.sin(2 * np.pi * grid.y_of_node / period)
)
# Instantiate component
extender = ListricKinematicExtender(
grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
)
# -
# Plot the starting topography
imshow_grid(grid, elev)
for i in range(nsteps // 2):
extender.run_one_step(dt)
imshow_grid(grid, elev)
for i in range(nsteps // 2):
extender.run_one_step(dt)
imshow_grid(grid, elev)
# +
# Plot a cross-section
start_node = 6 * ncols
end_node = start_node + ncols
midrow = np.arange(start_node, end_node, dtype=int)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], "k")
plt.xlabel("Distance (km)")
plt.ylabel("Elevation (m)")
plt.grid(True)
# Add a plot of the fault plane
dist_from_fault = grid.x_of_node - fault_loc
dist_from_fault[dist_from_fault < 0.0] = 0.0
x0 = detachment_depth / np.tan(np.deg2rad(fault_dip))
fault_plane = -(detachment_depth * (1.0 - np.exp(-dist_from_fault / x0)))
plt.plot(grid.x_of_node[midrow] / 1000.0, fault_plane[midrow], "r")
# -
# ### Example 5: combining with lithosphere flexure
# +
from landlab.components import Flexure
# parameters
nrows = 31
ncols = 51
dx = 1000.0 # grid spacing, m
nsteps = 20 # number of iterations
dt = 2.5e5 # time step, y
extension_rate = 0.001 # m/y
detachment_depth = 10000.0 # m
fault_dip = 60.0 # fault dip angle, degrees
fault_loc = 10000.0 # m from left side of model
period = 15000.0 # period of sinusoidal variations in initial topography, m
ampl = 500.0 # amplitude of variations, m
# flexural parameters
eet = 5000.0 # effective elastic thickness, m (here very thin)
crust_datum = -10000.0 # elevation of crustal reference datum, m
rhoc = 2700.0 # crust density, kg/m3
g = 9.8  # gravitational acceleration, m/s2
# +
# Create grid and elevation field
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
elev = grid.add_zeros("topographic__elevation", at="node")
elev[:] = (
ampl
* np.sin(2 * np.pi * grid.x_of_node / period)
* np.sin(2 * np.pi * grid.y_of_node / period)
)
thickness = grid.add_zeros("upper_crust_thickness", at="node")
load = grid.add_zeros("lithosphere__overlying_pressure_increment", at="node")
# Instantiate components
extender = ListricKinematicExtender(
grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
track_crustal_thickness=True,
)
cum_subs = grid.at_node["cumulative_subsidence_depth"]
flexer = Flexure(grid, eet=eet, method="flexure")
deflection = grid.at_node["lithosphere_surface__elevation_increment"]
# +
# set up thickness and flexure
unit_wt = rhoc * g
thickness[:] = elev - crust_datum
load[:] = unit_wt * thickness
flexer.update()
init_flex = deflection.copy()
# -
# show initial deflection field (positive downward)
imshow_grid(grid, init_flex)
for i in range(nsteps):
extender.run_one_step(dt)
load[:] = unit_wt * thickness
flexer.update()
net_deflection = deflection - init_flex
elev[:] = crust_datum + thickness - (cum_subs + net_deflection)
imshow_grid(grid, thickness)
imshow_grid(grid, net_deflection)
imshow_grid(grid, cum_subs)
imshow_grid(grid, elev)
plt.plot(elev.reshape(31, 51)[:, 10], label="Rift shoulder")
plt.plot(elev.reshape(31, 51)[:, 12], label="Rift basin")
plt.plot(-net_deflection.reshape(31, 51)[:, 10], label="Isostatic uplift profile")
plt.xlabel("North-south distance (km)")
plt.ylabel("Height (m)")
plt.legend()
# ### Click here for more <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">Landlab tutorials</a>
# notebooks/tutorials/tectonics/listric_kinematic_extender.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
def factorial(a):
    f = 1
    for i in range(1, a + 1):
        f *= i
    return f
n = int(input())
print(factorial(n))
# Untitled.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
from utils.data import Data
from scipy.stats.mstats import describe, mquantiles
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
df1 = Data().get300K()
describe(df1.IE)
quantiles = mquantiles(df1.IE)
print(f'25% {quantiles[0]}\n'
f'50% {quantiles[1]}\n'
f'75% {quantiles[2]}')
df1.IE.mean()
df1.IE.median()
type(df1.IE.to_numpy())
df1.IE.quantile(.5)
quantiles = df1[['IE','C33']].quantile([.05,.5,.95])
#quantiles.loc[:,['IE']]
quantiles
df1.IE.plot.density()
#plt.vlines(df1.IE.quantile(.5),0,1)
#plt.vlines(df1.IE.quantile(.9),0,1)
plt.vlines(quantiles.loc[:,['IE']],0,5)
df1.boxplot(['IE'])
plt.hlines( df1[['IE']].quantile([.05,.25,.75,.95]),.9,1.1, colors=['b','r','r','b'])
plt.boxplot(df1.IE, vert=False)
plt.vlines( df1[['IE']].quantile([.05,.25,.75,.95]),.9,1.1, colors=['b','r','r','b'])
plt.violinplot(df1.IE, vert=False)
plt.vlines( df1[['IE']].quantile([.05,.25,.5,.75,.95]),.9,1.1, colors=['b','r','k','r','b'])
df1.C.plot.density()
plt.violinplot(df1.C33)
plt.hlines( df1[['C33']].quantile([.05,.25,.5,.75,.95]),.9,1.1, colors=['b','r','k','r','b'])
# # 18M
df2 = Data().get18M()
quantiles2 = df2[['IE','C33']].quantile([.05,.5,.95])
#quantiles.loc[:,['IE']]
quantiles2
# # 300k AND 18m QUANTILES
quantiles
q_concat = pd.concat([quantiles, quantiles2], axis=1, keys=['300K','18M'])
describe_concat = pd.concat([df1[['IE','C33']].describe(), df2[['IE','C33']].describe()], axis=1, keys=['300K','18M'])
summary_stats = pd.concat([q_concat, describe_concat])
summary_stats.drop(['count'])
# stats-01.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <div class="alert alert-block alert-success">
# <b><center>RECURRENT NEURAL NETWORK</center></b>
# <b><center>Basic RNN Models</center></b>
# </div>
# # Configure Learning Environment
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# # !pip install git+https://github.com/nockchun/rspy --force
# # !pip install mybatis_mapper2sql
import rspy as rsp
rsp.setSystemWarning(off=True)
# %matplotlib widget
# +
import os, math
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import models, layers, backend, utils
# -
print("GPU", "available" if tf.config.list_physical_devices("GPU") else "not available")
# # Prepare Data
data = np.array([
[[1.], [2.], [3.]],
[[2.], [3.], [4.]],
[[3.], [4.], [5.]]
])
label = np.array([
[6.], [7.], [8.]
])
data.shape, label.shape
# # Single-Layered / Unidirectional & Many-To-One
# * Input Data : ( batch size, time_step, input_dim(feature size) )
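# A quick way to sanity-check the `model.summary()` output below: a SimpleRNN layer has an input kernel, a recurrent kernel, and a bias, so its parameter count has a closed form. A small sketch, independent of Keras:

```python
def simple_rnn_params(units, input_dim):
    # input kernel (input_dim x units) + recurrent kernel (units x units) + bias (units)
    return input_dim * units + units * units + units

# the model below: SimpleRNN(5) on inputs with 1 feature per time step
assert simple_rnn_params(5, 1) == 35
# the Dense(1) head on top adds 5 weights + 1 bias = 6 more parameters
```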
model = models.Sequential([
layers.Input([3, 1]),
layers.SimpleRNN(5),
layers.Dense(1)
])
model.summary()
utils.plot_model(model, to_file="model.png", show_shapes=True)
model.compile(
loss="mse",
optimizer="adam"
)
history = model.fit(
x = data,
y = label,
epochs = 1000,
batch_size = 1,
verbose = 0
)
model.predict(data[0:1])
model.predict(data)
# # Single-Layered / Unidirectional & Many-To-Many
# ## return_sequences
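# `return_sequences` only changes the output shape: the layer emits the full sequence of hidden states instead of just the last one. A minimal sketch of the resulting shapes (pure Python, no TensorFlow needed):

```python
def rnn_output_shape(batch, time_steps, units, return_sequences):
    # return_sequences=True keeps one hidden state per time step;
    # otherwise only the final hidden state is returned
    return (batch, time_steps, units) if return_sequences else (batch, units)

assert rnn_output_shape(3, 3, 5, return_sequences=True) == (3, 3, 5)
assert rnn_output_shape(3, 3, 5, return_sequences=False) == (3, 5)
```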
model = models.Sequential([
layers.Input([3, 1]),
layers.SimpleRNN(5, return_sequences = True),
layers.Dense(1)
])
model.summary()
utils.plot_model(model, to_file="model.png", show_shapes=True)
model.compile(
loss="mse",
optimizer="adam"
)
history = model.fit(
x = data,
y = label,
epochs = 1000,
batch_size = 1,
verbose = 0
)
model.predict(data[0:1])
model.predict(data)
# ## TimeDistributed
model = models.Sequential([
layers.Input([3, 1]),
layers.SimpleRNN(5, return_sequences = True),
layers.TimeDistributed(layers.Dense(1))
])
model.summary()
utils.plot_model(model, to_file="model.png", show_shapes=True)
model.compile(
loss="mse",
optimizer="adam"
)
history = model.fit(
x = data,
y = label,
epochs = 1000,
batch_size = 1,
verbose = 0
)
model.predict(data[0:1])
model.predict(data)
# ## Using Backend
model = models.Sequential([
layers.Input([3, 1]),
layers.SimpleRNN(5, return_sequences = True),
layers.Lambda(lambda x: backend.mean(x, axis=1)),
layers.Dense(1)
])
model.summary()
utils.plot_model(model, to_file="model.png", show_shapes=True)
model.compile(
loss="mse",
optimizer="adam"
)
history = model.fit(
x = data,
y = label,
epochs = 1000,
batch_size = 1,
verbose = 0
)
model.predict(data[0:1])
model.predict(data)
# ## Using Lambda Function
@tf.function
def sequences_mean(x):
    # collapse the time axis by averaging the per-step outputs
    return tf.reduce_mean(x, axis=1)
model = models.Sequential([
    layers.Input([3, 1]),
    layers.SimpleRNN(5, return_sequences = True),
    layers.Lambda(sequences_mean),
    layers.Dense(1)
])
model.summary()
utils.plot_model(model, to_file="model.png", show_shapes=True)
model.compile(
loss="mse",
optimizer="adam"
)
history = model.fit(
x = data,
y = label,
epochs = 1000,
batch_size = 1,
verbose = 0
)
model.predict(data[0:1])
model.predict(data)
# # Single-Layered / Bidirectional & Many-To-One
model = models.Sequential([
layers.Input([3, 1]),
layers.Bidirectional(layers.SimpleRNN(5), merge_mode="concat"),
layers.Dense(1)
])
model.summary()
utils.plot_model(model, to_file="model.png", show_shapes=True)
model.compile(
loss="mse",
optimizer="adam"
)
history = model.fit(
x = data,
y = label,
epochs = 1000,
batch_size = 1,
verbose = 0
)
model.predict(data)
# # Single-Layered / Bidirectional & Many-To-Many
model = models.Sequential([
layers.Input([3, 1]),
layers.Bidirectional(layers.SimpleRNN(5, return_sequences=True), merge_mode="concat"),
layers.TimeDistributed(layers.Dense(1))
])
model.summary()
utils.plot_model(model, to_file="model.png", show_shapes=True)
model.compile(
loss="mse",
optimizer="adam"
)
history = model.fit(
x = data,
y = label,
epochs = 1000,
batch_size = 1,
verbose = 0
)
model.predict(data[0:1])
model.predict(data)
# # Multi-Layered / Unidirectional & Many-To-One
model = models.Sequential([
layers.Input([3, 1]),
layers.SimpleRNN(5, return_sequences=True),
layers.SimpleRNN(5),
layers.Dense(1)
])
model.summary()
utils.plot_model(model, to_file="model.png", show_shapes=True)
model.compile(
loss="mse",
optimizer="adam"
)
history = model.fit(
x = data,
y = label,
epochs = 1000,
batch_size = 1,
verbose = 0
)
model.predict(data)
# # Multi-Layered / Bidirectional & Many-To-One
# ## Unidirectional > Bidirectional
model = models.Sequential([
layers.Input([3, 1]),
layers.SimpleRNN(5, return_sequences=True),
layers.Bidirectional(layers.SimpleRNN(5), merge_mode="concat"),
layers.Dense(1)
])
model.summary()
utils.plot_model(model, to_file="model.png", show_shapes=True)
model.compile(
loss="mse",
optimizer="adam"
)
history = model.fit(
x = data,
y = label,
epochs = 1000,
batch_size = 1,
verbose = 0
)
model.predict(data)
# ## Bidirectional > Bidirectional
model = models.Sequential([
layers.Input([3, 1]),
layers.Bidirectional(layers.SimpleRNN(5, return_sequences=True), merge_mode="concat"),
layers.Bidirectional(layers.SimpleRNN(5)),
layers.Dense(1)
])
model.summary()
utils.plot_model(model, to_file="model.png", show_shapes=True)
model.compile(
loss="mse",
optimizer="adam"
)
history = model.fit(
x = data,
y = label,
epochs = 1000,
batch_size = 1,
verbose = 0
)
model.predict(data)
# # Multi-Layered / Bidirectional & Many-To-Many
# ## Unidirectional > Bidirectional
model = models.Sequential([
layers.Input([3, 1]),
layers.SimpleRNN(5, return_sequences=True),
layers.Bidirectional(layers.SimpleRNN(5, return_sequences=True), merge_mode="concat"),
layers.TimeDistributed(layers.Dense(1))
])
model.summary()
utils.plot_model(model, to_file="model.png", show_shapes=True)
model.compile(
loss="mse",
optimizer="adam"
)
history = model.fit(
x = data,
y = label,
epochs = 1000,
batch_size = 1,
verbose = 0
)
model.predict(data)
# ## Bidirectional > Bidirectional
model = models.Sequential([
layers.Input([3, 1]),
layers.Bidirectional(layers.SimpleRNN(5, return_sequences=True), merge_mode="concat"),
layers.Bidirectional(layers.SimpleRNN(5, return_sequences=True), merge_mode="concat"),
layers.TimeDistributed(layers.Dense(1))
])
model.summary()
utils.plot_model(model, to_file="model.png", show_shapes=True)
model.compile(
loss="mse",
optimizer="adam"
)
history = model.fit(
x = data,
y = label,
epochs = 1000,
batch_size = 1,
verbose = 0
)
model.predict(data)
# lecture_source/machine_learning/0213_rnn_basic.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sway hull equation
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
import sympy as sp
from sympy.plotting import plot as plot
from sympy.plotting import plot3d as plot3d
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
sp.init_printing()
from IPython.core.display import HTML
import seaman.helpers
import seaman_symbol as ss
import sway_hull_equations as equations
import sway_hull_lambda_functions as lambda_functions
from bis_system import BisSystem
# ## Coordinate system
# 
# ## Symbols
from seaman_symbols import *
HTML(ss.create_html_table(symbols=equations.total_sway_hull_equation_SI.free_symbols))
# ## Sway equation
equations.sway_hull_equation
# ### Force due to drift
equations.sway_drift_equation
# Same equation in SI units
equations.sway_drift_equation_SI
# ### Force due to yaw rate
equations.sway_yaw_rate_equation
equations.sway_yaw_rate_equation_SI
# ### Nonlinear force
# The nonlinear force is calculated as the sectional cross flow drag.
# 
equations.sway_none_linear_equation
# Simple assumption for section draught:
equations.section_draught_equation
equations.simplified_sway_none_linear_equation
# Nonlinear force equation expressed as bis force:
equations.simplified_sway_none_linear_equation_bis
equations.sway_hull_equation_SI
equations.total_sway_hull_equation_SI
# ### Plotting the total sway hull force equation
# +
df = pd.DataFrame()
df['v_w'] = np.linspace(-0.3,3,10)
df['u_w'] = 5.0
df['r_w'] = 0.0
df['rho'] = 1025
df['t_a'] = 1.0
df['t_f'] = 1.0
df['L'] = 1.0
df['Y_uv'] = 1.0
df['Y_uuv'] = 1.0
df['Y_ur'] = 1.0
df['Y_uur'] = 1.0
df['C_d'] = 0.5
df['g'] = 9.81
df['disp'] = 23
result = df.copy()
result['fy'] = lambda_functions.Y_h_function(**df)
result.plot(x = 'v_w',y = 'fy');
# -
# ### Plotting with coefficients from a real seaman ship model
import generate_input
shipdict = seaman.ShipDict.load('../../tests/test_ship.ship')
# +
df = pd.DataFrame()
df['v_w'] = np.linspace(-3,3,20)
df['rho'] = 1025.0
df['g'] = 9.81
df['u_w'] = 5.0
df['r_w'] = 0.0
df_input = generate_input.add_shipdict_inputs(lambda_function=lambda_functions.Y_h_function,
shipdict = shipdict,
df = df)
df_input
# -
result = df_input.copy()
result['fy'] = lambda_functions.Y_h_function(**df_input)
result.plot(x = 'v_w',y = 'fy');
# ## Real seaman++
# Run real seaman in C++ to verify that the documented model is correct.
import run_real_seaman
# +
df = pd.DataFrame()
df['v_w'] = np.linspace(-3,3,20)
df['rho'] = 1025.0
df['g'] = 9.81
df['u_w'] = 5.0
df['r_w'] = 0.0
result_comparison = run_real_seaman.compare_with_seaman(lambda_function=lambda_functions.Y_h_function,
shipdict = shipdict,
df = df)
fig,ax = plt.subplots()
result_comparison.plot(x = 'v_w',y = ['fy','fy_seaman'],ax = ax)
ax.set_title('Drift angle variation');
# +
df = pd.DataFrame()
df['r_w'] = np.linspace(-0.1,0.1,20)
df['rho'] = 1025.0
df['g'] = 9.81
df['u_w'] = 5.0
df['v_w'] = 0.0
df_input = generate_input.add_shipdict_inputs(lambda_function=lambda_functions.Y_h_function,
shipdict = shipdict,
df = df)
result_comparison = run_real_seaman.compare_with_seaman(lambda_function=lambda_functions.Y_h_function,
shipdict = shipdict,
df = df,)
fig,ax = plt.subplots()
result_comparison.plot(x = 'r_w',y = ['fy','fy_seaman'],ax = ax)
ax.set_title('Yaw rate variation');
# -
df_input
# docs/seaman/02.1_seaman_sway_hull_equation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tnc_velma
# language: python
# name: tnc_velma
# ---
# # Exploring VELMA outputs
import __init__
import scripts.config as config
import numpy as np
import pandas as pd
import tempfile
import datetime
from sklearn.svm import SVR
import geopandas as gpd
from sklearn.metrics import mean_squared_error as mse
from matplotlib.font_manager import FontProperties
import seaborn as sns
# import matplotlib as mpl
import matplotlib.pyplot as plt
import importlib
# +
# %matplotlib inline
XSMALL_SIZE = 6
SMALL_SIZE = 7
MEDIUM_SIZE = 8
BIGGER_SIZE = 11
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=SMALL_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=XSMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=XSMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=XSMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=MEDIUM_SIZE)  # fontsize of the figure title
plt.rcParams['figure.dpi'] = 140
# +
results_dir = config.data_path.parents[0] / 'results'
baseline04_19_d = pd.read_csv(results_dir / 'ellsworth_baseline_04_07/DailyResults.csv')
# Format datetime of results
jday_pad = baseline04_19_d['Day'].apply(lambda x: str(x).zfill(3))
str_year = baseline04_19_d['Year'].apply(lambda x: str(x))
baseline04_19_d['year_jday'] = str_year + jday_pad
baseline04_19_d.index = pd.to_datetime(baseline04_19_d['year_jday'], format='%Y%j')
# -
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 100)
cols = baseline04_19_d.columns.tolist()
del_dem = np.loadtxt(config.dem_velma.parents[0] / 'delineated_dem.asc', skiprows=6)
plt.imshow(del_dem)
# notebooks/archived/outputs.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/KonradBier/kBI/blob/master/line_chart.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="81qytjCrlbBf" colab_type="code" colab={}
import plotly.express as px
# + id="_22YZF19lkIT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="48ebf104-66bf-4948-8501-5802493a01d3"
df = px.data.gapminder()
df.head(2)
# + id="x5vSwOcclxya" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="83778847-e03c-402c-dd6f-ea10a4aebe0d"
px.line(df.query("country=='Poland'"), x='year', y='pop')
# + id="6QTmyOKnmjcW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="55140db4-62e4-45ed-e74c-d18d8cc45a09"
px.line(df.query("continent=='Europe'"), x='year', y='pop', color='country')
# line_chart.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## In this notebook we will explore the control problem
# - Control problem improves the policy being evaluated
# - MDP is not known, so agent interacts with the environment sequentially
# - Target Policy: Policy that agent wants to learn
# - Behavioural Policy: Policy used to collect data by interacting with env
# - Policy improvement (Control) happens w.r.t value function in policy evaluation phase
# ### GPI (Generalized Policy improvement) for Control problems
# - Loop: { Policy evaluation , Policy improvement}
# - Policy improvement is greedy policy with respect to value function
# - Greedy policy improvement works when complete MDP of environment is known
# - When MDP is not known, we can still use GPI to solve
# - Policy evaluation methods: MC (first visit/Every visit), TD($\lambda$)
# - Policy improvement: CANNOT be completely greedy with respect to the value function; we need to explore since the MDP is not known
# - Two main changes for solving control problems when the MDP is not known
# - We estimate the action value function Q(s,a)
# - We use epsilon-greedy policy improvement instead of completely greedy policy improvement
# ### Monte Carlo control method
# - start with any policy
# - Use MC (first visit/ every visit) for policy evaluation
# - Policy improvement based on $\epsilon$-greedy
# - Policy improvement step can take place after all episodes are finished (akin to policy iteration)
# - Policy improvement step can take place at the end of every episode (akin to value iteration)
#
# - **NOTE** We will improve the policy at the end of every episode (akin to value iteration)
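# The epsilon-greedy choice used for policy improvement can be sketched on its own (the `MC_control` implementation below uses an equivalent lambda):

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(qvals, eps):
    # explore with probability eps, otherwise exploit the current estimate
    if rng.random() <= eps:
        return int(rng.integers(len(qvals)))
    return int(np.argmax(qvals))

q = np.array([0.1, 0.5, 0.2])
assert epsilon_greedy(q, 0.0) == 1      # eps=0 is purely greedy
assert 0 <= epsilon_greedy(q, 1.0) < 3  # eps=1 is purely random
```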
# +
import gym
import numpy as np
from matplotlib import pyplot as plt
from tqdm import tqdm
# %matplotlib inline
np.random.seed(0)
# -
env = gym.make("FrozenLake8x8-v0")
# +
# borrowed from my notebook : Prediction-MC-TD.ipynb
def gen_trajectory(pi, env, maxsteps):
'''
[1] pi: policy
[2] env: environment
[3] maxsteps: maximum number of steps to be taken
'''
done = False
stepcount = 0
s = env.reset()
trajectory = []
while not done:
stepcount += 1
next_state, reward, done, info = env.step(pi[s])
experience = (s, pi[s], reward, next_state, done)
trajectory.append(experience)
if done or stepcount >= maxsteps:
break
s = next_state
return trajectory
def learningrate_schedule(start_alpha, min_alpha, max_episodes):
'''
[1] start_alpha :learning rate at the start of episode
[2] min_alpha: minimum learning rate
[3] max_episodes: maximum number of episodes
'''
t = np.arange(0,max_episodes)
alpha_s, alpha_f = start_alpha, min_alpha
assert max_episodes > 1
alpha_sch = alpha_s * (alpha_f/alpha_s)**(t/(max_episodes-1))
return alpha_sch
test = 0
if test==1:
alpha_sch = learningrate_schedule(start_alpha=0.6, min_alpha=0.1, max_episodes=1000)
plt.figure()
plt.plot(alpha_sch)
# -
def MC_control(pi, env, gamma=1.0, start_alpha=0.5, min_alpha=0.1, espilon_start=1.0, epsilon_end=0.1, \
max_episodes=1000, maxsteps=200, first_visit=True):
'''
[1] pi: policy which needs to be evaluated
[2] env: environment
[3] gamma: discount factor
[4] min_alpha: minimum learning rate
[5] max_episodes: maximum episodes for MC
[6] maxsteps: maximum number of steps for trajectory generation
[7] first_visit: boolean flag to indicate First Visit version of MC prediction
    [8] espilon_start: start value of epsilon for epsilon-greedy
    [9] epsilon_end: end value of epsilon for epsilon-greedy
'''
nS = len(pi)
nA = env.action_space.n
Q = np.zeros((nS, nA))
Q_track = np.zeros((max_episodes, nS, nA)) # action value function snapshot for every episode, just for vis purpose
pi_track = [] # keeps track of greedy policy at the end of every episode
# generate learning rate schedule
alpha_sch = learningrate_schedule(start_alpha, min_alpha, max_episodes) # this will be exponential decay schedule
# epsilon rate schedule
eps_sch = learningrate_schedule(espilon_start, epsilon_end, max_episodes) # this will be exponential decay schedule
# lambda function to choose action based on epsilon-greedy
choose_action = lambda qvals, eps: np.random.randint(nA) if np.random.random() <= eps else np.argmax(qvals)
for ep in tqdm(range(max_episodes)):
# generate trajectory
trajectory = gen_trajectory(pi, env, maxsteps)
        visited = np.zeros(nS, dtype=bool) # reset the visited flag
for t, experience in enumerate(trajectory):
state, action, reward, next_state, _ = experience
if visited[state]==1 and first_visit:
continue
visited[state] = True
# compute the discounted return
num_steps = len(trajectory[t:])
gamma_seq = gamma**(np.arange(0,num_steps))
reward_list = np.array([experience[2] for experience in trajectory[t:]])
returns = np.sum(gamma_seq * reward_list) #G_{t:T} = sum( gamma_seq * undiscounted returns )
# update the state value
Q[state, action] = Q[state, action] + alpha_sch[ep] * (returns - Q[state, action])
# update V_track at the end of every episode
Q_track[ep, :, :] = Q
# update the policy using eps greedy (Policy improvement)
pi = {s: choose_action(Q[s,:], eps_sch[ep]) for s in range(nS)}
# update greedy policy at the end of every episode
pi_track.append({s: choose_action(Q[s,:], 0.0) for s in range(nS)})
#print(f"episode :{ep}")
return Q, Q_track, pi, pi_track, alpha_sch,eps_sch
# start with some random initial policy
pi_init = {s: np.random.randint(env.env.nA) for s in range(env.env.nS)}
Q, Q_track, pi_conv, pi_track, alpha_sch, eps_sch = MC_control(pi_init, env, gamma=1.0, start_alpha=0.5, min_alpha=0.1, \
                                                               espilon_start=1.0, epsilon_end=0.02, max_episodes=200_000, \
maxsteps=200, first_visit=True)
Q
### calculate average rewards over some episode based on some policy
def mean_rewards(pi, num_episodes, env):
total_reward_list = []
for ep in range(num_episodes):
total_reward = 0
s = env.reset()
done = False
while not done:
next_state, reward, done, info = env.step(pi[s])
total_reward += reward
s = next_state
if done:
print(f"ep:{ep}, reward:{reward}")
total_reward_list.append(total_reward)
break
return np.array(total_reward_list).mean()
mean_rew_init_policy = mean_rewards(pi_init, num_episodes=1000, env=env)
mean_rew_conv_policy = mean_rewards(pi_conv, num_episodes=1000, env=env)
print(f"avg reward over {str(1000)} episodes for initial policy : {mean_rew_init_policy}")
print(f"avg reward over {str(1000)} episodes for converged policy : {mean_rew_conv_policy}")
mean_rew_conv_policy = mean_rewards(pi_track[-1], num_episodes=1000, env=env)
print(f"avg reward over {str(1000)} episodes for converged policy : {mean_rew_conv_policy}")
# #### As you can see, with MC/TD control we achieve a success rate of ~50-55%, compared to 85-90% for the policy/value iteration algorithms when the full MDP was known. A random policy's success rate is about 0-5%
# ### SARSA
# - replace MC evaluation with TD
# - we achieve what is called TD control
#
# Below we replace MC evaluation with TD lambda
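# An eligibility trace gives recently visited state-action pairs more credit for the current TD error. The textbook backward-view decay after each step is $E \leftarrow \gamma \lambda E$; a minimal sketch of that bookkeeping (note the implementation below decays with $\lambda \alpha$ instead):

```python
import numpy as np

gamma, lam = 1.0, 0.3
E = np.zeros(4)      # one trace per (flattened) state-action entry
E[2] += 1.0          # bump the trace for the entry just visited
E = gamma * lam * E  # decay every trace once per step
assert abs(E[2] - 0.3) < 1e-12
assert E[0] == 0.0
```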
# this function implements the backward view of TD($\lambda$) control, i.e. SARSA($\lambda$)
def SARSA_lambda(pi, env, gamma=1.0, start_alpha=0.5, min_alpha=0.1, espilon_start=1.0, epsilon_end=0.1, \
max_episodes=1000, lambda_=0.3):
'''
[1] pi: policy which needs to be evaluated
[2] env: environment
[3] gamma: discount factor
[4] min_alpha: minimum learning rate
[5] max_episodes: maximum episodes for MC
[6] lambda_: weight mix-in factor in TD(lambda)
    [7] espilon_start: start value of epsilon for epsilon-greedy
    [8] epsilon_end: end value of epsilon for epsilon-greedy
'''
nS = len(pi)
nA = env.action_space.n
Q = np.zeros((nS, nA))
E = np.zeros((nS,nA)) # Eligibility trace
Q_track = np.zeros((max_episodes, nS, nA)) # action value function snapshot for every episode, just for vis purpose
pi_track = [] # keeps track of greedy policy at the end of every episode
# generate learning rate schedule
alpha_sch = learningrate_schedule(start_alpha, min_alpha, max_episodes) # this will be exponential decay schedule
# epsilon rate schedule
eps_sch = learningrate_schedule(espilon_start, epsilon_end, max_episodes) # this will be exponential decay schedule
# lambda function to choose action based on epsilon-greedy
choose_action = lambda qvals, eps: np.random.randint(nA) if np.random.random() <= eps else np.argmax(qvals)
for ep in tqdm(range(max_episodes)):
# initialize state and action
state, done = env.reset(), False
action = choose_action(Q[state,:], eps_sch[ep])
E.fill(0) # reset Eligibility for new episode
while not done:
E[state, action] += 1
next_state, reward, done, info = env.step(action)
# choose next action based on eps-greedy
next_action = choose_action(Q[next_state,:], eps_sch[ep])
target = reward + gamma * Q[next_state, next_action] * (not done)
# update all state action values based on eligibility
Q = Q + alpha_sch[ep] *(target - Q[state, action]) * E
# decay Eligibility vector
E = lambda_ * alpha_sch[ep] * E
if done:
break
state, action = next_state, next_action
Q_track[ep,:,:] = Q
# update greedy policy at the end of every episode
pi_track.append({s: choose_action(Q[s,:], 0.0) for s in range(nS)})
# update the final converged policy
pi = {s: choose_action(Q[s,:], 0.0) for s in range(nS)}
return Q, Q_track, pi, pi_track, alpha_sch, eps_sch
# start with some random initial policy
pi_init = {s: np.random.randint(env.env.nA) for s in range(env.env.nS)}
Q, Q_track, pi_conv, pi_track, alpha_sch, eps_sch = SARSA_lambda(pi_init, env, gamma=1.0, start_alpha=0.5,\
espilon_start=1.0, epsilon_end=0.02,\
                                                                 min_alpha=0.1, max_episodes=100_000, lambda_=0.3)
Q
mean_rew_init_policy = mean_rewards(pi_init, num_episodes=1000, env=env)
mean_rew_conv_policy = mean_rewards(pi_conv, num_episodes=1000, env=env)
print(f"avg reward over {str(1000)} episodes for initial policy : {mean_rew_init_policy}")
print(f"avg reward over {str(1000)} episodes for converged policy : {mean_rew_conv_policy}")
plt.figure()
plt.imshow(np.argmax(Q,axis=1).reshape(8,8))
plt.colorbar()
# **SARSA($\lambda$) achieves ~70% success rate**
# ### Q Learning
# - SARSA is on-policy learning: the target policy and the behavioural policy are the same
# - A drawback of on-policy learning is that you can only learn from your own current mistakes
# - If you want to learn from past experience or from someone else's mistakes, on-policy learning cannot do that
# - This is where Q learning comes in (off-policy: the target policy and the behavioural policy are different)
# - The only difference between SARSA and Q learning is that Q[s,a] bootstraps towards $\max_{a'} Q[s',a']$
# $$Q[s,a] = Q[s,a] + \alpha (reward + \gamma \max_{a'}Q[s',a'] - Q[s,a])$$
#
# where the experience is (s, a, reward, s') and a' ranges over the actions available in state s'
#
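The difference between the two targets can be sketched in isolation; the array `q_next` below is a made-up stand-in for `Q[next_state, :]`, not data from this notebook:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9
reward = 1.0
q_next = rng.normal(size=4)  # stand-in for Q[next_state, :] over 4 actions
next_action = 2              # action the behavioural policy actually took

# SARSA (on-policy) bootstraps on the action actually taken
sarsa_target = reward + gamma * q_next[next_action]
# Q learning (off-policy) bootstraps on the greedy action
qlearning_target = reward + gamma * np.max(q_next)

print(sarsa_target, qlearning_target)
```

Since the max over actions is never smaller than the value of any single action, the Q-learning target is always at least as large as the SARSA target for the same transition.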
# this function implements q learning agent
def qlearning(pi, env, gamma=1.0, start_alpha=0.5, min_alpha=0.1, espilon_start=1.0, epsilon_end=0.1, \
max_episodes=1000):
    '''
    [1] pi: initial policy (used to infer the number of states)
    [2] env: environment
    [3] gamma: discount factor
    [4] min_alpha: minimum learning rate
    [5] max_episodes: maximum number of episodes
    [6] espilon_start: start value of epsilon for the epsilon-greedy policy
    [7] epsilon_end: end value of epsilon for the epsilon-greedy policy
    '''
nS = len(pi)
nA = env.action_space.n
Q = np.zeros((nS, nA))
Q_track = np.zeros((max_episodes, nS, nA)) # action value function snapshot for every episode, just for vis purpose
pi_track = [] # keeps track of greedy policy at the end of every episode
# generate learning rate schedule
alpha_sch = learningrate_schedule(start_alpha, min_alpha, max_episodes) # this will be exponential decay schedule
# epsilon rate schedule
eps_sch = learningrate_schedule(espilon_start, epsilon_end, max_episodes) # this will be exponential decay schedule
# lambda function to choose action based on epsilon-greedy (behavioural or data/experience collection policy)
choose_action = lambda qvals, eps: np.random.randint(nA) if np.random.random() <= eps else np.argmax(qvals)
for ep in tqdm(range(max_episodes)):
# initialize state and action
state, done = env.reset(), False
while not done:
action = choose_action(Q[state,:], eps_sch[ep])
next_state, reward, done, info = env.step(action)
# choose max over actions for next_state,i.e. max(Q[next_state,:])
max_next_state_qval = np.amax(Q[next_state, :])
target = reward + gamma * max_next_state_qval * (not done)
# update the state action value for the current state,action
Q[state, action] = Q[state, action] + alpha_sch[ep] *(target - Q[state, action])
if done:
break
state = next_state
Q_track[ep,:,:] = Q
# update greedy policy at the end of every episode
pi_track.append({s: choose_action(Q[s,:], 0.0) for s in range(nS)})
# update the final converged policy
pi = {s: choose_action(Q[s,:], 0.0) for s in range(nS)}
return Q, Q_track, pi, pi_track, alpha_sch, eps_sch
# start with some random initial policy
pi_init = {s: np.random.randint(env.env.nA) for s in range(env.env.nS)}
Q, Q_track, pi_conv, pi_track, alpha_sch, eps_sch = qlearning(pi_init, env, gamma=1.0, start_alpha=0.5, min_alpha=0.1,\
                                                              espilon_start=1.0, epsilon_end=0.02, max_episodes=100_000)
Q
mean_rew_init_policy = mean_rewards(pi_init, num_episodes=1000, env=env)
mean_rew_conv_policy = mean_rewards(pi_conv, num_episodes=1000, env=env)
print(f"avg reward over 1000 episodes for initial policy : {mean_rew_init_policy}")
print(f"avg reward over 1000 episodes for converged policy : {mean_rew_conv_policy}")
# save the final Q values from q learning
Qvals_qlearning = Q
# ### Double Q learning
# - A problem with Q learning is that it overestimates Q values, because the target takes a max over all actions in the next state
# - Taking the max creates an upward bias
# - If the bias were the same across all states, overestimation would not be a problem; however, that is not the case
# - Double Q learning tries to lower this bias in the action value function of the Q learning agent
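The overestimation from taking a max can be seen with a small simulation (not from this notebook): when every true action value is 0 and the estimates are pure noise, the max of the noisy estimates is biased upwards, while the double-estimator trick of selecting with one estimate and evaluating with an independent one is roughly unbiased:

```python
import numpy as np

rng = np.random.default_rng(42)
n_actions, n_samples = 10, 10_000

# True action values are all zero; the estimates are just noise.
noisy_estimates = rng.normal(0.0, 1.0, size=(n_samples, n_actions))

# Single estimator: max over noisy estimates is biased upwards.
single_max = noisy_estimates.max(axis=1).mean()

# Double estimator: pick the argmax with one noise sample,
# evaluate it with an independent one.
second_estimates = rng.normal(0.0, 1.0, size=(n_samples, n_actions))
best = noisy_estimates.argmax(axis=1)
double_est = second_estimates[np.arange(n_samples), best].mean()

print(single_max)   # clearly positive: overestimation
print(double_est)   # close to the true value 0
```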
# this function implements the double Q learning agent
def doubleqlearning(pi, env, gamma=1.0, start_alpha=0.5, min_alpha=0.1, espilon_start=1.0, epsilon_end=0.1, \
max_episodes=1000):
    '''
    [1] pi: initial policy (used to infer the number of states)
    [2] env: environment
    [3] gamma: discount factor
    [4] min_alpha: minimum learning rate
    [5] max_episodes: maximum number of episodes
    [6] espilon_start: start value of epsilon for the epsilon-greedy policy
    [7] epsilon_end: end value of epsilon for the epsilon-greedy policy
    '''
nS = len(pi)
nA = env.action_space.n
Q1 = np.zeros((nS, nA))
Q1_track = np.zeros((max_episodes, nS, nA)) # action value function snapshot for every episode, just for vis purpose
Q2 = np.zeros((nS, nA))
Q2_track = np.zeros((max_episodes, nS, nA)) # action value function snapshot for every episode, just for vis purpose
pi_track = [] # keeps track of greedy policy at the end of every episode
# generate learning rate schedule
alpha_sch = learningrate_schedule(start_alpha, min_alpha, max_episodes) # this will be exponential decay schedule
# epsilon rate schedule
eps_sch = learningrate_schedule(espilon_start, epsilon_end, max_episodes) # this will be exponential decay schedule
# lambda function to choose action based on epsilon-greedy (behavioural or data/experience collection policy)
choose_action = lambda qvals, eps: np.random.randint(nA) if np.random.random() <= eps else np.argmax(qvals)
for ep in tqdm(range(max_episodes)):
# initialize state and action
state, done = env.reset(), False
while not done:
action = choose_action((Q1[state,:] + Q2[state,:])/2, eps_sch[ep]) # choose action using avg of Q1 and Q2
next_state, reward, done, info = env.step(action)
# do a toss
if np.random.randint(2):
# choose best action using Q2
best_action = np.argmax(Q2[next_state, :]) # best action is chosen using Q2
                target = reward + gamma * Q1[next_state, best_action] * (not done) # its value is evaluated with Q1
# update the state action value for the current state,action for Q1
Q1[state, action] = Q1[state, action] + alpha_sch[ep] *(target - Q1[state, action])
else:
# choose best action using Q1
best_action = np.argmax(Q1[next_state, :]) # best action is chosen using Q1
                target = reward + gamma * Q2[next_state, best_action] * (not done) # its value is evaluated with Q2
                # update the state action value for the current state,action for Q2
Q2[state, action] = Q2[state, action] + alpha_sch[ep] *(target - Q2[state, action])
if done:
break
state = next_state
Q1_track[ep,:,:] = Q1
Q2_track[ep,:,:] = Q2
# update greedy policy at the end of every episode
pi_track.append({s: choose_action((Q1[s,:] + Q2[s,:])/2, 0.0) for s in range(nS)})
# update the final converged policy
pi = {s: choose_action((Q1[s,:]+Q2[s,:])/2, 0.0) for s in range(nS)}
return Q1, Q2, Q1_track, Q2_track, pi, pi_track, alpha_sch, eps_sch
# start with some random initial policy
pi_init = {s: np.random.randint(env.env.nA) for s in range(env.env.nS)}
Q1, Q2, Q1_track, Q2_track, pi_conv, pi_track, alpha_sch, eps_sch = doubleqlearning(pi_init, env, \
                                                                                    gamma=1.0, start_alpha=0.5, min_alpha=0.1,\
                                                                                    espilon_start=1.0, epsilon_end=0.1, max_episodes=100_000)
Qavg = (Q1+Q2)/2
Qavg
mean_rew_init_policy = mean_rewards(pi_init, num_episodes=1000, env=env)
mean_rew_conv_policy = mean_rewards(pi_conv, num_episodes=1000, env=env)
print(f"avg reward over 1000 episodes for initial policy : {mean_rew_init_policy}")
print(f"avg reward over 1000 episodes for converged policy : {mean_rew_conv_policy}")
Qvals_doubleqlearning = Qavg
V_qlearning = np.amax(Qvals_qlearning, axis=1)
V_doubleqlearning = np.amax(Qvals_doubleqlearning, axis=1)
plt.figure()
plt.plot(V_qlearning, 'b-', label='qlearning')
plt.plot(V_doubleqlearning, 'r-', label='double-qlearning')
plt.legend()
plt.xlabel("States")
plt.ylabel("Optimal values")
plt.title("Overestimation of optimal state values in qlearning vs. double-qlearning")
|
PI_VI_FrozenLake/Control-MC-TD-SARSA-QLearning.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
gdpFile = os.path.join(os.path.curdir,'../data/external/gdp_pc.csv')
df = pd.read_csv(gdpFile)
gdp16 = df['2016'].values
gdp16 = gdp16[~np.isnan(gdp16)]
np.median(gdp16)
plt.plot(gdp16)
plt.show()
gdp16 = np.sort(gdp16)[::-1]
gdp16
|
notebooks/11-Numpy GDP Fancy Indexing.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
from numpy.linalg import norm
diff90_threshold = 10 #Angle needs to be within 10 degrees of 90 degrees to qualify as reset point
# -
#read in the data
df=pd.read_csv('Data/bodyweight_squat_side_view.csv')
df.head()
# +
######################################################################
### STARTING POSITION - ANKLES ARE SHOULDER WIDTH APART REQUIREMENT ###
### START ###
######################################################################
shoulderLeftList = df["ShoulderLeft"].tolist()
shoulderRightList = df["ShoulderRight"].tolist()
shoulderLeftListX = []
shoulderLeftListY = []
shoulderLeftListZ = []
shoulderRightListX = []
shoulderRightListY = []
shoulderRightListZ = []
# numbers in the squat df are strings separated by spaces
for shoulderLeftListData in shoulderLeftList:
shoulderLeftDataXYZ = shoulderLeftListData.split()
shoulderLeftListX.append(float(shoulderLeftDataXYZ[0]))
shoulderLeftListY.append(float(shoulderLeftDataXYZ[1]))
shoulderLeftListZ.append(float(shoulderLeftDataXYZ[2]))
for shoulderRightListData in shoulderRightList:
shoulderRightDataXYZ = shoulderRightListData.split()
shoulderRightListX.append(float(shoulderRightDataXYZ[0]))
shoulderRightListY.append(float(shoulderRightDataXYZ[1]))
shoulderRightListZ.append(float(shoulderRightDataXYZ[2]))
leftShoulderStart = (shoulderLeftListX[0], shoulderLeftListY[0], shoulderLeftListZ[0])
rightShoulderStart = (shoulderRightListX[0], shoulderRightListY[0], shoulderRightListZ[0])
print(leftShoulderStart)
print(rightShoulderStart)
# +
ankleLeftList = df["AnkleLeft"].tolist()
ankleRightList = df["AnkleRight"].tolist()
ankleLeftListX = []
ankleLeftListY = []
ankleLeftListZ = []
ankleRightListX = []
ankleRightListY = []
ankleRightListZ = []
for ankleLeftListData in ankleLeftList:
ankleLeftDataXYZ = ankleLeftListData.split()
ankleLeftListX.append(float(ankleLeftDataXYZ[0]))
ankleLeftListY.append(float(ankleLeftDataXYZ[1]))
ankleLeftListZ.append(float(ankleLeftDataXYZ[2]))
for ankleRightListData in ankleRightList:
ankleRightDataXYZ = ankleRightListData.split()
ankleRightListX.append(float(ankleRightDataXYZ[0]))
ankleRightListY.append(float(ankleRightDataXYZ[1]))
ankleRightListZ.append(float(ankleRightDataXYZ[2]))
leftAnkleStart = (ankleLeftListX[0], ankleLeftListY[0], ankleLeftListZ[0])
rightAnkleStart = (ankleRightListX[0], ankleRightListY[0], ankleRightListZ[0])
print(leftAnkleStart)
print(rightAnkleStart)
# +
# calculating the 3d distances on the left and right side of body
# focusing on shoulder and feet coordinates
shoulderLeftRightDistance = np.sqrt((rightShoulderStart[0] - leftShoulderStart[0]) ** 2 + (rightShoulderStart[1] - leftShoulderStart[1]) ** 2 + (rightShoulderStart[2] - leftShoulderStart[2]) ** 2)
ankleLeftRightDistance = np.sqrt((rightAnkleStart[0] - leftAnkleStart[0]) ** 2 + (rightAnkleStart[1] - leftAnkleStart[1]) ** 2 + (rightAnkleStart[2] - leftAnkleStart[2]) ** 2)
delta = shoulderLeftRightDistance * .10
print(shoulderLeftRightDistance)
print(ankleLeftRightDistance)
print(delta)
# +
# Compare 3d distance calculations to a specified threshold/heuristic
# the values must be within a specific value from each other to be a valid starting position
val = np.abs(shoulderLeftRightDistance - ankleLeftRightDistance)
print(val)
### Heuristic values are chosen to be 10% of shoulder width ###
heuristic_1 = shoulderLeftRightDistance - delta # min heuristic value that is allowed
heuristic_2 = shoulderLeftRightDistance + delta # max heuristic value that is allowed
# NOTE: Print statements can be changed to return strings instead or whatever format desired
if val < heuristic_1:
print("Please widen your stance")
elif val > heuristic_2:
print("Please narrow your stance")
else:
print("Good starting stance! Please begin the exercise")
######################################################################
### STARTING POSITION - ANKLES ARE SHOULDER WIDTH APART REQUIREMENT ###
### END ###
######################################################################
# +
#################################################################
############# KNEES BENT TO 90 DEGREES REQUIREMENT ##############
#################################################################
#Keep relevant columns for Squat
squat_column = ['KneeRight','KneeLeft','HipRight','HipLeft','AnkleRight','AnkleLeft']
df_squat = df[squat_column]
df_squat.head()
# -
#Do not need to normalize so just create df
dfsample = df_squat
dfsample[['KneeRight_x','KneeRight_y','KneeRight_z']]= dfsample.KneeRight.apply( lambda x: pd.Series(str(x).split()))
dfsample[['KneeLeft_x','KneeLeft_y','KneeLeft_z']]= dfsample.KneeLeft.apply( lambda x: pd.Series(str(x).split()))
dfsample[['HipRight_x','HipRight_y','HipRight_z']]= dfsample.HipRight.apply( lambda x: pd.Series(str(x).split()))
dfsample[['HipLeft_x','HipLeft_y','HipLeft_z']]= dfsample.HipLeft.apply( lambda x: pd.Series(str(x).split()))
dfsample[['AnkleRight_x','AnkleRight_y','AnkleRight_z']]= dfsample.AnkleRight.apply( lambda x: pd.Series(str(x).split()))
dfsample[['AnkleLeft_x','AnkleLeft_y','AnkleLeft_z']]= dfsample.AnkleLeft.apply( lambda x: pd.Series(str(x).split()))
dfsample = dfsample.drop(['KneeRight','KneeLeft','HipLeft','HipRight','AnkleRight','AnkleLeft'],axis=1)
dfsample = dfsample.astype(float)
# +
#Create right knee hip vector
dfsample['right_knee_hip_x'] = dfsample['KneeRight_x'] - dfsample['HipRight_x']
dfsample['right_knee_hip_y'] = dfsample['KneeRight_y'] - dfsample['HipRight_y']
dfsample['right_knee_hip_z'] = dfsample['KneeRight_z'] - dfsample['HipRight_z']
#Create left knee hip vector
dfsample['left_knee_hip_x'] = dfsample['KneeLeft_x'] - dfsample['HipLeft_x']
dfsample['left_knee_hip_y'] = dfsample['KneeLeft_y'] - dfsample['HipLeft_y']
dfsample['left_knee_hip_z'] = dfsample['KneeLeft_z'] - dfsample['HipLeft_z']
#Create right ankle knee vector
dfsample['right_ankle_knee_x'] = dfsample['AnkleRight_x'] - dfsample['KneeRight_x']
dfsample['right_ankle_knee_y'] = dfsample['AnkleRight_y'] - dfsample['KneeRight_y']
dfsample['right_ankle_knee_z'] = dfsample['AnkleRight_z'] - dfsample['KneeRight_z']
#Create left ankle knee vector
dfsample['left_ankle_knee_x'] = dfsample['AnkleLeft_x'] - dfsample['KneeLeft_x']
dfsample['left_ankle_knee_y'] = dfsample['AnkleLeft_y'] - dfsample['KneeLeft_y']
dfsample['left_ankle_knee_z'] = dfsample['AnkleLeft_z'] - dfsample['KneeLeft_z']
#Angle = arccos(dotproduct(v1,v2)/(norm(v1)*norm(v2)))
#For a complete rep, the knee-hip and ankle-knee angle should start close to 180 deg and reach about 90±10 degrees
#calculate the dot product of knee-hip and ankle-knee 1-2
dfsample['dotp12'] = dfsample.apply(lambda x: np.dot(
np.array([x['right_knee_hip_x'],x['right_knee_hip_y'],x['right_knee_hip_z']]),
np.array([x['right_ankle_knee_x'],x['right_ankle_knee_y'],x['right_ankle_knee_z']])
),
axis = 1)
#calculate the angle between knee-hip and ankle-knee 1-2
dfsample['angle12'] = dfsample.apply(lambda x:np.rad2deg(np.arccos(x['dotp12']/
(
norm(np.array([x['right_knee_hip_x'],x['right_knee_hip_y'],x['right_knee_hip_z']]))*
norm(np.array([x['right_ankle_knee_x'],x['right_ankle_knee_y'],x['right_ankle_knee_z']]))
))),
axis = 1)
#same for left side
dfsample['dotp23'] = dfsample.apply(lambda x: np.dot(
np.array([x['left_knee_hip_x'],x['left_knee_hip_y'],x['left_knee_hip_z']]),
np.array([x['left_ankle_knee_x'],x['left_ankle_knee_y'],x['left_ankle_knee_z']])
),
axis = 1)
#calculate the angle between knee-hip and ankle-knee 2-3 (left side)
dfsample['angle23'] = dfsample.apply(lambda x:np.rad2deg(np.arccos(x['dotp23']/
(
norm(np.array([x['left_knee_hip_x'],x['left_knee_hip_y'],x['left_knee_hip_z']]))*
norm(np.array([x['left_ankle_knee_x'],x['left_ankle_knee_y'],x['left_ankle_knee_z']]))
))),
axis = 1)
dfsample.head(20)
# -
angle12list = dfsample['angle12'].tolist()
plt.plot(angle12list)
plt.show()
# +
#Create reps:TBD
dfsample['angle12_diff90'] = np.absolute(dfsample['angle12'] - 90)
dfsample['min_angle12_diff90'] = dfsample.angle12_diff90[(dfsample.angle12_diff90.shift(1) > dfsample.angle12_diff90) & (dfsample.angle12_diff90.shift(-1) > dfsample.angle12_diff90)
&(dfsample.angle12_diff90<=diff90_threshold)
&(dfsample.angle12.shift(1) < dfsample.angle12) & (dfsample.angle12.shift(-1) > dfsample.angle12)]
plt.scatter(dfsample.index, dfsample['min_angle12_diff90'], c='r') #reds indicate the start of a rep
dfsample.angle12.plot()
# +
#mark the rep number for each timestamp
repnumber = np.ones(dfsample.shape[0]) #create an array of ones for rep number
rep_count = 1
for row_index,row in dfsample.iterrows():
#print('\nrow number:',row_index, '\n-------------')
#print(row['min_angle12_diff90'])
if (np.isnan(row['min_angle12_diff90'])):
rep_count=rep_count
else:
rep_count=rep_count + 1
#print(rep_count)
repnumber[row_index] = rep_count
dfsample['repnumber'] = repnumber.astype(int)
# -
#Isolate data for each rep
repnum = 3
dfsub = dfsample[dfsample['repnumber'] ==repnum]
dfsub.angle12.plot()
# +
#Geometric evaluation by rep
#Create a dataframe with max of angle12
dfsumm = dfsample.groupby('repnumber')['angle12'].max().rename('angle12_max').to_frame()
#append min of angle12
dfsumm['angle12_min'] = dfsample.groupby('repnumber')['angle12'].min()
#append min of angle23
dfsumm['angle23_min'] = dfsample.groupby('repnumber')['angle23'].min()
#append max of angle12
dfsumm['angle12_max'] = dfsample.groupby('repnumber')['angle12'].max()
#append max of angle23
dfsumm['angle23_max'] = dfsample.groupby('repnumber')['angle23'].max()
#append count of frames
dfsumm['angle12_count'] = dfsample.groupby('repnumber')['angle23'].count()
#Note that we might have to cut out beginning or end reps OR both
#find out if the squat is balanced and not too deep or not too shallow
#angle limits used here are based on simple math: 180 - knee flexion angle for too deep/ too shallow squat
dfsumm['resultEqual'] = 'Squat not properly balanced'
dfsumm.loc[(dfsumm['angle12_min']==dfsumm['angle23_min']) & (dfsumm['angle12_max']==dfsumm['angle23_max']), 'resultEqual'] = 'Squat properly balanced'
dfsumm['result12'] = 'Too deep in squat'
dfsumm.loc[dfsumm['angle12_min']<=60, 'result12'] = 'Good squat'
dfsumm['result23'] = 'Too shallow Squat'
dfsumm.loc[dfsumm['angle23_max']>=120, 'result23'] = 'Good Squat'
dfsumm['goodrep'] = 0
dfsumm.loc[(dfsumm['angle23_max']>=120) & (dfsumm['angle12_min']<=60), 'goodrep'] = 1
dfsumm
# +
#################################################################
############# KNEES BENT TO 90 DEGREES REQUIREMENT ##############
############# END ##############
#################################################################
|
Data Analysis/.ipynb_checkpoints/Evaluate_Squat-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
#     language: python
# name: python3
# ---
# # <font color='red'> Bayesian Classification </font>
#
# We obtain the posterior probabilities of the classes given the observed features rather than hard class labels.
#
# +
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import scipy as sc
from scipy.stats import multivariate_normal
import pandas as pd
plt.rcParams.update({'font.size': 16})
plt.rcParams['figure.figsize'] = [12, 6]
# -
# ## Reading the data
fname = 'fishes_1.csv'
data = pd.read_csv(fname)
data.head(10)
# ## Extracting the different features
# +
# Extracting the fields with Fishtype=1 (bass)
bass = data[data["Fishtype"] == 1]
basslightness = bass['lightness'].to_numpy()
basslength = bass['length'].to_numpy()
basscount = bass['lightness'].count()
# Extracting the fields with Fishtype=0 (salmon)
salmon = data[data["Fishtype"] == 0]
salmonlightness = salmon['lightness'].to_numpy()
salmonlength = salmon['length'].to_numpy()
salmoncount = salmon['lightness'].count()
# -
# ## Estimation of the probability densities
#
# We will rely on histograms to estimate the conditional probability densities $p({\rm lightness}|{\rm bass})$, $p({\rm lightness}|{\rm salmon})$
#
# The prior probability $p({\rm salmon})$ can be estimated from the training data as
#
# $$p({\rm salmon}) = \frac{N_{\rm salmon}}{N_{\rm bass} + N_{\rm salmon}}$$
#
# and similarly
#
# $$p({\rm bass}) = \frac{N_{\rm bass}}{N_{\rm bass} + N_{\rm salmon}}$$
#
# The joint probabilities can be evaluated as
#
# $$p({\rm salmon,lightness}) = p({\rm lightness|salmon})p({\rm salmon})$$
#
# and
#
# $$p({\rm bass,lightness}) = p({\rm lightness|bass})p({\rm bass})$$
#
# The marginal $p({\rm lightness})$ can be obtained using the sum rule as
#
# $$p({\rm lightness}) = p({\rm bass,lightness})+p({\rm salmon,lightness})$$
#
# +
Nbins = 100
bins = np.linspace(-6,6,Nbins+1)
bincenters = (bins[0:Nbins]+bins[1:Nbins+1])/2
# Conditional probabilities
p_l_given_salmon,bc = np.histogram(salmonlightness, bins=bins)
p_l_given_bass,bc = np.histogram(basslightness, bins=bins)
p_l_given_bass = p_l_given_bass/basscount
p_l_given_salmon = p_l_given_salmon/salmoncount
# Priors
pbass = basscount/(basscount+salmoncount)
psalmon = salmoncount/(basscount+salmoncount)
# Joint probabilities
p_l_and_bass = p_l_given_bass*pbass
p_l_and_salmon = p_l_given_salmon*psalmon
# Evidence
p_lightness = p_l_and_bass + p_l_and_salmon
p_lightness = p_lightness + 1e-8*(p_lightness==0)
# Posterior probabilities: Bayes estimate
p_bass_given_l = p_l_given_bass*pbass/p_lightness
p_salmon_given_l = p_l_given_salmon*psalmon/p_lightness
# PLOTTING THE PROBABILITIES
#---------------------------
fig = plt.figure(figsize=[12,12])
s=plt.plot(bincenters,p_bass_given_l,'b',label='$p(bass|lightness)$',linewidth=4)
s=plt.plot(bincenters,p_salmon_given_l,'r',label='$p(salmon|lightness)$',linewidth=4)
bass_region = p_bass_given_l > p_salmon_given_l
salmon_region = p_bass_given_l <= p_salmon_given_l
s=plt.fill_between(bincenters,salmon_region,label='$salmon: R_1$',facecolor='red', alpha=0.2)
s=plt.fill_between(bincenters,bass_region,label='$bass: R_2$',facecolor='blue', alpha=0.2)
fig = plt.figure(figsize=[12,12])
s=plt.plot(bincenters,p_l_and_bass,'b',label='p(lightness,bass) ',linewidth=4)
s=plt.plot(bincenters,p_l_and_salmon,'r',label='p(lightness,salmon)',linewidth=4)
bass_region = (p_l_and_bass > p_l_and_salmon)
salmon_region = (p_l_and_bass <= p_l_and_salmon)
salmon_err = p_l_and_salmon*bass_region
bass_err = p_l_and_bass*salmon_region
bass_region = 0.03*bass_region
salmon_region = 0.03*salmon_region
s=plt.fill_between(bincenters,salmon_region,label='$salmon: R_1$',facecolor='red', alpha=0.2)
s=plt.fill_between(bincenters,bass_region,label='$bass: R_2 $',facecolor='blue', alpha=0.2)
s=plt.fill_between(bincenters,salmon_err,0*salmon_err,label='Error:salmon',facecolor='red', alpha=0.8)
s=plt.fill_between(bincenters,bass_err,0*bass_err,label='Error:bass',facecolor='blue', alpha=0.8)
# -
# # Reject option to reduce error
#
# Bayes classification relies on the decision boundary
#
# $$p({\rm bass}|l) > p({\rm salmon}|l)~:~~ l\rightarrow \rm bass$$
#
# Note that the posteriors add up to one $p({\rm bass}|l)+ p({\rm salmon}|l)=1$. Combining the two equations, we get
#
# $$p({\rm bass}|l) > 1-p({\rm bass}|l) ~:~~ l\rightarrow \rm bass$$
#
# or equivalently
#
# $$p({\rm bass}|l) > \frac{1}{2} ~:~~ l\rightarrow \rm bass$$
#
# The threshold of 0.5 results in high error rates close to the decision boundary. We can reduce the error by choosing more conservative thresholds
#
# $$p({\rm bass}|l) > 0.85 ~:~~ l\rightarrow \rm bass$$
# $$p({\rm salmon}|l) > 0.85 ~:~~ l\rightarrow \rm salmon$$
#
# +
fig = plt.figure(figsize=[12,12])
s=plt.plot(bincenters,p_l_and_bass,'b',label='p(lightness,bass) ',linewidth=4)
s=plt.plot(bincenters,p_l_and_salmon,'r',label='p(lightness,salmon)',linewidth=4)
# TODO: add the code for the new salmon and bass regions
bass_region = p_bass_given_l > 0.85
salmon_region = p_salmon_given_l > 0.85
salmon_err = p_l_and_salmon*bass_region
bass_err = p_l_and_bass*salmon_region
bass_region = 0.03*bass_region
salmon_region = 0.03*salmon_region
s=plt.fill_between(bincenters,salmon_region,label='$salmon: R_1$',facecolor='red', alpha=0.2)
s=plt.fill_between(bincenters,bass_region,label='$bass: R_2 $',facecolor='blue', alpha=0.2)
s=plt.fill_between(bincenters,salmon_err,0*salmon_err,label='Error:salmon',facecolor='red', alpha=0.8)
s=plt.fill_between(bincenters,bass_err,0*bass_err,label='Error:bass',facecolor='blue', alpha=0.8)
s = plt.title('Error minimization')
s = fig.gca().legend()
# -
# ## Risk minimization
#
#
# <font color = red>class 1: Salmon</font>
#
# <font color = blue>class 2: Bass</font>
#
# Risk in classifying <font color = red>class 1=Salmon</font> as <font color = red>class 1=Salmon</font>: $\lambda_{11}=0$
#
# Risk in classifying <font color = red>class 1=Salmon</font> as <font color = blue>class 2 = Bass</font>: $\lambda_{12}=6$
#
# Risk in classifying <font color = blue>class 2=Bass</font> as <font color = red>class 1=Salmon</font>: $\lambda_{21}=0.5$
#
# Risk in classifying <font color = blue>class 2=Bass</font> as <font color = blue>class 2=Bass</font>: $\lambda_{22}=0$
# +
fig = plt.figure(figsize=[12,12])
s=plt.plot(bincenters,p_l_and_bass,'b',label='p(lightness,bass) ',linewidth=4)
s=plt.plot(bincenters,p_l_and_salmon,'r',label='p(lightness,salmon)',linewidth=4)
# TO DO -----------------------------------
# Compute weighted probabilities
lambda12_p1 = 6 * p_l_and_salmon
lambda21_p2 = .5 * p_l_and_bass
# Evaluate the regions
salmon_region = lambda12_p1 > lambda21_p2
bass_region = lambda12_p1 <= lambda21_p2
#------------------------------------------
# Plotting
salmon_err = p_l_and_salmon*bass_region
bass_err = p_l_and_bass*salmon_region
s=plt.plot(bincenters,lambda12_p1,'r:',label='lambda12 x p(lightness,1) ',linewidth=4)
s=plt.plot(bincenters,lambda21_p2,'b:',label='lambda21 x p(lightness,2)',linewidth=4)
bass_region = 0.08*bass_region
salmon_region = 0.08*salmon_region
s=plt.fill_between(bincenters,salmon_region,label='$salmon: R_1$',facecolor='red', alpha=0.2)
s=plt.fill_between(bincenters,bass_region,label='$bass: R_2 $',facecolor='blue', alpha=0.2)
s=plt.fill_between(bincenters,salmon_err,0*salmon_err,label='Risk:salmon',facecolor='red', alpha=0.8)
s=plt.fill_between(bincenters,bass_err,0*bass_err,label='Risk:bass',facecolor='blue', alpha=0.8)
s = plt.title('Risk minimization')
s = fig.gca().legend()
|
Risk_Minimization.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### 1. Shallow copy vs. deep copy
#
# - mutable: collection data types: list, dict, set
# - immutable: primitive data types: int, float, str, bool, tuple
# +
# immutable
# -
data1 = 1
data2 = data1
data3 = 1
id(data1), id(data2), id(data3)
# +
#mutable
# -
data1 = [1,2]
data2 = data1 # the reference held by data1 is stored in data2
data3 = [1,2]
id(data1), id(data2), id(data3)
# +
# instances of a class
# -
class Data:
def __init__(self, data):
self.data = data
data1 = Data("class obj")
data2 = data1 # shallow copy: copies the reference (call by reference), so the ids are the same
id(data1), id(data2)
import copy
data3 = copy.deepcopy(data1)
id(data1), id(data2), id(data3) # deep copy: copies the value (call by value), so data3 gets a new id
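The distinction matters most for nested containers: a shallow copy creates a new outer object but shares the inner objects, while a deep copy duplicates everything. A minimal illustration:

```python
import copy

nested = [[1, 2], [3, 4]]
shallow = copy.copy(nested)   # new outer list, shared inner lists
deep = copy.deepcopy(nested)  # fully independent copy

nested[0].append(99)
print(shallow[0])  # [1, 2, 99] -> inner list is shared with nested
print(deep[0])     # [1, 2]    -> deep copy is unaffected
```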
# #### linked list
#
# - singly linked list
# - doubly linked list
class Node:
def __init__(self, data, next_node=None):
self.data = data
self.next_node = next_node
def __repr__(self):
return self.data
def add_node(self, node):
node.next_node = self.next_node
self.next_node = node
n1 = Node("1.C언어")
n2 = Node("2.파이썬")
n3 = Node("3.자바")
n4 = Node("4.HTML")
n1.next_node = n2
n2.next_node = n3
def disp(node):
curr_node = node
while True:
print(curr_node)
if curr_node.next_node is None:
break
curr_node = curr_node.next_node
n2.add_node(n4)
disp(n2)
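Only the singly linked list is implemented above. A minimal sketch of a doubly linked list node could look like the following; the names `DNode` and `prev_node` are illustrative, not from the notebook:

```python
class DNode:
    """Doubly linked list node: each node also keeps a reference to its predecessor."""
    def __init__(self, data):
        self.data = data
        self.prev_node = None
        self.next_node = None

    def add_node(self, node):
        # insert `node` right after self, wiring links in both directions
        node.next_node = self.next_node
        node.prev_node = self
        if self.next_node is not None:
            self.next_node.prev_node = node
        self.next_node = node

a = DNode("A")
b = DNode("B")
c = DNode("C")
a.add_node(c)   # A <-> C
a.add_node(b)   # A <-> B <-> C
print([a.data, a.next_node.data, a.next_node.next_node.data])  # ['A', 'B', 'C']
print(c.prev_node.data)  # 'B'
```

The backward links make traversal in either direction possible, at the cost of maintaining two pointers per insertion.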
|
practice/linked_list.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 5.3.1 The Validation Set Approach
# +
# imports and setup
import numpy as np
import pandas as pd
pd.set_option('precision', 2) # number precision for pandas
pd.set_option('display.max_rows', 12)
pd.set_option('display.float_format', '{:20,.2f}'.format) # get rid of scientific notation
# +
# load data
auto = pd.read_csv('../datasets/Auto.csv', na_values=['?'])
auto.dropna(axis=0, inplace=True)
auto.cylinders = auto.cylinders.astype('category')
auto.name = auto.name.astype('category')
auto.reset_index(inplace=True)
auto['horsepower_2'] = np.power(auto.horsepower, 2)
auto['horsepower_3'] = np.power(auto.horsepower, 3)
auto['horsepower_4'] = np.power(auto.horsepower, 4)
auto['horsepower_5'] = np.power(auto.horsepower, 5)
# Polynomial Features using sklearn:
from sklearn.preprocessing import PolynomialFeatures
pol = PolynomialFeatures(degree=5, interaction_only=False, include_bias=False)
polf = pol.fit_transform(auto.loc[:, 'horsepower'].values.reshape(-1, 1))
# +
from sklearn.model_selection import train_test_split
X, y = auto.loc[:, ['horsepower', 'horsepower_2', 'horsepower_3']], auto.mpg
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)
# +
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
# ols model with intercept
lm1 = LinearRegression(fit_intercept=True)
lm2 = LinearRegression(fit_intercept=True)
lm3 = LinearRegression(fit_intercept=True)
lm1_fit = lm1.fit(X_train.loc[:, 'horsepower'].values.reshape(-1, 1), y_train)
lm2_fit = lm2.fit(X_train.loc[:, ['horsepower', 'horsepower_2']], y_train)
lm3_fit = lm3.fit(X_train.loc[:, ['horsepower', 'horsepower_2', 'horsepower_3']], y_train)
lm1_predict = lm1_fit.predict(X_test.loc[:, 'horsepower'].values.reshape(-1, 1))
lm2_predict = lm2_fit.predict(X_test.loc[:, ['horsepower', 'horsepower_2']])
lm3_predict = lm3_fit.predict(X_test.loc[:, ['horsepower', 'horsepower_2', 'horsepower_3']])
print('lm1 MSE:', mean_squared_error(y_test, lm1_predict))
print('lm2 MSE:', mean_squared_error(y_test, lm2_predict))
print('lm3 MSE:', mean_squared_error(y_test, lm3_predict))
# -
# # 5.3.2 Leave-One-Out Cross-Validation
# +
from sklearn.model_selection import LeaveOneOut
X, y = auto.loc[:, ['horsepower', 'horsepower_2', 'horsepower_3', 'horsepower_4', 'horsepower_5']], auto.mpg
loocv = LeaveOneOut()
loocv.get_n_splits(X)
# +
loocv_mse = []
lm = LinearRegression(fit_intercept=True)
for train_index, test_index in loocv.split(X):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
lm1_fit = lm.fit(X_train.loc[:, 'horsepower'].values.reshape(-1, 1), y_train)
lm1_predict = lm1_fit.predict(X_test.loc[:, 'horsepower'].values.reshape(-1, 1))
loocv_mse.append(mean_squared_error(y_test, lm1_predict))
np.array(loocv_mse).mean()
# +
# using sklearn cross_validation_score
from sklearn.model_selection import cross_val_score
lm = LinearRegression(fit_intercept=True)
cval = cross_val_score(lm,
auto.loc[:, 'horsepower'].values.reshape(-1, 1),
auto.mpg,
cv=len(auto), # k=n k-Fold -> LOOCV
n_jobs=-1,
scoring='neg_mean_squared_error')
-cval.mean()
# +
# Loop over polynomial linear regressions of degree 1 to 5 with LOOCV
loocv_poly = {}
for i in range(1, 6):
loocv_mse = []
for train_index, test_index in loocv.split(X):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
if i == 1:
X_TRAIN = X_train.iloc[:,0:1].values.reshape(-1, 1)
X_TEST = X_test.iloc[:,0:1].values.reshape(-1, 1)
else:
X_TRAIN = X_train.iloc[:,0:i]
X_TEST = X_test.iloc[:,0:i]
loocv_mse.append(
mean_squared_error(
y_test,
LinearRegression(fit_intercept=True)
.fit(
X_TRAIN,
y_train
)
.predict(
X_TEST
)
)
)
loocv_poly['lm' + str(i) + '_MSE'] = np.array(loocv_mse).mean()
# -
loocv_poly
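# Worth noting (an aside, on hypothetical synthetic data rather than `auto`):
# for least-squares fits, LOOCV does not actually require n refits. ISLR's
# shortcut CV = (1/n) * sum(((y_i - yhat_i) / (1 - h_i))**2), where h_i is the
# leverage, gives the identical answer from a single fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# Design matrix with intercept
A = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(A, y, rcond=None)[0]
fitted = A @ beta
# leverage values h_i = diagonal of the hat matrix A (A'A)^-1 A'
h = np.einsum('ij,ij->i', A @ np.linalg.inv(A.T @ A), A)

loocv_shortcut = np.mean(((y - fitted) / (1 - h)) ** 2)

# Explicit leave-one-out loop for comparison
errors = []
for i in range(n):
    mask = np.arange(n) != i
    b = np.linalg.lstsq(A[mask], y[mask], rcond=None)[0]
    errors.append((y[i] - A[i] @ b) ** 2)
loocv_explicit = np.mean(errors)
print(loocv_shortcut, loocv_explicit)
```

# The two numbers agree to machine precision; the identity holds exactly for
# ordinary least squares.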
# # 5.3.3 k-Fold Cross-Validation
# +
from sklearn.model_selection import KFold
X, y = auto.loc[:, ['horsepower', 'horsepower_2', 'horsepower_3', 'horsepower_4', 'horsepower_5']], auto.mpg
kf = KFold(n_splits=10, shuffle=True, random_state=42)
# +
# Fit polynomial regressions of degree 1-5 and estimate test MSE with 10-fold CV
kf_poly = {}
for i in range(1, 6):
kf_mse = []
for train_index, test_index in kf.split(X):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
if i == 1:
X_TRAIN = X_train.iloc[:,0:1].values.reshape(-1, 1)
X_TEST = X_test.iloc[:,0:1].values.reshape(-1, 1)
else:
X_TRAIN = X_train.iloc[:,0:i]
X_TEST = X_test.iloc[:,0:i]
kf_mse.append(
mean_squared_error(
y_test,
LinearRegression(fit_intercept=True)
.fit(
X_TRAIN,
y_train
)
.predict(
X_TEST
)
)
)
kf_poly['lm' + str(i) + '_MSE'] = np.array(kf_mse).mean()
kf_poly
# -
# # 5.3.4 The Bootstrap
portfolio = pd.read_csv('../datasets/Portfolio.csv', index_col=0)
def alpha_fn(data, start_index, end_index):
    X = data['X'][start_index:end_index]
    Y = data['Y'][start_index:end_index]
    # ddof=1 makes np.var consistent with np.cov, whose default is the
    # sample (n - 1) normalization
    cov_xy = np.cov(X, Y)[0][1]
    return ((np.var(Y, ddof=1) - cov_xy) /
            (np.var(X, ddof=1) + np.var(Y, ddof=1) - 2 * cov_xy))
alpha_fn(portfolio, 0, 100)
# +
from sklearn.utils import resample
portfolio_bs = resample(portfolio, replace=True, n_samples=100)
alpha_fn(portfolio_bs, 0, 100)
# +
# sklearn's Bootstrap iterator was removed; use resample in a loop instead
bs_alpha = []
for i in range(0, 1000):
bs_alpha.append(
alpha_fn(resample(portfolio, replace=True, n_samples=100), 0, 100)
)
bs_alpha = np.array(bs_alpha)
print('Bootstrapped alpha:', bs_alpha.mean())
print('SE:', bs_alpha.std())
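# Beyond a point estimate and standard error, the same bootstrap replicates
# give a percentile confidence interval directly, with no normality
# assumption. A self-contained sketch on hypothetical two-asset returns
# standing in for the Portfolio data:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical two-asset returns (stand-in for Portfolio's X and Y columns)
X = rng.normal(0.0, 1.0, size=100)
Y = 0.5 * X + rng.normal(0.0, 1.0, size=100)

def alpha(X, Y):
    # minimum-variance portfolio weight, as in ISLR section 5.2
    cov_xy = np.cov(X, Y)[0][1]
    return ((np.var(Y, ddof=1) - cov_xy) /
            (np.var(X, ddof=1) + np.var(Y, ddof=1) - 2 * cov_xy))

boot = np.empty(1000)
for b in range(1000):
    idx = rng.integers(0, len(X), size=len(X))  # sample rows with replacement
    boot[b] = alpha(X[idx], Y[idx])

ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print('bootstrap mean:', boot.mean())
print('95% percentile CI:', ci_low, ci_high)
```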
# +
def boot_fn(data, start_index, end_index):
m = LinearRegression(fit_intercept=True).fit(
data['horsepower'][start_index:end_index].values.reshape(-1, 1),
data['mpg'][start_index:end_index]
)
return m.intercept_, m.coef_
boot_fn(auto, 0, 392)
# -
boot_fn(resample(auto, replace=True, n_samples=392), 0, 392)
# +
bs_boot = {'t1': [], 't2': []}
for i in range(0, 1000):
bs_boot['t1'].append(
boot_fn(resample(auto, replace=True, n_samples=392), 0, 392)[0]
)
bs_boot['t2'].append(
boot_fn(resample(auto, replace=True, n_samples=392), 0, 392)[1][0]
)
t1_es = np.array(bs_boot['t1']).mean()
t1_se = np.array(bs_boot['t1']).std()
t2_es = np.array(bs_boot['t2']).mean()
t2_se = np.array(bs_boot['t2']).std()
print('t1 bs estimate & se:', t1_es, t1_se)
print('t2 bs estimate & se:', t2_es, t2_se)
# +
import statsmodels.formula.api as sm
ols = sm.ols('mpg ~ horsepower', data=auto).fit()
ols.summary().tables[1]
# -
def boot_fn2(data, start_index, end_index):
m = LinearRegression(fit_intercept=True).fit(
data[['horsepower', 'horsepower_2']][start_index:end_index],
data['mpg'][start_index:end_index]
)
return m.intercept_, m.coef_
# +
bs_boot2 = {'t1': [], 't2': [], 't3': []}
for i in range(0, 1000):
bs_boot2['t1'].append(
boot_fn2(resample(auto, replace=True, n_samples=392), 0, 392)[0]
)
bs_boot2['t2'].append(
boot_fn2(resample(auto, replace=True, n_samples=392), 0, 392)[1][0]
)
bs_boot2['t3'].append(
boot_fn2(resample(auto, replace=True, n_samples=392), 0, 392)[1][1]
)
t1_es = np.array(bs_boot2['t1']).mean()
t1_se = np.array(bs_boot2['t1']).std()
t2_es = np.array(bs_boot2['t2']).mean()
t2_se = np.array(bs_boot2['t2']).std()
t3_es = np.array(bs_boot2['t3']).mean()
t3_se = np.array(bs_boot2['t3']).std()
print('t1 bs estimate & se:', t1_es, t1_se)
print('t2 bs estimate & se:', t2_es, t2_se)
print('t3 bs estimate & se:', t3_es, t3_se)
# +
import statsmodels.formula.api as sm
ols2 = sm.ols('mpg ~ horsepower + horsepower_2', data=auto).fit()
ols2.summary().tables[1]
# Source notebook: labs_emredjan/labs/lab_05.3_cross_validation_and_the_bootstrap.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/vishalnarnaware/TextSummary/blob/master/TextSum.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="Jg-Yo496jdj8" colab_type="code" outputId="db5c8d61-fda0-4124-bfd2-f13bf4dc4706" colab={"base_uri": "https://localhost:8080/", "height": 141}
from google.colab import drive
drive.mount('/gdrive')
# %cd /gdrive
# + id="delP6IYdmS7w" colab_type="code" outputId="d345c677-a31e-423e-919a-241b20519076" colab={"base_uri": "https://localhost:8080/", "height": 35}
# cd /gdrive/My Drive/Colab Notebooks/textsummary
# + id="8w5YU8-9l38Q" colab_type="code" outputId="d198b888-13e6-4275-92e6-3d624a506d6f" colab={"base_uri": "https://localhost:8080/", "height": 124}
# #!pip install nltk
import nltk
nltk.download('punkt')
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize, sent_tokenize
import time
file_name = input("Enter file name:") + '.txt'
# context manager closes the file automatically
with open(file_name, "r") as file:
    t = file.readlines()
def listToString(s):
    # concatenate the list of lines into a single string
    return "".join(s)
text_str = listToString(t)
def _create_frequency_table(text_string) -> dict:
"""
we create a dictionary for the word frequency table.
For this, we should only use the words that are not part of the stopWords array.
Removing stop words and making frequency table
Stemmer - an algorithm to bring words to its root word.
:rtype: dict
"""
stopWords = set(stopwords.words("english"))
words = word_tokenize(text_string)
ps = PorterStemmer()
freqTable = dict()
for word in words:
word = ps.stem(word)
if word in stopWords:
continue
if word in freqTable:
freqTable[word] += 1
else:
freqTable[word] = 1
return freqTable
def _score_sentences(sentences, freqTable) -> dict:
    """
    Score a sentence by its words.
    Basic algorithm: add up the frequency of every non-stop word found in a
    sentence, then divide by the number of matched words so that long
    sentences do not win purely by length.
    :rtype: dict
    """
    sentenceValue = dict()
    for sentence in sentences:
        word_count_in_sentence_except_stop_words = 0
        for wordValue in freqTable:
            if wordValue in sentence.lower():
                word_count_in_sentence_except_stop_words += 1
                if sentence[:10] in sentenceValue:
                    sentenceValue[sentence[:10]] += freqTable[wordValue]
                else:
                    sentenceValue[sentence[:10]] = freqTable[wordValue]
        # Normalize once per sentence, after all word frequencies are summed.
        if sentence[:10] in sentenceValue:
            sentenceValue[sentence[:10]] = sentenceValue[sentence[:10]] / word_count_in_sentence_except_stop_words
    # sentence[:10] (the first 10 characters of each sentence) is used as the
    # dictionary key to save memory.
    return sentenceValue
def _find_average_score(sentenceValue) -> float:
    """
    Find the average score from the sentence value dictionary
    :rtype: float
    """
sumValues = 0
for entry in sentenceValue:
sumValues += sentenceValue[entry]
# Average value of a sentence from original text
average = (sumValues / len(sentenceValue))
return average
def _generate_summary(sentences, sentenceValue, threshold):
sentence_count = 0
summary = ''
for sentence in sentences:
if sentence[:10] in sentenceValue and sentenceValue[sentence[:10]] >= (threshold):
summary += " " + sentence
sentence_count += 1
return summary
def run_summarization(text):
# 1 Create the word frequency table
freq_table = _create_frequency_table(text)
'''
We already have a sentence tokenizer, so we just need
to run the sent_tokenize() method to create the array of sentences.
'''
# 2 Tokenize the sentences
sentences = sent_tokenize(text)
# 3 Important Algorithm: score the sentences
sentence_scores = _score_sentences(sentences, freq_table)
# 4 Find the threshold
threshold = _find_average_score(sentence_scores)
# 5 Important Algorithm: Generate the summary
summary = _generate_summary(sentences, sentence_scores, 0.95 * threshold)
return summary
if __name__ == '__main__':
tic=time.time()
result = run_summarization(text_str)
print(result)
toc=time.time()
    print("Time taken:", (toc - tic) * 1000, "ms")
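# The NLTK pipeline above needs downloaded corpora, which makes it hard to
# test offline. The snippet below is a dependency-free miniature of the same
# frequency-scoring idea (regex tokenization, no stemming, a tiny hand-picked
# stop-word set) for illustration only:

```python
import re
from collections import Counter

STOP = {'the', 'a', 'an', 'is', 'are', 'of', 'to', 'and', 'in'}

def mini_freq_table(text):
    # word frequencies, excluding stop words
    words = re.findall(r'[a-z]+', text.lower())
    return Counter(w for w in words if w not in STOP)

def mini_score(sentence, table):
    # average frequency of the sentence's non-stop words
    words = re.findall(r'[a-z]+', sentence.lower())
    hits = [table[w] for w in words if w in table]
    return sum(hits) / len(hits) if hits else 0.0

text = "Cats are small. Cats like fish. The weather is nice."
table = mini_freq_table(text)
scores = {s: mini_score(s, table) for s in text.split('. ')}
print(scores)
```

# Sentences repeating frequent words ("Cats") score highest, which is exactly
# the signal the full summarizer thresholds on.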
# + id="z_yKQrbljcRw" colab_type="code" colab={}
# Source notebook: TextSum.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Euler Problem 169
# =================
#
# Define f(0)=1 and f(n) to be the number of different ways n can be expressed as a sum of integer powers of 2 using each power no more than twice.
#
# For example, f(10)=5 since there are five different ways to express 10:
#
# 1 + 1 + 8
# 1 + 1 + 4 + 4
# 1 + 1 + 2 + 2 + 4
# 2 + 4 + 4
# 2 + 8
#
# What is f(10<sup>25</sup>)?
#
# +
F = {}
F[0] = F[1] = 1
def f(n):
if n in F:
return F[n]
if (n % 2):
val = f(n//2)
else:
val = f(n//2) + f(n//2-1)
F[n] = val
return val
print(f(10**25))
# -
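# A quick sanity check (an aside): the memoized recurrence can be compared
# against direct enumeration for small n, and should reproduce f(10) = 5 from
# the problem statement.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f_memo(n):
    # same recurrence as above: f(0) = f(1) = 1,
    # f(2k+1) = f(k), f(2k) = f(k) + f(k-1)
    if n <= 1:
        return 1
    if n % 2:
        return f_memo(n // 2)
    return f_memo(n // 2) + f_memo(n // 2 - 1)

def f_brute(n, power=0):
    # direct enumeration: use the current power of 2 zero, one, or two times
    if n == 0:
        return 1
    if 2 ** power > n:
        return 0
    return sum(f_brute(n - k * 2 ** power, power + 1)
               for k in range(3) if k * 2 ** power <= n)

print(f_memo(10), f_brute(10))
```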
# Source notebook: Euler 169 - Diatonic sequence.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Setting up the colors for different data structures
# +
import numpy as np
from cycler import cycler as cy
colors = list(map(lambda n: tuple(0.95 * (np.array(n) / 255)), [
[128, 0, 0],
[230, 25, 75], [255, 225, 25], [ 0, 130, 200],
[245, 130, 48], [145, 30, 180], [ 70, 240, 240], [240, 50, 230],
[0.9*210, 0.9*245, 60], [250, 190, 190], [ 0, 128, 128],#, [230, 190, 255],
[170, 110, 40], [ 60, 180, 75], [0.8*230, 0.8*190, 0.8*255], [170, 255, 195],
[128, 128, 0], [ 0, 0, 128], [255, 215, 180], [128, 128, 128],
[85, 85, 85], [170, 170, 170], [0.9*255, 0.9*250, 0.9*200], [0, 0, 0], [0, 0, 0]]))
cyl = cy('color', colors)
cyl = cyl + (cy('lw', np.linspace(1.25, 2, 6)) * cy('linestyle', [':', '-', '--', '-.']))
finite_cy_iter = iter(cyl)
styles = {
structure:next(finite_cy_iter)
for structure in ['ArrayMap', 'ClojureHashMap', 'ClojureTreeMap',
'ClojureVectorMap', 'IntChamp32Java', 'IntChamp32Kotlin',
'IntHamt16Java', 'IntHamt32Java', 'IntHamt32Kotlin',
'IntHamt64Java', 'IntImplicitKeyHamtKotlin',
'PaguroHashMap', 'PaguroTreeMap',
'PaguroRrbMap', 'PaguroVectorMap', 'RadixTree',
'RadixTreeRedux', 'ScalaHashMap', 'ScalaIntMap',
'ScalaRrbMap', 'ScalaTreeMap', 'SdkMap', 'RrbTree']
}
styles['L1 cache'] = {'lw': 0.25, 'linestyle': ':', 'color': (0,0,0)}
styles['L2 cache'] = {'lw': 0.25, 'linestyle': '--', 'color': (0,0,0)}
styles['L3 cache'] = {'lw': 0.25, 'linestyle': '-', 'color': (0,0,0)}
def get_style(name):
    res = styles.get(name)
    if res is None:
        # "V2" variants fall back to the base style with a slightly thicker line
        res = styles[name.replace("V2", "")]
        r = res.copy()
        r['lw'] += 0.5
        return r
    return res
()
# -
# Source notebook: analysis/colors.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pickle
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
X_good = pickle.load(open('partitioned_features_good.pickle', 'rb'))
m = 99999
for i in range(len(X_good)):
if len(X_good[i]) < m:
m = len(X_good[i])
print("length: {}, index: {}".format(m, i))
newlist = list()
for i in range(len(X_good)):
if len(X_good[i]) > 600:
newlist.append(X_good[i][:600])
len(newlist)/len(X_good)
newlist = np.array(newlist)[:,:,:10]
newlist.shape
X_good = newlist
#good ones
y_good = np.zeros(len(X_good))
X_def = pickle.load(open('partitioned_features_defective.pickle', 'rb'))
newlist = list()
for i in range(len(X_def)):
if len(X_def[i]) > 600:
newlist.append(X_def[i][:600])
len(newlist)/len(X_def)
newlist = np.array(newlist)[:,:,:10]
X_def = newlist
y_def = np.ones(len(X_def))
X_def.shape
X = np.concatenate([X_good, X_def], axis=0)
y = np.concatenate([y_good, y_def], axis=0)
print('{},{}'.format(len(X),len(y)))
np.save('y_raw.npy',y)
np.save('y_norm.npy',y)
# +
a,b = np.argwhere(np.isnan(X_def)),np.argwhere(np.isnan(X_good))
print(len(a), len(b))
# -
def normalize_ts(X):
for i in range(len(X)):
ts = X[i]
scaler = MinMaxScaler()
scaler.fit(ts)
X[i] = scaler.transform(ts)
return X
X = normalize_ts(X)
print(X[0])
np.save('X_norm.npy', X)
np.save('y_norm.npy', y)
a = np.load('X_norm.npy')
a[0]
X = np.load('data_lstm/x_features.npy')
y = np.load('data_lstm/y.npy')
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2)
X_e = tsne.fit_transform(X, y=None)
for e in X:
print(e)
# Source notebook: data/clustering_zeros.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # J = 1 to 2 parameter scans - Data analysis
# Running the data analysis for the data from Nov 9 2021 to for the parameter scans performed while observing the signal on R(1), F' = 3 and R(2), F' = 3. These scans give information about the optimal frequency and power for the second step of SPA.
#
# Load the data analysis script:
# +
# %load_ext autoreload
# %autoreload 2
from pathlib import Path
from SPA_camera_data_analysis import analyze_SPA_dataset
# Suppressing warnings if desired (getting annoying warning about run names)
import warnings
warnings.filterwarnings(action = 'ignore')
# -
# Set up paths:
# Define path to data
DATA_DIR = Path(
    # raw string: the Windows path contains backslashes
    r"D:\Google Drive\CeNTREX Oskari\State preparation\SPA\Data analysis\Data"
)
DATA_FNAME = Path("SPA_test_11_9_2021.hdf")
filepath = DATA_DIR / DATA_FNAME
# Define number of bootstraps:
n_bs = 1000
# ## R(1), F' = 3 frequency scan
#
analyze_SPA_dataset(
filepath,
16,
0,
scan_param_name='SynthHD Pro SPA J12 SetFrequencyCHAGUI',
scan_param_new_name='SPAJ12Frequency',
switch_name = 'MicrowavesON',
n_bs = n_bs,
)
# ## R(1), F' = 3 power scan
analyze_SPA_dataset(
filepath,
17,
0,
scan_param_name= 'SynthHD Pro SPA J12 SetPowerCHAGUI',
scan_param_new_name='SPAJ12Power',
switch_name = 'MicrowavesON',
n_bs = n_bs,
)
# ## R(2), F' = 4 power scan
analyze_SPA_dataset(
filepath,
21,
0,
scan_param_name= 'SynthHD Pro SPA J12 SetPowerCHAGUI',
scan_param_new_name='SPAJ12Power',
switch_name = None,
n_bs = n_bs,
)
# ## R(2), F' = 4 frequency scan
analyze_SPA_dataset(
filepath,
22,
0,
scan_param_name= 'SynthHD Pro SPA J12 SetFrequencyCHAGUI',
scan_param_new_name='SPAJ12Frequency',
switch_name = None,
n_bs = n_bs,
)
# Source notebook: SPA/J = 1 to 2 parameter scans - analysis - 11-9-2021.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R [conda env:ATACseq_GeneScoring]
# language: R
# name: conda-env-ATACseq_GeneScoring-r
# ---
# ### Installation
# `conda install bioconductor-genomicranges bioconductor-summarizedexperiment -y`
#
# `R`
# `devtools::install_github("caleblareau/BuenColors")`
# ### Import packages
library(GenomicRanges)
library(SummarizedExperiment)
library(data.table)
library(dplyr)
library(BuenColors)
library(Matrix)
# ### Preprocess
# `bsub < count_reads_peaks_erisone.sh`
path = './count_reads_peaks_output/'
files <- list.files(path,pattern = "\\.txt$")
length(files)
#assuming tab separated values with a header
datalist = lapply(files, function(x)fread(paste0(path,x))$V4)
#assuming the same header/columns for all files
datafr = do.call("cbind", datalist)
dim(datafr)
df_regions = read.csv("../../input/combined.sorted.merged.bed",
sep = '\t',header=FALSE,stringsAsFactors=FALSE)
dim(df_regions)
peaknames = paste(df_regions$V1,df_regions$V2,df_regions$V3,sep = "_")
head(peaknames)
head(sapply(strsplit(files,'\\.'),'[', 2))
colnames(datafr) = sapply(strsplit(files,'\\.'),'[', 2)
rownames(datafr) = peaknames
datafr[1:3,1:3]
dim(datafr)
# +
# saveRDS(datafr, file = './datafr.rds')
# datafr = readRDS('./datafr.rds')
# -
filter_peaks <- function (datafr,cutoff = 0.01){
binary_mat = as.matrix((datafr > 0) + 0)
binary_mat = Matrix(binary_mat, sparse = TRUE)
num_cells_ncounted = Matrix::rowSums(binary_mat)
ncounts = binary_mat[num_cells_ncounted >= dim(binary_mat)[2]*cutoff,]
ncounts = ncounts[rowSums(ncounts) > 0,]
options(repr.plot.width=4, repr.plot.height=4)
hist(log10(num_cells_ncounted),main="No. of Cells Each Site is Observed In",breaks=50)
abline(v=log10(min(num_cells_ncounted[num_cells_ncounted >= dim(binary_mat)[2]*cutoff])),lwd=2,col="indianred")
# hist(log10(new_counts),main="Number of Sites Each Cell Uses",breaks=50)
datafr_filtered = datafr[rownames(ncounts),]
return(datafr_filtered)
}
# ### Obtain Feature Matrix
start_time <- Sys.time()
set.seed(2019)
metadata <- read.table('../../input/metadata.tsv',
header = TRUE,
stringsAsFactors=FALSE,quote="",row.names=1)
datafr_filtered <- filter_peaks(datafr)
dim(datafr_filtered)
# import counts
counts <- data.matrix(datafr_filtered)
dim(counts)
counts[1:3,1:3]
# import gene bodies; restrict to TSS
gdf <- read.table("../../input/mm9/mm9-tss.bed", stringsAsFactors = FALSE)
dim(gdf)
gdf[1:3,1:3]
tss <- data.frame(chr = gdf$V1, gene = gdf$V4, stringsAsFactors = FALSE)
tss$tss <- ifelse(gdf$V5 == "+", gdf$V3, gdf$V2)
tss$start <- ifelse(tss$tss - 50000 > 0, tss$tss - 50000, 0)
tss$stop <- tss$tss + 50000
tss_idx <- makeGRangesFromDataFrame(tss, keep.extra.columns = TRUE)
# +
# import ATAC peaks
# adf <- data.frame(fread('../../input/combined.sorted.merged.bed'))
# colnames(adf) <- c("chr", "start", "end")
adf <- data.frame(do.call(rbind,strsplit(rownames(datafr_filtered),'_')),stringsAsFactors = FALSE)
colnames(adf) <- c("chr", "start", "end")
adf$start <- as.integer(adf$start)
adf$end <- as.integer(adf$end)
dim(adf)
adf$mp <- (adf$start + adf$end)/2
atacgranges <- makeGRangesFromDataFrame(adf, start.field = "mp", end.field = "mp")
# -
# find overlap between ATAC peaks and Ranges linker
ov <- findOverlaps(atacgranges, tss_idx) #(query, subject)
options(repr.plot.width=3, repr.plot.height=3)
# plot a histogram showing peaks per gene
p1 <- qplot(table(subjectHits(ov)), binwidth = 1) + theme(plot.subtitle = element_text(vjust = 1),
plot.caption = element_text(vjust = 1)) +
labs(title = "Histogram of peaks per gene", x = "Peaks / gene", y="Frequency") + pretty_plot()
p1
# calculate distance decay for the weights
dist <- abs(mcols(tss_idx)$tss[subjectHits(ov)] - start(atacgranges)[queryHits(ov)])
exp_dist_model <- exp(-1*dist/5000)
# prepare an outcome matrix
m <- Matrix::sparseMatrix(i = c(queryHits(ov), length(atacgranges)),
j = c(subjectHits(ov), length(tss_idx)),
x = c(exp_dist_model,0))
colnames(m) <- gdf$V4 # gene name
m <- m[,which(Matrix::colSums(m) != 0)]
fm_genescoring <- data.matrix(t(m) %*% counts)
dim(fm_genescoring)
fm_genescoring[1:3,1:3]
end_time <- Sys.time()
end_time - start_time
all(colnames(fm_genescoring) == rownames(metadata))
fm_genescoring = fm_genescoring[,rownames(metadata)]
all(colnames(fm_genescoring) == rownames(metadata))
saveRDS(fm_genescoring, file = '../../output/feature_matrices/FM_GeneScoring_cusanovich2018subset.rds')
sessionInfo()
save.image(file = 'GeneScoring_cusanovich2018subset.RData')
# Source notebook: Real_Data/Cusanovich_2018_subset/run_methods/GeneScoring/GeneScoring_cusanovich2018subset.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
from pandas import read_csv
from matplotlib import pyplot as plt
from datetime import datetime
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.stattools import adfuller
import statsmodels.api as sm
from sklearn.metrics import mean_squared_error
from math import sqrt
import pandas as pd
import os
import enum
import numpy as np
from statsmodels.graphics.tsaplots import plot_pacf
from statsmodels.graphics.tsaplots import plot_acf
from tqdm import tqdm_notebook
from itertools import product
import warnings
warnings.filterwarnings('ignore')
class TrainingTimeType(enum.IntEnum):
    ONE_WEEK = 10080   # minutes in one week
    ONE_MONTH = 43200  # minutes in thirty days
# alias so existing code using the original (misspelled) name keeps working
TrainignTimeType = TrainingTimeType
class TestingTimeType(enum.IntEnum):
ONE_DAY = 1440
#Save the time series given as parameter
def save_series_to_csv(series, fileName):
    path = os.path.join("results", "SARIMA" + originFileName[:-4], seriesName)
    # create the output directories in one step (no error if they exist)
    os.makedirs(path, exist_ok=True)
    day = trainSize / 1440
    with open(os.path.join(path, str(int(day)) + "days_" + fileName), "w") as file:
        file.write(series.to_csv(header=False))
#Save the plot from pyplot
def save_plot():
    path = os.path.join("results", "SARIMA" + originFileName[:-4], seriesName)
    os.makedirs(path, exist_ok=True)
    day = trainSize / 1440
    finalPath = os.path.join(path, str(int(day)) + "days_plot.png")
    plt.savefig(finalPath, dpi=100)
#Parser for the read_csv
def parser(x):
return datetime.strptime(x, '%y-%m-%d %H:%M:%S')
# Accuracy metrics
def forecast_accuracy(forecast, actual):
mape = np.mean(np.abs(forecast - actual)/np.abs(actual)) # MAPE
corr = np.corrcoef(forecast, actual)[0,1] # corr
mins = np.amin(np.hstack([forecast[:,None],
actual[:,None]]), axis=1)
maxs = np.amax(np.hstack([forecast[:,None],
actual[:,None]]), axis=1)
minmax = 1 - np.mean(mins/maxs) # minmax
return({'mape':mape,
'corr':corr, 'minmax':minmax})
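# A quick worked check of the accuracy metrics (the function is restated here
# so this cell is self-contained; the forecast/actual values are chosen by
# hand for illustration):

```python
import numpy as np

def forecast_accuracy(forecast, actual):
    # MAPE: mean absolute percentage error
    mape = np.mean(np.abs(forecast - actual) / np.abs(actual))
    # Pearson correlation between forecast and actual
    corr = np.corrcoef(forecast, actual)[0, 1]
    # min-max error: 1 minus the mean elementwise min/max ratio
    mins = np.amin(np.hstack([forecast[:, None], actual[:, None]]), axis=1)
    maxs = np.amax(np.hstack([forecast[:, None], actual[:, None]]), axis=1)
    minmax = 1 - np.mean(mins / maxs)
    return {'mape': mape, 'corr': corr, 'minmax': minmax}

metrics = forecast_accuracy(np.array([2.0, 4.0, 6.0]), np.array([1.0, 4.0, 8.0]))
print(metrics)
```

# For these values MAPE = (1 + 0 + 0.25) / 3 and minmax = 1 - (0.5 + 1 + 0.75) / 3.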
'''
PUT HERE THE CONFIGURATION VALUES
'''
trainSize = TrainignTimeType.ONE_WEEK
testSize = TestingTimeType.ONE_DAY
originFileName = "ukdale_def4.csv"
seriesName = "Tv_Dvd_Lamp"
#main function
numbersOfRowToRead = int(trainSize) + int(testSize)
#Reading the series from the dataset file
series = read_csv("Dataset/" + originFileName,header=0,index_col=0,nrows=numbersOfRowToRead)
#print(series[seriesName].head())
plot_pacf(series[seriesName]);
plt.show()
plot_acf(series[seriesName]);
plt.show()
ad_fuller_result = adfuller(series[seriesName])
print(f'ADF Statistic: {ad_fuller_result[0]}')
print(f'p-value: {ad_fuller_result[1]}')
series[seriesName] = np.log(series[seriesName])
series[seriesName] = series[seriesName].diff()
series = series.drop(series.index[0])
plt.figure(figsize=[15, 7.5]); # Set dimensions for figure
plt.plot(series[seriesName])
ax = plt.gca()
ax.axes.xaxis.set_visible(False)
plt.title("Log difference of " + seriesName)
plt.show()
# Seasonal differencing
series[seriesName] = series[seriesName].diff(12)
series = series.dropna().reset_index(drop=True)
plt.figure(figsize=[15, 7.5]); # Set dimensions for figure
plt.plot(series[seriesName])
ax = plt.gca()
ax.axes.xaxis.set_visible(False)
plt.title("Seasonally differenced log series for " + seriesName)
plt.show()
ad_fuller_result = adfuller(series[seriesName])
print(f'ADF Statistic: {ad_fuller_result[0]}')
print(f'p-value: {ad_fuller_result[1]}')
plot_pacf(series[seriesName]);
plot_acf(series[seriesName]);
def optimize_SARIMA(parameters_list, d, D, s, exog):
"""
Return dataframe with parameters, corresponding AIC and SSE
parameters_list - list with (p, q, P, Q) tuples
d - integration order
D - seasonal integration order
s - length of season
exog - the exogenous variable
"""
results = []
for param in tqdm_notebook(parameters_list):
try:
model = SARIMAX(exog, order=(param[0], d, param[1]), seasonal_order=(param[2], D, param[3], s)).fit(disp=-1)
except:
continue
aic = model.aic
results.append([param, aic])
result_df = pd.DataFrame(results)
result_df.columns = ['(p,q)x(P,Q)', 'AIC']
#Sort in ascending order, lower AIC is better
result_df = result_df.sort_values(by='AIC', ascending=True).reset_index(drop=True)
return result_df
p = range(0, 4, 1)
d = 1
q = range(0, 4, 1)
P = range(0, 4, 1)
D = 1
Q = range(0, 4, 1)
s = 12
parameters = product(p, q, P, Q)
parameters_list = list(parameters)
print(len(parameters_list))
result_df = optimize_SARIMA(parameters_list, d, D, s, series[seriesName])
print(result_df)
"""
#Splitting the dataset into training and testing
X = series[seriesName]
train, test = X[0:trainSize], X[trainSize:trainSize+testSize]
history = [x for x in train]
predictions = list()
print("\nTraining the model...\n")
maxLen = len(test)
#creating SARIMA model
my_order = (0, 0, 0)
my_seasonal_order = (1, 1, 0, 12)
# define model
model = SARIMAX(train, order=my_order, seasonal_order=my_seasonal_order)
model_fit = model.fit()
# plot forecasts against actual outcomes
yhat = model_fit.predict(start=0, end=len(test))
#print(yhat)
predictions = list()
for value in yhat[1:]:
predictions.append(value)
print("Testing...")
fc_series = pd.Series(predictions,index=test.index)
# evaluate forecasts
print(forecast_accuracy(fc_series.values, test.values))
plt.figure(figsize=(12,5), dpi=100)
plt.plot(train, color='blue')
plt.plot(test, color='blue')
plt.plot(fc_series, color='red')
day = trainSize / 1440
plt.title(seriesName + " " + str(int(day)) + " days trained")
#pyplot.xticks(rotation=90)
ax = plt.gca()
ax.axes.xaxis.set_visible(False)
#saving date
#save_series_to_csv(train, "train.csv")
#save_series_to_csv(test, "test.csv")
#save_series_to_csv(fc_series, "predictions.csv")
#save_plot()
plt.show()
print("\nAll done!\n")
"""
# +
f = open("sarima.txt", "w")
f.write(str(result_df.values))
f.close()
"""
type(result_df)
for p in range(0, 255):
print(result_df[p])
"""
# Source notebook: sarima2Notebook.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + init_cell=true
# %logstop
# %logstart -rtq ~/.logs/DS_Intro_Statistics.py append
# %matplotlib inline
import matplotlib
import seaborn as sns
sns.set()
matplotlib.rcParams['figure.dpi'] = 144
# -
# # Introduction to Statistics
# Statistics is the study of how random variables behave in aggregate. It is also the use of that behavior to make inferences and arguments. While much of the math behind statistical calculations is rigorous and precise, its application to real data often involves making imperfect assumptions. In this notebook we'll review some fundamental statistics and pay special attention to the assumptions we make in their application.
# ## Hypothesis Testing and Parameter Estimation
# We often use statistics to describe groups of people or events; for example we compare the current temperature to the *average* temperature for the day or season, we compare a change in stock price to the *volatility* of the stock (in the language of statistics, volatility is called **standard deviation**), or we might wonder what the *average* salary of a data scientist is in a particular country. All of these questions and comparisons are rudimentary forms of statistical inference. Statistical inference often falls into one of two categories: hypothesis testing or parameter estimation.
#
# Examples of hypothesis testing are:
# - Testing if an increase in a stock's price is significant or just random chance
# - Testing if there is a significant difference in salaries between employees with and without advanced degrees
# - Testing whether there is a significant correlation between the amount of money a customer spent at a store and which advertisements they'd been shown
#
# Examples of parameter estimation are:
# - Estimating the average annual return of a stock
# - Estimating the variance of salaries for a particular job across companies
# - Estimating the correlation coefficient between annual advertising budget and annual revenue
#
# We'll explore the processes of statistical inference by considering the example of salaries with and without advanced degrees.
#
# **Exercise:** Decide for each example given in the first sentence whether it is an example of hypothesis testing or parameter estimation.
# ## Estimating the Mean
# Suppose that we know from a prior study that employees with advanced degrees in the USA make on average $70k. To answer the question "do people without advanced degrees earn significantly less than people with advanced degrees?" we must first estimate how much people without advanced degrees earn on average.
#
# To do that, we will have to collect some data. Suppose we take a representative, unbiased sample of 1000 employed adults without advanced degrees and learn their salaries. To estimate the mean salary of people without advanced degrees, we simply calculate the mean of this sample:
#
# $$ \overline X = \frac{1}{n} \sum_{k=1}^n X_k. $$
#
# Let's write some code that will simulate sampling some salaries for employees without advanced degrees.
# +
import scipy as sp
import scipy.stats  # explicitly load the stats subpackage used below
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, IntSlider
salaries = sp.stats.lognorm(1, loc=20, scale=25)
def plot_sample(dist):
def plotter(size):
X = dist.rvs(size=size)
ys, bins, _ = plt.hist(X, bins=20, density=True)
plt.ylim([0, ys.max() / (ys * (bins[1] - bins[0])).sum() * 1.25])
plt.axvline(dist.mean(), color='r', label='true mean')
plt.axvline(X.mean(), color='g', label='sample mean')
plt.plot(np.arange(20, 100, .01), salaries.pdf(np.arange(20, 100, .01)), 'k--')
plt.legend()
return plotter
# -
sample_size_slider = IntSlider(min=10, max=200, step=10, value=10, description='sample size')
interact(plot_sample(salaries), size=sample_size_slider)
# ## Standard Error of the Mean
# Notice that each time we run the code to generate the plot above, we draw a different sample. While the "true" mean remains fixed, the sample mean changes as we draw new samples. In other words, our estimate (the sample mean) of the true mean is noisy and has some error. How noisy is it? How much does it typically differ from the true mean? *What is the **standard deviation** of the sample mean from the true mean*?
#
# Let's take many samples and make a histogram of the sample means to visualize the typical difference between the sample mean and the true mean.
def plot_sampling_dist(dist):
def plotter(sample_size):
means = np.array([dist.rvs(size=sample_size).mean() for _ in range(300)]) - dist.mean()
plt.hist(means, bins=20, density=True, label='sample means')
# plot central limit theorem distribution
Xs = np.linspace(means.min(), means.max(), 1000)
plt.plot(Xs, sp.stats.norm.pdf(Xs, scale=np.sqrt(dist.var()/sample_size)), 'k--',
label='central limit theorem')
plt.legend()
return plotter
sample_size_slider = IntSlider(min=10, max=500, step=10, value=10, description='sample size')
interact(plot_sampling_dist(salaries),
sample_size=sample_size_slider)
# As we increase the size of our samples, the distribution of sample means comes to resemble a normal distribution. In fact this occurs regardless of the underlying distribution of individual salaries. This phenomenon is described by the Central Limit Theorem, which states that as the sample size increases, the sample mean will tend to follow a normal distribution with a standard deviation
#
# $$ \sigma_{\overline X} = \sqrt{\frac{\sigma^2}{n}}.$$
#
# This quantity is called the **standard error**, and it quantifies the standard deviation of the sample mean from the true mean.
#
# **Exercise:** In your own words, explain the difference between the standard deviation and the standard error of salaries in our example.
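# A numerical illustration (an aside): the empirical spread of sample means
# shrinks like $\sigma/\sqrt{n}$, matching the formula above. The values
# $\mu = 70$ and $\sigma = 12$ are made up for this demo.

```python
import numpy as np

rng = np.random.default_rng(0)
true_sigma = 12.0

def se_of_mean(sample_size, n_trials=2000):
    # empirical standard deviation of the sample mean across many trials
    means = rng.normal(70, true_sigma, size=(n_trials, sample_size)).mean(axis=1)
    return means.std()

for n in (25, 100, 400):
    # empirical standard error vs. the theoretical sigma / sqrt(n)
    print(n, se_of_mean(n), true_sigma / np.sqrt(n))
```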
# ## Hypothesis Testing and z-scores
# Now that we can calculate how much we may typically expect the sample mean to differ from the true mean by random chance, we can perform a **hypothesis test**. In hypothesis testing, we assume that the true mean is a known quantity. We then collect a sample and calculate the difference between the sample mean and the assumed true mean. If this difference is large compared to the standard error (i.e. the typical difference we might expect to arise from random chance), then we conclude that the true mean is unlikely to be the value that we had assumed. Let's be more precise with our example.
#
# 1. Suppose that we know from a prior study that employees with advanced degrees in the USA make on average \$70k. Our **null hypothesis** will be that employees without advanced degrees make the same salary: $H_0: \mu = 70$. We will also choose a threshold of significance for our evidence. In order to decide that our null hypothesis is wrong, we must find evidence that would have less than a certain probability $\alpha$ of occurring due to random chance.
mu = 70
# 2. Next we collect a sample of salaries from $n$ employees without advanced degrees and calculate the mean of the sample salaries. Below we'll sample 100 employees.
sample_salaries = salaries.rvs(size=100)
print('Sample mean: {}'.format(sample_salaries.mean()))
# 3. Now we compare the difference between the sample mean and the assumed true mean to the standard error. This quantity is called a **z-score**.
#
# $$ z = \frac{\overline X - \mu}{\sigma / \sqrt{n}} $$
z = (sample_salaries.mean() - mu) / np.sqrt(salaries.var() / sample_salaries.size)
print('z-score: {}'.format(z))
# 4. The z-score can be used with the standard normal distribution (due to the Central Limit Theorem) to calculate the probability that the difference between the sample mean and the null hypothesis is due only to random chance. This probability is called a **p-value**.
p = sp.stats.norm.cdf(z)  # lower-tail (one-sided) p-value
print('p-value: {}'.format(p))
# +
plt.subplot(211)
stderr = np.sqrt(salaries.var() / sample_salaries.size)
Xs = np.linspace(mu - 3*stderr, mu + 3*stderr, 1000)
clt = sp.stats.norm.pdf(Xs, loc=mu, scale=stderr)
plt.plot(Xs, clt, 'k--',
label='central limit theorem')
plt.axvline(sample_salaries.mean(), color='b', label='sample mean')
plt.fill_between(Xs[Xs < mu - 2*stderr], 0, clt[Xs < mu - 2*stderr], color='r', label='critical region')
plt.legend()
plt.subplot(212)
Xs = np.linspace(-3, 3, 1000)
normal = sp.stats.norm.pdf(Xs)
plt.plot(Xs, normal, 'k--', label='standard normal distribution')
plt.axvline(z, color='b', label='z-score')
plt.fill_between(Xs[Xs < -2], 0, normal[Xs < -2], color='r', label='critical region')
plt.legend()
# -
# 5. If our p-value is less than $\alpha$ then we can reject the null hypothesis; since we found evidence that was very unlikely to arise by random chance, it must be that our initial assumption about the value of the true mean was wrong.
#
# This is a very simplified picture of hypothesis testing, but the central idea can be a useful tool outside of the formal hypothesis testing framework. By calculating the difference between an observed quantity and the value we would expect, and then comparing this difference to our expectation for how large the difference might be due to random chance, we can quickly make intuitive judgments about quantities that we have measured or calculated.
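# The steps above can be wrapped in a small reusable helper. A sketch using only the standard library; the numbers in the example call are made up for illustration:

```python
from statistics import NormalDist

def z_test(sample_mean, mu0, sigma, n):
    # Lower-tail z-test: p-value for H0: mu = mu0 against H1: mu < mu0.
    z = (sample_mean - mu0) / (sigma / n ** 0.5)
    p = NormalDist().cdf(z)
    return z, p

# e.g. sample mean of 65 against an assumed true mean of 70,
# with population standard deviation 20 and a sample of 100
z, p = z_test(sample_mean=65.0, mu0=70.0, sigma=20.0, n=100)
print(z, p)  # z = -2.5, p ≈ 0.0062
```

With `p` below a significance threshold of $\alpha = 0.05$, we would reject the null hypothesis in this illustrative case.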
# ## Confidence Intervals
# We can also use the Central Limit Theorem to help us perform parameter estimation. Using our sample mean, we estimate the average salary of employees without advanced degrees. However, we also know that this estimate deviates somewhat from the true mean due to the randomness of our sample. Therefore we should put probabilistic bounds on our estimate. We can again use the standard error to help us calculate this probability.
# +
print("Confidence interval (95%) for average salary: ({:.2f}, {:.2f})".format(sample_salaries.mean() - 2 * stderr,
                                                                              sample_salaries.mean() + 2 * stderr))
Xs = np.linspace(sample_salaries.mean() - 3*stderr,
sample_salaries.mean() + 3*stderr,
1000)
ci = sp.stats.norm.pdf(Xs, loc=sample_salaries.mean(), scale=stderr)
plt.plot(Xs, ci, 'k--',
label='confidence interval pdf')
plt.fill_between(Xs[(Xs > sample_salaries.mean() - 2*stderr) & (Xs < sample_salaries.mean() + 2*stderr)],
                 0,
                 ci[(Xs > sample_salaries.mean() - 2*stderr) & (Xs < sample_salaries.mean() + 2*stderr)],
                 color='r', label='confidence interval')
plt.legend(loc='upper right')
# -
# *Copyright © 2020 The Data Incubator. All rights reserved.*
# DS_Intro_Statistics.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Artificial Intelligence Nanodegree
#
# ## Convolutional Neural Networks
#
# ---
#
# In this notebook, we train an MLP to classify images from the CIFAR-10 database.
#
# ### 1. Load CIFAR-10 Database
# +
import keras
from keras.datasets import cifar10
# load the pre-shuffled train and test data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# ### 2. Visualize the First 36 Training Images
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
fig = plt.figure(figsize=(20,5))
for i in range(36):
ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_train[i]))
# -
# ### 3. Rescale the Images by Dividing Every Pixel in Every Image by 255
# +
# rescale [0,255] --> [0,1]
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
# one-hot encode the labels
num_classes = len(np.unique(y_train))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# -
# ### 4. Break Dataset into Training, Testing, and Validation Sets
# +
# break training set into training and validation sets
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]
# print shape of training set
print('x_train shape:', x_train.shape)
print('y_train shape:', y_train.shape)
# print number of training, validation, and test images
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print(x_valid.shape[0], 'validation samples')
# -
# ### 5. Define the Model Architecture
# +
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
# define the model
model = Sequential()
model.add(Flatten(input_shape = x_train.shape[1:]))
model.add(Dense(1000, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
# -
# ### 6. Compile the Model
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
# ### 7. Train the Model
# +
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath='MLP.weights.best.hdf5', verbose=1,
save_best_only=True)
hist = model.fit(x_train, y_train, batch_size=32, epochs=20,
validation_data=(x_valid, y_valid), callbacks=[checkpointer],
verbose=2, shuffle=True)
# -
# ### 8. Load the Model with the Best Classification Accuracy on the Validation Set
# load the weights that yielded the best validation accuracy
model.load_weights('MLP.weights.best.hdf5')
# ### 9. Calculate Classification Accuracy on Test Set
# evaluate and print test accuracy
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])
# cifar10-classification/cifar10_mlp.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to the Quantum Bit
# ### Where we'll explore:
# * **Quantum Superposition**
# * **Quantum Entanglement**
# * **Running experiments on a laptop-hosted simulator**
# * **Running experiments on a real quantum computer**
#
# ### <NAME>
# ### SDE, Zonar Systems
# github.com/brandonwarren/intro-to-qubit contains this Jupyter notebook and installation tips.
import py_cas_slides as slides
# real 6-qubit quantum computer, incl interface electronics
slides.system()
# +
# import QISkit, define function to set backend that will execute our circuits
HISTO_SIZE = (9,4) # width, height in inches
CIRCUIT_SIZE = 1.0 # scale (e.g. 0.5 is half-size)
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute
from qiskit import BasicAer as Aer
from qiskit.tools.visualization import plot_histogram
from qiskit import __qiskit_version__
print(__qiskit_version__)
def set_backend(use_simulator: bool, n_qubits: int, preferred_backend: str=''):
if use_simulator:
backend = Aer.get_backend('qasm_simulator')
else:
from qiskit import IBMQ
provider = IBMQ.load_account()
if preferred_backend:
# use backend specified by caller
backend = provider.get_backend(preferred_backend)
print(f"Using {backend.name()}")
else:
# use least-busy backend that has enough qubits
from qiskit.providers.ibmq import least_busy
large_enough_devices = provider.backends(filters=lambda x: x.configuration().n_qubits >= n_qubits and not x.configuration().simulator)
backend = least_busy(large_enough_devices)
print(f"The best backend is {backend.name()}")
return backend
def add_missing_keys(counts):
# we want all keys present in counts, even if they are zero value
for key in ['00', '01', '10', '11']:
if key not in counts:
counts[key] = 0
# -
# use simulator for now
backend = set_backend(use_simulator=True, n_qubits=2)
# write code to build this quantum circuit
# logic flows left to right
# quantum bits begin in ground state (zero)
# measurement copies result to classical bit
slides.simple_2qubits() # simplest possible 2-qubit circuit
# +
# 1. Build simplest possible 2-qubit quantum circuit and draw it
q_reg = QuantumRegister(2, 'q') # the 2 qubits we'll be using
c_reg = ClassicalRegister(2, 'c')  # classical bits to hold results of measurements
circuit = QuantumCircuit(q_reg, c_reg) # begin circuit - just 2 qubits and 2 classical bits
# measure while still in ground state
circuit.measure(q_reg, c_reg) # measure qubits, place results in classical bits
# circuit is now complete
circuit.draw(output='mpl', scale=CIRCUIT_SIZE)
# -
# run it 1000 times on simulator
result = execute(circuit, backend=backend, shots=1000).result()
counts = result.get_counts(circuit)
print(counts)
add_missing_keys(counts)
print(counts)
plot_histogram(counts, figsize=HISTO_SIZE)
# +
# 2. Apply X gate (NOT gate) to high qubit (q1)
q_reg = QuantumRegister(2, 'q')
c_reg = ClassicalRegister(2, 'c')
circuit = QuantumCircuit(q_reg, c_reg)
###### apply X gate to high qubit ######
circuit.x(q_reg[1])
circuit.measure(q_reg, c_reg)
circuit.draw(output='mpl', scale=CIRCUIT_SIZE)
# -
# run it 1000 times on simulator
result = execute(circuit, backend=backend, shots=1000).result()
counts = result.get_counts(circuit)
print(counts)
add_missing_keys(counts)
plot_histogram(counts, figsize=HISTO_SIZE)
# +
# We've seen the two simplest quantum circuits possible.
# Let's take it up a notch and place each qubit into a quantum superposition.
# # ?
slides.super_def()
# -
# Like you flip a coin - while it is spinning it is H and T.
# When you catch it, it is H or T.
# BUT: it is as if it was that way all along.
# What's the difference between that, and a coin under a
# piece of paper that is revealed?
slides.feynman_quote()
slides.double_slit()
# (2)
# +
# Like the photon that is in 2 places at once, the qubit can
# be in 2 states at once, and become 0 or 1 when it is measured.
# Let's place our 2 qubits in superposition and measure them.
# The act of measurement collapses the superposition,
# resulting in 1 of the 2 possible values.
# H - Hadamard will turn our 0 into a superposition of 0 and 1.
# It rotates the state of the qubit.
# (coin over table analogy)
# 3. Apply H gate to both qubits
q_reg = QuantumRegister(2, 'q')
c_reg = ClassicalRegister(2, 'c')
circuit = QuantumCircuit(q_reg, c_reg)
###### apply H gate to both qubits ######
circuit.h(q_reg[0])
circuit.h(q_reg[1])
circuit.measure(q_reg, c_reg)
circuit.draw(output='mpl', scale=CIRCUIT_SIZE)
# -
# histo - 2 bits x 2 possibilities = 4 combinations of equal probability
result = execute(circuit, backend=backend, shots=1000).result()
counts = result.get_counts(circuit)
print(counts)
add_missing_keys(counts)
plot_histogram(counts, figsize=HISTO_SIZE)
# TRUE random numbers! (when run on real device)
# Special case of superposition, entanglement, revealed by EPR expmt
slides.mermin_quote()
# Before we get to that, I'd like to set the stage by introducing
# two concepts: locality and hidden variables.
# The principle of locality says that for one thing to affect
# another, they have to be in the same location, or need some
# kind of field or signal connecting the two, with
# the fastest possible propagation speed being that of light.
# This even applies to gravity, which propagates at the speed of light.
# [We are 8 light-minutes from the Sun, so if the Sun all of a
# sudden vanished somehow, we would still orbit for another 8 min.]
#
# Even though Einstein helped launch the new field of QM, he never
# really liked it. In particular, he couldn't accept the randomness.
slides.einstein_dice()
slides.bohr_response()
# (3)
slides.epr_nyt()
# (4)
slides.einstein_vs_bohr()
# +
# [Describe entanglement using coins odd,even]
# 4. Entanglement - even-parity
q_reg = QuantumRegister(2, 'q')
c_reg = ClassicalRegister(2, 'c')
circuit = QuantumCircuit(q_reg, c_reg)
###### place q[0] in superposition ######
circuit.h(q_reg[0])
###### CNOT gate - control=q[0] target=q[1] - places into even-parity Bell state
# Target is inverted if control is true
circuit.cx(q_reg[0], q_reg[1])
circuit.measure(q_reg, c_reg)
circuit.draw(output='mpl', scale=CIRCUIT_SIZE)
# -
result = execute(circuit, backend=backend, shots=1000).result()
counts = result.get_counts(circuit)
print(counts)
add_missing_keys(counts)
plot_histogram(counts, figsize=HISTO_SIZE)
# +
# 5. Entanglement - odd-parity
q_reg = QuantumRegister(2, 'q')
c_reg = ClassicalRegister(2, 'c')
circuit = QuantumCircuit(q_reg, c_reg)
###### place q[0] in superposition ######
circuit.h(q_reg[0])
###### CNOT gate - control=q[0] target=q[1] - places into even-parity Bell state
# Target is inverted if control is true
circuit.cx(q_reg[0], q_reg[1])
# a 0/1 superposition is converted to a 1/0 superposition
# i.e. rotates state 180 degrees
# creates odd-parity entanglement
circuit.x(q_reg[0])
circuit.measure(q_reg, c_reg)
circuit.draw(output='mpl', scale=CIRCUIT_SIZE)
# -
result = execute(circuit, backend=backend, shots=1000).result()
counts = result.get_counts(circuit)
print(counts)
add_missing_keys(counts)
plot_histogram(counts, figsize=HISTO_SIZE)
# (5)
slides.Bell_CHSH_inequality()
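# For reference: local hidden-variable theories bound C = <AB> + <AB'> + <A'B> - <A'B'>
# by 2, while for the even-parity Bell state quantum mechanics gives
# <AB> = cos(angle_A - angle_B) for measurement axes in the Z-X plane,
# which reaches 2*sqrt(2) ≈ 2.83 for the angles used below.
# A quick numeric check, assuming ideal noiseless measurements:

```python
import math

def correlation(theta_a_deg, theta_b_deg):
    # <AB> for the even-parity Bell state, measurement axes in the Z-X plane
    return math.cos(math.radians(theta_a_deg - theta_b_deg))

# A=Z=0°, A'=X=90°, B=W=45°, B'=V=-45°
C = (correlation(0, 45)       # <AB>   = <ZW>
     + correlation(0, -45)    # <AB'>  = <ZV>
     + correlation(90, 45)    # <A'B>  = <XW>
     - correlation(90, -45))  # -<A'B'> = -<XV>
print(C)  # 2*sqrt(2) ≈ 2.828, above the classical bound of 2
```

A real device, with noise, lands somewhere between 2 and 2.83 — which is what the experiment below measures.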
# Let's run the Bell expmt on a real device.
# This will not be a simulation!
# backend = set_backend(use_simulator=False, n_qubits=2) # 1st avail is RISKY
backend = set_backend(use_simulator=False, n_qubits=2, preferred_backend='ibmq_ourense')
# +
# [quickly: draw circuits, execute, then go over code and circuits]
# 6. Bell experiment
import numpy as np
# Define the Quantum and Classical Registers
q = QuantumRegister(2, 'q')
c = ClassicalRegister(2, 'c')
# create Bell state
bell = QuantumCircuit(q, c)
bell.h(q[0]) # place q[0] in superposition
bell.cx(q[0], q[1]) # CNOT gate - control=q[0] target=q[1] - places into even-parity Bell state
# setup measurement circuits
# ZZ not used for Bell inequality, but interesting for real device (i.e. not perfect)
meas_zz = QuantumCircuit(q, c)
meas_zz.barrier()
meas_zz.measure(q, c)
# ZW: A=Z=0° B=W=45°
meas_zw = QuantumCircuit(q, c)
meas_zw.barrier()
meas_zw.s(q[1])
meas_zw.h(q[1])
meas_zw.t(q[1])
meas_zw.h(q[1])
meas_zw.measure(q, c)
# ZV: A=Z=0° B=V=-45°
meas_zv = QuantumCircuit(q, c)
meas_zv.barrier()
meas_zv.s(q[1])
meas_zv.h(q[1])
meas_zv.tdg(q[1])
meas_zv.h(q[1])
meas_zv.measure(q, c)
# XW: A=X=90° B=W=45°
meas_xw = QuantumCircuit(q, c)
meas_xw.barrier()
meas_xw.h(q[0])
meas_xw.s(q[1])
meas_xw.h(q[1])
meas_xw.t(q[1])
meas_xw.h(q[1])
meas_xw.measure(q, c)
# XV: A=X=90° B=V=-45° - instead of being 45° diff,
# they are 90°+45°=135° = 180°-45°,
# which is why the correlation is negative and we negate it
# before adding it to the rest of the correlations.
meas_xv = QuantumCircuit(q, c)
meas_xv.barrier()
meas_xv.h(q[0])
meas_xv.s(q[1])
meas_xv.h(q[1])
meas_xv.tdg(q[1])
meas_xv.h(q[1])
meas_xv.measure(q, c)
# build circuits
circuits = []
labels = []
ab_labels = []
circuits.append(bell + meas_zz)
labels.append('ZZ')
ab_labels.append("") # not used
circuits.append(bell + meas_zw)
labels.append('ZW')
ab_labels.append("<AB>")
circuits.append(bell + meas_zv)
labels.append('ZV')
ab_labels.append("<AB'>")
circuits.append(bell + meas_xw)
labels.append('XW')
ab_labels.append("<A'B>")
circuits.append(bell + meas_xv)
labels.append('XV')
ab_labels.append("<A'B'>")
print("Circuit to measure ZZ (A=Z=0° B=Z=0°) - NOT part of Bell expmt")
circuits[0].draw(output='mpl', scale=CIRCUIT_SIZE)
# -
print("Circuit to measure ZW (A=Z=0° B=W=45°)")
print("The gates to the right of the vertical bar rotate the measurement axis.")
circuits[1].draw(output='mpl', scale=CIRCUIT_SIZE)
print("Circuit to measure ZV (A=Z=0° B=V=-45°)")
circuits[2].draw(output='mpl', scale=CIRCUIT_SIZE)
print("Circuit to measure XW (A=X=90° B=W=45°)")
circuits[3].draw(output='mpl', scale=CIRCUIT_SIZE)
print("Circuit to meas XV (A=X=90° B=V=-45°) (negative correlation)")
circuits[4].draw(output='mpl', scale=CIRCUIT_SIZE)
# +
# execute, then review while waiting
from datetime import datetime, timezone
import time
# execute circuits
shots = 1024
job = execute(circuits, backend=backend, shots=shots)
print('after call execute()')
if backend.name() != 'qasm_simulator':
try:
info = None
max_tries = 3
while max_tries>0 and not info:
time.sleep(1) # need to wait a little bit before calling queue_info()
info = job.queue_info()
print(f'queue_info: {info}')
max_tries -= 1
now_utc = datetime.now(timezone.utc)
print(f'\njob status: {info._status} as of {now_utc.strftime("%H:%M:%S")} UTC')
print(f'position: {info.position}')
print(f'estimated start time: {info.estimated_start_time.strftime("%H:%M:%S")}')
print(f'estimated complete time: {info.estimated_complete_time.strftime("%H:%M:%S")}')
wait_time = info.estimated_complete_time - now_utc
wait_min, wait_sec = divmod(wait_time.seconds, 60)
print(f'estimated wait time is {wait_min} minutes {wait_sec} seconds')
except Exception as err:
print(f'error getting job info: {err}')
result = job.result() # blocks until complete
print(f'job complete as of {datetime.now(timezone.utc).strftime("%H:%M:%S")} UTC')
# gather data
counts = []
for i, label in enumerate(labels):
circuit = circuits[i]
data = result.get_counts(circuit)
counts.append(data)
# show counts of Bell state measured in Z-axis
print('\n', labels[0], counts[0], '\n')
# show histogram of Bell state measured in Z-axis
# real devices are not yet perfect. due to noise.
add_missing_keys(counts[0])
plot_histogram(counts[0], figsize=HISTO_SIZE)
# +
# tabular output
print(' (+) (+) (-) (-)')
print(' P(00) P(11) P(01) P(10) correlation')
C = 0.0
for i in range(1, len(labels)):
AB = 0.0
print(f'{labels[i]} ', end ='')
N = 0
    for out in ('00', '11', '01', '10'):
        P = counts[i].get(out, 0)/float(shots)
        N += counts[i].get(out, 0)
if out in ('00', '11'):
AB += P
else:
AB -= P
print(f'{P:.3f} ', end='')
if N != shots:
print(f'ERROR: N={N} shots={shots}')
print(f'{AB:6.3f} {ab_labels[i]}')
if labels[i] == 'XV':
# the negative correlation - make it positive before summing it
C -= AB
else:
C += AB
print(f"\nC = <AB> + <AB'> + <A'B> - <A'B'>")
print(f' = <ZW> + <ZV> + <XW> - <XV>')
print(f' = {C:.2f}\n')
if C <= 2.0:
print("Einstein: 1 Quantum theory: 0")
else:
print("Einstein: 0 Quantum theory: 1")
# -
# ## Superposition and entanglement main points
# * Superposition is demonstrated by the double-slit experiment: the interference pattern forms even when photons are sent through one at a time, which suggests that each photon passes through both slits at once and interferes with itself.
#
# * Hidden variable theories seek to provide determinism to quantum physics.
#
# * The principle of locality states that an influence of one particle on another cannot propagate faster than the speed of light.
#
# * Entanglement cannot be explained by local hidden variable theories.
#
# ## Summary
# * Two of the strangest concepts in quantum physics, superposition and entanglement, are used in quantum computing, and are waiting to be explored by you.
#
# * You can run simple experiments on your laptop, and when you're ready, run them on a real quantum computer, over the cloud, for free.
#
# * IBM's qiskit.org contains software, tutorials, and an active Slack community.
#
# * My Github repo includes this presentation, tips on installing IBM's Qiskit on your laptop, and links for varying levels of explanation of superposition and entanglement:
# github.com/brandonwarren/intro-to-qubit
#
# talk-executed.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Working with Streaming Data
#
# Learning Objectives
# 1. Learn how to process real-time data for ML models using Cloud Dataflow
# 2. Learn how to serve online predictions using real-time data
#
# ## Introduction
#
# It can be useful to leverage real time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial.
#
# Typically you will have the following:
# - A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis)
# - A messaging bus to that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub)
# - A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow)
# - A persistent store to keep the processed data (in our case this is BigQuery)
#
# These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below.
#
# Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below.
#
# <img src='../assets/taxi_streaming_data.png' width='80%'>
#
#
# In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of `trips_last_5min` data as an additional feature. This is our proxy for real-time traffic.
#
#
# +
import os
import googleapiclient.discovery
import shutil
from google.cloud import bigquery
# -
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
# ## Re-train our model with `trips_last_5min` feature
#
# In this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook `train.ipynb`. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for `trips_last_5min` in the model and the dataset.
# ## Simulate Real Time Taxi Data
#
# Since we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.
#
# Inspect the `iot_devices.py` script in the `taxicab_traffic` folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery.
#
# In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub.
#
# To execute the iot_devices.py script, launch a terminal and navigate to the `training-data-analyst/courses/machine_learning/production_ml` directory. Then run the following two commands.
# ```bash
# PROJECT_ID=$(gcloud config list project --format "value(core.project)")
# python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID
# ```
# You will see new messages being published every 5 seconds. **Keep this terminal open** so it continues to publish events to the Pub/Sub topic. If you open [Pub/Sub in your Google Cloud Console](https://console.cloud.google.com/cloudpubsub/topic/list), you should be able to see a topic called `taxifares`.
# ## Create a BigQuery table to collect the processed data
#
# In the next section, we will create a Dataflow pipeline to write processed taxifare data to a BigQuery table; however, that table does not yet exist. Execute the following commands to create a BigQuery dataset called `taxifare` and, within it, a table called `traffic_realtime`.
# +
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
    bq.create_dataset(dataset)  # will fail if dataset already exists
    print("Dataset created.")
except Exception:
    print("Dataset already exists.")
# -
# Next, we create a table called `traffic_realtime` and set up the schema.
# +
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
    bq.create_table(table)
    print("Table created.")
except Exception:
    print("Table already exists.")
# -
# ## Launch Streaming Dataflow Pipeline
#
# Now that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.
#
# The pipeline is defined in `./taxicab_traffic/streaming_count.py`. Open that file and inspect it.
#
# There are 5 transformations being applied:
# - Read from PubSub
# - Window the messages
# - Count number of messages in the window
# - Format the count for BigQuery
# - Write results to BigQuery
#
# TODO 1: The second transform in `./taxicab_traffic/streaming_count.py` is left as a TODO: specify a sliding window that is 5 minutes long and is recalculated every 15 seconds.
# Hint: Reference the [beam programming guide](https://beam.apache.org/documentation/programming-guide/#windowing) for guidance. To check your answer reference the solution.
#
# For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds.
#
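# To make the windowing concrete, here is a plain-Python sketch of sliding-window semantics (illustrative only — in the pipeline itself this is expressed with Beam's `SlidingWindows` transform, not hand-rolled):

```python
def sliding_windows(event_time, size=300, period=15):
    # Return the [start, end) bounds, in seconds, of every sliding
    # window that contains event_time: windows are `size` seconds
    # long and a new one starts every `period` seconds.
    latest_start = (event_time // period) * period
    starts = range(latest_start, latest_start - size, -period)
    return [(s, s + size) for s in starts if s <= event_time < s + size]

# A trip message arriving at t=100s falls into 300/15 = 20
# overlapping 5-minute windows.
windows = sliding_windows(100)
print(len(windows))  # 20
```

Each message is therefore counted in 20 windows, and a fresh 5-minute count is emitted every 15 seconds — which is why a new row appears in BigQuery every 15 seconds.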
# In a new terminal, launch the dataflow pipeline using the command below. You can change the `BUCKET` variable, if necessary. Here it is assumed to be your `PROJECT_ID`.
# ```bash
# PROJECT_ID=$(gcloud config list project --format "value(core.project)")
# BUCKET=$PROJECT_ID # CHANGE AS NECESSARY
# python3 ./taxicab_traffic/streaming_count.py \
# --input_topic taxi_rides \
# --runner=DataflowRunner \
# --project=$PROJECT_ID \
# --temp_location=gs://$BUCKET/dataflow_streaming
# ```
# Once you've submitted the command above you can examine the progress of that job in the [Dataflow section of Cloud console](https://console.cloud.google.com/dataflow).
# ## Explore the data in the table
# After a few moments, you should also see new data written to your BigQuery table as well.
#
# Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.
# %load_ext google.cloud.bigquery
# %%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
# ## Make predictions from the new data
#
# In the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the `train.ipynb` notebook.
#
# The `add_traffic_last_5min` function below will query the `traffic_realtime` table to find the most recent traffic information and add that feature to our instance for prediction.
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
query_string = """
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 1
"""
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = int(trips)
return instance
# The `traffic_realtime` table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the `traffic_last_5min` feature added to the instance and change over time.
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
# Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that the resulting predictions change over time as the realtime traffic information changes.
# +
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = 'taxifare'
VERSION_NAME = 'traffic'
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT,
MODEL_NAME,
VERSION_NAME)
instance = add_traffic_last_5min({'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
response = service.projects().predict(
name=name,
body={'instances': [instance]}
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
print(response['predictions'][0]['output_1'][0])
# -
# Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# courses/machine_learning/deepdive2/building_production_ml_systems/solutions/4b_streaming_data_inference.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Installing twarc and accessing the API
# + tags=[]
pip install twarc
# + tags=[]
from twarc import Twarc
# -
# Alternatives
# * [tweepy](https://www.tweepy.org/) (Python)
# * [twitteR](https://www.rdocumentation.org/packages/twitteR/versions/1.1.9) (R)
# * Using twarc from the command line [https://twarc-project.readthedocs.io/en/latest/](https://twarc-project.readthedocs.io/en/latest/).
# +
# NOTE: these are my personal credentials. For this code to work on your
# computer, you will need a file named "API_credentials.txt" in the same
# directory as this script, with the information stored in the form
# access_token=WWW
# access_token_secret=XXX
# consumer_key=YYY
# consumer_secret=ZZZ
credentials = {}
with open('API_credentials.txt', 'r') as f:
for line in f:
credentials[line.split('=')[0]] = line.split('=')[1].strip('\n')
access_token = credentials['access_token']
access_token_secret = credentials['access_token_secret']
consumer_key = credentials['consumer_key']
consumer_secret = credentials['consumer_secret']
# -
# alternatively, you can also paste the info from the app you just created here
# see https://developer.twitter.com/en/portal/projects-and-apps
access_token = 'WWW'
access_token_secret = 'XXX'
consumer_key = 'YYY'
consumer_secret = 'ZZZ'
# # Different endpoints
# Have a look at the [documentation](https://twarc-project.readthedocs.io/en/latest/api/client/#twarc.client) for additional info and more endpoints!
# ## Search
# +
# instantiate a Twarc client with your API access credentials
t = Twarc(consumer_key, consumer_secret, access_token, access_token_secret)
# empty list to store the search results
tweets = []
# tweets we look for should contain the following search string
search_string = '#Göttingen'
# search Twitter for Tweets containing the search string and store all the
# results in the list
for tweet in t.search(search_string):
tweets.append(tweet)
# -
tweets[0]['full_text']
# ## Timeline
tweets = []
# instead of the "search" endpoint, we now use the "timeline" endpoint to
# retrieve all Tweets by a given user (identified by their user name)
for tweet in t.timeline(screen_name='janalasser'):
tweets.append(tweet)
tweets[0]['full_text']
# ## Followers
followers = []
# we can use the "followers" endpoint to get the user ids of all followers of
# a given user
for follower_id in t.follower_ids('janalasser'):
followers.append(follower_id)
followers[0]
# ## User lookup
users = []
# given a list of user IDs, we can retrieve their user profile information by
# using the "user lookup" endpoint
for user in t.user_lookup(followers[0:10]):
users.append(user)
users[0]
# # Data fields
# The API returns JSON objects which are parsed as dictionaries in Python.
# Dictionaries contain pairs of (key, value), where "key" is the name of a
# "data field", such as "id" for the Tweet ID, and "value" contains the value
# of the specific data field
tweets[0]['id']
tweets[0].keys()
# Different API endpoints return different JSON objects, depending on whether
# they return Tweet or User objects
users[0].keys()
# # API limitations
# **Standard access with V1.1 API**
# * Rate limits (see detailed info for the [GET endpoint](https://developer.twitter.com/en/docs/twitter-api/v1/rate-limits))
# * Example: rate limit on the search endpoint is 180 requests / 15 min. Every request can return a maximum of 100 tweets. Therefore you can download a maximum of 72000 Tweets / hour.
# * Only tweets from the last 7 days accessible -> look into streaming tweets if you want more.
#
# **V2 API & academic access**
# * Full archival search
# * 10 million tweets / month
# * ```counts``` endpoint (very useful!)
# * See [documentation](https://developer.twitter.com/en/docs/twitter-api/early-access) for more info
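# As a quick sanity check on the rate-limit arithmetic quoted above (the variable names below are ours, not part of the Twitter API):

```python
# v1.1 search endpoint: 180 requests per 15-minute window, up to 100 tweets
# per request, and four 15-minute windows per hour
requests_per_window = 180
tweets_per_request = 100
windows_per_hour = 60 // 15

tweets_per_hour = requests_per_window * tweets_per_request * windows_per_hour
print(tweets_per_hour)  # 72000
```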
# # A "counts" example
# +
credentials_V2 = {}
with open('API_credentials_V2.txt', 'r') as f:
for line in f:
credentials_V2[line.split('=')[0]] = line.split('=')[1].strip('\n')
bearer_token = credentials_V2['bearer_token']
# -
from twarc import Twarc2
from datetime import datetime
tV2 = Twarc2(bearer_token=bearer_token)
day_count = []
start = datetime.strptime('2021-01-01', '%Y-%m-%d')
end = datetime.strptime('2021-09-24', '%Y-%m-%d')
search_string = '#btw21'
for c in tV2.counts_all(search_string, start_time=start, end_time=end, granularity='day'):
day_count.extend(c['data'])
day_count[0:10]
# !pip install pandas
# +
import pandas as pd
# DataFrame.append was removed in pandas 2.0; build the frame from the list directly
counts = pd.DataFrame(day_count)
counts.head(3)
# -
counts = counts.sort_values(by='start')
counts['start'] = pd.to_datetime(counts['start'])
counts['end'] = pd.to_datetime(counts['end'])
counts.head(3)
# !pip install matplotlib
# +
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(counts['start'], counts['tweet_count'])
ticks = ['2021-01-01', '2021-04-01', '2021-07-01', '2021-10-01']
ax.set_xticks([pd.to_datetime(tick) for tick in ticks])
ax.set_ylabel('tweet count')
ax.set_title('tweets containing {}'.format(search_string), fontsize=20);
basic_twitter_scraping.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Q31XCKTbRSkJ" colab_type="text"
# ## Imports
# + id="VAANoPXNQUTT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="639a1788-3ea4-40f5-c109-4441ee296367"
# imports
import pandas as pd
import re
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, Dropout
from tensorflow.keras.layers import LSTM
import numpy as np
import pickle
import keras
from tensorflow.keras.constraints import unit_norm
from tensorflow.keras.callbacks import EarlyStopping
'''
from helper_functions import load_data
from helper_functions import my_split
from helper_functions import upsample_minority
from helper_functions import downsample_majority
from helper_functions import model_prep
from helper_functions import get_results
from helper_functions import get_f1
from helper_functions import clean_text
'''
# + id="L8Np6tZxQrpV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="35558604-38f1-4768-a528-f261cb49e42e"
# !pip install category_encoders
# + id="zGgFSBRYQaLe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="38385dee-c77f-4535-8b11-d63e1edd2f45"
# imports
import os
import pandas as pd
from sklearn.utils import resample
import category_encoders as ce
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import f1_score
import keras.backend as K
import re
def load_data():
FILE_PATH = os.path.join(os.getcwd(), 'data', 'large_data.csv')
return pd.read_csv(FILE_PATH, index_col=None)
def upsample_minority(df):
counts = df['final_status'].value_counts().index
majority = counts[0]
minority = counts[1]
df_majority = df[df['final_status'] == majority]
df_minority = df[df['final_status'] == minority]
majority_class_size = len(df_majority)
minority_class_size = len(df_minority)
minority_upsampled = resample(df_minority,
replace=True,
n_samples=majority_class_size,
random_state=42)
return pd.concat([df_majority, minority_upsampled])
def downsample_majority(df):
counts = df['final_status'].value_counts().index
majority = counts[0]
minority = counts[1]
df_majority = df[df['final_status'] == majority]
df_minority = df[df['final_status'] == minority]
majority_class_size = len(df_majority)
minority_class_size = len(df_minority)
majority_downsampled = resample(df_majority,
replace=False,
n_samples=minority_class_size,
random_state=42)
return pd.concat([df_minority, majority_downsampled])
def my_split(df, year):
train = df[df['launch_year'] < year]
test = df[df['launch_year'] == year]
return train, test
def model_prep(train, test, features, target, onehot=True, scale=True):
encoder = ce.one_hot.OneHotEncoder(use_cat_names=True)
scaler = StandardScaler()
X_train = train[features]
if onehot:
X_train = encoder.fit_transform(X_train)
if scale:
X_train = scaler.fit_transform(X_train)
y_train = train[target]
X_test = test[features]
if onehot:
X_test = encoder.transform(X_test)
if scale:
X_test = scaler.transform(X_test)
y_test = test[target]
return X_train, y_train, X_test, y_test
def get_results(y_true, y_pred):
accuracy_metric = accuracy_score(y_true, y_pred)
roc_auc_metric = roc_auc_score(y_true, y_pred)
f1_metric = f1_score(y_true, y_pred)
print('-------------------------------')
print(f'Accuracy Score: {accuracy_metric}')
print(f'ROC AUC Score: {roc_auc_metric}')
print(f'F1 Score: {f1_metric}')
return
# code credit to https://medium.com/@aakashgoel12/how-to-add-user-defined-function-get-f1-score-in-keras-metrics-3013f979ce0d
def get_f1(y_true, y_pred): #taken from old keras source code
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
recall = true_positives / (possible_positives + K.epsilon())
f1_val = 2*(precision*recall)/(precision+recall+K.epsilon())
return f1_val
def clean_text(text):
tokens = re.sub('[^a-zA-Z 0-9]', '', text)
tokens = tokens.lower().split()
return tokens
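# The get_f1 metric above is the standard precision/recall computation written with Keras backend ops. A plain NumPy sketch of the same arithmetic on hard 0/1 labels (f1_numpy is a hypothetical helper, not part of this notebook):

```python
import numpy as np

def f1_numpy(y_true, y_pred, eps=1e-7):
    # same arithmetic as get_f1 above, on plain arrays
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    tp = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)))
    possible = np.sum(np.round(np.clip(y_true, 0, 1)))
    predicted = np.sum(np.round(np.clip(y_pred, 0, 1)))
    precision = tp / (predicted + eps)
    recall = tp / (possible + eps)
    return 2 * precision * recall / (precision + recall + eps)

print(f1_numpy([1, 0, 1, 1], [1, 0, 1, 1]))  # perfect predictions -> ~1.0
```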
# + id="SW0mGltgQUTa" colab_type="code" colab={}
# loading the data
'''
df = load_data()
df.head()
'''
df = pd.read_csv('/content/large_data.csv')
# + id="DhvjDvbmQUTh" colab_type="code" colab={}
# setting variables
batch_size = 32
max_features = 10000
features = 'name'
target = 'final_status'
maxlen= 10
oov_token = '<OOV>'  # original value was redacted in the source; '<OOV>' is the conventional placeholder
# + id="l5-_5288ljxO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a94fa7e6-d469-4126-d313-00fde7bfd364"
oov_token
# + id="Zod_MjydQUTl" colab_type="code" colab={}
# cleaning the data
df[features] = df[features].fillna('')
df[features] = df[features].apply(lambda x: clean_text(x))
# + tags=[] id="yTFwDCs2QUTu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 408} outputId="ede37a58-56db-4ae8-e00c-3b30d22ce949"
# train/test split
year = 2020
train, test = my_split(df, year)
# transforming words to integer values
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(train[features])
train[features] = tokenizer.texts_to_sequences(train[features])
test[features] = tokenizer.texts_to_sequences(test[features])
# processing data
X_train, y_train, X_test, y_test = model_prep(train, test, features, target, onehot=False, scale=False)
#maxlen = max([len(each) for each in train[features]])
# padding sequences to all be the same length
X_train = sequence.pad_sequences(X_train, maxlen=maxlen, padding='post')
X_test = sequence.pad_sequences(X_test, maxlen=maxlen, padding='post')
# instantiating the model
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# compiling the model
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy', get_f1])
# fitting the model
history = model.fit(X_train, y_train,
batch_size=batch_size,
epochs=5,
validation_data=(X_test,y_test))
# + id="1ire2cP2QUTz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="96cb174a-3dc1-44bd-8006-cfa369091b9c"
y_pred = (model.predict(X_test) > 0.5).astype("int32")
y_true = y_test
get_results(y_true, y_pred)
# + id="gl8F4XPAi26j" colab_type="code" colab={}
with open('tokenizer.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# + id="BUIjOJ2IlvIl" colab_type="code" colab={}
model.save('basic_model.h5')
# + id="pNZrp44VpBmp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 408} outputId="1df7598f-d27d-4fb0-e36e-63fdbeb69e0f"
# train/test split
year = 2020
train, test = my_split(df, year)
# transforming words to integer values
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(train[features])
train[features] = tokenizer.texts_to_sequences(train[features])
test[features] = tokenizer.texts_to_sequences(test[features])
# processing data
X_train, y_train, X_test, y_test = model_prep(train, test, features, target, onehot=False, scale=False)
#maxlen = max([len(each) for each in train[features]])
# padding sequences to all be the same length
X_train = sequence.pad_sequences(X_train, maxlen=maxlen, padding='post')
X_test = sequence.pad_sequences(X_test, maxlen=maxlen, padding='post')
# instantiating the model
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(10, activation='relu'))
model.add(Dropout(.1))
model.add(Dense(1, activation='sigmoid'))
# compiling the model
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# fitting the model
history = model.fit(X_train, y_train,
batch_size=batch_size,
epochs=5,
validation_data=(X_test,y_test))
# + id="xSlMBlLhqbUl" colab_type="code" colab={}
model.save('basic_name_model.h5')
# + id="vVt5fm9U9yZY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="48e38e5b-1004-4acb-d4f6-c808e84986a2"
y_pred = (model.predict(X_test) > 0.5).astype("int32")
y_true = y_test
get_results(y_true, y_pred)
# + id="BfiScvmY9zAJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 544} outputId="761b1492-6e28-435a-f118-2060c9830e07"
# this model appears slightly better, so train it with more epochs
# train/test split
year = 2020
train, test = my_split(df, year)
# transforming words to integer values
tokenizer = Tokenizer(num_words=max_features, oov_token=oov_token)
tokenizer.fit_on_texts(train[features])
train[features] = tokenizer.texts_to_sequences(train[features])
test[features] = tokenizer.texts_to_sequences(test[features])
# processing data
X_train, y_train, X_test, y_test = model_prep(train, test, features, target, onehot=False, scale=False)
#maxlen = max([len(each) for each in train[features]])
# padding sequences to all be the same length
X_train = sequence.pad_sequences(X_train, maxlen=maxlen, padding='post')
X_test = sequence.pad_sequences(X_test, maxlen=maxlen, padding='post')
# instantiating the model
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128,
dropout=0.2, recurrent_dropout=0.3,
kernel_constraint=unit_norm(), recurrent_constraint=unit_norm()))
model.add(Dense(1, activation='sigmoid'))
# compiling the model
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
es = EarlyStopping(monitor='val_loss', mode='min', patience=5)
# fitting the model
history = model.fit(X_train, y_train,
batch_size=batch_size,
epochs=25,
validation_data=(X_test,y_test),
callbacks=[es])
# + id="X_iI2Ra0-C9-" colab_type="code" colab={}
model.save('name_model.h5')
# + id="F20n4FlbFv6X" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 17} outputId="bb68111c-09be-414b-f746-ab10c0a842a6"
from google.colab import files
files.download('name_model.h5')
# + id="URtVlOgKGB92" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 17} outputId="eabab82b-9e1e-4373-afec-c62aa8b98ad9"
with open('new_tokenizer.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
files.download('new_tokenizer.pickle')
# + id="nL41voBCGTOo" colab_type="code" colab={}
data_model/nlp_name.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6.13 ('nlp')
# language: python
# name: python3
# ---
# <center><h2><b>Imports</b></h2></center>
from tqdm import tqdm
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import re
from konlpy.tag import Okt
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
# <center><h2><b>Loading Data</b></h2></center>
df = pd.read_csv('./data/processed/df.csv')
df.head(3)
# +
# Stop words
stopwords = pd.read_table('./data/stopwords.txt')
# ['의','가','이','은','들','는','좀','잘','걍','과','도','를','으로','자','에','와','한','하다']
stopwords[:5]
# -
# <center><h2><b>Data Preprocessing</b></h2></center>
okt = Okt()
okt.morphs('와 이런 것도 영화라고 차라리 뮤직비디오를 만드는 게 나을 뻔', stem = True)
X_train = []
for sentence in tqdm(df['review'][:7465]): # first 80% of the data becomes the training set
    tokenized_sentence = okt.morphs(sentence, stem=True) # tokenize
    stopwords_removed_sentence = [word for word in tokenized_sentence if not word in stopwords] # remove stopwords
X_train.append(stopwords_removed_sentence)
print(X_train[:3])
X_test = []
for sentence in tqdm(df['review'][7465:]): # remaining 20% becomes the test set
    tokenized_sentence = okt.morphs(sentence, stem=True) # tokenize
    stopwords_removed_sentence = [word for word in tokenized_sentence if not word in stopwords] # remove stopwords
X_test.append(stopwords_removed_sentence)
# #### **Integer Encoding**
tokenizer = Tokenizer()
tokenizer.fit_on_texts(X_train) # builds the vocabulary and assigns each word a unique integer
print(list(tokenizer.word_index)[:5]) # there are over 11,000 words; the integers assigned to them can be inspected
# Integers are assigned by descending frequency over the whole training set, so words with high integers occur very rarely.
# Here we want to exclude those low-frequency words from processing.
# Let us check what share of the data is made up of words that appear fewer than 3 times.
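# The frequency-threshold idea used in the next cell can be sketched without Keras; this is a toy illustration with made-up tokenized sentences, not the review corpus:

```python
from collections import Counter

# toy tokenized sentences standing in for the review corpus
sentences = [["good", "movie"], ["good", "fun"], ["bad", "movie"], ["good"]]
word_counts = Counter(w for s in sentences for w in s)

threshold = 2
rare = [w for w, c in word_counts.items() if c < threshold]

# keep only frequent words, plus one slot for the padding index 0
vocab_size = len(word_counts) - len(rare) + 1
print(word_counts, rare, vocab_size)
```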
# +
threshold = 3
total_cnt = len(tokenizer.word_index) # number of words
rare_cnt = 0 # count of words whose frequency is below the threshold
total_freq = 0 # total frequency of all words in the training data
rare_freq = 0 # total frequency of words appearing fewer than threshold times
# iterate over (word, frequency) pairs as key and value
for key, value in tokenizer.word_counts.items():
    total_freq = total_freq + value
    # if the word's frequency is below the threshold
    if(value < threshold):
        rare_cnt = rare_cnt + 1
        rare_freq = rare_freq + value
print('vocabulary size:', total_cnt)
print('number of rare words appearing %s times or fewer: %s' % (threshold - 1, rare_cnt))
print("share of rare words in the vocabulary:", (rare_cnt / total_cnt)*100)
print("share of rare-word occurrences in the total frequency:", (rare_freq / total_freq)*100)
# -
# drop words whose frequency is 2 or lower;
# add 1 to account for the padding token at index 0
vocab_size = total_cnt - rare_cnt + 1
print('vocabulary size:', vocab_size)
# +
# pass this size to the Keras tokenizer and convert the text sequences to integer sequences
tokenizer = Tokenizer(vocab_size)
tokenizer.fit_on_texts(X_train)
X_train = tokenizer.texts_to_sequences(X_train)
X_test = tokenizer.texts_to_sequences(X_test)
# -
print(X_train[:3])
print(X_test[:3])
# +
y_train = np.array(df['sentiment'][:7465])
y_test = np.array(df['sentiment'][7465:])
print(len(y_train), len(y_test)) # judging by the lengths, the split looks correct
# -
# #### remove empty samples
# remove sentence which length is less than 1
drop_train = [index for index, sentence in enumerate(X_train) if len(sentence) < 1]
print(len(drop_train)) # there are 10 samples of length 0
# +
# drop
print(len(X_train))
print(len(y_train))
X_train = np.delete(X_train, drop_train, axis=0)
y_train = np.delete(y_train, drop_train, axis=0)
print(len(X_train))
print(len(y_train))
# -
# #### add padding
# +
print('maximum review length:', max(len(review) for review in X_train))
print('average review length:', sum(map(len, X_train))/len(X_train))
# check sentence length distribution
plt.hist([len(review) for review in X_train], bins=50)
plt.xlabel('length of samples')
plt.ylabel('number of samples')
plt.show()
# -
# The longest review has length 339, and the plot shows most reviews are roughly 25 tokens long.
# All samples in X_train and X_test must be padded to one common length before the model can process them.
# We call that length max_len.
# To find a max_len that truncates as few reviews as possible, the function below reports the share of samples whose length is at most max_len.
def below_threshold_len(max_len, nested_list):
count = 0
for sentence in nested_list:
if(len(sentence) <= max_len):
count = count + 1
    print('share of samples with length <= %s: %s' % (max_len, (count / len(nested_list))*100))
# +
max_len = 50
below_threshold_len(max_len, X_train)
# -
# about 94% of the training reviews have length 50 or less, so let us pad every sample to length 50.
X_train = pad_sequences(X_train, maxlen=max_len)
X_test = pad_sequences(X_test, maxlen=max_len)
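# pad_sequences is called here with its defaults, which pad and truncate on the left; a minimal pure-Python sketch of that behaviour (pad_pre is a hypothetical helper, not part of Keras):

```python
def pad_pre(seqs, maxlen, value=0):
    # emulate Keras pad_sequences defaults: keep the last maxlen items
    # when a sequence is too long, and pad on the left otherwise
    out = []
    for s in seqs:
        s = s[-maxlen:]
        out.append([value] * (maxlen - len(s)) + s)
    return out

print(pad_pre([[5, 3], [1, 2, 3, 4]], maxlen=3))  # [[0, 5, 3], [2, 3, 4]]
```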
# <center><h2><b>LSTM Modeling</b></h2></center>
# +
from tensorflow.keras.layers import Embedding, Dense, LSTM
from tensorflow.keras.models import Sequential
from tensorflow.keras.models import load_model
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
embedding_dim = 100
hidden_units = 128
model = Sequential()
model.add(Embedding(vocab_size, embedding_dim))
model.add(LSTM(hidden_units))
model.add(Dense(1, activation='sigmoid'))
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=4)
mc = ModelCheckpoint('./model/lnh_lstm_model.h5', monitor='val_acc', mode='max', verbose=1, save_best_only=True)
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(X_train, y_train, epochs=15, callbacks=[es, mc], batch_size=64, validation_split=0.2)
# -
print("\n test accuracy: %.4f" % (model.evaluate(X_test, y_test)[1]))
# <center><h2><b>Evaluation</b></h2></center>
def sentiment_predict(new_sentence):
loaded_model = load_model('./model/lnh_lstm_model.h5')
    new_sentence = re.sub(r'[^ㄱ-ㅎㅏ-ㅣ가-힣 ]','', new_sentence)
    new_sentence = okt.morphs(new_sentence, stem=True) # tokenize
    new_sentence = [word for word in new_sentence if not word in stopwords] # remove stopwords
    encoded = tokenizer.texts_to_sequences([new_sentence]) # integer encoding
    pad_new = pad_sequences(encoded, maxlen = max_len) # padding
    score = float(loaded_model.predict(pad_new)) # predict
    if(score > 0.5):
        print("Positive review with {:.2f}% probability.\n".format(score * 100))
    else:
        print("Negative review with {:.2f}% probability.\n".format((1 - score) * 100))
sentiment_predict('이 영화 개꿀잼 ㅋㅋㅋ')
sentiment_predict('이 영화 핵노잼 ㅠㅠ')
sentiment_predict('이딴게 영화냐 ㅉㅉ')
sentiment_predict('감독 뭐하는 놈이냐?')
sentiment_predict('와 개쩐다 정말 세계관 최강자들의 영화다')
2_3_lnh_example_okt.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Training on Admission data
# +
import numpy as np
import pandas as pd
admissions = pd.read_csv('binary.csv')
# -
admissions.head(10)
# ### data processing and standardizing
# +
# Make dummy variables for rank
data = pd.concat([admissions, pd.get_dummies(admissions['rank'], prefix='rank')], axis=1)
data = data.drop('rank', axis=1)
# -
data.head()
data.loc[:,'gre'][:5]
# +
# standardize features, since gpa and gre span much larger ranges
# z = (x - μ) / σ
for field in ['gre','gpa']:
mean, std = data[field].mean(), data[field].std()
data.loc[:, field] = (data[field]-mean)/std
# -
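# A quick self-contained check that the (x-μ)/σ transform really yields mean 0 and standard deviation 1 (toy numbers, using the same ddof=1 convention as pandas .std()):

```python
import numpy as np

# toy GRE-like scores
x = np.array([480.0, 520.0, 600.0, 700.0])
z = (x - x.mean()) / x.std(ddof=1)  # ddof=1 matches pandas .std()

print(z.mean(), z.std(ddof=1))
```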
data.head()
np.random.seed(42)
data.index
# +
# split random 10% data for testing
sample = np.random.choice(data.index, size = int(len(data) * 0.9), replace=False)
# -
len(sample)
sample[:10]
# note: sample holds the index labels of the chosen data points, so we use those labels for the
# training data and the remaining rows for the test data
data, test_data = data.loc[sample], data.drop(sample)  # .ix was removed from pandas; use .loc
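# The same label-based split can be demonstrated on a toy DataFrame (toy data; np.random.default_rng is used here instead of the legacy seeding above):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'x': range(10)})
rng = np.random.default_rng(0)
# draw 90% of the index labels without replacement
train_idx = rng.choice(toy.index, size=int(len(toy) * 0.9), replace=False)

train, test = toy.loc[train_idx], toy.drop(train_idx)
print(len(train), len(test))  # 9 1
```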
# training data
data.head()
# test data
test_data.head()
# +
# split into features and target
features, target = data.drop('admit', axis=1), data['admit']
features_test, targets_test = test_data.drop('admit', axis=1), test_data['admit']
# -
features.head()
target.head()
features.shape
target.shape
features.values
# +
def sigmoid(x):
"""
Calculate sigmoid
"""
return 1 / (1 + np.exp(-x))
# TODO: We haven't provided the sigmoid_prime function like we did in
# the previous lesson to encourage you to come up with a more
# efficient solution. If you need a hint, check out the comments
# in solution.py from the previous lecture.
# Use the same seed to make debugging easier
np.random.seed(42)
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
# Neural Network hyperparameters
epochs = 2000
learnrate = 0.6
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features.values, target):
# Loop through all records, x is the input, y is the target
# Note: We haven't included the h variable from the previous
# lesson. You can add it if you want, or you can calculate
# the h together with the output
# TODO: Calculate the output
output = sigmoid(np.dot(weights,x))
# TODO: Calculate the error
error = y - output
# TODO: Calculate the error term
error_term = error * (output * (1-output))
# TODO: Calculate the change in weights for this sample
# and add it to the total weight change
del_w += error_term * x
# TODO: Update weights using the learning rate and the average change in weights
weights += (learnrate * del_w) / n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
out = sigmoid(np.dot(features, weights))
loss = np.mean((out - target) ** 2)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
# -
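# The error_term above relies on the identity σ'(x) = σ(x)(1 − σ(x)); a quick numerical check of that derivative (not part of the original exercise):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = 0.7
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference
analytic = sigmoid(x) * (1 - sigmoid(x))

print(abs(numeric - analytic) < 1e-6)  # True
```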
UDACITY DL Nanodegree/Introduction to Neural Network/University Acceptance_using Gradient Descent.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/geral98atehortua/Mujeres_Digitales/blob/main/Clase4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="XiRfYqbNwO71"
# **Iterative control structures, continued**
#
#
# ---
# **Accumulators**
#
# This is the name given to variables that store up some kind of information.
#
# **Example**
#
# Buying groceries at the store.
#
# + id="ugYa83lowaPq" outputId="cb2c8707-a709-44c3-917a-a7b06bb3434a" colab={"base_uri": "https://localhost:8080/"}
nombre = input("Buyer's name")
listacompra = ""
print(nombre, "writes down the following items for the supermarket run:")
listacompra = listacompra + "1 pack of toilet paper"
print("----shopping list----")
listacompra = listacompra + ", 1 Pantene 2-in-1 shampoo"
listacompra = listacompra + ", 2 packs of Pequeñín stage-3 diapers"
print(listacompra)
# + [markdown] id="0Pa0QXC6wcZ0"
# The variable "listacompra" is serving to accumulate the shopping-list information.
# Note that we are **NOT** creating one variable per item; a single variable stores all the information.
#
# The next example puts accumulation into practice using quantities and prices.
# + id="8aYSk4Slv_9q" outputId="41b19c94-6aba-4af6-b1a4-80c79cf738e1" colab={"base_uri": "https://localhost:8080/"}
ppph = 14000      # price of toilet paper
cpph = 3          # number of packs of toilet paper
pshampoo = 18000  # price of Pantene 2-in-1 shampoo
cshampoo = 5      # number of shampoos
ppbebe = 17000    # price of a pack of small diapers
cpbebe = 4        # number of packs of small diapers
subtotal = 0
print("Computing the purchase total...")
total_ppph = ppph * cpph
print("the toilet paper costs", total_ppph)
subtotal = subtotal + total_ppph
print("---the subtotal is:", subtotal)
total_shampoo = pshampoo * cshampoo
print("The shampoo total is: $", total_shampoo)
subtotal = subtotal + total_shampoo
print("---the subtotal is: $", subtotal)
total_ppbebe = ppbebe * cpbebe
print("the diaper total is: $", total_ppbebe)
subtotal = subtotal + total_ppbebe
print("your purchase total is: $", subtotal)
# + [markdown] id="Tv2ZnbrwwjdH"
# **Counters**
#
#
# Closely related to the "accumulators" seen in the previous section.
# These are control variables: they control the **number** of times a given action is executed.
#
# Reusing the previous example with a small modification, we can develop the following algorithm.
# + id="2NNPyZMIwof8"
# In this case diapers are bought one unit at a time.
contp = 0
print("Buying stage-3 diapers... items are being added to the cart. In total there are:", contp, "diapers")
contp = contp + 1
print("Buying stage-3 diapers... items are being added to the cart. Now there are:", contp, "diapers")
contp = contp + 1
print("Now there are:", contp, "diapers")
contp = contp + 1
print("Now there are:", contp, "diapers")
contp = contp + 1
print("Now there are:", contp, "diapers")
contp = contp + 1
print("Now there are:", contp, "diapers")
# + [markdown] id="BQi1ReikwrAA"
# **Condition-controlled loops**
#
# **WHILE**
#
#
# ---
# Recall that control variables let us manage state. Moving from one state to another means, for example, a variable going from holding nothing to holding something, or holding a particular value (accumulator or counter) and changing it completely (flag).
#
# These control variables are the basis of control loops. Put plainly, they take us from a manual action to something more automated.
#
# We start with the "WHILE" loop ("mientras" in Spanish). This loop consists of a condition and its code block.
# What while tells us is that the code block executes as long as the condition evaluates to True.
#
# + id="sXyh8Unhwt8D" outputId="73417bb6-6b9e-4d59-89a0-5b897bb36a52" colab={"base_uri": "https://localhost:8080/"}
lapiz = 5
contlapiz = 0
print("The purchase has started. In total there are:", contlapiz, "of", lapiz, "pencils")
while (contlapiz < lapiz):
    contlapiz = contlapiz + 1
    print("A pencil was bought; now there are", contlapiz, "pencils")
a = str(contlapiz)
print(type(contlapiz))
print(type(a))
# + [markdown] id="sZBvLskLwwNo"
# Keep in mind that inside the WHILE loop, the variables involved in the condition (here contlapiz) must be updated so that the condition eventually becomes false and the loop terminates. Otherwise we would have a loop that never stops: an endless loop.
# + [markdown] id="buV_kc0Pwy_y"
# **THE FOR LOOP**
#
# A loop specialized and optimized for count-controlled iteration. It consists of three elements:
#
# 1. the iteration variable
#
# 2. the iterable element
#
# 3. the code block to iterate
#
# **Why use FOR?**
#
# In PYTHON it is very important and is considered a flexible and powerful tool, since it can iterate over complex data structures, character strings, ranges, and more. The iterables in this structure must share the following characteristic:
#
# 1. a defined quantity (this totally distinguishes it from WHILE)
#
# WHILE starts from a truth condition, whereas FOR starts from a defined quantity
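# The contrast just described can be shown side by side; both loops below count the same five pencils:

```python
# while: repeats as long as a condition holds
pencils_while = 0
while pencils_while < 5:
    pencils_while += 1

# for: iterates over a fixed, known quantity
pencils_for = 0
for _ in range(5):
    pencils_for += 1

print(pencils_while, pencils_for)  # 5 5
```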
# + id="Ej7tDybVw1PO" outputId="20bf3ae4-e702-4b97-e06a-96cf4a88ac86" colab={"base_uri": "https://localhost:8080/"}
## Back to the pencil-buying example
print("the purchase has started. In total there are: 0 pencils.")
for i in range(1, 10):  # the range function produces an interval closed on the left and open on the right
    print("A pencil was bought. Now there are", i, "pencils")
Clase4.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + hideCode=false hidePrompt=false tags=["hide_input"]
import ipywidgets as widgets
from IPython.display import display, Markdown
import pandas as pd
import numpy as np
import copy
import sympy as sp
from sympy import sympify
import ipysheet
from ipysheet import sheet, cell
from ipysheet import column, row
from IPython.display import clear_output
from IPython.display import Javascript
import simplex_algorithm as sa
import functools
M = sp.symbols('M')
# display(Markdown("# Simplex Tableau"))
def on_restr_change(change, anz_var, anz_schlupf_var, anz_kuenstl_var, button, display_variable_input):
with display_variable_input:
if change['new'] != '0':
anz_var.options = list(map(str, range(1, int(change['new'])+1,)))
anz_schlupf_var.options = list(map(str, range(0, int(change['new'])+1)))
anz_kuenstl_var.options = list(map(str, range(0, int(change['new'])+1)))
anz_var.layout.visibility = "visible"
anz_schlupf_var.layout.visibility = "visible"
anz_kuenstl_var.layout.visibility = "visible"
button.layout.visibility = "visible"
else:
anz_var.layout.visibility = "hidden"
anz_schlupf_var.layout.visibility = "hidden"
anz_kuenstl_var.layout.visibility = "hidden"
def create_tableau(change, anz_restriktionen, anz_var, anz_schlupf_var, anz_kuenstl_var, button):
clear_output(True)
display(anz_restriktionen)
display(anz_var)
display(anz_schlupf_var)
display(anz_kuenstl_var)
display(button)
def adjust_cj(change, cell_table):
for row in range(2, len(cell_table)):
if change['new'] == cell_table[row][1].value:
if "s" in cell_table[row][1].value:
cell_table[row][0].value = cell(row,0, 0, read_only=True, background_color = "white")
elif "x" in cell_table[row][1].value:
cell_table[row][0] = cell(row, 0, "...", background_color = 'yellow')
def update_table(table, input_table, simplex_start):
input_table = table['new']
simplex_start.disabled = True
def correct_input(button, simplex_start, cell_table):
wrong_input_counter = 0
for row in range(0,len(cell_table)):
for column in range(0,len(cell_table[0])):
if cell_table[row][column].style == {'backgroundColor': 'yellow'} or cell_table[row][column].style == {'backgroundColor': 'red'} and column != 1:
try:
test = float(cell_table[row][column].value)
cell_table[row][column].style = {'backgroundColor': 'yellow'}
except ValueError:
cell_table[row][column].style = {'backgroundColor': 'red'}
wrong_input_counter += 1
if wrong_input_counter == 0:
simplex_start.disabled = False
else:
simplex_start.disabled = True
def start_simplex(button, input_table, sum_var, display_output):
display_output.clear_output()
copy_tableau = ipysheet.to_dataframe(input_table)
copy_tableau = copy_tableau.apply(pd.to_numeric, errors='ignore', downcast='float')
copy_tableau.columns = range(0, sum_var+3)
cj = []
cj_zj = []
for column in range(0,len(copy_tableau.columns)):
if column == 0:
cj.append(np.nan)
cj_zj.append(np.nan)
elif column == 1:
cj.append("cj")
cj_zj.append("cj-zj")
elif column == 2:
cj.append(0)
cj_zj.append(np.nan)
else:
cj.append(0)
cj_zj.append(0)
copy_tableau.loc[len(copy_tableau.index)] = cj
copy_tableau.loc[len(copy_tableau.index)] = cj_zj
copy_tableau.replace('-M', -M, inplace = True)
for row in copy_tableau.index:
for column in copy_tableau.columns:
try:
copy_tableau.loc[row][column] = int(copy_tableau.loc[row][column])
except Exception:
pass
sa.get_cj_zj(copy_tableau)
#global tableau
tableau = copy_tableau
    # simplex algorithm
list_tableaus, Meldungen, list_pivot_elements = sa.simplex_algorithm(tableau, 10,M)
    pd.set_option("display.precision", 3)
with display_output:
        display(Markdown("## Result"))
    # generate the tableaus (stacked vertically so they can be printed out)
for table in range(0,len(list_tableaus)):
with display_output:
            display(Markdown("### Tableau " + str(table)))
display(list_tableaus[table].style\
.apply(lambda x: ['background: lightblue' if x.name == list_pivot_elements[table][1] else '' for i in x])\
.apply(lambda x: ['background: lightblue' if x.name == list_pivot_elements[table][0] else '' for i in x], axis=1)\
.hide_index()\
.hide_columns())
for message in range(len(Meldungen[table])):
with display_output:
display(widgets.Label(value=Meldungen[table][message]))
def create_input_table(button, anz_restriktionen, anz_var ,anz_schlupf_var, anz_kuenstl_var ,display_table_input, display_output):
display_table_input.clear_output()
list_var = []
    simplex_start = widgets.Button(description="Start simplex", disabled=True)
    check_input = widgets.Button(description="Check input")
spalte = 3
reihe_basis_var = 2
sum_var=int(anz_var.value) + int(anz_schlupf_var.value) + int(anz_kuenstl_var.value)
    input_table = ipysheet.sheet(rows=2+int(anz_restriktionen.value), columns=sum_var+3, row_headers=False, column_headers=False)
M = sp.symbols('M')
    # two-dimensional array representing the sheet
cell_table = [[0]*input_table.columns for i in range(input_table.rows)]
for row in range(0,input_table.rows):
for column in range(0,input_table.columns):
if column != 0 and column != 1:
cell_table[row][column] = cell(row, column, "...", background_color = 'yellow')
cell_table[row][column].observe(functools.partial(update_table,
input_table = input_table,
simplex_start =simplex_start
)
)
cell_table[0][0] = cell(0,0, "", read_only=True, background_color='grey')
cell_table[0][1] = cell(0,1, "", read_only=True, background_color='grey')
cell_table[0][2] = cell(0,2, "", read_only=True, background_color='grey')
    # fill the header row with labels
cell_table[1][0] = cell(1, 0, "cj", read_only=True, font_weight = 'bold', background_color = "white")
    cell_table[1][1] = cell(1, 1, "Basic variable", read_only=True, font_weight = 'bold', background_color = "white")
cell_table[1][2] = cell(1, 2, "Quantity", read_only=True, font_weight = 'bold', background_color = "white")
for anz in range(1, int(anz_var.value)+1):
var_name = "x"+ str(anz)
cell_table[1][spalte] = cell(1,spalte, var_name, read_only=True, font_weight = 'bold', background_color = "white")
list_var.append(cell_table[1][spalte])
spalte += 1
for anz in range(1, int(anz_schlupf_var.value)+1):
var_name = "s"+ str(anz)
cell_table[0][spalte] = cell(0,spalte, 0, read_only=True, background_color = "white")
cell_table[1][spalte] = cell(1,spalte, var_name, read_only=True, font_weight = 'bold', background_color = "white")
list_var.append(cell_table[1][spalte])
spalte += 1
for anz in range(1, int(anz_kuenstl_var.value)+1):
var_name = "a"+ str(anz)
cell_table[0][spalte] = cell(0,spalte, "-M", read_only=True, background_color = "white")
cell_table[1][spalte] = cell(1,spalte, var_name, read_only=True, font_weight = 'bold', background_color = "white")
list_var.append(cell_table[1][spalte])
spalte += 1
start_basisvar = list_var[(len(list_var)-int(anz_restriktionen.value)):]
start_basisvar.sort(key=lambda x: x.value)
basis_selection = []
for var in list_var:
if "a" in var.value:
break
basis_selection.append(var.value)
for row in range(2,input_table.rows):
if "a" in start_basisvar[row-2].value:
cell_table[row][0] = cell(row,0, '-M', read_only=True, background_color = "white")
cell_table[row][1] = cell(row,1, start_basisvar[row-2].value, read_only=True, font_weight = 'bold', background_color = "white")
if "s" in start_basisvar[row-2].value:
cell_table[row][0] = cell(row,0, 0, read_only=True, background_color = "white")
cell_table[row][1] = cell(row,1, start_basisvar[row-2].value, read_only=False, font_weight = 'bold', choice = basis_selection, background_color = "yellow")
cell_table[row][1].observe(functools.partial(adjust_cj, cell_table = cell_table))
if "x" in start_basisvar[row-2].value:
cell_table[row][1] = cell(row,1, start_basisvar[row-2].value, read_only=False, font_weight = 'bold', choice = basis_selection, background_color = "yellow")
cell_table[row][0] = cell(row, 0, "...", background_color = 'yellow')
cell_table[row][1].observe(functools.partial(adjust_cj, cell_table = cell_table))
with display_table_input:
        display(Markdown("## Create the standard tableau"))
display(input_table)
display(check_input)
display(simplex_start)
check_input.on_click(functools.partial(correct_input,
simplex_start = simplex_start,
cell_table = cell_table
)
)
simplex_start.on_click(functools.partial(start_simplex, input_table = input_table, sum_var = sum_var, display_output = display_output))
# example
#0 300 200 0 0 -M -M
#1 cj Basisvariable Quantity x1 x2 s1 s2 a1 a2
#2 -M a1 60 2 2 0 0 1 0
#3 -M a2 80 2 8 -1 0 0 1
#4 0 s2 40 1 0 0 1 0 0
#5 NaN cj -140*M -4*M -10*M M 0 -M -M
#6 NaN cj-zj NaN 4*M + 300 10*M + 200 -M 0 0 0
display(Markdown("# Simplex Tableau"))
display(Markdown("## Define the type and number of variables"))
display_variable_input = widgets.Output()
display_table_input = widgets.Output()
display_output = widgets.Output()
anz_restriktionen = widgets.Dropdown(
options=list(map(str, range(0, 6))),
value= '0',
    description='Constraints:',
disabled=False,
)
anz_var = widgets.Dropdown(
options=list(map(str, range(1, 6))),
value='1',
    description='Decision variables:',
disabled=False,
)
anz_schlupf_var = widgets.Dropdown(
options=list(map(str, range(0, 6))),
value='0',
    description='Slack variables:',
disabled=False,
)
anz_kuenstl_var = widgets.Dropdown(
options=list(map(str, range(0, 6))),
value='0',
    description='Artificial variables:',
disabled=False,
)
#sum_var = int(anz_var.value) + int(anz_schlupf_var.value) + int(anz_kuenstl_var.value)
button = widgets.Button(description="Create input tableau!")
anz_restriktionen.observe(functools.partial(on_restr_change,
anz_var = anz_var,
anz_schlupf_var = anz_schlupf_var,
anz_kuenstl_var = anz_kuenstl_var,
button = button,
display_variable_input = display_variable_input),
names = "value"
)
button.on_click(functools.partial(create_tableau,
anz_restriktionen = anz_restriktionen,
anz_var = anz_var,
anz_schlupf_var = anz_schlupf_var,
anz_kuenstl_var = anz_kuenstl_var,
button = button)
)
display(display_variable_input)
display(display_table_input)
display(display_output)
anz_restriktionen.layout.visibility = "visible"
anz_var.layout.visibility = "hidden"
anz_schlupf_var.layout.visibility = "hidden"
anz_kuenstl_var.layout.visibility = "hidden"
button.layout.visibility = "hidden"
with display_variable_input:
display(anz_restriktionen)
display(anz_var)
display(anz_schlupf_var)
display(anz_kuenstl_var)
display(button)
button.on_click(functools.partial(create_input_table,
anz_restriktionen = anz_restriktionen,
anz_var = anz_var,
anz_schlupf_var = anz_schlupf_var,
anz_kuenstl_var = anz_kuenstl_var,
display_table_input = display_table_input,
display_output = display_output
)
)
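This notebook relies heavily on the `functools.partial` pattern to pass extra context into widget callbacks that ipywidgets invokes with a single `change` argument. A minimal self-contained sketch of that pattern (names here are illustrative, not from the notebook):

```python
import functools

# partial pre-binds keyword arguments, so a callback that the widget
# framework calls as handler(change) can still receive extra context
# such as the widgets and outputs it needs to update.
def on_change(change, label, log):
    log.append((label, change['new']))

log = []
handler = functools.partial(on_change, label='restrictions', log=log)
handler({'new': '3'})   # ipywidgets would call this with the change dict
```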
# Simplex.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # voting_ensemble_soft
# +
from __future__ import division
from IPython.display import display
from matplotlib import pyplot as plt
# %matplotlib inline
import numpy as np
import pandas as pd
import random, sys, os, re
from sklearn.ensemble import VotingClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LogisticRegressionCV
import xgboost as xgb
from sklearn.cross_validation import StratifiedKFold
from sklearn.grid_search import RandomizedSearchCV, GridSearchCV
from sklearn.cross_validation import cross_val_predict, permutation_test_score
# +
SEED = 97
scale = False
minmax = False
norm = False
nointercept = False
engineering = False
N_CLASSES = 2
submission_filename = "../submissions/submission_voting_ensemble_softWgtd.csv"
# -
# # Load the training data
# +
from load_blood_data import load_blood_data
y_train, X_train = load_blood_data(train=True, SEED = SEED,
scale = scale,
minmax = minmax,
norm = norm,
nointercept = nointercept,
engineering = engineering)
# -
# # Train the model
StatifiedCV = StratifiedKFold(y = y_train,
n_folds = 10,
shuffle = True,
random_state = SEED)
# +
# %%time
random.seed(SEED)
# -------------------------------- estimators ----------------------------------------
gbc = GradientBoostingClassifier(loss = 'exponential',
learning_rate = 0.15,
n_estimators = 175,
max_depth = 1,
subsample = 0.75,
min_samples_split = 2,
min_samples_leaf = 1,
#min_weight_fraction_leaf = 0.0,
init = None,
random_state = SEED,
max_features = None,
verbose = 0,
max_leaf_nodes = None,
warm_start = False)
#presort = 'auto')
etc = ExtraTreesClassifier(n_estimators = 10,
criterion = 'entropy',
max_depth = 7,
bootstrap = True,
max_features = None,
min_samples_split = 2,
min_samples_leaf = 1,
#min_weight_fraction_leaf = 0.0,
max_leaf_nodes = None,
oob_score = False,
n_jobs = -1,
random_state = SEED,
verbose = 0)
#warm_start = False,
#class_weight = None)
xgbc = xgb.XGBClassifier(learning_rate = 0.1,
n_estimators = 50,
max_depth = 5,
subsample = 0.25,
colsample_bytree = 0.75,
gamma = 0,
nthread = 1,
objective = 'binary:logistic',
min_child_weight = 1,
max_delta_step = 0,
base_score = 0.5,
seed = SEED,
silent = True,
missing = None)
logit = LogisticRegression(penalty = 'l2',
dual = False,
C = 0.001,
fit_intercept = True,
solver = 'liblinear',
max_iter = 50,
intercept_scaling = 1,
tol = 0.0001,
class_weight = None,
random_state = SEED,
multi_class = 'ovr',
verbose = 0,
warm_start = False,
n_jobs = -1)
logitCV = LogisticRegressionCV(Cs = 10,
cv = 10,
fit_intercept = True,
penalty = 'l2',
solver = 'liblinear',
max_iter = 50,
dual = False,
scoring = None,
tol = 0.0001,
class_weight = None,
n_jobs = -1,
verbose = 0,
refit = True,
intercept_scaling = 1.0,
multi_class = 'ovr',
random_state = SEED)
# -------------------------------- VotingClassifier ----------------------------------------
estimator_list = [('gbc', gbc), ('etc', etc), ('xgbc', xgbc), ('logit', logit), ('logitCV',logitCV)]
weights_list = [ 1, 0.75, 0.75, 2, 1]
clf = VotingClassifier(estimators = estimator_list,
voting = 'soft',
weights = weights_list)
clf.fit(X_train, y_train)
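What `voting='soft'` with `weights` does can be illustrated with made-up class probabilities (the numbers below are not actual model output): each estimator's probabilities are averaged with the given weights, and the class with the highest weighted mean wins.

```python
import numpy as np

# Illustrative per-estimator probabilities for one sample, two classes.
probs = np.array([
    [0.6, 0.4],   # e.g. a boosted-tree estimator
    [0.3, 0.7],   # e.g. an extra-trees estimator
    [0.2, 0.8],   # e.g. a logistic-regression estimator
])
weights = np.array([1.0, 0.75, 2.0])

# Weighted mean over estimators; the averaged row still sums to 1.
avg = np.average(probs, axis=0, weights=weights)
predicted_class = int(np.argmax(avg))
```

Here the heavily weighted third estimator pulls the decision toward class 1 even though the first estimator preferred class 0.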
# +
# from sklearn_utilities import GridSearchHeatmap
# GridSearchHeatmap(grid_clf, y_key='learning_rate', x_key='n_estimators')
# from sklearn_utilities import plot_validation_curves
# plot_validation_curves(grid_clf, param_grid, X_train, y_train, ylim = (0.0, 1.05))
# +
# %%time
try:
from sklearn_utilities import plot_learning_curve
except:
import imp, os
util = imp.load_source('sklearn_utilities', os.path.expanduser('~/Dropbox/Python/sklearn_utilities.py'))
from sklearn_utilities import plot_learning_curve
plot_learning_curve(estimator = clf,
title = None,
X = X_train,
y = y_train,
ylim = (0.0, 1.10),
cv = StratifiedKFold(y = y_train,
n_folds = 10,
shuffle = True,
random_state = SEED),
train_sizes = np.linspace(.1, 1.0, 5),
n_jobs = 1)
plt.show()
# -
# # Training set predictions
# +
# %%time
train_preds = cross_val_predict(estimator = clf,
X = X_train,
y = y_train,
cv = StatifiedCV,
n_jobs = 1,
verbose = 0,
fit_params = None,
pre_dispatch = '2*n_jobs')
y_true, y_pred = y_train, train_preds
# +
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true, y_pred, labels=None)
print cm
try:
from sklearn_utilities import plot_confusion_matrix
except:
import imp, os
util = imp.load_source('sklearn_utilities', os.path.expanduser('~/Dropbox/Python/sklearn_utilities.py'))
from sklearn_utilities import plot_confusion_matrix
plot_confusion_matrix(cm, ['Did not Donate','Donated'])
accuracy = round(np.trace(cm)/float(np.sum(cm)),4)
misclass = 1 - accuracy
print("Accuracy {}, mis-class rate {}".format(accuracy,misclass))
# +
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import log_loss
from sklearn.metrics import f1_score
fpr, tpr, thresholds = roc_curve(y_true, y_pred, pos_label=None)
plt.figure(figsize=(10,6))
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr, tpr)
AUC = roc_auc_score(y_true, y_pred, average='macro')
plt.text(x=0.6,y=0.4,s="AUC {:.4f}"\
.format(AUC),
fontsize=16)
plt.text(x=0.6,y=0.3,s="accuracy {:.2f}%"\
.format(accuracy*100),
fontsize=16)
logloss = log_loss(y_true, y_pred)
plt.text(x=0.6,y=0.2,s="LogLoss {:.4f}"\
.format(logloss),
fontsize=16)
f1 = f1_score(y_true, y_pred)
plt.text(x=0.6,y=0.1,s="f1 {:.4f}"\
.format(f1),
fontsize=16)
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.show()
# +
# %%time
score, permutation_scores, pvalue = permutation_test_score(estimator = clf,
X = X_train.values.astype(np.float32),
y = y_train,
cv = StatifiedCV,
labels = None,
random_state = SEED,
verbose = 0,
n_permutations = 100,
scoring = None,
n_jobs = 1)
# +
plt.figure(figsize=(20,8))
plt.hist(permutation_scores, 20, label='Permutation scores')
ylim = plt.ylim()
plt.plot(2 * [score], ylim, '--g', linewidth=3,
label='Classification Score (pvalue {:.4f})'.format(pvalue))
plt.plot(2 * [1. / N_CLASSES], ylim, 'r', linewidth=7, label='Luck')
plt.ylim(ylim)
plt.legend(loc='center',fontsize=16)
plt.xlabel('Score')
plt.show()
# find mean and stdev of the scores
from scipy.stats import norm
mu, std = norm.fit(permutation_scores)
# -
# format for scores.csv file
import re
algo = re.search(r"submission_(.*?)\.csv", submission_filename).group(1)
print("{: <26} , , {:.4f} , {:.4f} , {:.4f} , {:.4f} , {:.4f} , {:.4f}"\
.format(algo,accuracy,logloss,AUC,f1,mu,std))
# # --------------------------------------------------------------------------------------------
# # Test Set Predictions
# ## Re-fit with the full training set
#clf.set_params(**clf_params)
clf.fit(X_train, y_train)
# ## Load the test data
# +
from load_blood_data import load_blood_data
X_test, IDs = load_blood_data(train=False, SEED = SEED,
scale = scale,
minmax = minmax,
norm = norm,
nointercept = nointercept,
engineering = engineering)
# -
# # Predict the test set with the fitted model
# +
y_pred = clf.predict(X_test)
print(y_pred[:10])
try:
y_pred_probs = clf.predict_proba(X_test)
print(y_pred_probs[:10])
donate_probs = [prob[1] for prob in y_pred_probs]
except Exception,e:
print(e)
donate_probs = [0.65 if x>0 else 1-0.65 for x in y_pred]
print(donate_probs[:10])
# -
# # Create the submission file
# +
assert len(IDs)==len(donate_probs)
f = open(submission_filename, "w")
f.write(",Made Donation in March 2007\n")
for ID, prob in zip(IDs, donate_probs):
f.write("{},{}\n".format(ID,prob))
f.close()
# -
# voting_ensemble.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#load library with Iris
from sklearn.datasets import load_iris
#load classifier from sklearn library
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
import numpy as np
#set random seed
np.random.seed(0)
# -
# # Load data
#Create an object called iris with the iris dataset
iris = load_iris()
# +
#create df with four feature variables
df = pd.DataFrame(iris.data, columns = iris.feature_names)
#view data
df.head()
# -
#Add a column, target names
df['species'] = pd.Categorical.from_codes(iris.target,iris.target_names)
print(df.shape)
df.head()
# # Preprocess data
from sklearn.preprocessing import LabelEncoder
enc = LabelEncoder()
df['species'] = enc.fit_transform(df['species'])
print(df.shape)
df.head()
# # Create training and test data
from sklearn.model_selection import train_test_split
df.iloc[:,:4].head()
X_train, X_test, Y_train, Y_test = train_test_split(df.iloc[:,:4],df['species'],test_size=0.25)
print(X_train.shape, Y_train.shape)
print(X_test.shape, Y_test.shape)
# # Train random forest classifier
# +
# Create a random forest Classifier. By convention, clf means 'Classifier'
clf = RandomForestClassifier(n_jobs=2, random_state=0)
# Train the Classifier to take the training features and learn how they relate
# to the training y (the species)
clf.fit(X_train,Y_train )
# -
# # Apply classifier to test data
#
# If you have been following along, you will know we only trained our classifier on part of the data, leaving the rest out. This is, in my humble opinion, the most important part of machine learning. Why? Because by leaving out a portion of the data, we have a set of data to test the accuracy of our model!
#
# Let’s do that now.
# Apply the Classifier we trained to the test data (which, remember, it has never seen before)
clf.predict(X_test)
# What are you looking at above? Remember that we coded each of the three species of plant as 0, 1, or 2. What the list of numbers above is showing you is what species our model predicts each plant is based on the the sepal length, sepal width, petal length, and petal width. How confident is the classifier about each plant? We can see that too.
# View the predicted probabilities of the first 10 observations
clf.predict_proba(X_test)[0:10]
# There are three species of plant, thus [ 1. , 0. , 0. ] tells us that the classifier is certain that the plant is the first class. Taking another example, [ 0.9, 0.1, 0. ] tells us that the classifier gives a 90% probability the plant belongs to the first class and a 10% probability the plant belongs to the second class. Because 90 is greater than 10, the classifier predicts the plant is the first class.
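The step from probabilities to a predicted class is just picking the column with the highest value. A sketch with illustrative numbers (the probabilities below are made up, not actual classifier output):

```python
import numpy as np

# Each row is one plant; each column is one species class.
target_names = np.array(['setosa', 'versicolor', 'virginica'])
probs = np.array([
    [1.0, 0.0, 0.0],   # certain: first class
    [0.0, 0.9, 0.1],   # 90% second class
    [0.1, 0.2, 0.7],   # 70% third class
])
predicted_codes = probs.argmax(axis=1)        # index of the largest probability
predicted_names = target_names[predicted_codes]
```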
# # Evaluate classifier
# Now that we have predicted the species of all plants in the test data, we can compare our predicted species with each plant's actual species.
# Predict the species (as encoded class labels) for the test data
preds = clf.predict(X_test)
# # Create confusion matrix
# A confusion matrix can be, no pun intended, a little confusing to interpret at first, but it is actually very straightforward. The columns are the species we predicted for the test data and the rows are the actual species for the test data. So, if we take the top row, we can see that we predicted all 13 setosa plants in the test data perfectly. However, in the next row, we predicted 5 of the versicolor plants correctly, but mis-predicted two of the versicolor plants as virginica.
#
# The short explanation of how to interpret a confusion matrix is: anything on the diagonal was classified correctly and anything off the diagonal was classified incorrectly.
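The diagonal rule described above can be sketched with an illustrative confusion matrix (the counts below echo the example in the text, not this notebook's actual results):

```python
import numpy as np

# Rows are actual species, columns are predicted species.
cm = np.array([
    [13,  0,  0],   # setosa:     all 13 classified correctly
    [ 0,  5,  2],   # versicolor: 5 correct, 2 mis-predicted as virginica
    [ 0,  1, 17],   # virginica
])

# Everything on the diagonal is correct, so accuracy is the
# diagonal sum divided by the total number of test samples.
accuracy = np.trace(cm) / cm.sum()
```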
# Create confusion matrix
cm = pd.crosstab(Y_test, preds, rownames=['Actual Species'], colnames=['Predicted Species'])
cm
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
cm = confusion_matrix(Y_test,preds)
cm
accuracy = accuracy_score(Y_test,preds)
accuracy
# # View feature importance
list(zip(X_train, clf.feature_importances_))
# Random forest with Iris dataset.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Recipes: How do I ...?
#
# This page contains frequently used code-snippets ("recipes").
# Below, `da` refers to a data array.
# The examples generally assume that data is in "tof" unit (time-of-flight), i.e., no unit conversion was applied yet, and that there is a "spectrum" dimension in addition to the "tof" dimension.
# Replace these by the actual dimensions as required.
#
# ## General
#
# ### Compute total counts per pixel
counts = da.sum('tof') # for histogrammed data
counts = da.bins.sum().sum('tof') # for binned event data
# ## Event data
# ### Compute number of events
da.bins.size() # events per bin (ignoring event weights and event masks)
da.bins.size().sum() # total events from all non-masked bins
# If the events have been normalized the event weights may differ from 1 and `bins.sum()` should be used instead of `bins.size()`.
# This also respects event masks:
da.bins.sum() # effective events per bin
da.bins.sum().sum() # total effective events from all non-masked bins
# ### Mask a time-of-flight region such as a prompt-pulse
tof = sc.array(dims=['tof'], unit='ms', values=[tof_min, mask_start, mask_end, tof_max])
da = sc.bin(da, edges=[tof]) # bin in 'tof', updating prior 'tof' binning if present
da.masks['prompt_pulse'] = (tof >= tof['tof', 1]) & (tof < tof['tof', 2])
# ## Plotting
# ### Plot a single spectrum
da['spectrum', index].plot() # plot spectrum with given index
# plot spectrum with given spectrum-number, provided da.coords['spectrum'] exists
da['spectrum', sc.scalar(spectrum_number)].plot()
# ### Plot comparison of multiple spectra
spectra = {}
spectra['name1'] = da1['spectrum', index1]
spectra['name2'] = da1['spectrum', index2]
spectra['name3'] = da2['spectrum', index3]
sc.plot(spectra)
# docs/user-guide/recipes.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="Y7Z4EnkgvC5l" colab_type="code" colab={}
# #!pip install datadotworld
# #!pip install datadotworld[pandas]
# + id="Xo8o23hZvi6s" colab_type="code" colab={}
# #!dw configure
# + id="ZkbVNLZJuXp0" colab_type="code" colab={}
from google.colab import drive
import pandas as pd
import numpy as np
import datadotworld as dw
# + id="6Bfw-1I_v0nJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="baae5ca1-18bc-4fe1-94bb-3cb7ae0a8086" executionInfo={"status": "ok", "timestamp": 1581535710016, "user_tz": -60, "elapsed": 1094, "user": {"displayName": "<NAME>\u0119bski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA9VzIOeXY6DbvpZwMI99C19h3IZCF6LH_K9jaxGw=s64", "userId": "04605606701251940211"}}
#drive.mount('/content/drive')
# + id="t6CzxIZNv76S" colab_type="code" outputId="b6681d14-43b4-4208-9072-f0cc2facfec4" executionInfo={"status": "ok", "timestamp": 1581535785079, "user_tz": -60, "elapsed": 1695, "user": {"displayName": "<NAME>0119bski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA9VzIOeXY6DbvpZwMI99C19h3IZCF6LH_K9jaxGw=s64", "userId": "04605606701251940211"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# !pwd
# #cd "drive/My Drive/Colab Notebooks/dw_matrix"
# + id="fP5Wtqk7xU2h" colab_type="code" colab={}
# #!mkdir data
# + id="VJdB0gXwxYKo" colab_type="code" colab={}
# #!echo 'data' > .gitignore
# + id="iNYTdKeDxgPQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d36081a4-772a-48d3-c744-c0a83667327f" executionInfo={"status": "ok", "timestamp": 1581535849961, "user_tz": -60, "elapsed": 550, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA9VzIOeXY6DbvpZwMI99C19h3IZCF6LH_K9jaxGw=s64", "userId": "04605606701251940211"}}
data = dw.load_dataset('datafiniti/mens-shoe-prices')
data.dataframes
# + id="AgG5qxyaxyIw" colab_type="code" outputId="1015c0cc-9339-4788-94c4-4d435c3b03d5" executionInfo={"status": "ok", "timestamp": 1581535860061, "user_tz": -60, "elapsed": 1721, "user": {"displayName": "<NAME>\u0119bski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA9VzIOeXY6DbvpZwMI99C19h3IZCF6LH_K9jaxGw=s64", "userId": "04605606701251940211"}} colab={"base_uri": "https://localhost:8080/", "height": 121}
df = data.dataframes['7004_1']
df.shape
# + id="hUIfwrJ6x7uw" colab_type="code" outputId="74640936-345d-4667-e8b2-ec833970eb22" executionInfo={"status": "ok", "timestamp": 1581535884103, "user_tz": -60, "elapsed": 631, "user": {"displayName": "<NAME>0119bski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA9VzIOeXY6DbvpZwMI99C19h3IZCF6LH_K9jaxGw=s64", "userId": "04605606701251940211"}} colab={"base_uri": "https://localhost:8080/", "height": 652}
df.sample(5)
# + id="zc4PYXq9yJWC" colab_type="code" outputId="7235243d-cf14-4e02-931a-6dfd32feb850" executionInfo={"status": "ok", "timestamp": 1581535924957, "user_tz": -60, "elapsed": 600, "user": {"displayName": "<NAME>0119bski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA9VzIOeXY6DbvpZwMI99C19h3IZCF6LH_K9jaxGw=s64", "userId": "04605606701251940211"}} colab={"base_uri": "https://localhost:8080/", "height": 218}
df.columns
# + id="RgyieFMEyMAJ" colab_type="code" outputId="bff689e5-9dad-406c-c69b-cdc2b885eb97" executionInfo={"status": "ok", "timestamp": 1581535927484, "user_tz": -60, "elapsed": 612, "user": {"displayName": "<NAME>0119bski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA9VzIOeXY6DbvpZwMI99C19h3IZCF6LH_K9jaxGw=s64", "userId": "04605606701251940211"}} colab={"base_uri": "https://localhost:8080/", "height": 101}
df.prices_currency.unique()
# + id="wHMtRntRyRmY" colab_type="code" outputId="b784214a-dffe-4edd-a09c-b55913391c56" executionInfo={"status": "ok", "timestamp": 1581535929855, "user_tz": -60, "elapsed": 598, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA9VzIOeXY6DbvpZwMI99C19h3IZCF6LH_K9jaxGw=s64", "userId": "04605606701251940211"}} colab={"base_uri": "https://localhost:8080/", "height": 252}
df.prices_currency.value_counts()
# + id="siI-dWaayfDY" colab_type="code" colab={}
df_usd = df[df.prices_currency == 'USD'].copy()
# + id="RhKqD049yn5Q" colab_type="code" outputId="12b4203c-fd13-4bd9-c80e-378b244e0275" executionInfo={"status": "ok", "timestamp": 1581535935180, "user_tz": -60, "elapsed": 589, "user": {"displayName": "<NAME>\u0119bski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA9VzIOeXY6DbvpZwMI99C19h3IZCF6LH_K9jaxGw=s64", "userId": "04605606701251940211"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
df_usd.shape
# + id="HKHjGbOOyq8g" colab_type="code" outputId="f1334208-f8b8-4ede-fee7-abc720494010" executionInfo={"status": "ok", "timestamp": 1581535938510, "user_tz": -60, "elapsed": 667, "user": {"displayName": "<NAME>\u0119bski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA9VzIOeXY6DbvpZwMI99C19h3IZCF6LH_K9jaxGw=s64", "userId": "04605606701251940211"}} colab={"base_uri": "https://localhost:8080/", "height": 282}
df_usd['prices_amountmin'] = df_usd.prices_amountmin.astype(float)
df_usd['prices_amountmin'].hist()
# + id="LepBfJ2qzC93" colab_type="code" outputId="bfa99f8a-5a25-448b-9927-ebd27e900ee6" executionInfo={"status": "ok", "timestamp": 1581535942084, "user_tz": -60, "elapsed": 571, "user": {"displayName": "<NAME>\u0119bski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA9VzIOeXY6DbvpZwMI99C19h3IZCF6LH_K9jaxGw=s64", "userId": "04605606701251940211"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
filter_max = np.percentile(df_usd['prices_amountmin'], 99)
filter_max
# + id="5JCZrf-kzUWH" colab_type="code" colab={}
df_usd_filter = df_usd[df_usd['prices_amountmin'] < filter_max]
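The percentile cut-off used above can be sketched with a tiny illustrative array (the prices below are made up): values at or above the 99th-percentile threshold, i.e. the extreme outliers, are dropped.

```python
import numpy as np

prices = np.array([10, 12, 11, 13, 9, 10000])   # one extreme outlier
cut = np.percentile(prices, 99)                 # threshold near the top of the range
kept = prices[prices < cut]                     # filter out the outlier
```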
# + id="t_ijtQoDzqHv" colab_type="code" outputId="1eafa97d-e594-460c-8a3c-3e7d7c2d1c55" executionInfo={"status": "ok", "timestamp": 1581535945348, "user_tz": -60, "elapsed": 789, "user": {"displayName": "<NAME>\u0119bski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mA9VzIOeXY6DbvpZwMI99C19h3IZCF6LH_K9jaxGw=s64", "userId": "04605606701251940211"}} colab={"base_uri": "https://localhost:8080/", "height": 282}
df_usd_filter.prices_amountmin.hist(bins=100)
# + id="0azsStPHzvsv" colab_type="code" colab={}
# matrix_one/Day3.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import pathlib
import os
# In[50]:
import tensorflow_datasets as tfds
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D,SeparableConv2D
from tensorflow.keras import models
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# -
gpus = tf.config.list_physical_devices("GPU")[0]
tf.config.experimental.set_memory_growth(gpus, True)
# +
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# -
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
class_names[ train_labels[0] ]
plt.figure()
plt.imshow(train_images[100])
plt.colorbar()
plt.grid(False)
plt.show()
predict_data = train_images[100]
# +
def decode_img(img_raw):
    img_tensor = tf.image.decode_jpeg(img_raw, channels=3)
    print(img_tensor.shape)
    tf_final = tf.image.resize(img_tensor, [160, 160])
    # scale pixel values to the range [-1, 1]
    img = (tf.cast(tf_final, tf.float32) / 127.5) - 1
    return img
def process_path(file_path):
label = get_label(file_path)[0]
# load the raw data from the file as a string
img = tf.io.read_file(file_path)
img = decode_img(img)
return img, label
def get_label(file_path):
# convert the path to a list of path components
parts = tf.strings.split(file_path, os.path.sep)
# The second to last is the class-directory
return parts[-2] == label_name
data_root_orig = "/data/cats_and_dogs_filtered/train//"
data_root = pathlib.Path(data_root_orig)
test_data_path = "/data/cats_and_dogs_filtered/validation/"
test_path = pathlib.Path(test_data_path)
test_image_path = list(test_path.glob('*/*'))
test_image_path = [str(path) for path in test_image_path]
traing_image_path = list(data_root.glob('*/*'))
traing_image_path = [str(path) for path in traing_image_path]
label_name = sorted(item.name for item in test_path.glob('*/')
if item.is_dir())
# +
a = tf.io.read_file(traing_image_path[0]).numpy()
d = decode_img(a)
d = d.numpy()
#d = d.reshape(32,32,3).tolist()
#d
plt.figure()
plt.imshow(d)
plt.colorbar()
#plt.grid(False)
plt.show()
# -
pre_dict = list()
for a in traing_image_path[::-100]:
print(a)
a = tf.io.read_file(a).numpy()
d = decode_img(a)
d = d.numpy()
    d = d.reshape(1,160,160,3).tolist()  # decode_img resizes to 160x160
pre_dict.append(d)
# # shuffle the data
# pre_dict = pre_dict[::-10]
len(pre_dict)
# +
traing_ds = tf.data.Dataset.from_tensor_slices(traing_image_path).map(process_path).shuffle(buffer_size=2000).batch(batch_size=100).repeat()
validation_ds = tf.data.Dataset.from_tensor_slices(test_image_path).map(process_path).shuffle(buffer_size=2000).batch(batch_size=100).repeat()
# for i in traing_ds.take(100):
# c = i[0]
# print(i[1])
# plt.figure()
# plt.imshow(c)
# plt.colorbar()
# #plt.grid(False)
# plt.show()
# +
model = models.Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(160, 160, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(SeparableConv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(SeparableConv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(1))  # single logit for binary cat-vs-dog classification
# -
model.summary()
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
history =model.fit(traing_ds,epochs=10,
steps_per_epoch=1000,
validation_data=validation_ds,
validation_steps=5)
# pre_data = model.predict(pre_dict[0])
#np.argmax(pre_data)
print(pre_dict[1])
plt.imshow(pre_dict[1][0])
for i in pre_dict:
    pre_data = model.predict(i)
    print(np.argmax(pre_data))
clothing_model = models.Sequential([
Flatten(input_shape=(28, 28)),
Dense(128, activation='relu'),
Dense(10)
])
clothing_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
clothing_model.summary()
clothing_model.fit(train_images, train_labels, epochs=10)
clothing_model.evaluate(test_images, test_labels, steps=1000)
# +
predict_data = predict_data.reshape(1,28,28)
print(predict_data.shape)
data_pre = clothing_model.predict(predict_data)
pre_num = np.argmax(data_pre)
print(pre_num)
class_names[pre_num]
# -
new_model = models.Sequential([
    Conv2D(32, 3, padding='same', activation='relu', input_shape=(160, 160, 3)),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Conv2D(128, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
new_model.summary()
new_model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
history =new_model.fit(traing_ds,epochs=10, steps_per_epoch=1000,validation_data=validation_ds, validation_steps=5)
new_model.evaluate(validation_ds, steps=100)
# +
import matplotlib.pyplot as plt
def plot_curves(history):
pd.DataFrame(history.history).plot(figsize=(8,5))
print(pd.DataFrame(history.history))
plt.grid(True)
plt.gca().set_ylim(0,3.2)
plt.show()
plot_curves(history)
# acc = history.history['accuracy']
# val_acc = history.history['val_accuracy']
# loss = history.history['loss']
# val_loss = history.history['val_loss']
# epochs_range = range(10)
# plt.figure(figsize=(8, 8))
# plt.subplot(1, 2, 1)
# plt.plot(epochs_range, acc, label='Training Accuracy')
# plt.plot(epochs_range, val_acc, label='Validation Accuracy')
# plt.legend(loc='lower right')
# plt.title('Training and Validation Accuracy')
# plt.subplot(1, 2, 2)
# plt.plot(epochs_range, loss, label='Training Loss')
# plt.plot(epochs_range, val_loss, label='Validation Loss')
# plt.legend(loc='upper right')
# plt.title('Training and Validation Loss')
# plt.show()
# acc = new_model.history.history['accuracy']
# val_acc = new_model.history.history['val_accuracy']
# loss = new_model.history.history['loss']
# val_loss = new_model.history.history['val_loss']
# epochs_range = range(10)
# plt.figure(figsize=(8, 8))
# plt.subplot(1, 2, 1)
# plt.plot(epochs_range, acc, label='Training Accuracy')
# plt.plot(epochs_range, val_acc, label='Validation Accuracy')
# plt.legend(loc='lower right')
# plt.title('Training and Validation Accuracy')
# plt.subplot(1, 2, 2)
# plt.plot(epochs_range, loss, label='Training Loss')
# plt.plot(epochs_range, val_loss, label='Validation Loss')
# plt.legend(loc='upper right')
# plt.title('Training and Validation Loss')
# plt.show()
# -
CNN_with_cat&dog.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="LTEovaE_gXXx"
# # Apricot package demonstration
#
# The following notebook demonstrates a hypothetical problem to illustrate the application of the Apricot package.
#
# A short fictional backstory motivates the problem and simulates a real-world scenario.
#
# ## Problem:
#
# You are the Chief Network Specialist for a new aviation company called Flixflight, a new participant in the German aviation market. The company plans to enter the local flights market. As a new participant in this competitive market, the company needs a decisive plan to start its operations. The C-board members have an important meeting to decide on a penetration strategy. The task is forwarded to the Chief Network Specialist, hence it is your task to analyse the current routes and come up with a strategy.
#
# You decide that the best strategy is to start services at the airports that cover the most routes. This is a Max Cover Problem. Operating each airport entails operational costs, hence as a budding company you are restricted to choosing 4 airports. This restriction on the number of airports is a cardinality constraint for the above problem.
#
#
# The problem defined above is a Submodular Monotone Maximization problem.
#
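# The standard greedy algorithm for max cover with a cardinality constraint can be sketched in a few lines. The adjacency matrix below is a hypothetical toy example, not the real route data loaded later in this notebook:

```python
import numpy as np

def greedy_max_cover(adjacency, k):
    """Pick k rows of a 0/1 adjacency matrix that maximize column coverage."""
    n = adjacency.shape[0]
    selected, covered = [], np.zeros(n, dtype=bool)
    for _ in range(k):
        # Marginal gain of each candidate given what is already covered.
        gains = [np.sum(~covered & (adjacency[i] > 0)) for i in range(n)]
        best = int(np.argmax(gains))
        selected.append(best)
        covered |= adjacency[best] > 0
    return selected, int(covered.sum())

# Hypothetical 4-airport toy network.
toy = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 0],
                [0, 1, 0, 0]])
print(greedy_max_cover(toy, 2))  # ([0, 1], 4)
```

Because the coverage objective is monotone and submodular, this greedy selection is guaranteed to reach at least a (1 - 1/e) fraction of the optimal cover.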
# + colab={"base_uri": "https://localhost:8080/"} id="PXGLPrjreKvD" outputId="374158e1-99e6-4872-cd88-0324972e076f"
#Necessary Packages
# !pip install geopandas
# !pip install apricot-select
# + [markdown] id="LptBE5I7gQ45"
#
# + id="fXwKxRjHkwkS"
import pandas as pd
import seaborn; seaborn.set_style('whitegrid')
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn')
# %matplotlib inline
# + [markdown] id="Qe-3XtsKgeN1"
# ## Data
#
# A brief description about the dataset that would be used in this demonstration.
# The data is from the openflights.org, a free open-source tool that contains over 10,000 airports, train stations and ferry terminals.
#
# During the analysis we use 2 csv files, airports.csv and routes.csv. The airports.csv file contains information about individual airports such as Airport ID, Name, Country, GPS location and so on. The routes.csv file contains the possible routes between each of the airports.
#
# Our dataset is filtered to Germany as it is the only place of interest.
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="Y13i8j-dlaPD" outputId="b7cf8c0c-9a85-460a-d85b-a1a761058714"
names = 'Airport ID', 'Name', 'City', 'Country', 'IATA', 'ICAO', 'Latitude', 'Longitude', 'Altitude', 'Timezone', 'DST', 'Tz', 'Type', 'Source'
airports = pd.read_csv("/content/airports.csv", header=None, names=names)
airports = airports[airports['Country'] == 'Germany']
airports.head()
# + [markdown] id="KtKBVRJLgm48"
# There are a total of 249 airports in the given dataset. It is important to understand that most of the airports will be filtered out, as the routes dataset is not exhaustive.
# + id="OlOzn-OCxprZ" colab={"base_uri": "https://localhost:8080/", "height": 202} outputId="82f1fd44-4af7-492b-e27e-66b71da20b50"
names = 'Airline', 'Airline ID', 'Source ', 'Source ID', 'Destination', 'Destination ID', 'Codeshare', 'Stops', 'Equipment'
routes = pd.read_csv("/content/routes.csv", header=None, names=names)
routes = routes.replace("\\N", np.nan).dropna()
routes['Source ID'] = routes['Source ID'].astype(int)
routes['Destination ID'] = routes['Destination ID'].astype(int)
routes = routes.dropna()
routes.head()
# + [markdown] id="SxY-z49Ggp4Y"
# Now the routes dataframe has been created. This dataframe will later be joined with the airports dataframe to provide an overview of the entire dataset. Each route includes a source airport, where the flight began, and a destination airport.
# + [markdown] id="gIxTq89-gvTB"
# # German Airport Visualization
# + id="wVqYW8ttzc5U"
top_cities = {
'Berlin': (13.404954, 52.520008),
'Cologne': (6.953101, 50.935173),
'Düsseldorf': (6.782048, 51.227144),
'Frankfurt am Main': (8.682127, 50.110924),
'Hamburg': (9.993682, 53.551086),
'Leipzig': (12.387772, 51.343479),
'Munich': (11.576124, 48.137154),
'Dortmund': (7.468554, 51.513400),
'Stuttgart': (9.181332, 48.777128),
'Nuremberg': (11.077438, 49.449820),
'Hannover': (9.73322, 52.37052)
}
# + id="oqFIqcrNABCD"
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
germany = world.query('name == "Germany"')
# + colab={"base_uri": "https://localhost:8080/", "height": 895} id="TqTvZ3d8yEOg" outputId="8200b236-89d6-41b1-a320-b8d8a4896d30"
plt.figure(figsize=(15, 11))
fig, ax = plt.subplots()
germany.plot(ax=ax,color='orange',alpha=0.8)
plt.scatter(airports['Longitude'], airports['Latitude'], s=3, color='g')
plt.ylim(46, 56)
plt.xlim(5, 16)
for c in top_cities.keys():
# Plot city name.
ax.text(
x=top_cities[c][0],
# Add small shift to avoid overlap with point.
y=top_cities[c][1] + 0.08,
s=c,
fontsize=12,
ha='center',
)
# Plot city location centroid.
ax.plot(
top_cities[c][0],
top_cities[c][1],
marker='o',
c='black',
alpha=0.5
)
ax.set(
title='Germany',
aspect=1.3,
facecolor='lightblue'
)
fig.set_figheight(15)
fig.set_figwidth(15)
# + [markdown] id="_6KBPld8g00Y"
# The above polygon resembles the geographical borders of Germany.
#
# The next step is to join the two tables, airports.csv and routes.csv, to get a final dataset that can be used for the analysis.
#
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="byr5CbVL-A_l" outputId="733df5fa-e4bd-459c-f5bc-f38a9298b97e"
routes_reduced = routes[['Source ID', 'Destination ID']].drop_duplicates()
airports_reduced = airports[['Airport ID', 'Latitude', 'Longitude']]
routes_merged = pd.merge(routes_reduced, airports_reduced, left_on='Source ID', right_on='Airport ID')
routes_merged = pd.merge(routes_merged, airports_reduced, left_on='Destination ID', right_on='Airport ID', suffixes=('_source', '_destination'))
routes_merged = routes_merged.drop(['Airport ID_destination', 'Airport ID_source'], axis=1)
routes_merged
# + [markdown] id="-g3isG0XhUPz"
# # Unique Flights routes within Germany
#
# The visualization shows all the unique flight routes within Germany. The objective of finding the unique routes is to avoid double counting when calculating the final objective value.
# + colab={"base_uri": "https://localhost:8080/", "height": 895} id="B9sAdLWoAGQX" outputId="1d61e9fe-c7c4-453b-e971-149f953b6c54"
plt.figure(figsize=(15, 11))
fig, ax = plt.subplots()
germany.plot(ax=ax,color='orange',alpha=0.8)
for i, (_, _, la_x, lo_x, la_y, lo_y) in routes_merged.iterrows():
plt.plot([lo_x, lo_y], [la_x, la_y], color='k', linewidth=0.5)
plt.scatter(airports['Longitude'], airports['Latitude'], s=3, color='g')
plt.ylim(46, 56)
plt.xlim(5, 16)
for c in top_cities.keys():
# Plot city name.
ax.text(
x=top_cities[c][0],
# Add small shift to avoid overlap with point.
y=top_cities[c][1] + 0.08,
s=c,
fontsize=12,
ha='center',
)
# Plot city location centroid.
ax.plot(
top_cities[c][0],
top_cities[c][1],
marker='o',
c='black',
alpha=0.5
)
ax.set(
title='Germany Airport Locations',
aspect=1.3,
facecolor='lightblue'
)
fig.set_figheight(15)
fig.set_figwidth(15)
# + [markdown] id="CIYS_G6JiDnQ"
# # The total number of unique routes is 34
# + colab={"base_uri": "https://localhost:8080/"} id="vH6S-K7XAM3X" outputId="658a7079-6766-448c-d0eb-15e8df728b95"
n = len(airports)
mapping = {airport: i for i, airport in enumerate(airports['Airport ID'])}
route_map = np.zeros((n, n))
for _, (source, destination) in routes_reduced.iterrows():
if source in mapping and destination in mapping:
x, y = mapping[source], mapping[destination]
route_map[x, y] = 1
route_map[y, x] = 1
route_map.sum()
# + [markdown] id="Jchm69iDjZMs"
# The problem is to find 4 airports that cover the maximum number of these 34 routes. Choosing the entire ground set would cover all 34 unique routes, but since there is a cardinality constraint of 4, our final selection needs to satisfy it.
#
# This is an NP-hard problem, as the number of combinations grows exponentially with the size of the ground set. Hence we will solve the problem using a heuristic method and the lazy greedy method. The results from the two methods will be compared to reach our final conclusions.
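# The lazy greedy method can be sketched as follows. It exploits submodularity, meaning marginal gains can only shrink as the cover grows, so stale gains stored in a max-heap are only re-evaluated when an element reaches the top. The adjacency matrix here is a hypothetical toy, not the real route data:

```python
import heapq
import numpy as np

def lazy_greedy_max_cover(adjacency, k):
    """Lazy greedy selection of k rows maximizing column coverage."""
    n = adjacency.shape[0]
    covered = np.zeros(n, dtype=bool)

    def gain(i):
        return int(np.sum(~covered & (adjacency[i] > 0)))

    # Max-heap via negated gains; entries may be stale.
    heap = [(-gain(i), i) for i in range(n)]
    heapq.heapify(heap)
    selected = []
    while len(selected) < k and heap:
        _, i = heapq.heappop(heap)
        g = gain(i)  # refresh the possibly stale gain
        if not heap or g >= -heap[0][0]:
            # Still at least as good as the next candidate's cached gain:
            # by submodularity it is safe to select i now.
            selected.append(i)
            covered |= adjacency[i] > 0
        else:
            heapq.heappush(heap, (-g, i))
    return selected, int(covered.sum())

# Hypothetical 4-airport toy network.
toy = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 0],
                [0, 1, 0, 0]])
print(lazy_greedy_max_cover(toy, 2))  # ([0, 1], 4)
```

It returns the same selection as plain greedy, but typically with far fewer gain evaluations on large ground sets.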
# + [markdown] id="Y6jJxEdlms3g"
# # Heuristic Approach
# + [markdown] id="iu9uQBrYlkZZ"
# The first, trivial heuristic approach is to choose the k airports with the highest number of routes, in the expectation of reaching the optimal result.
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 171} id="2krq03O2ApqW" outputId="f5218988-3221-459f-afd0-b0badeeb259b"
airports_w_routes = airports.copy()
airports_w_routes['# Routes'] = [route_map[mapping[airport]].sum() for airport in airports['Airport ID']]
airports_w_routes.sort_values("# Routes", ascending=False).head(4)
# + [markdown] id="lUDR1R8om402"
# The first 4 airports selected are Munich Airport, which covers 8 routes, Frankfurt am Main Airport, which covers 6, Berlin-Tegel Airport, which covers 4, and Westerland Sylt Airport, which covers 2.
# + [markdown] id="DblIm6V-nmhD"
# ## Visualization of the Heuristic solution
# + colab={"base_uri": "https://localhost:8080/", "height": 895} id="gZ-369dKAuFX" outputId="9c7fb21b-1697-4499-8ad4-1d030dd837e3"
plt.figure(figsize=(15, 11))
fig, ax = plt.subplots()
airports_ = airports_w_routes.sort_values("# Routes", ascending=False).head(4)
airport_idxs = airports_['Airport ID'].values
d = {}
plt.scatter(airports['Longitude'], airports['Latitude'], s=2, color='c')
plt.scatter(airports_['Longitude'], airports_['Latitude'], color='r')
for i, (sid, did, la_x, lo_x, la_y, lo_y) in routes_merged.iterrows():
if (sid, did) in d:
continue
if int(sid) in airport_idxs or int(did) in airport_idxs:
d[(sid, did)] = True
plt.plot([lo_x, lo_y], [la_x, la_y], color='r', linewidth=0.15)
germany.plot(ax=ax,color='orange',alpha=0.8)
plt.ylim(46, 56)
plt.xlim(5, 16)
for c in top_cities.keys():
# Plot city name.
ax.text(
x=top_cities[c][0],
# Add small shift to avoid overlap with point.
y=top_cities[c][1] + 0.08,
s=c,
fontsize=12,
ha='center',
)
# Plot city location centroid.
ax.plot(
top_cities[c][0],
top_cities[c][1],
marker='o',
c='black',
alpha=0.5
)
ax.set(
title='Germany Airport Locations',
aspect=1.3,
facecolor='lightblue'
)
fig.set_figheight(15)
fig.set_figwidth(15)
# + [markdown] id="FaDYwwlNnsSO"
#
# The final number of airports that are covered is 15.
#
# + colab={"base_uri": "https://localhost:8080/"} id="YAz-RuLLAwQ_" outputId="754657f8-996c-4253-bc6d-25471b64185f"
most_routes = np.array([mapping[airport] for airport in airport_idxs[:4]])
route_map[most_routes].max(axis=0).sum()
# + [markdown] id="Ne7OItmJn1SQ"
# # Submodular Maximization Package
#
# + [markdown] id="WJsx-cHRozVF"
# The same problem is now being solved using the Submodular Maximization Package to come up with near optimal solutions.
# + colab={"base_uri": "https://localhost:8080/", "height": 226} id="6Q2iudXYfAra" outputId="182452c9-c44c-482f-979f-a8a262f6b595"
from apricot import FacilityLocationSelection
model = FacilityLocationSelection(4,metric='precomputed')
model.fit(route_map)
airports_w_routes.iloc[model.ranking].head(4)
# + [markdown] id="8vTvDF3UpvXm"
# Based on the solution from the Apricot package, the selected airports are Munich Airport, Frankfurt am Main Airport, Berlin-Tegel Airport and Saarbrücken Airport. The solution from the Apricot package is similar to the solution from the heuristic technique.
# + [markdown] id="wDMhXk11qV1J"
# # Visualization of the solution from Apricot Package
# + colab={"base_uri": "https://localhost:8080/", "height": 895} id="6OaL6rDgfBJY" outputId="2433aba8-ca34-4b4b-b973-be1004c798c7"
airports_ = airports_w_routes.iloc[model.ranking]
airport_idxs = airports_['Airport ID'].values
d = {}
plt.figure(figsize=(15, 11))
fig, ax = plt.subplots()
plt.scatter(airports['Longitude'], airports['Latitude'], s=2, color='c')
plt.scatter(airports_['Longitude'], airports_['Latitude'], color='m')
for i, (sid, did, la_x, lo_x, la_y, lo_y) in routes_merged.iterrows():
if (sid, did) in d:
continue
if int(sid) in airport_idxs or int(did) in airport_idxs:
d[(sid, did)] = True
plt.plot([lo_x, lo_y], [la_x, la_y], color='r', linewidth=0.15)
germany.plot(ax=ax,color='orange',alpha=0.8)
plt.ylim(46, 56)
plt.xlim(5, 16)
for c in top_cities.keys():
# Plot city name.
ax.text(
x=top_cities[c][0],
# Add small shift to avoid overlap with point.
y=top_cities[c][1] + 0.08,
s=c,
fontsize=12,
ha='center',
)
# Plot city location centroid.
ax.plot(
top_cities[c][0],
top_cities[c][1],
marker='o',
c='black',
alpha=0.5
)
ax.set(
title='Germany Airport Locations',
aspect=1.3,
facecolor='lightblue'
)
fig.set_figheight(15)
fig.set_figwidth(15)
# + [markdown] id="BiQBQqKdqdQa"
# The number of airports covered by using the Apricot package is 16, which is greater than the heuristic solution.
# + colab={"base_uri": "https://localhost:8080/"} id="t2EVL8iDfmp7" outputId="3dc0a89a-3b0c-422f-9c13-72431e593742"
most_routes = np.array([mapping[airport] for airport in airport_idxs[:4]])
route_map[most_routes].max(axis=0).sum()
# + [markdown] id="B2dfW0ABqxME"
# One observation is that the selection from the submodular maximization package Apricot maximizes the marginal gain at each step; hence by choosing 'Saarbrücken Airport' as the final airport it covers an additional airport and increases the objective.
# + id="WztNsuaxf2zR"
submodular_airport.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import statsmodels as sm
from linearmodels import PanelOLS
from linearmodels import RandomEffects
df = pd.read_csv("individual african countries 3Jan20.txt",sep='\t')
df.describe()
data = df.set_index(['Ctry','year'])
mod1 = PanelOLS(data.PovertyGap, data[['Credit','Education','Inflation','Institutions','lnFDI','lnGDP']], entity_effects=True)
res1 = mod1.fit(cov_type='clustered', cluster_entity=True)
res1
data.describe()
data = df.set_index(['Ctry','year'])
mod2 = PanelOLS(data.PovertyGap, data[['Credit','Education','Inflation','Institutions','lnFDI','lnGDP','lnODA']], entity_effects=True)
res2 = mod2.fit(cov_type='clustered', cluster_entity=True)
res2
data = df.set_index(['Ctry','year'])
mod3 = PanelOLS(data.PovertyHC, data[['Credit','Education','Inflation','Institutions','lnFDI','lnGDP']], entity_effects=True)
res3 = mod3.fit(cov_type='clustered', cluster_entity=True)
res3
data = df.set_index(['Ctry','year'])
mod4 = PanelOLS(data.PovertyHC, data[['Credit','Education','Inflation','Institutions','lnFDI','lnGDP','lnODA']], entity_effects=True)
res4 = mod4.fit(cov_type='clustered', cluster_entity=True)
res4
latest/Africa USIU.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Batch Create Dendrograms
#
# This notebook allows you to perform hierarchical cluster analysis on multiple models with multiple clustering options. The output is an HTML index file which allows you to display the generated cluster analyses as dendrograms.
#
# The last (optional) cell in this notebook allows you to generate standalone HTML files for a list of already-generated dendrograms.
# ## Setup
# +
# Python imports
import os
from pathlib import Path
from IPython.display import display, HTML
# Define paths
# current_dir = %pwd
project_dir = str(Path(current_dir).parent.parent)
project_dirname = project_dir.split('/')[-1]
current_reldir = current_dir.split("/write/")[1]
model_dir = project_dir + '/' + 'project_data/models'
partials_path = os.path.join(current_dir, 'partials')
scripts_path = 'scripts/batch_cluster.py'
config_path = project_dir + '/config/config.py'
# Import scripts
# %run {config_path}
# %run {scripts_path}
display(HTML('<p style="color: green;">Setup complete.</p>'))
# -
# ## Configuration
#
# Provide a list of all models you wish to cluster and the distance metrics and linkage methods you wish to apply to each of the models.
#
# Set `models = []` if you wish to cluster all the models available in our project. Otherwise, provide a list of the folder names for each model you wish to cluster.
#
# Available distance metrics are 'euclidean' and 'cosine'.
#
# Available linkage methods are 'average', 'single', 'complete', and 'ward'.
#
# Note that a number of advanced configuration options are available. These are detailed in the <a href="README.md" target="_blank">README</a> file. If you wish to use advanced configurations, add them directly to the `BatchCluster()` call in the **Cluster** cell.
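# For reference, the metric and linkage options above correspond to the standard SciPy hierarchical-clustering calls. This is a minimal sketch with random hypothetical vectors, independent of the project's actual models:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))           # hypothetical model vectors
dists = pdist(X, metric='cosine')     # or metric='euclidean'
Z = linkage(dists, method='average')  # or 'single', 'complete', 'ward'
print(Z.shape)  # (n-1, 4): one row per merge, consumed by dendrogram plotting
```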
# +
# Configuration
models = [] # E.g. ['topics25', 'topics50']
distance_metrics = [] # E.g. ['euclidean']
linkage_methods = ['average', 'single', 'complete', 'ward']
orientation ='bottom' # Can be changed to 'top', 'left', or 'right'
height = 600 # In pixels
width = 1200 # In pixels
display(HTML('<p style="color: green;">Configuration complete.</p>'))
# -
# ## Cluster
#
# Begin the cluster analysis by running the cell below.
# Run the batch cluster
batch_cluster = BatchCluster(models, project_dir, model_dir, partials_path, distance_metrics, linkage_methods,
orientation=orientation, height=height, width=width, WRITE_DIR=WRITE_DIR, PORT=PORT)
# ## Create Standalone Dendrograms (Optional)
#
# Run the cell below if you wish to create standalone versions of any of the dendrograms you have already created. They will be saved into your project's Dendrogram module folder. The dendrograms can be downloaded and will work locally, as long as you have an internet connection.
# ### Configuration
#
# Choose dendrograms to create (e.g. `topics50-euclidean-average`). By default, the dendrogram files will begin with "standalone_". You can modify this by changing the `prefix` variable below. If you do not wish to have a prefix, change it to `None`.
# +
# Configuration
dendrograms = [] # E.g. ['topics25-euclidean-average', 'topics50-euclidean-average']
prefix = 'standalone_'
display(HTML('<p style="color: green;">Configuration complete.</p>'))
# -
# ### Create the Dendrogram(s)
# +
# Python imports
import os
from pathlib import Path
# Define paths
# current_dir = %pwd
project_dir = str(Path(current_dir).parent.parent)
model_dir = project_dir + '/' + 'project_data/models'
scripts_path = 'scripts/standalone.py'
# Import scripts
# %run {scripts_path}
# Generate the dendrogram(s)
create_standalone(dendrograms, partials_path, model_dir, file_prefix=prefix)
src/templates/v0.1.9/modules/dendrogram/batch_dendrogram.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# +
fname = '../data/keplerstellar_2021.05.22_17.51.29.csv'
columns = ['kepmag', 'teff', 'radius', 'dist', 'kmag']
df = pd.read_csv(fname, comment='#', usecols=columns)
# Calculate Absolute Magnitude.
w = df['dist'] == 0.0
df.loc[w, 'dist'] = np.nan
df['Absmag'] = df.kepmag - 5 * (np.log10(df.dist) - 1)
df['dist'].unique()
# -
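# A quick check of the distance-modulus relation used above, M = m - 5*(log10(d) - 1), with assumed values: at d = 10 pc the absolute and apparent magnitudes coincide.

```python
import numpy as np

m, d = 12.0, 10.0            # hypothetical apparent magnitude and distance [pc]
M = m - 5 * (np.log10(d) - 1)
print(M)  # 12.0
```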
plt.scatter(df.teff, df.Absmag)
#plt.xlim(20000, 0)
#plt.ylim(25, 5)
notebook/keplerstellar.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 2.1 Introduction
# ### 2.1.1 Model Context
# In PyMC3, we typically handle all the variables we want in our model within the context of the Model object.
# +
import pymc3 as pm
with pm.Model() as model:
parameter = pm.Exponential("poisson_param", 1.0)
data_generator = pm.Poisson("data_generator", parameter)
# -
# This is an extra layer of convenience compared to PyMC. Any variables created within a given Model's context will be automatically assigned to that model. If you try to define a variable outside of the context of a model, you will get an error.
#
# We can continue to work within the context of the same model by using `with` together with the name of the model object that we have already created.
with model:
data_plus_one = data_generator + 1
# We can examine the same variables outside of the model context once they have been defined, but to define more variables that the model will recognize they have to be within the context.
parameter.tag.test_value
# Each variable assigned to a model will be defined with its own name, the first string parameter (we will cover this further in the variables section). To create a different model object with the same name as one we have used previously, we need only run the first block of code again.
with pm.Model() as model:
theta = pm.Exponential("theta", 2.0)
data_generator = pm.Poisson("data_generator", theta)
# We can also define an entirely separate model. Note that we are free to name our models whatever we like, so if we do not want to overwrite an old model we need only make another.
with pm.Model() as ab_testing:
p_A = pm.Uniform("P(A)", 0, 1)
p_B = pm.Uniform("P(B)", 0, 1)
# You probably noticed that PyMC3 will often give you notifications about transformations when you add variables to your model. These transformations are done internally by PyMC3 to modify the space that the variable is sampled in (when we get to actually sampling the model). This is an internal feature which helps with the convergence of our samples to the posterior distribution and serves to improve the results.
#
# ### 2.1.2 PyMC3 Variables
# All PyMC3 variables have an initial value (i.e. test value). Using the same variables from before:
print("parameter.tag.test_value =", parameter.tag.test_value)
print("data_generator.tag.test_value =", data_generator.tag.test_value)
print("data_plus_one.tag.test_value =", data_plus_one.tag.test_value)
# The *test_value* is used only for the model, as the starting point for sampling if no other start is specified. It will not change as a result of sampling. This initial state can be changed at variable creation by specifying a value for the testval parameter.
# +
with pm.Model() as model:
parameter = pm.Exponential("poisson_param", 1.0, testval=0.5)
print("\nparameter.tag.test_value =", parameter.tag.test_value)
# -
# This can be helpful if you are using a more unstable prior that may require a better starting point.
#
# PyMC3 is concerned with two types of programming variables: stochastic and deterministic.
#
# * *stochastic variables* are variables that are not deterministic, i.e., even if you knew all the values of the variables' parameters and components, it would still be random. Included in this category are instances of classes `Poisson`, `DiscreteUniform`, and `Exponential`.
#
# * *deterministic variables* are variables that are not random if the variables' parameters and components were known. This might be confusing at first: a quick mental check is if I knew all of variable `foo`'s component variables, I could determine what `foo`'s value is.
# +
with pm.Model() as model:
lambda_1 = pm.Exponential("lambda_1", 1.0)
lambda_2 = pm.Exponential("lambda_2", 1.0)
tau = pm.DiscreteUniform("tau", lower=0, upper=10)
new_deterministic_variable = lambda_1 + lambda_2
# +
import numpy as np
n_data_points = 5 # in CH1 we had ~70 data points
idx = np.arange(n_data_points)
with model:
lambda_ = pm.math.switch(tau >= idx, lambda_1, lambda_2)
# -
# ### 2.1.3 Including Observations in the Model
# At this point, it may not look like it, but we have fully specified our priors. For example, we can ask and answer questions like "What does my prior distribution of λ1 look like?"
# +
# %matplotlib inline
from IPython.core.pylabtools import figsize
import matplotlib.pyplot as plt
import scipy.stats as stats
figsize(12.5, 4)
samples = lambda_1.random(size=20000)
plt.hist(samples, bins=70, density=True, histtype="stepfilled")
plt.title(r"Prior distribution for $\lambda_1$")
plt.xlim(0, 8);
# -
# To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified P(A). Our next goal is to include data/evidence/observations X into our model.
#
# PyMC3 stochastic variables have a keyword argument `observed`. The keyword `observed` has a very simple role: fix the variable's current value to be the given data, typically a NumPy array or pandas DataFrame. For example:
data = np.array([10, 5])
with model:
fixed_variable = pm.Poisson("fxd", 1, observed=data)
print("value: ", fixed_variable.tag.test_value)
# This is how we include data into our models: initializing a stochastic variable to have a *fixed value*.
#
# To complete our text message example, we fix the PyMC3 variable `observations` to the observed dataset.
# We're using some fake data here
data = np.array([10, 25, 15, 20, 35])
with model:
obs = pm.Poisson("obs", lambda_, observed=data)
print(obs.tag.test_value)
# ## 2.2 Modeling Approaches
# A good starting thought to Bayesian modeling is to think about how your data might have been generated. Position yourself in an omniscient position, and try to imagine how you would recreate the dataset.
#
# In the last chapter we investigated text message data. We begin by asking how our observations may have been generated:
#
# 1. We started by thinking "what is the best random variable to describe this count data?" A Poisson random variable is a good candidate because it can represent count data. So we model the number of sms's received as sampled from a Poisson distribution.
#
# 2. Next, we think, "Ok, assuming sms's are Poisson-distributed, what do I need for the Poisson distribution?" Well, the Poisson distribution has a parameter $\lambda$.
#
# 3. Do we know $\lambda$? No. In fact, we have a suspicion that there are two $\lambda$ values, one for the earlier behaviour and one for the later behaviour. We don't know when the behaviour switches, but we call the switchpoint $\tau$.
#
# 4. What is a good distribution for the two $\lambda$s? The exponential is good, as it assigns probabilities to positive real numbers. Well the exponential distribution has a parameter too, call it $\alpha$.
#
# 5. Do we know what the parameter $\alpha$ might be? No. At this point, we could continue and assign a distribution to $\alpha$, but it's better to stop once we reach a set level of ignorance: whereas we have a prior belief about $\lambda$ ("it probably changes over time", "it's likely between 10 and 30", etc.), we don't really have any strong beliefs about $\alpha$. So it's best to stop here. What is a good value for $\alpha$ then? We think that the $\lambda$s are between 10 and 30, so if we set $\alpha$ really low (which corresponds to larger probability on high values) we are not reflecting our prior well. Similarly, a too-high $\alpha$ misses our prior belief as well. A good idea for $\alpha$, so as to reflect our belief, is to set the value so that the mean of $\lambda$, given $\alpha$, is equal to our observed mean. This was shown in the last chapter.
#
# 6. We have no expert opinion of when $\tau$ might have occurred. So we will suppose $\tau$ is from a discrete uniform distribution over the entire timespan.
#
# ### 2.2.1 Same Story, Different Ending
# Interestingly, we can create new datasets by retelling the story. For example, if we reverse the above steps, we can simulate a possible realization of the dataset.
#
# 1. Specify when the user's behaviour switches by sampling from DiscreteUniform(0,80):
tau = np.random.randint(0, 80)
print(tau)
# 2. Draw λ1 and λ2 from an Exp(α) distribution:
alpha = 1./20.
lambda_1, lambda_2 = np.random.exponential(scale=1/alpha, size=2)
print(lambda_1, lambda_2)
# 3. For days before τ, represent the user's received SMS count by sampling from Poi(λ1), and sample from Poi(λ2) for days after τ. For example:
data = np.r_[stats.poisson.rvs(mu=lambda_1, size=tau), stats.poisson.rvs(mu=lambda_2, size = 80 - tau)]
# 4. Plot the artificial dataset:
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau-1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend();
# It is okay that our fictional dataset does not look like our observed dataset: the probability that it would is incredibly small. PyMC3's engine is designed to find good parameters, $\lambda_i, \tau$, that maximize this probability.
#
# The ability to generate artificial datasets is an interesting side effect of our modeling, and we will see that this ability is a very important method in Bayesian inference. We produce a few more datasets below:
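# To make "maximize this probability" concrete, here is a sketch (with a hypothetical dataset and hand-picked parameter values, not part of the text) of scoring candidate $(\lambda_1, \lambda_2, \tau)$ settings by their log-likelihood; parameters close to the truth score higher:

```python
import numpy as np
from scipy import stats

def log_likelihood(counts, lambda_1, lambda_2, tau):
    """Log-probability of the counts under a Poisson switchpoint model."""
    early = stats.poisson.logpmf(counts[:tau], mu=lambda_1).sum()
    late = stats.poisson.logpmf(counts[tau:], mu=lambda_2).sum()
    return early + late

# hypothetical data generated with lambda_1=15, lambda_2=25, tau=40
rng = np.random.default_rng(0)
counts = np.r_[rng.poisson(15, 40), rng.poisson(25, 40)]

good = log_likelihood(counts, 15, 25, 40)   # near the truth
bad = log_likelihood(counts, 5, 50, 10)     # far from the truth
print(good > bad)
```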
# +
def plot_artificial_sms_dataset():
tau = stats.randint.rvs(0, 80)
alpha = 1./20.
lambda_1, lambda_2 = stats.expon.rvs(scale=1/alpha, size=2)
data = np.r_[stats.poisson.rvs(mu=lambda_1, size=tau), stats.poisson.rvs(mu=lambda_2, size=80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau-1], color="r", label="user behaviour changed")
plt.xlim(0, 80);
figsize(12.5, 5)
plt.title("More examples of artificial datasets")
for i in range(4):
plt.subplot(4, 1, i+1)
plot_artificial_sms_dataset()
# -
# ### 2.2.3 A Simple Case
# As this is a hacker book, we'll continue with the web-dev example. For the moment, we will focus on the analysis of site A only. Assume that there is some true probability, $0 \lt p_A \lt 1$, that a user shown site A eventually purchases from the site. This is the true effectiveness of site A. Currently, this quantity is unknown to us.
#
# Suppose site A was shown to $N$ people, and $n$ people purchased from the site. One might conclude hastily that $p_A = \frac{n}{N}$. Unfortunately, the observed frequency $\frac{n}{N}$ does not necessarily equal $p_A$ -- there is a difference between the observed frequency and the true frequency of an event. The true frequency can be interpreted as the probability of an event occurring. For example, the true frequency of rolling a 1 on a 6-sided die is $\frac{1}{6}$. True frequencies of events like:
#
# * fraction of users who make purchases,
# * frequency of social attributes,
# * percent of internet users with cats etc.
#
# are common requests we ask of Nature. Unfortunately, often Nature hides the true frequency from us and we must infer it from observed data.
#
# The observed frequency is then the frequency we observe: say rolling the die 100 times you may observe 20 rolls of 1. The observed frequency, 0.2, differs from the true frequency, $\frac{1}{6}$. We can use Bayesian statistics to infer probable values of the true frequency using an appropriate prior and observed data.
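# As a quick illustration (a sketch, not from the text), we can simulate die rolls and watch the observed frequency of rolling a 1 approach the true frequency $\frac{1}{6}$ as the number of rolls grows:

```python
import numpy as np

rng = np.random.default_rng(42)
true_freq = 1 / 6

# more rolls -> observed frequency drifts toward the true frequency
for n in [100, 10_000, 1_000_000]:
    rolls = rng.integers(1, 7, size=n)  # fair six-sided die
    observed_freq = np.mean(rolls == 1)
    print(n, observed_freq, abs(observed_freq - true_freq))
```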
#
# With respect to our A/B example, we are interested in using what we know, $N$ (the total trials administered) and $n$ (the number of conversions), to estimate what $p_A$, the true frequency of buyers, might be.
#
# To set up a Bayesian model, we need to assign prior distributions to our unknown quantities. A priori, what do we think $p_A$ might be? For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]:
# +
import pymc3 as pm
# The parameters are the bounds of the Uniform.
with pm.Model() as model:
p = pm.Uniform('p', lower=0, upper=1)
# -
# Had we had stronger beliefs, we could have expressed them in the prior above.
#
# For this example, consider $p_A = 0.05$, and $N = 1500$ users shown site A, and we will simulate whether the user made a purchase or not. To simulate this from $N$ trials, we will use a Bernoulli distribution: if $X\ \sim \text{Ber}(p)$, then $X$ is 1 with probability $p$ and 0 with probability $1 - p$. Of course, in practice we do not know $p_A$, but we will use it here to simulate the data.
# +
# set constants
p_true = 0.05 # remember, this is unknown.
N = 1500
# sample N Bernoulli random variables from Ber(0.05).
# each random variable has a 0.05 chance of being a 1.
# this is the data-generation step
occurrences = stats.bernoulli.rvs(p_true, size=N)
print(occurrences) # Remember: Python treats True == 1, and False == 0
print(np.sum(occurrences))
# -
# The observed frequency is:
# occurrences.mean() is equal to n/N.
print("What is the observed frequency in Group A? %.4f" % np.mean(occurrences))
print("Does this equal the true frequency? %s" % (np.mean(occurrences) == p_true))
# We combine the observations into the PyMC3 observed variable, and run our inference algorithm:
#include the observations, which are Bernoulli
with model:
obs = pm.Bernoulli("obs", p, observed=occurrences)
# To be explained in chapter 3
step = pm.Metropolis()
trace = pm.sample(18000, step=step)
burned_trace = trace[1000:]
# We plot the posterior distribution of the unknown $p_A$ below:
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 90, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(burned_trace["p"], bins=25, histtype="stepfilled", normed=True)
plt.legend();
# Our posterior distribution puts most weight near the true value of $p_A$, but also some weight in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.
#
# ### 2.2.4 A and B together
# A similar analysis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the difference between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, and $\text{delta} = p_A - p_B$, all at once. We can do this using PyMC3's deterministic variables. (We'll assume for this exercise that $p_B = 0.04$, so $\text{delta} = 0.01$, $N_B = 750$ (significantly less than $N_A$), and we will simulate site B's data like we did for site A's.)
# +
figsize(12, 4)
#these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04
#notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750
#generate some observations
observations_A = stats.bernoulli.rvs(true_p_A, size=N_A)
observations_B = stats.bernoulli.rvs(true_p_B, size=N_B)
print("Obs from Site A: ", observations_A[:30], "...")
print("Obs from Site B: ", observations_B[:30], "...")
# -
print(np.mean(observations_A))
print(np.mean(observations_B))
# Set up the pymc3 model. Again assume Uniform priors for p_A and p_B.
with pm.Model() as model:
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)
# Define the deterministic delta function. This is our unknown of interest.
delta = pm.Deterministic("delta", p_A - p_B)
# Set of observations, in this case we have two observation datasets.
obs_A = pm.Bernoulli("obs_A", p_A, observed=observations_A)
obs_B = pm.Bernoulli("obs_B", p_B, observed=observations_B)
# To be explained in chapter 3.
step = pm.Metropolis()
trace = pm.sample(20000, step=step)
burned_trace=trace[1000:]
# Below we plot the posterior distributions for the three unknowns:
p_A_samples = burned_trace["p_A"]
p_B_samples = burned_trace["p_B"]
delta_samples = burned_trace["delta"]
# +
figsize(12.5, 10)
#histogram of posteriors
ax = plt.subplot(311)
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_A$", color="#A60628", normed=True)
plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)")
plt.legend(loc="upper right")
plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns")
ax = plt.subplot(312)
plt.xlim(0, .1)
plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_B$", color="#467821", normed=True)
plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)")
plt.legend(loc="upper right")
ax = plt.subplot(313)
plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,
label="posterior of delta", color="#7A68A6", normed=True)
plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--",
label="true delta (unknown)")
plt.vlines(0, 0, 60, color="black", alpha=0.2)
plt.legend(loc="upper right");
# -
# Notice that as a result of N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$.
#
# With respect to the posterior distribution of $\text{delta}$, we can see that the majority of the distribution is above $\text{delta}=0$, implying that site A's response is likely better than site B's response. The probability this inference is incorrect is easily computable:
# +
# Count the number of samples less than 0, i.e. the area under the curve
# before 0, which represents the probability that site A is worse than site B.
print("Probability site A is WORSE than site B: %.3f" % \
np.mean(delta_samples < 0))
print("Probability site A is BETTER than site B: %.3f" % \
np.mean(delta_samples > 0))
# -
# If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has fewer samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A).
#
# Try playing with the parameters true_p_A, true_p_B, N_A, and N_B, to see what the posterior of $\text{delta}$ looks like. Notice in all this, the difference in sample sizes between site A and site B was never mentioned: it naturally fits into Bayesian analysis.
#
# I hope the readers feel this style of A/B testing is more natural than hypothesis testing, which has probably confused more than helped practitioners. Later in this book, we will see two extensions of this model: the first to help dynamically adjust for bad sites, and the second will improve the speed of this computation by reducing the analysis to a single equation.
#
# ### 2.2.6 The Binomial Distribution
# The binomial distribution is one of the most popular distributions, mostly because of its simplicity and usefulness. Unlike the other distributions we have encountered thus far in the book, the binomial distribution has 2 parameters: $N$, a positive integer representing $N$ trials or number of instances of potential events, and $p$, the probability of an event occurring in a single trial. Like the Poisson distribution, it is a discrete distribution, but unlike the Poisson distribution, it only puts weight on integers from $0$ to $N$. The probability mass function looks like:
#
# $$P( X = k ) = {{N}\choose{k}} p^k(1-p)^{N-k}$$
# If $X$ is a binomial random variable with parameters $p$ and $N$, denoted $X \sim \text{Bin}(N,p)$, then $X$ is the number of events that occurred in the $N$ trials (obviously $0 \le X \le N$). The larger $p$ is (while still remaining between 0 and 1), the more events are likely to occur. The expected value of a binomial is equal to $Np$. Below we plot the probability mass function for varying parameters.
# +
figsize(12.5, 4)
import scipy.stats as stats
binomial = stats.binom
parameters = [(10, .4), (10, .9)]
colors = ["#348ABD", "#A60628"]
for i in range(2):
N, p = parameters[i]
_x = np.arange(N + 1)
plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],
edgecolor=colors[i],
alpha=0.6,
label="$N$: %d, $p$: %.1f" % (N, p),
linewidth=3)
plt.legend(loc="upper left")
plt.xlim(0, 10.5)
plt.xlabel("$k$")
plt.ylabel("$P(X = k)$")
plt.title("Probability mass distributions of binomial random variables");
# -
# The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.
#
# The expected value of a Bernoulli random variable is $p$. This can be seen by noting the more general Binomial random variable has expected value $Np$ and setting $N=1$.
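# As a quick empirical check (a sketch, not from the text), summing $N$ Bernoulli$(p)$ draws behaves like a single Binomial$(N, p)$ draw; in particular both have mean close to $Np$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N, p, trials = 10, 0.4, 100_000

# sum N Bernoulli draws per trial...
bernoulli_sums = rng.binomial(1, p, size=(trials, N)).sum(axis=1)
# ...and compare against direct Binomial(N, p) draws
binomial_draws = stats.binom.rvs(N, p, size=trials, random_state=2)

print(bernoulli_sums.mean(), binomial_draws.mean(), N * p)
```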
#
# ### 2.2.7 Example: Cheating Among Students
# We will use the binomial distribution to determine the frequency of students cheating during an exam. If we let $N$ be the total number of students who took the exam, and assume each student is interviewed post-exam (answering without consequence), we will receive integer $X$ "Yes I did cheat" answers. To protect each student's privacy, the interviews use a randomized-response scheme: the student secretly flips a coin; on heads, they answer truthfully; on tails, they flip a second coin and answer "Yes, I did cheat" if it lands heads and "No" if tails. This way the interviewer cannot tell whether any individual "Yes" is an admission or a coin flip. We then find the posterior distribution of $p$, given $N$, some specified prior on $p$, and observed data $X$.
#
# Suppose 100 students are being surveyed for cheating, and we wish to find $p$, the proportion of cheaters. There are a few ways we can model this in PyMC3. I'll demonstrate the most explicit way, and later show a simplified version. Both versions arrive at the same inference. In our data-generation model, we sample $p$, the true proportion of cheaters, from a prior. Since we are quite ignorant about $p$, we will assign it a $\text{Uniform}(0,1)$ prior.
N = 100
with pm.Model() as model:
p = pm.Uniform("freq_cheating", 0, 1)
# Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students: 1 implies they cheated and 0 implies they did not.
with model:
true_answers = pm.Bernoulli("truths", p, shape=N, testval=np.random.binomial(1, 0.5, N))
# If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$: denote a 1 as a Heads and 0 a Tails.
with model:
first_coin_flips = pm.Bernoulli("first_flips", 0.5, shape=N, testval=np.random.binomial(1, 0.5, N))
print(first_coin_flips.tag.test_value)
# Although not everyone flips a second time, we can still model the possible realization of second coin-flips:
with model:
second_coin_flips = pm.Bernoulli("second_flips", 0.5, shape=N, testval=np.random.binomial(1, 0.5, N))
# Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC3 deterministic variable:
import theano.tensor as tt
with model:
val = first_coin_flips*true_answers + (1 - first_coin_flips)*second_coin_flips
observed_proportion = pm.Deterministic("observed_proportion", tt.sum(val)/float(N))
# The line fc*t_a + (1-fc)*sc contains the heart of the privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails and the second is heads, and are 0 otherwise. Finally, the last line sums this vector and divides by float(N), producing a proportion.
observed_proportion.tag.test_value
# Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be "Yes".
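# The 25-and-75 benchmarks follow directly from the coin-flip scheme: a respondent answers "Yes" with probability $\frac{p}{2} + \frac{1}{4}$, where $p$ is the true proportion of cheaters. A quick sketch:

```python
def expected_yes(p):
    """P("Yes") = P(first heads) * p + P(first tails) * P(second heads)."""
    return 0.5 * p + 0.5 * 0.5

print(expected_yes(0.0))  # cheat-free world: 0.25, i.e. ~25 of 100
print(expected_yes(1.0))  # everyone cheats: 0.75, i.e. ~75 of 100
```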
#
# The researchers observe a Binomial random variable with N = 100 and p = observed_proportion, whose observed value is 35:
# +
X = 35
with model:
observations = pm.Binomial("obs", N, observed_proportion, observed=X)
# -
# Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
with model:
step = pm.Metropolis(vars=[p])
trace = pm.sample(40000, step=step)
burned_trace = trace[15000:]
figsize(12.5, 3)
p_trace = burned_trace["freq_cheating"]  # burned_trace already excludes the burn-in
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)
plt.xlim(0, 1)
plt.legend();
# With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 to 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a .3 length window the true value most likely lives in. Have we even gained anything, or are we still too uncertain about the true frequency?
#
# I would argue, yes, we have discovered something. It is implausible, according to our posterior, that there are no cheaters, i.e. the posterior assigns low probability to $p=0$. We started with a uniform prior, treating all values of $p$ as equally plausible, but the data ruled out $p=0$ as a possibility; hence we can be confident that there were cheaters.
#
# This kind of algorithm can be used to gather private information from users and be reasonably confident that the data, though noisy, is truthful.
#
# ### 2.2.8 Alternative PyMC3 Model
# Given a value for $p$ (which from our god-like position we know), we can find the probability the student will answer yes:
#
# $$\begin{align}
# P(\text{"Yes"}) = P( \text{Heads on first coin} )P( \text{cheater} ) + P( \text{Tails on first coin} )P( \text{Heads on second coin} ) \\\\
# = \frac{1}{2}p + \frac{1}{2}\frac{1}{2}\\\\
# = \frac{p}{2} + \frac{1}{4}
# \end{align}$$
# Thus, knowing $p$ we know the probability a student will respond "Yes". In PyMC3, we can create a deterministic function to evaluate the probability of responding "Yes", given $p$:
with pm.Model() as model:
p = pm.Uniform("freq_cheating", 0, 1)
p_skewed = pm.Deterministic("p_skewed", 0.5*p + 0.25)
# I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake.
#
# If we know the probability of respondents saying "Yes", which is p_skewed, and we have $N=100$ students, the number of "Yes" responses is a binomial random variable with parameters N and p_skewed.
#
# This is where we include our observed 35 "Yes" responses: in the declaration of the pm.Binomial below, we pass them in via observed=35.
with model:
yes_responses = pm.Binomial("number_cheaters", 100, p_skewed, observed=35)
# Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
with model:
step = pm.Metropolis()
trace = pm.sample(25000, step=step)
burned_trace = trace[2500:]
figsize(12.5, 3)
p_trace = burned_trace["freq_cheating"]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)
plt.xlim(0, 1)
plt.legend();
# ### 2.2.10 Example: Challenger Space Shuttle Disaster
# On January 28, 1986, the twenty-fifth flight of the U.S. space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23 (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important, and these were thought to show no obvious trend. The data are shown below:
# +
figsize(12.5, 3.5)
np.set_printoptions(precision=3, suppress=True)
challenger_data = np.genfromtxt("data/challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
#drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
#plot it, as a function of temperature (the first column)
print("Temp (F), O-Ring failure?")
print(challenger_data)
plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature");
# -
# It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
#
# We need a function of temperature, call it $p(t)$, that is bounded between 0 and 1 (so as to model a probability) and changes from 1 to 0 as we increase temperature. There are actually many such functions, but the most popular choice is the logistic function.
#
# $$p(t) = \frac{1}{ 1 + e^{ \;\beta t } } $$
# In this model, $\beta$ is the variable we are uncertain about. Below is the function plotted for $\beta = 1, 3, -5$.
# +
figsize(12, 3)
def logistic(x, beta):
return 1.0 / (1.0 + np.exp(beta * x))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$")
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$")
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$")
plt.legend();
# -
# But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function:
#
# $$p(t) = \frac{1}{ 1 + e^{ \;\beta t + \alpha } } $$
# Some plots are below, with differing $\alpha$.
# +
def logistic(x, beta, alpha=0):
return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1)
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1)
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1)
plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$",
color="#348ABD")
plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$",
color="#A60628")
plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$",
color="#7A68A6")
plt.legend(loc="lower left");
# -
# Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).
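# Concretely (a quick check, not from the text), the curve crosses $p = 0.5$ exactly where $\beta t + \alpha = 0$, i.e. at $t = -\alpha/\beta$, so varying $\alpha$ slides that crossing point along the $t$-axis:

```python
import numpy as np

def logistic(t, beta, alpha=0):
    return 1.0 / (1.0 + np.exp(beta * t + alpha))

beta = 3.0
for alpha in [-2.0, 0.0, 2.0]:
    midpoint = -alpha / beta       # where the exponent is zero
    print(alpha, midpoint, logistic(midpoint, beta, alpha))  # always 0.5
```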
#
# Let's start modeling this in PyMC3. The $\beta, \alpha$ parameters have no reason to be positive, bounded or relatively large, so they are best modeled by a Normal random variable, introduced next.
#
# ### 2.2.11 The Normal Distribution
# A Normal random variable, denoted $X \sim N(\mu, 1/\tau)$, has a distribution with two parameters: the mean, $\mu$, and the precision, $\tau$. Those already familiar with the Normal distribution have probably seen $\sigma^2$ instead of $\tau^{-1}$. They are in fact reciprocals of each other. The change was motivated by simpler mathematical analysis and is an artifact of older Bayesian methods. Just remember: the smaller $\tau$, the larger the spread of the distribution (i.e. we are more uncertain); the larger $\tau$, the tighter the distribution (i.e. we are more certain). Regardless, $\tau$ is always positive.
#
# The probability density function of a $N( \mu, 1/\tau)$ random variable is:
#
# $$ f(x | \mu, \tau) = \sqrt{\frac{\tau}{2\pi}} \exp\left( -\frac{\tau}{2} (x-\mu)^2 \right) $$
# We plot some different density functions below.
# +
import scipy.stats as stats
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
    # scipy's `scale` is the standard deviation, i.e. 1/sqrt(tau), not 1/tau
    plt.plot(x, nor.pdf(x, _mu, scale=1./np.sqrt(_tau)),
             label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
    plt.fill_between(x, nor.pdf(x, _mu, scale=1./np.sqrt(_tau)), color=_color,
                     alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables");
# -
# A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter:
#
# $$ E[ X | \mu, \tau] = \mu$$
# and its variance is equal to the inverse of $\tau$:
#
# $$Var( X | \mu, \tau ) = \frac{1}{\tau}$$
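# Since scipy parameterizes the Normal by its standard deviation $\sigma = 1/\sqrt{\tau}$ rather than by the precision, a quick sampling check (with arbitrary values, not from the text) confirms the mean and variance formulas:

```python
import numpy as np
from scipy import stats

mu, tau = 3.0, 2.8
sigma = 1.0 / np.sqrt(tau)  # scipy's scale is sigma; Var = sigma**2 = 1/tau

samples = stats.norm.rvs(loc=mu, scale=sigma, size=200_000, random_state=0)
print(samples.mean())  # close to mu = 3.0
print(samples.var())   # close to 1/tau ~= 0.357
```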
# Below we continue our modeling of the Challenger space craft:
# +
temperature = challenger_data[:, 0]
D = challenger_data[:, 1] # defect or not?
# notice the `testval` here. We explain why below.
with pm.Model() as model:
beta = pm.Normal("beta", mu=0, tau=0.001, testval=0)
alpha = pm.Normal("alpha", mu=0, tau=0.001, testval=0)
p = pm.Deterministic("p", 1.0/(1. + tt.exp(beta*temperature + alpha)))
# -
# We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like:
#
# $$ \text{Defect Incident, $D_i$} \sim \text{Ber}( \;p(t_i)\; ), \;\; i=1..N$$
# where $p(t)$ is our logistic function and $t_i$ are the temperatures we have observations about. Notice that in the above code we set the starting values of beta and alpha to 0 via testval. The reason is that if beta and alpha are very large, they make p equal to 1 or 0. Unfortunately, pm.Bernoulli does not like probabilities of exactly 0 or 1, though they are mathematically well-defined. So by setting the coefficients' starting values to 0, we set the variable p to a reasonable starting value. This has no effect on our results, nor does it mean we are including any additional information in our prior. It is simply a computational caveat in PyMC3.
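# To see the caveat concretely (a numerical sketch, not part of the model), even moderate coefficients saturate the logistic to exactly 0.0 or 1.0 in floating point, while a zero starting value keeps p at 0.5:

```python
import numpy as np

temperature = 70.0
for beta in [0.0, 1.0, 20.0]:
    with np.errstate(over="ignore"):  # exp overflows for large beta * t
        p = 1.0 / (1.0 + np.exp(beta * temperature))
    print(beta, p)
# beta = 0 gives p = 0.5; beta = 20 overflows, making p exactly 0.0
```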
# connect the probabilities in `p` with our observations through a
# Bernoulli random variable.
with model:
observed = pm.Bernoulli("bernoulli_obs", p, observed=D)
# Mysterious code to be explained in Chapter 3
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(120000, step=step, start=start)
burned_trace = trace[100000::2]
# We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$:
# +
alpha_samples = burned_trace["alpha"][:, None]  # reshape to column vectors for broadcasting
beta_samples = burned_trace["beta"][:, None]
figsize(12.5, 6)
#histogram of the samples:
plt.subplot(211)
plt.title(r"Posterior distributions of the variables $\alpha, \beta$")
plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\beta$", color="#7A68A6", normed=True)
plt.legend()
plt.subplot(212)
plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\alpha$", color="#A60628", normed=True)
plt.legend();
# -
# All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect.
#
# Similarly, all $\alpha$ posterior values are negative and far away from 0, implying that it is correct to believe that $\alpha$ is significantly less than 0.
#
# Regarding the spread of the data, we are very uncertain about what the true parameters might be (though considering the low sample size and the large overlap of defects-to-nondefects this behaviour is perhaps expected).
#
# Next, let's look at the expected probability for a specific value of the temperature. That is, we average over all samples from the posterior to get a likely value for $p(t_i)$.
# +
t = np.linspace(temperature.min() - 5, temperature.max()+5, 50)[:, None]
p_t = logistic(t.T, beta_samples, alpha_samples)
mean_prob_t = p_t.mean(axis=0)
# +
figsize(12.5, 4)
plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \
of defect")
plt.plot(t, p_t[0, :], ls="--", label="realization from posterior")
plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.title("Posterior expected value of probability of defect; \
plus realizations")
plt.legend(loc="lower left")
plt.ylim(-0.1, 1.1)
plt.xlim(t.min(), t.max())
plt.ylabel("probability")
plt.xlabel("temperature");
# -
# Above we also plotted two possible realizations of what the actual underlying system might be. Both are as likely as any other draw. The blue line is what occurs when we average all of the posterior's possible dotted lines together.
#
# An interesting question to ask is for what temperatures are we most uncertain about the defect-probability? Below we plot the expected value line and the associated 95% intervals for each temperature.
# +
from scipy.stats.mstats import mquantiles
# vectorized bottom and top 2.5% quantiles for "confidence interval"
qs = mquantiles(p_t, [0.025, 0.975], axis=0)
plt.fill_between(t[:, 0], *qs, alpha=0.7,
color="#7A68A6")
plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7)
plt.plot(t, mean_prob_t, lw=1, ls="--", color="k",
label="average posterior \nprobability of defect")
plt.xlim(t.min(), t.max())
plt.ylim(-0.02, 1.02)
plt.legend(loc="lower left")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.xlabel("temp, $t$")
plt.ylabel("probability estimate")
plt.title("Posterior probability estimates given temp. $t$");
# -
# The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.
#
# More generally, we can see that as the temperature nears 60 degrees, the CI's spread out over [0,1] quickly. As we pass 70 degrees, the CI's tighten again. This can give us insight about how to proceed next: we should probably test more O-rings around 60-65 temperature to get a better estimate of probabilities in that range. Similarly, when reporting to scientists your estimates, you should be very cautious about simply telling them the expected probability, as we can see this does not reflect how wide the posterior distribution is.
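# The interval computation itself is simple: given posterior samples of the defect probability at one temperature, the 2.5% and 97.5% sample quantiles bound the 95% CI. A self-contained sketch (with synthetic Beta-distributed samples standing in for the real posterior):

```python
import numpy as np
from scipy.stats.mstats import mquantiles

# hypothetical stand-in for posterior samples of p(t) at one temperature
rng = np.random.default_rng(7)
p_samples = rng.beta(5, 5, size=10_000)

lower, upper = mquantiles(p_samples, prob=[0.025, 0.975])
print(f"95% CI: [{lower:.3f}, {upper:.3f}]")
```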
#
# ### 2.2.12 What about the day of the Challenger disaster?
# On the day of the Challenger disaster, the outside temperature was 31 degrees Fahrenheit. What is the posterior distribution of a defect occurring, given this temperature? The distribution is plotted below. It looks almost guaranteed that the Challenger was going to be subject to defective O-rings.
# +
figsize(12.5, 2.5)
prob_31 = logistic(31, beta_samples, alpha_samples)
plt.xlim(0.995, 1)
plt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')
plt.title("Posterior distribution of probability of defect, given $t = 31$")
plt.xlabel("probability of defect occurring in O-ring");
# -
# ## 2.3 Is our model appropriate?
# The skeptical reader will say "You deliberately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\; \forall t$, which guarantees a defect always occurring: I would have again predicted disaster on January 28th. Yet this is clearly a poorly chosen model. On the other hand, if I did choose the logistic function for $p(t)$, but specified all my priors to be very tight around 0, likely we would have very different posterior distributions. How do we know our model is an expression of the data? This encourages us to measure the model's goodness of fit.
#
# We can think: how can we test whether our model is a bad fit? An idea is to compare the observed data (which, recall, is a fixed stochastic variable) with an artificial dataset that we can simulate. The rationale is that if the simulated dataset does not appear similar, statistically, to the observed dataset, then our model likely does not accurately represent the observed data.
#
# Previously in this chapter, we simulated artificial datasets for the SMS example. To do this, we sampled values from the priors. We saw how varied the resulting datasets looked, and rarely did they mimic our observed dataset. In the current example, we should sample from the posterior distributions to create very plausible datasets. Luckily, our Bayesian framework makes this very easy. We only need to create a new stochastic variable that is exactly the same as the variable that stored the observations, but minus the observations themselves. If you recall, our stochastic variable that stored our observed data was:
#
# `observed = pm.Bernoulli("bernoulli_obs", p, observed=D)`
#
# Hence we create:
#
# `simulated_data = pm.Bernoulli("simulation_data", p)`
#
# Let's draw 10,000 samples:
N = 10000
with pm.Model() as model:
beta = pm.Normal("beta", mu=0, tau=0.001, testval=0)
alpha = pm.Normal("alpha", mu=0, tau=0.001, testval=0)
p = pm.Deterministic("p", 1.0/(1. + tt.exp(beta*temperature + alpha)))
observed = pm.Bernoulli("bernoulli_obs", p, observed=D)
simulated = pm.Bernoulli("bernoulli_sim", p, shape=p.tag.test_value.shape)
step = pm.Metropolis(vars=[p])
trace = pm.sample(N, step=step)
# +
figsize(12.5, 5)
simulations = trace["bernoulli_sim"]
print(simulations.shape)
plt.title("Simulated dataset using posterior parameters")
figsize(12.5, 6)
for i in range(4):
ax = plt.subplot(4, 1, i+1)
plt.scatter(temperature, simulations[1000*i, :], color="k",
s=50, alpha=0.6)
# -
# Note that the above plots are all different (if you can think of a cleaner way to present this, please send a pull request!).
#
# We wish to assess how good our model is. "Good" is a subjective term of course, so results must be relative to other models.
#
# We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use Bayesian p-values. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.
#
# The following graphical test is a novel data-viz approach to logistic regression. The plots are called separation plots. For a suite of models we wish to compare, each model is plotted on an individual separation plot.
#
# For each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \;\text{Defect} = 1 | t, \alpha, \beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. For example, for the model we used above:
posterior_probability = simulations.mean(axis=0)
print("posterior prob of defect | realized defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[i], D[i]))
# Next we sort each column by the posterior probabilities:
ix = np.argsort(posterior_probability)
print("prob  | defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]]))
# We can present the above data better in a figure: I've wrapped this up into a `separation_plot` function.
# +
from separation_plot import separation_plot
figsize(11., 1.5)
separation_plot(posterior_probability, D)
# -
# The snaking line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denotes non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions.
#
# The black vertical line is the expected number of defects we should observe, given this model. This allows the user to see how the total number of events predicted by the model compares to the actual number of events in the data.
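# That expected count is just the sum of the per-data-point posterior probabilities. A minimal sketch (the array below is a synthetic stand-in for the `posterior_probability` computed above):

```python
import numpy as np

# Synthetic stand-in for the posterior probabilities computed earlier.
posterior_probability = np.array([0.9, 0.8, 0.3, 0.2, 0.1])

# Expected number of defects under the model: the sum of the
# per-data-point defect probabilities.
expected_defects = posterior_probability.sum()
print(round(float(expected_defects), 2))  # 2.3
```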
#
# It is much more informative to compare this to separation plots for other models. Below we compare our model (top) versus three others:
#
# 1. the perfect model, which predicts the posterior probability to be 1 exactly when a defect did occur, and 0 otherwise.
# 2. a completely random model, which predicts random probabilities regardless of temperature.
# 3. a constant model: where $P(D = 1 \; | \; t) = c, \;\; \forall t$. The best choice for $c$ is the observed frequency of defects, in this case 7/23.
# +
figsize(11., 1.25)
# Our temperature-dependent model
separation_plot(posterior_probability, D)
plt.title("Temperature-dependent model")
# Perfect model
# i.e. the predicted probability of defect is 1 if a defect occurred and 0 otherwise.
p = D
separation_plot(p, D)
plt.title("Perfect model")
# random predictions
p = np.random.rand(23)
separation_plot(p, D)
plt.title("Random model")
# constant model
constant_prob = 7./23*np.ones(23)
separation_plot(constant_prob, D)
plt.title("Constant-prediction model");
# -
# In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.
#
# In the perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course the perfect model is only for demonstration, and we cannot draw any scientific inference from it.
|
cracking-the-data-science-interview-master/cracking-the-data-science-interview-master/EBooks/Bayesian-Methods-for-Hackers/C2-A-Little-More-on-PyMC.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial: Getting to that Sweet, Sweet Data
# ### I/O tutorial covering (in several parts) .txt, .HTML, .CSV, .XML, .JSON
# #### Digital Research Methods (LeMasters, 2018)
# Draft 2 (9 September 2018)
# I'd built some tutorials this week that demonstrated three different techniques for acquiring (and making some sense of) data from text- and HTML-based files; instead of presenting those three projects in a serial, discrete fashion, though, I've begun chopping them up and re-organizing them thematically -- I think they'll be much more useful to us that way.
#
# So, for example, instead of building an entire mini-application in one tutorial, we'll review a tutorial wherein we look at 3 different techniques for loading files; one where we look at three different ways of parsing text; and then one where we look at three approaches to rendering that data.
#
# I've never approached the problem in this way, but it looks promising. Be sure to let me know what you think.
# ## Pulling text into your data science pipeline -- and getting it back out.
# ### I/O
# I/O is ubiquitous in computation: it stands for Input/Output. The I/O interface is where the computer and the world meet. There are many ways to do this -- but in the end, there are only a few techniques that you'll return to again and again. The biggest determining factors in your approach are (1) the kind of file you're reading and (2) its disposition in your workflow: what comes out of that file?
#
# As we mentioned on Thursday, there are many file types you're likely to deal with, but .txt, in addition to four related filetypes, stands out as most common (for now):
# 1. **txt** | the plain-vanilla text file. When in doubt, you can almost always read or write any kind of information into this format. The formats below are all .txt files with fancy hats.
# 2. **HTML** | HTML (Hypertext mark-up language) is plain text that is heavily "marked up" with tags (words inside of angle brackets). Because HTML is typically used to build web pages, it is heavily engaged in layout and presentation -- not just content. And because HTML was never really intended to be used in this fashion, it is now joined by dozens of auxiliary languages, scripts, and objects -- all of which tend to get in your way while you work with HTML.
# 3. **.CSV** | The venerable comma-separated-values file is the workhorse of contemporary data science. .CSV files (and those of its fraternal twin, .TSV, tab-separated-values) tend to do a great job delivering data with a minimal amount of cruft. Their main drawback is that they are uncompressed. It is not uncommon to find even powerful computers choking on unnecessarily large .CSV files.
# 4. **.JSON** | Originating with the popular web programming language JavaScript, JavaScript Object Notation is a simple, efficient, and remarkably popular format: it makes all the other formats jealous. Many professionals who work with data science toolsets have come to prefer .JSON over .XML, in part because it simply looks less intimidating. While APIs tend to make their data available in both formats, JSON now tends to be the default.
# 5. **.XML** | The W3C describes the Extensible Markup Language (XML) as "a simple, very flexible text format." I would call that description "overly optimistic." When we have time, we can review some .XML files and you'll see why fewer and fewer institutions seem to be depending on it like they used to.
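# As a preview, here is how each of these formats is typically ingested with Python's standard library (a sketch using in-memory strings in place of real files, so it runs anywhere):

```python
import csv, io, json
import xml.etree.ElementTree as ET

# In-memory stand-ins for files, so this sketch is self-contained.
csv_rows = list(csv.reader(io.StringIO("name,year\nRilke,1902")))
json_data = json.loads('{"name": "Rilke", "year": 1902}')
xml_root = ET.fromstring("<poems><poem>Autumn</poem></poems>")

print(csv_rows)           # [['name', 'year'], ['Rilke', '1902']]
print(json_data["year"])  # 1902
print(xml_root[0].text)   # Autumn
```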
# #### Before you get started:
#
# We need some files to download. Once you've worked through these examples, I encourage you to repeat them but do so using your own files. To get started, though:
#
# Begin by launching Anaconda Navigator. From there, launch Jupyter. It will send you to the default directory listing: If that is not where you want to store your Notebook files, then navigate to the place where you DO want to keep them. Once you are satisfied that you are looking at the best directory for this purpose, go to the upper right hand of the browser window and choose the button/pulldown labeled NEW. From the list it creates, choose Python 3.
#
# You should be in Jupyter now: Let's fix the name of this notebook first. At the top of the document, click on the word "Untitled" and rename the file to:
#
# IO_Tutorial_One
#
# Note: The underscores look silly, but real spaces tend to make code complicated. For example, in order to change to a directory called "/rough draft graphics/color imgs", I would type:
#
# cd /rough\ draft\ graphics/color\ imgs
#
# See what I mean? So we all try to avoid using spaces whenever possible.
#
# Save the document by clicking on the little floppy disk icon on the leftmost side of the tool ribbon. Let's make sure we're off to a good start: Put that browser window aside (minimize it, hide it, etc.) and open up your operating system's native file browser (the one you use everyday: OSX uses the Finder; Windows uses Explorer). Navigate to the directory where your Jupyter notebook should be, and make sure it is, in fact, there. Jupyter will have added a file extension to the filename, so it should look like this:
#
# IO_Tutorial_One.ipynb
#
# Great. Now before you leave this directory, download [this file from our website](http://www.digitalresearch.online/IO_Tutorial_One_Data.zip) and save it in the same directory as your notebook file.
#
# Explanation: You are downloading a compressed copy of 5 or more files inside a single folder. This is delivered to you as a file called IO_Tutorial_One_Data.zip. BUT MacOS may automatically unzip it for you, without asking your permission. This can make things confusing. If so, use the folder MacOS unzipped -- if not, unzip the file yourself and drag that folder to the same place your notebook is stored.
# ## PATH
# Now we need to ingest the datafile. It is a three-step process.
#
# 1. Identify the path. Tell Jupyter where it can find your file.
# 2. Open the file. Tell Jupyter how to open your file.
# 3. Read the file. Tell Jupyter where to put the data.
#
# Piece of cake!
# > Quick Question: Why so many steps?
#
# > It's true, all of this can be accomplished in one step. But don't. Spread it out so you can get a sense of what is happening. You can compress the process once you're more confident about the component parts.
#
# > Quick Question: Why use variables instead of just using the file's name?
#
# > You certainly don't have to use variables - but the idea is that you won't just do this once. You'll do it dozens, even hundreds of times in the near future. By using a variable instead of the file's "real" path, you save yourself effort, and are working in the spirit of data scientists and programmers, whose motto is always DRY: <em>Don't Repeat Yourself</em>.
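# For reference, here is what the compressed version looks like (a sketch that writes its own tiny stand-in file, so it runs anywhere; the tutorial's real data file comes later):

```python
import os, tempfile

# Create a tiny stand-in file so the sketch is self-contained.
path = os.path.join(tempfile.gettempdir(), 'demo_poem.txt')
with open(path, 'w') as f:
    f.write('Autumn day\n')

# Path, open, and read collapsed into one block; the with-statement
# closes the file automatically when the block ends.
with open(path, 'r') as f:
    poem = f.read()

print(poem)  # Autumn day
```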
# ## Identify the Path
# Tell Python how to find your file. The location of the file you want to load into memory is the *file path*. In our case, it is where we can find the folder called IO_Tutorial_One. (If you haven't downloaded that yet, see the section above).
#
# Your code will reuse variables like PATH a lot. So let's return to our Jupyter notebook and create a variable that points right to the file we need:
#
# myFilePath = 'IO_Tutorial_One/RilkePoem.txt'
#
# Don't forget the quotes (single or double quotes both work in Python). And note that the variable name myFilePath is arbitrary. When I name variables that contain information specific to me, I tend to put the possessive adjective in front -- it helps me recognize variables that are "mine." But again -- it is arbitrary:
#
# secretNuclearStorageFacility = 'IO_Tutorial_One/RilkePoem.txt'
#
# Whatever makes sense to you is fine for now. But generally, it should be named so that others can understand what you're doing.
# ### Bonus Tip
# Define your path variables way up at the top of your code, right after you import your libraries: Those will change frequently from project to project, and by keeping them near the top, you'll spend less time searching for them.
# ## Open()
# Now we call Python's open() function to ask the operating system for permission to access the file. We also need to let Jupyter know what our long-term intentions are. There are many details you can share with Python about your file, but we only care about two:
# 1. The file path
# 2. The access mode
# We've already handled the file path. Let's turn to the *mode*.
#
# The **mode parameter** wants to know how you will use the file:
#
# 'r' : use for reading
# 'w' : use for writing
# 'x' : use for creating and writing to a new file
# 'a' : use for appending to a file
# 'r+': use for reading and writing to the same file
#
# For the most part, 'r' is a strong choice -- you typically don't really want to overwrite your data files.
#
# Let's put these steps together, then:
myFilePath = 'IO_Tutorial_One_Data/RilkePoem.txt'
myPoemData = open(myFilePath, 'r')
# No complaints from Jupyter is a good sign!
# We're so close! Let's seal the deal by reading the poem into the system!
#
#
#
# # Read()
# In order to see just what we've wrought, let's make use of Jupyter's fancy *interactive mode* again: Instead of writing out the code that a program will execute ("interpret"), we're just going to talk with Python. We can't get much calculation done this way, but it gives us a much more intimate look at internal processes that other programming languages almost never share.
#
# So then: We've set the path, opened the file: All that is left to do is read it.
myPoemData.read()
# **Bam!** *Welcome to flavor country!* Jupyter paints the whole poem right there, line for line.
#
# Of course, the poem looks a bit wilted, a bit crushed: all of those '\n' sequences (usually called "escape-Ns") are normally hidden from sight. They're *escape sequences* -- this one is the "newline" or "linefeed" character (sometimes abbreviated LF). It is a code that originally told a printer to advance its sheet of paper to a new line.
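# You can see the difference between the raw string and its printed form directly (a small sketch, independent of the poem file):

```python
line = "Whoever you are: step out of doors tonight,\n"

# Jupyter's bare echo shows the repr() of a string, escape
# sequences and all; print() interprets them instead.
print(repr(line))  # shows the trailing \n literally
print(line)        # ends with an actual line break
```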
# What about those empty parentheses? Parentheses, empty or not, almost always signify *action*. It's useful to think in terms of grammar (because, in the end, it is a kind of grammar):
#
# myPoemData.read()
#
# can be understood as:
#
# (in the imperative) "Tell myPoemData to read itself."
#
# And it does just that, spilling the results all over our page because we didn't tell it where to store the results. Let's do that now.
textOutput = myPoemData.read()
# Excellent! The data is safely locked inside our variable textOutput. See?
print(textOutput)
# Umm...Wait, what? Where's our poem? Why didn't that work?
# Sigh. Here's the story: When I .read() the file to which myPoemData refers, I initiate a lot of work on the computer's part. After all, I'm moving a block of data from the file I've opened on my hard drive to be stored in another location. Python uses a pointer (like a cursor) to keep track of the data as it gets channeled to its new home in memory. When that pointer reaches the end of the file, it just stops -- much like the needle arm on an old-fashioned record player. If I want to hear that song again, or if I want to see that data again, then we're going to have to move the needle ourselves.
# How? Easy. The pointer is stored inside the data object we built. We just need to reset it thus:
myPoemData.seek(0)
# That zero isn't a status code -- it's the pointer's new position, the same value we passed to .seek(). Now let's try our .read() again.
myPoemData.read()
# *Voila!*
#
# There are many variations we can make use of, too. Using the .seek(0) to reset the needle on the record: Let's try them first, and then make sense of what we're seeing:
myPoemData.seek(0)
myPoemData.readlines()
# And then:
myPoemData.seek(0)
myPoemData.readline()
# Ooh! Again, without the .seek(0)?
myPoemData.readline()
# Interesting. We're moving line by line, right? Let's see:
myPoemData.readline()
myPoemData.readline()
# Fine. So, self-evidently, we need to get the data into a more dependable place: I don't want to reset a variable every time I read it. So this time, we'll just "read" it into a new variable for safekeeping.
myPoemData.seek(0) # reset our pointer
workingPoemData = myPoemData.read()
# And then let's peer into the new variable and see what's up.
print(workingPoemData)
# # Recap
# OK: That was a lot, and not-a-lot, all at once. Just to recall the most important points in the form of working code:
myFilePath = 'IO_Tutorial_One_Data/RilkePoem.txt'
myPoemData = open(myFilePath, 'r')
workingPoemData = myPoemData.read()
print(workingPoemData)
# Of course, I can get any flatfile this way -- as long as it isn't a binary file. All of these will work just as well:
myFilePathA = 'IO_Tutorial_One_Data/RilkePoem.csv'
myFilePathB = 'IO_Tutorial_One_Data/RilkePoem.html'
# and even the related style sheet:
myFilePathC = 'IO_Tutorial_One_Data/main.css'
myPoemDataB = open(myFilePathB, 'r')
workingPoemDataB = myPoemDataB.read()
print(workingPoemDataB)
# Now it's your turn. Grab a few text files and write out enough code in Jupyter to pull those files in and display them.
|
IO_Tutorial_One.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %run "0.0 Data preparation.ipynb"
# # Dataset selection
_dataName, _inputData, _dataNameSUSNormalized, _inputDataSUSNormalized = selectDataset("data20190703")
# +
stackedBarPlotsFilenamePathStem = graphsSavePathStem + "/stacked-bar-plots"
tryCreateFolder(stackedBarPlotsFilenamePathStem)
barPlotsErrorBarsFilenamePathStem = graphsSavePathStem + "/bar-plots-with-error-bars"
tryCreateFolder(barPlotsErrorBarsFilenamePathStem)
# +
def plotStackedBar(question
, dataName=_dataName
, saveFig=False
, filename=None
, fig=None
, ax=None
, title=None
, displayedGameNames=identityGameNames
, showLegend=True
, printDebug=True
, tight_layout=True
, constrained_layout=False
):
assert (dataName in datasets), ("Not found in datasets: '" + dataName + "'")
data = datasets[dataName]
allPossibleValues = np.unique(datasets[dataName].loc[:, shortLikertQuestions.values].values)
minLikertValue = min(allPossibleValues)
maxLikertValue = max(allPossibleValues)
assert ((maxLikertValue - minLikertValue) == 4), ("Function designed for 5-step Likert scale")
callShow = False
    if ax is None:
        callShow = True
    if fig is None:
fig = plt.figure(constrained_layout=constrained_layout)
fig.patch.set_facecolor('white')
ax = plt.subplot(111)
ax.patch.set_facecolor('white')
# assert (gameIndex < len(games)), ("game index must be smaller than " + str(len(games)))
ind = np.arange(len(games)) # the x locations for the groups
width = 0.35 # the width of the bars: can also be len(x) sequence
colors = cm.jet(np.linspace(1, 0, 5))
#stacked bar plots of games
#gameStackedBarPlots[gameIndex][plottedValue] contains:
# the bar plot for game gameIndex of Likert/refined value plottedValue
gameStackedBarPlots = [[] for i in range(len(games))]
#participantCounts = [0 for i in range(len(games))]
commonData = datasets[dataName]
commonData = commonData.loc[:, [question, gameQuestion]].groupby([gameQuestion, question]).size()
if printDebug:
print("-------------------------------------------------------------------------------------")
print(question)
for gameIndex in range(len(games)):
# print("gameIndex="+str(gameIndex))
game = games[gameIndex]
data = commonData[game]
gameLikertCounts = [data.get(i,0) for i in range(minLikertValue, maxLikertValue+1)]
if printDebug:
print(" " + game + ": " + str(gameLikertCounts))
print(str(data))
#participantCounts[gameIndex] = data.sum()
#agreement scale: 0 == 100% agree, 4 == 100% disagree
gameStackedBarPlots[gameIndex] = [[] for i in range(5)]
for i in range(5):
_bottom = 0
if i != 4:
_bottom = sum(gameLikertCounts[i+1:])
gameStackedBarPlots[gameIndex][i] = ax.bar(\
ind[gameIndex]\
, gameLikertCounts[i]\
, width\
, color=colors[i]\
, bottom=_bottom\
)
plt.ylabel('Stacked answers')
plt.ylim(0, getMaxAnswers(dataName))
if not title:
plt.title(question)
else:
plt.title(title)
# margins left and right of the bars from the Y axis
# plt.margins(1.2)
plt.xticks(ind, displayedGameNames[games])
# plt.yticks(np.arange(0, data.sum(), round(max(participantCounts)/10)))
if showLegend:
plt.legend(
[gameStackedBarPlots[gameIndex][i][0] for i in range(5)]
, likert5StepDescriptions
, loc='center left', bbox_to_anchor=(1, 0.5)
)
if tight_layout:
plt.tight_layout()
if callShow:
plt.show()
if saveFig:
shortQuestion = shortQuestions.index[shortQuestions.values==question].values[0]
        if filename is None:
path = stackedBarPlotsFilenamePathStem + "/" + dataName
tryCreateFolder(path)
filename = path + "/" + shortQuestion
fig.savefig(filename)
return gameStackedBarPlots
plotStackedBar(
indexedQuestions[7]
, dataName=_dataName
, saveFig=False
, printDebug=False
);
# +
#def plotStackedBar(
# question
# , dataName=_dataName
# , saveFig=False
# , filename=None
# , fig=None
# , ax=None
# , title=None
# , displayedGameNames=identityGameNames
# , showLegend=True
# , printDebug=True
# , tight_layout=True
# , constrained_layout=False
# ):
question=indexedQuestions[7]
dataName=_dataName
saveFig=False
filename=None
fig=None
ax=None
title=None
displayedGameNames=identityGameNames
showLegend=True
printDebug=True
tight_layout=True
constrained_layout=False
assert (dataName in datasets), ("Not found in datasets: '" + dataName + "'")
data = datasets[dataName]
allPossibleValues = np.unique(datasets[dataName].loc[:, shortLikertQuestions.values].values)
minLikertValue = min(allPossibleValues)
maxLikertValue = max(allPossibleValues)
assert ((maxLikertValue - minLikertValue) == 4), ("Function designed for 5-step Likert scale")
callShow = False
if None == ax:
callShow = True
if None == fig:
fig = plt.figure(constrained_layout=constrained_layout)
fig.patch.set_facecolor('white')
ax = plt.subplot(111)
ax.patch.set_facecolor('white')
# assert (gameIndex < len(games)), ("game index must be smaller than " + str(len(games)))
ind = np.arange(len(games)) # the x locations for the groups
width = 0.35 # the width of the bars: can also be len(x) sequence
colors = cm.jet(np.linspace(1, 0, 5))
#stacked bar plots of games
#gameStackedBarPlots[gameIndex][plottedValue] contains:
# the bar plot for game gameIndex of Likert/refined value plottedValue
gameStackedBarPlots = [[] for i in range(len(games))]
#participantCounts = [0 for i in range(len(games))]
commonData = datasets[dataName]
commonData = commonData.loc[:, [question, gameQuestion]].groupby([gameQuestion, question]).size()
if printDebug:
print("-------------------------------------------------------------------------------------")
print(question)
for gameIndex in range(len(games)):
# print("gameIndex="+str(gameIndex))
game = games[gameIndex]
data = commonData[game]
gameLikertCounts = [data.get(i,0) for i in range(minLikertValue, maxLikertValue+1)]
if printDebug:
print(" " + game + ": " + str(gameLikertCounts))
print(str(data))
#participantCounts[gameIndex] = data.sum()
#agreement scale: 0 == 100% agree, 4 == 100% disagree
gameStackedBarPlots[gameIndex] = [[] for i in range(5)]
for i in range(5):
_bottom = 0
if i != 4:
_bottom = sum(gameLikertCounts[i+1:])
gameStackedBarPlots[gameIndex][i] = ax.bar(\
ind[gameIndex]\
, gameLikertCounts[i]\
, width\
, color=colors[i]\
, bottom=_bottom\
)
plt.ylabel('Stacked answers')
plt.ylim(0, getMaxAnswers(dataName))
if not title:
plt.title(question)
else:
plt.title(title)
# margins left and right of the bars from the Y axis
# plt.margins(1.2)
plt.xticks(ind, displayedGameNames[games])
# plt.yticks(np.arange(0, data.sum(), round(max(participantCounts)/10)))
if showLegend:
plt.legend(
[gameStackedBarPlots[gameIndex][i][0] for i in range(5)]
, likert5StepDescriptions
, loc='center left', bbox_to_anchor=(1, 0.5)
)
if tight_layout:
plt.tight_layout()
if callShow:
plt.show()
if saveFig:
shortQuestion = shortQuestions.index[shortQuestions.values==question].values[0]
if filename==None:
path = stackedBarPlotsFilenamePathStem + "/" + dataName
tryCreateFolder(path)
filename = path + "/" + shortQuestion
fig.savefig(filename)
gameStackedBarPlots
#plotStackedBar(
# indexedQuestions[7]
# , dataName=_dataName
# , saveFig=False
# , printDebug=False
#);
# -
minLikertValue
def getMaxAnswers(dataName):
return max(
[len(
# unique respondents
#np.unique(
#datasets[dataName][datasets[dataName][gameQuestion]==gameTitle][idQuestion]
#)
# unique answers
datasets[dataName][datasets[dataName][gameQuestion]==gameTitle]
)
for gameTitle in games
]
)
# +
#for dataName in datasets.keys():
# print("\n"+dataName+":\n\t"+str(getMaxAnswers(dataName)))
# +
def getStackedBarPlotsMatrix(
dataName=_dataName
, saveFig=False
, suptitle=None
, tight_layout=True
, constrained_layout=False
):
#fig, axs = plt.subplots(3, 4, constrained_layout=True, figsize=(15,8))
fig = plt.figure(figsize=(15,8), constrained_layout=constrained_layout)
fig.patch.set_facecolor('white')
graphIndex = 1
for question in shortLikertQuestions:
ax = fig.add_subplot(3,4,graphIndex)
# format Qnn
#shortQuestion = shortQuestions.index[shortQuestions.values==question].values[0]
# format 1-word description
shortQuestion = shortDescQuestions[question]
gameStackedBarPlots = plotStackedBar(
question
# for raw data
#, dataName=_dataName
# for refined data
, dataName=dataName
, saveFig=False
, fig=fig
, ax=ax
, title=shortQuestion
, displayedGameNames=shortGameNames
# , showLegend=(graphIndex==11)
, showLegend=False
, printDebug=False
, tight_layout=tight_layout
)
graphIndex += 1
plt.legend(
[gameStackedBarPlots[0][i][0] for i in range(5)]
, likert5StepDescriptions
, loc='center left', bbox_to_anchor=(1.52, 0.5)
)
if suptitle==None:
fig.suptitle(dataName, fontsize=16)
if saveFig:
#path = stackedBarPlotsFilenamePathStem + "/" + dataName
#tryCreateFolder(path)
fig.savefig(stackedBarPlotsFilenamePathStem + "/matrixStackedBars" + dataName)
for dataSet in datasets.keys():
getStackedBarPlotsMatrix(
dataName=dataSet
, saveFig=True
, suptitle=[]
)
# -
print(_dataName)
print(datasets.keys())
datasets["data20190603"][gameQuestion].value_counts()
datasets["data20190703"][gameQuestion].value_counts()
datasets["data20190828"][gameQuestion].value_counts()
for question in shortLikertQuestions:
plotStackedBar(
question
, dataName="data20190828"
, saveFig=False
, printDebug=False
, displayedGameNames=shortGameNames
)
# test to check values displayed in the bar plots
data = _inputData.loc[:, [indexedQuestions[4], gameQuestion]].groupby([gameQuestion, indexedQuestions[4]]).size()
data['Dr Bug: Microbe Mayhem']
# ### SUS Likert scale score variance
# +
saveFig = True
dataName = _dataName
for question in indexedLikertQuestions:
fig = plt.figure()
ax = fig.add_subplot(111)
sns.barplot(x=gameQuestion, y=question, data=datasets[dataName], ax = ax)
ax.set_xticklabels(shortGameNames)
plt.xlabel(question)
plt.ylabel('Agreement - Likert scale')
plt.ylim(1, 5)
if saveFig:
shortQuestion = shortQuestions.index[shortQuestions.values==question].values[0]
path = barPlotsErrorBarsFilenamePathStem + "/" + dataName
tryCreateFolder(path)
fig.savefig(path + "/" + shortQuestion)
# +
saveFig = True
fig = plt.figure(figsize=(15,8))
graphIndex = 1
dataName = _dataName
for question in indexedLikertQuestions:
ax = fig.add_subplot(3,4,graphIndex)
graphIndex += 1
sns.barplot(x=gameQuestion, y=question, data=datasets[dataName], ax = ax)
shortQuestion = shortQuestions.index[shortQuestions.values==question].values[0]
plt.xlabel('')
plt.ylabel('')
ax.set_xticklabels(['',shortQuestion,''])
plt.ylim(1, 5)
if saveFig:
path = barPlotsErrorBarsFilenamePathStem + "/" + dataName
tryCreateFolder(path)
fig.savefig(path + "/SUS-11-bar-graphs-3x4-matrix")
# +
saveFig = True
fig = plt.figure(figsize=(15,8))
graphIndex = 1
dataName = _dataNameSUSNormalized
for question in indexedLikertQuestions:
ax = fig.add_subplot(3,4,graphIndex)
graphIndex += 1
sns.barplot(x=gameQuestion, y=question, data=datasets[dataName], ax = ax)
shortQuestion = shortQuestions.index[shortQuestions.values==question].values[0]
plt.xlabel('')
plt.ylabel('')
ax.set_xticklabels(['',shortQuestion,''])
plt.ylim(0, 4)
if saveFig:
path = barPlotsErrorBarsFilenamePathStem + "/" + dataName
tryCreateFolder(path)
fig.savefig(path + "/SUS-normalized-11-bar-graphs-3x4-matrix")
# -
# ## Negative vs Positive questions comparison
#
# Let's test whether the answers to the negative and the positive questions are statistically indistinguishable.
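# One possible sketch of such a comparison is a two-sided Mann-Whitney U test on the two answer samples. The arrays below are synthetic stand-ins; in the notebook they would be the normalized answers to one positive and one negative question.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Synthetic stand-ins for normalized Likert answers (0-4 scale).
positive_answers = np.array([4, 3, 4, 2, 3, 4, 3, 2])
negative_answers = np.array([3, 4, 2, 3, 4, 3, 2, 4])

# Two-sided test: a large p-value means we cannot distinguish
# the two answer distributions.
stat, p_value = mannwhitneyu(positive_answers, negative_answers,
                             alternative='two-sided')
print(p_value > 0.05)
```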
_inputDataSUSNormalized
|
Functions/2.0 Data analysis - Per question game comparison.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Multivariate Normal Distribution
#
# The distribution most commonly used when modeling several random variables jointly.
#
# (Formula)
# $$ \mathcal{N}(x ; \mu, \Sigma) = \dfrac{1}{(2\pi)^{D/2} |\Sigma| ^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right) $$
#
# - $x \in \mathbf{R}^D$ : random variable vector
# - $\mu \in \mathbf{R}^D$ : mean vector
# - $\Sigma \in \mathbf{R}^{D\times D}$ : covariance matrix
# - $\Sigma^{-1} \in \mathbf{R}^{D\times D}$ : inverse of the covariance matrix (precision matrix)
# For a two-dimensional ($D = 2$) multivariate normal distribution,
# the two-dimensional random variable vector is
#
# $$x = \begin{bmatrix}x_1 \\ x_2 \end{bmatrix}$$
#
# ### Case 1
#
# If
#
# $$\mu = \begin{bmatrix}2 \\ 3 \end{bmatrix}, \;\;\;
# \Sigma = \begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}$$
#
# then
#
# $$| \Sigma| = 1, \;\;\;
# \Sigma^{-1} = \begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}$$
#
# $$(x-\mu)^T \Sigma^{-1} (x-\mu) =
# \begin{bmatrix}x_1 - 2 & x_2 - 3 \end{bmatrix}
# \begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}
# \begin{bmatrix}x_1 - 2 \\ x_2 - 3 \end{bmatrix}
# =
# (x_1 - 2)^2 + (x_2 - 3)^2$$
#
# $$\mathcal{N}(x_1, x_2) = \dfrac{1}{2\pi}
# \exp \left( -\dfrac{1}{2} \left( (x_1 - 2)^2 + (x_2 - 3)^2 \right) \right)$$
#
# The shape of this probability density function:
mu = [2, 3]
cov = [[1, 0], [0, 1]]
rv = sp.stats.multivariate_normal(mu, cov)
xx = np.linspace(0, 4, 120)
yy = np.linspace(1, 5, 150)
XX, YY = np.meshgrid(xx, yy)
plt.grid(False)
plt.contourf(XX, YY, rv.pdf(np.dstack([XX, YY])))
plt.axis("equal")
plt.show()
# ### Case 2
#
# If
#
# $$\mu = \begin{bmatrix}2 \\ 3 \end{bmatrix}, \;\;\;
# \Sigma = \begin{bmatrix}2 & 3 \\ 3 & 7 \end{bmatrix}$$
#
# then
#
# $$|\Sigma| = 5,\;\;\;
# \Sigma^{-1} = \begin{bmatrix}1.4 & -0.6 \\ -0.6 & 0.4 \end{bmatrix}$$
#
# $$(x-\mu)^T \Sigma^{-1} (x-\mu) =
# \begin{bmatrix}x_1 - 2 & x_2 - 3 \end{bmatrix}
# \begin{bmatrix}1.4 & -0.6 \\ -0.6 & 0.4\end{bmatrix}
# \begin{bmatrix}x_1 - 2 \\ x_2 - 3 \end{bmatrix}
# =
# \dfrac{1}{10}\left(14(x_1 - 2)^2 - 12(x_1 - 2)(x_2 - 3) + 4(x_2 - 3)^2\right)$$
#
# $$\mathcal{N}(x_1, x_2) = \dfrac{1}{2\sqrt{5}\,\pi}
# \exp \left( -\dfrac{1}{10}\left(7(x_1 - 2)^2 - 6(x_1 - 2)(x_2 - 3) + 2(x_2 - 3)^2\right) \right)$$
mu = [2, 3]
cov = [[2, 3],[3, 7]]
rv = sp.stats.multivariate_normal(mu, cov)
xx = np.linspace(0, 4, 120)
yy = np.linspace(1, 5, 150)
XX, YY = np.meshgrid(xx, yy)
plt.grid(False)
plt.contourf(XX, YY, rv.pdf(np.dstack([XX, YY])))
plt.axis("equal")
plt.show()
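# As a quick numerical check, the determinant and inverse used in Case 2 can be verified with numpy:

```python
import numpy as np

# covariance matrix from Case 2
Sigma = np.array([[2.0, 3.0], [3.0, 7.0]])

det = np.linalg.det(Sigma)        # expected: 2*7 - 3*3 = 5
Sigma_inv = np.linalg.inv(Sigma)  # expected: [[1.4, -0.6], [-0.6, 0.4]]

print(det)
print(Sigma_inv)
```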
import numpy as np
import scipy as sp
from scipy import stats
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels as sm
|
3_math_for_datascience/05_Correlation_of_random_variables/20180225_05_06_Multivariate_normal_distribution.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Welcome to Convolutional Neural Networks!
#
# ---
#
# ECT* TALENT Summer School 2020
#
# *Dr. <NAME>*
#
# *Davidson College*
#
# + [markdown] slideshow={"slide_type": "slide"}
# <!-- I read it's useful to add a bit of personal information when teaching virtual classes -->
#
# ## Research interests:
#
# - ### Machine learning to address challenges in nuclear physics (and high-energy physics)
# - FRIB experiments
# - Jefferson Lab experiments
# - Jefferson Lab Theory Center
#
# -----
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Convolutional Neural Networks: Convolution Operations
#
# <!-- 1943 -- McCullough and Pitts computational model of a neuron -->
# The convolutional neural network architecture was first described by Kunihiko Fukushima in 1980 (!).
#
# *Discrete convolutions* are matrix operations that can, amongst other things, be used to apply *filters* to images. Continuous convolutions were first published in 1754 (!!).
# + [markdown] slideshow={"slide_type": "notes"}
# - In this session, we will be looking at *predefined* filters for images to gain an intuition or understanding as to how the convolutional filters look.
# - In the next session, we will add them into a neural network architecture to create convolutional neural networks.
# + [markdown] slideshow={"slide_type": "slide"}
# Given an image `A` and a filter `h` with dimensions of $(2\omega+1) \times (2\omega+1)$, the discrete convolution operation is defined as
#
# $$C=A\circledast h$$
#
# where
#
# $$C[m,n] = \sum_{j=-\omega}^{\omega}\sum_{i=-\omega}^{\omega} h[i+\omega,j+\omega]* A[m+i,n+j]$$
#
# Or, graphically:
#
# 
#
# + [markdown] slideshow={"slide_type": "fragment"}
# ## Details
#
# * The filter slides across the image and down the image.
# * *Stride* is how many elements (pixels) you slide the filter by after each operation. This affects the dimensionality of the output of each image.
# * There are choices to be made at the edges.
#     - for a stride of $1$ and a filter dimension of $3$, as shown here, the outer elements cannot be computed as described.
#     - one solution is *padding*: adding zeros around the outside of the image so that the output can maintain the same shape.
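# The output size implied by stride and padding can be sketched with the usual formula (the function and parameter names below are illustrative, not from this notebook):

```python
def conv_output_size(n, k, s, p):
    # n: input size, k: filter size, s: stride, p: zero-padding on each side
    # counts the number of valid filter positions along one dimension
    return (n + 2 * p - k) // s + 1

# 128-pixel image, 3x3 filter, stride 1, padding 1 -> output stays 128
print(conv_output_size(128, 3, 1, 1))  # 128
# same image and filter, stride 5, no padding
print(conv_output_size(128, 3, 5, 0))  # 26
```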
# + [markdown] slideshow={"slide_type": "subslide"}
# Now, I will demonstrate the application of discrete convolutions of known filters on an image.
#
# First, we `import` our necessary packages:
# + slideshow={"slide_type": "-"}
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
# + [markdown] slideshow={"slide_type": "slide"}
# Now, let's define a function to execute the above operation for any given 2-dimensional image and filter matrices:
# + slideshow={"slide_type": "-"}
def conv2d(img, filt, stride):
n_rows = len(img)
n_cols = len(img[0])
filt_w = len(filt)
filt_h = len(filt[0])
#store our filtered image
new_img = np.zeros((n_rows//stride+1,n_cols//stride+1))
# print(n_rows,n_cols,filt_w,filt_h) # uncomment for debugging
for i in range(filt_w//2,n_rows-filt_w//2, stride):
for j in range(filt_h//2,n_cols-filt_h//2, stride):
new_img[i//stride,j//stride] = np.sum(img[i-filt_w//2:i+filt_w//2+1,j-filt_h//2:j+filt_h//2+1]*filt)
return new_img
# + [markdown] slideshow={"slide_type": "slide"}
# We will first generate a simple synthetic image to which we will apply filters:
# + slideshow={"slide_type": "-"}
test_img = np.zeros((128,128)) # make an image 128x128 pixels, start by making it entirely black
test_img[30,:] = 255 # add a white row
test_img[:,40] = 255 # add a white column
# add two diagonal lines
for i in range(len(test_img)):
for j in range(len(test_img[i])):
if i == j or i == j+10:
test_img[i,j] = 255
plt.imshow(test_img, cmap="gray")
plt.colorbar()
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Let's also investigate the inverse of this image:
# + slideshow={"slide_type": "-"}
# creating the inverse of test_img
test_img2 = 255 - test_img
plt.imshow(test_img2, cmap="gray")
plt.colorbar()
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# ### We will create three filters:
# + slideshow={"slide_type": "-"}
size = 3 # number of rows and columns for filters
# modify all values
filter1 = np.zeros((size,size))
filter1[:,:] = 0.5
# all values -1 except horizonal stripe in center
filter2 = np.zeros((size,size))
filter2[:,:] = -1
filter2[size//2,:] = 2
# all values -1 except vertical stripe in center
filter3 = np.zeros((size,size))
filter3[:,:] = -1
filter3[:,size//2] = 2
print(filter1,filter2,filter3, sep="\n\n")
# + [markdown] slideshow={"slide_type": "slide"}
# ### And now we call our function `conv2d` with our test images and our first filter:
# -
filtered_image = conv2d(test_img, filter3,1)
plt.imshow(filtered_image, cmap="gray")
plt.colorbar()
plt.show()
filtered_image2 = conv2d(test_img2, filter3,1)
plt.imshow(filtered_image2, cmap="gray")
plt.colorbar()
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# In practice, you do not have to code the 2d convolutions (or you can do it in a more vectorized way using the full power of `numpy`).
#
# Let's look at the 2d convolutional method from `scipy`. The `mode="same"` argument indicates that our output matrix should match the shape of our input matrix.
#
#
#
# Note that the following import statement was executed at the beginning of this notebook:
#
# ```python
# from scipy import signal
# ```
# + slideshow={"slide_type": "-"}
spy_image = signal.convolve2d(test_img, filter3, mode="same")
spy_image2 = signal.convolve2d(test_img2, filter3, mode="same")
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2,sharex=True, sharey=True, figsize = (8,8))
ax1.imshow(spy_image, cmap="gray")
#plt.colorbar()
#plt.show()
ax2.imshow(spy_image2, cmap="gray")
#plt.colorbar()
#fig.add_subplot(f1)
#plt.show()
ax3.imshow(filtered_image, cmap="gray")
ax4.imshow(filtered_image2, cmap="gray")
plt.show()
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Filter 1 is a *blurring* filter.
#
# It takes an "average" of all of the pixels in the region of the filter, all with the same weight.
#
# #### Let's go back and investigate the other filters.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Filter 1 is a *blurring* filter.
#
# It takes an "average" of all of the pixels in the region of the filter, all with the same weight.
#
# ## Filter 2 detects horizontal lines.
#
# Its positive central row and negative surrounding rows cancel out in uniform regions, so only horizontal edges produce a strong response.
#
# ## Filter 3 detects vertical lines.
#
# Its positive central column and negative surrounding columns cancel out in uniform regions, so only vertical edges produce a strong response.
#
# + slideshow={"slide_type": "slide"}
residuals = spy_image-filtered_image
plt.imshow(residuals)
plt.title("Residuals")
plt.colorbar()
plt.show()
plt.imshow(residuals[len(filter1):-len(filter1),len(filter1[0]):-len(filter1[0])])
plt.colorbar()
plt.show()
plt.hist(residuals[len(filter1):-len(filter1),len(filter1[0]):-len(filter1[0])].flatten())
plt.show()
print("number of non-zero residuals (removing the width of the filter all the way around the image):", np.count_nonzero(residuals[len(filter1):-len(filter1),len(filter1[0]):-len(filter1[0])].flatten()))
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
#
# ### Let's try with a real photograph.
#
# Since we have only defined 2D convolutions for a 2D matrix, we cannot apply our function to color images, which have three channels: (red (R), green (G), blue (B)).
#
# Therefore, we make a grayscale image by averaging over the three RGB channels.
# + slideshow={"slide_type": "slide"}
house = plt.imread("house_copy.jpg", format="jpeg")
plt.imshow(house)
plt.show()
bw_house = np.mean(house, axis=2)
plt.imshow(bw_house, cmap="gray")
plt.colorbar()
plt.show()
# + slideshow={"slide_type": "slide"}
spy_image = signal.convolve2d(bw_house, filter1, mode="same")
plt.imshow(spy_image, cmap="gray")
plt.colorbar()
plt.show()
spy_image = signal.convolve2d(bw_house, filter2, mode="same")
plt.imshow(spy_image, cmap="gray")
plt.colorbar()
plt.show()
spy_image = signal.convolve2d(bw_house, filter3, mode="same")
plt.imshow(spy_image, cmap="gray")
plt.colorbar()
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# We can look at the effects of modifying the *stride*:
#
# -
my_conv = conv2d(bw_house,filter3,5)
plt.imshow(my_conv)
# + [markdown] slideshow={"slide_type": "slide"}
# # $N$-D convolutions
#
# The mathematics of discrete convolutions are the same no matter the dimensionality.
#
# Let's first look at 1D convolutions:
#
# Given a 1-D data array `a` and a filter `h` of length $2\omega+1$, the discrete convolution operation is given by the following mathematics:
#
# $$c[n]=a[n]\circledast h= \sum_{i=-\omega}^{\omega} a[i+n]* h[i+\omega]$$
# <!-- $$C[m,n]=x[m,n]\circledast h= \sum_{j=-\omega}^{\omega}\sum_{i=-\omega}^{\omega} h[i+\omega,j+\omega]* A[m+i,n+j]$$-->
#
#
#
# Or, graphically:
#
# 
#
#
# -
def conv1d(arr, filt, stride):
n = len(arr)
filt_w = len(filt)
#store our filtered image
new_arr = np.zeros(n//stride+1)
# print(n_rows,n_cols,filt_w,filt_h) # uncomment for debugging
for i in range(filt_w//2,n-filt_w//2, stride):
new_arr[i//stride] = np.sum(arr[i-filt_w//2:i+filt_w//2+1]*filt)
return new_arr
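# For comparison, numpy ships a built-in 1-D convolution, `np.convolve` (a true convolution flips the filter first, which makes no difference for symmetric filters):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
h = np.array([0.5, 0.5, 0.5])

# "valid" keeps only positions where the filter fully overlaps the data
out = np.convolve(a, h, mode="valid")
print(out)  # [3.  4.5 6. ]
```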
# + slideshow={"slide_type": "slide"}
x = np.linspace(0,1,100)
y = np.sin(15*x)+2*x**2 + np.random.rand(len(x))
plt.plot(y)
# + [markdown] slideshow={"slide_type": "slide"}
# Now, we define our filter:
# + slideshow={"slide_type": "subslide"}
size = 5
f1 = np.zeros(size)
f1[:] = 0.5
print(f1)
# + [markdown] slideshow={"slide_type": "slide"}
# And we convolve our signal with our filter and look at the output:
# + slideshow={"slide_type": "subslide"}
new_array = conv1d(y,f1,1)
plt.plot(new_array)
# -
# We see that this is still a *blurring* filter, though in the 1D case we would more likely call it a *smoothing* filter.
#
# I hope you can see that this simply extends to any dimension.
# + slideshow={"slide_type": "skip"}
|
day3/.ipynb_checkpoints/Discrete Convolutions-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
pd.set_option('display.max_columns', 300)
# ## Step 1: Read in hold out data, scalers, and best model
# +
# holdout = pd.read_csv('resources/movies_holdout_features.csv', index_col=0)
# +
# final_scaler = read_pickle(filename)
# final_model = read_pickle(filename)
# -
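# The `read_pickle` helper called above is not defined in this notebook; one possible sketch, assuming the scaler and model were saved with the standard `pickle` module, is:

```python
import pickle

def read_pickle(filename):
    # load any object previously saved with pickle.dump
    with open(filename, "rb") as f:
        return pickle.load(f)
```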
# ## Step 2: Feature Engineering for holdout set
# Remember we have to perform the same transformations on our holdout data (feature engineering, extreme values, and scaling) that we performed on the original data.
# +
# transformed_holdout = final_scaler.transform(holdout)
# -
# ## Step 3: Predict the holdout set
# +
# final_answers = final_model.predict(transformed_holdout)
# -
# ## Step 4: Export your predictions
# +
# final_answers.to_csv('housing_preds_your_name.csv')
|
Phase_2/Phase2_project/Predict_holdout.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import json
with open("ICP_data.dat", "r") as f:
    data = json.load(f)
data.keys()
data["0"].keys()
for k in ["0"]:
Te = np.array(data[k]["Te2"])
Phi = np.array(data[k]["phi"])
ne = np.array(data[k]["ne"])
ln_n = np.log(ne/ne.max())
ln_p = np.log((ne*Te)/(ne*Te).max())
plt.figure()
plt.subplot(131)
plt.plot(ne)
plt.subplot(132)
plt.plot(Te*4.68)
plt.plot(Phi)
plt.subplot(133)
plt.plot(ln_n,ln_p,"-+")
imax = 10
gamma, a = np.polyfit(ln_n[:imax], ln_p[:imax], 1)
plt.plot(ln_n, a+gamma*ln_n)
print(gamma)
plt.tight_layout()
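# Note that `np.polyfit(x, y, 1)` returns `(slope, intercept)`, so `gamma` above is the slope of log-pressure versus log-density, i.e. the fitted polytropic index; on exactly linear data it recovers the line:

```python
import numpy as np

ln_n_demo = np.array([0.0, 1.0, 2.0])
ln_p_demo = 2.0 * ln_n_demo + 1.0  # slope 2, intercept 1

slope, intercept = np.polyfit(ln_n_demo, ln_p_demo, 1)
print(slope, intercept)
```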
# +
k ="0"
plt.figure(figsize=(8,4))
s = 2.5
plt.margins(x=0.01, y=0.1)
Te = np.array(data[k]["Te2"])
x = np.array(data[k]["x"])
Phi = np.array(data[k]["phi"])
ne = np.array(data[k]["ne"])
ni = np.array(data[k]["ni"])
ln_n = np.log(ne/ne.max())
ln_p = np.log((ne*Te)/(ne*Te).max())
plt.subplot(121)
plt.margins(x=0.01)
plt.plot(x, ne, label="Electron density $n_e$", linewidth = s)
plt.plot(x, ni, label="Ion density $n_i$", linewidth = s)
plt.ylabel("$n_e, n_i$ [m$^{-3}$]", fontsize=13)
plt.legend(fontsize=12, loc="lower center")
plt.subplot(122)
plt.margins(x=0.01)
plt.plot(x, Te, label="Electron temperature $T_{e,x}$", linewidth = s)
plt.plot(x, Phi, label="Plasma potential $\phi$", linewidth = s)
plt.ylabel("$\phi$, T$_{e,x}$ [V]", fontsize=13)
plt.legend(fontsize=12, loc="lower center")
for p in [121,122]:
plt.subplot(p)
plt.grid()
plt.xlabel("$x$ position [cm]", fontsize=13)
plt.tight_layout()
plt.savefig("../figures/ICP_results.pdf")
# +
plt.figure(figsize=(4,4))
imax = -1
gamma, a = np.polyfit(ln_n[:imax], ln_p[:imax], 1)
print(gamma)
p = data[k]["Pn"]
ax = plt.gca()
ax.plot(ln_n, ln_p, "k^",alpha=0.8, label ="PIC values")
ax.plot(ln_n, a + gamma*ln_n, "-",alpha=0.8, label ="Linear regression")
# imax = 200
# gamma, a = np.polyfit(ln_n[:imax], ln_p[:imax], 1)
# print(gamma)
# ax.plot(ln_n, a + gamma*ln_n, "-",alpha=0.8, label ="sheath")
ax.set_ylabel("log($\\frac{p_{e,x}}{\max(p_{e,x})}$)", fontsize=16)
ax.set_xlabel("log($\\frac{n_e}{\max(n_e)}$)", fontsize=16)
# ax.set_title("Polytropic process, p= "+str(p)+" mTorr", fontsize=19)
ax.grid(alpha=0.7)
ax.margins(0.01)
ax.legend(fontsize=13)
ax.set_ylim(bottom=-7.5)
plt.tight_layout()
plt.savefig("../figures/ICP_polyfit.pdf")
# -
gamma
# + active=""
# fig, (ax1, ax2) = plt.subplots(1,2, figsize=(8,4))
# #gamma = 1.77
# coef1 = (gamma - 1)/gamma
#
#
# psindex = 20
# s = 2.5
# ax1.plot(x, Te[psindex] + coef1 * (Phi - Phi[psindex] ), "-",alpha=1,linewidth=s, label ="Fluid")
# ax1.plot(x, Te, "-",alpha=0.8, linewidth=s, label ="PIC values")
# ax1.set_ylabel("$T_e$ [V]", fontsize=16)
# ax1.set_xlabel("x [cm]", fontsize=16)
# # ax1.set_title("Sheath model", fontsize=19)
# ax1.set_xlim((0., 0.4))
# ax1.set_ylim(bottom = 0)
#
# for ax in [ax1]:
# ax.grid(alpha=0.7)
# ax.margins(0.01)
# ax.legend(fontsize=14)
#
# psindex = 50
#
# phi0 =Phi[psindex]
# ne0 = ne[psindex]
# Te0 = Te[psindex]
# pot_iso = phi0 + np.log(ne/ne0 )*Te0
#
# pot_poly = phi0 + ((ne/ne0 )**(gamma-1) -1)*Te0/coef1
#
# ax2.plot(x, pot_poly, linewidth=s, label = "Fluid $\gamma = 1.25$")
# ax2.plot(x, Phi, '-',linewidth=s, alpha=0.8, label = "PIC values")
# ax2.set_xlabel("x [cm]", fontsize=16)
# ax2.set_ylabel("$\Phi$ [V]", fontsize=16)
# ax2.set_xlim((0., 0.4))
# ax2.set_ylim(bottom= 0)
#
# for ax in [ax2]:
# ax.grid(alpha=0.7)
# ax.margins(0.01)
# ax.legend(fontsize=14)
#
# plt.tight_layout()
# # position bottom right
# fig.text(1, 0.5, 'Fluid model to update',
# fontsize=50, color='gray',
# ha='right', va='bottom', alpha=0.4)
# # plt.savefig("../figures/sheathModelICP.pdf")
#
#
# -
print(data[k]["Pa"])
print(data[k]["Pa"]/(0.1*0.1*7/450))
data[k]["Pn"]
|
src/Chapitre3/figure/2019-01-08_ICP_results_and_figures.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.5 64-bit
# language: python
# name: python3
# ---
# # Loading data from colorhunt
# +
from urllib import request,parse
import json
import time
import random
from tqdm import trange
def load():
data = []
for i in trange(0,64,desc='Loading data'):
body = parse.urlencode({'step':i,'sort':'new','tags':'','timeframe':30}).encode()
Req = request.Request('https://colorhunt.co/php/feed.php',method='POST',headers={'content-type': 'application/x-www-form-urlencoded','User-Agent' :'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'})
readed = request.urlopen(Req,data=body).read()
loads = json.loads(readed)
data += loads
time.sleep(random.randint(5,10)/100) # slow down to prevent http 429
return data
with open('dataset/color.json','w') as f:
f.write(json.dumps(load()))
# -
# # Process data
#
# +
import json
import time
from tqdm import trange
import random
def color_code_split(code):
return [f'#{code[i:i+6]}'.upper() for i in range(0,len(code),6)]
raw_data = []
with open('dataset/color.json','r') as f:
raw_data = json.loads(f.read())
data = []
for i in trange(len(raw_data),desc="Transform color codes"):
hex = color_code_split(raw_data[i]['code']) # transform color_code in data to hex
for l in range(len(hex)):
row = [str(i*4+l),hex[l],'|'.join(hex)] # create palette id
row.append(raw_data[i]['likes'])
data.append(row)
# time.sleep(0.0001)
with open('dataset/color_processed.csv','w') as f:
f.write("id,base,palette,likes\n" + '\n'.join([','.join(i) for i in data]))
# -
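# As a sanity check, `color_code_split` (restated here as a self-contained sketch, applied to a made-up 24-character code) yields four hex colors:

```python
def color_code_split(code):
    # split a concatenated hex string into 6-character '#RRGGBB' colors
    return [f'#{code[i:i+6]}'.upper() for i in range(0, len(code), 6)]

print(color_code_split('222831393e4600adb5eeeeee'))
# ['#222831', '#393E46', '#00ADB5', '#EEEEEE']
```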
# # Training
# +
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import layers
from tensorflow.keras.models import Model
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import keras.backend as K
from PIL import Image, ImageDraw
import os
from IPython import display
import time
import math
BUFFER_SIZE = 60000
BATCH_SIZE = 1
epoch = 100
def hex_2_rbg(hex):
hex = hex.replace('##','#')
r = (int('0x'+ hex[1:3],16))
g = (int('0x'+ hex[3:5],16))
b = (int('0x'+ hex[5:],16))
return [r,g,b]
df = pd.read_csv('dataset/color_processed.csv')
df['base'] = df['base'].map(lambda hex: np.asarray([hex_2_rbg(hex)])/255)
df['palette'] = df['palette'].map(lambda hexs: np.asarray([[hex_2_rbg(i) for i in hexs.strip().split("|")]])/255)
x = []
y = []
for i in df['id']:
x.append(df['base'][i])
y.append(df['palette'][i])
train_x, test_x, train_y, test_y = train_test_split( np.array(x) , np.array(y) , test_size=0.1 )
set_per_epoch = int(math.ceil(len(train_y)/epoch))
# -
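# A quick check of the hex-to-RGB parsing logic used in `hex_2_rbg` above (restated here as a self-contained sketch under a hypothetical name):

```python
def hex_to_rgb(hex_color):
    # parse '#RRGGBB' into integer channel values, tolerating a doubled '#'
    hex_color = hex_color.replace('##', '#')
    r = int('0x' + hex_color[1:3], 16)
    g = int('0x' + hex_color[3:5], 16)
    b = int('0x' + hex_color[5:], 16)
    return [r, g, b]

print(hex_to_rgb('#FF8000'))  # [255, 128, 0]
```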
optimizer = Adam(0.0002,0.5)
def make_generator():
model = keras.Sequential()
model.add(layers.Input(3,))
model.add(layers.Dense(1*4*128))
model.add(layers.Reshape((1,4,128)))
model.add(layers.Conv2DTranspose(64,(1,4),strides=1,padding="same"))
model.add(layers.LeakyReLU(0.2))
model.add(layers.Conv2DTranspose(128,(1,4),strides=1,padding="same"))
model.add(layers.LeakyReLU(0.2))
model.add(layers.Conv2D(3,(1,4),padding="same",activation="sigmoid"))
model.compile(loss="binary_crossentropy",optimizer=optimizer)
return model
generator = make_generator()
# generator.summary()
image = generator.predict([[1,1,1]])*255
plt.xticks([]);plt.yticks([])
plt.imshow(image.reshape((1,4,3)).astype('int'))
plt.show()
def make_discriminator():
model = keras.Sequential()
model.add(layers.Input(shape=(1,4,3)))
model.add(layers.Conv2D(64,(1,4),strides=(1,1),padding="same"))
model.add(layers.LeakyReLU(0.2))
model.add(layers.Conv2D(128,(1,4),strides=(1,1),padding="same"))
model.add(layers.LeakyReLU(0.2))
model.add(layers.Flatten())
model.add(layers.Dropout(0.2))
model.add(layers.Dense(1,activation="sigmoid"))
model.compile(loss="binary_crossentropy",optimizer=optimizer)
return model
discriminator = make_discriminator()
# discriminator.summary()
gan_input = layers.Input(3,)
fake_image = generator(gan_input)
gan_output = discriminator(fake_image)
gan = Model(gan_input,gan_output)
gan.compile(loss="binary_crossentropy",optimizer=optimizer)
# gan.summary()
history_d_loss = []
history_g_loss = []
history_train = []
# +
from tqdm import tqdm
for e in range(1,epoch+1):
for n in tqdm(range(set_per_epoch)):
fake_x = generator.predict(train_x[e*n])
real_x = train_y[[e*n]]
x = np.concatenate((real_x, fake_x))
disc_y = np.zeros(2)
disc_y[:1] = 0.9
d_loss = discriminator.train_on_batch(x, disc_y)
y_gen = np.ones(1)
g_loss = gan.train_on_batch(train_x[e*n], y_gen)
history_d_loss.append(d_loss)
history_g_loss.append(g_loss)
history_train.append(len(history_d_loss))
    print(f'epoch:{e} d_loss:{d_loss} g_loss:{g_loss}')
rand = np.asarray([np.random.uniform(0,1,3)])
image = generator.predict(rand)*255
image = image.reshape((1,4,3)).astype('int')
image = image.tolist()
image[0]+=(rand*255).astype('int').tolist()
print(image)
plt.xticks([]);plt.yticks([])
plt.imshow(image)
plt.show()
generator.save('./_generator')
discriminator.save('./_discriminator')
gan.save('./_gan')
generator.save_weights('./_generator')
discriminator.save_weights('./_discriminator')
gan.save_weights('./_gan')
print("done!")
|
main.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_cmaes:
# -
# ## CMA-ES
#
#
# **Disclaimer:** We make use of the implementation available at [PyPi](https://pypi.org/project/cma/) <cite data-cite="pycma"></cite> published by the author <NAME> under the BSD license.
#
#
# CMA-ES was proposed in <cite data-cite="cmaes"></cite>. Moreover, a comparative review can be found in <cite data-cite="cmaes-review"></cite>.
# CMA-ES stands for covariance matrix adaptation evolution strategy. Evolution strategies (ES) are stochastic, derivative-free methods for numerical optimization of non-linear or non-convex continuous optimization problems. They belong to the class of evolutionary algorithms and evolutionary computation. An evolutionary algorithm is broadly based on the principle of biological evolution, namely the repeated interplay of variation (via recombination and mutation) and selection: in each generation (iteration) new individuals (candidate solutions) are generated by variation, usually in a stochastic way, of the current parental individuals. Then, some individuals are selected to become the parents in the next generation based on their fitness or objective function value
# $f(x)$. Like this, over the generation sequence, individuals with better and better $f$-values are generated.
# (excerpt from [Wikipedia](https://en.wikipedia.org/wiki/CMA-ES)).
#
# ### Example
# + code="algorithms/usage_cmaes.py"
from pymoo.algorithms.so_cmaes import CMAES
from pymoo.factory import get_problem
from pymoo.optimize import minimize
problem = get_problem("sphere")
algorithm = CMAES()
res = minimize(problem,
algorithm,
seed=1,
verbose=False)
print(f"Best solution found: \nX = {res.X}\nF = {res.F}\nCV= {res.CV}")
# -
# CMA-ES already has several stopping criteria implemented. However, as for other algorithms, the number of iterations or function evaluations can be directly passed to `minimize`.
# +
res = minimize(problem,
algorithm,
('n_iter', 10),
seed=1,
verbose=True)
print("Best solution found: \nX = %s\nF = %s" % (res.X, res.F))
# +
res = minimize(problem,
algorithm,
('n_evals', 50),
seed=1,
verbose=True)
print("Best solution found: \nX = %s\nF = %s" % (res.X, res.F))
# -
# Also, restarts can easily be used, which are known to work very well on multi-modal functions. For instance, `Rastrigin` can be solved rather quickly by:
# +
problem = get_problem("rastrigin")
algorithm = CMAES(restarts=10, restart_from_best=True)
res = minimize(problem,
algorithm,
('n_evals', 2500),
seed=1,
verbose=False)
print("Best solution found: \nX = %s\nF = %s" % (res.X, res.F))
# -
# Our framework internally calls the `cma.fmin2` function. All parameters which can be used there either as a keyword argument or an option can also be passed to the `CMAES` constructor as well.
# An example with a few selected `cma.fmin2` parameters is shown below:
# +
import numpy as np
from pymoo.util.normalization import denormalize
# define an initial point for the search
np.random.seed(1)
x0 = denormalize(np.random.random(problem.n_var), problem.xl, problem.xu)
algorithm = CMAES(x0=x0,
sigma=0.5,
restarts=2,
maxfevals=np.inf,
tolfun=1e-6,
tolx=1e-6,
restart_from_best=True,
bipop=True)
res = minimize(problem,
algorithm,
seed=1,
verbose=False)
print("Best solution found: \nX = %s\nF = %s" % (res.X, res.F))
# -
# For more details about hyperparameters we refer to the software documentation of the `fmin2` in CMA-ES which can be found [here](http://cma.gforge.inria.fr/apidocs-pycma/cma.evolution_strategy.html#fmin2).
# A quick explanation of possible parameters is also provided in the API documentation below.
# + [markdown] raw_mimetype="text/restructuredtext"
# ### API
# + raw_mimetype="text/restructuredtext" active=""
# .. autoclass:: pymoo.algorithms.so_cmaes.CMAES
# :noindex:
|
doc/source/algorithms/cmaes.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential([
Dense(32, input_shape=(100,)),
Activation('relu'),
Dense(2),
Activation('softmax'),
])
# -
# For a binary classification problem
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
# +
# Generate dummy data
import numpy as np
data = np.random.random((1000, 100))
labels = np.random.randint(2, size=(1000, 1))
# Convert labels to categorical one-hot encoding to match the 2-unit softmax output
one_hot_labels = keras.utils.to_categorical(labels, num_classes=2)
# Train the model, iterating on the data in batches of 32 samples
model.fit(data, one_hot_labels, epochs=10, batch_size=32)
# -
|
artigo-helper/src/main/resources/python-scripts/First steps with Keras.ipynb
|
# ---
# jupyter:
# jupytext:
# formats: ipynb,jl:hydrogen
# text_representation:
# extension: .jl
# format_name: hydrogen
# format_version: '1.3'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.8.0-DEV
# language: julia
# name: julia-1.8
# ---
# %%
VERSION
# %%
using Plots
using Zygote
# %%
meshgrid(x, y) = reim(complex.(x', y))
x = 1:4
y = 10(1:3)
X, Y = meshgrid(x, y)
display(X)
display(Y)
# %%
function plotvf!(x, y, f; scale=1, kwargs...)
X, Y = meshgrid(x, y)
u(x, y) = scale * f(x, y)[1]
v(x, y) = scale * f(x, y)[2]
U = u.(X, Y)
V = v.(X, Y)
X -= U/2
Y -= V/2
quiver!(vec(X), vec(Y); quiver = (vec(U), vec(V)), kwargs...)
end
plotvf(x, y, f; kwargs...) = (plot(); plotvf!(x, y, f; kwargs...))
# %%
g(x, y) = (-y, x)
x = y = range(-2, 2, length=11)
plotvf(x, y, g; scale=0.2, size=(400, 400))
# %%
f(x, y) = x^3 - 3x + y^2
df(x, y) = gradient(f, x, y)
xs = ys = range(-2, 2, length=101)
heatmap(xs, ys, f; color=:rainbow)
x = y = range(-2, 2, length=21)[2:2:end]
plotvf!(x, y, df; scale=0.05, color=:white)
plot!(xlim=extrema(xs), ylim=extrema(ys))
# %% [markdown]
# For Plots.@recipe, see
#
# * https://docs.juliaplots.org/latest/recipes/
# * https://github.com/JuliaPlots/ExamplePlots.jl/blob/master/notebooks/usertype_recipes.ipynb
# * https://github.com/JuliaPlots/ExamplePlots.jl/blob/master/notebooks/type_recipes.ipynb
# * https://github.com/JuliaPlots/ExamplePlots.jl/blob/master/notebooks/series_recipes.ipynb
# * https://github.com/JuliaPlots/Plots.jl/blob/master/src/recipes.jl
# * https://nbviewer.jupyter.org/gist/genkuroki/521c4bf9160caae8f8c6591e78a9f1d1
# %%
module O
using Plots
meshgrid(x, y) = reim(complex.(x', y))
struct VectorField{X, Y, F, S} x::X; y::Y; f::F; scale::S end
VectorField(x, y, f; scale=0.2) = VectorField(x, y, f, scale)
@recipe function F(vf::VectorField)
x, y, f, scale = vf.x, vf.y, vf.f, vf.scale
X, Y = meshgrid(x, y)
u(x, y) = scale * f(x, y)[1]
v(x, y) = scale * f(x, y)[2]
U = u.(X, Y)
V = v.(X, Y)
X -= U/2
Y -= V/2
seriestype := :quiver
quiver --> (vec(U), vec(V))
(vec(X), vec(Y))
end
end
# %%
x = y = range(-2, 2, length=11)
g(x, y) = (-y, x)
plot(O.VectorField(x, y, g); size=(400, 400))
# %%
f(x, y) = x^3 - 3x + y^2
df(x, y) = gradient(f, x, y)
xs = ys = range(-2, 2, length=101)
heatmap(xs, ys, f; color=:rainbow)
x = y = range(-2, 2, length=21)[2:2:end]
plot!(O.VectorField(x, y, df, 0.05); color=:white)
plot!(xlim=extrema(xs), ylim=extrema(ys))
# %%
|
0007/vector fields.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Model Optimizing
#
# This notebook is about techniques how to find a good model architecture.
#
# When using multilayer neural networks, it is always a challenge to find the right size of the hidden layers to get good results. There is no formula you can use, because the solution depends on many properties of the data: the size of the inputs and outputs, the number of observations, and the complexity of the data (how many degrees of freedom are inside the data that have to be learned).
#
# To start solving this problem, a group of models with a wide range of different layer sizes can be trained. Comparing the KPIs in most cases gives you an overview of which model configurations can be used for further optimizations.
#
# #### Preparation
# +
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from tensorflow import keras
import tensorflow as tf
from IPython.display import display
seed=8172
np.random.seed(seed)
tf.random.set_seed(seed)
ts_input_size=12
ts_target_size=12
# Read the data
df=pd.read_csv("../data/AirPassengers.csv")
df.columns=["Period","Passengers"]
dfTraining=df.iloc[:-ts_target_size].copy()
# Scaling
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range = (0.1, 0.9))
series_scaled = scaler.fit_transform(dfTraining["Passengers"].values.reshape(-1,1))
dfTraining["PassengersScaled"]=series_scaled
training_flat=dfTraining["PassengersScaled"].values.reshape(-1,1).astype("float32")
trainX=[]
trainY=[]
for i in range(len(training_flat)-ts_target_size-ts_input_size+1):
x=training_flat[i:(i+ts_input_size),0]
y=training_flat[(i+ts_input_size):(i+ts_input_size+ts_target_size),0]
trainX.append(x)
trainY.append(y)
# Training data
trainX=np.array(trainX)
trainY=np.array(trainY)
from sklearn.model_selection import train_test_split
x_trainbatches, x_testbatches, y_trainbatches, y_testbatches = train_test_split(
trainX, trainY, test_size=0.1, random_state=42)
# Forecast input
predictX=[training_flat[-ts_input_size:,0]]
predictX=np.array(predictX)
# -
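# The sliding-window construction above can be illustrated on a tiny series (a self-contained sketch mirroring the loop's indexing, with illustrative window sizes):

```python
import numpy as np

def make_windows(series, input_size, target_size):
    # pair each input window with the target window that immediately follows it
    xs, ys = [], []
    for i in range(len(series) - target_size - input_size + 1):
        xs.append(series[i:i + input_size])
        ys.append(series[i + input_size:i + input_size + target_size])
    return np.array(xs), np.array(ys)

x_demo, y_demo = make_windows(list(range(6)), 2, 2)
print(x_demo)  # [[0 1] [1 2] [2 3]]
print(y_demo)  # [[2 3] [3 4] [4 5]]
```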
# #### Train a wide range of models
# +
def show_loss(history,skipFirst=True):
print("Last Results loss:{}, cross validation loss:{}".format(history.history['loss'][-1],history.history['val_loss'][-1]))
plt.figure(figsize=(20,10))
start=100 if skipFirst else 0
plt.plot(history.history["val_loss"][start:], label="Cross Validation Loss")
plt.plot(history.history["loss"][start:], label="Training Loss")
plt.xlabel("Epochs")
plt.ylabel("Mean Squared Error")
plt.title("Loss Results - Training vs. Cross Validation (Mean Squared Error)")
plt.legend(loc="upper right")
plt.grid(True)
plt.show()
def show_results(df,predictX,predictY):
outputsize=len(predictY[0])
inputsize=len(predictX[0])
absDeviation=np.sum(np.abs(df["Passengers"].values[-outputsize:]-predictY[0]))
absError=absDeviation/np.sum(predictY[0])*100
print(" Forecast : {}\r\n Actual : {}\r\nAbsolute Deviation : {} Passengers\r\n Absolute Error : {:3.2f}%".format(
predictY.astype(int)[0],df["Passengers"].values[-outputsize:],absDeviation,absError))
y0=df["Passengers"].values
y1=[None for x in range(df.shape[0]-outputsize-inputsize)]
y1.extend(predictX[0])
y1=np.array(y1)
y2=[None for x in range(df.shape[0]-outputsize)]
y2.extend(predictY[0])
y2=np.array(y2)
plt.figure(figsize=(20,10))
plt.plot(y0,label='Actual')
plt.plot(y1,label='Input for Forecast')
plt.plot(y2, label= 'Forecast')
plt.xlabel("Period")
labels=[df["Period"].iloc[x] for x in range(0,df.shape[0],12)]
plt.xticks(range(0,df.shape[0],12),labels=labels, rotation=45)
plt.ylabel("Number of Passengers")
plt.title("Airline Passengers Forecast")
plt.grid(True)
plt.legend(loc="upper left")
plt.show()
def buildAndTrainModel(layerSize1,layerSize2, usebias=True):
model=keras.Sequential()
model.add(keras.layers.Dense(layerSize1,activation=keras.layers.LeakyReLU(), input_dim=ts_input_size, use_bias=usebias))
if layerSize2 is not None:
model.add(keras.layers.Dense(layerSize2,activation=keras.layers.LeakyReLU(), use_bias=usebias))
model.add(keras.layers.Dense(ts_target_size, activation=keras.layers.LeakyReLU(), use_bias=usebias))
model.compile(
optimizer='adam',
loss='mean_squared_error',
metrics=['mean_squared_error']
)
earlyStopCB= keras.callbacks.EarlyStopping(monitor='val_loss',min_delta=0,patience=100,verbose=0,mode='auto')
history=model.fit(x_trainbatches, y_trainbatches, epochs=10000,verbose=0,
batch_size=len(x_trainbatches),
validation_data=(x_testbatches, y_testbatches),
use_multiprocessing=True, callbacks=[earlyStopCB])
predictY=scaler.inverse_transform(model.predict(predictX))
return model,history,predictY
# -
# Next, I will build a range of tiny, small, midsize, and large models.
# +
layer1=[]
layer2=[]
models=[]
histories=[]
pYCol=[]
deviations=[]
absErrors=[]
lossCol=[]
lossValCol=[]
epochsCol=[]
baseModels=[
[12,None], # tiny
[40,None], #small
[24,12],
[40,20],
[100,None], # midsize
[150,20],
[200,30],
[1000,None], # large
[400,200],
[1000,500]
]
for s1 in baseModels:
layer1.append(s1[0])
layer2.append(s1[1])
print(layer1)
print(layer2)
# -
for i in range(len(layer1)):
m,h,pY=buildAndTrainModel(layer1[i],layer2[i])
models.append(m)
histories.append(h)
pYCol.append(pY)
outputsize=len(pY[0])
inputsize=len(predictX[0])
lossCol.append(h.history["loss"][-1])
lossValCol.append(h.history["val_loss"][-1])
epochsCol.append(len(h.history["loss"]))
deviations.append(np.sum(np.abs(df["Passengers"].values[-outputsize:]-pY[0])))
absErrors.append(deviations[i]/np.sum(pY[0])*100)
for i in range(len(layer1)):
print("Model {}: L1:{} L2:{} mse_train:{:.5f} mse_val:{:.5f} Dev:{} Error:{}".format(
i,layer1[i],layer2[i],lossCol[i],lossValCol[i], deviations[i],absErrors[i]))
dfResult=pd.DataFrame()
dfResult["Layer1"]=layer1
dfResult["Layer2"]=layer2
dfResult["Loss"]=lossCol
dfResult["ValLoss"]=lossValCol
dfResult["DiffLoss"]=dfResult["ValLoss"]-dfResult["Loss"]
dfResult["Epochs"]=epochsCol
dfResult["FC-Deviation"]=deviations
dfResult["FC-AbsError"]=absErrors
dfResult.sort_values(by=["ValLoss"])
# It seems that this data works very well with a large range of models. The errors are small and the report does not show a clear trend, but with a deeper look there are some indicators you should be aware of.
#
# A large model tends to build an exact representation of the training data and does not generalize much. This results in low training losses, while on the validation side the loss reaches a minimum after a few epochs and rises again afterwards. Models 7 and 8 show exactly this behavior, as early stopping is triggered at around 1000 epochs.
#
# Model 2, which is by comparison a midsize model, generalizes the data much more. It needs about 2600 epochs until it tends to overfit.
#
# Small and tiny models, in our case, result in lower quality.
#
# #### Conclusion
#
# Without knowing the real data and situation, and with validation losses this close together, I always prefer the models with the lower sizes. They are more robust against unknown data. So I would prefer the midsize models and run another test using a smaller range of sizes around them.
#
# Then I would choose one or two models with good results and make another run using K-Fold Cross-Validation. This method iterates over the whole training data, building separate models with the same configuration but different splits of training and test data, so that all of the training data is also tested. This gives you a better overview of how robust your model is to unknown data.
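# As a sketch of what such a K-Fold run could look like (only the index splitting is shown here; each fold would call a training routine like `buildAndTrainModel` on its own split):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # stand-in for trainX
kf = KFold(n_splits=5, shuffle=True, random_state=42)
tested = []
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    # each fold would train a fresh model on X[train_idx]
    # and validate it on X[test_idx]
    tested.extend(test_idx)
# every observation ends up in a test split exactly once
assert sorted(tested) == list(range(len(X)))
```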
#
show_loss(histories[8])
show_results(df,scaler.inverse_transform(predictX),pYCol[8])
show_loss(histories[7])
show_results(df,scaler.inverse_transform(predictX),pYCol[2])
show_loss(histories[4])
show_results(df,scaler.inverse_transform(predictX),pYCol[3])
|
jupyter/03 - Model Optimizing.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial-2: Introduction to PySpark DataFrames
# ## Import and initialize SparkSession
# +
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("tutorial-2").getOrCreate()
# -
# # PySpark Dataframes
#
# ## Read in a csv file
# +
df = spark.read.csv("./cars.csv", header = True)
df.printSchema()
# -
# ## TODO : Use the inferSchema parameter to infer data types
# +
df = spark.read.csv("./cars.csv", header = True, ## YOUR CODE HERE ##)
df.printSchema()
# -
# ## Show samples from dataframe
df.show(10)
# ## Filter all cars made in 2015
df.filter(df['YEAR'] == 2015).show(10)
# ## TODO: Find all cars made by Tesla
df_tesla = ## YOUR CODE GOES HERE ##
# ## Select columns Make, Model and Size
df.select(df['Make'], df['Model'], df['Size']).show(10)
# ## Count manufacturer based on number of cars made
# +
df_manufacturer = df.groupBy("Make").count()
df_manufacturer.show()
# -
# ## Sort manufacturer based on count of cars made
df_manufacturer.sort("count", ascending=False).show()
# ## Count and sort the number of cars made by year
df_year = ## YOUR CODE GOES HERE ##
df_year.show()
# ## Convert Spark DataFrame to Pandas DataFrame
# +
df_pd = df.toPandas()
df_pd.head(10)
# -
df_pd.describe()
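# For comparison, the same count-and-sort shown above with PySpark can be done directly in pandas once the data is converted (demonstrated on a tiny made-up frame rather than `cars.csv`):

```python
import pandas as pd

toy = pd.DataFrame({"Make": ["Tesla", "Ford", "Tesla", "BMW", "Ford", "Tesla"]})
# equivalent of df.groupBy("Make").count() followed by sort("count", ascending=False)
counts = toy.groupby("Make").size().sort_values(ascending=False)
print(counts)  # Tesla 3, Ford 2, BMW 1
```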
spark.stop()
|
2-intro-pyspark/Tutorial-2-PySpark-DataFrame.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Facilities
# ## Water
from sqlalchemy import create_engine
engine = create_engine("postgresql://postgres:postgres@localhost/maharashtra")
# get the overall count of each category of water in maharashtra
import pandas as pd
with engine.connect() as con:
query = "SELECT water AS water_source ,COUNT(water) count_of_source, COUNT(WATER_FUNC_YN) count_of_functional_sources from school_info group by water;"
rs = con.execute(query)
df_all = pd.DataFrame(rs.fetchall())
df_all.columns = rs.keys()
df_all = df_all[df_all['water_source']!='9']
df_all.set_index('water_source')
df_all
import matplotlib.pyplot as plt
import seaborn as sn
sn.set()
df_all
labels = ["Tap water","Hand pumps","Well","others","None"]
print ("Overall Maharashtra")
plt.pie(df_all.loc[:,['count_of_source']],labels=labels,autopct = "%1.1f%%");
plt.axis('equal');
plt.show()
# get the count of each category of water for each district
def get_water_count(list_of_dist):
import pandas as pd
with engine.connect() as con:
for dist in list_of_dist:
query = "SELECT water AS water_source ,COUNT(water) count_of_source, COUNT(WATER_FUNC_YN) count_of_functional_sources from school_info where distname = '"+str(dist) +"' and water != '9' group by water;"
rs = con.execute(query)
df_all = pd.DataFrame(rs.fetchall())
df_all.columns = rs.keys()
yield df_all,dist
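# Note: the query above builds SQL by string concatenation, which is fragile and open to SQL injection. Parameterized queries avoid this; a minimal sketch with the stdlib sqlite3 module (the placeholder syntax differs per driver, e.g. %(dist)s for psycopg2, but the idea carries over to the engine used here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE school_info (distname TEXT, water TEXT)")
con.executemany("INSERT INTO school_info VALUES (?, ?)",
                [("NANDED", "1"), ("NANDED", "2"), ("BID", "1")])
# the driver escapes the parameter, so a quote in the district name cannot break the query
rows = con.execute(
    "SELECT water, COUNT(*) FROM school_info WHERE distname = ? GROUP BY water",
    ("NANDED",),
).fetchall()
print(sorted(rows))  # [('1', 1), ('2', 1)]
```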
a = get_water_count(['NANDED','BID','RATNAGIRI'])
bid = next(a)
bid
labels = ["Tap water","Hand pumps","Well","others","None"]
length = len(bid['count_of_source'])
plt.pie(bid.loc[:,['count_of_source']],labels=labels[:length],autopct = "%1.1f%%");
plt.axis('equal');
plt.show()
districts = ['RATNAGIRI',
'NANDURBAR',
'SOLAPUR',
'PALGHAR',
'CHANDRAPUR',
'AMRAVATI',
'NASHIK',
'DHULE',
'AHMADNAGAR',
'PUNE',
'AURANGABAD (MAHARASHTRA)',
'BID',
'GADCHIROLI',
'NAGPUR',
'WARDHA',
'KOLHAPUR',
'SANGLI',
'NANDED',
'BHANDARA',
'MUMBAI II',
'JALGAON',
'THANE',
'GONDIYA',
'OSMANABAD',
'PARBHANI',
'MUMBAI (SUBURBAN)',
'RAIGARH (MAHARASHTRA)',
'YAVATMAL',
'AKOLA',
'SATARA',
'SINDHUDURG',
'WASHIM',
'HINGOLI',
'JALNA',
'LATUR',
'BULDANA']
water_dist = get_water_count(districts)
for df_dist,dist_name in water_dist:
print dist_name
labels = ["Tap water","Hand pumps","Well","others","None"]
length = len(df_dist['count_of_source'])
plt.pie(df_dist.loc[:,['count_of_source']],labels=labels[:length],autopct = "%1.1f%%");
plt.axis('equal');
plt.show()
# # Toilets Facilities
def get_school_student_enrollment():
from sqlalchemy import create_engine
import pandas as pd
connection = create_engine("postgresql://postgres:postgres@localhost/maharashtra")
with connection.connect() as con:
query = "\
SELECT * \
FROM student_enrollement;"
rs = con.execute(query)
df = pd.DataFrame(rs.fetchall())
df.columns = rs.keys()
return df
def get_aggregate_exam_result():
from sqlalchemy import create_engine
import pandas as pd
connection = create_engine("postgresql://postgres:postgres@localhost/maharashtra")
with connection.connect() as con:
query = "\
SELECT * \
FROM aggregate_exam_results;"
rs = con.execute(query)
df = pd.DataFrame(rs.fetchall())
df.columns = rs.keys()
return df
df_exam = get_aggregate_exam_result()
df_exam[:10]
df = get_school_category()
|
data_cleaning_and_analysis/Facilities.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
from RungeKutta4 import RungeKutta4
from matplotlib import pyplot as plt
from sympy import Symbol, Eq, Function, solve, Rational, lambdify, latex
from IPython.display import display
from typing import List
# Just a function to display each equation from Sympy in a visually pleasing latex form
def print_equations(equations: List):
for equation in equations:
display(equation)
# ## Creating the Rayleigh-Plesset in SymPy
# I am using SymPy to help simplify the equation. Please view the paper in this assignment to see me manually derive this type of equation by hand. I employed SymPy to make sure I didn't input any wrong variable or accidentally add instead of subtract. You know, the main reason why I don't do well on your exams 😒
#
# Another reason why I am using sympy is that in the event I need to alter an equation or redo a few things, I don't have to do it from scratch, but just plug and chug with SymPy.
#
# ### IMPORTANT NOTE
# I am NOT using sympy to substitute for the RK4 method, which I have imported above. My RK4 method works genuinely and according to the class requirements. What sympy will do is convert the Rayleigh-Plesset equation into the formats I want (you can see the manual derivation of it in the lab report / paper). The simplified format will then be converted into a numpy-backed lambdify function which can be used in place of a function/method in my RK4 class.
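# A minimal, self-contained illustration of what lambdify does (a toy expression, not part of the derivation itself):

```python
from sympy import Symbol, lambdify

x = Symbol("x")
expr = x**2 + 1
f = lambdify(x, expr)  # compiles the symbolic expression into a fast numeric function
print(f(3))  # 10
```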
# +
rho1 = Symbol("rho_1")
t = Symbol("t")
R = Function("R")(t)
R_ = R.diff()
R__ = R.diff().diff()
P0 = Symbol("P_0")
mu = Symbol("mu")
sigma = Symbol("sigma")
variables = {
rho1: 997, # Density of water
P0: -9.81 * 997 * 1000, # Assume constant throughout process, pressure = density * 9.81 * height
mu: 0.0013076,
sigma: 0.072
}
print("Substitution Values")
print_equations([
Eq(rho1, variables[rho1]),
Eq(P0, variables[P0]),
Eq(mu, variables[mu]),
Eq(sigma, variables[sigma]),
])
lhs = rho1 * (R * R__ + Rational(3, 2) * R_ ** 2)
rhs = - P0 - 4 * mu * (1 / R) * R_ - 2 * sigma / R
eqn = Eq(lhs, rhs)
print("\n\nRayleigh-Plesset equation")
print_equations([eqn])
# +
# Solve the equations for the first and second derivatives
# Note that for the first derivative, we get two potential answers. We'll explore each of those answers later
dRdt1, dRdt2 = solve(eqn, R_)
d2Rdt2 = solve(eqn, R__)[0]
print_equations([Eq(R_, dRdt1), Eq(R_, dRdt2), Eq(R__, d2Rdt2)])
# +
# Substitute
dRdt1 = dRdt1.subs(variables).simplify().evalf()
dRdt2 = dRdt2.subs(variables).simplify().evalf()
d2Rdt2 = d2Rdt2.subs(variables).simplify().evalf()
print_equations([Eq(R_, dRdt1), Eq(R_, dRdt2), Eq(R__, d2Rdt2)])
# +
function_dR_dt1 = lambdify([R, R__, t], dRdt1)
function_dR_dt2 = lambdify([R, R__, t], dRdt2)
function_d2R_dt2 = lambdify([R, R_, t], d2Rdt2)
# -
function = RungeKutta4(
dt=0.01,
dr_dt=function_dR_dt2,
d2r_dt2=function_d2R_dt2
)
data = function(
r=100, # Starting R value
dr_dt=-0.001, # Program breaks if dr_dt starts at 0
t=0, # Starting t value (almost always at 0)
steps=1400 # Number of steps to run
)
data.keys()
plt.plot(data["t"], data["r"]) #Radius size over Time
plt.plot(data["t"], data["dr_dt"]) #rate of radius change over time
plt.plot(data["t"], data["d2r_dt2"]) #acceleration of radius change over time
|
Projects/Project_1/Q2/.ipynb_checkpoints/CavitationExplosionSimulation-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Contains Duplicate
# + active=""
# Given an array of integers, find if the array contains any duplicates.
# Your function should return true if any value appears at least twice in the array,
# and it should return false if every element is distinct.
#
# Example 1:
# Input: [1,2,3,1]
# Output: true
#
# Example 2:
# Input: [1,2,3,4]
# Output: false
#
# Example 3:
# Input: [1,1,1,3,3,4,3,2,4,2]
# Output: true
# -
class Solution:
def containsDuplicate(self, nums):
"""
:type nums: List[int]
:rtype: bool
"""
return not (len(set(nums)) == len(nums))
nums = [1,1,1,3,3,4,3,2,4,2]
ans = Solution()
ans.containsDuplicate(nums)
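# The set-vs-len comparison above always builds the full set. A sketch of an alternative that can exit early on the first duplicate, which helps on long inputs where duplicates appear near the front:

```python
def contains_duplicate_early_exit(nums):
    seen = set()
    for n in nums:
        if n in seen:
            return True  # stop as soon as a repeat is found
        seen.add(n)
    return False

print(contains_duplicate_early_exit([1, 2, 3, 1]))  # True
print(contains_duplicate_early_exit([1, 2, 3, 4]))  # False
```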
|
217. Contains Duplicate.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="2pYZ0JPjAgi7"
import os
import cv2
import glob
from math import atan2, asin
import numpy as np
import pandas as pd
import math
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqdm
from torch.utils.data import DataLoader, Dataset, sampler
from torch.optim.lr_scheduler import ReduceLROnPlateau
import albumentations as aug
from albumentations import (HorizontalFlip, RandomResizedCrop, VerticalFlip,OneOf, ShiftScaleRotate, Normalize, Resize, Compose,Cutout, GaussNoise, RandomRotate90, Transpose, RandomBrightnessContrast, RandomCrop)
from albumentations import ElasticTransform, GridDistortion, OpticalDistortion, Blur, RandomGamma
from albumentations.pytorch import ToTensor
import torch
from torchvision import transforms
import torch.nn as nn
from torch.nn import functional as F
import torch.optim as optim
import torch.backends.cudnn as cudnn
import torchvision.models as models
import time
import random
import scipy.io
import random
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score, recall_score, precision_score
import sys
sys.path.insert(0, 'segmentation_models.pytorch/')
import segmentation_models_pytorch as smp
skf = StratifiedKFold(n_splits=5, shuffle = True, random_state=24)
try:
from ralamb import Ralamb
from radam import RAdam
from ranger import Ranger
from lookahead import LookaheadAdam
from over9000 import Over9000
from tqdm.notebook import tqdm
except:
os.system(f"""git clone https://github.com/mgrankin/over9000.git""")
import sys
sys.path.insert(0, 'over9000/')
from ralamb import Ralamb
from radam import RAdam
from ranger import Ranger
from lookahead import LookaheadAdam
from over9000 import Over9000
from tqdm.notebook import tqdm
# + [markdown] colab_type="text" id="N1DBoUMaAgjA"
# #### Setting Random Seed
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="EUTE-cpupSlv" outputId="1329f155-2274-4010-95b5-1c60917b0dc6"
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
seed_everything(24)
# -
# ##### Training and validation split was created while keeping the distribution of the classes (Covid/Not-Covid) similar, for better evaluation
df = pd.read_csv('df_covid.csv')
df_train = pd.read_csv('df_train.csv')
df_val = pd.read_csv('df_val.csv')
print(f"Total Size: {len(df)}, Train Size: {len(df_train)}, Val Size: {len(df_val)}")
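# A minimal sketch of how such a class-preserving split can be produced with scikit-learn's stratify option (toy labels here, not the actual Covid dataframe):

```python
import numpy as np
from sklearn.model_selection import train_test_split

y = np.array([0] * 80 + [1] * 20)          # imbalanced toy labels
X = np.arange(len(y)).reshape(-1, 1)
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=24)
# the 80/20 class ratio is preserved in both splits
print(y_tr.mean(), y_va.mean())  # 0.2 0.2
```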
# + [markdown] colab_type="text" id="WrUp-9xnAgjF"
# ## Visualisation
# #### Clearly the distribution of the train and val split is the same
# -
def checkDistribution(df, phase):
df['Covid'].value_counts().sort_index().plot(kind="bar", figsize=(4,3), rot = 0)
if phase == 'train':
plt.title("Label Distribution (Training Set)",
weight='bold',
fontsize=10)
else:
plt.title("Label Distribution (Validation Set)",
weight='bold',
fontsize=10)
plt.xticks(fontsize=8)
plt.yticks(fontsize=8)
plt.xlabel("Label", fontsize=10)
plt.ylabel("Frequency", fontsize=10);
checkDistribution(df_train, 'train')
checkDistribution(df_val, 'val')
# + colab={"base_uri": "https://localhost:8080/", "height": 421} colab_type="code" id="CzYpbnTMPj-m" outputId="3ea439fd-c5d1-4f8b-beda-1ba8057ad8c5"
def plot(path):
w=10
h=10
fig=plt.figure(figsize=(8, 8))
columns = 4
rows = 5
for i in range(1, columns*rows +1):
if(path[i][-3:]=='png'):
img = cv2.imread(path[i])
else:
print(path[i])
fig.add_subplot(rows, columns, i)
plt.imshow(img)
plt.show()
# -
path = list(df.Path)
plot(path)
# + [markdown] colab_type="text" id="3IvhlT3uPbu9"
# ### Checking for corrupt images
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="2IXC_0N4F8EX" outputId="59c4bd0e-aba6-485e-eabd-8500176407a0"
count=0
for i in range(len(df.Path)):
try:
image = cv2.imread(df.loc[i].Path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (224, 224))
except:
print(i)
df.drop(i, inplace = True)
count+=1
# + [markdown] colab_type="text" id="jp9YO0WtAgkG"
# ### Class for Dataset
# + colab={} colab_type="code" id="zLzsjFqKAgkH"
class Covid_Dataset(Dataset):
def __init__(self, df, phase='train', transform =True):
self.df = df
self.phase = phase
self.aug = get_transforms(self.phase)
self.transform = transform
def __getitem__(self,idx):
image = cv2.imread(self.df.loc[idx].Path)
image = cv2.resize(image, (256, 256), interpolation = cv2.INTER_NEAREST)
label = self.df.loc[idx].Covid
label = np.asarray(label).reshape(1,)
augment = self.aug(image =image)
image = augment['image']
return image,label
def __len__(self):
return len(self.df)
# + colab={} colab_type="code" id="iektHGWZAgkK"
def get_transforms(phase):
"""
This function returns the transformation list.
These are some commonly used augmentation techniques that
I believed would be useful.
"""
list_transforms = []
if phase == "train":
list_transforms.extend(
[
HorizontalFlip(p = 0.5),
VerticalFlip(p = 0.5),
Cutout(num_holes=4, p=0.5),
ShiftScaleRotate(p=1,border_mode=cv2.BORDER_CONSTANT),
# OneOf([
# ElasticTransform(p=0.1, alpha=1, sigma=50, alpha_affine=50,border_mode=cv2.BORDER_CONSTANT),
# GridDistortion(distort_limit =0.05 ,border_mode=cv2.BORDER_CONSTANT, p=0.1),
# OpticalDistortion(p=0.1, distort_limit= 0.05, shift_limit=0.2,border_mode=cv2.BORDER_CONSTANT)
# ], p=0.3),
# OneOf([
# Blur(blur_limit=7)
# ], p=0.4),
# RandomGamma(p=0.8)
]
)
list_transforms.extend(
[
# RandomResizedCrop(height = 224, width = 224, p = 1),
# Normalize(mean=[0.485,0.456,0.406], std=[0.229,0.224,0.225], p=1),
ToTensor(),
]
)
list_trfms = Compose(list_transforms)
return list_trfms
# + [markdown] colab={} colab_type="code" id="KZjZa3n0AgkP"
# ### This function returns the dataloader according to the phase(training/validation)
# + colab={} colab_type="code" id="IFRv3oHuAgkS"
def provider(phase,batch_size=16, num_workers=0):
"""
This function returns the dataloader according to
the phase passed.
"""
if phase == 'train' :
df = pd.read_csv('df_train.csv')
image_dataset = Covid_Dataset(df)
else:
df = pd.read_csv('df_val.csv')
image_dataset = Covid_Dataset(df, transform = False)
dataloader = DataLoader(
image_dataset,
batch_size=batch_size,
num_workers=0,
pin_memory=False,
shuffle=True,
)
return dataloader
# -
# ## Code to check if DataLoader is working properly or not
# + colab={} colab_type="code" id="poGwpgyZAgkV"
dl = provider('train')
for x, y in dl:
print(x.shape)
print(y.shape)
x = x[0].permute(1,2,0).cpu().numpy()
plt.imshow(x)
plt.show()
print(y)
break
# -
# ### Meter to create log file for training
# +
class Meter:
'''A meter to keep track of accuracy, F1, precision and recall throughout an epoch'''
def __init__(self, phase, epoch):
self.acc_scores = []
self.f1_scores = []
self.precision_scores = []
self.recall_scores = []
self.phase = phase
def update(self, targets, outputs):
probs = torch.sigmoid(outputs)
probs_cls = torch.sigmoid(outputs)
precision = precision_score(targets, probs_cls.round(), labels = [0,1])
recall = recall_score(targets, probs_cls.round(), labels = [0,1])
f1 = f1_score(targets, probs_cls.round(), labels = [0,1])
acc = accuracy_score(targets, probs_cls.round())
# Adding all metrics to list
self.acc_scores.append(acc)
self.f1_scores.append(f1)
self.precision_scores.append(precision)
self.recall_scores.append(recall)
def get_metrics(self):
acc = np.nanmean(self.acc_scores)
f1 = np.nanmean(self.f1_scores)
precision = np.nanmean(self.precision_scores)
recall = np.nanmean(self.recall_scores)
return acc, f1, precision, recall
def epoch_log(phase, epoch, epoch_loss, meter, start):
'''logging the metrics at the end of an epoch'''
acc, f1, precision, recall = meter.get_metrics()
print("Loss: %0.4f | accuracy: %0.4f | F1: %0.4f | Precision: %0.4f | Recall: %0.4f" % (epoch_loss, acc, f1, precision, recall))
return acc, f1, precision, recall
# -
class BCEDiceLoss(nn.Module):
__name__ = 'bce_dice_loss'
def __init__(self, eps=1e-7, beta=2., fn_weight = .6,activation='sigmoid', ignore_channels=None, threshold=None):
super().__init__()
self.bce = nn.BCEWithLogitsLoss(reduction='mean')
self.beta = beta
self.eps = eps
self.threshold = threshold
self.ignore_channels = ignore_channels
self.activation = smp.utils.base.Activation(activation)
def forward(self, y_pr, y_gt):
bce = self.bce(y_pr, y_gt)
y_pr = self.activation(y_pr)
dice = 1 - smp.utils.functional.f_score(
y_pr, y_gt,
beta=self.beta,
eps=self.eps,
threshold=self.threshold,
ignore_channels=self.ignore_channels,
)
return dice + bce
# + colab={} colab_type="code" id="s8mS5D_oAgkl"
class Trainer(object):
"""
This class takes care of training and validation of our model
"""
def __init__(self,model, optim, lr, bs, epochs = 20, name = 'model', shape=200):
self.batch_size = bs
self.accumulation_steps = 1
self.lr = lr
self.name = name
self.num_epochs = epochs
self.optim = optim
self.best_loss = float("inf")
self.phases = ["train", "val"]
self.device = torch.device("cuda:0")
torch.set_default_tensor_type("torch.FloatTensor")
self.net = model
self.best_val_acc = 0
self.best_val_loss = 10
self.best_f1_score = 0
self.losses = {phase: [] for phase in self.phases}
self.criterion = BCEDiceLoss()
if self.optim == 'Over9000':
self.optimizer = Over9000(self.net.parameters(),lr=self.lr)
elif self.optim == 'Adam':
self.optimizer = torch.optim.Adam(self.net.parameters(),lr=self.lr)
elif self.optim == 'RAdam':
self.optimizer = RAdam(self.net.parameters(),lr=self.lr)
elif self.optim == 'Ralamb':
self.optimizer = Ralamb(self.net.parameters(),lr=self.lr)
elif self.optim == 'Ranger':
self.optimizer = Ranger(self.net.parameters(),lr=self.lr)
elif self.optim == 'LookaheadAdam':
self.optimizer = LookaheadAdam(self.net.parameters(),lr=self.lr)
else:
raise(Exception(f'{self.optim} is not recognized. Please provide a valid optimizer function.'))
self.scheduler = ReduceLROnPlateau(self.optimizer, mode="min", patience=3, verbose=True,factor = 0.5,min_lr = 1e-5)
self.net = self.net.to(self.device)
cudnn.benchmark = True
self.dataloaders = {
phase: provider(
phase=phase,
batch_size=self.batch_size
)
for phase in self.phases
}
self.losses = {phase: [] for phase in self.phases}
self.acc_scores = {phase: [] for phase in self.phases}
self.f1_scores = {phase: [] for phase in self.phases}
def load_model(self, name, path='models/'):
state = torch.load(path+name, map_location=lambda storage, loc: storage)
self.net.load_state_dict(state['state_dict'])
self.optimizer.load_state_dict(state['optimizer'])
print("Loaded model with best accuracy: ", state['best_acc'])
def forward(self, images, targets):
images = images.to(self.device)
targets = targets.type("torch.FloatTensor")
targets = targets.to(self.device)
preds = self.net(images)
preds.to(self.device)
loss = self.criterion(preds,targets)
# Calculating accuracy of the predictions
# probs = torch.sigmoid(preds)
# probs_cls = torch.sigmoid(preds)
# acc = accuracy_score(probs_cls.detach().cpu().round(), targets.detach().cpu())
return loss, preds
def iterate(self, epoch, phase):
meter = Meter(phase, epoch)
start = time.strftime("%H:%M:%S")
print(f"Starting epoch: {epoch} | phase: {phase} | ⏰: {start}")
batch_size = self.batch_size
self.net.train(phase == "train")
dataloader = self.dataloaders[phase]
running_loss = 0.0
total_batches = len(dataloader)
tk0 = tqdm(dataloader, total=total_batches)
self.optimizer.zero_grad()
for itr, batch in enumerate(tk0):
images, targets = batch
loss, preds= self.forward(images, targets)
loss = loss / self.accumulation_steps
if phase == "train":
loss.backward()
if (itr + 1 ) % self.accumulation_steps == 0:
self.optimizer.step()
self.optimizer.zero_grad()
running_loss += loss.item()
preds = preds.detach().cpu()
targets = targets.detach().cpu()
meter.update(targets, preds)
tk0.set_postfix(loss=(running_loss / ((itr + 1))))
epoch_loss = (running_loss * self.accumulation_steps) / total_batches
acc, f1, precision, recall = epoch_log(phase, epoch, epoch_loss, meter, start)
self.losses[phase].append(epoch_loss)
self.acc_scores[phase].append(acc)
torch.cuda.empty_cache()
return epoch_loss, acc, f1, precision, recall
def train_end(self):
train_loss = self.losses["train"]
val_loss = self.losses["val"]
train_acc = self.acc_scores["train"]
val_acc = self.acc_scores["val"]
df_data=np.array([train_loss, train_acc, val_loss, val_acc]).T
df = pd.DataFrame(df_data,columns = ['train_loss','train_acc', 'val_loss', 'val_acc'])
df.to_csv('logs/'+self.name+".csv")
def predict(self):
self.net.eval()
with torch.no_grad():
self.iterate(1,'test')
print('Done')
def fit(self, epochs):
# self.num_epochs+=epochs
for epoch in range(0, self.num_epochs):
self.iterate(epoch, "train")
state = {
"epoch": epoch,
"best_loss": self.best_val_loss,
"best_f1": self.best_f1_score,
"state_dict": self.net.state_dict(),
"optimizer": self.optimizer.state_dict(),
}
self.net.eval()
with torch.no_grad():
epoch_loss, acc, f1, precision, recall = self.iterate(epoch, "val")
self.scheduler.step(epoch_loss)
if f1 > self.best_f1_score:
print("* New optimal found according to f1 score, saving state *")
state["best_f1"] = self.best_f1_score = f1
os.makedirs('models/', exist_ok=True)
torch.save(state, 'models/'+self.name+'_best_f1.pth')
if epoch_loss < self.best_val_loss:
print("* New optimal found according to val loss, saving state *")
state["best_loss"] = self.best_val_loss = epoch_loss
os.makedirs('models/', exist_ok=True)
torch.save(state, 'models/'+self.name+'_best_loss.pth')
print()
self.train_end()
# -
# ### Create Model and start training
# + colab={} colab_type="code" id="BWDGXpMmAgkp"
try:
from efficientnet_pytorch import EfficientNet
except:
os.system(f"""pip install efficientnet-pytorch""")
from efficientnet_pytorch import EfficientNet
model = EfficientNet.from_pretrained('efficientnet-b3')
num_ftrs = model._fc.in_features
model._fc = nn.Linear(num_ftrs, 1)
# -
model_trainer = Trainer(model, optim='Over9000',bs=32, lr=1e-3, name='b3-model-1-Over9000')
model_trainer.do_cutmix = False
model_trainer.fit(20)
# ### Predict and Test
dl_val = provider('val')
path = "models/b3-model-1-Over9000_best_loss.pth"
checkpoint = torch.load(path)
model.load_state_dict(checkpoint["state_dict"])
for img, y_true in dl_val:
y_preds = model(img)
y_preds = nn.Sigmoid()(y_preds)
y_preds = y_preds.detach().cpu().numpy()
y_preds = (y_preds > 0.4).astype('uint8')
img = img.detach().cpu().permute(0, 2,3,1).numpy()
for i in range(img.shape[0]):
image = img[i]
y_tr = y_true[i].item()
plt.imshow(image)
plt.show()
print("True Label: ", y_tr, "Predicted Label: ", y_preds[i])
break
|
notebooks/Covid_ClassificationTask.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Nested Functions
def outer_func(outer_var):
def inner_func(inner_var):
print "Outer variable is ",outer_var
print "Inner variable is ",inner_var
inner_func("world")
outer_func("hello")
# ## Accessing variables in enclosing scope
def outer_func2(outer_var):
print "Outer variable in outer function is ", outer_var
def inner_func2(inner_var):
outer_var = "yolo"
print "Outer variable in inner function is ",outer_var
print "Inner variable in inner function is ",inner_var
inner_func2("world")
print "After inner function call"
print "Outer variable in outer function is ", outer_var
outer_func2("hello")
# # Closures
def outer_func3(outer_var):
#Example of closure
def inner_func3():
print outer_var
return inner_func3
my_closure = outer_func3("hello world")
my_closure()
def add_two_cond(sen1, sen2):
def use_operator(my_op):
print sen1 + " " + my_op + " " + sen2
return use_operator
my_joined_cond = add_two_cond("work hard","play hard")
my_joined_cond(" AND ")
my_joined_cond(" OR ")
# # Closure Attributes
dir(my_closure)
type(my_closure.func_closure)
len(my_closure.func_closure)
dir(my_closure.func_closure[0])
my_closure.func_closure[0].cell_contents
len(my_joined_cond.func_closure)
for x in my_joined_cond.func_closure:
print x.cell_contents
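# Note: the func_closure attribute shown above exists only in Python 2; in Python 3 the same information lives on __closure__. A sketch of the Python 3 equivalent:

```python
def outer(msg):
    def inner():
        return msg
    return inner

clo = outer("hello world")
# each free variable of the closure is stored in a cell object
print(clo.__closure__[0].cell_contents)  # hello world
```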
|
_src/Section 3/3.1 Closures and Nested Function.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
# IPython, pandas and matplotlib have a number of useful options you can use to make it easier to view and format your data. This notebook collects a bunch of them in one place. I hope this will be a useful reference.
#
# The original blog posting is on http://pbpython.com/ipython-pandas-display-tips.html
# ## Import modules and some sample data
# First, do our standard pandas, numpy and matplotlib imports as well as configure inline displays of plots.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# One of the simple things we can do is override the default CSS to customize our DataFrame output.
#
# This specific example is from - [<NAME>' talk at pycon](https://www.youtube.com/watch?v=5JnMutdy6Fw "Pandas From The Ground Up")
# For the purposes of the notebook, I'm defining CSS as a variable but you could easily read in from a file as well.
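# For instance, reading the stylesheet from a file could look like this (a sketch; the filename `dataframe.css` is my own, not part of the original post):

```python
from pathlib import Path

# write a tiny rule first so the sketch is self-contained, then read it back
Path("dataframe.css").write_text("table.dataframe td { border: 2px solid #ccf; }")
css_from_file = Path("dataframe.css").read_text()
print(css_from_file)
```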
CSS = """
body {
margin: 0;
font-family: Helvetica;
}
table.dataframe {
border-collapse: collapse;
border: none;
}
table.dataframe tr {
border: none;
}
table.dataframe td, table.dataframe th {
margin: 0;
border: 1px solid white;
padding-left: 0.25em;
padding-right: 0.25em;
}
table.dataframe th:not(:empty) {
background-color: #fec;
text-align: left;
font-weight: normal;
}
table.dataframe tr:nth-child(2) th:empty {
border-left: none;
border-right: 1px dashed #888;
}
table.dataframe td {
border: 2px solid #ccf;
background-color: #f4f4ff;
}
"""
# Now add this CSS into the current notebook's HTML.
from IPython.core.display import HTML
HTML('<style>{}</style>'.format(CSS))
SALES=pd.read_csv("../data/sample-sales-tax.csv", parse_dates=True)
SALES.head()
# You can see how the CSS is now applied to the DataFrame and how you could easily modify it to customize it to your liking.
#
# Jupyter notebooks do a good job of automatically displaying information but sometimes you want to force data to display. Fortunately, IPython provides an option. This is especially useful if you want to display multiple DataFrames.
from IPython.display import display
display(SALES.head(2))
display(SALES.tail(2))
display(SALES.describe())
# ## Using pandas settings to control output
# Pandas has many different options to control how data is displayed.
#
# You can use max_rows to control how many rows are displayed
pd.set_option("display.max_rows",4)
SALES
# Depending on the data set, you may only want to display a smaller number of columns.
pd.set_option("display.max_columns",6)
SALES
# You can control how many decimal points of precision to display
pd.set_option('display.precision', 2)
SALES
pd.set_option('display.precision', 7)
SALES
# You can also format floating point numbers using float_format
pd.set_option('display.float_format', '{:.2f}'.format)
SALES
# Note that this applies to all floating-point columns. In our example, applying dollar signs to everything would not be correct.
pd.set_option('display.float_format', '${:.2f}'.format)
SALES
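# These option changes are global for the session. To undo them, or to scope a format to a single block, pandas provides `reset_option` and `option_context` — a sketch using the options set above:

```python
import pandas as pd

# restore the defaults changed above
pd.reset_option("display.max_rows")
pd.reset_option("display.max_columns")
pd.reset_option("display.float_format")
print(pd.get_option("display.max_rows"))  # 60, the pandas default

# or scope a format to a single block instead of setting it globally
with pd.option_context("display.float_format", "${:.2f}".format):
    pass  # DataFrames displayed inside this block use the dollar format
```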
# ## Third Party Plugins
# Quantopian has a useful plugin called qgrid - https://github.com/quantopian/qgrid
#
# Import it and install it.
import qgrid
qgrid.nbinstall(overwrite=True)
# Showing the data is straightforward.
qgrid.show_grid(SALES, remote_js=True)
# The plugin is very similar to the capability of an Excel autofilter. It can be handy to quickly filter and sort your data.
# ## Improving your plots
# I have mentioned before how the default pandas plots don't look so great. Fortunately, there are style sheets in matplotlib which go a long way towards improving the visualization of your data.
#
# Here is a simple plot with the default values.
SALES.groupby('name')['quantity'].sum().plot(kind="bar")
# We can use some of the matplotlib styles available to us to make this look better.
# http://matplotlib.org/users/style_sheets.html
plt.style.use('ggplot')
SALES.groupby('name')['quantity'].sum().plot(kind="bar")
# You can see all the styles available
plt.style.available
plt.style.use('bmh')
SALES.groupby('name')['quantity'].sum().plot(kind="bar")
plt.style.use('fivethirtyeight')
SALES.groupby('name')['quantity'].sum().plot(kind="bar")
# Each of the different styles has subtle (and not so subtle) changes. Fortunately it is easy to experiment with them on your own plots.
#
#
# You can find other articles at [Practical Business Python](http://pbpython.com)
#
# This notebook is referenced in the following post - http://pbpython.com/ipython-pandas-display-tips.html
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# # Fuzzing APIs
#
# So far, we have always generated _system input_, i.e. data that the program as a whole obtains via its input channels. However, we can also generate inputs that go directly into individual functions, gaining flexibility and speed in the process. In this chapter, we explore the use of grammars to synthesize code for function calls, which allows you to generate _program code that very efficiently invokes functions directly._
# + [markdown] button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "skip"}
# **Prerequisites**
#
# * You have to know how grammar fuzzing works, e.g. from the [chapter on grammars](Grammars.ipynb).
# * We make use of _generator functions_, as discussed in the [chapter on fuzzing with generators](GeneratorGrammarFuzzer.ipynb).
# * We make use of probabilities, as discussed in the [chapter on fuzzing with probabilities](ProbabilisticGrammarFuzzer.ipynb).
# + [markdown] slideshow={"slide_type": "skip"}
# ## Synopsis
# <!-- Automatically generated. Do not edit. -->
#
# To [use the code provided in this chapter](Importing.ipynb), write
#
# ```python
# >>> from fuzzingbook.APIFuzzer import <identifier>
# ```
#
# and then make use of the following features.
#
#
# This chapter provides grammars and grammar constructors that are useful for generating function calls.
#
# * `INT_GRAMMAR`, `FLOAT_GRAMMAR`, `ASCII_STRING_GRAMMAR` produce integers, floats, and strings, respectively.
# * `int_grammar_with_range(start, end)` produces an integer grammar with values `N` such that `start <= N <= end`.
# * `float_grammar_with_range(start, end)` produces a floating-number grammar with values `N` such that `start <= N <= end`.
#
# The grammars are [probabilistic](ProbabilisticGrammarFuzzer.ipynb) and make use of [generators](GeneratorGrammarFuzzer.ipynb), so use `ProbabilisticGeneratorGrammarFuzzer` as a producer.
#
# ```python
# >>> from GeneratorGrammarFuzzer import ProbabilisticGeneratorGrammarFuzzer
# >>> int_grammar = int_grammar_with_range(100, 200)
# >>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)
# >>> [fuzzer.fuzz() for i in range(10)]
# ['172', '102', '127', '119', '167', '186', '133', '155', '111', '111']
# ```
# Such values can be immediately used for testing function calls:
#
# ```python
# >>> from math import sqrt
# >>> eval("sqrt(" + fuzzer.fuzz() + ")")
# 13.45362404707371
# ```
# These grammars can also be composed to form more complex grammars:
#
# * `list_grammar(object_grammar)` returns a grammar that produces lists of objects as defined by `object_grammar`.
#
# ```python
# >>> int_list_grammar = list_grammar(int_grammar)
# >>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_list_grammar)
# >>> [fuzzer.fuzz() for i in range(5)]
# ['[194, 118, 169, 164, 169, 190, 172, 144, 174]',
# '[109, 127, 185, 155]',
# '[146, 103, 114, 185, 119, 148, 169, 167, 161]',
# '[]',
# '[138, 123, 147, 112, 139, 190, 114, 112]']
# >>> eval("len(" + fuzzer.fuzz() + ")")
# 2
# ```
#
# + [markdown] button=false new_sheet=true run_control={"read_only": false} toc-hr-collapsed=false
# ## Fuzzing a Function
#
# Let us start with our first problem: How do we fuzz a given function? For an interpreted language like Python, this is pretty straightforward. All we need to do is to generate _calls_ to the function(s) we want to test. This is something we can easily do with a grammar.
# + [markdown] button=false new_sheet=true run_control={"read_only": false}
# As an example, consider the `urlparse()` function from the Python library. `urlparse()` takes a URL and decomposes it into its individual components.
# + button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "skip"}
import bookutils
# -
from urllib.parse import urlparse
urlparse('https://www.fuzzingbook.com/html/APIFuzzer.html')
# You see how the individual elements of the URL – the _scheme_ (`"https"`), the _network location_ (`"www.fuzzingbook.com"`), or the path (`"/html/APIFuzzer.html"`) – are all properly identified. Other elements (like `params`, `query`, or `fragment`) are empty, because they were not part of our input.
# To test `urlparse()`, we'd want to feed it a large set of different URLs. We can obtain these from the URL grammar we had defined in the ["Grammars"](Grammars.ipynb) chapter.
from Grammars import URL_GRAMMAR, is_valid_grammar, START_SYMBOL, new_symbol, opts, extend_grammar
from GrammarFuzzer import GrammarFuzzer, display_tree, all_terminals
url_fuzzer = GrammarFuzzer(URL_GRAMMAR)
for i in range(10):
url = url_fuzzer.fuzz()
print(urlparse(url))
# This way, we can easily test any Python function – by setting up a scaffold that runs it. How would we proceed, though, if we wanted to have a test that can be re-run again and again, without having to generate new calls every time?
# ## Synthesizing Code
#
# The "scaffolding" method, as sketched above, has an important downside: It couples test generation and test execution into a single unit, disallowing running both at different times, or for different languages. To decouple the two, we take another approach: Rather than generating inputs and immediately feeding this input into a function, we _synthesize code_ instead that invokes functions with a given input.
# For instance, if we generate the string
call = "urlparse('http://www.example.com/')"
# we can execute this string as a whole (and thus run the test) at any time:
eval(call)
# To systematically generate such calls, we can again use a grammar:
# +
URLPARSE_GRAMMAR = {
"<call>":
['urlparse("<url>")']
}
# Import definitions from URL_GRAMMAR
URLPARSE_GRAMMAR.update(URL_GRAMMAR)
URLPARSE_GRAMMAR["<start>"] = ["<call>"]
assert is_valid_grammar(URLPARSE_GRAMMAR)
# -
# This grammar creates calls in the form `urlparse(<url>)`, where `<url>` comes from the "imported" URL grammar. The idea is to create many of these calls and to feed them into the Python interpreter.
URLPARSE_GRAMMAR
# We can now use this grammar for fuzzing and synthesizing calls to `urlparse()`:
urlparse_fuzzer = GrammarFuzzer(URLPARSE_GRAMMAR)
urlparse_fuzzer.fuzz()
# Just as above, we can immediately execute these calls. To better see what is happening, we define a small helper function:
# Call function_name(arg[0], arg[1], ...) as a string
def do_call(call_string):
print(call_string)
result = eval(call_string)
print("\t= " + repr(result))
return result
call = urlparse_fuzzer.fuzz()
do_call(call)
# If `urlparse()` were a C function, for instance, we could embed its call into some (also generated) C function:
URLPARSE_C_GRAMMAR = {
"<cfile>": ["<cheader><cfunction>"],
"<cheader>": ['#include "urlparse.h"\n\n'],
"<cfunction>": ["void test() {\n<calls>}\n"],
"<calls>": ["<call>", "<calls><call>"],
"<call>": [' urlparse("<url>");\n']
}
URLPARSE_C_GRAMMAR.update(URL_GRAMMAR)
URLPARSE_C_GRAMMAR["<start>"] = ["<cfile>"]
assert is_valid_grammar(URLPARSE_C_GRAMMAR)
urlparse_fuzzer = GrammarFuzzer(URLPARSE_C_GRAMMAR)
print(urlparse_fuzzer.fuzz())
# ## Synthesizing Oracles
#
# In our `urlparse()` example, both the Python as well as the C variant only check for _generic_ errors in `urlparse()`; that is, they only detect fatal errors and exceptions. For a full test, we need to set up a specific *oracle* as well that checks whether the result is valid.
# Our plan is to check whether specific parts of the URL reappear in the result – that is, if the scheme is `http:`, then the `ParseResult` returned should also contain a `http:` scheme. As discussed in the [chapter on fuzzing with generators](GeneratorGrammarFuzzer.ipynb), equalities of strings such as `http:` across two symbols cannot be expressed in a context-free grammar. We can, however, use a _generator function_ (also introduced in the [chapter on fuzzing with generators](GeneratorGrammarFuzzer.ipynb)) to automatically enforce such equalities.
# Here is an example. Invoking `geturl()` on a `urlparse()` result should return the URL as originally passed to `urlparse()`.
from GeneratorGrammarFuzzer import GeneratorGrammarFuzzer, ProbabilisticGeneratorGrammarFuzzer
URLPARSE_ORACLE_GRAMMAR = extend_grammar(URLPARSE_GRAMMAR,
{
"<call>": [("assert urlparse('<url>').geturl() == '<url>'",
opts(post=lambda url_1, url_2: [None, url_1]))]
})
urlparse_oracle_fuzzer = GeneratorGrammarFuzzer(URLPARSE_ORACLE_GRAMMAR)
test = urlparse_oracle_fuzzer.fuzz()
print(test)
exec(test)
# In a similar way, we can also check individual components of the result:
# +
URLPARSE_ORACLE_GRAMMAR = extend_grammar(URLPARSE_GRAMMAR,
{
"<call>": [("result = urlparse('<scheme>://<host><path>?<params>')\n"
# + "print(result)\n"
+ "assert result.scheme == '<scheme>'\n"
+ "assert result.netloc == '<host>'\n"
+ "assert result.path == '<path>'\n"
+ "assert result.query == '<params>'",
opts(post=lambda scheme_1, authority_1, path_1, params_1,
scheme_2, authority_2, path_2, params_2:
[None, None, None, None,
scheme_1, authority_1, path_1, params_1]))]
})
# Get rid of unused symbols
del URLPARSE_ORACLE_GRAMMAR["<url>"]
del URLPARSE_ORACLE_GRAMMAR["<query>"]
del URLPARSE_ORACLE_GRAMMAR["<authority>"]
del URLPARSE_ORACLE_GRAMMAR["<userinfo>"]
del URLPARSE_ORACLE_GRAMMAR["<port>"]
# -
urlparse_oracle_fuzzer = GeneratorGrammarFuzzer(URLPARSE_ORACLE_GRAMMAR)
test = urlparse_oracle_fuzzer.fuzz()
print(test)
exec(test)
# The use of generator functions may feel a bit cumbersome. Indeed, if we uniquely stick to Python, we could also create a _unit test_ that directly invokes the fuzzer to generate individual parts:
def fuzzed_url_element(symbol):
return GrammarFuzzer(URLPARSE_GRAMMAR, start_symbol=symbol).fuzz()
scheme = fuzzed_url_element("<scheme>")
authority = fuzzed_url_element("<authority>")
path = fuzzed_url_element("<path>")
query = fuzzed_url_element("<params>")
url = "%s://%s%s?%s" % (scheme, authority, path, query)
result = urlparse(url)
# print(result)
assert result.geturl() == url
assert result.scheme == scheme
assert result.path == path
assert result.query == query
# Using such a unit test makes it easier to express oracles. However, we lose the ability to systematically cover individual URL elements and alternatives as with [`GrammarCoverageFuzzer`](GrammarCoverageFuzzer.ipynb) as well as the ability to guide generation towards specific elements as with [`ProbabilisticGrammarFuzzer`](ProbabilisticGrammarFuzzer.ipynb). Furthermore, a grammar allows us to generate tests for arbitrary programming languages and APIs.
# + [markdown] toc-hr-collapsed=false
# ## Synthesizing Data
#
# For `urlparse()`, we have used a very specific grammar for creating a very specific argument. Many functions take basic data types as (some) arguments, though; we therefore define grammars that generate precisely those arguments. Even better, we can define functions that _generate_ grammars tailored towards our specific needs, returning values in a particular range, for instance.
# -
# ### Integers
#
# We introduce a simple grammar to produce integers.
from Grammars import convert_ebnf_grammar, crange
from ProbabilisticGrammarFuzzer import ProbabilisticGrammarFuzzer
# +
INT_EBNF_GRAMMAR = {
"<start>": ["<int>"],
"<int>": ["<_int>"],
"<_int>": ["(-)?<leaddigit><digit>*", "0"],
"<leaddigit>": crange('1', '9'),
"<digit>": crange('0', '9')
}
assert is_valid_grammar(INT_EBNF_GRAMMAR)
# -
INT_GRAMMAR = convert_ebnf_grammar(INT_EBNF_GRAMMAR)
INT_GRAMMAR
int_fuzzer = GrammarFuzzer(INT_GRAMMAR)
print([int_fuzzer.fuzz() for i in range(10)])
# If we need integers in a specific range, we can add a generator function that does just that:
from Grammars import set_opts
import random
def int_grammar_with_range(start, end):
int_grammar = extend_grammar(INT_GRAMMAR)
set_opts(int_grammar, "<int>", "<_int>",
opts(pre=lambda: random.randint(start, end)))
return int_grammar
int_fuzzer = GeneratorGrammarFuzzer(int_grammar_with_range(900, 1000))
[int_fuzzer.fuzz() for i in range(10)]
# ### Floats
#
# The grammar for floating-point values closely resembles the integer grammar.
# +
FLOAT_EBNF_GRAMMAR = {
"<start>": ["<float>"],
"<float>": [("<_float>", opts(prob=0.9)), "inf", "NaN"],
"<_float>": ["<int>(.<digit>+)?<exp>?"],
"<exp>": ["e<int>"]
}
FLOAT_EBNF_GRAMMAR.update(INT_EBNF_GRAMMAR)
FLOAT_EBNF_GRAMMAR["<start>"] = ["<float>"]
assert is_valid_grammar(FLOAT_EBNF_GRAMMAR)
# -
FLOAT_GRAMMAR = convert_ebnf_grammar(FLOAT_EBNF_GRAMMAR)
FLOAT_GRAMMAR
float_fuzzer = ProbabilisticGrammarFuzzer(FLOAT_GRAMMAR)
print([float_fuzzer.fuzz() for i in range(10)])
def float_grammar_with_range(start, end):
float_grammar = extend_grammar(FLOAT_GRAMMAR)
set_opts(float_grammar, "<float>", "<_float>", opts(
pre=lambda: start + random.random() * (end - start)))
return float_grammar
float_fuzzer = ProbabilisticGeneratorGrammarFuzzer(
float_grammar_with_range(900.0, 900.9))
[float_fuzzer.fuzz() for i in range(10)]
# ### Strings
# Finally, we introduce a grammar for producing strings.
# +
ASCII_STRING_EBNF_GRAMMAR = {
"<start>": ["<ascii-string>"],
"<ascii-string>": ['"<ascii-chars>"'],
"<ascii-chars>": [
("", opts(prob=0.05)),
"<ascii-chars><ascii-char>"
],
"<ascii-char>": crange(" ", "!") + [r'\"'] + crange("#", "~")
}
assert is_valid_grammar(ASCII_STRING_EBNF_GRAMMAR)
# -
ASCII_STRING_GRAMMAR = convert_ebnf_grammar(ASCII_STRING_EBNF_GRAMMAR)
string_fuzzer = ProbabilisticGrammarFuzzer(ASCII_STRING_GRAMMAR)
print([string_fuzzer.fuzz() for i in range(10)])
# ## Synthesizing Composite Data
#
# From basic data, as discussed above, we can also produce _composite data_ in data structures such as sets or lists. We illustrate such generation on lists.
# ### Lists
# +
LIST_EBNF_GRAMMAR = {
"<start>": ["<list>"],
"<list>": [
("[]", opts(prob=0.05)),
"[<list-objects>]"
],
"<list-objects>": [
("<list-object>", opts(prob=0.2)),
"<list-object>, <list-objects>"
],
"<list-object>": ["0"],
}
assert is_valid_grammar(LIST_EBNF_GRAMMAR)
# -
LIST_GRAMMAR = convert_ebnf_grammar(LIST_EBNF_GRAMMAR)
# Our list generator takes a grammar that produces objects; it then instantiates a list grammar with the objects from these grammars.
def list_grammar(object_grammar, list_object_symbol=None):
obj_list_grammar = extend_grammar(LIST_GRAMMAR)
if list_object_symbol is None:
# Default: Use the first expansion of <start> as list symbol
list_object_symbol = object_grammar[START_SYMBOL][0]
obj_list_grammar.update(object_grammar)
obj_list_grammar[START_SYMBOL] = ["<list>"]
obj_list_grammar["<list-object>"] = [list_object_symbol]
assert is_valid_grammar(obj_list_grammar)
return obj_list_grammar
int_list_fuzzer = ProbabilisticGrammarFuzzer(list_grammar(INT_GRAMMAR))
[int_list_fuzzer.fuzz() for i in range(10)]
string_list_fuzzer = ProbabilisticGrammarFuzzer(
list_grammar(ASCII_STRING_GRAMMAR))
[string_list_fuzzer.fuzz() for i in range(10)]
float_list_fuzzer = ProbabilisticGeneratorGrammarFuzzer(list_grammar(
float_grammar_with_range(900.0, 900.9)))
[float_list_fuzzer.fuzz() for i in range(10)]
# Generators for dictionaries, sets, etc. can be defined in a similar fashion. By plugging together grammar generators, we can produce data structures with arbitrary elements.
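# Following the same pattern as `LIST_EBNF_GRAMMAR`, a dictionary grammar might look like this — a sketch with symbol names of my own, using plain `(expansion, dict(prob=...))` pairs in place of the chapter's `opts()` helper (which builds the same option dictionaries):

```python
DICT_EBNF_GRAMMAR = {
    "<start>": ["<dict>"],
    "<dict>": [
        ("{}", dict(prob=0.05)),          # empty dict, low probability
        "{<dict-entries>}"
    ],
    "<dict-entries>": [
        ("<dict-entry>", dict(prob=0.2)),
        "<dict-entry>, <dict-entries>"
    ],
    "<dict-entry>": ["<key>: <value>"],
    "<key>": ["0"],     # placeholders; replace via update() with real grammars
    "<value>": ["0"],
}

# quick sanity check: every nonterminal used in an expansion is also defined
import re
used = {sym
        for expansions in DICT_EBNF_GRAMMAR.values()
        for exp in expansions
        for sym in re.findall(r"<[^<> ]+>", exp if isinstance(exp, str) else exp[0])}
assert used <= set(DICT_EBNF_GRAMMAR)
```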
# ## Synopsis
#
# This chapter provides grammars and grammar constructors that are useful for generating function calls.
#
# * `INT_GRAMMAR`, `FLOAT_GRAMMAR`, `ASCII_STRING_GRAMMAR` produce integers, floats, and strings, respectively.
# * `int_grammar_with_range(start, end)` produces an integer grammar with values `N` such that `start <= N <= end`.
# * `float_grammar_with_range(start, end)` produces a floating-number grammar with values `N` such that `start <= N <= end`.
# The grammars are [probabilistic](ProbabilisticGrammarFuzzer.ipynb) and make use of [generators](GeneratorGrammarFuzzer.ipynb), so use `ProbabilisticGeneratorGrammarFuzzer` as a producer.
from GeneratorGrammarFuzzer import ProbabilisticGeneratorGrammarFuzzer
int_grammar = int_grammar_with_range(100, 200)
fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)
[fuzzer.fuzz() for i in range(10)]
# Such values can be immediately used for testing function calls:
from math import sqrt
eval("sqrt(" + fuzzer.fuzz() + ")")
# These grammars can also be composed to form more complex grammars:
#
# * `list_grammar(object_grammar)` returns a grammar that produces lists of objects as defined by `object_grammar`.
int_list_grammar = list_grammar(int_grammar)
fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_list_grammar)
[fuzzer.fuzz() for i in range(5)]
eval("len(" + fuzzer.fuzz() + ")")
# + [markdown] button=false new_sheet=true run_control={"read_only": false}
# ## Lessons Learned
#
# * To fuzz individual functions, one can easily set up grammars that produce function calls.
# * Fuzzing at the API level can be much faster than fuzzing at the system level, but brings the risk of false alarms by violating implicit preconditions.
# + [markdown] button=false new_sheet=false run_control={"read_only": false}
# ## Next Steps
#
# This chapter was all about manually writing tests and controlling which data gets generated. [In the next chapter](Carver.ipynb), we will introduce a much higher level of automation:
#
# * _Carving_ automatically records function calls and arguments from program executions.
# * We can turn these into _grammars_, allowing us to test these functions with various combinations of recorded values.
#
# With these techniques, we automatically obtain grammars that already invoke functions in application contexts, making our work of specifying them much easier.
# -
# ## Background
#
# The idea of using generator functions to generate input structures was first explored in QuickCheck \cite{Claessen2000}. A very nice implementation for Python is the [hypothesis package](https://hypothesis.readthedocs.io/en/latest/), which allows you to write and combine data structure generators for testing APIs.
#
#
# + [markdown] button=false new_sheet=true run_control={"read_only": false}
# ## Exercises
#
# The exercises for this chapter combine the above techniques with fuzzing techniques introduced earlier.
# + [markdown] solution2="hidden" solution2_first=true
# ### Exercise 1: Deep Arguments
#
# In the example generating oracles for `urlparse()`, important elements such as `authority` or `port` are not checked. Enrich `URLPARSE_ORACLE_GRAMMAR` with post-expansion functions that store the generated elements in a symbol table, such that they can be accessed when generating the assertions.
# + [markdown] slideshow={"slide_type": "skip"} solution2="hidden"
# **Solution.** Left to the reader.
# + [markdown] button=false new_sheet=false run_control={"read_only": false} solution2="hidden" solution2_first=true
# ### Exercise 2: Covering Argument Combinations
#
# In the chapter on [configuration testing](ConfigurationFuzzer.ipynb), we also discussed _combinatorial testing_ – that is, systematic coverage of _sets_ of configuration elements. Implement a scheme that by changing the grammar, allows all _pairs_ of argument values to be covered.
# + [markdown] slideshow={"slide_type": "skip"} solution2="hidden"
# **Solution.** Left to the reader.
# + [markdown] button=false new_sheet=false run_control={"read_only": false} solution2="hidden" solution2_first=true
# ### Exercise 3: Mutating Arguments
#
# To widen the range of arguments to be used during testing, apply the _mutation schemes_ introduced in [mutation fuzzing](MutationFuzzer.ipynb) – for instance, flip individual bytes or delete characters from strings. Apply this either during grammar inference or as a separate step when invoking functions.
# + [markdown] slideshow={"slide_type": "skip"} solution2="hidden"
# **Solution.** Left to the reader.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Quantum Neural Networks for Natural Language Processing
#
#
# `Linux` `CPU` `End-to-End` `Beginner` `Intermediate` `Advanced`
#
# [](https://gitee.com/mindspore/docs/blob/master/tutorials/training/source_zh_cn/advanced_use/qnn_for_nlp.ipynb)
#
# ## Overview
#
# Word embedding is an important step in natural language processing: it maps word vectors from a high-dimensional space into a continuous vector space of lower dimension. As the amount of corpus information given to a neural network grows, training becomes more and more difficult. Using quantum-mechanical properties such as state superposition and entanglement, we can employ a quantum neural network to process this classical corpus information, integrate it into the training process, and improve convergence accuracy. Below, we build a simple hybrid quantum-classical neural network to carry out a word-embedding task.
#
#
# ## Environment Setup
#
# Import the modules this tutorial depends on.
#
import numpy as np
import time
from mindquantum.ops import QubitOperator
import mindspore.ops as ops
import mindspore.dataset as ds
from mindspore import nn
from mindspore.train.callback import LossMonitor
from mindspore import Model
from mindquantum.nn import MindQuantumLayer
from mindquantum import Hamiltonian, Circuit, RX, RY, X, H, UN
# This tutorial implements a [CBOW model](https://blog.csdn.net/u010665216/article/details/78724856), which predicts a word from its surrounding context. For example, the sentence "I love natural language processing" can be split into five words, \["I", "love", "natural", "language", "processing"\]. With a window size of 2, the task is to predict the target word "natural" from \["I", "love", "language", "processing"\]. Taking a window of 2 as an example, we build the following quantum neural network to carry out the embedding task.
#
# 
#
# Here, the encoder circuit encodes the information of "I", "love", "language", and "processing" into the quantum circuit. The trainable quantum circuit consists of four Ansatz circuits, and at the end of the circuit we measure the qubits in the $\text{Z}$ basis; the number of qubits to measure is determined by the dimension of the embedding space.
#
# ## Data Preprocessing
#
# We process the sentence to be handled, build a vocabulary for it, and generate sample points according to the window size.
#
#
def GenerateWordDictAndSample(corpus, window=2):
all_words = corpus.split()
word_set = list(set(all_words))
word_set.sort()
word_dict = {w: i for i,w in enumerate(word_set)}
sampling = []
for index, word in enumerate(all_words[window:-window]):
around = []
for i in range(index, index + 2*window + 1):
if i != index + window:
around.append(all_words[i])
sampling.append([around,all_words[index + window]])
return word_dict, sampling
word_dict, sample = GenerateWordDictAndSample("I love natural language processing")
print(word_dict)
print('word dict size: ', len(word_dict))
print('samples: ', sample)
print('number of samples: ', len(sample))
# From the above, the vocabulary of this sentence has size 5 and yields one sample point.
#
# ## Encoder Circuit
#
# For simplicity, the encoder circuit we use consists of $\text{RX}$ rotation gates, structured as follows.
#
# 
#
# We apply one $\text{RX}$ rotation gate to each qubit.
def GenerateEncoderCircuit(n_qubits, prefix=''):
if len(prefix) != 0 and prefix[-1] != '_':
prefix += '_'
circ = Circuit()
for i in range(n_qubits):
circ += RX(prefix + str(i)).on(i)
return circ
GenerateEncoderCircuit(3,prefix='e')
# We usually label the two states of a two-level qubit by $\left|0\right>$ and $\left|1\right>$. By the superposition principle, a qubit can also be in a superposition of these two states:
#
# $$\left|\psi\right>=\alpha\left|0\right>+\beta\left|1\right>$$
#
# An $n$-qubit state lives in a $2^n$-dimensional Hilbert space. For the five-word vocabulary above, only $\lceil \log_2 5 \rceil=3$ qubits are needed to encode it, which illustrates an advantage of quantum computation.
#
# For example, the word "love" in the vocabulary has label 2, whose binary representation is `010`; we only need to set `e_0`, `e_1`, and `e_2` in the encoder circuit to $0$, $\pi$, and $0$ respectively. Let us verify this with the `Evolution` operator.
# +
from mindquantum.nn import generate_evolution_operator
from mindspore import context
from mindspore import Tensor
n_qubits = 3 # number of qubits of this quantum circuit
label = 2 # label need to encode
label_bin = bin(label)[-1:1:-1].ljust(n_qubits,'0') # binary form of label
label_array = np.array([int(i)*np.pi for i in label_bin]).astype(np.float32) # parameter value of encoder
encoder = GenerateEncoderCircuit(n_qubits, prefix='e') # encoder circuit
encoder_para_names = encoder.para_name # parameter names of encoder
print("Label is: ", label)
print("Binary label is: ", label_bin)
print("Parameters of encoder is: \n", np.round(label_array, 5))
print("Encoder circuit is: \n", encoder)
print("Encoder parameter names are: \n", encoder_para_names)
context.set_context(mode=context.GRAPH_MODE, device_target="CPU")
# quantum state evolution operator
evol = generate_evolution_operator(param_names=encoder_para_names, circuit=encoder)
state = evol(Tensor(label_array))
amp = np.round(np.abs(state)**2, 3)
print("Amplitude of quantum state is: \n", amp)
print("Label in quantum state is: ", np.argmax(amp))
# -
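# The same check can be done without MindQuantum: apply $\text{RX}(\theta)$ gates to $\left|0\right>$ with NumPy and inspect the amplitudes. This is a sketch of my own, assuming qubit 0 is the least-significant bit, matching the little-endian label encoding above:

```python
import numpy as np

def rx_matrix(theta):
    # single-qubit RX rotation gate
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def encoded_state(label, n_qubits):
    bits = bin(label)[2:].zfill(n_qubits)[::-1]   # little-endian bit string
    state = np.array([1.0 + 0j])
    for b in reversed(bits):                      # highest qubit first in the kron product
        qubit = rx_matrix(np.pi if b == "1" else 0.0) @ np.array([1, 0], dtype=complex)
        state = np.kron(state, qubit)
    return state

amp = np.abs(encoded_state(2, 3)) ** 2
print(np.argmax(amp))  # 2: the largest amplitude sits at the encoded label
```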
# This verification shows that for data with label 2, the position of the largest amplitude in the resulting quantum state is also 2, so the obtained state is indeed an encoding of the input label. We summarize the process of generating parameter values from encoded data in the following function.
def GenerateTrainData(sample, word_dict):
    n_qubits = int(np.ceil(np.log2(1 + max(word_dict.values()))))
data_x = []
data_y = []
for around, center in sample:
data_x.append([])
for word in around:
label = word_dict[word]
label_bin = bin(label)[-1:1:-1].ljust(n_qubits,'0')
label_array = [int(i)*np.pi for i in label_bin]
data_x[-1].extend(label_array)
data_y.append(word_dict[center])
return np.array(data_x).astype(np.float32), np.array(data_y).astype(np.int32)
GenerateTrainData(sample, word_dict)
# As the result above shows, the encoded information of the four input words is concatenated into one longer vector for the downstream neural network to consume.
#
# ## Ansatz Circuit
#
# There are many possible choices of Ansatz circuit; we pick the following one. One unit of it consists of a layer of $\text{RY}$ gates followed by a layer of $\text{CNOT}$ gates, and repeating this unit $p$ times yields the full Ansatz circuit.
#
# 
#
# The following function generates the Ansatz circuit.
def GenerateAnsatzCircuit(n_qubits, layers, prefix=''):
if len(prefix) != 0 and prefix[-1] != '_':
prefix += '_'
circ = Circuit()
for l in range(layers):
for i in range(n_qubits):
circ += RY(prefix + str(l) + '_' + str(i)).on(i)
for i in range(l % 2, n_qubits, 2):
if i < n_qubits and i + 1 < n_qubits:
circ += X.on(i + 1, i)
return circ
GenerateAnsatzCircuit(5, 2, 'a')
# ## Measurement
#
# We take the measurement results on different qubits as the dimension-reduced data. The procedure is similar to the label encoding: for example, to reduce word vectors to 5 dimensions, the data for dimension 3 is produced as follows:
#
# - The binary representation of 3 is `00011`.
# - Measure the expectation value of the $Z_0Z_1$ Hamiltonian on the final state of the circuit.
#
# The following function produces the Hamiltonians (`hams`) needed for every dimension, where `n_qubits` is the number of qubits in the circuit and `dims` is the word-embedding dimension:
def GenerateEmbeddingHamiltonian(dims, n_qubits):
hams = []
for i in range(dims):
s = ''
for j, k in enumerate(bin(i + 1)[-1:1:-1]):
if k == '1':
s = s + 'Z' + str(j) + ' '
hams.append(Hamiltonian(QubitOperator(s)))
return hams
GenerateEmbeddingHamiltonian(5, 5)
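# The Pauli strings produced above can be reproduced in plain Python, without MindQuantum installed — a sketch mirroring the loop in `GenerateEmbeddingHamiltonian`:

```python
def pauli_strings(dims):
    # dimension i is read out on the qubits where bit j of (i + 1) is set
    strings = []
    for i in range(dims):
        s = " ".join("Z" + str(j)
                     for j, k in enumerate(bin(i + 1)[-1:1:-1]) if k == "1")
        strings.append(s)
    return strings

print(pauli_strings(5))  # ['Z0', 'Z1', 'Z0 Z1', 'Z2', 'Z0 Z2']
```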
# ## Quantum Word-Embedding Layer
#
# The quantum word-embedding layer combines the encoder circuit, the trainable circuit, and the measurement Hamiltonians above to embed `num_embedding` words into `embedding_dim`-dimensional word vectors. Here we also add Hadamard gates at the very beginning of the circuit, preparing the initial state as a uniform superposition to increase the expressive power of the quantum neural network.
#
# Below we define the quantum embedding layer, which returns a quantum circuit simulation operator.
def QEmbedding(num_embedding, embedding_dim, window, layers, n_threads):
n_qubits = int(np.ceil(np.log2(num_embedding)))
hams = GenerateEmbeddingHamiltonian(embedding_dim, n_qubits)
circ = Circuit()
circ = UN(H, n_qubits)
encoder_param_name = []
ansatz_param_name = []
for w in range(2 * window):
encoder = GenerateEncoderCircuit(n_qubits, 'Encoder_' + str(w))
ansatz = GenerateAnsatzCircuit(n_qubits, layers, 'Ansatz_' + str(w))
encoder.no_grad()
circ += encoder
circ += ansatz
encoder_param_name.extend(encoder.para_name)
ansatz_param_name.extend(ansatz.para_name)
net = MindQuantumLayer(encoder_param_name,
ansatz_param_name,
circ,
hams,
n_threads=n_threads)
return net
# The full training model is similar to a classical network: it consists of an embedding layer and two fully connected layers, except that here the embedding layer is built from a quantum neural network. Below we define the quantum CBOW network.
class CBOW(nn.Cell):
def __init__(self, num_embedding, embedding_dim, window, layers, n_threads,
hidden_dim):
super(CBOW, self).__init__()
self.embedding = QEmbedding(num_embedding, embedding_dim, window,
layers, n_threads)
self.dense1 = nn.Dense(embedding_dim, hidden_dim)
self.dense2 = nn.Dense(hidden_dim, num_embedding)
self.relu = ops.ReLU()
def construct(self, x):
embed = self.embedding(x)
out = self.dense1(embed)
out = self.relu(out)
out = self.dense2(out)
return out
# Next we train on a somewhat longer sentence. First we define `LossMonitorWithCollection` to supervise convergence and collect the loss values during training.
class LossMonitorWithCollection(LossMonitor):
def __init__(self, per_print_times=1):
super(LossMonitorWithCollection, self).__init__(per_print_times)
self.loss = []
def begin(self, run_context):
self.begin_time = time.time()
def end(self, run_context):
self.end_time = time.time()
print('Total time used: {}'.format(self.end_time - self.begin_time))
def epoch_begin(self, run_context):
self.epoch_begin_time = time.time()
def epoch_end(self, run_context):
cb_params = run_context.original_args()
self.epoch_end_time = time.time()
if self._per_print_times != 0 and cb_params.cur_step_num % self._per_print_times == 0:
print('')
def step_end(self, run_context):
cb_params = run_context.original_args()
loss = cb_params.net_outputs
if isinstance(loss, (tuple, list)):
if isinstance(loss[0], Tensor) and isinstance(loss[0].asnumpy(), np.ndarray):
loss = loss[0]
if isinstance(loss, Tensor) and isinstance(loss.asnumpy(), np.ndarray):
loss = np.mean(loss.asnumpy())
cur_step_in_epoch = (cb_params.cur_step_num - 1) % cb_params.batch_num + 1
if isinstance(loss, float) and (np.isnan(loss) or np.isinf(loss)):
raise ValueError("epoch: {} step: {}. Invalid loss, terminating training.".format(
cb_params.cur_epoch_num, cur_step_in_epoch))
self.loss.append(loss)
if self._per_print_times != 0 and cb_params.cur_step_num % self._per_print_times == 0:
print("\repoch: %+3s step: %+3s time: %5.5s, loss is %5.5s" % (cb_params.cur_epoch_num, cur_step_in_epoch, time.time() - self.epoch_begin_time, loss), flush=True, end='')
# Next, we use the quantum version of `CBOW` to embed the words of a longer sentence. Before running, execute `export OMP_NUM_THREADS=4` in a terminal to set the quantum simulator to 4 threads; when the simulated quantum system has more qubits, more threads can be used to improve simulation efficiency.
# + tags=[]
import mindspore as ms
from mindspore import context
from mindspore import Tensor
context.set_context(mode=context.GRAPH_MODE, device_target="CPU")
corpus = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells."""
ms.set_seed(42)
window_size = 2
embedding_dim = 10
hidden_dim = 128
word_dict, sample = GenerateWordDictAndSample(corpus, window=window_size)
train_x,train_y = GenerateTrainData(sample, word_dict)
train_loader = ds.NumpySlicesDataset({
"around": train_x,
"center": train_y
},shuffle=False).batch(3)
net = CBOW(len(word_dict), embedding_dim, window_size, 3, 4, hidden_dim)
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
net_opt = nn.Momentum(net.trainable_params(), 0.01, 0.9)
loss_monitor = LossMonitorWithCollection(500)
model = Model(net, net_loss, net_opt)
model.train(350, train_loader, callbacks=[loss_monitor], dataset_sink_mode=False)
# -
# Plot the loss values collected during training:
# +
import matplotlib.pyplot as plt
plt.plot(loss_monitor.loss,'.')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.show()
# -
# The resulting convergence plot is:
#
# 
#
# The parameters of the quantum circuit in the quantum embedding layer can be printed as follows:
net.embedding.weight.asnumpy()
# ## Classical word embedding layer
#
# Here we build a classical CBOW network using a classical word embedding layer and compare it with the quantum version.
#
# First, construct the classical CBOW network; its parameters are similar to those of the quantum version.
class CBOWClassical(nn.Cell):
def __init__(self, num_embedding, embedding_dim, window, hidden_dim):
super(CBOWClassical, self).__init__()
self.dim = 2 * window * embedding_dim
self.embedding = nn.Embedding(num_embedding, embedding_dim, True)
self.dense1 = nn.Dense(self.dim, hidden_dim)
self.dense2 = nn.Dense(hidden_dim, num_embedding)
self.relu = ops.ReLU()
self.reshape = ops.Reshape()
def construct(self, x):
embed = self.embedding(x)
embed = self.reshape(embed, (-1, self.dim))
out = self.dense1(embed)
out = self.relu(out)
out = self.dense2(out)
return out
# Generate a dataset suitable for the classical CBOW network.
train_x = []
train_y = []
for i in sample:
around, center = i
train_y.append(word_dict[center])
train_x.append([])
for j in around:
train_x[-1].append(word_dict[j])
train_x = np.array(train_x).astype(np.int32)
train_y = np.array(train_y).astype(np.int32)
print("train_x shape: ", train_x.shape)
print("train_y shape: ", train_y.shape)
# Train the classical CBOW network.
train_loader = ds.NumpySlicesDataset({
"around": train_x,
"center": train_y
},shuffle=False).batch(3)
net = CBOWClassical(len(word_dict), embedding_dim, window_size, hidden_dim)
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
net_opt = nn.Momentum(net.trainable_params(), 0.01, 0.9)
loss_monitor = LossMonitorWithCollection(500)
model = Model(net, net_loss, net_opt)
model.train(350, train_loader, callbacks=[loss_monitor], dataset_sink_mode=False)
# Plot the loss values collected during training:
# +
import matplotlib.pyplot as plt
plt.plot(loss_monitor.loss,'.')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.show()
# -
# The resulting convergence plot is:
#
# 
#
# As shown above, the quantum word-embedding model obtained via quantum simulation also completes the embedding task well. When datasets grow too large for classical computing power, a quantum computer will be able to handle such problems with ease.
# ## References
#
# [1] <NAME>, <NAME>, <NAME>, <NAME>. [Efficient Estimation of Word Representations in
# Vector Space](https://arxiv.org/pdf/1301.3781.pdf)
tutorials/training/source_zh_cn/advanced_use/qnn_for_nlp.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CS155 Project 3 - Shakespearean Sonnets: Pre-processing data
#
# **Author:** <NAME>
#
# **Description:** this notebook pre-processes Shakespeare's sonnet datasets for training.
import re
import pickle
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams.update({'font.size': 12})
# ### 0. Visualize Datasets
#
# Let's visualize the two datasets using wordclouds.
# +
from HMM_helper import text_to_wordcloud
text = open('raw_data/shakespeare.txt').read()
wordcloud = text_to_wordcloud(text, title='Shakespeare')
text = open('raw_data/spenser.txt').read()
wordcloud = text_to_wordcloud(text, title='Spenser')
# -
# ### 1. Basic Pre-processing (word-based) for HMM
#
# My basic text pre-processing procedure includes:
# - Make texts all lower case
# - Remove punctuations
# - Remove numbers irrelevant to poems
# - Remove structural characters (newline char \n, space, etc)
# - Tokenize texts
# +
# don't count ' to keep apostrophes in words
# don't count - to keep hyphenated-words
punctuations = '''!()[]{};:"\\,<>./?@#$%^&*_~'''
def hmm_preprocess(infiles):
processed_seqs = []
for infile in infiles:
with open(infile) as f:
# split the file into sequences, 1 line as a sequence
for seq in f.readlines():
# tokenize each seq
words = seq.lower().split()
                # skip one-word lines (sonnet numbers, roman numerals, other non-verse lines)
if len(words) == 1:
continue
# remove punctuations
words = [re.sub('[%s]' % re.escape(punctuations), '', w) for w in words]
# remove empty strings
words = list(filter(None, words))
# remove empty lists
if words:
processed_seqs.append(words)
print('Total number of sequences: {}'.format(len(processed_seqs)))
return processed_seqs
# -
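# The heart of the cleaning above is the punctuation-stripping regex. A minimal,
# self-contained sketch of that step on an illustrative line (the `punctuations`
# string mirrors the one defined earlier):

```python
import re

# same punctuation set as above; apostrophes and hyphens are kept
punctuations = '''!()[]{};:"\\,<>./?@#$%^&*_~'''

def clean_line(line):
    # lower-case, whitespace-tokenize, strip punctuation, drop empty tokens
    words = [re.sub('[%s]' % re.escape(punctuations), '', w)
             for w in line.lower().split()]
    return [w for w in words if w]

print(clean_line("Shall I compare thee to a summer's day?"))
# ['shall', 'i', 'compare', 'thee', 'to', 'a', "summer's", 'day']
```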
# For the naive HMM implementation, it uses ONLY Shakespeare's sonnet file:
basic_processed_seqs = hmm_preprocess(['raw_data/shakespeare.txt'])
# For the advanced HMM implementation, it ALSO uses Spenser's sonnet file:
adv_processed_seqs = hmm_preprocess(['raw_data/shakespeare.txt', 'raw_data/spenser.txt'])
# for training the inverse lines
adv_processed_seqs_inv = [line[::-1] for line in adv_processed_seqs]
# Now convert words to machine-readable vectors and save the data to files.
for processed_seqs, tag in zip([basic_processed_seqs, adv_processed_seqs, adv_processed_seqs_inv],
['basic', 'adv', 'adv_inv']):
# use set() to get unique words to form a vocabulary
poem_vocab = sorted(set([item for sublist in processed_seqs for item in sublist]))
# mappings of word2vec and vec2word
word2vec = {unique: idx for idx, unique in enumerate(poem_vocab)}
vec2word = {idx: unique for idx, unique in enumerate(poem_vocab)}
# obtain sequence vectors
processed_seqs_vec = []
for seq in processed_seqs:
words_vec = []
for word in seq:
words_vec.append(word2vec[word])
processed_seqs_vec.append(words_vec)
pickle.dump(processed_seqs_vec, open("processed_data/{}_processed_seqs_vec.p".format(tag), "wb"))
    if tag != 'adv_inv':
pickle.dump(word2vec, open("processed_data/{}_word2vec.p".format(tag), "wb"))
pickle.dump(vec2word, open("processed_data/{}_vec2word.p".format(tag), "wb"))
print('Total number of unique words for {} HMM pre-processing: {}'.format(tag, len(poem_vocab)))
# Store the 'number of words per line' probabilities for poetry generation.
line_len = [len(l) for l in basic_processed_seqs]
lengths, bins, _ = plt.hist(line_len, bins=[i-0.5 for i in range(min(line_len), max(line_len)+2)])
plt.xlabel('Number of words per line')
pickle.dump(lengths / sum(lengths), open("processed_data/wordperline_prob.p", "wb"))
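# The same 'words per line' distribution can be computed without matplotlib; a
# stdlib sketch with hypothetical line lengths:

```python
from collections import Counter

line_len = [8, 8, 7, 9, 8]  # hypothetical per-line word counts
counts = Counter(line_len)
total = sum(counts.values())
# normalize the counts into a probability for each line length
probs = {length: count / total for length, count in sorted(counts.items())}
print(probs)  # {7: 0.2, 8: 0.6, 9: 0.2}
```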
# ### 2. Adv Pre-processing specifically for Adv HMM
#
# A few more text pre-processing (HMM):
# - Syllable count
# - Rhyme
#
# #### 2.1. `Syllable Count`
#
# Syllable counts for *shakespeare.txt* are stored in file already.
word2syllable = {}
with open('raw_data/Syllable_dictionary.txt', 'r') as f:
for line in f:
line = line.rstrip('\n').split(' ')
# split the line into words and syllables
word = line[0]
syls = line[1:]
        # store all possible syllable counts for the word
        word2syllable[word] = syls
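# Each dictionary line maps a word to one or two syllable counts, where an
# 'E'-prefixed count applies only when the word ends a line. A small parsing
# sketch (the sample line "being 1 E2" is hypothetical, not taken from the file):

```python
def parse_syllable_line(line):
    # "word count [count]" -> (word, list of counts as strings)
    word, *syls = line.rstrip('\n').split(' ')
    return word, syls

print(parse_syllable_line("being 1 E2"))  # ('being', ['1', 'E2'])
```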
# We can also get syllable counts for *spenser.txt*.
# +
import pronouncing as pro
# loop through the words in spenser.txt
for line in adv_processed_seqs[len(basic_processed_seqs):]:
for word in line:
        # get the pronunciation list of the word
p_list = pro.phones_for_word(word)
# if could find syllable info and not already stored
if p_list and word not in word2syllable:
word_syl = []
for p in p_list:
word_syl.append(pro.syllable_count(p))
# get the unique syllable counts
word_syl = sorted(list(set(word_syl)))
# store into the big dictionary
if len(word_syl) == 1:
word2syllable[word] = [str(word_syl[0])]
else:
word2syllable[word] = ['E'+str(word_syl[0])]
word2syllable[word].append(str(word_syl[1]))
# clean the special ending case
for w in word2syllable:
if len(word2syllable[w]) > 1:
if word2syllable[w][1][0] == 'E':
word2syllable[w] = word2syllable[w][::-1]
# save the dictionary
pickle.dump(word2syllable, open("processed_data/word2syllable.p", "wb"))
# -
# #### 2.2. Rhyme
#
# Create a rhyme dictionary.
# +
rhymes = {}
for word in word2syllable:
# get the rhymes corresponding to each word in the poem vocab
word_rhymes = pro.rhymes(word)
word_rhymes = [r for r in word_rhymes if r in word2syllable]
# store non-empty lists into the rhymes dict
if word_rhymes:
rhymes[word] = word_rhymes
pickle.dump(rhymes, open("processed_data/rhymes.p", "wb"))
# -
# ### 3. Basic Pre-processing (char-based) for Naive RNN
#
# For the naive RNN implementation, let's use char-based sequences.
#
# Clean the chars.
with open('raw_data/shakespeare.txt') as f:
words = f.read().lower().split()
# remove numbers
words = [re.sub(r'\d+', '', w) for w in words]
# remove punctuation except for ' and -
words = [re.sub('[%s]' % re.escape(punctuations), '', w) for w in words]
# remove empty strings
words = list(filter(None, words))
# get our final long list of chars
raw_text = ' '.join(words)
# Store all of the 40-char sequences.
max_len = 40
step = 1
basic_char_seqs = []
for i in range(0, len(raw_text)-max_len, step):
basic_char_seqs.append(raw_text[i:i+max_len+1])
print('Total number of sequences: {}'.format(len(basic_char_seqs)))
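# The sliding-window sequencing is easier to see on a toy string: each sequence
# holds `max_len` input chars plus the one target char. A sketch with `max_len=5`:

```python
def char_windows(text, max_len, step=1):
    # each window is max_len input chars plus the 1 char to predict
    return [text[i:i + max_len + 1] for i in range(0, len(text) - max_len, step)]

seqs = char_windows("to be or not", max_len=5)
print(len(seqs), repr(seqs[0]))  # 7 'to be '
```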
# Save the 'chars to vectors' and 'vectors to chars' mappings.
basic_chars = sorted(list(set(raw_text)))
basic_char2vec = dict((c, i) for i, c in enumerate(basic_chars))
basic_vec2char = dict((i, c) for i, c in enumerate(basic_chars))
pickle.dump(basic_char2vec, open("processed_data/basic_char2vec.p", "wb"))
pickle.dump(basic_vec2char, open("processed_data/basic_vec2char.p", "wb"))
print('Total number of unique chars: {}'.format(len(basic_char2vec)))
print('All unique chars: ')
print(basic_chars)
# Save the processed char-based sequence vectors.
basic_char_seqs_vec = []
for char_s in basic_char_seqs:
basic_char_seqs_vec.append([basic_char2vec[c] for c in char_s])
pickle.dump(basic_char_seqs_vec, open("processed_data/basic_char_seqs_vec.p", "wb"))
# Let's also see the distribution of number of chars per line in *shakespeare.txt*.
proc_lines = [' '.join(l) for l in basic_processed_seqs]
char_len = [len(l) for l in proc_lines]
n, bins, _ = plt.hist(char_len, bins=[i-0.5 for i in range(min(char_len), max(char_len)+2)])
plt.xlabel('Number of chars per line')
plt.show()
# ### 4. Adv Pre-processing (char-based) for Adv RNN
#
# Let's do a more careful char-based data cleaning. Here we do not use sequences that cross poem boundaries: only 40-char sequences within individual poems are used, and all punctuation is kept except the newline character.
# +
# get all useful chars in each line in both files
raw_seqs = []
for infile in ['raw_data/shakespeare.txt','raw_data/spenser.txt']:
with open(infile) as f:
for line in f.readlines():
            # tokenize each line (whitespace split drops the newline; punctuation stays attached to words)
words = line.lower().split()
# remove empty lists
if words:
raw_seqs.append(words)
# gather seqs of words within poems into dictionaries indexed by numbers
all_poems = {}
counter = 0
for seq in raw_seqs:
if len(seq) == 1:
counter += 1
all_poems[counter] = []
else:
all_poems[counter].append(seq)
# join words into lines for each poem
for poem in all_poems:
all_poems[poem] = [' '.join(line) for line in all_poems[poem]]
# join poem lines into a long string of raw text for RNN training sequencing
proc_text = [' '.join(all_poems[i]) for i in all_poems]
# -
# Store all of the 40-char sequences.
# +
max_len = 40
step = 1
adv_char_seqs = []
for pt in proc_text:
for i in range(0, len(pt)-max_len, step):
adv_char_seqs.append(pt[i:i+max_len+1])
print('Total number of sequences: {}'.format(len(adv_char_seqs)))
# -
# Save the 'chars to vectors' and 'vectors to chars' mappings.
adv_chars = sorted(list(set(''.join(proc_text))))
adv_char2vec = dict((c, i) for i, c in enumerate(adv_chars))
adv_vec2char = dict((i, c) for i, c in enumerate(adv_chars))
pickle.dump(adv_char2vec, open("processed_data/adv_char2vec.p", "wb"))
pickle.dump(adv_vec2char, open("processed_data/adv_vec2char.p", "wb"))
print('Total number of unique chars: {}'.format(len(adv_char2vec)))
print('All unique chars: ')
print(adv_chars)
# Save the processed char-based sequence vectors.
adv_char_seqs_vec = []
for char_s in adv_char_seqs:
adv_char_seqs_vec.append([adv_char2vec[c] for c in char_s])
pickle.dump(adv_char_seqs_vec, open("processed_data/adv_char_seqs_vec.p", "wb"))
1-Data-Preprocessing.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gradient descent algorithm for Scenario 2
#
#
# In this part, we implement a gradient descent algorithm to minimize the objective loss function in Scenario 2:
#
#
# $$\min_{a,b,c} F := \frac{1}{2(n-1000)} \sum_{i=1000}^n \left(\mathrm{fbpredict}(i) + a\,\mathrm{tby}(i) + b\,\mathrm{ffr}(i) + c\,\mathrm{fta}(i) - \mathrm{asp}(i)\right)^2$$
#
# Gradient descent:
#
# $$ \beta_k = \beta_{k-1} - \delta\, \nabla F, $$
# where $\delta$ controls how far each iteration moves.
#
#
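# As a sanity check of the update rule, here is a minimal self-contained sketch of
# gradient descent on a toy one-dimensional problem (the function and step size are
# illustrative, not part of the Scenario 2 model):

```python
def gd(grad, x0, delta=0.1, iters=200):
    # x_k = x_{k-1} - delta * grad(x_{k-1})
    x = x0
    for _ in range(iters):
        x = x - delta * grad(x)
    return x

# minimize F(x) = (x - 3)^2, whose gradient is 2 * (x - 3)
x_min = gd(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 6))  # 3.0
```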
# ### Detailed plan
#
# First, split the data into train and test sets with 80% and 20% respectively. For the training part we need the prophet() predicted price, and there are a couple of issues: prophet() cannot predict too far into the future, and calling prophet() many times takes a lot of time. So we will use a sliding-window strategy:
#
# 1. Split the training data into train_1 and train_2, where train_1 is used as a sliding window to fit prophet() and produce predictions on train_2. train_2 is then used to train the model proposed above.
#
# 2. After obtaining full-size (size of train_2) predictions from prophet(), we use gradient descent to fit the above model, extracting the feature coefficients to make predictions on the test data.
#
# +
import pandas as pd
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import FunctionTransformer
from numpy import meshgrid
## For plotting
import matplotlib.pyplot as plt
from matplotlib import style
import datetime as dt
import seaborn as sns
sns.set_style("whitegrid")
# +
# processing data: strip whitespace from column names
total_data= pd.read_csv('../data/total_df.csv')
new_name = [name.strip() for name in total_data.columns]
total_data.columns = new_name
total_data = total_data.sort_values(by='Date')
#extract features to use
sub_feature = ['Date','Close','bond','fed funds','fed total assets']
total_data = total_data[sub_feature]
total_data=total_data.iloc[:10000]
#change the columns names for fitting prophet()
total_data.columns = ['ds','y','bond','fed funds','fed total assets']
total_data
# +
#feature normalization
normal_constants = []
for col in total_data.columns[1:]:
tmp_max = np.max(abs(total_data[col]))
normal_constants.append(tmp_max)
total_data[col]= total_data[col]/tmp_max
print(normal_constants)
print(total_data.head())
# +
train_size = int(0.9*total_data.shape[0])
train_data = total_data.iloc[:train_size]
test_data= total_data.iloc[train_size:]
print(train_data,test_data)
# -
# To use prophet() for predictions, we split the training data into train_1 and train_2 with a 40%/60% ratio:
# train_1 is used to fit prophet(), which then predicts on train_2. With those predictions,
# we feed the data into the Scenario 2 model and train again to obtain the parameters a, b, c, ...
# +
#prophet part
import fbprophet as prophet
window_size = 4000
pred_size =1000
num_winds = 5
train_2 = train_data.iloc[window_size:]
pro_pred = []
for i in range(num_winds):
tmp_train = train_data.iloc[i*pred_size:i*pred_size+window_size].copy()
my_prophet = prophet.Prophet()
my_prophet.fit(tmp_train[['ds','y']])
my_test_dates = my_prophet.make_future_dataframe(periods = pred_size)
tmp_forecast = my_prophet.predict(my_test_dates)
pro_pred.append(tmp_forecast.yhat[-1000:])
# +
#flatten the pro_pred
flat_pro_pred = [item for l1 in pro_pred for item in l1]
#Next, we will fit our own model on the 5000 data set, then predict on the testing_data
sc2_X = train_data[['bond','fed funds','fed total assets']].copy()
sc2_XX = sc2_X.iloc[4000:]
sc2_y = train_data.y.copy()
sc2_yy = sc2_y.iloc[4000:] - flat_pro_pred
# -
sc2_XX
# +
# gradient descent algorithm
def grad_descent(X, y, itr, delta):
    n, m = X.shape
    tol = 1e-9
    # initialization
    beta_old = np.zeros(m)
    beta_new = np.zeros(m)
    for i in range(itr):
        # beta_k = beta_{k-1} - (delta/n) * X^T (X beta_{k-1} - y)
        beta_new = beta_old - (delta / n) * X.transpose().dot(X.dot(beta_old) - y)
        # stop once the update becomes negligible
        if np.sum((beta_new - beta_old) ** 2) < tol:
            break
        beta_old = beta_new
    return beta_new
# +
# get the coefficients output from the grad_descent algorithm
coef_abc = grad_descent(sc2_XX.values,sc2_yy.values,10000,0.2)
new_sc2_yy = sc2_XX.values.dot(coef_abc)
#compute fitted value on train_2 and scale it back
fitting_train = normal_constants[0]*(new_sc2_yy + flat_pro_pred)
# -
print(coef_abc)
# +
plt.figure(figsize=(11,6))
plt.plot(range(0,5000),fitting_train,label="fitted values by our model")
plt.plot(range(0,5000),sc2_y.iloc[4000:]*normal_constants[0],label='true price value')
plt.legend(fontsize=13)
plt.title("Fitting on the training data",fontsize=18)
# +
# we make predictions on the testing data using our fitted model
#get fb prophet() first
test_proph = prophet.Prophet()
test_proph.fit(train_data[['ds','y']].iloc[-4000:].copy())
test_proph_dates = test_proph.make_future_dataframe(periods = pred_size)
test_pred_proph = test_proph.predict(test_proph_dates)
# +
#get other features prediction
test_sc2_XX = test_data[['bond','fed funds','fed total assets']].copy()
testpred_abc_feature = test_sc2_XX.dot(coef_abc)
#final prediction
test_total_pred = normal_constants[0]*(testpred_abc_feature.values + test_pred_proph.yhat[-1000:].values)
test_sc2_XX
# -
# +
plt.figure(figsize=(11,6))
plt.plot(range(0,1000),test_total_pred,label='fitted value on test_data')
plt.plot(range(0,1000),test_data['y']*normal_constants[0],label='true price value on test')
plt.legend(fontsize=13)
plt.title("Prediction on the testing data",fontsize=18)
# -
scratch work/.ipynb_checkpoints/Qiang_Gradient Descent for Scenario 2-checkpoint.ipynb
# # Working with numerical data
#
# In the previous notebook, we trained a k-nearest neighbors model on
# some data.
#
# However, we oversimplified the procedure by loading a dataset that contained
# exclusively numerical data. Besides, we used datasets which were already
# split into train-test sets.
#
# In this notebook, we aim at:
#
# * identifying numerical data in a heterogeneous dataset;
# * selecting the subset of columns corresponding to numerical data;
# * using a scikit-learn helper to separate data into train-test sets;
# * training and evaluating a more complex scikit-learn model.
#
# We will start by loading the adult census dataset used during the data
# exploration.
#
# ## Loading the entire dataset
#
# As in the previous notebook, we rely on pandas to open the CSV file into
# a pandas dataframe.
# +
import pandas as pd
adult_census = pd.read_csv("../datasets/adult-census.csv")
# drop the duplicated column `"education-num"` as stated in the first notebook
adult_census = adult_census.drop(columns="education-num")
adult_census.head()
# -
# The next step separates the target from the data. We performed the same
# procedure in the previous notebook.
data, target = adult_census.drop(columns="class"), adult_census["class"]
data.head()
target
# <div class="admonition caution alert alert-warning">
# <p class="first admonition-title" style="font-weight: bold;">Caution!</p>
# <p class="last">Here and later, we use the name <tt class="docutils literal">data</tt> and <tt class="docutils literal">target</tt> to be explicit. In
# scikit-learn documentation, <tt class="docutils literal">data</tt> is commonly named <tt class="docutils literal">X</tt> and <tt class="docutils literal">target</tt> is
# commonly called <tt class="docutils literal">y</tt>.</p>
# </div>
# At this point, we can focus on the data we want to use to train our
# predictive model.
#
# ## Identify numerical data
#
# Numerical data are represented with numbers. They are linked to measurable
# (quantitative) data, such as age or the number of hours a person works a
# week.
#
# Predictive models are natively designed to work with numerical data.
# Moreover, numerical data usually requires very little work before getting
# started with training.
#
# The first task here will be to identify numerical data in our dataset.
#
# <div class="admonition caution alert alert-warning">
# <p class="first admonition-title" style="font-weight: bold;">Caution!</p>
# <p class="last">Numerical data are represented with numbers, but numbers are not always
# representing numerical data. Categories could already be encoded with
# numbers and you will need to identify these features.</p>
# </div>
#
# Thus, we can check the data type for each of the column in the dataset.
data.dtypes
# We seem to have only two data types. We can make sure by checking the unique
# data types.
data.dtypes.unique()
# Indeed, the only two types in the dataset are integer and object.
# We can look at the first few lines of the dataframe to understand the
# meaning of the `object` data type.
data.head()
# We see that the `object` data type corresponds to columns containing strings.
# As we saw in the exploration section, these columns contain categories and we
# will see later how to handle those. We can select the columns containing
# integers and check their content.
numerical_columns = ["age", "capital-gain", "capital-loss", "hours-per-week"]
data[numerical_columns].head()
# Now that we limited the dataset to numerical columns only,
# we can analyse these numbers to figure out what they represent. We can
# identify two types of usage.
#
# The first column, `"age"`, is self-explanatory. We can note that the values
# are continuous, meaning they can take up any number in a given range. Let's
# find out what this range is:
data["age"].describe()
# We can see the age varies between 17 and 90 years.
#
# We could extend our analysis and we will find that `"capital-gain"`,
# `"capital-loss"`, and `"hours-per-week"` are also representing quantitative
# data.
#
# Now, we store the subset of numerical columns in a new dataframe.
data_numeric = data[numerical_columns]
# ## Train-test split the dataset
#
# In the previous notebook, we loaded two separate datasets: a training one and
# a testing one. However, as mentioned earlier, having separate datasets like
# that is unusual: most of the time, we have a single one, which we will
# subdivide.
#
# We also mentioned that scikit-learn provides the helper function
# `sklearn.model_selection.train_test_split` which is used to automatically
# split the data.
# +
from sklearn.model_selection import train_test_split
data_train, data_test, target_train, target_test = train_test_split(
data_numeric, target, random_state=42, test_size=0.25)
# -
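# Conceptually, `train_test_split` shuffles the rows and slices off a fraction as
# the test set. A stdlib sketch of that idea (not scikit-learn's actual
# implementation):

```python
import random

def simple_split(items, test_size=0.25, seed=42):
    # shuffle a copy, reserve the first test_size fraction as the test set
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_test = round(len(shuffled) * test_size)
    return shuffled[n_test:], shuffled[:n_test]

train, test = simple_split(range(100))
print(len(train), len(test))  # 75 25
```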
# <div class="admonition tip alert alert-warning">
# <p class="first admonition-title" style="font-weight: bold;">Tip</p>
# <p class="last">In scikit-learn setting the <tt class="docutils literal">random_state</tt> parameter allows us to get
# deterministic results when we use a random number generator. In the
# <tt class="docutils literal">train_test_split</tt> case the randomness comes from shuffling the data, which
# decides how the dataset is split into a train and a test set.</p>
# </div>
# When calling the function `train_test_split`, we specified that we would like
# to have 25% of samples in the testing set while the remaining samples (75%)
# will be available in the training set. We can check quickly if we got
# what we expected.
print(f"Number of samples in testing: {data_test.shape[0]} => "
f"{data_test.shape[0] / data_numeric.shape[0] * 100:.1f}% of the"
f" original set")
print(f"Number of samples in training: {data_train.shape[0]} => "
f"{data_train.shape[0] / data_numeric.shape[0] * 100:.1f}% of the"
f" original set")
# In the previous notebook, we used a k-nearest neighbors model. While this
# model is intuitive to understand, it is not widely used in practice. Now, we
# will use a more useful model, called a logistic regression, which belongs to
# the linear models family.
#
# <div class="admonition note alert alert-info">
# <p class="first admonition-title" style="font-weight: bold;">Note</p>
# <p>In short, linear models find a set of weights to combine features linearly
# and predict the target. For instance, the model can come up with a rule such
# as:</p>
# <ul class="simple">
# <li>if <tt class="docutils literal">0.1 * age + 3.3 * <span class="pre">hours-per-week</span> - 15.1 > 0</tt>, predict <tt class="docutils literal"><span class="pre">high-income</span></tt></li>
# <li>otherwise predict <tt class="docutils literal"><span class="pre">low-income</span></tt></li>
# </ul>
# <p class="last">Linear models, and in particular the logistic regression, will be covered in
# more details in the "Linear models" module later in this course. For now the
# focus is to use this logistic regression model in scikit-learn rather than
# understand how it works in details.</p>
# </div>
#
# To create a logistic regression model in scikit-learn you can do:
# +
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
# -
# Now that the model has been created, you can use it exactly the same way as
# we used the k-nearest neighbors model in the previous notebook. In
# particular, we can use the `fit` method to train the model using the training
# data and labels:
model.fit(data_train, target_train)
# We can also use the `score` method to check the model statistical performance
# on the test set.
accuracy = model.score(data_test, target_test)
print(f"Accuracy of logistic regression: {accuracy:.3f}")
# Now the real question is: is this statistical performance indicative of a good
# predictive model? Find out by solving the next exercise!
#
# In this notebook, we learned to:
#
# * identify numerical data in a heterogeneous dataset;
# * select the subset of columns corresponding to numerical data;
# * use the scikit-learn `train_test_split` function to separate data into
# a train and a test set;
# * train and evaluate a logistic regression model.
notebooks/02_numerical_pipeline_hands_on.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import random
from operator import add
from functools import reduce
import matplotlib.pyplot as plt
import math
import numpy as np
import numpy.random as nrand
import pandas as pd
import itertools
import scipy.stats as stats
import powerlaw
from stockmarket import baselinemodel
from tqdm import tqdm
from pandas_datareader import data
from pylab import plot, show
from math import isclose
from stockmarket.stylizedfacts import *
import itertools
import quandl
from SALib.sample import latin
from statistics import mean
import bisect
# # Data used for calibration
# +
start_date = '2010-01-01'
end_date = '2016-12-31'
spy = data.DataReader("SPY",
start=start_date,
end=end_date,
data_source='google')['Close']
spy_returns = spy.pct_change()[1:]
spy_volume = data.DataReader("SPY",
start=start_date,
end=end_date,
data_source='google')['Volume']
# -
# calculate stylized facts for SP500
spy_autocorrelation = autocorrelation_returns(spy_returns, 25)
spy_kurtosis = kurtosis(spy_returns)
spy_autocorrelation_abs = autocorrelation_abs_returns(spy_returns, 25)
spy_hurst = hurst(spy, lag1=2, lag2=20)
spy_cor_volu_vola = correlation_volume_volatility(spy_volume, spy_returns, window=10)
stylized_facts_spy = [spy_autocorrelation, spy_kurtosis, spy_autocorrelation_abs, spy_hurst, spy_cor_volu_vola]
print('spy_autocorrelation ',spy_autocorrelation)
print('spy_kurtosis ', spy_kurtosis)
print('spy_autocorrelation_abs ', spy_autocorrelation_abs)
print('spy_hurst ', spy_hurst)
print('spy_cor_volu_vola ',spy_cor_volu_vola)
# # Calibrate the zero-intelligence model using an evolutionary algorithm
SIMTIME = 200
NRUNS = 5
backward_simulated_time = 400
initial_total_money = 26000
init_profit = 1000
init_discount_rate = 0.17
# ## Parameter space
#
# We define the parameter bounds as follows.
#
# | Parameter | Values (start, stop, step) |
# | -------------| ------------|
# | share_chartists | 0 - 1, 0.1 |
# | share_mean_reversion | 0 - 1, 0.1 |
# | order_expiration_time | 1000 - 10000, 1000 |
# | agent_order_price_variability | 1 - 10, 1 |
# | agent_order_variability | 0.1 - 5 |
# | agent_ma_short | 5 - 100, 5 |
# | agent_ma_long | 50 - 400, 50 |
# | agents_hold_thresholds | 0.0005 |
# | Agent_volume_risk_aversion | 0.1 - 1, 0.1 |
# | Agent_propensity_to_switch | 0.1 - 2.2, 0.1 |
# | profit_announcement_working_days | 5 - 50, 5 |
# | price_to_earnings_spread | 5 - 50, 5 |
# | price_to_earnings_heterogeneity | 5 - 50, 5 |
parameter_space = {'share_chartists':[0.0, 1.0], 'share_mean_reversion':[0.0, 1.0], 'order_expiration_time':[1000, 10000],
'agent_order_price_variability':[1, 10], 'agent_order_variability':[0.1, 5.0],
'agent_ma_short':[5, 100], 'agent_ma_long':[50, 400], 'agents_hold_thresholds':[0.00005,0.01],
'agent_volume_risk_aversion':[0.1, 1.0], 'agent_propensity_to_switch':[0.1, 2.2],
'profit_announcement_working_days':[5, 50], 'price_to_earnings_base':[10,20],
'price_to_earnings_heterogeneity':[1.1,2.5], 'price_to_earnings_gap':[4,20],
'longMA_heterogeneity':[1.1,1.8], 'shortMA_heterogeneity':[1.1,1.8], 'shortMA_memory_divider':[1, 10]}
# Then, we determine the number of starting points we want for the genetic algorithm and sample the parameter space using a Latin hypercube sample.
population_size = 8
problem = {
'num_vars': 17,
'names': ['share_chartists', 'share_mean_reversion', 'order_expiration_time', 'agent_order_price_variability',
'agent_order_variability', 'agent_ma_short', 'agent_ma_long', 'agents_hold_thresholds',
'agent_volume_risk_aversion', 'agent_propensity_to_switch', 'profit_announcement_working_days',
'price_to_earnings_base', 'price_to_earnings_heterogeneity', 'price_to_earnings_gap',
'longMA_heterogeneity', 'shortMA_heterogeneity', 'shortMA_memory_divider'],
'bounds': [[0.0, 1.0], [0.0, 1.0], [1000, 10000], [1, 10],
[0.1, 5.0], [5, 100], [50, 400], [0.00005,0.01],
[0.1, 1], [0.1, 2.2], [5, 50],
[10,20], [1.1,2.5], [4,20],
[1.1,1.8], [1.1,1.8], [1, 10]]
}
latin_hyper_cube = latin.sample(problem=problem, N=population_size)
latin_hyper_cube = latin_hyper_cube.tolist()
# transform the integer-valued parameters to int (index 4, agent_order_variability, has float bounds and stays a float)
for idx, parameters in enumerate(latin_hyper_cube):
    latin_hyper_cube[idx][2] = int(latin_hyper_cube[idx][2])
    latin_hyper_cube[idx][3] = int(latin_hyper_cube[idx][3])
latin_hyper_cube[idx][5] = int(latin_hyper_cube[idx][5])
latin_hyper_cube[idx][6] = int(latin_hyper_cube[idx][6])
latin_hyper_cube[idx][10] = int(latin_hyper_cube[idx][10])
latin_hyper_cube[idx][11] = int(latin_hyper_cube[idx][11])
latin_hyper_cube[idx][13] = int(latin_hyper_cube[idx][13])
latin_hyper_cube[idx][16] = int(latin_hyper_cube[idx][16])
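# For reference, the stratified sampling idea behind a Latin hypercube can be sketched in plain numpy. This is a simplified stand-in to illustrate the concept, not SALib's actual `latin.sample` implementation:

```python
import numpy as np

def latin_hypercube(bounds, n_samples, seed=None):
    """Draw n_samples points with exactly one sample per equal-width stratum in each dimension."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # one uniform draw inside each of the n_samples strata of [0, 1)
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    # decouple the strata across dimensions by permuting each column independently
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    return lo + u * (hi - lo)

sample = latin_hypercube([[0.0, 1.0], [1000, 10000]], n_samples=8, seed=0)
```

# Sorting any column of `sample` yields exactly one value per stratum, which is the defining property of a Latin hypercube.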
# ## Problem
# We try to match average simulation stylized facts as closely as possible to observed stylized facts.
#
# For that, we use an evolutionary algorithm to minimize a cost function.
#
# ## Create population of individuals
# In our algorithm, an individual is a set of parameters together with its associated stylized-fact values, averaged over several simulation runs.
class Individual:
"""The order class can represent both bid or ask type orders"""
def __init__(self, parameters, stylized_facts, cost):
self.parameters = parameters
self.stylized_facts = stylized_facts
self.cost = cost
def __lt__(self, other):
"""Allows comparison to other individuals based on its cost (negative fitness)"""
return self.cost < other.cost
# create initial population
population = []
for parameters in latin_hyper_cube:
# add an individual to the population
population.append(Individual(parameters, [], np.inf))
# create populations_over_time
populations_over_time = [population]
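# Because `Individual.__lt__` compares individuals by cost, `bisect.insort_left` can keep a population sorted cheapest-first as it is built. A minimal self-contained illustration (with dummy parameter lists and made-up costs):

```python
import bisect

class Individual:
    """A parameter set with its average stylized facts and cost (negative fitness)."""
    def __init__(self, parameters, stylized_facts, cost):
        self.parameters = parameters
        self.stylized_facts = stylized_facts
        self.cost = cost

    def __lt__(self, other):
        return self.cost < other.cost

sorted_pop = []
for cost in [3.0, 1.0, 2.0]:
    bisect.insort_left(sorted_pop, Individual([], [], cost))

print([ind.cost for ind in sorted_pop])  # prints [1.0, 2.0, 3.0]: cheapest first
```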
# ## Define Fitness / cost function
# We measure the relative difference between the simulated and observed data as
#
# $c(s)= \left(\frac{spy(s) - a(s)}{spy(s)}\right)^2$
#
# where $spy(s)$ is the observed value of stylized fact $s$ and $a(s)$ its average simulated value. Then, for each simulation, we measure the total cost as:
#
# $t(w,v,x,y,z)= c(w) + c(v) + c(x) + c(y) + c(z)$
#
# where $w$ represents autocorrelation, $v$ fat tails, $x$ clustered volatility, $y$ long memory, and $z$ the correlation between price and volume.
def cost_function(observed_values, average_simulated_values):
"""cost function"""
score = 0
for obs, sim in zip(observed_values, average_simulated_values):
score += ((obs - sim) / obs)**2
return score
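# As a quick sanity check, the cost function can be exercised on made-up values (the numbers below are purely illustrative, not actual SPY stylized facts):

```python
def cost_function(observed_values, average_simulated_values):
    """Sum of squared relative differences between observed and simulated stylized facts."""
    score = 0.0
    for obs, sim in zip(observed_values, average_simulated_values):
        score += ((obs - sim) / obs) ** 2
    return score

observed = [0.05, 4.0, 0.2, 0.6, 0.3]      # hypothetical observed stylized facts
simulated = [0.04, 5.0, 0.25, 0.55, 0.33]  # hypothetical simulation averages
print(cost_function(observed, simulated))  # ~0.182: each term is a squared relative error
```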
def average_fitness(population):
total_cost = 0
for individual in population:
total_cost += individual.cost
return total_cost / (float(len(population)))
# ## Define function to simulate a population
def simulate_population(population, number_of_runs, simulation_time, number_of_agents):
"""
Simulate a population of parameter spaces for the stock market model
:param population: population of parameter spaces used to simulate model
:param number_of_runs: number of times the simulation should be run
:param simulation_time: amount of days which will be simulated for each run
:return: simulated population, average population fitness
"""
simulated_population = []
for idx, individual in tqdm(enumerate(population)):
parameters = individual.parameters
stylized_facts = [[],[],[],[],[]]
# identify parameters
share_chartists= parameters[0]
share_mean_reversion = parameters[1]
order_expiration_time = parameters[2]
agent_order_price_variability = parameters[3]
agent_order_variability = parameters[4]
agent_ma_short = parameters[5]
agent_ma_long = parameters[6]
agents_hold_thresholds = parameters[7]
agent_volume_risk_aversion = parameters[8]
agent_propensity_to_switch = parameters[9]
profit_announcement_working_days = parameters[10]
price_to_earnings_base = parameters[11]
price_to_earnings_heterogeneity = parameters[12]
price_to_earnings_gap = parameters[13]
longMA_heterogeneity = parameters[14]
shortMA_heterogeneity = parameters[15]
shortMA_memory_divider = parameters[16]
PE_low_low = price_to_earnings_base
PE_low_high = int(price_to_earnings_heterogeneity*price_to_earnings_base)
PE_high_low = PE_low_high + price_to_earnings_gap
PE_high_high = int(price_to_earnings_heterogeneity*PE_high_low)
# simulate the model
        for seed in range(number_of_runs):
agents, firms, stocks, order_books = baselinemodel.stockMarketSimulation(seed=seed,
                                                                                     simulation_time=simulation_time,
init_backward_simulated_time=int(agent_ma_long*longMA_heterogeneity),
number_of_agents=number_of_agents,
share_chartists=share_chartists,
share_mean_reversion=share_mean_reversion,
amount_of_firms=1,
initial_total_money=(initial_total_money,int(initial_total_money*1.1)),
initial_profit=(init_profit, init_profit),
discount_rate=init_discount_rate,
init_price_to_earnings_window=((PE_low_low,
PE_low_high),
(PE_high_low,
PE_high_high)),
order_expiration_time=order_expiration_time,
agent_order_price_variability=(agent_order_price_variability,agent_order_price_variability),
agent_order_variability=agent_order_variability,
agent_ma_short=(agent_ma_short, int(agent_ma_short*shortMA_heterogeneity)),
agent_ma_long=(agent_ma_long, int(agent_ma_long*longMA_heterogeneity)),
agents_hold_thresholds=(1-agents_hold_thresholds, 1+agents_hold_thresholds),
agent_volume_risk_aversion=agent_volume_risk_aversion,
agent_propensity_to_switch=agent_propensity_to_switch,
firm_profit_mu=0.058,
firm_profit_delta=0.00396825396,
firm_profit_sigma=0.125,
profit_announcement_working_days=profit_announcement_working_days,
mean_reversion_memory_divider=4,
printProgress=False,
)
# store simulated stylized facts
sim_returns = calculate_returns(order_books[0].transaction_prices_history)
sim_volume = []
for day in order_books[0].transaction_volumes_history[1:]:
sim_volume.append(sum(day))
stylized_facts[0].append(autocorrelation_returns(sim_returns, 25))
stylized_facts[1].append(kurtosis(sim_returns))
stylized_facts[2].append(autocorrelation_abs_returns(sim_returns, 25))
            stylized_facts[3].append(hurst(order_books[0].transaction_prices_history, lag1=2, lag2=20))
stylized_facts[4].append(correlation_volume_volatility(sim_volume, sim_returns, window=10))
# create next generation individual
next_gen_individual = Individual(parameters, [], np.inf)
# add average stylized facts to individual
for s_fact in stylized_facts:
next_gen_individual.stylized_facts.append(mean(s_fact))
# add average fitness to individual
next_gen_individual.cost = cost_function(stylized_facts_spy, next_gen_individual.stylized_facts)
        # set the cost of any failed (NaN) simulation to infinity
if np.isnan(next_gen_individual.cost):
next_gen_individual.cost = np.inf
# insert into next generation population, lowest score to the left
bisect.insort_left(simulated_population, next_gen_individual)
average_population_fitness = average_fitness(simulated_population)
return simulated_population, average_population_fitness
# # Function to evolve the population
def evolve_population(population, fittest_to_retain, random_to_retain, parents_to_mutate, parameters_to_mutate):
"""
Evolves a population. First, the fittest members of the population plus some random individuals become parents.
Then, some random mutations take place in the parents. Finally, the parents breed to create children.
:param population: population individuals sorted by cost (cheapest left) which contain parameter values
:param fittest_to_retain: percentage of fittest individuals which should be maintained as parents
:param random_to_retain: percentage of other random individuals which should be maintained as parents
    :param parents_to_mutate: percentage of parents in which mutations will take place
:param parameters_to_mutate: percentage of parameters in chosen individuals which will mutate
    :return: new population consisting of the retained parents and their children
"""
# 1 retain parents
    retain_length = int(len(population) * fittest_to_retain)
    parents = population[:retain_length]
    # 2 retain random individuals
    amount_random_indiv = int(len(population) * random_to_retain)
    parents.extend(random.sample(population[retain_length:], amount_random_indiv))
# 3 mutate random parameters of random individuals
amount_of_individuals_to_mutate = int(parents_to_mutate * len(parents))
amount_of_params_to_mutate = int(parameters_to_mutate * len(parents[0].parameters))
    for parent in random.sample(parents, amount_of_individuals_to_mutate):
indexes_of_mutable_params = random.sample(range(len(parent.parameters)), amount_of_params_to_mutate)
for idx in indexes_of_mutable_params:
min_value, max_value = problem['bounds'][idx][0], problem['bounds'][idx][1]
if type(min_value) == float:
parent.parameters[idx] = random.uniform(min_value, max_value)
else:
parent.parameters[idx] = random.randint(min_value, max_value)
# 4 parents breed to create a new population
    parents_length = len(parents)
    desired_length = len(population) - parents_length
    children = []
    while len(children) < desired_length:
        male = random.randint(0, parents_length - 1)
        female = random.randint(0, parents_length - 1)
if male != female:
male = parents[male]
female = parents[female]
half = int(len(male.parameters) / 2)
child_parameters = male.parameters[:half] + female.parameters[half:]
child = Individual(child_parameters, [], np.inf)
children.append(child)
parents.extend(children)
# the parents list now contains a full new population with the parents and their offspring
return parents
# # Simulate the evolutionary model
#
# Each iteration:
#
# 1. simulated_population, fitness = simulate_population(population, kwargs)
# 2. av_pop_fitness.append(fitness)
# 3. population = evolve_population(simulated_population, kwargs)
iterations = 2
av_pop_fitness = []
all_populations = [population]
for i in range(iterations):
simulated_population, fitness = simulate_population(all_populations[i], number_of_runs=3, simulation_time=10, number_of_agents=200)
av_pop_fitness.append(fitness)
all_populations.append(evolve_population(simulated_population, fittest_to_retain=0.2, random_to_retain=0.1,
parents_to_mutate=0.5, parameters_to_mutate=0.1))
# ## 1 Selection:
# ### A Select fittest members of the population
# For that we already sorted the list of individuals. So it is easy to select the fittest individuals
percentage_parameters_to_mutate = 0.1
percentage_individuals_to_mutate = 0.5
retain=0.3
random_select=0.1
retain_length = int(len(simulated_population) * retain)
parents = simulated_population[:retain_length]
# ### B Select some random other members of the population
# randomly add other individuals to promote genetic diversity
for individual in simulated_population[retain_length:]:
    if random_select > random.random():
        parents.append(individual)
# ### 2 Mutation: vary random parameters of random individuals
amount_of_individuals_to_mutate = int(percentage_individuals_to_mutate * len(parents))
# determine how many parameters should be mutated in each mutating parent
amount_of_params_to_mutate = int(percentage_parameters_to_mutate * len(parents[0].parameters))
# determine in which parents a mutation will take place
for parent in random.sample(parents, amount_of_individuals_to_mutate):
    # sample the indexes of the parameters to mutate
    print('I will mutate ', amount_of_params_to_mutate, ' parameters from', parent)
    indexes_of_mutable_params = random.sample(range(len(parent.parameters)), amount_of_params_to_mutate)
for idx in indexes_of_mutable_params:
# identify the range for this parameter to mutate
min_value, max_value = problem['bounds'][idx][0], problem['bounds'][idx][1]
print('I mutate ', problem['names'][idx], ' which has min, max ', min_value, max_value, 'and current val= ', parent.parameters[idx])
if type(min_value) == float:
parent.parameters[idx] = random.uniform(min_value, max_value)
else:
parent.parameters[idx] = random.randint(min_value, max_value)
print('new variable value is ', parent.parameters[idx])
# ### 3 Breeding: fill up the rest of the population with combinations of the fittest individuals
# keep in mind whether each parameter is a float or an integer
# let parents breed to create children
parents_length = len(parents)
desired_length = len(simulated_population) - parents_length
children = []
while len(children) < desired_length:
    male = random.randint(0, parents_length - 1)
    female = random.randint(0, parents_length - 1)
print('parents are ', male, female)
if male != female:
male = parents[male]
female = parents[female]
half = int(len(male.parameters) / 2)
# here I should create a new child
child_parameters = male.parameters[:half] + female.parameters[half:]
print('male params are ', male.parameters)
print('female params are ', female.parameters)
print('child params are', child_parameters)
child = Individual(child_parameters, [], np.inf)
children.append(child)
parents.extend(children)
# ## Simulate evolution
#
#
#
mean([1,2,3])
# (source file: EvolAlgoCalibration.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install -e ../..
# +
import torch
import matplotlib.pyplot as plt
from generation.nets.signals_net import Generator
from generation.inference import InferenceModel
# -
WANDB_RUN_ID = '2ukg3of4'
EPOCH = 1300
model = InferenceModel(Generator, WANDB_RUN_ID, EPOCH)
model.generate(samples_num=65).shape
# (source file: notebooks/inference/inference_model.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1. Create a dataframe per specialty
# ### Directory: df_specialty_title_abstract
# ### Dataframe columns: (id, title_spa, abstract_spa)
import os
import xml.etree.ElementTree as ET
import pandas as pd
path_pubmed_xml_specialties = '../02_download_pubmed/specialties_case_report_xml/'
path_df_specialty_title_abstract = './dataframes/df_specialty_title_abstract_case_report'
def get_lists_pmid_title_abstract(root):
list_pmid = []
list_title = []
list_abstract = []
for PubmedArticle in root.findall('PubmedArticle'):
pmid = ''
title_spa = ''
abstract_spa = ''
for MedlineCitation in PubmedArticle.findall('MedlineCitation'):
pmid = MedlineCitation.find('PMID').text
for Article in MedlineCitation.findall('Article'):
if not Article.find('VernacularTitle') is None:
title_spa = Article.find('VernacularTitle').text
if not title_spa is None and len(title_spa) > 1 and title_spa.isupper():
title_spa = title_spa.replace("'A", "Á")
title_spa = title_spa.replace("'E", "É")
title_spa = title_spa.replace("'I", "Í")
title_spa = title_spa.replace("'O", "Ó")
title_spa = title_spa.replace("'U", "Ú")
for OtherAbstract in MedlineCitation.findall('OtherAbstract'):
abstrac_lang = OtherAbstract.get('Language')
if abstrac_lang == 'spa':
for AbstractText in OtherAbstract.findall('AbstractText'):
if not AbstractText.text is None:
abstract_spa = abstract_spa + AbstractText.text + ' '
if not title_spa is None and len(title_spa) > 1:
list_pmid.append(pmid)
list_title.append(title_spa)
list_abstract.append(abstract_spa)
print(len(list_pmid), len(list_title), len(list_abstract))
return list_pmid, list_title, list_abstract
def read_xml_file(file_xml):
tree = ET.parse(file_xml)
root = tree.getroot()
return root
def create_dataframe_text_abstract(specialty_name, specialty_xml):
xml_data = read_xml_file(specialty_xml)
#pmid = get_pmid(xml_data)
list_pmid, list_title, list_abstract = get_lists_pmid_title_abstract(xml_data)
df = pd.DataFrame ( { 'id': list_pmid, 'title': list_title, 'abstract': list_abstract})
df.to_csv(os.path.join(path_df_specialty_title_abstract, specialty_name) + '.csv')
print("Guardado ", os.path.join(path_df_specialty_title_abstract, specialty_name) + '.csv' , ' Long: ', len(df))
def read_path_specialties_xml():
for root, dirs, list_files in os.walk(path_pubmed_xml_specialties):
for specialty in list_files:
specialty_name = specialty.split(".xml")[0]
print(specialty_name)
create_dataframe_text_abstract(specialty_name, root + specialty)
read_path_specialties_xml()
# # 3. Create a dataframe with six columns
# ### Directory: df_specialty_ngram
# ### Dataframe columns: (idpubmed, title-unigrams, title-bigrams, title-trigrams, abstract-unigrams, abstract-bigrams, abstract-trigrams)
#
# ## Text processing:
# 1. Lowercase: convert a term to lowercase unless it is fully uppercase
# 2. Unigrams:
#     2.1 remove punctuation
#     2.2 remove digits
import os
import pandas as pd
import os
import spacy
import ast
nlp = spacy.load('es_core_news_sm')
from nltk import everygrams
from nltk.corpus import stopwords
stop_words = set(stopwords.words('spanish'))
import string
path_df_specialty_title_abstract = './pubmed_files/dataframes/df_specialty_title_abstract'
path_df_specialty_xgram_title_abstract = './pubmed_files/dataframes/df_specialty_ngram'
def change_to_lowercase(term):
if not term.isupper():
return term.lower()
return term
def to_lowercase(list_terms):
list_terms_new = []
for term in list_terms:
if type(term) == str:
list_terms_new.append(change_to_lowercase(term))
elif type(term) == tuple:
new_tuple = ()
for t in term:
new_tuple = new_tuple + (change_to_lowercase(t),)
list_terms_new.append(new_tuple)
#print("2",list_terms_new )
return list_terms_new
def remove_stopword_punt_digit(list_terms):
    list_terms = [term for term in list_terms if not term in stop_words]
    list_terms = [term for term in list_terms if not term in set(string.punctuation)]
    list_terms = [term for term in list_terms if not term.isdigit()]
    #print("3", list_terms)
    return list_terms
def tokenize(text):
list_tokens = []
if type(text) == float:
text = ''
doc = nlp(text)
for token in doc:
list_tokens.append(token.text)
return list_tokens
def obtain_grams(text, gram):
list_tokens = tokenize(text)
list_bigrams = list(everygrams(list_tokens, min_len=gram, max_len=gram))
return list_bigrams
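# For illustration, the fixed-length n-grams produced by `everygrams(min_len=gram, max_len=gram)` are equivalent to a simple sliding window over the token list. A pure-Python sketch, independent of nltk:

```python
def sliding_ngrams(tokens, n):
    """Return all contiguous n-grams of a token list as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(sliding_ngrams(["dolor", "abdominal", "agudo"], 2))
# prints [('dolor', 'abdominal'), ('abdominal', 'agudo')]
```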
def create_new_df_xgrams(df):
list_id = []
list_ngram = []
    # Iterate over the DataFrame rows:
for index, row in df.iterrows():
file_id = row['id']
abstract = row['abstract']
title = row['title']
title_unig = tokenize(title)
title_big = obtain_grams(title ,2)
title_trig = obtain_grams(title ,3)
abstract_unig = tokenize(abstract)
abstract_big = obtain_grams(abstract ,2)
abstract_trig = obtain_grams(abstract ,3)
        ## processing
title_unig = remove_stopword_punt_digit(to_lowercase(title_unig))
title_big = to_lowercase(title_big)
title_trig = to_lowercase(title_trig)
abstract_unig = remove_stopword_punt_digit(to_lowercase(abstract_unig))
abstract_big = to_lowercase(abstract_big)
abstract_trig = to_lowercase(abstract_trig)
len_total = len(title_unig) + len(title_big) + len(title_trig) + len(abstract_unig) + len(abstract_big) + len(abstract_trig)
list_id.extend([file_id] * len_total)
list_ngram.extend(title_unig)
list_ngram.extend(title_big)
list_ngram.extend(title_trig)
list_ngram.extend(abstract_unig)
list_ngram.extend(abstract_big)
list_ngram.extend(abstract_trig)
new_df = pd.DataFrame ( { 'id': list_id,
'ngram': list_ngram
})
return new_df
# +
def read_df_specialties_title_abstract():
list_df = os.listdir(path_df_specialty_title_abstract)
for specialty_df in list_df:
specialty_df = os.path.join(path_df_specialty_title_abstract, specialty_df)
specialty_name_csv = specialty_df.split("/")[-1]
df = pd.read_csv(specialty_df)
print("Specialty: ", specialty_name_csv , ' --- Longitud del DF:', len(df))
new_df = create_new_df_xgrams(df)
file_out = os.path.join(path_df_specialty_xgram_title_abstract, specialty_name_csv)
new_df.to_csv(file_out)
print("Guardado ", file_out, ' Long: ', len(new_df))
# -
read_df_specialties_title_abstract()
# # 4. Create another type of structure
import os
import pandas as pd
import pickle
path_df_specialty_xgram_title_abstract = './pubmed_files/dataframes/df_specialty_ngram'
def read_df_specialties_grams():
dic_final = {}
list_df = os.listdir(path_df_specialty_xgram_title_abstract)
for specialty_df in list_df:
dic_specialty = {}
specialty_name_csv = specialty_df
df = pd.read_csv(os.path.join(path_df_specialty_xgram_title_abstract, specialty_df))
print("Specialty: ", specialty_name_csv , ' --- Longitud del DF:', len(df))
list_docs = []
dic_terms = {}
        # Iterate over the DataFrame rows:
for index, row in df.iterrows():
file_id = row['id']
ngram = row['ngram']
list_docs.append(file_id)
dic_terms.setdefault(ngram, []).append(file_id)
dic_specialty['terms'] = dic_terms
dic_specialty['docs'] = list(set(list_docs))
print("Nº de términos en la especialidad:", len(dic_specialty['terms']))
print("Nº de doc en la especialidad:", len(dic_specialty['docs']))
dic_final[specialty_name_csv] = dic_specialty
with open('dic_specialties.pkl', 'wb') as f:
pickle.dump(dic_final, f)
read_df_specialties_grams()
# ### Test several terms in a specialty:
def deserialize_object(path):
pickle_in = open(path,"rb")
obj = pickle.load(pickle_in)
pickle_in.close()
print("Cargado el objeto", path.split("/")[- 1])
return obj
path_dic = 'dic_specialties.pkl'
dic = deserialize_object(path_dic)
# +
print(dic['vaccinology.csv']['terms']["('de', 'la')"]) # 19 times -> correct
print(dic['vaccinology.csv']['terms']["vacuna"]) # 13 times -> correct
# -
print(dic['vaccinology.csv']['docs'])
# (source file: 03_treatment_text/01_create_dataframe_from_xml.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import datetime
from datetime import time as dt
import matplotlib.pyplot as plt
import seaborn as sns
import gc
gc.collect()
#mydatapath="yellow_tripdata_2018-01.csv"
# this function reads one month's data in chunks and drops incomplete rows
def data_aggregator(path,columnnumber,chunksize):
df_list = []
for chunk in pd.read_csv(path,usecols=columnnumber, chunksize=chunksize):
df_list.append(pd.DataFrame(chunk).dropna())
result = pd.concat(df_list)
del df_list
return result
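# The chunked-read pattern above can be checked on a tiny in-memory CSV (the data below are made up):

```python
import io
import pandas as pd

csv_data = io.StringIO("a,b,c\n1,2,3\n4,5,\n7,8,9\n")
# read two selected columns in chunks, dropping incomplete rows chunk by chunk
chunks = [chunk.dropna() for chunk in pd.read_csv(csv_data, usecols=[0, 2], chunksize=2)]
result = pd.concat(chunks)
print(len(result))  # prints 2: the row with a missing 'c' value was dropped
```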
JanData="yellow_tripdata_2018-01.csv"
FebData="yellow_tripdata_2018-02.csv"
MarData="yellow_tripdata_2018-03.csv"
AprData="yellow_tripdata_2018-04.csv"
MayData="yellow_tripdata_2018-05.csv"
JunData="yellow_tripdata_2018-06.csv"
MayDF=data_aggregator(MayData,[1,2,4],10000)
MayData=pd.DataFrame(columns=['pickup','dropoff','distance'])
MayData['pickup']=pd.to_datetime(MayDF['tpep_pickup_datetime'])
MayData['dropoff']=pd.to_datetime(MayDF['tpep_dropoff_datetime'])
MayData['distance']=MayDF['trip_distance']
MayDF=None
del MayDF
MayDuration=pd.DataFrame(columns=['Duration','Distance'])
MayDuration['Duration']=(MayData['dropoff']-MayData['pickup']).dt.seconds/60
MayDuration['Distance']=MayData['distance']
MayData=None
del MayData
MayDuration=MayDuration[MayDuration.Duration > 1.0]
MayDuration=MayDuration[MayDuration.Distance > 0.0]
JunDF=data_aggregator(JunData,[1,2,4],10000)
JunData=pd.DataFrame(columns=['pickup','dropoff','distance'])
JunData['pickup']=pd.to_datetime(JunDF['tpep_pickup_datetime'])
JunData['dropoff']=pd.to_datetime(JunDF['tpep_dropoff_datetime'])
JunData['distance']=JunDF['trip_distance']
JunDF=None
del JunDF
JunDuration=pd.DataFrame(columns=['Duration','Distance'])
JunDuration['Duration']=(JunData['dropoff']-JunData['pickup']).dt.seconds/60
JunDuration['Distance']=JunData['distance']
JunData=None
del JunData
JunDuration=JunDuration[JunDuration.Duration > 1]
JunDuration=JunDuration[JunDuration.Distance > 0.0]
FebDF=data_aggregator(FebData,[1,2,4],10000)
FebData=pd.DataFrame(columns=['pickup','dropoff','distance'])
FebData['pickup']=pd.to_datetime(FebDF['tpep_pickup_datetime'])
FebData['dropoff']=pd.to_datetime(FebDF['tpep_dropoff_datetime'])
FebData['distance']=FebDF['trip_distance']
FebDF=None
del FebDF
FebDuration=pd.DataFrame(columns=['Duration','Distance'])
FebDuration['Duration']=(FebData['dropoff']-FebData['pickup']).dt.seconds/60
FebDuration['Distance']=FebData['distance']
FebData=None
del FebData
FebDuration=FebDuration[FebDuration.Duration > 1]
FebDuration=FebDuration[FebDuration.Distance > 0]
MarDF=data_aggregator(MarData,[1,2,4],10000)
MarData=pd.DataFrame(columns=['pickup','dropoff','distance'])
MarData['pickup']=pd.to_datetime(MarDF['tpep_pickup_datetime'])
MarData['dropoff']=pd.to_datetime(MarDF['tpep_dropoff_datetime'])
MarData['distance']=MarDF['trip_distance']
MarDF=None
del MarDF
MarDuration=pd.DataFrame(columns=['Duration','Distance'])
MarDuration['Duration']=(MarData['dropoff']-MarData['pickup']).dt.seconds/60
MarDuration['Distance']=MarData['distance']
MarData=None
del MarData
MarDuration=MarDuration[MarDuration.Duration > 1]
MarDuration=MarDuration[MarDuration.Distance > 0]
AprDF=data_aggregator(AprData,[1,2,4],10000)
AprData=pd.DataFrame(columns=['pickup','dropoff','distance'])
AprData['pickup']=pd.to_datetime(AprDF['tpep_pickup_datetime'])
AprData['dropoff']=pd.to_datetime(AprDF['tpep_dropoff_datetime'])
AprData['distance']=AprDF['trip_distance']
AprDF=None
del AprDF
AprDuration=pd.DataFrame(columns=['Duration','Distance'])
AprDuration['Duration']=(AprData['dropoff']-AprData['pickup']).dt.seconds/60
AprDuration['Distance']=AprData['distance']
AprData=None
del AprData
AprDuration=AprDuration[AprDuration.Duration > 1]
AprDuration=AprDuration[AprDuration.Distance > 0]
JanDF=data_aggregator(JanData,[1,2,4],10000)
JanData=pd.DataFrame(columns=['pickup','dropoff','distance'])
JanData['pickup']=pd.to_datetime(JanDF['tpep_pickup_datetime'])
JanData['dropoff']=pd.to_datetime(JanDF['tpep_dropoff_datetime'])
JanData['distance']=JanDF['trip_distance']
JanDF=None
del JanDF
JanDuration=pd.DataFrame(columns=['Duration','Distance'])
JanDuration['Duration']=(JanData['dropoff']-JanData['pickup']).dt.seconds/60
JanDuration['Distance']=JanData['distance']
JanData=None
del JanData
JanDuration=JanDuration[JanDuration.Duration > 1]
JanDuration=JanDuration[JanDuration.Distance > 0]
FullData=pd.concat([JanDuration,FebDuration,MarDuration,AprDuration,MayDuration,JunDuration])
plt.scatter(FullData['Duration'],y=FullData['Distance'])
plt.xlabel('Duration')
plt.ylabel('Distance')
FullData=FullData[FullData.Distance < 500.0]
plt.scatter(FullData['Duration'],y=FullData['Distance'])
plt.xlabel('Duration')
plt.ylabel('Distance')
# From the plot we notice unrealistic data, such as points that take more than 650 minutes for a short distance.
FullData=FullData[FullData.Distance < 300.0]
FullData=FullData[FullData.Duration < 650.0]
plt.scatter(FullData['Duration'],y=FullData['Distance'])
plt.xlabel('Duration')
plt.ylabel('Distance')
import numpy
numpy.corrcoef(FullData['Duration'], FullData['Distance'])[0, 1]
# The Pearson coefficient we obtain after cleaning the data is 0.75, meaning there is a strong relationship between distance and duration, which matches what we expect in the real world.
# (source file: RQ5.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] Collapsed="false" slideshow={"slide_type": "slide"}
# <img src="https://upload.wikimedia.org/wikipedia/commons/4/47/Logo_UTFSM.png" width="200" alt="utfsm-logo" align="left"/>
#
# # MAT281
# ### Applications of Mathematics in Engineering
# + [markdown] Collapsed="false" slideshow={"slide_type": "slide"}
# ## Project 01: Digit Classification
# + [markdown] Collapsed="false"
# ### Instructions
#
# * Fill in your personal details (name and USM "rol") in the next cell.
# * You must _push_ your changes to your personal course repository.
# * As a backup, you must send a .zip file with the format `mXX_projectYY_apellido_nombre.zip` to <EMAIL>; it must contain everything needed for every cell to run correctly, whether data, images, scripts, etc.
# * Grading will consider:
#     - Solutions
#     - Code
#     - That Binder is configured correctly.
#     - When pressing `Kernel -> Restart Kernel and Run All Cells`, every cell must run without errors.
# + [markdown] Collapsed="false"
# __Name__:
#
# __Rol__:
# + [markdown] Collapsed="false"
# ## Digit Classification
# In this lab we will work on recognizing a digit from an image.
#
# + [markdown] Collapsed="false" slideshow={"slide_type": "subslide"}
# ## Contents
# * [K Nearest Neighbours](#k_nearest_neighbours)
# * [Data Exploration](#data_exploration)
# * [Training and Prediction](#train_and_prediction)
# * [Model Selection](#model_selection)
# + [markdown] Collapsed="false" slideshow={"slide_type": "slide"}
# <a id='k_nearest_neighbours'></a>
# + [markdown] Collapsed="false"
# ## K Nearest Neighbours
# + [markdown] Collapsed="false"
# The **k Nearest Neighbours** algorithm is a non-parametric method: once the parameter $k$ has been fixed, no additional parameters are estimated.
#
# Let $x^{(i)} = (x^{(i)}_1, ..., x^{(i)}_n)$ be points with known labels $y^{(i)}$, for $i=1, ..., m$.
#
# The classification problem consists of finding the label of a new point $x=(x_1, ..., x_n)$ whose label we do not know.
# + [markdown] Collapsed="false"
# The label of a point is obtained as follows:
# * For $k=1$, **1NN** assigns to $x$ the label of its nearest neighbour.
# * For generic $k$, **kNN** assigns to $x$ the most popular label among its k nearest neighbours.
#
# The model underlying kNN is the entire training set. Unlike methods that genuinely generalize and summarize the information (such as logistic regression, for example), when a prediction is needed the kNN algorithm looks at **all** the data, selects the k closest points, and returns the most popular/most common label among them. The data are not summarized into parameters; they must always be kept in memory. It is therefore a method that does not scale well to large datasets.
# + [markdown] Collapsed="false"
# In case of a tie, there are several ways to break it:
# * Choose the label of the single nearest neighbour (problem: does not guarantee a resolution).
# * Choose the label with the smallest value (problem: arbitrary).
# * Choose the label that would be obtained with $k+1$ or $k-1$ (problem: does not guarantee a resolution, increases computation time).
# + [markdown] Collapsed="false"
# The closeness or similarity between data points can be measured in many ways, but in general it depends on the type of data and the context.
#
# * For real-valued data, any distance can be used, the **Euclidean distance** being the most common. Some components can also be weighted more heavily than others. It is convenient to normalize the data so that the notion of distance applies more naturally.
#
# * For **categorical or binary data**, the Hamming distance is commonly used.
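As a minimal illustration of the Hamming distance mentioned above (the encoded vectors here are made up for the example):

```python
import numpy as np

# Two observations with four categorical features, encoded as integer codes
a = np.array([0, 2, 1, 3])
b = np.array([0, 1, 1, 0])

# Hamming distance: how many coordinates differ (or the fraction that differ)
hamming_count = int(np.sum(a != b))
hamming_fraction = float(np.mean(a != b))
```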
# + [markdown] Collapsed="false"
# Below, a bare-bones implementation in numpy:
# + Collapsed="false"
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# + Collapsed="false"
def knn_search(X, k, x):
    """Find the k nearest neighbours of x among the rows of X."""
    # Euclidean distance
    d = np.linalg.norm(X - x, axis=1)
    # Sort by closeness
    idx = np.argsort(d)
    # Return the k closest
    id_closest = idx[:k]
    return id_closest, d[id_closest].max()
def knn(X, Y, k, x):
    # Get the k closest points
    k_closest, dmax = knn_search(X, k, x)
    # Get their labels
    Y_closest = Y[k_closest]
    # Get the most popular label
    counts = np.bincount(Y_closest.flatten())
    # Return the most popular (any one of them, in case of a tie)
    return np.argmax(counts), k_closest, dmax
def plot_knn(X, Y, k, x):
    y_pred, neig_idx, dmax = knn(X, Y, k, x)
    # Plot the data and the query point
    fig = plt.figure(figsize=(8, 8))
    plt.plot(x[0, 0], x[0, 1], 'ok', ms=16)
    m_ob = Y[:, 0] == 0
    plt.plot(X[m_ob, 0], X[m_ob, 1], 'ob', ms=8)
    m_sr = Y[:, 0] == 1
    plt.plot(X[m_sr, 0], X[m_sr, 1], 'sr', ms=8)
    # Highlight the neighbours
    plt.plot(X[neig_idx, 0], X[neig_idx, 1], 'o', markerfacecolor='None', markersize=24, markeredgewidth=1)
    # Plot a circle through the farthest neighbour
    x_circle = dmax * np.cos(np.linspace(0, 2*np.pi, 360)) + x[0, 0]
    y_circle = dmax * np.sin(np.linspace(0, 2*np.pi, 360)) + x[0, 1]
    plt.plot(x_circle, y_circle, 'k', alpha=0.25)
    plt.show()
    # Print the result
    if y_pred == 0:
        print("Predicted label for the point = {} (blue circle)".format(y_pred))
    else:
        print("Predicted label for the point = {} (red square)".format(y_pred))
# + [markdown] Collapsed="false"
# You can run this code several times, varying the number of neighbours `k`, to see how it affects the algorithm.
# + Collapsed="false"
k = 3 # hyper-parameter
N = 100
X = np.random.rand(N, 2) # random dataset
Y = np.array(np.random.rand(N) < 0.4, dtype=int).reshape(N, 1) # random dataset
x = np.random.rand(1, 2) # query point
# performing the search
plot_knn(X, Y, k, x)
# + [markdown] Collapsed="false" slideshow={"slide_type": "slide"}
# <a id='data_exploration'></a>
# + [markdown] Collapsed="false"
# ## Data Exploration
# + [markdown] Collapsed="false"
# Next we load the dataset to be used, via the `datasets` sub-module of `sklearn`.
# + Collapsed="false"
import pandas as pd
from sklearn import datasets
# + Collapsed="false"
digits_dict = datasets.load_digits()
# + Collapsed="false"
print(digits_dict["DESCR"])
# + Collapsed="false"
digits_dict.keys()
# + Collapsed="false"
digits_dict["target"]
# + [markdown] Collapsed="false"
# Next we build a dataframe named `digits` from the data in `digits_dict`, with 65 columns: the first 64 correspond to the grayscale image representation (0 = white, 16 = black) and the last one, named _target_, corresponds to the digit (`target`).
# + Collapsed="false"
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
digits.head()
# + [markdown] Collapsed="false"
# ### Exercise 1
#
# **_(10 points)_**
# + [markdown] Collapsed="false"
# **Exploratory analysis:** Carry out your exploratory analysis; don't forget anything! Remember: every analysis should answer a question.
#
# Some suggestions:
#
# * How are the data distributed?
# * How much memory am I using?
# * What data types are present?
# * How many records are there per class?
# * Are there records that do not match your prior knowledge of the data?
# + Collapsed="false"
## FIX ME PLEASE
# + [markdown] Collapsed="false"
# ### Exercise 2
#
# **_(10 points)_**
# + [markdown] Collapsed="false"
# **Visualization:** To visualize the data we will use `matplotlib`'s `imshow` method. The array must be converted from shape (1, 64) to (8, 8) so that the image is square and the digit can be made out. We will also overlay the label corresponding to the digit, using the `text` method. This will let us compare the generated image with the label associated with the values. We will do this for the first 25 records in the file.
# + Collapsed="false"
digits_dict["images"][0]
# + [markdown] Collapsed="false"
# Visualize the digit images using the `images` key of `digits_dict`.
#
# Suggestion: Use `plt.subplots` and the `imshow` method. You can build a grid of several images at once!
# + Collapsed="false"
nx, ny = 5, 5
fig, axs = plt.subplots(nx, ny, figsize=(12, 12))
## FIX ME PLEASE
# + [markdown] Collapsed="false" slideshow={"slide_type": "slide"}
# <a id='train_and_prediction'></a>
# + [markdown] Collapsed="false"
# ## Training and Prediction
# + [markdown] Collapsed="false"
# We will use the `scikit-learn` implementation called `KNeighborsClassifier` (which is an _estimator_), found in `neighbors`.
#
# Use the default metric.
# + Collapsed="false" jupyter={"outputs_hidden": false}
from sklearn.neighbors import KNeighborsClassifier
# + Collapsed="false"
X = digits.drop(columns="target").values
y = digits["target"].values
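As a reference before the exercises, the basic estimator API (`fit`/`score`) can be sketched as follows. This is only a sketch with an illustrative k=3; fitting and scoring on the same data, as done here, is optimistic, which Exercise 4 explores:

```python
from sklearn.datasets import load_digits
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
X, y = digits.data, digits.target

# Fit a 3-nearest-neighbours classifier and score it on the same data
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
train_score = knn.score(X, y)  # mean accuracy on the data it was trained on
```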
# + [markdown] Collapsed="false"
# ### Exercise 3
#
# **_(10 points)_**
# + [markdown] Collapsed="false"
# Train using all the data. Also, remember that `k` is a hyper-parameter, so try different values of `k` and obtain the `score` from the model.
# + Collapsed="false" jupyter={"outputs_hidden": false}
k_array = np.arange(1, 101)
# + Collapsed="false"
## FIX ME PLEASE ##
# + [markdown] Collapsed="false"
# **Questions**
#
# * Which metric was used?
# * Why does it give these results? In particular for k=1.
# * Why was the design matrix not normalized or standardized?
# + [markdown] Collapsed="false"
# _## ANSWER HERE ##_
# + [markdown] Collapsed="false"
# ### Exercise 4
#
# **_(10 points)_**
# + [markdown] Collapsed="false"
# Split the data into _train_ and _test_ using the course's preferred function. For reproducibility use `random_state=42`. Then fit again with the _train_ data and the different values of _k_, but this time compute the _score_ on the _test_ data.
#
# Which model do you choose?
# + Collapsed="false"
from sklearn.model_selection import train_test_split
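A minimal sketch of the split-and-score workflow the exercise asks for; the single k=3 and the default test size are illustrative assumptions, not the required answer:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=42
)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)               # fit only on the training split
test_score = knn.score(X_test, y_test)  # evaluate on held-out data
```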
# + Collapsed="false"
X_train, X_test, y_train, y_test = ## FIX ME PLEASE ##
# + Collapsed="false"
## FIX ME PLEASE ##
# + [markdown] Collapsed="false" slideshow={"slide_type": "slide"}
# <a id='model_selection'></a>
# + [markdown] Collapsed="false"
# ## Model Selection
# + [markdown] Collapsed="false"
# ### Exercise 5
#
# **_(15 points)_**
# + [markdown] Collapsed="true"
#
# **Validation curve**: Replicate the example in the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.html#sphx-glr-auto-examples-model-selection-plot-validation-curve-py) but with the appropriate model, parameters and metric.
#
# What can you say about the choice of `k`?
# + Collapsed="false"
from sklearn.model_selection import validation_curve
# + Collapsed="false"
param_range = np.arange(1, 101)
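A hedged sketch of the `validation_curve` call for this problem; the narrow range and `cv=3` are scaled down only to keep the example fast, not the values the exercise requires:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import validation_curve
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
small_range = np.arange(1, 11)

# One row of scores per parameter value, one column per CV fold
train_scores, test_scores = validation_curve(
    KNeighborsClassifier(), digits.data, digits.target,
    param_name="n_neighbors", param_range=small_range, cv=3,
)
mean_test_scores = test_scores.mean(axis=1)
```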
# + Collapsed="false"
## FIX ME PLEASE ##
# + Collapsed="false"
plt.figure(figsize=(12, 8))
## FIX ME PLEASE ##
plt.show();
# + [markdown] Collapsed="false"
# **Questions**
#
# * What does this plot reflect?
# * What conclusions can you draw from it?
# * What pattern do you observe in the data regarding even and odd numbers? Why does this happen?
# + [markdown] Collapsed="false"
# _## ANSWER HERE ##_
# + [markdown] Collapsed="false"
# ### Exercise 6
#
# **_(15 points)_**
# + [markdown] Collapsed="false"
# **Hyper-parameter search with cross-validation:** Use `sklearn.model_selection.GridSearchCV` to obtain the best estimate of the parameter _k_. Try values of _k_ from 2 to 100.
# + Collapsed="false"
from sklearn.model_selection import GridSearchCV
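The `GridSearchCV` pattern can be sketched as below; the narrow parameter grid and `cv=3` are chosen only to keep the sketch quick, whereas the exercise asks for k from 2 to 100:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
param_grid = {"n_neighbors": np.arange(2, 8)}

grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=3)
grid.fit(digits.data, digits.target)

best_k = grid.best_params_["n_neighbors"]  # best value found in the grid
best_score = grid.best_score_              # its mean cross-validated accuracy
```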
# + Collapsed="false"
parameters = ## FIX ME PLEASE ##
digits_gscv = ## FIX ME PLEASE ##
## FIX ME PLEASE ##
# + Collapsed="false"
# Best params
## FIX ME PLEASE ##
# + [markdown] Collapsed="false"
# **Questions**
#
# * What is the best value of _k_?
# * Is it consistent with what was obtained in the previous exercise?
# + [markdown] Collapsed="false"
# _## ANSWER HERE ##_
# + [markdown] Collapsed="false"
# ### Exercise 7
#
# **_(10 points)_**
# + [markdown] Collapsed="false"
# __Visualizing data:__ Below, code is provided to compare the predicted labels against the true labels on the _test_ set.
#
# * Define the variable `best_knn` as the best `KNeighborsClassifier` estimator obtained.
# * Fit this estimator with the training data.
# * Create the array `y_pred` by predicting on the test data.
#
# _Hint:_ `digits_gscv.best_estimator_` gives you an `estimator` instance of the best estimator found by `GridSearchCV`.
# + Collapsed="false"
best_knn =## FIX ME PLEASE ##
## FIX ME PLEASE ##
# + Collapsed="false"
y_pred = ## FIX ME PLEASE ##
# + Collapsed="false"
# Show the correctly classified data
mask = (y_pred == y_test)
X_aux = X_test[mask]
y_aux_true = y_test[mask]
y_aux_pred = y_pred[mask]
# We'll plot the first 25 correctly classified examples
nx, ny = 5, 5
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
    for j in range(ny):
        index = j + ny * i
        data = X_aux[index, :].reshape(8, 8)
        label_pred = str(int(y_aux_pred[index]))
        label_true = str(int(y_aux_true[index]))
        ax[i][j].imshow(data, interpolation='nearest', cmap='gray_r')
        ax[i][j].text(0, 0, label_pred, horizontalalignment='center', verticalalignment='center', fontsize=10, color='green')
        ax[i][j].text(7, 0, label_true, horizontalalignment='center', verticalalignment='center', fontsize=10, color='blue')
        ax[i][j].get_xaxis().set_visible(False)
        ax[i][j].get_yaxis().set_visible(False)
plt.show()
# + [markdown] Collapsed="false"
# Modify the code provided above so that it shows the incorrectly labelled digits, changing the mask appropriately. Also change the label colour from green to red, to indicate a wrong label.
# + Collapsed="false"
## FIX ME PLEASE ##
# + [markdown] Collapsed="false"
# **Question**
#
# * Using visual inspection alone, why do you think it fails on those values?
# + [markdown] Collapsed="false"
# _## ANSWER HERE ##_
# + [markdown] Collapsed="false"
# ### Exercise 8
#
# **_(10 points)_**
# + [markdown] Collapsed="false"
# **Confusion matrix:** Plot the confusion matrix.
#
# **Important!** An older version of `scikit-learn` was distributed at the start of the course, so you should update this library to the latest version in order to use `plot_confusion_matrix`. Doing so is as easy as running `conda update -n mat281 -c conda-forge scikit-learn` in the conda terminal. (Note that in scikit-learn >= 1.2, `plot_confusion_matrix` was removed in favour of `ConfusionMatrixDisplay.from_estimator`.)
# + Collapsed="false"
from sklearn.metrics import plot_confusion_matrix
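A version-independent sketch uses the raw `confusion_matrix` instead of the plotting helper; the k=3 and the default split here are illustrative assumptions:

```python
from sklearn.datasets import load_digits
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=42
)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
cm = confusion_matrix(y_test, knn.predict(X_test))
# Row = true digit, column = predicted digit; the diagonal counts correct predictions
accuracy = cm.trace() / cm.sum()
```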
# + Collapsed="false"
fig, ax = plt.subplots(figsize=(12, 12))
## FIX ME PLEASE ##
# + [markdown] Collapsed="false"
# **Questions**
#
# * Which labels have the best and worst predictions?
# * Given your prior knowledge of the problem, why do you think those labels are the ones with the best and worst predictions?
# + [markdown] Collapsed="false"
# _## ANSWER HERE ##_
# + [markdown] Collapsed="false"
# ### Exercise 9
#
# **_(10 points)_**
# + [markdown] Collapsed="false"
# **Learning curve:** Replicate the example in the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html#sphx-glr-auto-examples-model-selection-plot-learning-curve-py) but using only a KNN model with the hyper-parameter _k_ selected earlier.
# + Collapsed="false" jupyter={"source_hidden": true}
def plot_learning_curve(estimator, title, X, y, axes=None, ylim=None, cv=None,
n_jobs=None, train_sizes=np.linspace(.1, 1.0, 5)):
"""
Generate 3 plots: the test and training learning curve, the training
samples vs fit times curve, the fit times vs score curve.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
title : string
Title for the chart.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
axes : array of 3 axes, optional (default=None)
Axes to use for plotting the curves.
ylim : tuple, shape (ymin, ymax), optional
Defines minimum and maximum yvalues plotted.
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 5-fold cross-validation,
- integer, to specify the number of folds.
- :term:`CV splitter`,
- An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if ``y`` is binary or multiclass,
        :class:`StratifiedKFold` is used. If the estimator is not a classifier
or if ``y`` is neither binary nor multiclass, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validators that can be used here.
n_jobs : int or None, optional (default=None)
Number of jobs to run in parallel.
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details.
train_sizes : array-like, shape (n_ticks,), dtype float or int
Relative or absolute numbers of training examples that will be used to
generate the learning curve. If the dtype is float, it is regarded as a
fraction of the maximum size of the training set (that is determined
by the selected validation method), i.e. it has to be within (0, 1].
Otherwise it is interpreted as absolute sizes of the training sets.
Note that for classification the number of samples usually have to
be big enough to contain at least one sample from each class.
(default: np.linspace(0.1, 1.0, 5))
"""
    if axes is None:
        _, axes = plt.subplots(1, 3, figsize=(20, 5))
    axes[0].set_title(title)
    if ylim is not None:
        axes[0].set_ylim(*ylim)
    axes[0].set_xlabel("Training examples")
    axes[0].set_ylabel("Score")
    train_sizes, train_scores, test_scores, fit_times, _ = \
        learning_curve(estimator, X, y, cv=cv, n_jobs=n_jobs,
                       train_sizes=train_sizes,
                       return_times=True)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    fit_times_mean = np.mean(fit_times, axis=1)
    fit_times_std = np.std(fit_times, axis=1)
    # Plot learning curve
    axes[0].grid()
    axes[0].fill_between(train_sizes, train_scores_mean - train_scores_std,
                         train_scores_mean + train_scores_std, alpha=0.1,
                         color="r")
    axes[0].fill_between(train_sizes, test_scores_mean - test_scores_std,
                         test_scores_mean + test_scores_std, alpha=0.1,
                         color="g")
    axes[0].plot(train_sizes, train_scores_mean, 'o-', color="r",
                 label="Training score")
    axes[0].plot(train_sizes, test_scores_mean, 'o-', color="g",
                 label="Cross-validation score")
    axes[0].legend(loc="best")
    # Plot n_samples vs fit_times
    axes[1].grid()
    axes[1].plot(train_sizes, fit_times_mean, 'o-')
    axes[1].fill_between(train_sizes, fit_times_mean - fit_times_std,
                         fit_times_mean + fit_times_std, alpha=0.1)
    axes[1].set_xlabel("Training examples")
    axes[1].set_ylabel("fit_times")
    axes[1].set_title("Scalability of the model")
    # Plot fit_time vs score
    axes[2].grid()
    axes[2].plot(fit_times_mean, test_scores_mean, 'o-')
    axes[2].fill_between(fit_times_mean, test_scores_mean - test_scores_std,
                         test_scores_mean + test_scores_std, alpha=0.1)
    axes[2].set_xlabel("fit_times")
    axes[2].set_ylabel("Score")
    axes[2].set_title("Performance of the model")
    return plt
# + Collapsed="false"
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
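The heart of the helper above is the `learning_curve` call itself, which can be sketched on its own; the k=3 and the three shuffle splits are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import ShuffleSplit, learning_curve
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
cv = ShuffleSplit(n_splits=3, test_size=0.2, random_state=0)

# Scores for models trained on 10%, ..., 100% of the training data
train_sizes, train_scores, test_scores = learning_curve(
    KNeighborsClassifier(n_neighbors=3), digits.data, digits.target,
    cv=cv, train_sizes=np.linspace(0.1, 1.0, 5),
)
```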
# + Collapsed="false"
fig, axes = plt.subplots(3, 1, figsize=(10, 15))
## FIX ME PLEASE ##
plt.show()
# + [markdown] Collapsed="false"
# **Questions**
#
# * What does this plot reflect?
# * What conclusions can you draw from it?
# * What do you think deserves the most attention when working on a classification problem?
# + [markdown] Collapsed="false"
# _## ANSWER HERE ##_
|
Clases/m05_data_science/m05_project01/m05_project01.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python 3: The notebook for image processing
# ## Setup
# There are many Python libraries for manipulating images.
#
# We will first explore **`pillow`**, which is the Python 3 version of the very well-known **`PIL`** module (Python Image Library).
#
# Before using the features of a library, it must be imported into the active jupyter notebook; for example, try the command:
import PIL
# If nothing happens, that is a good sign: your Python environment knows this library.
# Otherwise you get an error message, and the missing library must then be installed from a command-line terminal...
# ## A word of caution
# Before (mis)treating an image, it is wise to make sure you are allowed to do so.
#
# Thus, the image used in this notebook comes from a search on the meta search engine [CC search](https://search.creativecommons.org/).
# <img src="img/mesange.jpg" alt="mesange image" title="A young blue tit" width="50%">
#
# It is under a *CC0 Creative Commons (Free for commercial use, No attribution required)* licence and is available for download at: https://pixabay.com/en/bird-tit-blue-tit-young-2396015/
# ## First steps with `pillow`:
from PIL import Image
img = Image.open("img/mesange.jpg")
# The returned object, img, can be queried for its size, format and mode information:
print(img.size, img.format, img.mode)
# The value of pixel (0,0) (here, the top-left corner) can be read like this:
img.getpixel((0,0))
# The result is of type `tuple`, an ordered and, unlike a list, immutable collection of elements.
type(img.getpixel((0,0)))
# We can therefore retrieve a single colour component, for example the green one:
img.getpixel((0,0))[1]
# A pixel of the (RGB) image can be modified by giving its coordinates and its new components.
img.putpixel((0,0),(255,0,0))
# We then check the modification:
img.getpixel((0,0))
# To act on a larger area of our image, we can conveniently use nested loops, for example:
for i in range(120, 270):
    for j in range(100, 200):
        img.putpixel((i,j),(255,255,255))
# The result of the modification can then be displayed very simply with **`jupyter notebook`**:
img
# The modification can also be displayed in the default image-viewer application with the `show()` function, but only with jupyter running on a local machine:
img.show()
# We can then save the resulting image:
img.save("imgV0.png")
# A new image file has been created at the root of the directory containing this notebook. Careful: if a file with the same name already existed, it is simply overwritten and replaced by the new one.
# ## Viewing an image file
# The jupyter environment then offers a great many ways to see the modifications we made to the image saved this way:
# * To view it in the browser, just double-click the file created in this notebook's directory via the dashboard. The image then opens in a new browser tab.
# 
# * To embed it in this notebook, we can use Markdown or HTML
# + active=""
# 
# + active=""
# <img src="imgV0.png" alt="processed image" title="Image after processing" width="65%">
# -
# * we can also use `display` from `IPython`
from IPython.display import Image
Image("imgV0.png")
# * We can even use `matplotlib`:
import matplotlib.pyplot as plt
plt.imshow(img)
plt.show()
# * ...
# ## Going further with `Pillow`
#
#
# ### Creating an image from scratch
# The following command creates an image of size 400×300, with 3 colour planes, initially entirely grey:
from PIL import Image
monImage=Image.new("RGB",(400,300),"grey")
monImage
# ### Advanced drawing functions
# It is possible to draw on an image. To do so, we start by getting an ImageDraw instance:
from PIL import ImageDraw
imgd = ImageDraw.Draw(img)
# Then we draw on this new object:
imgd.line([(250,200),(300,250)], (0,255,255), width=10)
img
# ## Mini-project:
#
# ### Augment the image:
# > * Add a frame around the white rectangle, with some text inside it...
#
# ### Apply a treatment:
# > * Display the image in black and white, with a blue, green, red or sepia filter, with blur, ...
#
# ### QR Code:
# > * Start a QR Code generator after watching Micode's video: https://youtu.be/N2Wz1T4drsg
# > See also the sites:
# * http://www-igm.univ-mlv.fr/~dr/XPOSE2011/QRCode/index.html
# * http://blog.qartis.com/decoding-small-qr-codes-by-hand/
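As a starting point for the filter part of the mini-project, a sketch with Pillow; the synthetic image and its colour values are made up so that the example does not depend on a file:

```python
from PIL import Image, ImageFilter

# A small uniform RGB image stands in for a loaded photo
img = Image.new("RGB", (64, 64), (200, 120, 40))

gray = img.convert("L")                                   # black and white
blurred = img.filter(ImageFilter.GaussianBlur(radius=2))  # blur

# A crude blue filter: zero out the red and green channels
r, g, b = img.split()
zero = r.point(lambda _: 0)
blue_only = Image.merge("RGB", (zero, zero, b))
```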
# ## Need help?
#
# PIL (Python Image Library), the flagship image-processing module in Python, was not immediately ported to Python 3. As is often the case in the free-software world, a fork for Python 3 appeared: Pillow. Pillow and PIL are therefore used in practically the same way.
#
# The full Pillow documentation is available here: https://pillow.readthedocs.io
#
# More information can be found in the PIL documentation: http://effbot.org/imagingbook/
#
# and on this site: http://jlbicquelet.free.fr/scripts/python/pil/pil.php#manipulation2
#
# Auto-completion lists all the methods of an instantiated object when you press the **`Tab`** key after the dot:
img.
# The documentation can also be called up to get the list of available functions:
help(img)
# ## Resources:
#
# * https://deptinfo-ensip.univ-poitiers.fr/ENS/doku/doku.php/stu:python_gui:tuto_images
# * http://fsincere.free.fr/isn/python/cours_python_ch10.php
# * http://dept-info.labri.fr/~namyst/ens/lycee/TD1.html
# * http://dept-info.labri.fr/~namyst/ens/lycee/TD2.html
# ## Back to `matplotlib` and `numpy`
#
# As is well known, an image is made of pixels. These pixels can be described by a homogeneous array of numbers,
# and the most efficient way to manipulate arrays of numbers in Python is to use the Numpy library.
# %pylab inline
pixels = plt.imread("img/mesange.jpg")
pixels
pixels.shape
plt.imshow(pixels)
plt.show()
# Any colour can be produced by superimposing red, green and blue light sources in suitable proportions. The colour of a pixel can therefore be represented by three numbers between 0 and 255 giving, respectively, the amounts of each of the primary colours red, green and blue. This is the principle of RGB encoding.
#
# A colour image is therefore described by a three-dimensional array: (height, width, 3). This array can be pictured as the superposition of three 2D arrays.
#
# * pixels[i, j, 0] is the red intensity of the pixel at coordinates (i, j)
# * pixels[i, j, 1] is the green intensity of the pixel at coordinates (i, j)
# * pixels[i, j, 2] is the blue intensity of the pixel at coordinates (i, j)
# ### Slicing
# The cells of the array can be accessed individually or by slice:
#
# | Syntax | Meaning |
# |:----------|:-------------|
# | pixels[i, j] | Pixel at coordinates (i, j) |
# | pixels[i:r, j:s] | 2D sub-array made of rows i to r-1 and columns j to s-1 |
# | pixels[:i, j:] | 2D sub-array made of the rows strictly below i and the columns greater than or equal to j |
# | pixels[:, j] | Column j (1D array) |
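The table above can be checked on a tiny synthetic array (its shape and values are made up for the example; note that on a 3D image array the channel axis comes along in each slice):

```python
import numpy as np

# A miniature "image": 2 rows, 3 columns, 3 colour channels
pixels = np.arange(2 * 3 * 3, dtype=np.uint8).reshape(2, 3, 3)

red_channel = pixels[:, :, 0]  # 2D array of red intensities
corner = pixels[:1, :2]        # first row, first two columns (channels kept)
column = pixels[:, 1]          # column 1: one RGB triple per row
```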
# ## Other libraries for image processing:
#
# ### `imageio`
# * https://pypi.python.org/pypi/imageio
# * http://imageio.readthedocs.io/en/latest/examples.html
#
# ### `PyGame`
# <!--
# from PIL import Image
#
# def mystere(i):
#     (l, h) = i.size
#     for y in range(h):
#         for x in range(l):
#             c = i.getpixel((x, y))
#             inv = tuple(255 - v for v in c)
#             i.putpixel((x, y), inv)
#
# im = Image.open("img/mesange.jpg")
# im.show()
#
# mystere(im)
# im.show()
# -->
|
Rozenn/ISN-Python3-Image.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.0 64-bit (''3.8.0'': pyenv)'
# language: python
# name: python3
# ---
# # Requesting data from esios
import requests
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# read environment variables from .env file
with open('../.env', 'rt') as fp:
    TOKEN = fp.read().strip().split("\n")[0].split("=")[1]
PERSONAL_TOKEN = TOKEN or "<PASSWORD>"
BASE_URL = "https://api.esios.ree.es"
# ## Request archives
#
# As per https://api.esios.ree.es/archive/getting_a_list_of_archives
# +
headers = {
"Accept": "application/json; application/vnd.esios-api-v1+json",
"Content-Type": "application/json",
"Host": "api.esios.ree.es",
"Authorization": "Token token={}".format(PERSONAL_TOKEN),
"Cookie": "",
}
ARCHIVE_URL = BASE_URL + "/archives"
# -
# make the request
r = requests.get(ARCHIVE_URL, headers=headers)
r.json()
# ## Getting data from specific visualization
#
# We want to retrieve the data shown in the visualization in <https://www.esios.ree.es/es/analisis/1293?vis=1&start_date=02-09-2018T00%3A00&end_date=06-10-2018T23%3A50&compare_start_date=01-09-2018T00%3A00&groupby=minutes10&level=1&zoom=6&latlng=40.91351257612758,-1.8896484375>
# We suspect the data we need to fetch is an indicator, as described in <https://api.esios.ree.es/indicator/getting_a_disaggregated_indicator_filtering_values_by_a_date_range_and_geo_ids,_grouped_by_geo_id_and_month,_using_avg_aggregation_for_geo_and_avg_for_time_without_time_trunc>
#
# ```
# locale Get translations for sources (es, en). Default language: es
# datetime A certain date to filter values by (iso8601 format)
# start_date Beginning of the date range to filter indicator values (iso8601 format)
# end_date End of the date range to filter indicator values (iso8601 format)
# time_agg How to aggregate indicator values when grouping them by time. Accepted values: `sum`, `average`. Default value: `sum`.
# time_trunc Tells the API how to trunc data time series. Accepted values: `ten_minutes`, `fifteen_minutes`, `hour`, `day`, `month`, `year`.
# geo_agg How to aggregate indicator values when grouping them by geo_id. Accepted values: `sum`, `average`. Default value: `sum`.
# geo_ids		Tells the API the geo ids to filter the data by.
# geo_trunc Tells the API how to group data at geolocalization level when the geo_agg is informed. Accepted values: 'country', 'electric_system', 'autonomous_community', 'province', 'electric_subsystem', 'town' and 'drainage_basin'
# ```
# +
# format datetimes as ISO8601
# https://stackoverflow.com/questions/2150739/iso-time-iso-8601-in-python
# specify time zones as if it were in Spain or in UTC?
# https://www.enricozini.org/blog/2009/debian/using-python-datetime/
import datetime as dt
from time import strftime
import pytz
dt.datetime.utcnow().isoformat()
# -
pytz.country_timezones['ES']
# +
REQUEST_ID = "1293"
# tzinfo = pytz.timezone('Europe/Madrid')
tzinfo = None
params = {
"locale": "es",
# "datetime": A certain date to filter values by (iso8601 format)
"start_date": dt.datetime(year=2018, month=9, day=2, hour=0, minute=0, second=0, tzinfo=tzinfo).isoformat(), #Beginning of the date range to filter indicator values (iso8601 format)
"end_date": dt.datetime(year=2018, month=10, day=6, hour=23, minute=0, second=0, tzinfo=tzinfo).isoformat(), # End of the date range to filter indicator values (iso8601 format)
"time_agg": "sum", # How to aggregate indicator values when grouping them by time. Accepted values: `sum`, `average`. Default value: `sum`.
"time_trunc": "ten_minutes", # Tells the API how to trunc data time series. Accepted values: `ten_minutes`, `fifteen_minutes`, `hour`, `day`, `month`, `year`.
# "geo_agg": None, # How to aggregate indicator values when grouping them by geo_id. Accepted values: `sum`, `average`. Default value: `sum`.
    # "geo_ids": None, # Tells the API the geo ids to filter the data by.
# "geo_trunc": None, # Tells the API how to group data at geolocalization level when the geo_agg is informed. Accepted values: 'country', 'electric_system', 'autonomous_community', 'province', 'electric_subsystem', 'town' and 'drainage_basin'
}
# -
INDICATOR_URL = BASE_URL + f"/indicators/{REQUEST_ID}"
r2 = requests.get(INDICATOR_URL, headers=headers, params=params)
r2.json()
r2.url
# ## Write JSON to file
# +
import json
with open('../resources/dump.json', 'wt') as fp:
    json.dump(r2.json(), fp)
# -
# ## Move data to dataframe
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.dates import DateFormatter
with open('../resources/dump.json', 'rt') as fp:
    r2_json = json.load(fp)
r2_values = r2_json.get('indicator', {}).get('values', {})
df = pd.DataFrame(r2_values)
df.head()
df.index = pd.to_datetime(df['datetime'])
df.head()
df[['value']]
# ## Write CSV to file
df.to_csv('../resources/data.csv', index=False)
df = pd.read_csv('../resources/data.csv')
df.index = pd.to_datetime(df['datetime'])
# ## Plot data
# +
# plot data
fig, ax = plt.subplots(figsize=(20, 10))
df["value"].plot(ax=ax)
# format axis
ax.set(xlabel="Date", ylabel="Demand (MW)", title="Daily Demand")
# format x axis
# ax.xaxis.set_major_locator(mdates.DayLocator(interval=10))
# ax.xaxis.set_major_formatter(DateFormatter("%m-%d"))
# format y axis to show MW
# https://matplotlib.org/stable/gallery/ticks/tick-formatters.html
ax.yaxis.set_major_formatter(lambda x, pos: str(x / 1000.0))
ax.grid()
# -
# # Fourier transform
# Code I had was done using pytorch
#
# - using a real valued fast fourier transform from tensorflow
# - https://www.tensorflow.org/api_docs/python/tf/signal/rfft?hl=en
# ```python
# # fft = tf.signal.rfft(df['mean_sale_eur'])
# var_of_interest = "sum_sale_eur"
# fft = torch.fft.rfft(torch.Tensor(df[var_of_interest]))
# freqs_per_dataset = np.arange(0, len(fft))
#
# n_samples_h = len(df[var_of_interest])
# hours_per_year = 24 * 365.2524
# years_per_dataset = n_samples_h / (hours_per_year)
#
# f_per_year = freqs_per_dataset / years_per_dataset
#
# fig, ax = plt.subplots(figsize=(20, 10))
#
#
# # ax.bar(f_per_year, np.abs(fft), align="center")
# ax.step(f_per_year, np.abs(fft), where='pre')
# ax.set_xscale("log")
# # ax.set_yscale("log")
# # ax.set_ylim(0, 10000)
# # ax.set_xlim([0.1, max(plt.xlim())])
# # ax.set_xticks([1, 31, 45, 180, 365.2524])
# labels = [1, 31, 45, 180, 365.2524]
# ax.vlines(labels, *ax.get_ylim(), "g", label=labels)
# for label in labels:
# ax.text(
# x=label*1.05,
# y=ax.get_ylim()[1]*0.5,
# s=f"{label} days",
# size=20,
# rotation=90
# )
# # ax.set_xticklabels(
# # labels=["1 day", "month", "45 days", "season", "year"], rotation=90, size=20
# # )
#
# ax.set_xticklabels(labels=ax.get_xticks(), rotation=90, size=20)
#
# ax.set_xlabel("Frequency (log scale)", size=20)
# ax.set_yticklabels(ax.get_yticks(), size=20)
# # fig.savefig("fft_sales.pdf", bbox_inches="tight");
#
# ```
from scipy import fft
import numpy as np
# https://stackoverflow.com/questions/6363154/what-is-the-difference-between-numpy-fft-and-scipy-fftpack
# As per the documentation, fftpack submodule is now considered legacy, new code should use :mod:`scipy.fft`.
df[['value']].shape
# +
# I need to confirm this!!
# compute the 1D fast fourier transform
fft_values = fft.fft(df["value"].values)
# -
pd.to_datetime(df.datetime).describe(datetime_is_numeric=True)
# +
# compute human-readable frequencies
freqs_in_fft = np.arange(0, len(fft_values))
n_samples = len(df['value'])
# divide the time in proportional units
# we have 34 days, sampled with 10 minute resolution
# we want to express this time in days of a year
# map data to a single year
# a day has 24 hours
# each hour has 6 10-minute spans
# a year has 365.2524 such days
t_units_per_year = (24 * 6 * 365.2524) # number of 10-minute spans in a year
# how many years are being expressed currently in our dataset?
# result should be similar to 34/365.2524
years_per_dataset = n_samples / (t_units_per_year)
# how many frequencies can be allocated in a single year?
freqs_per_year = freqs_in_fft / years_per_dataset
# plot results
fig, ax = plt.subplots(figsize=(20, 10))
ax.step(freqs_per_year, np.abs(fft_values), where='pre')
# format
ax.set_xscale("log")
ax.set_yscale("log")
# tick labels
# plt.xticks([1, 365.2524, 365.2524 * 24, 365.2524 * 24 * 6], labels=['1/Year', '1/day', '1/hour', '1/10min'])
ticks = [1, 365.2524, 365.2524 * 24, 365.2524 * 24 * 6]
labels=['1/Year', '1/day', '1/hour', '1/10min']
ax.vlines(ticks, *ax.get_ylim(), "g", label=labels, alpha=0.2, linewidth=10)
for tick, label in zip(ticks, labels):
ax.text(
x=tick*0.75,
y=ax.get_ylim()[1]*0.075,
s=label,
size=20,
rotation=90
)
ax.set_xlabel("Frequency (log scale)", size=20)
ax.set_ylabel("Amplitude (log scale)", size=20)
ax.grid()
# -
# ### What is happening in the plot?
#
# - Frequency is expressed in Hertz [Hz]; a Hertz is expressed in s^-1, where s is the SI unit for a second.
# - We want to map each frequency to a unit of time. What unit of time? It depends on what data we have.
# - If we want to map each frequency to, say, a day, we have to rescale the frequencies we get from the Fourier analysis
#
# Here's how it is done
# - Since our original data has a 10-minute resolution, spanning ~34 days, we want to map these slots as fractions of a year. In other words, we have 0, 1, 2, ... len(df) samples and we want to rescale these as if they were part of a single year. Thus
# - A year has 365.2524 days, each day has 24 hours, and each hour has 6 slots of 10 minutes. Thus each sample in our dataset is rescaled by a factor of R=len(df)/(24*6*365.2524)
# - Now, we want to map each frequency in the Fourier analysis to this scale, so we divide each frequency by this factor: (1/s)/R
# - Now each frequency in the fourier analysis is mapped to 10-minute slots of a single year
# - Example: since a year is 365.2524 days, now the frequency axis expresses 1/day at that point.
# - Since a year is 365.2524 * 24 hours, 1/hour marks that tick
#
# From the plot we observe that the most relevant frequencies are 1/day, 1/10min. In other words, the sinusoids with these frequencies are more relevant to represent the original signal.
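The rescaling described above can be sanity-checked on a synthetic signal. This is a self-contained sketch (the sampling parameters mirror this dataset: 10-minute resolution over ~34 days): a sine wave with a one-day period should produce a spectral peak at ~365.25 on the cycles-per-year axis.

```python
import numpy as np

samples_per_day = 24 * 6                      # 10-minute resolution
n_demo = 34 * samples_per_day                 # ~34 days of samples
t = np.arange(n_demo)

# synthetic signal with a period of exactly one day
demo_signal = np.sin(2 * np.pi * t / samples_per_day)

# same rescaling as above: divide FFT bin indices by the dataset length in years
demo_fft = np.fft.rfft(demo_signal)
demo_years_per_dataset = n_demo / (samples_per_day * 365.2524)
demo_freqs_per_year = np.arange(len(demo_fft)) / demo_years_per_dataset

# the dominant frequency should sit on the 1/day tick
peak_freq = demo_freqs_per_year[np.argmax(np.abs(demo_fft))]
```

The peak lands on the 1/day mark, confirming that the rescaling maps FFT bin indices to cycles per year.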
# ## Another example
# +
# on a per-day basis
# compute human-readable frequencies
freqs_in_fft = np.arange(0, len(fft_values))
n_samples = len(df['value'])
# map data to a single day
# a day has 24 hours
# each hour has 6 10-minute spans
t_units_per_day = (24 * 6) # number of 10-minute spans in a day
# how many days are being expressed currently in our dataset?
# result should be similar to 34
days_per_dataset = n_samples / (t_units_per_day)
# how many frequencies can be allocated in a single day?
freqs_per_day = freqs_in_fft / days_per_dataset
# plot results
fig, ax = plt.subplots(figsize=(20, 10))
ax.step(freqs_per_day, np.abs(fft_values), where='pre')
# format
ax.set_xscale("log")
ax.set_yscale("log")
# tick labels
plt.xticks([1/365.2524, 1, 24], labels=['1/year', '1/day', '1/hour'])
# labels = [1, 5, 10, 25, 30]
# ax.vlines(labels, *ax.get_ylim(), "g", label=labels, alpha=0.4)
# for label in labels:
# ax.text(
# x=label*0.8,
# y=ax.get_ylim()[1]*0.01,
# s=f"{int(label):d} days",
# size=20,
# rotation=90
# )
ax.set_xlabel("Frequency (log scale)", size=20)
ax.set_ylabel("Amplitude (log scale)", size=20)
ax.grid()
# -
# ## two plots in one figure
# +
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 15))
# first plot
df["value"].plot(ax=ax1)
# format axis
ax1.set_xlabel("Time [Days]", fontsize=15)
ax1.set_ylabel("Amplitude", fontsize=15)
ax1.set_title("Time domain", fontsize=17)
# format y axis to show MW
# https://matplotlib.org/stable/gallery/ticks/tick-formatters.html
ax1.yaxis.set_major_formatter(lambda x, pos: str(x / 1000.0))
ax1.grid()
# second plot
ax2.plot(fft_values, color="green")
ax2.set_yscale("log")
ax2.set_xscale("log")
ax2.set_xlabel("frequency [Hz]", fontsize=15)
ax2.set_ylabel("Log(Amplitude)", fontsize=15)
ax2.set_title("Frequency domain", fontsize=17)
ax2.grid()
fig.suptitle("Daily Aggregated Demand [MW]", fontsize=20)
fig.tight_layout(h_pad=2)
plt.subplots_adjust(top=0.935)
# -
|
notebooks/request.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab inline
# %config InlineBackend.figure_format = 'retina'
def polyint(xs, ys):
N = len(xs)
def f(x):
estimates = ys[:]
for order in range(1, N):
new_estimates = zeros(N-order)
for i in range(N-order):
new_estimates[i] = (x-xs[i+order])/(xs[i]-xs[i+order])*estimates[i] \
+ (x - xs[i])/(xs[i+order]-xs[i])*estimates[i+1]
estimates = new_estimates
return estimates[0]
return f
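As a quick numeric check of the recursion above (a standalone sketch restating the scheme with an explicit NumPy import instead of the %pylab namespace): a degree-2 interpolant built from three points of a quadratic must reproduce it exactly.

```python
import numpy as np

def neville(xs, ys):
    # same recursion as polyint above: combine neighbouring estimates
    N = len(xs)
    def g(x):
        estimates = list(ys)
        for order in range(1, N):
            new_estimates = np.zeros(N - order)
            for i in range(N - order):
                new_estimates[i] = ((x - xs[i + order]) / (xs[i] - xs[i + order]) * estimates[i]
                                    + (x - xs[i]) / (xs[i + order] - xs[i]) * estimates[i + 1])
            estimates = new_estimates
        return estimates[0]
    return g

def quad(x):
    return (x - 2) * (x - 2)

pts = [-1.0, 0.5, 3.0]
g = neville(pts, [quad(p) for p in pts])
# exact up to floating-point error at every test point
errs = [abs(g(x) - quad(x)) for x in np.linspace(-3, 3, 7)]
```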
def f_exact(x):
return (x-2)*(x-2)
x, y, z = randn(3)
f = polyint([x,y,z], [f_exact(x), f_exact(y), f_exact(z)])
xs = linspace(-3, 3, 100)
plot(xs, [f(x) for x in xs])
plot(xs, f_exact(xs))
|
Interpolation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:ddc_env]
# language: python
# name: conda-env-ddc_env-py
# ---
# +
# %load_ext autoreload
# %autoreload 2
import numpy as np
import rdkit
from rdkit import Chem
import pandas as pd
import h5py, ast, pickle
# Occupy a GPU for the model to be loaded
# %env CUDA_DEVICE_ORDER=PCI_BUS_ID
# GPU ID, if occupied change to an available GPU ID listed under !nvidia-smi
# %env CUDA_VISIBLE_DEVICES=2
from ddc_pub import ddc_v3 as ddc
# -
# # Load model
# Import existing (trained) model
# Ignore any warning(s) about training configuration or non-seriazable keyword arguments
model_name = "./models/opd_fp_complete" # complete model
# model_name = "./models/opd_fp_tl" # retrain model
model = ddc.DDC(model_name=model_name)
# # Load data from dataset
data = pd.read_csv('./datasets/OPD_Data/FP_C_TL_Seeds.csv')['smiles'].tolist()
# # Alternatively, use your own SMILES
# +
# Input SMILES to auto-encode
smiles_in = data
# MUST convert SMILES to binary mols for the model to accept them (it re-converts them to SMILES internally)
mols_in = [Chem.rdchem.Mol.ToBinary(Chem.MolFromSmiles(smiles)) for smiles in smiles_in]
# -
# Encode the binary mols into their latent representations
latent = model.transform(model.vectorize(mols_in))
# Convert back to SMILES
smiles_out = []
nll_out = []
for lat in latent:
smiles, nll = model.predict(lat, temp=0)
smiles_out.append(smiles)
nll_out.append(nll)
# To compare the results, convert smiles_out to CANONICAL
for idx, smiles in enumerate(smiles_out):
mol = Chem.MolFromSmiles(smiles)
if mol:
smiles_out[idx] = Chem.MolToSmiles(mol, canonical=True)
else:
smiles_out[idx] = "INVALID"
|
OPM_Fingerprint_Sampling.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basic Programming
# - Enter all code using an English input method
print('hello world')
# ## Writing a simple program
# - Circle area formula: area = radius \* radius \* 3.1415
radius=2
radius='100'
#area = radius * radius * 3.1415
#print(area)
print(radius)
# ### In Python there is no need to declare data types
# ## Reading console input
# - input reads a string
# - eval
radius = int(input('Enter a radius'))
area = radius * radius * 3.1415
print(area)
radius = eval(input('Enter a side length'))
area = radius * radius
print(area)
radius1 = int(input('Enter a length'))
radius2 = int(input('Enter a width'))
area = radius1 * radius2
print(area)
import os
input_ = input('Today')
os.system('')
# - In Jupyter, press Shift + Tab to pop up the documentation
# ## Variable naming rules
# - Composed of letters, digits, and underscores
# - Cannot start with a digit \*
# - Identifiers cannot be keywords (this can technically be overridden, but it is highly inappropriate for code style)
# - Can be of any length
# - camelCase naming
# ## Variables, assignment statements, and assignment expressions
# - Variable: informally, a quantity that can change
# - x = 2 \* x + 1 is an equation in mathematics, but in a programming language it is an assignment expression
# - test = test + 1 \* a variable must already have a value before it is used in an assignment
# ## Simultaneous assignment
# var1, var2, var3... = exp1, exp2, exp3...
a,b,c = '10','20','30'
print(a,b,c)
height, width = eval(input('>>'))
area = height * width
print(area)
# ## Defining constants
# - Constant: an identifier for a fixed value, suited to values that are used many times, e.g. PI
# - Note: in other lower-level languages a defined constant cannot be changed, but in Python everything is an object, so even constants can be reassigned
# ## Numeric data types and operators
# - Python has two numeric types (int and float) supporting addition, subtraction, multiplication, division, modulo, and exponentiation
# <img src = "../Photo/01.jpg"></img>
# ## Operators / (division), // (floor division), ** (power)
# ## Operator % (modulo)
# ## EP:
# - What is 25/4? How should it be rewritten to obtain an integer?
# - Read a number and determine whether it is odd or even
# - Advanced: read a number of seconds and convert it to minutes and seconds, e.g. 500 seconds equals 8 minutes 20 seconds
# - Advanced: if today is Saturday, what day of the week will it be 10 days from now? Hint: day 0 of each week is Sunday
int(25/4)
shu = eval(input('>>'))
if shu % 2 == 0:
    print(str(shu) + ' is even')
else:
    print(str(shu) + ' is odd')
second = eval(input('>>'))
a = second // 60
b = second % 60
print(str(a) + ' minutes ' + str(b) + ' seconds')
week = eval(input('>>'))
res = (week + 10) % 7
print('10 days from now it will be day ' + str(res) + ' of the week')
# ## 科学计数法
# - 1.234e+2
# - 1.234e-2
1.234e+2
# ## 计算表达式和运算优先级
# <img src = "../Photo/02.png"></img>
# <img src = "../Photo/03.png"></img>
# written in separate steps
x = 10
y = 6
a = 0
b = 1
c = 1
sss = (3+4*x)/5
sss1 = 10*(y-5)*(a+b+c)
sss2 = 9*(4/x+(9+x)/y)
ss = sss - sss1 + sss2
print(ss)
# ## Augmented assignment operators
# <img src = "../Photo/04.png"></img>
# ## Type conversion
# - float -> int
# - rounding: round
round(3.5) # the first argument is the number, the second is the number of decimal places
round
# ## EP:
# - If the annual business tax rate is 0.06%, how much tax is due on an annual income of 197.55e+2? (keep 2 decimal places in the result)
# - Scientific notation must be used
a = (197.55e+2) * (6e-4)
round(a,2)
# # Project
# - Write a loan calculator program in Python: the input is the monthly payment (monthlyPayment) and the output is the total repayment (totalpayment)
# 
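The project statement above is an image that did not survive extraction, so the exact inputs are an assumption here. A minimal sketch, assuming the program reads the monthly payment and the number of years and reports the total repaid:

```python
def total_payment(monthly_payment, years):
    # assumption: total repaid = monthly payment x 12 months x number of years
    return monthly_payment * 12 * years

total = total_payment(1000, 5)
```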
# # Homework
# - 1
# <img src="../Photo/06.png"></img>
Celsius = eval(input('Enter a degree in Celsius:'))
fahrenheit = (9 / 5) * Celsius + 32
print(str(Celsius)+' Celsius is '+str(fahrenheit)+' Fahrenheit')
# - 2
# <img src="../Photo/07.png"></img>
import math
radius,length = eval(input('Enter the radius and length of a cylinder :'))
area = radius * radius * math.pi
volume = area * length
print('The area is '+ str(area))
print('The volume is '+ str(volume))
# - 3
# <img src="../Photo/08.png"></img>
feet = eval(input('Enter a value for feet :'))
meters = feet * 0.305
print(str(feet) + ' feet is ' + str(meters) + ' meters')
# - 4
# <img src="../Photo/10.png"></img>
fil = eval(input('Enter the amount of water in kilograms :'))
ini = eval(input('Enter the initial temperature :'))
fin = eval(input('Enter the final temperature:'))
Q = fil * (fin - ini) * 4184
print('The energy needed is ' + str(Q))
# - 5
# <img src="../Photo/11.png"></img>
balance, rate = eval(input('Enter balance and interest rate (e.g., 3 for 3%):'))
interest = balance * (rate / 1200)
print('The interest is ' + str(interest))
# - 6
# <img src="../Photo/12.png"></img>
v0,v1,t = eval(input('Enter v0,v1, and t :'))
a = (v1 - v0) / t
print('The average acceleration is ' + str(a))
# - 7 进阶
# <img src="../Photo/13.png"></img>
b = 1 + 0.00417
money = eval(input('Enter the monthly saving amount :'))
a = money * b
a1 = (money + a) * b
a2 = (money + a1) * b
a3 = (money + a2)* b
a4 = (money + a3) * b
a5 = (money + a4) *b
# aaa = a + a1 + a2 + a3 + a4 + a5
print('After the sixth month,the account value is '+ str(a5))
# - 8 进阶
# <img src="../Photo/14.png"></img>
number = eval(input('Enter a number between 0 and 1000 :'))
a = number % 10
b = number // 10
c = b % 10
d = b // 10
e = a + c + d
# print(a)
# print(b)
# print(c)
# print(d)
print('The sum of the digits is ' + str(e))
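The digit extraction above is hard-coded for numbers below 1000. A generic version (a sketch, not part of the original exercise) handles any non-negative integer:

```python
def digit_sum(n):
    # repeatedly strip the last decimal digit and accumulate it
    total = 0
    while n > 0:
        n, digit = divmod(n, 10)
        total += digit
    return total

result = digit_sum(932)  # 9 + 3 + 2
```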
|
9.10.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
from dotenv import load_dotenv
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from airtable import Airtable
load_dotenv()
# Loading in Airtable
API_KEY = os.getenv('AIRTABLE_API_KEY')
BASE_KEY = os.getenv('AIRTABLE_BASE_KEY')
table_name = 'Directory'
airtable = Airtable(BASE_KEY, table_name, API_KEY)
# Get 2 records
records = airtable.get_all(maxRecords=2)
df = pd.DataFrame.from_records((r['fields'] for r in records))
df.head()
# -
|
notebook/main.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.7.0
# language: julia
# name: julia-1.7
# ---
# # Goods Recommendation by the Thompson Sampling-based AI engine
# +
## Declare Packages to Use
using Distributions
using Formatting
using Plots
using Zygote
using Random
## Basic Functions
function cost_ts(S_W, S_B, F_W, F_B, means, mx)
S = S_W * mx + S_B
F = F_W * mx + F_B
probs = rand.(Normal.(S, abs.(F)))
p_arm = argmax(probs)
rand(Uniform())<means[p_arm] ? (1-probs[p_arm])^2 : (0 -probs[p_arm])^2
end
# -
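For comparison with the gradient-based variant above, here is a sketch of classic Gaussian Thompson sampling, written in Python for reference; the arm means mirror the Julia setup, while the unit observation noise and the standard-normal prior are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
means = [0.3, 0.7, 0.5]           # true Bernoulli reward probabilities per arm
n_arms, n_rounds = len(means), 2000

# Gaussian posterior per arm: N(mu_i, 1 / precision_i), unit observation noise
precision = np.ones(n_arms)       # standard-normal prior N(0, 1)
reward_sum = np.zeros(n_arms)
counts = np.zeros(n_arms, dtype=int)

for _ in range(n_rounds):
    mu = reward_sum / precision
    theta = rng.normal(mu, 1.0 / np.sqrt(precision))  # sample one belief per arm
    arm = int(np.argmax(theta))                       # play the optimistic arm
    reward = float(rng.random() < means[arm])         # Bernoulli reward
    precision[arm] += 1.0
    reward_sum[arm] += reward
    counts[arm] += 1
```

With enough rounds the play counts concentrate on the best arm (index 1, mean 0.7), which is the behaviour the training curves above are approximating.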
# ## Original code
## Training
function train(
N = 3,
means_l = [[0.3, 0.7, 0.5], [0.3, 0.7, 0.5]],
Nepoch = 100)
S_W = zeros(Float64, N)
S_B = zeros(Float64, N)
F_W = zeros(Float64, N)
F_B = zeros(Float64, N)
S = zeros(Float64, N)
F = zeros(Float64, N)
S_list = zeros(Float64, Nepoch+1, N)
S_list[1,:] = S
F_abs_list = zeros(Float64, Nepoch+1, N)
F_abs_list[1,:] = abs.(F)
mx = 1
μ = 0.01
for epoch in range(1, Nepoch)
G_all = gradient((S_W, S_B, F_W, F_B) -> cost_ts(S_W, S_B, F_W, F_B, means_l[mx],mx), S_W, S_B, F_W, F_B)
S_W -= μ * G_all[1]
S_B -= μ * G_all[2]
F_W -= μ * G_all[3]
F_B -= μ * G_all[4]
S = S_W * mx + S_B
F = F_W * mx + F_B
S_list[epoch+1,:] = S
F_abs_list[epoch+1,:] = abs.(F)
end
return S_list, F_abs_list
end
N = 3
means_l = [[0.3, 0.7, 0.5], [0.3, 0.7, 0.5]]
Nepoch = 2000
S_list, F_abs_list = train(N, means_l, Nepoch);
# +
## Result Plotting
println(means_l)
p1 = plot(range(1,Nepoch+1), S_list, ylabel="mean",
title = "Gaussian Thompson Sampling, Zygote@Julia", legend=:topleft,
label = ["Arm 1" "Arm 2" "Arm 3"])
p2 = plot(range(1,Nepoch+1), F_abs_list, ylabel="variance",
label = ["Arm 1" "Arm 2" "Arm 3"])
h = plot(p1, p2, xlabel="Epoch", layout = (2,1))
display(h)
lo, hi = 0., 1.
x = range(lo, hi; length = 100)
Y = []
for i in range(1,N)
y = pdf.(Normal(S_list[end,i],abs.(F_abs_list[end,i])),x)
if i == 1
Y = y
else
Y = [Y y]
end
end
h = plot(x, Y, xlabel = "Prob", ylabel = "Bins",
    title = "Final Gaussian Distribution of Each Arm",
label = ["Arm 1" "Arm 2" "Arm 3"])
display(h)
# -
# ## Test Codes
mx = 2
means = means_l[mx]
println(size(means))
println(cost_ts(S_W,S_B,F_W,F_B,means_l[mx],mx))
gradient((S_W, S_B, F_W, F_B) -> cost_ts(S_W, S_B, F_W, F_B, means_l[mx],mx), S_W, S_B, F_W, F_B)
zeros(Int, 2, 2, 2) == zeros(Int, (2, 2, 2))
zeros(Int, (2, 2))
|
all_repository/julia_lab/recommender/repository/ts_normal_wb-Copy5.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Building of French ACC/UAC control sectors tables
# #### Air traffic control sectors data can be found on the [SIA website](https://www.sia.aviation-civile.gouv.fr) in the eAIP section (ENR 3.8)
url = "https://www.sia.aviation-civile.gouv.fr/dvd/eAIP_16_AUG_2018/FRANCE/AIRAC-2018-08-16/html/eAIP/FR-ENR-3.8-fr-FR.html#ENR-3.8"
import requests
from bs4 import BeautifulSoup
soup = BeautifulSoup(requests.get(url).content, "lxml")
list_tables = soup.find_all('table')
# ##### Get the coordinates of airspace volumes (sixth table)
# +
import geopandas as gpd
from shapely.geometry import Point, Polygon, MultiPolygon
from shapely.ops import nearest_points
from shapely.geometry import MultiPoint
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
france = world[world.name == "France"].copy()
france.geometry = france.geometry.intersection(Polygon([(-10,41),(-10,52),(10,52),(10,41)]))
poly = france.geometry.iloc[0][1]
mp = MultiPoint(poly.exterior.coords)
fr = list(mp.geoms)
def get_points_between(before_point, after_point):
b_n = nearest_points(mp, before_point)[0]
a_n = nearest_points(mp, after_point)[0]
tlist = [point for point in fr
if min(fr.index(b_n), fr.index(a_n)) <= fr.index(point) <= max(fr.index(b_n), fr.index(a_n))]
clist = [point for point in fr if point not in tlist]
tlist = clist if len(tlist) > len(clist) else tlist
return tlist if fr.index(b_n) <= fr.index(a_n) else tlist[::-1]
def lat_conv(slat):
val = round(float(slat[0:2]) + float(slat[3:5])/60 + float(slat[6:8])/3600, 3)
return val if slat[9] == 'N' else -val
def lon_conv(slon):
val = round(float(slon[1:3]) + float(slon[4:6])/60 + float(slon[7:9])/3600, 3)
return val if slon[10] == 'E' else -val
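The two converters above expect fixed-width degree/minute/second strings. A quick illustration with space-separated stand-ins (the exact separator characters used in the eAIP pages may differ; the functions are restated so the example is self-contained):

```python
def lat_conv(slat):
    # degrees at [0:2], minutes at [3:5], seconds at [6:8], hemisphere at [9]
    val = round(float(slat[0:2]) + float(slat[3:5]) / 60 + float(slat[6:8]) / 3600, 3)
    return val if slat[9] == 'N' else -val

def lon_conv(slon):
    # same layout shifted by one leading character, hemisphere at [10]
    val = round(float(slon[1:3]) + float(slon[4:6]) / 60 + float(slon[7:9]) / 3600, 3)
    return val if slon[10] == 'E' else -val

lat = lat_conv("48 51 24 N")   # 48 + 51/60 + 24/3600
lon = lon_conv(" 02 21 07 E")  # 2 + 21/60 + 7/3600
```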
# +
from collections import defaultdict
cdict = defaultdict(str)
vol_es, es_acc, upper, lower = ({} for i in range(4))
vol, es = ('', '')
latest_lat, latest_long, current_lat = (0.0, 0.0, 0.0)
boundary_required = False
for tag in list_tables[5].find_all('span'):
if tag.has_attr('id'):
if 'NOM_USUEL' in tag['id']:
acc = tag.text
elif 'AIRSPACE.TXT_NAME' in tag['id']:
es = tag.text
es_acc[es] = acc
vol = es
vol_es[vol] = es
elif 'DIST_VER_UPPER' in tag['id']:
upper[vol] = tag.text
elif 'DIST_VER_LOWER' in tag['id']:
lower[vol] = tag.text
elif 'GEO_LAT' in tag['id']:
if tag.text[0].isdigit():
lat = lat_conv(tag.text)
if boundary_required:
current_lat = lat
else:
latest_lat = lat
elif 'GEO_LONG' in tag['id']:
if tag.text[0].isdigit():
lon = lon_conv(tag.text)
if boundary_required:
ch = get_points_between(Point(latest_long, latest_lat), Point(lon, current_lat))
for pt in ch:
cdict[vol] += str(pt.y) + ";" + str(pt.x) + ","
cdict[vol] += str(current_lat) + ";" + str(lon) + ","
boundary_required = False
else:
latest_long = lon
cdict[vol] += str(latest_lat) + ";" + str(lon) + ","
elif 'GEO_BORDER.NOM' in tag['id']:
boundary_required = True
elif 'AIRSPACE_BORDER.NOM_PARTIE' in tag['id']:
if (len(tag.text) == 1) and (tag.text[0].isdigit()):
vol_es.pop(vol)
vol += " " + tag.text[0]
vol_es[vol] = es
else: # second column
if (len(tag.text) == 1) and (tag.text[0].isdigit()):
vol_es.pop(vol)
vol += " " + tag.text[0]
vol_es[vol] = es
fdict = defaultdict(list)
for key, value in cdict.items():
for couple in value.split(","):
if len(couple) > 0:
fdict[key].append((float(couple.split(";")[1]), float(couple.split(";")[0])))
city_acc_map = {'BORDEAUX':'LFBB', 'BREST':'LFRR', 'MARSEILLE':'LFMM', 'PARIS':'LFFF', 'REIMS':'LFEE'}
es_acc = {key: city_acc_map[value] for key,value in es_acc.items()}
fdict = {key: Polygon(value) for key, value in fdict.items()}
fdict['P1 1']
# -
# ##### Building the GeoDataFrame
import pandas as pd
df_v = pd.DataFrame({'volume': [*fdict]}, dtype=str)
df_v['elementary_sector'] = df_v['volume'].map(vol_es)
df_v['acc'] = df_v['elementary_sector'].map(es_acc)
df_v['level_min'] = df_v['volume'].map(lower)
df_v['level_max'] = df_v['volume'].map(upper)
df_v['geometry'] = df_v['volume'].map(fdict)
f = lambda x: 0 if x == 'SFC' else 999 if x == 'UNL' else int(x[3:]) # to be modified with real SFC/UNL values
df_v[['level_min','level_max']] = df_v[['level_min','level_max']].applymap(f)
gdf_es = gpd.GeoDataFrame(df_v, geometry='geometry')
gdf_es.to_file('volumes.geojson', driver='GeoJSON')
# ##### Get the composition of collapsed sectors (first five tables)
def table_to_df(table, name):
vol_dict = defaultdict(list)
es_dict = defaultdict(set)
cs = ""
for tag in table.find_all('span'):
if tag.has_attr('class'):
cs = tag.text
else:
vol_dict[cs].append(tag.text)
es_dict[cs].add(vol_es[tag.text])
es_dict = {key: list(value) for key, value in es_dict.items()}
cs = [*vol_dict]
df = pd.DataFrame({'control_sector': cs, 'acc':[name for i in range(len(cs))]})
df['volumes'] = df['control_sector'].map(vol_dict)
df['elementary_sectors'] = df['control_sector'].map(es_dict)
return df
# ##### Building the control sectors dataframe
list_acc = ['LFBB', 'LFRR', 'LFMM', 'LFFF', 'LFEE']
lfbb, lfrr, lfmm, lfff, lfee = (table_to_df(list_tables[i], list_acc[i]) for i in range(5))
acc = pd.concat([lfbb, lfrr, lfmm, lfff, lfee])
acc.sample(3)
acc.to_csv('sectors.csv', index=False)
|
tables_building.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:CloudDetection]
# language: python
# name: conda-env-CloudDetection-py
# ---
# +
import sys
import subprocess
import os
import datetime
import pandas as pd
import seaborn as sns
import gc
from keras import backend as K  # needed below by K.clear_session()
from PIL import Image
Image.MAX_IMAGE_PIXELS = 1000000000
from matplotlib import pyplot as plt
'''
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True # dynamically grow the memory used on the GPU
config.log_device_placement = True # to log device placement (on which device the operation ran)
# (nothing gets printed in Jupyter, only if you run it standalone)
sess = tf.Session(config=config)
set_session(sess) # set this TensorFlow session as the default session for Keras
'''
sys.path.insert(0, '../')
# %load_ext autoreload
# %autoreload 2
from src.models.params import get_params
from src.models.model_utils import evaluate_test_set, write_csv_files
from src.models.Unet import Unet
from src.visualization.make_image_files import visualize_test_data, visualize_landsat8_tile
from src.visualization.visualization_utils import get_predicted_thumbnails
from src.utils import get_model_name
# #%env CUDA_DEVICE_ORDER=PCI_BUS_ID
# #%env CUDA_VISIBLE_DEVICES=0
# -
# # Define and run the training loops
# +
activation_functions = ['relu', 'swish']
#loss_functions = ['binary_crossentropy', 'jaccard_coef_loss']
L2reg = [0, 1e-2]
#optimizers = ['Adam', 'Nadam']
dropout_vals = [0, 0.5]
epochs = 2
num_gpus = 2
# Train the models
for activation_func in activation_functions:
for l2 in L2reg:
#for optimizer in optimizers:
for dropout in dropout_vals:
params = get_params('U-net', 'Landsat8')
params.modelID = datetime.datetime.now().strftime("%y%m%d%H%M%S")
params.L2reg = l2
params.optimizer = 'Adam'
params.activation_func = activation_func
params.dropout = dropout
params.epochs = epochs
params.brightness_augmentation = False
params.batch_size = 16
params.learning_rate = 1e-2
cmd = "/home/jhj/phd/GitProjects/SentinelSemanticSegmentation/SentinelSemanticSegmentation.py" + \
" --train" + \
" --params=L2reg=" + str(l2) + \
" --params=dropout=" + str(dropout) + \
" --params=activation_func=" + activation_func
# !/home/jhj/anaconda3/envs/CloudDetection/bin/python {cmd}
# Try inserting this in a separate python script, and then run that script from here:
# https://stackoverflow.com/questions/28126809/ipython-notebook-output-from-child-process
# *Use subprocess.check_output
#os.system("/home/jhj/anaconda3/envs/CloudDetection/bin/python /home/jhj/phd/GitProjects/SentinelSemanticSegmentation/SentinelSemanticSegmentation.py --train")
#subprocess.call('/home/jhj/anaconda3/envs/CloudDetection/bin/python /home/jhj/phd/GitProjects/SentinelSemanticSegmentation/SentinelSemanticSegmentation.py --train', shell=True)
# print('Done')
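The three nested loops above enumerate every hyperparameter combination; the same grid can be flattened with itertools.product (a sketch of the enumeration only, without the training command):

```python
from itertools import product

activation_functions = ['relu', 'swish']
L2reg = [0, 1e-2]
dropout_vals = [0, 0.5]

# one tuple per training run: 2 x 2 x 2 = 8 combinations
grid = list(product(activation_functions, L2reg, dropout_vals))
```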
# +
activation_functions = ['relu', 'swish']
#loss_functions = ['binary_crossentropy', 'jaccard_coef_loss']
L2reg = [0, 1e-2]
#optimizers = ['Adam', 'Nadam']
dropout_vals = [0, 0.5]
epochs = 2
num_gpus = 2
# Train the models
for activation_func in activation_functions:
for l2 in L2reg:
#for optimizer in optimizers:
for dropout in dropout_vals:
params = get_params('U-net', 'Landsat8')
params.modelID = datetime.datetime.now().strftime("%y%m%d%H%M%S")
#params.loss_func = loss_function
params.L2reg = l2
params.optimizer = 'Adam'
params.activation_func = activation_func
params.dropout = dropout
params.epochs = epochs
params.brightness_augmentation = False
params.batch_size = 16
params.learning_rate = 1e-2
print('###########################################')
print('# activation_func: ' + activation_func)
#print('# loss_function: ' + loss_function)
#print('# optimizer: ' + optimizer)
print('###########################################')
model = Unet()
model.train(num_gpus, params)
del model
K.clear_session()
gc.collect()
model = Unet()
evaluate_test_set(model, num_gpus, params)
del model
K.clear_session()
gc.collect()
# +
classes = [['thin', 'cloud']]
epochs = 50
num_gpus = 1
# Train the models
for cls in classes:
print('###########################################')
print('# class: ' + str(cls))
print('###########################################')
params = get_params('U-net', 'Landsat8')
params.batch_size = 12
params.modelID = datetime.datetime.now().strftime("%y%m%d%H%M%S")
params.loss_func = 'binary_crossentropy'
params.optimizer = 'Nadam'
params.activation_func = 'swish'
params.epochs = epochs
params.cls = cls
model = Unet()
model.train(num_gpus, params)
# Test different threshold values for the trained model (remember timeID is the same for all thresholds)
    thresholds = [0.1, 0.3, 0.5, 0.7, 0.9]  # assumed candidate cutoffs; no list was defined in the original cell
    for threshold in thresholds:
params.threshold = threshold
avg_jaccard, product_names, product_jaccard = evaluate_test_set(model, num_gpus, params)
write_csv_files(avg_jaccard, product_jaccard, product_names, params)
# -
# # Investigate the trained models
params = get_params('U-net', 'Landsat8')
df = pd.read_csv(params.project_path + 'reports/Unet/param_optimization.csv' )
#df.sort_values('entire_testset', ascending=False).head(40)
#df.loc[df['modelID'] == 180201172308]
df.loc[df['cls'] == 'shadow']
# # Plot predictions from the desired model
# +
modelID = '180202154744'
num_gpus = 2
K.clear_session()
model = Unet()
params = get_params('U-net', 'Landsat8')
params.add_hparam('modelID', modelID)
params.activation_func = 'swish'
params.cls = ['shadow']
visualize_test_data(model, num_gpus, params)
# +
# Overlay pictures
thresholded = True
transparency = 200
thumbnail_res = 512, 512 # Resolution to be showed
area = (1000, 1000, 7000, 7000) # Area to be cropped in (min_width, min_height, max_width, max_height)
params.threshold = 0.1
model_name = get_model_name(params)
# Plot predictions
files = sorted(os.listdir(params.project_path + 'data/output/')) # os.listdir loads in arbitrary order, hence use sorted()
files = [f for f in files if ('thresholded_Unet_Landsat8_' + modelID) in f] # Filter out one ID for each tile
for i, f in enumerate(files, start=1):
rgb, pred_unet, pred_true = get_predicted_thumbnails(f, thresholded, area, transparency, thumbnail_res, params)
# Plot
plt.figure(figsize=(35, 35))
plt.subplot(1, 3, 1)
plt.title('RGB for tile: ' + f[0:21])
plt.imshow(rgb)
plt.subplot(1, 3, 2)
plt.title('Unet for tile: ' + f[0:21])
plt.imshow(pred_unet)
plt.subplot(1, 3, 3)
plt.title('True for tile: ' + f[0:21])
plt.imshow(pred_true)
plt.show()
# -
|
notebooks/jhj_ParameterOptimizationGPU0.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
import os
os.chdir('../../../')
from musicautobot.numpy_encode import *
from musicautobot.utils.file_processing import process_all, process_file
from musicautobot.config import *
from musicautobot.music_transformer import *
from music21 import *
# +
# Chords
c = stream.Part()
i = instrument.Piano()
i.instrumentName = 'Chords'
c.append(i)
c.append(music21.chord.Chord('A2 E3', type='half')) # vi power
c.append(music21.chord.Chord('C3 G3', type='half')) # I power
# Melody
m = stream.Part()
i = instrument.Piano()
i.instrumentName = 'Melody'
m.append(i)
m.append(note.Note('C4'))
m.append(note.Note('D4'))
m.append(note.Note('E4', type='half'))
s = stream.Score([m, c])
# -
item = MusicItem.from_stream(s, MusicVocab.create())
item.show()
item.stream.plot()
# Tokenized
item.to_text()
# Index encoding
item.data
# Indexes
list(range(len(item.data)))
# Beat encoding
item.position//4
item.transpose(4)
|
squash trainer/notebooks/data_encoding/short_examples/Examples.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#Importando as bibliotecas
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
import matplotlib.pyplot as plt
# importar o arquivo csv em um df
text = open('Clientes Africa do Sul.csv','r').read()
#Remove os conectivos
STOPWORDS.update(["da", "meu", "em", "você", "de", "ao", "os",'a', 'e',
'in','or','and','who','to','x'])
# +
#Define e instancia as dimensões da imagem
wordcloud = WordCloud(max_font_size=100,width = 1520, height = 535).generate(text)
plt.figure(figsize=(16,9))
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
# -
|
WordCloud/.ipynb_checkpoints/word_cloud-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# +
# S3 prefix
bucket = 'sagemaker-getting-start-test'
prefix = 'sagemaker/scikit-optuna'
# Import libraries
from sagemaker import get_execution_role
import boto3, sys, os
import sagemaker
sagemaker_session = sagemaker.Session()
# Get a SageMaker-compatible role used by this Notebook Instance.
role = get_execution_role()
my_region = boto3.session.Session().region_name # set the region of the instance
print("Execution role is " + role)
print("Success - the MySageMakerInstance is in the " + my_region + ".")
# +
s3 = boto3.resource('s3')
try:
if my_region == 'ap-northeast-1':
s3.create_bucket(Bucket=bucket)
else:
s3.create_bucket(Bucket=bucket, CreateBucketConfiguration={'LocationConstraint': my_region})
print('S3 bucket created successfully')
except Exception as e:
print('S3 error: ', e)
# +
import os
# Create directory and upload data to S3
os.makedirs('./data', exist_ok=True)
WORK_DIRECTORY = 'data'
train_input = sagemaker_session.upload_data("{}/boston.csv".format(WORK_DIRECTORY), bucket=bucket, key_prefix="{}/{}".format(prefix, WORK_DIRECTORY))
# +
# train data and save a model
account = sagemaker_session.boto_session.client('sts').get_caller_identity()['Account']
region = sagemaker_session.boto_session.region_name
container_name = 'optuna-sklearn-container'
image_full = '{}.dkr.ecr.{}.amazonaws.com/{}:latest'.format(account, region, container_name)
clf = sagemaker.estimator.Estimator(image_full, role, 1, 'ml.c4.2xlarge',
output_path="s3://{}/{}/output".format(bucket, prefix),
sagemaker_session=sagemaker_session)
params = dict(seconds = 300)
clf.set_hyperparameters(**params)
# training with the gradient boosting classifier model
clf.fit(train_input)
# -
from sagemaker.predictor import csv_serializer
predictor = clf.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge", serializer=csv_serializer)
# load test payload
import numpy as np
import pandas as pd
test_data = pd.read_csv("{}/payload.csv".format(WORK_DIRECTORY), header=None)
test_X = test_data.iloc[:, :-1]
test_y = test_data.iloc[:, [-1]]
print("test_X: {}".format(test_X.shape))
print("test_y: {}".format(test_y.shape))
predictions = predictor.predict(test_X.values).decode('utf-8')
predictions_array = np.fromstring(predictions, sep=' ') # and turn the prediction into an array
print("Predicted values:\n{}".format(predictions_array))
print("test_y values:\n{}".format(test_y.values.ravel()))
clf.delete_endpoint()
|
Scikit-learn_Estimator_Example_With_Optuna-Container.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from skimage.transform import resize
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
import sys
yolo_utils = "../input/yolo-utils/yolo_utils.py"
sys.path.append(os.path.dirname(os.path.expanduser(yolo_utils)))  # sys.path.append returns None, so it must not be nested
yad2k = "../input/obj-localisation-files/dataset and libraries/week3/yad2k"
sys.path.append(os.path.dirname(os.path.expanduser(yad2k)))
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
# -
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
box_scores = box_confidence*box_class_probs
box_classes = K.argmax(box_scores,-1)
box_class_scores = K.max(box_scores,-1)
filtering_mask = box_class_scores>threshold
scores = tf.boolean_mask(box_class_scores,filtering_mask)
boxes = tf.boolean_mask(boxes,filtering_mask)
classes = tf.boolean_mask(box_classes,filtering_mask)
return scores, boxes, classes
def iou(box1, box2):
xi1 = max(box1[0],box2[0])
yi1 = max(box1[1],box2[1])
xi2 = min(box1[2],box2[2])
yi2 = min(box1[3],box2[3])
    inter_area = max(yi2-yi1, 0) * max(xi2-xi1, 0)  # zero when the boxes do not overlap
box1_area = (box1[3]-box1[1])*(box1[2]-box1[0])
box2_area = (box2[3]-box2[1])*(box2[2]-box2[0])
union_area = box1_area+box2_area-inter_area
iou = inter_area/union_area
return iou
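# As a quick sanity check, the IoU logic above can be exercised with plain
# Python tuples (no TensorFlow session needed). `iou_check` below restates the
# same computation, with the intersection clamped at zero for disjoint boxes:

```python
# Minimal re-statement of the iou() logic for a sanity check, using
# (x1, y1, x2, y2) corner boxes and clamping the intersection at zero.
def iou_check(box1, box2):
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

# Two 2x2 boxes overlapping in a 1x1 square: IoU = 1 / (4 + 4 - 1) = 1/7
print(iou_check((0, 0, 2, 2), (1, 1, 3, 3)))  # -> 0.14285714...
# Disjoint boxes: IoU = 0
print(iou_check((0, 0, 1, 1), (2, 2, 3, 3)))  # -> 0.0
```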
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
max_boxes_tensor = K.variable(max_boxes, dtype='int32')
K.get_session().run(tf.variables_initializer([max_boxes_tensor]))
nms_indices = tf.image.non_max_suppression(boxes,scores,max_boxes,iou_threshold)
scores = K.gather(scores,nms_indices)
boxes = K.gather(boxes,nms_indices)
classes = K.gather(classes,nms_indices)
return scores, boxes, classes
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
boxes = yolo_boxes_to_corners(box_xy, box_wh)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = score_threshold)
boxes = scale_boxes(boxes, image_shape)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)
return scores, boxes, classes
scores, boxes, classes = yolo_eval(yolo_outputs)
with tf.Session() as test_b:
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
sess = K.get_session()
class_names = read_classes("../input/obj-localisation-files/dataset and libraries/week3/model_data/coco_classes.txt")
# +
anchors = read_anchors("../input/obj-localisation-files/dataset and libraries/week3/model_data/yolo_anchors.txt")
# -
yolo_model = load_model("../input/obj-localisation-files/dataset and libraries/week3/model_data/yolo.h5")
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
def predict(sess, image_file):
image, image_data = preprocess_image("../input/obj-localisation-files/dataset and libraries/week3/images/" + image_file, model_image_size = (608, 608))
out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict={yolo_model.input: image_data, K.learning_phase(): 0})
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("../", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("../", image_file))
plt.figure(figsize=(7,7))
imshow(output_image)
return out_scores, out_boxes, out_classes
img = plt.imread('../input/obj-localisation-files/dataset and libraries/week3/images/car.jpeg')
image_shape = float(img.shape[0]), float(img.shape[1])
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
out_scores, out_boxes, out_classes = predict(sess, "car.jpeg")
|
object_localisation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt
from numpy.linalg import norm
class Particle():
def __init__(self, t = 0, p = np.zeros(2), v = np.zeros(2), v_max = 10, a_max = 6):
""" Creates a particle
"""
self.t = t
self.p = p
self.v = v
self.v_min, self.v_max = 0, v_max
self.a_min, self.a_max = 0, a_max
self.history = pd.DataFrame(columns=['t','px','py','vx','vy','ax','ay'])
self.vdHistory = pd.DataFrame(columns=['t','vdx','vdy'])
def update(self, a = np.zeros(2), dt = 1):
"""Updates the position and velocity of the particle.
Overwrites the current values.
Stores new values in history
Parameters
__________
a : numpy 2x1 vector, optional
acceleration vector of the particle (default is [0,0])
dt: float, optional
time step (default is 1)
"""
#store in history
history = {'t': self.t,
'px': self.p[0],
'py': self.p[1],
'vx': self.v[0],
'vy': self.v[1],
'ax': a[0],
'ay': a[1]}
self.history = self.history.append(history,ignore_index=True)
#update
self.t += dt
self.p = self.p + self.v * dt + 0.5 * a * dt**2
self.v = self.v + a * dt
return
def P_controller(self, vd = None, k = 1, update=True, dt=1):
""" Determines the acceleration based on a proportional controller.
If update is true, it also performs the update
Parameters
__________
vd : numpy 2x1 vector, optional
desired velocity (default is current velocity vector)
k : float, optional
proportional controller gain (default is 1)
update: boolean
if true, state will be updated (default is True)
dt : float
simulation time step, only needed if update is true (default is 1)
Returns
_______
a : numpy 2x1 vector
acceleration of the particle
"""
if vd is None:
vd = self.v
#store vd
self.vdHistory = self.vdHistory.append({'t':self.t, 'vdx': vd[0],'vdy': vd[1]}, ignore_index= True)
#proportional gain
a = k * (vd - self.v)
if norm(a) > self.a_max:
#cap based on max value
a = (a/norm(a)) * self.a_max
if update:
self.update(a = a, dt = dt)
return a
def target_spot(self, p_target = np.zeros(2), k = 1):
        """ Returns the targeting speed vector using a proportional controller on the position error.
Parameters
__________
p_target: numpy 2x1, optional
target destination (default is origin)
k : float
controller gain (default is 1)
"""
        v_target = k*(p_target - self.p)  # use the p_target argument, not an attribute
if norm(v_target) > self.v_max:
v_target = self.v_max * v_target/norm(v_target)
return v_target
def plot_path(self, fig = None, ax = None, colored = True, *args, **kwargs):
""" Plots the path of a particle
Parameters
_________
colored: bool
if true, the plot will be colored with time.
"""
plt.plot(self.history.px,self.history.py,'k',alpha = 0.4)
if colored:
plt.scatter(self.history.px, self.history.py, c=self.history.t, marker='.',cmap='jet')
# +
pList = [Particle(p = np.random.rand(2), v = 3*np.random.rand(2)) for i in range(5)]
for i in range(100):
for p in pList:
a = p.P_controller(vd = (1/(p.t+0.01)*(-0.25+0.0*np.random.rand(2))), dt = 0.1)
#p.update(a = a, dt = 0.1)
# +
plt.figure()
[p.plot_path(colored=True) for p in pList];
plt.grid()
plt.show()
# +
plt.figure()
for p in pList:
plt.plot(p.history.t, p.history.vx)
plt.plot(p.history.t, p.history.vy)
plt.plot(p.vdHistory.t, p.vdHistory.vdx,':')
plt.plot(p.vdHistory.t, p.vdHistory.vdy,':')
plt.ylim([-3,3]);
plt.xlim([0,10]);
# -
plt.figure()
for p in pList:
plt.plot(p.history.t, p.history.ax)
plt.plot(p.history.t, p.history.ay)
[plt.plot(p.vdHistory.vdx-p.history.vx) for p in pList]
plt.ylim([-1,1])
fig = None
ax = None
# +
from matplotlib.collections import LineCollection  # imported here so this cell runs standalone
if fig is None:
    fig = plt.figure()
if ax is None:
    ax = fig.add_subplot()
points = np.array([p.history.px, p.history.py]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
range_norm = plt.Normalize(p.history.t.min(), p.history.t.max())
# Create a continuous norm to map from data points to colors
lc = LineCollection(segments, cmap='viridis', norm=range_norm)
# Set the values used for colormapping
lc.set_array(p.history.t)
lc.set_linewidth(2)
line = ax.add_collection(lc)
fig.colorbar(line, ax=ax)
#ax.set_xlim(.min(), x.max())
#ax.set_ylim(-1.1, 1.1)
plt.show()
# -
fig
plt.show()
plt.figure()
plt.plot(p.history.t, p.history.px)
plt.plot(p.history.t, p.history.py)
plt.figure()
plt.plot(p.history.px,p.history.py)
plt.grid()
# +
from matplotlib.collections import LineCollection
from matplotlib.colors import ListedColormap, BoundaryNorm
x = np.linspace(0, 3 * np.pi, 500)
y = np.sin(x)
dydx = np.sqrt(x**2) #np.cos(0.5 * (x[:-1] + x[1:]))
points = np.array([x, y]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
fig, axs = plt.subplots(1, 1, sharex=True, sharey=True)
norm = plt.Normalize(dydx.min(), dydx.max())
# Create a continuous norm to map from data points to colors
lc = LineCollection(segments, cmap='viridis', norm=norm)
# Set the values used for colormapping
lc.set_array(dydx)
lc.set_linewidth(2)
line = axs.add_collection(lc)
fig.colorbar(line, ax=axs)
axs.set_xlim(x.min(), x.max())
axs.set_ylim(-1.1, 1.1)
plt.show()
# +
axs.add_collection(lc)
# -
fig = plt.figure()
ax = plt.axes()
ax.add_collection(lc)
|
Testing/.ipynb_checkpoints/test_1-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
import numpy as np
import torch
from qYOLO.qyolo import train
# +
img_dir = "./../../Dataset/images"
lbl_dir = "./../../Dataset/labels"
weight_bit_width = 8
act_bit_width = 8
n_anchors = 5
anchors = torch.tensor([[0.03240000, 0.07950000],
[0.11750000, 0.42449999],
[0.05780000, 0.15050000],
[0.06340000, 0.25070000],
[0.18110000, 0.22120000]])
n_epochs = 10
batch_size = 1
train(
img_dir,
lbl_dir,
weight_bit_width=weight_bit_width,
act_bit_width=act_bit_width,
anchors=anchors,
n_anchors=n_anchors,
n_epochs=n_epochs,
batch_size=batch_size,
len_lim=50,
img_samples=6,
quantized=True,
)
|
notebooks/MScThesis/train_network.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.0 64-bit
# language: python
# name: python3
# ---
# + [markdown] id="U_J72oQEUy2Y"
# # London at Night
# ### Follow the Instructions here https://developers.google.com/earth-engine/guides/image_relational#colab-python_1
# + id="oKo-zolAU-lr" outputId="9a511f49-29ab-40d3-f4ce-01e06bdc6c9c" colab={"base_uri": "https://localhost:8080/"}
import ee
import folium
# Trigger the authentication flow.
ee.Authenticate()
# Initialize the library.
ee.Initialize()
# + id="FAdps2CbUy2a"
# This is needed in python to add layers to Folium
def add_ee_layer(self, ee_image_object, vis_params, name):
"""Adds a method for displaying Earth Engine image tiles to folium map."""
map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles=map_id_dict['tile_fetcher'].url_format,
attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
name=name,
overlay=True,
control=True
).add_to(self)
# Add EE drawing method to folium.
folium.Map.add_ee_layer = add_ee_layer
# + id="6N8azOqhUy2g" outputId="c804055a-77b2-420a-a85b-7db2ccb5148d"
|
09-GoogleEarthEngine_master/04-Stgoat.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# - ndim: number of dimensions
# - shape: the shape of the array
import numpy as np
array = np.array([0, 1, 2, 3])
print(type(array), array)
array.ndim, array.shape
# #### np.arange
# - better than range()
#
# #### np.newaxis
# - increase the dimension
# - can be used by 'None'
#
a = np.arange(6)[:, np.newaxis]
print(a.ndim, a.shape)
a
array = np.array([0, 1, 2, 3])[:, np.newaxis]
array, array.shape
array = np.array([0, 1, 2, 3])[None, None, :] # None = np.newaxis
array, array.shape
a = np.ones((2, 3, 6))
b = np.ones((2, 6))
print(a.shape, b.shape, "\n")
print(a, "\n")
print("b", b, "\n")
print("b'", b[:, None, :])
c = (a + b[:, np.newaxis, :])
c.shape, c
# #### reshape
a = a.reshape(3, 2)
print(a.ndim, a.shape)
a
print(a)
a.T # transpose
# #### index
print(a)
print(a[2][1])  # row 2, col 1 of the 3x2 array => 5
a[1,0]  # row 1, col 0 => 2
array = np.arange(0, 12).reshape(2, 3, 2)
print(array, "\n")
print("new\n", array[:, 1:, 1:], "\n") # 1:2 x2:3 x2:
print(array[0, 1, 0]) # 1x2x1 => 2
# decrease dimension
print(array)
print(array[0][0][1])
array[0][0], array[0][0].ndim
# #### zeros, ones, eye, empty
z, o = np.zeros([2, 3 ,2]), np.ones([1, 3])
z, o
np.eye(2)
np.empty([2, 2]) # uninitialized memory, so the values are arbitrary
# #### like
print(z)
ls = np.ones_like(z)
# ls
print(o)
ls = np.zeros_like(o)
print(ls)
print(ls.ndim, ls.shape)
# #### linspace, logspace
print(np.linspace(0, 24, 3))
np.logspace(2, 4, 3)  # 10**2, 10**3, 10**4 (start exponent, stop exponent, number of samples)
|
Past/DSS/Programming/Python/180123_numpy_01.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# ## ex09-Advanced Query Techniques of CASE and Subquery
#
# The SQLite CASE expression evaluates a list of conditions and returns an expression based on the result of the evaluation. The CASE expression is similar to the IF-THEN-ELSE statement in other programming languages. You can use the CASE statement in any clause or statement that accepts a valid expression. For example, you can use the CASE statement in clauses such as WHERE, ORDER BY, HAVING, IN, SELECT and statements such as SELECT, UPDATE, and DELETE. See more at http://www.sqlitetutorial.net/sqlite-case/.
#
# A subquery, simply put, is a query written as a part of a bigger statement. Think of it as a SELECT statement inside another one. The result of the inner SELECT can then be used in the outer query.
#
# In this notebook, we put these two query techniques together to calculate seasonal runoff from year-month data in the table of rch.
# %load_ext sql
# ### 1. Connect to the given database of demo.db3
# %sql sqlite:///data/demo.db3
# If you do not remember the tables in the demo data, you can always use the following command to query.
# %sql SELECT name FROM sqlite_master WHERE type='table'
# ### 2. Check the rch table
#
# We can find that the rch table contains time series data with year and month for each river reach. Therefore, it is natural to calculate some seasonal statistics.
# %sql SELECT * From rch LIMIT 3
# ### 3. Calculate Seasonal Runoff
#
# There are two key steps:
# >(1) use the CASE and Subquery to convert months to named seasons;<br>
# >(2) calculate seasonal mean with aggregate functions on groups.
#
# In addition, we also use another filter keyword of ***BETWEEN*** to span months into seasons.
# + magic_args="sqlite://" language="sql"
# SELECT RCH, Quarter, AVG(FLOW_OUTcms) as Runoff
# FROM(
# SELECT RCH, YR,
# CASE
# WHEN (MO) BETWEEN 3 AND 5 THEN 'MAM'
# WHEN (MO) BETWEEN 6 and 8 THEN 'JJA'
# WHEN (MO) BETWEEN 9 and 11 THEN 'SON'
# ELSE 'DJF'
# END Quarter,
# FLOW_OUTcms
# from rch)
# GROUP BY RCH, Quarter
# -
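# The month-to-season CASE mapping at the heart of this query can be
# sanity-checked with Python's built-in sqlite3 module on a throwaway
# in-memory table (the table below is a stand-in, not the rch schema):

```python
import sqlite3

# Check the CASE month-to-season mapping on an in-memory table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE months (mo INTEGER)")
con.executemany("INSERT INTO months VALUES (?)", [(m,) for m in range(1, 13)])

rows = con.execute("""
    SELECT mo,
           CASE
               WHEN mo BETWEEN 3 AND 5  THEN 'MAM'
               WHEN mo BETWEEN 6 AND 8  THEN 'JJA'
               WHEN mo BETWEEN 9 AND 11 THEN 'SON'
               ELSE 'DJF'
           END AS quarter
    FROM months
""").fetchall()
seasons = dict(rows)
print(seasons[1], seasons[4], seasons[7], seasons[12])  # -> DJF MAM JJA DJF
```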
# ### Summary
#
# Sometimes we may need to construct complicated queries that go beyond a table join or a basic SELECT. For example, we might need to write a query that uses the results of other queries as inputs (i.e., SUBQUERY). Or we might need to reclassify numerical values into categories before counting them (i.e., CASE).
#
# In this notebook, we explored a collection of SQL functions and options essential for solving more complex problems. Now we can add subqueries in multiple locations to provide finer control over filtering or preprocessing data before analyzing it in a main query.
|
ex09-Advanced Query Techniques of CASE and Subquery.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"} tags=[]
# # Ice-albedo feedback and Snowball Earth
# -
# Welcome! In this activity we will apply our simple energy balance model ("EBM") in Climlab to evaluate the climate impacts of changing ice cover.
#
# We will study one of the most extreme cases of the ice-albedo feedback hinted at in the geologic record: Snowball Earth.
#
# 
# Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
# + slideshow={"slide_type": "slide"}
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import climlab
# -
def state_plot(model, figsize=(12,5), show=True, ice_temp=True): ## define a standard plot for temperature and albedo
"""Plot the temperature and albedo at the current state of the model.
Shade the current ice line in grey."""
templimits = -30,35
alimits = min(model.albedo)-0.05, max(model.albedo)+0.05
latlimits = -90,90
lat_ticks = np.arange(-90,90,30)
Ts = np.array(model.Ts).flatten()
if ice_temp:
Tf = float(model.param['Tf'])
else:
Tf=0
fig = plt.figure(figsize=figsize)
ax1 = fig.add_subplot(1,2,1)
ax1.plot(model.lat, Ts)
ax1.set(xlim=latlimits, ylim=templimits,
ylabel='Temperature [deg C]', xlabel='Latitude', xticks=lat_ticks)
ax1.fill_between(model.lat, Ts, y2=Tf, where=Ts<Tf, color='LightGrey', alpha=0.5)
ax1.grid()
ax2 = fig.add_subplot(1,2,2)
icerect1 = patches.Rectangle((latlimits[0], 0), width=model.icelat[0]-latlimits[0], height=1,
color='LightGrey', alpha=0.5)
icerect2 = patches.Rectangle((model.icelat[1], 0), width=latlimits[1]-model.icelat[1], height=1,
color='LightGrey', alpha=0.5)
ax2.add_patch(icerect1)
ax2.add_patch(icerect2)
ax2.plot(model.lat, model.albedo)
ax2.set(xlim=latlimits, ylim=alimits,
ylabel='Albedo', xlabel='Latitude', xticks=lat_ticks)
ax2.grid()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Contents
#
# 1. [Setting up an energy balance model](#section1)
# 2. [Ice advance and retreat in the EBM](#section2)
# 3. [Snowball Earth: onset](#section3)
# 4. [Escape from the Snowball](#section4)
# -
# We will use an energy balance model that is very similar to what we set up in Lab 3, with one exception: we account for variations in energy with latitude, and energy transport across latitude bands.
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
# <a id='section1'></a>
#
# ## 1. Setting up an energy balance model
# ____________
#
# -
# 
#
# As we have seen in lecture, Earth's energy budget is an important control on the climate we experience. Radiation energy coming in from the sun can be reflected (in the atmosphere or from the surface) or absorbed. The portion of incoming solar radiation $Q$ that is reflected, $Q_{reflected}$, is set by the **planetary albedo**, $\alpha$.
#
# As we saw in lecture,
# +
Q = 341.3 # W/m2, the incoming solar radiation
Q_reflected = 101.9 # W/m2, the reflected shortwave radiation
alpha = Q_reflected/Q
print('The planetary albedo is {:.2f}'.format(alpha)) # make a nicely formatted print statement to 2 sig figs
# -
# The portion of incoming solar radiation that is not reflected is the **Absorbed Shortwave Radiation**,
# \begin{equation}
# ASR=Q-Q_{reflected}=Q (1-\alpha),
# \end{equation}
# controlled by the albedo $\alpha$.
#
# The heat energy emitted to space at the top of the atmosphere is the **Outgoing Longwave Radiation**, $OLR$. The total **energy budget** of the Earth system is the balance between energy going out ($OLR$) and coming in ($ASR$):
#
# \begin{align}
# \frac{dE}{dt} &= ASR - OLR \\
# &= Q (1-\alpha) - OLR,
# \end{align}
# where we see $\alpha$ is a key parameter.
#
# This is the basis of the simple **energy balance model** we first set up in Climlab. Today, we will use it to explore the ice-albedo feedback.
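# Before adding latitude dependence, the balance can be illustrated with a
# zero-dimensional version: with the common linearized OLR parameterization
# OLR = A + B*Ts, the equilibrium temperature follows directly. The A and B
# values below are the same reference values used later in this notebook.

```python
# Zero-dimensional illustration: at equilibrium, ASR = OLR.
Q = 341.3          # W/m2, incoming solar radiation (as above)
alpha = 101.9 / Q  # planetary albedo from the reflected flux above
A, B = 210.0, 2.0  # W/m2 and W/m2/degC, reference values used below

ASR = Q * (1 - alpha)   # absorbed shortwave, = 239.4 W/m2
Ts_eq = (ASR - A) / B   # solve Q*(1 - alpha) = A + B*Ts for Ts
print('Equilibrium Ts = {:.1f} deg C'.format(Ts_eq))  # -> 14.7
```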
# + [markdown] slideshow={"slide_type": "slide"}
# ### Our EBM: latitude-dependent, annual mean
#
# Here, we'll use an energy balance model that accounts for radiation fluxes that vary with latitude $\phi$, but averages out seasonal changes in an annual mean.
#
# The equation the model will solve for us is below.
# \begin{align*}
# C(\phi) \frac{\partial T_s}{\partial t} = & ~(1-\alpha) ~ Q - \left( A + B~T_s \right) + \\
# & \frac{D}{\cos\phi } \frac{\partial }{\partial \phi} \left(\cos\phi ~ \frac{\partial T_s}{\partial \phi} \right)
# \end{align*}
#
# **Questions:**
#
# *1.1. With your lab partner, diagram the physical meaning of each term in the equation.*
#
# *1.2. How do you expect the heat capacity, $C(\phi)$, to vary with latitude? Why?*
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
# <a id='section2'></a>
#
# ## 2. Interactive snow and ice line in the EBM
# ____________
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### Temperature-dependent ice line
#
# Let the surface albedo be larger wherever the temperature is below some threshold $T_f$:
#
# $$ \alpha\left(\phi, T(\phi) \right) = \left\{\begin{array}{ccc}
# \alpha_0 + \alpha_2 P_2(\sin\phi) & ~ & T(\phi) > T_f \\
# a_i & ~ & T(\phi) \le T_f \\
# \end{array} \right. $$
#
# -
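# A direct NumPy transcription of this step function (an illustration of the
# formula only, not climlab's internal implementation; the parameter values
# match the `param` dictionary defined below):

```python
import numpy as np

# Step-function albedo: warm latitudes get a0 + a2*P2(sin(lat)),
# latitudes at or below the freezing threshold Tf get the icy albedo ai.
def step_albedo(lat_deg, T, a0=0.3, a2=0.078, ai=0.62, Tf=-10.0):
    x = np.sin(np.deg2rad(lat_deg))
    P2 = 0.5 * (3 * x**2 - 1)          # second Legendre polynomial
    warm = a0 + a2 * P2                # ice-free albedo profile
    return np.where(T > Tf, warm, ai)  # icy albedo where T <= Tf

lat = np.array([0.0, 60.0, 85.0])
T = np.array([25.0, 0.0, -20.0])       # the last cell is below Tf
print(step_albedo(lat, T))             # last value is the icy albedo 0.62
```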
# for convenience, set up a dictionary with our reference parameters
param = {'A':210, 'B':2, 'a0':0.3, 'a2':0.078, 'ai':0.62, 'Tf':-10.}
model1 = climlab.EBM_annual(name='Annual EBM with ice line',
num_lat=180, D=0.55, **param )
print(model1)
# + [markdown] slideshow={"slide_type": "slide"}
# Because we provided a parameter `ai` for the icy albedo, our model now contains several sub-processes contained within the process called `albedo`. Together these implement the step-function formula above.
#
# The process called `iceline` simply looks for grid cells with temperature below $T_f$ and adjusts their albedo.
# -
print(model1.param)
model1.integrate_years(5)
f = state_plot(model1)
# Grey shading on the above plots indicates where there is ice in the model. We can find the same information by querying the model `icelat` attribute:
# + slideshow={"slide_type": "slide"}
model1.icelat
# -
# ### Sudden cooling
#
# What happens if we force the model to be colder? Let's store our current model state, then make a new model like it that will decrease the temperature by 20 $^{\circ}$C everywhere.
#
# **Exercise: figure out how to make a model clone and decrease the temperature by 20 $^{\circ}$C everywhere.** Call it m2 so that the plotting cell will recognize it.
## Assign current model diagnostics to separate variables
Tequil = np.array(model1.Ts)
ALBequil = np.array(model1.albedo)
OLRequil = np.array(model1.OLR)
ASRequil = np.array(model1.ASR)
m2 = ... #your work here
m2.compute_diagnostics()
f3 = state_plot(m2)
f2 = state_plot(model1)
# **Question:**
#
# *2.1. Compare and contrast the global patterns of temperature and albedo in this new, colder climate versus the previous simulation.*
# Let's look at the radiative effect - how does the absorbed shortwave radiation change with this colder climate?
# +
my_ticks = [-90,-60,-30,0,30,60,90]
lat = model1.lat
fig = plt.figure( figsize=(12,5) )
ax1 = fig.add_subplot(1,2,1)
ax1.plot(lat, Tequil, label='equil')
ax1.plot(lat, m2.state['Ts'], label='pert' )
ax1.grid()
ax1.legend()
ax1.set_xlim(-90,90)
ax1.set_xticks(my_ticks)
ax1.set_xlabel('Latitude')
ax1.set_ylabel('Temperature (degC)')
ax2 = fig.add_subplot(1,2,2)
ax2.plot( lat, ASRequil, label='equil')
ax2.plot( lat, m2.diagnostics['ASR'], label='pert' )
ax2.grid()
ax2.legend()
ax2.set_xlim(-90,90)
ax2.set_xticks(my_ticks)
ax2.set_xlabel('Latitude')
ax2.set_ylabel('ASR (W m$^{-2}$)')
# -
# This tells us that making the climate colder, and allowing the ice edge to advance, tends to decrease the absorbed shortwave radiation (ASR). That is, ice advance due to cooling is a ***positive feedback*** that will tend to lead to more cooling and more ice advance.
# **Question:**
# *2.2. Repeat the comparison with a warmer climate. What is responsible for the differences?*
# + [markdown] slideshow={"slide_type": "slide"} tags=[]
# ____________
# <a id='section3'></a>
#
# ## 3. Snowball Earth onset
# ____________
#
# -
# In section 2 we forced the model to be cooler everywhere. Now, we'll examine some real conditions of the past that could have forced a cooling.
#
# The radiation coming in from the sun has not always been what it is today. Millions of years ago, the Sun was less bright, and as a result there was less solar energy entering the Earth system. Luckily, it is easy to use our Climlab energy balance model to investigate a past climate with a weaker sun.
#
# The model parameter `S0` is the solar constant, describing the flux of solar radiation.
m3 = climlab.process_like(model1)
m3.subprocess.insolation.S0
m3.icelat
# Let's decrease the solar constant to examine past conditions.
m3.subprocess.insolation.S0 = 1300.
m3.integrate_years(100.)
m3.icelat
f3 = state_plot(m3)
# *3.1. Find a value or values of S0 that results in 100% ice cover (entire plot shaded).*
#
# *3.2. Compare the value of S0 that you found with the inferred history of S0 (from the textbook or another source). Was it possible to reach Snowball Earth conditions under realistic historical values of S0?*
# Is it possible to produce Snowball Earth with orbital parameters only? Climlab can help us assess. Modify the eccentricity, longitude of perihelion, and obliquity below to see if you can produce a Snowball Earth with realistic values.
## alternative: orbital forcing
m3.subprocess.insolation.orb = {'ecc': 0.017236, 'long_peri': 281.37, 'obliquity': 22.9}
m3.integrate_years(100.)
m3.icelat
# *What do you conclude from this experiment?*
# ____________
# <a id='section4'></a>
#
# ## 4. Escape from the Snowball
# ____________
#
# We have read that there were other important factors in Earth's past climate.
#
# **Discussion:**
# With your neighbor and then in class, propose a geological mechanism that could have ended Snowball Earth.
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
#
# ## Credits
#
# This notebook was developed by [<NAME>](http://ehultee.github.io), based in large part on the `ClimateModeling_courseware` resources of [<NAME>](http://www.atmos.albany.edu/facstaff/brose/index.html).
# ____________
# + slideshow={"slide_type": "skip"}
|
07-snowball_earth.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# This notebook explains how to add batch normalization to VGG. The code shown here is implemented in [vgg_bn.py](https://github.com/fastai/courses/blob/master/deeplearning1/nbs/vgg16bn.py), and there is a version of ``vgg_ft`` (our fine tuning function) with batch norm called ``vgg_ft_bn`` in [utils.py](https://github.com/fastai/courses/blob/master/deeplearning1/nbs/utils.py).
from theano.sandbox import cuda
# %matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import print_function, division
# # The problem, and the solution
# ## The problem
# The problem we faced in lesson 3 is that when we wanted to add batch normalization, we initialized *all* the dense layers of the model to random weights, and then tried to train them with our cats v dogs dataset. But that's a lot of weights to initialize to random - out of 134m params, around 119m are in the dense layers! Take a moment to think about why this is, and convince yourself that dense layers are where most of the weights will be. Also, think about whether this implies that most of the *time* will be spent training these weights. What do you think?
#
# Trying to train 120m params using just 23k images is clearly an unreasonable expectation. The reason we haven't had this problem before is that the dense layers were not random, but were trained to recognize imagenet categories (other than the very last layer, which only has 8194 params).
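# A quick arithmetic check of the figures above, using standard VGG16 dense
# layer sizes with the 2-class head mentioned in the text (4096*2 + 2 = 8,194
# params in the very last layer):

```python
# Back-of-the-envelope parameter count (weights + biases) for the dense layers.
fc1 = 25088 * 4096 + 4096   # flattened conv output (7*7*512) -> 4096
fc2 = 4096 * 4096 + 4096    # 4096 -> 4096
fc3 = 4096 * 2 + 2          # 4096 -> 2 (cats v dogs head): 8,194 params

dense_total = fc1 + fc2 + fc3
print('{:,}'.format(dense_total))   # -> 119,554,050, i.e. ~119m
```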
# ## The solution
# The solution, obviously enough, is to add batch normalization to the VGG model! To do so, we have to be careful - we can't just insert batchnorm layers, since their parameters (*gamma* - which is used to multiply by each activation, and *beta* - which is used to add to each activation) will not be set correctly. Without setting these correctly, the new batchnorm layers will normalize the previous layer's activations, meaning that the next layer will receive totally different activations to what it would have without the new batchnorm layer. And that means that all the pre-trained weights are no longer of any use!
#
# So instead, we need to figure out what beta and gamma to choose when we insert the layers. The answer turns out to be pretty simple - we need to calculate the mean and standard deviation of the activations of that layer when computed on all of imagenet, and then set beta and gamma to these values. That means that the new batchnorm layer will normalize the data with the mean and standard deviation, and then immediately un-normalize the data using the beta and gamma parameters we provide. So the output of the batchnorm layer will be identical to its input - which means that all the pre-trained weights will continue to work just as well as before.
#
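# This identity trick is easy to check numerically. A minimal numpy sketch (standalone, not part of the VGG pipeline): with gamma set to the standard deviation and beta set to the mean of some activations, normalize-then-scale-and-shift hands back (almost exactly) its input.

```python
import numpy as np

# Fake "pre-trained" activations for one layer: 256 examples, 8 units
acts = np.random.randn(256, 8) * 3.0 + 5.0
mu, var = acts.mean(axis=0), acts.var(axis=0)

eps = 1e-5
gamma = np.sqrt(var + eps)   # scale: undoes the division by std
beta = mu                    # shift: undoes the mean subtraction

normed = (acts - mu) / np.sqrt(var + eps)   # what batchnorm computes
out = gamma * normed + beta                 # scaled and shifted back

print(np.allclose(out, acts))  # True - the layer acts as an identity
```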
# The benefit of this is that when we wish to fine-tune our own networks, we will have all the benefits of batch normalization (higher learning rates, more resilient training, and less need for dropout) plus all the benefits of a pre-trained network.
# To calculate the mean and standard deviation of the activations on imagenet, we need to download imagenet. You can download imagenet from http://www.image-net.org/download-images . The file you want is the one titled **Download links to ILSVRC2013 image data**. You'll need to request access from the imagenet admins for this, although it seems to be an automated system - I've always found that access is provided instantly. Once you're logged in and have gone to that page, look for the **CLS-LOC dataset** section. Both training and validation images are available, and you should download both. There's not much reason to download the test images, however.
#
# Note that this will not be the entire imagenet archive, but just the 1000 categories that are used in the annual competition. Since that's what VGG16 was originally trained on, that seems like a good choice - especially since the full dataset is 1.1 terabytes, whereas the 1000 category dataset is 138 gigabytes.
# # Adding batchnorm to Imagenet
# ## Setup
# ### Sample
# As per usual, we create a sample so we can experiment more rapidly.
# %pushd data/imagenet
# %cd train
# +
# %mkdir ../sample
# %mkdir ../sample/train
# %mkdir ../sample/valid
from shutil import copyfile
g = glob('*')
for d in g:
os.mkdir('../sample/train/'+d)
os.mkdir('../sample/valid/'+d)
# -
g = glob('*/*.JPEG')
shuf = np.random.permutation(g)
for i in range(25000): copyfile(shuf[i], '../sample/train/' + shuf[i])
# +
# %cd ../valid
g = glob('*/*.JPEG')
shuf = np.random.permutation(g)
for i in range(5000): copyfile(shuf[i], '../sample/valid/' + shuf[i])
# %cd ..
# -
# %mkdir sample/results
# %popd
# ### Data setup
# We set up our paths, data, and labels in the usual way. Note that we don't try to read all of Imagenet into memory! We only load the sample into memory.
sample_path = 'data/jhoward/imagenet/sample/'
# This is the path to my fast SSD - I put datasets there when I can to get the speed benefit
fast_path = '/home/jhoward/ILSVRC2012_img_proc/'
#path = '/data/jhoward/imagenet/sample/'
path = 'data/jhoward/imagenet/'
batch_size=64
samp_trn = get_data(sample_path+'train')
samp_val = get_data(sample_path+'valid')
save_array(sample_path+'results/trn.dat', samp_trn)
save_array(sample_path+'results/val.dat', samp_val)
samp_trn = load_array(sample_path+'results/trn.dat')
samp_val = load_array(sample_path+'results/val.dat')
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
(samp_val_classes, samp_trn_classes, samp_val_labels, samp_trn_labels,
samp_val_filenames, samp_filenames, samp_test_filenames) = get_classes(sample_path)
# ### Model setup
# Since we're just working with the dense layers, we should pre-compute the output of the convolutional layers.
vgg = Vgg16()
model = vgg.model
layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers)
if type(layer) is Convolution2D][-1]
conv_layers = layers[:last_conv_idx+1]
dense_layers = layers[last_conv_idx+1:]
conv_model = Sequential(conv_layers)
samp_conv_val_feat = conv_model.predict(samp_val, batch_size=batch_size*2)
samp_conv_feat = conv_model.predict(samp_trn, batch_size=batch_size*2)
save_array(sample_path+'results/conv_val_feat.dat', samp_conv_val_feat)
save_array(sample_path+'results/conv_feat.dat', samp_conv_feat)
samp_conv_feat = load_array(sample_path+'results/conv_feat.dat')
samp_conv_val_feat = load_array(sample_path+'results/conv_val_feat.dat')
samp_conv_val_feat.shape
# This is our usual Vgg network just covering the dense layers:
def get_dense_layers():
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(0.5),
Dense(4096, activation='relu'),
Dropout(0.5),
Dense(1000, activation='softmax')
]
dense_model = Sequential(get_dense_layers())
for l1, l2 in zip(dense_layers, dense_model.layers):
l2.set_weights(l1.get_weights())
# ### Check model
# It's a good idea to check that your models are giving reasonable answers, before using them.
dense_model.compile(Adam(), 'categorical_crossentropy', ['accuracy'])
dense_model.evaluate(samp_conv_val_feat, samp_val_labels)
model.compile(Adam(), 'categorical_crossentropy', ['accuracy'])
# should be identical to above
model.evaluate(samp_val, samp_val_labels)
# should be a little better than above, since VGG authors overfit
dense_model.evaluate(samp_conv_feat, samp_trn_labels)
# ## Adding our new layers
# ### Calculating batchnorm params
# To calculate the output of a layer in a Keras sequential model, we have to create a function that defines the input layer and the output layer, like this:
k_layer_out = K.function([dense_model.layers[0].input, K.learning_phase()],
[dense_model.layers[2].output])
# Then we can call the function to get our layer activations:
d0_out = k_layer_out([samp_conv_val_feat, 0])[0]
k_layer_out = K.function([dense_model.layers[0].input, K.learning_phase()],
[dense_model.layers[4].output])
d2_out = k_layer_out([samp_conv_val_feat, 0])[0]
# Now that we've got our activations, we can calculate the mean and standard deviation for each (note that due to a bug in keras, it's actually the variance that we'll need).
mu0,var0 = d0_out.mean(axis=0), d0_out.var(axis=0)
mu2,var2 = d2_out.mean(axis=0), d2_out.var(axis=0)
# ### Creating batchnorm model
# Now we're ready to create and insert our layers just after each dense layer.
nl1 = BatchNormalization()
nl2 = BatchNormalization()
bn_model = insert_layer(dense_model, nl2, 5)
bn_model = insert_layer(bn_model, nl1, 3)
bnl1 = bn_model.layers[3]
bnl4 = bn_model.layers[6]
# After inserting the layers, we can set their weights to the variance and mean we just calculated.
bnl1.set_weights([var0, mu0, mu0, var0])
bnl4.set_weights([var2, mu2, mu2, var2])
bn_model.compile(Adam(1e-5), 'categorical_crossentropy', ['accuracy'])
# We should find that the new model gives identical results to those provided by the original VGG model.
bn_model.evaluate(samp_conv_val_feat, samp_val_labels)
bn_model.evaluate(samp_conv_feat, samp_trn_labels)
# ### Optional - additional fine-tuning
# Now that we have a VGG model with batchnorm, we might expect that the optimal weights would be a little different to what they were when originally created without batchnorm. So we fine tune the weights for one epoch.
feat_bc = bcolz.open(fast_path+'trn_features.dat')
labels = load_array(fast_path+'trn_labels.dat')
val_feat_bc = bcolz.open(fast_path+'val_features.dat')
val_labels = load_array(fast_path+'val_labels.dat')
bn_model.fit(feat_bc, labels, nb_epoch=1, batch_size=batch_size,
validation_data=(val_feat_bc, val_labels))
# The results look quite encouraging! Note that these VGG weights are now specific to how keras handles image scaling - that is, it squashes and stretches images, rather than adding black borders. So this model is best used on images created in that way.
bn_model.save_weights(path+'models/bn_model2.h5')
bn_model.load_weights(path+'models/bn_model2.h5')
# ### Create combined model
# Our last step is simply to copy our new dense layers on to the end of the convolutional part of the network, and save the new complete set of weights, so we can use them in the future when using VGG. (Of course, we'll also need to update our VGG architecture to add the batchnorm layers).
new_layers = copy_layers(bn_model.layers)
for layer in new_layers:
conv_model.add(layer)
copy_weights(bn_model.layers, new_layers)
conv_model.compile(Adam(1e-5), 'categorical_crossentropy', ['accuracy'])
conv_model.evaluate(samp_val, samp_val_labels)
conv_model.save_weights(path+'models/inet_224squash_bn.h5')
|
deeplearning1/nbs/imagenet_batchnorm.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: geo_dev
# language: python
# name: geo_dev
# ---
import geopandas as gpd
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import preprocessing
import numpy as np
from sklearn.mixture import GaussianMixture
path = 'files/contextual.parquet'
data = pd.read_parquet(path)
# +
# normalise data
x = data.values
scaler = preprocessing.StandardScaler()
cols = list(data.columns)
data[cols] = scaler.fit_transform(data[cols])
# -
# We now have normalised data; let's save it.
data.to_parquet('files/contex_data_norm.parquet')
# +
bic = pd.DataFrame(columns=['n', 'bic', 'run'])
ix = 0
n_components_range = range(2, 40)
gmmruns = 3
# -
# Measure BIC to estimate optimal number of clusters.
sample = data
for n_components in n_components_range:
for i in range(gmmruns):
gmm = GaussianMixture(n_components=n_components, covariance_type="full", max_iter=200, n_init=1, verbose=1)
fitted = gmm.fit(sample)
bicnum = gmm.bic(data)
bic.loc[ix] = [n_components, bicnum, i]
ix += 1
print(n_components, i, "BIC:", bicnum)
bic.to_csv('files/complete_BIC.csv')
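# With the BIC table saved, the candidate component count is simply the `n` with the lowest mean BIC across runs. A small sketch on toy values (the real `bic` frame has the same columns):

```python
import pandas as pd

# Toy stand-in for the bic DataFrame built above (same columns)
toy = pd.DataFrame({'n':   [2, 2, 3, 3, 4, 4],
                    'bic': [120.0, 118.0, 90.0, 92.0, 95.0, 97.0],
                    'run': [0, 1, 0, 1, 0, 1]})

mean_bic = toy.groupby('n')['bic'].mean()
best_n = int(mean_bic.idxmin())
print(best_n)  # 3, the component count with the lowest mean BIC
```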
# Plot BIC values
# +
import seaborn as sns
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(16, 16))
sns.lineplot(ax=ax, x='n', y='bic', data=bic)
plt.savefig('files/complete_BIC.pdf')
# -
# ## Clustering
# +
n = 30
gmm = GaussianMixture(n_components=n, covariance_type="full", max_iter=200, n_init=5, verbose=1)
fitted = gmm.fit(data)
# -
data['cluster'] = gmm.predict(data)
data.reset_index()[['cluster', 'uID']].to_csv('files/200309_clusters_complete_n30.csv')
# ## Dendrogram
from scipy.cluster import hierarchy
import matplotlib.pyplot as plt
clusters = data.reset_index()[['cluster', 'uID']]
# Save to pdf.
# +
group = data.groupby('cluster').mean()
Z = hierarchy.linkage(group, 'ward')
plt.figure(figsize=(25, 10))
dn = hierarchy.dendrogram(Z, color_threshold=30, labels=group.index)
plt.savefig('tree.pdf')
|
source/notebooks/chapter78/Chapter 7 + 8 - Cluster analysis + taxonomy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import glob
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, initializers
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
file_paths = glob.glob("./data/*")
print(file_paths)
# +
category = np.empty((0, 1), float)
rssi = np.empty((0, 100), float)
for file in file_paths:
d = np.loadtxt(file, delimiter=',')
category_tmp, rssi_tmp = np.hsplit(d, [1])
rssi = np.concatenate([rssi, rssi_tmp], axis=0)
category = np.concatenate([category, category_tmp], axis=0)
rssi = rssi * (-1) / 128
print("rssi array shape : ", rssi.shape)
#print(rssi)
category = tf.keras.utils.to_categorical(category, 2)
print("category array shape : ", category.shape)
#print(category)
rssi_train, rssi_test, category_train, category_test = train_test_split(rssi, category, test_size=0.2)
print("rssi training array shape : ", rssi_train.shape)
#print(rssi_train)
print("category training array shape : ", category_train.shape)
#print(category_train)
print("rssi test array shape : ", rssi_test.shape)
print("category test array shape : ", category_test.shape)
#train_data = tf.data.Dataset.from_tensor_slices((rssi_train, category_train))
#print(train_data)
# +
# Create the model
model = models.Sequential()
model.add(layers.Dense(128, input_shape=(100, ), activation='relu'))
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
#model.add(layers.Dropout(0.1))
model.add(layers.Dense(2, activation='softmax'))
# Print the model summary
model.summary()
# +
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
training = model.fit(rssi_train, category_train,
batch_size=128,
epochs=256,
#verbose=1,
validation_data=(rssi_test, category_test))
# Accuracy
plt.plot(training.history['accuracy'])
plt.plot(training.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
#loss
plt.plot(training.history['loss'])
plt.plot(training.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# -
model.save('model/seating_detection_algorithm.h5', save_format='h5')
# +
import tensorflow as tf
from tensorflow.keras import models
import numpy as np
model = models.load_model('model/seating_detection_algorithm.h5')
x = np.loadtxt("./data/20200319_seating.csv", delimiter=',')
#print("x array shape : ", x.shape)
#print(x)
x = x[0]
#print("x array shape : ", x.shape)
#print(x)
x = np.delete(x, 0)
#print("x array shape : ", x.shape)
#print(x)
x = x.reshape(1,100)
#print("x array shape : ", x.shape)
#print(x)
x = x * (-1) / 128
#print("x array shape : ", x.shape)
#print(x)
print(np.argmax(model.predict(x)))
#l = model.predict(x)
# -
|
seating_detection_algorithm.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
import csv
import numpy as np
from watson_developer_cloud import NaturalLanguageUnderstandingV1
import watson_developer_cloud.natural_language_understanding.features.v1 as features
import json
natural_language_understanding = NaturalLanguageUnderstandingV1(version='2017-02-27', username='')
import pandas
articles = pandas.read_csv('scmp_news_2.csv', encoding = "ISO-8859-1")
content=[]
for x in range(0,articles.shape[0]):
content.append(articles.Content[x])
dict_categories={}
for x in range(0, articles.shape[0]):
    response = natural_language_understanding.analyze(
        text=content[x], features=[features.Sentiment(), features.Categories()])
    category = response['categories'][0]['label'].split('/')
    label = response['sentiment']['document']['label']
    if category[1] not in dict_categories:
        dict_categories[category[1]] = {'count': {}}
    # increment the sentiment tally, starting from zero on first sight
    counts = dict_categories[category[1]]['count']
    counts[label] = counts.get(label, 0) + 1
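# The per-category sentiment tally above can also be written with `collections.defaultdict` and `Counter`, which removes the missing-key bookkeeping entirely. A sketch on stand-in (category, sentiment) pairs rather than live Watson responses:

```python
from collections import Counter, defaultdict

# Hypothetical (top-level category, document sentiment) pairs per article
pairs = [('news', 'positive'), ('news', 'negative'),
         ('news', 'positive'), ('sports', 'neutral')]

counts = defaultdict(Counter)  # missing keys start at zero automatically
for category, sentiment in pairs:
    counts[category][sentiment] += 1

print(counts['news']['positive'])    # 2
print(counts['sports']['negative'])  # 0, no KeyError handling needed
```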
# +
ptive_counts=[]
ntive_counts=[]
neu_counts=[]
ptive_categories=[]
ntive_categories=[]
neu_categories=[]
for x in dict_categories:
try:
ptive_counts.append(dict_categories[x]['count']['positive'])
except:
ptive_counts.append(0)
try:
ntive_counts.append(dict_categories[x]['count']['negative'])
except:
ntive_counts.append(0)
try:
neu_counts.append(dict_categories[x]['count']['neutral'])
except:
neu_counts.append(0)
ptive_categories.append(x)
ntive_categories.append(x)
neu_categories.append(x)
ptive_sentiment=['positive']*len(ptive_categories)
ntive_sentiment=['negative']*len(ptive_categories)
neu_sentiment=['neutral']*len(ptive_categories)
categories=ptive_categories+ntive_categories+neu_categories
counts=ptive_counts+ntive_counts+neu_counts
sentiment=ptive_sentiment+ntive_sentiment+neu_sentiment
# -
with open("scmp_categories.csv", "w") as toWrite:
writer = csv.writer(toWrite, delimiter=",")
writer.writerow(["Category","Count", "Sentiment"])
for x in range(0,len(categories)):
writer.writerow([categories[x],counts[x],sentiment[x]])
categories = pandas.read_csv('scmp_categories.csv', encoding = "ISO-8859-1")
categories
|
Categories_extraction_scmp.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Looking Glass
#
#
# Example using mouse events to simulate a looking glass for inspecting data.
#
# +
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# Fixing random state for reproducibility
np.random.seed(19680801)
x, y = np.random.rand(2, 200)
fig, ax = plt.subplots()
circ = patches.Circle((0.5, 0.5), 0.25, alpha=0.8, fc='yellow')
ax.add_patch(circ)
ax.plot(x, y, alpha=0.2)
line, = ax.plot(x, y, alpha=1.0, clip_path=circ)
ax.set_title("Left click and drag to move looking glass")
class EventHandler:
def __init__(self):
fig.canvas.mpl_connect('button_press_event', self.onpress)
fig.canvas.mpl_connect('button_release_event', self.onrelease)
fig.canvas.mpl_connect('motion_notify_event', self.onmove)
self.x0, self.y0 = circ.center
self.pressevent = None
def onpress(self, event):
if event.inaxes != ax:
return
if not circ.contains(event)[0]:
return
self.pressevent = event
def onrelease(self, event):
self.pressevent = None
self.x0, self.y0 = circ.center
def onmove(self, event):
if self.pressevent is None or event.inaxes != self.pressevent.inaxes:
return
dx = event.xdata - self.pressevent.xdata
dy = event.ydata - self.pressevent.ydata
circ.center = self.x0 + dx, self.y0 + dy
line.set_clip_path(circ)
fig.canvas.draw()
handler = EventHandler()
plt.show()
|
matplotlib/gallery_jupyter/event_handling/looking_glass.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import brightway2 as bw
bw.projects.set_current("B4B18")
if "testdb" in bw.databases:
    bw.Database("testdb").delete()
t_db = bw.Database("testdb")
# +
t_db.write({
("testdb", "Electricity production"):{
'name':'Electricity, low voltage',
'unit': 'kWh',
'exchanges': [{
'input': ('testdb', 'Fuel production'),
'amount': 2,
'unit': 'kg',
'type': 'technosphere'
},{
'input': ('testdb', 'Carbon dioxide'),
'amount': 1,
'unit': 'kg',
'type': 'biosphere'
},{
'input': ('testdb', 'Sulphur dioxide'),
'amount': 0.1,
'unit': 'kg',
'type': 'biosphere'
},{
'input': ('testdb', 'Electricity production'), #important to write the same process name in output
'amount': 10,
'unit': 'kWh',
'type': 'production'
}]
},
('testdb', 'Fuel production'):{
'name': 'Refined fuel',
'unit': 'kg',
'exchanges':[{
'input': ('testdb', 'Carbon dioxide'),
'amount': 10,
'unit': 'kg',
'type': 'biosphere'
},{
'input': ('testdb', 'Sulphur dioxide'),
'amount': 2,
'unit': 'kg',
'type': 'biosphere'
},{
'input': ('testdb', 'Crude oil'),
'amount': -50,
'unit': 'kg',
'type': 'biosphere'
},{
'input': ('testdb', 'Fuel production'),
'amount': 1,
'unit': 'kg',
'type': 'production'
}]
},
('testdb', 'Carbon dioxide'):{'name': 'Carbon dioxide', 'unit':'kg', 'type': 'biosphere'},
('testdb', 'Sulphur dioxide'):{'name': 'Sulphur dioxide', 'unit':'kg', 'type': 'biosphere'},
('testdb', 'Crude oil'):{'name': 'Crude oil', 'unit':'kg', 'type': 'biosphere'}
})
functional_unit = {t_db.get("Electricity production") : 1}
lca = bw.LCA(functional_unit)
lca.lci()
# -
# ### Create reversed dictionaries. They return the row or column number in the matrices and arrays that corresponds to an activity, product, or elementary flow.
rev_act_dict, rev_product_dict, rev_bio_dict = lca.reverse_dict()
# ### Check out the dictionaries
# #### This is the dictionary of activities (columns in the technosphere matrix, supply and demand arrays) with the column number as key, and the activity reference as value
print(rev_act_dict)
# ### Or fancy-printed
print("Col. num."+ " " + "Activity")
[print(str(k)+" "+rev_act_dict[k][1]) for k in rev_act_dict]
# #### This is the dictionary of products (which are supplied by activities), with the key being the row number in the technosphere, demand and supply arrays, and the value being the activity supplying the product.
rev_product_dict
# ### Or, to see directly the product
print("Row num."+" "+"Product")
[print(str(k)+" "+str(bw.get_activity(rev_product_dict[k]))) for k in rev_product_dict]
# ### And here is the dictionary of elementary flows (rows in the environmental matrix (B matrix)), with the key being the row number and the value being the elementary flow
print("Row num."+" "+"Elementary flow")
[print(str(k)+" "+str(bw.get_activity(rev_bio_dict[k]))) for k in rev_bio_dict]
# ### When we have all that, we can check out the different matrices and arrays.
# ### Regarding exchanges between activities (the technosphere matrix, or A matrix) we have modeled: who gives to whom?
tech_matrix=lca.technosphere_matrix.toarray()
for r in range(0,tech_matrix.shape[0]):
for c in range(0,tech_matrix.shape[1]):
if tech_matrix[r, c]>0:
print(str(rev_act_dict[c][1])+" supplies "+ str(tech_matrix[r, c])+" of "+str(bw.get_activity(rev_product_dict[r])))
else:
print(str(rev_act_dict[c][1])+" uses "+ str(tech_matrix[r, c])+" of "+str(bw.get_activity(rev_product_dict[r])))
# ### This seems to correspond with the activities created.
# ### Regarding the demand array: which product is demanded to fulfill the FU?
demand_array = lca.demand_array.tolist()
# enumerate gives the row index directly; list.index() returns the first
# matching value, which is wrong when several entries share the same amount
for i, r in enumerate(demand_array):
    print("{} is demanded of {}".format(r, bw.get_activity(rev_product_dict[i])))
# ### This seems to make sense. It corresponds to the activity and amount specified in the bw.LCA() call
# ### Regarding the supply array: which activities supply to fulfill the FU?
supply_array = lca.supply_array.tolist()
for i, r in enumerate(supply_array):
    print("{} supplies {}".format(rev_product_dict[i][1], r))
# ### Now, let's look at the calculated inventory and try to display it with the supplying activities as columns and the requested environmental flows as rows.
# ### The inventory given by the LCA object looks like this. It is hardly readable when it contains thousands of flows.
print(lca.inventory)
# ### We turn it into a simple array for ease of access to the values
inventory_matrix=lca.inventory.toarray()
# ### And we loop through it, first row-wise, then column-wise
for r in range(0,inventory_matrix.shape[0]):
for c in range(0,inventory_matrix.shape[1]):
if inventory_matrix[r, c]>0:
print(str(rev_act_dict[c][1])+" emits "+ str(inventory_matrix[r, c])+" "+bw.get_activity(rev_bio_dict[r])["unit"]+" of "+str(rev_bio_dict[r][1]))
else:
print(str(rev_act_dict[c][1])+" uses "+ str(inventory_matrix[r, c])+" "+bw.get_activity(rev_bio_dict[r])["unit"]+" of "+str(rev_bio_dict[r][1]))
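# ### Closing check: the arrays navigated above are plain linear algebra. The supply array solves A s = d, and the inventory is B diag(s). A self-contained numpy sketch using the same numbers as the testdb model:

```python
import numpy as np

# Technosphere A (rows: products, cols: activities), mirroring testdb:
# electricity production outputs 10 kWh and uses 2 kg fuel;
# fuel production outputs 1 kg fuel.
A = np.array([[10.0, 0.0],
              [-2.0, 1.0]])
# Biosphere B (rows: CO2, SO2; cols: activities), per unit of activity
B = np.array([[1.0, 10.0],
              [0.1, 2.0]])
d = np.array([1.0, 0.0])  # functional unit: 1 kWh of electricity

s = np.linalg.solve(A, d)     # supply array: [0.1, 0.2]
inventory = B @ np.diag(s)    # per-activity emissions
print(inventory.sum(axis=1))  # totals: about 2.1 kg CO2 and 0.41 kg SO2
```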
|
00.2_Navigate though matrices and arrays copy_RS.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lesson 2: Comparison Operators
1>2
1==1
1!=2
'string'=='string'
'bell'=='boy'
(1==2) and (2==2)
(1==2) or (2==2)
(1==1) and not (1==2)
# <b> Control Flow of Python
# <b> If Statement
if True:
print('yes')
if False:
print('no')
if (1==5):
print('true')
elif (2!=2):
print ('yes')
else:
print('hi')
if (1==3):
print('true')
elif (2!=3):
print('yes')
# <b>for loops
seq=[10,202,30,40,50]
for item in seq:
print('hi')
for num in seq:
print(num)
for num in seq:
print(num**2)
# <b>While loops
i=1
while i<5:
print('i is currently {}'.format(i))
i=i+1
# <b>Range Function
range(5)
for item in range(5):
print('item currently is {}'.format(item))
list(range(1,11))
# <b>List comprehension
# +
x=[1,2,3,4]
out =[]
for num in x:
out.append(num**2)
out
# 1,4,9,16
# -
# The same code above can be written more concisely as a list comprehension
x=[10,20,30,40]
[num**2 for num in x]
#100,400,900,1600
# # Lesson 3: Functions
# 1. Functions
# 2. Lambda Expressions
# 3. Various useful methods
def my_func():
print('hello')
my_func()
# <b>functions with parameter
def myfunc(param,param2='class'):
print(param,param2)
myfunc('this is my class ApDev')
# <b>functions with default parameter
def myfunc1(param=5):
"""
docstring goes here!
"""
print(param)
#return param
myfunc1() # here we are not passing any parameter to the function
#since it is already declared as default in function definition
def myfunc1(argument):
"""
docstring goes here!
"""
return (argument *5)
x=myfunc1(6)
x
#30
def times_two(var):
return var*2
result = times_two(4)
result
# Instead of the code mentioned above, we can use a lambda function
lambda var: var*2
# <b> showing the usage of map with a named function
seq=[1,2,3,4,5]
list(map(times_two,seq))
# <b> lambda with only one argument
#x=15
list(map(lambda num:num*2,seq))
# <b> lambda functions can accept zero or more arguments but only one expression
#
f=lambda x, y: x*y
f(5,2)
# <b>Methods
# <b> String Upper and String Lower
st="hello i'm JEFF"
st.lower()
# <b> Split method
tweet="Go sports ! #cool"
#splits with white space. This is the default one
tweet.split()
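# `split` also takes an explicit delimiter, which is handy for grabbing the hashtag (a small illustrative example):

```python
tweet = "Go sports ! #cool"
print(tweet.split('#'))     # ['Go sports ! ', 'cool']
print(tweet.split('#')[1])  # cool
```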
|
Lesson2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # BERT-SQuAD Inference Example with AMD MIGraphX
# This tutorial shows how to run the BERT-SQuAD model with AMD MIGraphX.
# ## Requirements
# !pip3 install -r requirements_bertsquad.txt
# +
import numpy as np
import json
import time
import os.path
from os import path
import sys
import tokenizers
from run_onnx_squad import *
import migraphx
# -
# ## Download BERT ONNX file
# !wget -nc https://github.com/onnx/models/raw/master/text/machine_comprehension/bert-squad/model/bertsquad-10.onnx
# ## Download uncased file / vocabulary
# !apt-get install unzip
# !wget -q -nc https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
# !unzip -n uncased_L-12_H-768_A-12.zip
# ## Input data
input_file = 'inputs.json'
with open(input_file) as json_file:
test_data = json.load(json_file)
print(json.dumps(test_data, indent=2))
# # Configuration for inference
max_seq_length = 256
doc_stride = 128
max_query_length = 64
batch_size = 1
n_best_size = 20
max_answer_length = 30
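# `max_seq_length` and `doc_stride` control how long passages are chunked: the tokenized text is split into overlapping windows of at most `max_seq_length` tokens, advancing by `doc_stride` each time. An illustrative sketch of that windowing (not the actual `run_onnx_squad` code, which also reserves room for the query tokens):

```python
def sliding_windows(tokens, max_len, stride):
    """Yield overlapping token windows of at most max_len, advancing by stride."""
    start = 0
    while True:
        yield tokens[start:start + max_len]
        if start + max_len >= len(tokens):
            break
        start += stride

tokens = list(range(300))        # stand-in for a tokenized passage
spans = list(sliding_windows(tokens, max_len=256, stride=128))
print([len(s) for s in spans])   # [256, 172]: two overlapping windows
```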
# ## Read vocabulary file and tokenize
vocab_file = os.path.join('uncased_L-12_H-768_A-12', 'vocab.txt')
tokenizer = tokenizers.BertWordPieceTokenizer(vocab_file)
# ## Convert the example to features to input
# +
# preprocess input
predict_file = 'inputs.json'
# Use read_squad_examples method from run_onnx_squad to read the input file
eval_examples = read_squad_examples(input_file=predict_file)
# Use convert_examples_to_features method from run_onnx_squad to get parameters from the input
input_ids, input_mask, segment_ids, extra_data = convert_examples_to_features(
eval_examples, tokenizer, max_seq_length, doc_stride, max_query_length)
# -
# ## Compile with MIGraphX for GPU
# +
model = migraphx.parse_onnx("bertsquad-10.onnx")
model.compile(migraphx.get_target("gpu"))
#model.print()
model.get_parameter_names()
model.get_parameter_shapes()
# -
# ## Run the input through the model
# +
n = len(input_ids)
bs = batch_size
all_results = []
for idx in range(0, n):
item = eval_examples[idx]
print(item)
result = model.run({
"unique_ids_raw_output___9:0":
np.array([item.qas_id], dtype=np.int64),
"input_ids:0":
input_ids[idx:idx + bs],
"input_mask:0":
input_mask[idx:idx + bs],
"segment_ids:0":
segment_ids[idx:idx + bs]
})
in_batch = result[1].get_shape().lens()[0]
print(in_batch)
start_logits = [float(x) for x in result[1].tolist()]
end_logits = [float(x) for x in result[0].tolist()]
# print(start_logits)
# print(end_logits)
for i in range(0, in_batch):
unique_id = len(all_results)
all_results.append(
RawResult(unique_id=unique_id,
start_logits=start_logits,
end_logits=end_logits))
# -
# ## Get the predictions
# +
output_dir = 'predictions'
os.makedirs(output_dir, exist_ok=True)
output_prediction_file = os.path.join(output_dir, "predictions.json")
output_nbest_file = os.path.join(output_dir, "nbest_predictions.json")
write_predictions(eval_examples, extra_data, all_results, n_best_size,
max_answer_length, True, output_prediction_file,
output_nbest_file)
with open(output_prediction_file) as json_file:
test_data = json.load(json_file)
print(json.dumps(test_data, indent=2))
|
examples/python_bert_squad_example/BERT-Squad.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
for i in range(1,101):
if i%3==0 and i%5==0:
print("Dogcat")
elif i%3==0:
print("Dog")
elif i%5==0:
print("cat")
else:
print(i)
|
Problem Statement_forloop_dogcat.ipynb
|