# CHEM 1000 - Spring 2022
Prof. Geoffrey Hutchison, University of Pittsburgh
## 5 Scalar and Vector Operators
Chapter 5 in [*Mathematical Methods for Chemists*](http://sites.bu.edu/straub/mathematical-methods-for-molecular-science/)
By the end of this session, you should be able to:
- Understand the concept of vector-valued functions and vector fields
- Identify sinks, sources, and saddle points of vector fields
- Understand the concept of vector operators
- Understand the gradient and applications to chemistry (e.g., forces)
### Scalars vs. Vectors
Reminder...
**Scalars** are just numbers - they have a magnitude, a size. The mass of a molecule would be an example, e.g., 120 amu.
**Vectors** have both a magnitude and a direction:
- velocity $\mathbf{v}$
- acceleration $\mathbf{a}$
- force $\mathbf{F}$
- electric field $\mathbf{E}$
### Scalar Functions and Vector Functions
A **function** takes in a number, a vector, etc. and returns a number:
$$
\sin 0 = 0
$$
Notice that $\sin x$ is a scalar function. You give it something, and it returns a **scalar**.
By extension, there must be **vector functions** too - one that returns a vector for every point.
Right now, you're experiencing force due to gravity. If you stand up, the forces acting on your body change over time. (Consider if you go on a roller coaster or fly in an airplane.)
$$
\overrightarrow{\boldsymbol{F}}(t)
$$
Notice that time is a scalar - it's just a number. So a vector function returns a vector regardless of what the input is. The input might be one-dimensional (e.g., the time $t$ at which we feel a force) or 2D, 3D, etc. (e.g., the forces on a satellite in space - we probably care about the position of the satellite, but maybe also time).
### Vector Fields
When we have a vector function in 2D or 3D, we usually call these **vector fields**.
<div class="alert alert-block alert-success">
A **vector field** is a function that returns a vector for every (x, y) or (x, y, z) point.
</div>
It sounds abstract, but we're actually already familiar with the concept. Consider a weather map showing wind:
<img src='../images/wind-vectors.png' width="540" />
Depending on our location, the wind speed and vector will differ. Let's look at a [tropical cyclone (Hurricane
Sally)](https://en.wikipedia.org/wiki/Tropical_Storm_Sally_(2020)) in the Gulf of Mexico.
<img src='../images/hurricane.png' width="505" />
Obviously, the wind speed (magnitude) decreases the farther you are from the hurricane, and there's an *eye* in the center, where there's no wind at all. The vector direction also differs depending on where you are.
In chemistry, we encounter a range of vector functions and vector fields. For example:
- The force acting on a charge in an electric field. At each point in space, the force acting on the charge will have a specific magnitude and a direction.
- Fluid flow for which each element of the fluid at some point in space has a given speed and direction of flow.
There are a few key terms with vector fields:
- a **sink** is a point toward which all vectors flow inward
- a **source** is a point from which all vectors flow outward
- a **saddle point** has no net inward or outward flow (i.e., the inward and outward flows balance exactly)
<img src='../images/vector-field.png' />
(Image from [*Mathematical Methods for Chemists*](http://sites.bu.edu/straub/mathematical-methods-for-molecular-science/))
Here, the red point is a **source** and all the arrows point away from it (imagine the repulsive force a negative charge exerts on an electron).
The black point is a **sink** and all the arrows point towards it (e.g., a positive charge that will attract an electron).
The two "X" points are **saddle points**. Notice that they come inward along one direction and outward along another direction. (Think about a horse saddle that rises up to the neck and head and also to the tail, and slopes down along the sides.)
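One way to classify these points numerically is the sign of the field's divergence, $\partial F_x/\partial x + \partial F_y/\partial y$: positive at a source, negative at a sink, and (for simple saddles like the ones pictured) zero, with inflow and outflow canceling. Here's a quick sketch with `numpy` (which we also use for plotting below) for the field $\vec{F} = \frac{x}{5}\hat{\mathbf{x}} + \frac{y}{5}\hat{\mathbf{y}}$ from the next section; the variable names are our own:

```python
import numpy as np

# sample the field F = (x/5, y/5) on a grid around the origin
x = np.linspace(-1, 1, 21)
y = np.linspace(-1, 1, 21)
X, Y = np.meshgrid(x, y)
F_x = X / 5
F_y = Y / 5

# divergence = dFx/dx + dFy/dy, estimated with finite differences
dFx_dx = np.gradient(F_x, x, axis=1)  # x varies along columns in meshgrid's default layout
dFy_dy = np.gradient(F_y, y, axis=0)  # y varies along rows
divergence = dFx_dx + dFy_dy

# the field is linear, so the estimate is exact: 1/5 + 1/5 = 0.4 everywhere
print(divergence[10, 10])  # value at the origin (grid center)
```

The divergence is positive (2/5) everywhere, and the field itself vanishes at the origin, which marks the origin as a source.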
### Example Vector Field
Let's try plotting
$$
\vec{F}=\frac{x}{5} \hat{\mathbf{x}}+\frac{y}{5} \hat{\mathbf{y}}
$$
We'll use `numpy` and `matplotlib`. The code is a bit different because we're creating a "quiver plot".
```
# Let's plot some vector fields with numpy and matplotlib
# this is just our normal 'import numpy and matplotlib' code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use('./chem1000.mplstyle')
# Example adapted from:
# https://pythonforundergradengineers.com/quiver-plot-with-matplotlib-and-jupyter-notebooks.html
# we're going to create a set of points from -5 to +5
# along x and y axes
x = np.arange(-5,6,1) # remember that np.arange() doesn't include the end value
y = np.arange(-5,6,1)
# remember, this takes the numpy arrays above and makes a mesh
X, Y = np.meshgrid(x, y)
# here's how we express the function
F_x = X/5 # x component
F_y = Y/5 # y component
# here's how we create a "quiver plot" with matplotlib
# if you want to know more, please ask
fig, ax = plt.subplots() # create a figure
ax.quiver(X,Y,F_x,F_y) # axis.quiver( X, Y mesh, Func X, Func Y )
# We'll go from -6 to +6 on each axis
# to see the arrows
ax.axis([-6, 6, -6, 6])
ax.set_aspect('equal') # make sure it's exactly square
plt.show()
```
<div class="alert alert-block alert-info">
**In the plot above, what kind of a point is at the origin?**
</div>
Let's try another example. This time, we'll plot:
$$
\vec{F}=\frac{x}{5} \hat{\mathbf{x}} - \frac{y}{5} \hat{\mathbf{y}}
$$
```
# here's how we express the function
F_x = +X/5 # x component
F_y = -Y/5 # y component
fig, ax = plt.subplots() # create a figure
ax.quiver(X,Y,F_x,F_y) # axis.quiver( X, Y mesh, Func X, Func Y )
# We'll go from -6 to +6 on each axis
# to see the arrows
ax.axis([-6, 6, -6, 6])
ax.set_aspect('equal') # make sure it's exactly square
plt.show()
```
<div class="alert alert-block alert-info">
**Now what kind of point is at the origin?**
</div>
### Gradients
Perhaps one of the most useful vector functions / vector fields in chemistry comes from the **gradient** of a scalar function.
<div class="alert alert-block alert-success">
The **gradient** operator in 2D Cartesian coordinates (x, y) is
$$
\boldsymbol{\nabla} \equiv \hat{\mathbf{x}} \frac{\partial}{\partial x}+\hat{\mathbf{y}} \frac{\partial}{\partial y}
$$
</div>
We use $\boldsymbol{\nabla}$ as a short-hand for the gradient operator, no matter what the coordinate system is. Particularly in polar or spherical coordinates, the expression can get complicated.
So what does it do?
$$
\boldsymbol{\nabla} V(x, y) = \left(\hat{\mathbf{x}} \frac{\partial}{\partial x}+\hat{\mathbf{y}} \frac{\partial}{\partial y}\right) V(x, y)
$$
At every point (x,y) we take the partial derivative of $V(x,y)$ (a scalar function) with respect to x and y, and use those as the x-component and y-component of a vector.
In other words, the gradient operator returns a **vector** from a **scalar** function.
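As a small sketch with `sympy` (which we'll use again in the example problem below), take the hypothetical potential $V(x, y) = x^2 y$:

```python
import sympy as sp

x, y = sp.symbols('x y')
V = x**2 * y  # a scalar function of (x, y)

# the gradient collects the partial derivatives as vector components
grad_V = (sp.diff(V, x), sp.diff(V, y))
print(grad_V)  # (2*x*y, x**2)
```

The scalar input $V$ produces a two-component vector, exactly as described above.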
### Laplace Operator
<div class="alert alert-block alert-success">
The **Laplace** operator (sometimes called the Laplacian) in 2D Cartesian coordinates (x, y) is
$$
\nabla^{2} \equiv \frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}
$$
</div>
We use $\nabla^2$ as a short-hand for the Laplace operator, no matter what coordinate system we use. At every point, we take the second partial derivatives of a function, and add the components.
The Laplace operator takes a scalar function and returns a new scalar function.
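A quick sympy check, using $V(x, y) = x^2 + y^2$ as a hypothetical example:

```python
import sympy as sp

x, y = sp.symbols('x y')
V = x**2 + y**2

# Laplacian: the sum of the second partial derivatives
laplacian_V = sp.diff(V, x, 2) + sp.diff(V, y, 2)
print(laplacian_V)  # 4
```

A scalar function goes in, and a new scalar function (here just the constant 4) comes out.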
### Potential Energy and Forces
Consider the interaction of two atoms according to the [Lennard-Jones potential](https://en.wikipedia.org/wiki/Lennard-Jones_potential):
$$
V(r)=4 \varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right]
$$
where $r$ is the distance between the two atoms, $\sigma$ represents the atomic diameter (or the sum of the two atomic radii), and $\varepsilon$ represents the binding energy (the depth of the potential well). In this case, we know that there will be a minimum energy at:
$$
r_{\min }=\sqrt[6]{2} \sigma
$$
Image from [*Mathematical Methods for Chemists*](http://sites.bu.edu/straub/mathematical-methods-for-molecular-science/):
<img src="../images/lennard-jones.png" />
Now the force will depend on the potential energy:
$$
\mathbf{F}=-\left[\hat{\mathbf{x}} \frac{\partial}{\partial x}+\hat{\mathbf{y}} \frac{\partial}{\partial y}+\hat{\mathbf{z}} \frac{\partial}{\partial z}\right] V(x, y, z)=-\nabla V(x, y, z)
$$
Put simply, the force is the negative gradient of the potential energy!
$$
\mathbf{F}=-\boldsymbol{\nabla} V
$$
We can even work this one out by hand - it's one-dimensional in $\hat{\mathbf{r}}$:
$$
\mathbf{F}(r)=-\frac{d V(r)}{d r} \hat{\mathbf{r}}=4 \varepsilon\left[12\left(\frac{\sigma^{12}}{r^{13}}\right)-6\left(\frac{\sigma^6}{r^7}\right)\right] \hat{\mathbf{r}}
$$
So when $r < r_{min}$, the repulsive term (falling off as $1/r^{13}$, from the $1/r^{12}$ part of the potential) dominates the total force and pushes in the positive direction, while at distances $r > r_{min}$, the attractive term (falling off as $1/r^{7}$) is dominant and pulls in the negative direction.
<div class="alert alert-block alert-info">
Notice that the forces on the atom always aim to minimize the energy:
- If the two atoms are close together, the repulsive force pushes in a direction towards $r_{min}$.
- If the two atoms are far apart, the attractive force pulls in a direction towards $r_{min}$.
</div>
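We can also check this symbolically: the force derived above should vanish exactly at $r_{\min} = \sqrt[6]{2}\,\sigma$. Here's a sketch with sympy (the symbol names are our own choice):

```python
import sympy as sp

r, sigma, epsilon = sp.symbols('r sigma epsilon', positive=True)
V = 4 * epsilon * ((sigma / r)**12 - (sigma / r)**6)

# force = negative gradient; one-dimensional here, so just -dV/dr
F = -sp.diff(V, r)

# at the minimum-energy separation, the force should be exactly zero
F_at_min = sp.simplify(F.subs(r, 2**sp.Rational(1, 6) * sigma))
print(F_at_min)  # 0
```

At $r_{min}$ the repulsive and attractive terms cancel exactly, so the atoms feel no net force.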
While the Lennard-Jones potential has a very simple one-dimensional form (e.g., one atom is at the origin and the other is some distance $r$ away), the concept of the gradient and forces applies regardless of the potential energy.
If we have some method to calculate the potential energy (e.g., quantum chemistry, etc.) we can take the gradient, get the forces on the atoms, and move them accordingly to get a minimum energy.
<img src="../images/atom-forces.png" width="341" />
In this case, we can see that the central carbon-carbon bond is too short and the carbon atoms are pulling apart, resulting in the hydrogen atoms moving in different directions.
Eventually as we repeat the process (find the gradient and forces, move the atoms a bit, re-calculate), we can minimize the potential energy and find an optimized geometry.
**Example problem:**
For a scalar potential energy, $V(x, y, z)=x^{2}+y^{2}+z^{2}$, derive the force defined as the negative gradient of the potential.
```
from sympy import init_session
init_session()
V = x**2 + y**2 + z**2
# get the x-component
diff(V, x)
# the y-component
diff(V, y)
# the z-component
diff(V, z)
```
Okay, we probably could have done that by inspection.
$$
\boldsymbol{\nabla} V(x,y,z) = 2x \hat{\mathbf{x}} + 2y \hat{\mathbf{y}} + 2z \hat{\mathbf{z}}
$$
<div class="alert alert-block alert-info">
**If that's the gradient, what are the forces?**
</div>
### Gradient in Spherical Coordinates
One last thing... So far, we've expressed the gradient in 2D or 3D Cartesian coordinates, and it's not too bad.
In spherical coordinates, it's a little messier:
$$
\boldsymbol{\nabla} V=\hat{\mathbf{r}} \frac{\partial V}{\partial r}+\hat{\boldsymbol{\theta}} \frac{1}{r} \frac{\partial V}{\partial \theta}+\hat{\boldsymbol{\varphi}} \frac{1}{r \sin \theta} \frac{\partial V}{\partial \varphi}
$$
The Laplace operator is similarly messy:
$$
\nabla^{2} V=\frac{1}{r^{2}} \frac{\partial}{\partial r}\left(r^{2} \frac{\partial V}{\partial r}\right)+\frac{1}{r^{2} \sin \theta} \frac{\partial}{\partial \theta}\left(\sin \theta \frac{\partial V}{\partial \theta}\right)+\frac{1}{r^{2} \sin ^{2} \theta} \frac{\partial^{2} V}{\partial \varphi^{2}}
$$
We'll come back to these as we need them, but it's a nice illustration of why we use operators. We can write a short symbol and it represents a longer, messier operator.
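As a quick illustration with sympy, a Coulomb-like potential $V = 1/r$ (a hypothetical example, independent of $\theta$ and $\varphi$) has zero Laplacian away from the origin:

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
V = 1 / r  # depends only on r

# spherical Laplacian, written term by term from the formula above
lap = (sp.diff(r**2 * sp.diff(V, r), r) / r**2
       + sp.diff(sp.sin(theta) * sp.diff(V, theta), theta) / (r**2 * sp.sin(theta))
       + sp.diff(V, phi, 2) / (r**2 * sp.sin(theta)**2))
print(sp.simplify(lap))  # 0
```

This is the familiar result that $1/r$ is harmonic away from the origin, a fact that returns when we treat the hydrogen atom.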
-------
This notebook is from Prof. Geoffrey Hutchison, University of Pittsburgh
https://github.com/ghutchis/chem1000
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a>
### Overview
This notebook is tested using the SageMaker `Studio SparkMagic - PySpark Kernel`. Please ensure that you see `PySpark (SparkMagic)` in the top right of your notebook.
This notebook does the following:
* Demonstrates how you can visually connect the Amazon SageMaker Studio Sparkmagic kernel to an EMR cluster
* Explores and queries data from a Hive table
* Uses the data locally
* Provides resources that demonstrate how to use the local data for ML, including with SageMaker Processing
----------
When using PySpark kernel notebooks, there is no need to create a SparkContext or a HiveContext; those are all created for you automatically when you run the first code cell, and you'll be able to see the progress printed. The contexts are created with the following variable names:
- SparkContext (sc)
- HiveContext (sqlContext)
----------
### PySpark magics
The PySpark kernel provides some predefined "magics", which are special commands that you can call with `%%` (e.g., `%%MAGIC <args>`). The magic command must be the first word in a code cell and allows for multiple lines of content. You can't put comments before a cell magic.
For more information on magics, see [here](http://ipython.readthedocs.org/en/stable/interactive/magics.html).
#### Running locally (%%local)
You can use the `%%local` magic to run your code locally on the Jupyter server without going to Spark. When you use `%%local`, all subsequent lines in the cell are executed locally. The code in the cell must be valid Python code.
```
%%local
print("Demo Notebook")
```
### Connection to EMR Cluster
In the cell below, the code block is autogenerated. You can generate this code by clicking the "Cluster" link at the top of the notebook and selecting the EMR cluster. The "j-xxxxxxxxxxxx" is the cluster ID of the selected cluster.
For this workshop, we use a no-auth cluster for simplicity, but this works equally well with Kerberos, LDAP, and HTTP auth mechanisms.
```
# %load_ext sagemaker_studio_analytics_extension.magics
# %sm_analytics emr connect --cluster_id j-xxxxxxxxxxxx --auth-type None
```
Next, we will query the `movie_reviews` table and get the data into a Spark DataFrame. You can visualize the data from the remote cluster locally in the notebook.
```
from pyspark.sql.functions import regexp_replace, col, concat, lit
movie_reviews = sqlContext.sql("select * from movie_reviews").cache()
movie_reviews = movie_reviews.where(col('sentiment') != "sentiment")  # drop header rows that were read in as data
```
Using the SageMaker Studio sparkmagic kernel, you can train machine learning models in the Spark cluster using the *SageMaker Spark library*. SageMaker Spark is an open source Spark library for Amazon SageMaker. For examples,
see [here](https://github.com/aws/sagemaker-spark#example-using-sagemaker-spark-with-any-sagemaker-algorithm).
In this notebook, however, we will use SageMaker Experiments, a Trial, and an Estimator to train a model and deploy it to a SageMaker real-time hosted endpoint.
In the next cell, we will install the necessary libraries
```
%%local
%pip install -q sagemaker-experiments
```
Next, we will import libraries and set global definitions
```
%%local
import sagemaker
import boto3
import botocore
from botocore.exceptions import ClientError
from time import strftime, gmtime
import json
from sagemaker import get_execution_role
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial
%%local
sess = sagemaker.Session()
bucket = sess.default_bucket()
train_bucket = f"s3://{bucket}/reviews/train"
val_bucket = f"s3://{bucket}/reviews/val"
```
Send the following variables to Spark. (Each `%%send_to_spark` magic must be the first line of its own cell.)
```
%%send_to_spark -i train_bucket -t str -n train_bucket
%%send_to_spark -i val_bucket -t str -n val_bucket
val_bucket
```
### Pre-process data and feature engineering
```
from pyspark.sql.functions import regexp_replace, col, concat, lit
movie_reviews = movie_reviews.withColumn('sentiment', regexp_replace('sentiment', 'positive', '__label__positive'))
movie_reviews = movie_reviews.withColumn('sentiment', regexp_replace('sentiment', 'negative', '__label__negative'))
# Remove all the special characters
movie_reviews = movie_reviews.withColumn('review', regexp_replace('review', r"\W", " "))
# Remove all single characters
movie_reviews = movie_reviews.withColumn('review', regexp_replace('review', r"\s+[a-zA-Z]\s+", " "))
# Remove single characters from the start
movie_reviews = movie_reviews.withColumn('review', regexp_replace('review', r"^[a-zA-Z]\s+", " "))
# Substituting multiple spaces with single space
movie_reviews = movie_reviews.withColumn('review', regexp_replace('review', r"\s+", " "))
# Removing prefixed 'b'
movie_reviews = movie_reviews.withColumn('review', regexp_replace('review', r"^b\s+", " "))
movie_reviews.show()
# Merge columns for BlazingText input format:
# https://docs.aws.amazon.com/sagemaker/latest/dg/blazingtext.html
movie_reviews = movie_reviews.select(concat(col("sentiment"), lit(" "), col("review")).alias("record"))
movie_reviews.show()
# Set flag so that _SUCCESS meta files are not written to S3
spark.conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
train_df, val_df = movie_reviews.randomSplit([0.8, 0.2], seed=42)
train_df.coalesce(1).write.csv(train_bucket, mode='overwrite')
val_df.coalesce(1).write.csv(val_bucket, mode='overwrite')
print(train_bucket)
print(val_bucket)
%%local
instance_type_smtraining="ml.m5.xlarge"
instance_type_smendpoint="ml.m5.xlarge"
%%local
prefix = 'blazingtext/supervised'
output_location = 's3://{}/{}/output'.format(bucket, prefix)
print(train_bucket)
print(val_bucket)
print(output_location)
```
### Train a SageMaker model
#### Amazon SageMaker Experiments
Amazon SageMaker Experiments allows us to keep track of model training; organize related models together; and log model configuration, parameters, and metrics to reproduce and iterate on previous models and compare models.
Let's create the experiment, trial, and train the model. To reduce cost, the training code below has a variable to utilize spot instances.
```
%%local
import boto3
region_name = boto3.Session().region_name
sm_session = sagemaker.session.Session()
create_date = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
sentiment_experiment = Experiment.create(experiment_name="sentimentdetection-{}".format(create_date),
description="Detect sentiment in text",
sagemaker_boto_client=boto3.client('sagemaker'))
trial = Trial.create(trial_name="sentiment-trial-blazingtext-{}".format(strftime("%Y-%m-%d-%H-%M-%S", gmtime())),
experiment_name=sentiment_experiment.experiment_name,
sagemaker_boto_client=boto3.client('sagemaker'))
container = sagemaker.amazon.amazon_estimator.get_image_uri(region_name, "blazingtext", "latest")
print('Using SageMaker BlazingText container: {} ({})'.format(container, region_name))
%%local
train_use_spot_instances = False
train_max_run=3600
train_max_wait = 3600 if train_use_spot_instances else None
bt_model = sagemaker.estimator.Estimator(container,
role=sagemaker.get_execution_role(),
instance_count=1,
instance_type=instance_type_smtraining,
volume_size = 30,
input_mode= 'File',
output_path=output_location,
sagemaker_session=sm_session,
use_spot_instances=train_use_spot_instances,
max_run=train_max_run,
max_wait=train_max_wait)
%%local
bt_model.set_hyperparameters(mode="supervised",
epochs=10,
min_count=2,
learning_rate=0.005328,
vector_dim=286,
early_stopping=True,
patience=4,
min_epochs=5,
word_ngrams=2)
%%local
train_data = sagemaker.inputs.TrainingInput(train_bucket, distribution='FullyReplicated',
content_type='text/plain', s3_data_type='S3Prefix')
validation_data = sagemaker.inputs.TrainingInput(val_bucket, distribution='FullyReplicated',
content_type='text/plain', s3_data_type='S3Prefix')
data_channels = {'train': train_data, 'validation': validation_data}
%%local
%%time
bt_model.fit(data_channels,
experiment_config={
"ExperimentName": sentiment_experiment.experiment_name,
"TrialName": trial.trial_name,
"TrialComponentDisplayName": "BlazingText-Training",
},
logs=False)
```
### Deploy the model and get predictions
```
%%local
from sagemaker.serializers import JSONSerializer
text_classifier = bt_model.deploy(initial_instance_count = 1, instance_type = instance_type_smendpoint, serializer=JSONSerializer())
%%local
import json
review = ["please give this one a miss br br kristy swanson and the rest of the cast "
          "rendered terrible performances the show is flat flat flat br br "
          "i don't know how michael madison could have allowed this one on his plate "
          "he almost seemed to know this wasn't going to work out "
          "and his performance was quite lacklustre so all you madison fans give this a miss"]
payload = {"instances" : review}
output = json.loads(text_classifier.predict(payload).decode('utf-8'))
classification = output[0]['label'][0].split('__')[-1]
print("Sentiment:", classification.upper())
```
### Clean up
```
%%local
# Delete endpoint
text_classifier.delete_endpoint()
%%cleanup -f
```
# 1. Transforming Data with dplyr
Learn verbs you can use to transform your data, including `select()`, `filter()`, `arrange()`, and `mutate()`. You'll use these functions to modify the counties dataset to view particular observations and answer questions about the data.
## The counties dataset
This particular dataset is from the 2015 United States Census.
### Chapter 1 verbs
- `select()`
- `filter()`
- `arrange()`
- `mutate()`
```
library(dplyr)
counties <- readRDS("counties.rds")
head(counties)
```
- Select the following four columns from the `counties` variable:
`state`, `county`, `population`, `poverty`
```
counties %>%
# Select the columns
select(state, county, population, poverty)
```
---
## The filter and arrange verbs
```
counties_selected_scpu <- counties %>%
select(state, county, population, unemployment)
```
### Arrange
```
counties_selected_scpu %>%
arrange(population)
```
### Arrange: descending
```
counties_selected_scpu %>%
arrange(desc(population))
```
### Filter
```
counties_selected_scpu %>%
arrange(desc(population)) %>%
filter(state == 'New York')
counties_selected_scpu %>%
arrange(desc(population)) %>%
filter(unemployment < 6)
```
### Combining conditions
```
counties_selected_scpu %>%
arrange(desc(population)) %>%
filter(state == "New York", unemployment < 6)
```
## Exercise
```
# Arranging observations
counties_selected <- counties %>%
select(state, county, population, private_work, public_work, self_employed)
counties_selected %>%
# Add a verb to sort in descending order of public_work
arrange(desc(public_work))
# Filtering for conditions
counties_selected <- counties %>%
select(state, county, population)
# Filter for counties with a population above 1000000
counties_selected %>%
filter(population > 1000000)
# Filter for the counties in the state of California with a population above 1000000
counties_selected %>%
filter(state == "California", population > 1000000)
# Filtering and arranging
counties_selected <- counties %>%
select(state, county, population, private_work, public_work, self_employed)
# Filter for Texas and more than 10000 people; sort in descending order of private_work
counties_selected %>%
# Filter for Texas and more than 10000 people
filter(state == "Texas", population > 10000) %>%
# Sort in descending order of private_work
arrange(desc(private_work))
```
---
## Mutate
The `mutate()` verb adds new variables or changes existing variables.
```
counties_selected_scpu %>%
mutate(unemployed_population = population * unemployment / 100) %>%
arrange(desc(unemployed_population))
```
## Exercise
```
# Calculating the number of government employees
counties_selected <- counties %>%
select(state, county, population, public_work)
counties_selected %>%
# Add a new column public_workers with the number of people employed in public work
mutate(public_workers = population * public_work / 100) %>%
# Sort in descending order of the public_workers column
arrange(desc(public_workers))
# Calculating the percentage of women in a county
counties_selected <- counties %>%
# Select the columns state, county, population, men, and women
select(state, county, population, men, women)
counties_selected %>%
# Calculate proportion_women as the fraction of the population made up of women
mutate(proportion_women = women / population)
# Select, mutate, filter, and arrange
counties %>%
select(state, county, population, men, women) %>%
# Add the proportion_men variable
mutate(proportion_men = men / population) %>%
# Filter for population of at least 10,000
filter(population >= 10000) %>%
# Arrange proportion of men in descending order
arrange(desc(proportion_men))
```
*Notice Sussex County in Virginia is more than two thirds male: this is because of two men's prisons in the county.*
---
---
# 2. Aggregating Data
Now that you know how to transform your data, you'll want to know more about how to aggregate your data to make it more interpretable. You'll learn a number of functions you can use to take many observations in your data and summarize them, including count, group_by, summarize, ungroup, and top_n.
## The count verb
```
counties %>%
count()
```
### Count variable
```
counties %>%
count(state)
```
### Count and sort
```
counties %>%
count(state, sort = TRUE)
```
### Count population/Add weight
Add the argument `wt`, which stands for 'weight'.
```
counties %>%
count(state, wt = population, sort = TRUE)
```
## Exercise
```
# Counting by region
counties_selected <- counties %>%
select(county, region, state, population, citizens)
# Use count to find the number of counties in each region
counties_selected %>%
count(region, sort = TRUE)
# Counting citizens by state
# Find number of counties per state, weighted by citizens, sorted in descending order
counties_selected %>%
count(state, wt = citizens, sort = TRUE)
```
*California is the state with the most citizens.*
```
# Mutating and counting
counties_selected <- counties %>%
select(county, region, state, population, walk)
counties_selected %>%
# Add population_walk containing the total number of people who walk to work
mutate(population_walk = population * walk / 100) %>%
# Count weighted by the new column, sort in descending order
count(state, wt = population_walk, sort = TRUE)
```
*While California had the largest total population, New York state has the largest number of people who walk to work.*
---
## The group by, summarize and ungroup verbs
`count()` is a special case of a more general pair of verbs: `group_by()` and `summarize()`.
### Summarize
The summarize verb takes many observations and turns them into one observation.
```
counties %>%
summarize(total_population = sum(population))
```
### Aggregate and summarize
```
counties %>%
summarize(total_population = sum(population),
average_unemployment = mean(unemployment))
```
### Summary functions
- `sum()`
- `mean()`
- `median()`
- `min()`
- `max()`
- `n()` : the size of the group
### Aggregate within groups + Arrange
```
counties %>%
group_by(state) %>%
summarize(total_pop = sum(population),
average_unemployment = mean(unemployment)) %>%
arrange(desc(average_unemployment))
```
### Metro column & Group by
```
counties %>%
group_by(state, metro) %>%
summarize(total_pop = sum(population))
```
### Ungroup
If you don't want to keep `state` as a group, add another dplyr verb: `ungroup()`.
```
counties %>%
group_by(state, metro) %>%
summarize(total_pop = sum(population)) %>%
ungroup()
```
## Exercise
```
# Summarizing
counties_selected <- counties %>%
select(county, population, income, unemployment)
# Summarize to find minimum population, maximum unemployment, and average income
counties_selected %>%
summarize(min_population = min(population),
max_unemployment = max(unemployment),
average_income = mean(income))
# Summarizing by state
counties_selected <- counties %>%
select(state, county, population, land_area)
counties_selected %>%
# Group by state
group_by(state) %>%
# Find the total area and population
summarise(total_area = sum(land_area),
total_population = sum(population)) %>%
# Add a density column
mutate(density = total_population / total_area) %>%
# Sort by density in descending order
arrange(desc(density))
```
*Looks like New Jersey and Rhode Island are the “most crowded” of the US states, with more than a thousand people per square mile.*
```
# Summarizing by state and region
counties_selected <- counties %>%
select(region, state, county, population)
counties_selected %>%
# Group and summarize to find the total population
group_by(region, state) %>%
summarize(total_pop = sum(population)) %>%
# Calculate the average_pop and median_pop columns
summarize(average_pop = mean(total_pop), median_pop = median(total_pop))
```
*It looks like the South has the highest* `average_pop` *of 7370486, while the North Central region has the highest* `median_pop` *of 5580644.*
---
## The top_n verb
dplyr's `top_n()` is very useful for keeping the most extreme observations from each group.
### top_n
Like `summarize()`, `top_n()` operates on a grouped table. The function takes two arguments: the number of observations you want from each group, and the column you want to weight by.
For example, `group_by(state)` and then `top_n(1, population)` would find the county with the highest population in each state.
```
counties_selected <- counties %>%
select(state, county, population, unemployment, income)
counties_selected %>%
group_by(state) %>%
top_n(1, population)
```
Jefferson is the highest-population county in Alabama, with a population of 659 thousand. Notice that the other columns in this table are kept - in this case, unemployment and income.
### Highest unemployment
```
counties_selected %>%
group_by(state) %>%
top_n(1, unemployment)
```
### Number of observations
```
counties_selected %>%
group_by(state) %>%
top_n(3, unemployment)
```
`top_n` is often used when creating graphs.
## Exercise
```
# Selecting a county from each region
counties_selected <- counties %>%
select(region, state, county, metro, population, walk)
counties_selected %>%
# Group by region
group_by(region) %>%
# Find the greatest number of citizens who walk to work
top_n(1, walk)
```
*Notice that three of the places where lots of people walk to work are low-population nonmetro counties, but that New York City also pops up.*
```
# Finding the highest-income state in each region
counties_selected <- counties %>%
select(region, state, county, population, income)
counties_selected %>%
group_by(region, state) %>%
# Calculate average income
summarise(average_income = mean(income)) %>%
# Find the highest income state in each region
top_n(1, average_income)
```
*New Jersey in the Northeast is the state with the highest* `average_income` *of 73014.*
```
# Using summarize, top_n, and count together
counties_selected <- counties %>%
select(state, metro, population)
counties_selected %>%
# Find the total population for each combination of state and metro
group_by(state, metro) %>%
summarise(total_pop = sum(population)) %>%
# Extract the most populated row for each state
top_n(1, total_pop)
counties_selected %>%
# Find the total population for each combination of state and metro
group_by(state, metro) %>%
summarize(total_pop = sum(population)) %>%
# Extract the most populated row for each state
top_n(1, total_pop) %>%
# Count the states with more people in Metro or Nonmetro areas
ungroup() %>%
count(metro)
```
*Notice that 44 states have more people living in Metro areas, and 6 states have more people living in Nonmetro areas.*
# Train a VAE on L1000 Data
```
import sys
import pathlib
import numpy as np
import pandas as pd
sys.path.insert(0, "../../scripts")
from utils import load_data, infer_L1000_features
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
from sklearn.decomposition import PCA
from tensorflow import keras
from vae import VAE
from tensorflow.keras.models import Model, Sequential
import seaborn
import tensorflow as tf
def remove_moa(df):
pipes = ['CDK inhibitor|glycogen synthase kinase inhibitor',
'AKT inhibitor|mTOR inhibitor',
'EGFR inhibitor|protein tyrosine kinase inhibitor',
'benzodiazepine receptor agonist|HDAC inhibitor',
'dihydroorotate dehydrogenase inhibitor|PDGFR tyrosine kinase receptor inhibitor']
moas = []
for pipe in pipes:
moas.append(pipe)
moas.append(pipe.split('|')[0])
moas.append(pipe.split('|')[1])
return df[~df.moa.isin(moas)]
data_splits = ["train", "valid", "test", "complete"]
data_dict = load_data(data_splits, dataset="L1000")
# Prepare data for training
meta_features = infer_L1000_features(data_dict["train"], metadata=True)
profile_features = infer_L1000_features(data_dict["train"])
moa_map_path = "../3.application/repurposing_info_external_moa_map_resolved.tsv"
moa_df_train = (
    pd.read_csv(moa_map_path, sep='\t')
    .set_index('broad_id')
    .reindex(index=data_dict['train']['pert_id'])
    .reset_index()
    .drop('pert_id', axis=1)
)
data_dict['train'] = pd.concat([moa_df_train, data_dict['train']], axis=1)
moa_df_valid = (
    pd.read_csv(moa_map_path, sep='\t')
    .set_index('broad_id')
    .reindex(index=data_dict['valid']['pert_id'])
    .reset_index()
    .drop('pert_id', axis=1)
)
data_dict['valid'] = pd.concat([moa_df_valid, data_dict['valid']], axis=1)
data_dict['train'] = remove_moa(data_dict['train'])
data_dict['valid'] = remove_moa(data_dict['valid'])
train_features_df = data_dict["train"].reindex(profile_features, axis="columns")
train_meta_df = data_dict["train"].reindex(meta_features, axis="columns")
test_features_df = data_dict["test"].reindex(profile_features, axis="columns")
test_meta_df = data_dict["test"].reindex(meta_features, axis="columns")
valid_features_df = data_dict["valid"].reindex(profile_features, axis="columns")
valid_meta_df = data_dict["valid"].reindex(meta_features, axis="columns")
complete_features_df = data_dict["complete"].reindex(profile_features, axis="columns")
complete_meta_df = data_dict["complete"].reindex(meta_features, axis="columns")
print(train_features_df.shape)
train_features_df.head(3)
print(valid_features_df.shape)
valid_features_df.head(3)
print(test_features_df.shape)
test_features_df.head(3)
print(complete_features_df.shape)
complete_features_df.head(3)
encoder_architecture = [500]
decoder_architecture = [500]
L1000_vae = VAE(
input_dim=train_features_df.shape[1],
latent_dim=65,
batch_size=512,
encoder_batch_norm=True,
epochs=180,
learning_rate=0.001,
encoder_architecture=encoder_architecture,
decoder_architecture=decoder_architecture,
beta=1,
verbose=True,
)
L1000_vae.compile_vae()
L1000_vae.train(x_train=train_features_df, x_test=valid_features_df)
L1000_vae.vae
# Save training performance
history_df = pd.DataFrame(L1000_vae.vae.history.history)
history_df
history_df.to_csv('twolayer_training_vanilla_leaveOut.csv')
plt.figure(figsize=(10, 5))
plt.plot(history_df["loss"], label="Training data")
plt.plot(history_df["val_loss"], label="Validation data")
plt.title("Loss for VAE training on L1000 data")
plt.ylabel("Loss (MSE + KL divergence)")
plt.xlabel("No. Epoch")
plt.legend()
plt.show()
# evaluating performance using test set
L1000_vae.vae.evaluate(test_features_df)
reconstruction = pd.DataFrame(
L1000_vae.vae.predict(test_features_df), columns=profile_features
)
(sum(sum((np.array(test_features_df) - np.array(reconstruction)) ** 2))) ** 0.5
# latent space heatmap
fig, ax = plt.subplots(figsize=(10, 10))
encoder = L1000_vae.encoder_block["encoder"]
latent = np.array(encoder.predict(test_features_df)[2])
seaborn.heatmap(latent, ax=ax)
reconstruction = pd.DataFrame(
L1000_vae.vae.predict(test_features_df), columns=profile_features
)
pca = PCA(n_components=2).fit(test_features_df)
pca_reconstructed_latent_df = pd.DataFrame(pca.transform(reconstruction))
pca_test_latent_df = pd.DataFrame(pca.transform(test_features_df))
figure(figsize=(10, 10), dpi=80)
plt.scatter(pca_test_latent_df[0],pca_test_latent_df[1], marker = ".", alpha = 0.5)
plt.scatter(pca_reconstructed_latent_df[0],pca_reconstructed_latent_df[1], marker = ".", alpha = 0.5)
decoder = L1000_vae.decoder_block["decoder"]
pca_training = PCA(n_components=2).fit(train_features_df)
simulated_df = pd.DataFrame(np.random.normal(size=(94440, 65)), columns=np.arange(0,65))
reconstruction_of_simulated = decoder.predict(simulated_df)
pca_reconstruction_of_simulated = pd.DataFrame(pca_training.transform(reconstruction_of_simulated))
pca_train_latent_df = pd.DataFrame(pca_training.transform(train_features_df))
fig, (ax1,ax2) = plt.subplots(1, 2, figsize=(16,8), sharey = True, sharex = True)
ax1.scatter(pca_train_latent_df[0],pca_train_latent_df[1], marker = ".", alpha = 0.5)
ax2.scatter(pca_reconstruction_of_simulated[0],pca_reconstruction_of_simulated[1], marker = ".", alpha = 0.5)
from scipy.spatial.distance import directed_hausdorff
max(directed_hausdorff(reconstruction_of_simulated, train_features_df)[0],directed_hausdorff(train_features_df,reconstruction_of_simulated)[0])
#NOTE: IF YOU RUN THIS, YOU WILL NOT BE ABLE TO REPRODUCE THE EXACT RESULTS IN THE EXPERIMENT
latent_complete = np.array(encoder.predict(complete_features_df)[2])
latent_df = pd.DataFrame(latent_complete)
latent_df.to_csv("../3.application/latentTwoLayer_vanilla_leaveOut.csv")
#NOTE: IF YOU RUN THIS, YOU WILL NOT BE ABLE TO REPRODUCE THE EXACT RESULTS IN THE EXPERIMENT
decoder.save('./models/L1000twolayerDecoder_vanilla_leaveOut')
#NOTE: IF YOU RUN THIS, YOU WILL NOT BE ABLE TO REPRODUCE THE EXACT RESULTS IN THE EXPERIMENT
encoder.save('./models/L1000twolayerEncoder_vanilla_leaveOut')
```
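One detail worth unpacking from the cell above: `scipy.spatial.distance.directed_hausdorff` is asymmetric, so the code takes the `max` of the two directions to obtain the symmetric Hausdorff distance between the simulated reconstructions and the training set. A toy sketch on synthetic points (not the L1000 data):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [3.0, 0.0]])

# For each point of the first set, take the distance to its nearest
# neighbor in the second set; directed_hausdorff returns the max of those.
d_ab = directed_hausdorff(a, b)[0]  # 1.0
d_ba = directed_hausdorff(b, a)[0]  # 2.0

hausdorff = max(d_ab, d_ba)
print(hausdorff)  # 2.0
```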
# Pandas Tips & Tricks & More
### Hello Kaggler!
### <span style="color:PURPLE">Objective of this kernel is to demonstrate the most commonly used</span> <span style="color:red">Pandas Tips & Tricks and More</span>.
# Contents
Note : Please use the links below to navigate the notebook
1. [Check Package Version](#CheckPackageVersion)
1. [Ignore Warnings](#IgnoreWarnings)
1. Pandas Basics
1. [Data Read and Peak](#ReadandPeak)
1. [Shape, Columns](#ShapeColumns)
1. [pandas series to pandas dataframe](#series2df)
1. [Query Data Type](#QueryDataType)
1. [Columns With Missing Values as a List](#ColumnsWithMissingValues)
1. [Columns of object Data Type (Categorical Columns) as a List](#CatColumns)
1. [Columns that Contain Numeric Values as a List](#NumColumns)
1. [Categorical Columns with Cardinality less than N](#CatColsCar)
1. [Count of Unique Values in a Column](#UniqueCount)
1. [Select Columns Based on Data Types](#DTypeColSelect)
1. [Check whether data is ready for training](#checkDataBeforeTraining)
1. [Subplotting in Notebooks](#SublottinginNotebooks)
1. Tools to Deal with Missing Values
1. [Get Missing Values Info](#GetMissingValuesInfo)
1. [Fill missing values](#FillMissingValues)
1. [Drop columns where data is missing more than x%](#dropDataMissingColumns)
1. [Columns With Missing Values as a List](#ColumnsWithMissingValues)
1. Feature Engineering
1. [Drop rows where target is missing](#dropTargetMissingRows)
1. [OneHot Encode the Dataframe](#OneHotEncode)
1. [Convert categorical columns in numerical dtype to object type](#Convertnumericalcategoricalobject)
1. [Select Columns Based on Data Types](#DTypeColSelect)
1. [Get feature columns df and target column from training data](#getTrainX_TrainY)
1. Modelling
1. [Logistic Regression](#LogisticRegression)
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
from pprint import pprint
print(os.listdir("../input"))
```
#### Check Package Version[^](#CheckPackageVersion)<a id="CheckPackageVersion" ></a><br>
```
print(pd.__version__)
print(np.__version__)
```
#### Ignore Warnings[^](#IgnoreWarnings)<a id="IgnoreWarnings" ></a><br>
```
import warnings
warnings.filterwarnings("ignore")
```
### Data Read and Peak [^](#ReadandPeak)<a id="ReadandPeak" ></a><br>
```
# Read & peek top
data = pd.read_csv("../input/titanic/train.csv")
data.head()
# Reading with Index column & peek tail
data2 = pd.read_csv("../input/titanic/train.csv",index_col='PassengerId')
data2.tail()
```
#### Shape, Columns[^](#ShapeColumns)<a id="ShapeColumns" ></a><br>
```
# Shape, Row Count, Column Count & Column Names
print('Shape of dataframe \t:', data.shape)
print('# of Rows \t\t:', data.shape[0])
print('# of Columns \t\t:', data.shape[1])
print('Columns in dataframe \t:', data.columns)
```
#### Query Data Type[^](#QueryDataType)<a id="QueryDataType" ></a><br>
```
values = {}
arr = []
print('values is a', type(values))
type(arr)
```
#### Columns With Missing Values as a List[^](#ColumnsWithMissingValues)<a id="ColumnsWithMissingValues" ></a><br>
```
def getColumnsWithMissingValuesList(df):
return [col for col in df.columns if df[col].isnull().any()]
getColumnsWithMissingValuesList(data)
```
#### Columns of object Data Type (Categorical Columns) as a List[^](#CatColumns)<a id="CatColumns" ></a><br>
```
def getObjectColumnsList(df):
return [cname for cname in df.columns if df[cname].dtype == "object"]
cat_cols = getObjectColumnsList(data)
cat_cols
```
#### Columns that Contain Numeric Values as a List[^](#NumColumns)<a id="NumColumns" ></a><br>
```
def getNumericColumnsList(df):
return [cname for cname in df.columns if df[cname].dtype in ['int64', 'float64']]
num_cols = getNumericColumnsList(data)
num_cols
```
#### Categorical Columns with Cardinality less than N[^](#CatColsCar)<a id="CatColsCar" ></a><br>
```
def getLowCardinalityColumnsList(df,cardinality):
return [cname for cname in df.columns if df[cname].nunique() < cardinality and df[cname].dtype == "object"]
LowCardinalityColumns = getLowCardinalityColumnsList(data,10)
LowCardinalityColumns
```
#### Count of Unique Values in a Column[^](#UniqueCount)<a id="UniqueCount" ></a><br>
```
data['Embarked'].nunique()
```
#### OneHot Encode the Dataframe[^](#OneHotEncode)<a id="OneHotEncode" ></a><br>
```
def PerformOneHotEncoding(df,columnsToEncode):
return pd.get_dummies(df,columns = columnsToEncode)
oneHotEncoded_df = PerformOneHotEncoding(data,getLowCardinalityColumnsList(data,10))
oneHotEncoded_df.head()
```
#### Select Columns Based on Data Types[^](#DTypeColSelect)<a id="DTypeColSelect" ></a><br>
```
# select only int64 & float64 columns
numeric_data = data.select_dtypes(include=['int64','float64'])
# select only object columns
categorical_data = data.select_dtypes(include='object')
numeric_data.head()
categorical_data.head()
```
#### Get Missing Values Info[^](#GetMissingValuesInfo)<a id="GetMissingValuesInfo" ></a><br>
```
def missingValuesInfo(df):
total = df.isnull().sum().sort_values(ascending = False)
percent = round(df.isnull().sum().sort_values(ascending = False)/len(df)*100, 2)
temp = pd.concat([total, percent], axis = 1,keys= ['Total', 'Percentage'])
return temp.loc[(temp['Total'] > 0)]
missingValuesInfo(data)
```
#### Fill missing values[^](#FillMissingValues)<a id="FillMissingValues" ></a><br>
```
# for object columns, fill using 'UNKNOWN'
# for numeric columns, fill using the median
def fillMissingValues(df):
    num_cols = [cname for cname in df.columns if df[cname].dtype in ['int64', 'float64']]
    cat_cols = [cname for cname in df.columns if df[cname].dtype == "object"]
    values = {}
    for a in cat_cols:
        values[a] = 'UNKNOWN'
    for a in num_cols:
        values[a] = df[a].median()
    df.fillna(value=values,inplace=True)
fillMissingValues(data)
data.head()
#check for NaN values
data.isnull().sum().sum()
```
#### Drop columns where data is missing more than x%[^](#dropDataMissingColumns)<a id="dropDataMissingColumns" ></a><br>
```
# pass the DataFrame and percentage
def dropDataMissingColumns(df,percentage):
print("Dropping columns where more than {}% values are Missing..".format(percentage))
nan_percentage = df.isnull().sum().sort_values(ascending=False) / df.shape[0]
missing_val = nan_percentage[nan_percentage > 0]
to_drop = missing_val[missing_val > percentage/100].index.values
df.drop(to_drop, axis=1, inplace=True)
```
#### Drop rows where target is missing[^](#dropTargetMissingRows)<a id="dropTargetMissingRows" ></a><br>
```
def dropTargetMissingRows(df,target):
print("Dropping Rows where Target is Missing..")
df.dropna(axis=0, subset=[target], inplace=True)
```
#### Logistic Regression[^](#LogisticRegression)<a id="LogisticRegression" ></a><br>
```
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def logistic(X,y):
X_train,X_test,y_train,y_test=train_test_split(X,y,random_state=42,test_size=0.2)
lr=LogisticRegression()
lr.fit(X_train,y_train)
y_pre=lr.predict(X_test)
print('Accuracy : ',accuracy_score(y_test,y_pre))
```
#### pandas series to pandas dataframe[^](#series2df)<a id="series2df" ></a><br>
```
series = data['Fare']
d = {series.name : series}
df = pd.DataFrame(d)
df.head()
```
#### Convert categorical columns in numerical dtype to object type[^](#Convertnumericalcategoricalobject)<a id="Convertnumericalcategoricalobject" ></a><br>
Sometimes categorical columns come in numerical data types. This is the case for almost all ordinal columns. If they are not converted to 'category', the descriptive statistics summary does not make sense.
```
PassengerClass = data['Pclass'].astype('category')
PassengerClass.describe()
```
#### Check whether data is ready for training[^](#checkDataBeforeTraining)<a id="checkDataBeforeTraining" ></a>
```
# checks whether df contains null values or object columns
def checkDataBeforeTraining(df):
    if df.isnull().sum().sum() != 0:
        print("Error : Null Values Exist in Data")
        return False
    if len([cname for cname in df.columns if df[cname].dtype == "object"]) > 0:
        print("Error : Object Columns Exist in Data")
        return False
    print("Data is Ready for Training")
    return True
```
#### Get feature columns df and target column from training data[^](#getTrainX_TrainY)<a id="getTrainX_TrainY" ></a>
```
def getTrainX_TrainY(train_df,target):
trainY = train_df.loc[:,target]
trainX = train_df.drop(target, axis=1)
return trainX,trainY
```
#### Subplotting in Notebooks[^](#SublottinginNotebooks)<a id="SublottinginNotebooks" ></a>
```
# importing required libraries for the demo
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_regression
X, y = make_regression(n_samples=500, n_features=4, n_informative=2,random_state=0, shuffle=False)
# Subplotting: a 2x2 grid of axes on a figure of size 14x14
f,ax=plt.subplots(2,2,figsize=(14,14))
#first plot
sns.scatterplot(x=X[:,0], y=y, ax=ax[0,0])
ax[0,0].set_xlabel('Feature 1 Values')
ax[0,0].set_ylabel('Y Values')
ax[0,0].set_title('Scatter Plot : Feature 1 vs Y')
#second plot
sns.scatterplot(x=X[:,1], y=y,ax=ax[0,1])
ax[0,1].set_xlabel('Feature 2 Values')
ax[0,1].set_ylabel('Y Values')
ax[0,1].set_title('Scatter Plot : Feature 2 vs Y')
#Third plot
sns.scatterplot(x=X[:,2], y=y,ax=ax[1,0])
ax[1,0].set_xlabel('Feature 3 Values')
ax[1,0].set_ylabel('Y Values')
ax[1,0].set_title('Scatter Plot : Feature 3 vs Y')
#Fourth plot
sns.scatterplot(x=X[:,3], y=y,ax=ax[1,1])
ax[1,1].set_xlabel('Feature 4 Values')
ax[1,1].set_ylabel('Y Values')
ax[1,1].set_title('Scatter Plot : Feature 4 vs Y')
plt.show()
```
# Model Specification for 1st-Level fMRI Analysis
Nipype also provides interfaces to create a first-level model for an fMRI analysis. Such a model is needed to specify analysis-specific information, such as **conditions**, their **onsets**, and **durations**. For more information, make sure to check out [nipype.algorithms.modelgen](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.modelgen.html).
## General purpose model specification
The `SpecifyModel` provides a generic mechanism for model specification. A mandatory input called `subject_info` provides paradigm specification for each run corresponding to a subject. This has to be in the form of a `Bunch` or a list of `Bunch` objects (one for each run). Each `Bunch` object contains the following attributes.
### Required for most designs
- **`conditions`** : list of names
- **`onsets`** : lists of onsets corresponding to each condition
- **`durations`** : lists of durations corresponding to each condition. Should be left to a single 0 if all events are being modeled as impulses.
### Optional
- **`regressor_names`**: list of names corresponding to each column. Should be None if automatically assigned.
- **`regressors`**: list of lists of values for each regressor; must correspond to the number of volumes in the functional run
- **`amplitudes`**: lists of amplitudes for each event. This will be ignored by SPM's Level1Design.
The following two (`tmod`, `pmod`) will be ignored by any `Level1Design` class other than `SPM`:
- **`tmod`**: lists of conditions that should be temporally modulated. Should default to None if not being used.
- **`pmod`**: list of Bunch corresponding to conditions
- `name`: name of parametric modulator
- `param`: values of the modulator
- `poly`: degree of modulation
Together with this information, one needs to specify:
- whether the durations and event onsets are specified in terms of scan volumes or secs.
- the high-pass filter cutoff,
- the repetition time per scan
- functional data files corresponding to each run.
Optionally, you can specify realignment parameters and outlier files. Outlier files should contain a list of numbers, one per row, indicating which scans should not be included in the analysis. The numbers are 0-based.
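For instance, an outlier file flagging scans 4 and 17 could be written and read back like this (the filename is purely illustrative):

```python
# Write a toy outlier file: one 0-based scan index per row
outlier_scans = [4, 17]
with open("outliers.run001.txt", "w") as f:
    for idx in outlier_scans:
        f.write(f"{idx}\n")

# Reading it back recovers the scans to exclude from the analysis
with open("outliers.run001.txt") as f:
    excluded = [int(line) for line in f]
print(excluded)  # [4, 17]
```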
## Example
An example Bunch definition:
```
from nipype.interfaces.base import Bunch
condnames = ['Tapping', 'Speaking', 'Yawning']
event_onsets = [[0, 10, 50],
[20, 60, 80],
[30, 40, 70]]
durations = [[0],[0],[0]]
subject_info = Bunch(conditions=condnames,
onsets = event_onsets,
durations = durations)
subject_info
```
## Input via textfile
Alternatively, you can provide condition, onset, duration and amplitude
information through event files. The event files have to be in 1, 2 or 3
column format with the columns corresponding to Onsets, Durations and
Amplitudes, and they have to be named `event_name.run<anything else>`,
e.g.: `Words.run001.txt`.
The event_name part will be used to create the condition names. `Words.run001.txt` may look like:
    # Word Onsets Durations
    0 10
    20 10
    ...
or with amplitudes:
    # Word Onsets Durations Amplitudes
    0 10 1
    20 10 1
    ...
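A hedged sketch of how a file in this format could be parsed with pandas (the in-memory text stands in for a `Words.run001.txt` file):

```python
import io
import pandas as pd

# Stand-in for the contents of an event file like Words.run001.txt
event_text = """\
# Word Onsets Durations
0 10
20 10
"""

events = pd.read_csv(io.StringIO(event_text), comment="#", sep=r"\s+",
                     header=None, names=["onset", "duration"])
onsets = events["onset"].tolist()        # [0, 20]
durations = events["duration"].tolist()  # [10, 10]
print(onsets, durations)
```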
## Example based on dataset
Now let's look at a TSV file from our tutorial dataset.
```
!cat data/ds000114/task-fingerfootlips_events.tsv
```
We can also use [pandas](http://pandas.pydata.org/) to create a data frame from our dataset.
```
import pandas as pd
trialinfo = pd.read_table('data/ds000114/task-fingerfootlips_events.tsv')
trialinfo.head()
```
Before we can use the onsets, we first need to split them into the three conditions:
```
for group in trialinfo.groupby('trial_type'):
print(group)
```
The last thing we now need to do is put this into a ``Bunch`` object and we're done:
```
from nipype.interfaces.base import Bunch
conditions = []
onsets = []
durations = []
for group in trialinfo.groupby('trial_type'):
conditions.append(group[0])
onsets.append(group[1].onset.tolist())
durations.append(group[1].duration.tolist())
subject_info = Bunch(conditions=conditions,
onsets=onsets,
durations=durations)
subject_info.items()
```
# Sparse model specification
In addition to standard models, `SpecifySparseModel` allows model generation for sparse and sparse-clustered acquisition experiments. Details of the model generation and utility are provided in [Ghosh et al. (2009) OHBM 2009](https://www.researchgate.net/publication/242810827_Incorporating_hemodynamic_response_functions_to_improve_analysis_models_for_sparse-acquisition_experiments)
# Introduction
Oftentimes data will come to us with column names, index names, or other naming conventions that we are not satisfied with. In that case, you'll learn how to use pandas functions to change the names of the offending entries to something better.
You'll also explore how to combine data from multiple DataFrames and/or Series.
**To start the exercise for this topic, please click [here](https://www.kaggle.com/kernels/fork/638064).**
# Renaming
The first function we'll introduce here is `rename()`, which lets you change index names and/or column names. For example, to change the `points` column in our dataset to `score`, we would do:
```
import pandas as pd
pd.set_option('display.max_rows', 5)
reviews = pd.read_csv("data/winemag-data-130k-v2.csv", index_col=0)
reviews.rename(columns={'points': 'score'})
```
`rename()` lets you rename index _or_ column values by specifying an `index` or `columns` keyword parameter, respectively. It supports a variety of input formats, but usually a Python dictionary is the most convenient. Here is an example using it to rename some elements of the index.
```
reviews.rename(index={0: 'firstEntry', 1: 'secondEntry'})
```
You'll probably rename columns very often, but rename index values very rarely. For that, `set_index()` is usually more convenient.
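As a quick illustration of `set_index()` (on a toy frame standing in for `reviews`):

```python
import pandas as pd

df = pd.DataFrame({"title": ["A", "B"], "points": [90, 85]})
indexed = df.set_index("title")  # 'title' values become the row index
print(indexed.loc["A", "points"])  # 90
```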
Both the row index and the column index can have their own `name` attribute. The complementary `rename_axis()` method may be used to change these names. For example:
```
reviews.rename_axis("wines", axis='rows').rename_axis("fields", axis='columns')
```
# Combining
When performing operations on a dataset, we will sometimes need to combine different DataFrames and/or Series in non-trivial ways. Pandas has three core methods for doing this. In order of increasing complexity, these are `concat()`, `join()`, and `merge()`. Most of what `merge()` can do can also be done more simply with `join()`, so we will omit it and focus on the first two functions here.
The simplest combining method is `concat()`. Given a list of elements, this function will smush those elements together along an axis.
This is useful when we have data in different DataFrame or Series objects but having the same fields (columns). One example: the [YouTube Videos dataset](https://www.kaggle.com/datasnaek/youtube-new), which splits the data up based on country of origin (e.g. Canada and the UK, in this example). If we want to study multiple countries simultaneously, we can use `concat()` to smush them together:
```
canadian_youtube = pd.read_csv("data/CAvideos.csv")
british_youtube = pd.read_csv("data/GBvideos.csv")
pd.concat([canadian_youtube, british_youtube])
```
The middlemost combiner in terms of complexity is `join()`. `join()` lets you combine different DataFrame objects which have an index in common. For example, to pull down videos that happened to be trending on the same day in _both_ Canada and the UK, we could do the following:
```
left = canadian_youtube.set_index(['title', 'trending_date'])
right = british_youtube.set_index(['title', 'trending_date'])
left.join(right, lsuffix='_CAN', rsuffix='_UK')
```
The `lsuffix` and `rsuffix` parameters are necessary here because the data has the same column names in both British and Canadian datasets. If this wasn't true (because, say, we'd renamed them beforehand) we wouldn't need them.
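For completeness, here is a hedged sketch of the `merge()` equivalent of that join, on toy frames rather than the YouTube data (`merge` takes a single `suffixes` pair instead of `lsuffix`/`rsuffix`):

```python
import pandas as pd

left = pd.DataFrame({"views": [100]}, index=["some video"])
right = pd.DataFrame({"views": [200]}, index=["some video"])

merged = left.merge(right, left_index=True, right_index=True,
                    suffixes=("_CAN", "_UK"))
print(merged.columns.tolist())  # ['views_CAN', 'views_UK']
```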
# Your turn
If you haven't started the exercise, you can **[get started here](https://www.kaggle.com/kernels/fork/638064)**.
---
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161299) to chat with other Learners.*
# The surface energy balance
____________
<a id='section1'></a>
## 1. Energy exchange mechanisms at the Earth's surface
____________
The surface of the Earth is the boundary between the atmosphere and the land, ocean, or ice. Understanding the energy fluxes across the surface is very important for three main reasons:
1. We are most interested in the climate at the surface because we live at the surface.
2. The surface energy budget determines how much energy is available to evaporate water and moisten the atmosphere.
3. Air-sea energy fluxes set the thermal structure of the oceans, which in turn act to redistribute energy around the planet, with many important consequences for climate.
The energy budget at the surface is more complex than the budget at the top of the atmosphere. At the TOA, the only energy transfer mechanisms are radiative (shortwave and longwave). At the surface, in addition to radiation, we need to consider fluxes of energy by conduction and by convection of heat and moisture through turbulent fluid motion.
### Major terms in the surface energy budget
We will denote the **net upward energy flux at the surface** as $F_S$.
As we mentioned back in the lecture on [heat transport](heat-transport.ipynb), there are four principal contributions to $F_S$:
1. Shortwave radiation
2. Longwave radiation
3. Sensible heat flux
4. Evaporation or latent heat flux
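With the sign convention used in the CESM diagnostics below (every term a net **upward** flux, positive from surface to atmosphere), these four contributions combine as

$$ F_S = F_{SW}^{\uparrow} + F_{LW}^{\uparrow} + \text{SH} + \text{LE} $$

so absorbed sunlight enters with a negative sign while the other terms are typically positive. (The model code below also adds a small snow-melt term to this sum.)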
Wherever $F_S \ne 0$, there is a net flux of energy between the atmosphere and the surface below. This implies either that there is heat storage or release occurring below the surface (e.g. warming or cooling of water, melting of snow and ice), and/or that there is horizontal heat transport by fluid motions occurring below the surface (ocean circulation, groundwater flow).
### Minor terms in the surface energy budget
All of these terms are small globally but can be significant locally or seasonally.
- Latent heat of fusion required for melting ice and snow
- Conversion of the kinetic energy of winds and waves to thermal energy
- Heat transport by precipitation, if precipitation is at a different temperature than the surface
- Biological uptake of solar energy through photosynthesis
- Biological release of energy through oxidation (respiration, decay, fires)
- Geothermal heat sources (hot springs, volcanoes, etc.)
- Anthropogenic heat released through fossil fuel burning and nuclear power generation.
____________
<a id='section2'></a>
## 2. The surface energy budget in CESM simulations
____________
We will examine the surface budget in the CESM slab ocean simulations. The advantage of looking at surface fluxes in a model rather than observations is that the model fluxes are completely consistent with the model climate, so that the net flux $F_S$ will be a meaningful measure of the heat storage in the system.
The model also gives us an opportunity to look at how the surface budget responds to global warming under a doubling of CO$_2$.
### First, load the data
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
from climlab import constants as const
# The path to the THREDDS server, should work from anywhere
basepath = 'http://thredds.atmos.albany.edu:8080/thredds/dodsC/CESMA/'
#basepath = '../Data/CESMA/'
# First get topography
cesm_input_path = basepath + 'som_input/'
topo = xr.open_dataset(cesm_input_path + 'USGS-gtopo30_1.9x2.5_remap_c050602.nc')
# Then control and 2xCO2 simulations
casenames = {'control': 'som_1850_f19',
'2xCO2': 'som_1850_2xCO2',
}
casepaths = {}
for model in casenames:
casepaths[model] = basepath + casenames[model] + '/concatenated/'
# make a dictionary of all the CAM atmosphere output
atm = {}
for model in casenames:
path = casepaths[model] + casenames[model] + '.cam.h0.nc'
print('Attempting to open the dataset ', path)
atm[model] = xr.open_dataset(path, decode_times=False)
lat = atm['control'].lat
lon = atm['control'].lon
lev = atm['control'].lev
```
### Annual mean surface energy budget
```
# Surface energy budget terms, all defined as positive up (from ocean to atmosphere)
surface_budget = {}
for (name, run) in atm.items():
budget = xr.Dataset()
budget['LHF'] = run.LHFLX
budget['SHF'] = run.SHFLX
budget['LWsfc'] = run.FLNS
budget['LWsfc_clr'] = run.FLNSC
budget['SWsfc'] = -run.FSNS
budget['SWsfc_clr'] = -run.FSNSC
budget['SnowFlux'] = ((run.PRECSC+run.PRECSL)
*const.rho_w*const.Lhfus)
# net upward radiation from surface
budget['NetRad'] = budget['LWsfc'] + budget['SWsfc']
budget['NetRad_clr'] = budget['LWsfc_clr'] + budget['SWsfc_clr']
# net upward surface heat flux
budget['Net'] = (budget['NetRad'] + budget['LHF'] +
budget['SHF'] + budget['SnowFlux'])
surface_budget[name] = budget
```
### Compute anomalies for all terms
```
# Here we take advantage of xarray!
# We can simply subtract the two xarray.Dataset objects
# to get anomalies for every term
surface_budget['anom'] = surface_budget['2xCO2'] - surface_budget['control']
# Also compute zonal averages
zonal_budget = {}
for run, budget in surface_budget.items():
zonal_budget[run] = budget.mean(dim='lon')
```
### Plot the annual mean net upward flux $F_S$ (control and anomaly after warming)
```
fig, axes = plt.subplots(1,2, figsize=(16,5))
cax1 = axes[0].pcolormesh(lon, lat, surface_budget['control'].Net.mean(dim='time'),
cmap=plt.cm.seismic, vmin=-200., vmax=200. )
axes[0].set_title('Annual mean net surface heat flux (+ up) - CESM control')
cax2 = axes[1].pcolormesh(lon, lat, surface_budget['anom'].Net.mean(dim='time'),
cmap=plt.cm.seismic, vmin=-20., vmax=20. )
fig.colorbar(cax1, ax=axes[0]); fig.colorbar(cax2, ax=axes[1])
axes[1].set_title('Anomaly after CO2 doubling')
for ax in axes:
ax.set_xlim(0, 360); ax.set_ylim(-90, 90); ax.contour( lon, lat, topo.LANDFRAC, [0.5], colors='k');
```
Some notable points about the control state:
- The net flux over all land surfaces is very close to zero!
- In the long-term annual mean, a non-zero $F_S$ must be balanced by heat transport.
- The spatial pattern of $F_S$ over the oceans is essentially just the prescribed q-flux that we have imposed on the slab ocean to represent ocean heat transport.
- Net heat uptake by the oceans occurs mostly along the equator and the cold tongues on the eastern sides of the tropical basins.
- Net heat release from oceans to atmosphere occurs mostly in mid- to high latitudes. Hot spots include the Gulf Stream and Kuroshio regions on the western sides of the mid-latitude basins, as well as the subpolar North Atlantic. These features are largely determined by ocean dynamics.
**After greenhouse warming**:
- The net change in $F_S$ is very small in most locations.
- This indicates that the model has reached quasi-equilibrium. Non-zero changes in $F_S$ would indicate either
- heat storage below the surface
- changes in ocean heat transport (not permitted in a slab ocean model).
- Non-zero changes are found in areas where the sea ice cover is changing in the model.
### Variation of energy balance components with latitude
```
fieldlist = ['SWsfc', 'LWsfc', 'LHF', 'SHF', 'Net']
fig, axes = plt.subplots(1,2, figsize=(16,5))
for ax, run in zip(axes, ['control', 'anom']):
for field in fieldlist:
ax.plot(lat, zonal_budget[run][field].mean(dim='time'), label=field)
ax.set_xlim(-90, 90); ax.grid(); ax.legend()
axes[0].set_title('Components of ANNUAL surface energy budget (+ up) - CESM control')
axes[1].set_title('Anomaly after CO2 doubling');
```
In these graphs, the curve labeled "Net" is the net flux $F_S$. It is just the zonal average of the maps from the previous figure, and shows the ocean heat uptake at the equator and release in mid- to high latitudes.
More interestingly, these graphs show the contribution of the various terms to $F_S$. They are all plotted as positive up. A **negative** value thus indicates **heating of the surface**, and a **positive** value indicates a **cooling of the surface**.
Key points about the control simulation:
- Solar radiation acts to warm the surface everywhere.
- Note that this is a net shortwave flux, so it is the amount that is actually absorbed by the surface after accounting for the reflected fraction.
- All other mechanisms act to cool the surface.
- The dominant balance across the **tropics** is between **warming by solar radiation** and **cooling by evaporation** (latent heat flux or LHF).
- The latent heat flux decreases poleward.
- Latent heat flux is dominant over sensible heat flux at most latitudes except close to the poles.
- The net longwave radiation also acts to cool the surface.
- This is the residual between the surface emissions (essentially $\sigma~T_s^4$) and the back-radiation from the atmosphere.
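A rough numerical illustration of that residual (the numbers are typical global-mean magnitudes chosen for illustration, not taken from the model):

```python
# Net upward longwave at the surface: emission minus back-radiation
sigma = 5.67e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
Ts = 288.0        # K, a typical global-mean surface temperature
LW_down = 340.0   # W m^-2, an illustrative back-radiation value

LW_up = sigma * Ts**4     # ~390 W m^-2 of surface emission
net_LW = LW_up - LW_down  # ~50 W m^-2 net surface cooling
print(round(LW_up), round(net_LW))
```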
### Exercise: Discuss the surface energy budget anomalies to greenhouse warming
For **each term** on the right panel of the plot identify the following:
- The sign of the change
- The physical mechanism responsible for the change
- The consequence for surface temperature change
### Seasonal variations
We will compute the budgets for the months of January and July, and plot their differences.
```
# July minus January
julminusjan_budget = {}
for name, budget in surface_budget.items():
    # xarray.Dataset objects let you "select" a subset in various ways
    # Here we are using the integer time index (0-11)
    julminusjan_budget[name] = budget.isel(time=6) - budget.isel(time=0)
fieldlist = ['SWsfc', 'LWsfc', 'LHF', 'SHF', 'Net']
fig, axes = plt.subplots(1, 2, figsize=(16, 5))
for field in fieldlist:
    axes[0].plot(lat, julminusjan_budget['control'][field].mean(dim='lon'), label=field)
axes[0].set_title('Components of JUL-JAN surface energy budget (+ up) - CESM control')
for field in fieldlist:
    axes[1].plot(lat, julminusjan_budget['anom'][field].mean(dim='lon'), label=field)
axes[1].set_title('Anomaly after CO2 doubling')
for ax in axes:
    ax.set_xlim(-90, 90)
    ax.grid()
    ax.legend()
```
Seasonally, the dominant balance by far is between solar radiation and heat storage!
____________
<a id='section3'></a>
## 3. Sensible and Latent Heat Fluxes in the boundary layer
____________
These notes largely follow Chapter 4 of Hartmann (1994) "Global Physical Climatology", Academic Press.
The turbulent heat fluxes are the eddy fluxes of heat and moisture at some level in the atmospheric boundary layer:
$$ \text{SH} = c_p ~\rho ~ \overline{w^\prime T^\prime} $$
$$ \text{LE} = L ~\rho ~\overline{w^\prime q^\prime} $$
where $c_p$ is the specific heat of air at constant pressure, $L$ is the latent heat of vaporization, $\text{SH}$ is the sensible heat flux and $\text{LE}$ is the latent heat flux.
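As a rough numerical illustration of these definitions (the constants are standard values, but the turbulent time series below are invented for the sketch), the eddy-covariance averages can be evaluated directly:

```python
import numpy as np

cp = 1004.0    # specific heat of air at constant pressure [J/kg/K]
Lv = 2.5e6     # latent heat of vaporization [J/kg]
rho = 1.2      # air density [kg/m3]

# Invented turbulent fluctuations about the mean: w' [m/s], T' [K], q' [kg/kg]
rng = np.random.default_rng(0)
w_prime = rng.normal(0.0, 0.3, size=100_000)
T_prime = 0.5 * w_prime + rng.normal(0.0, 0.2, size=100_000)   # correlated with w'
q_prime = 1e-3 * w_prime + rng.normal(0.0, 5e-4, size=100_000)

# The overbars in the formulas denote time means of the products
SH = cp * rho * np.mean(w_prime * T_prime)   # sensible heat flux [W/m2]
LE = Lv * rho * np.mean(w_prime * q_prime)   # latent heat flux [W/m2]
print(SH, LE)
```

Because upward-moving air parcels are (by construction here) warmer and moister than average, both covariances are positive and both fluxes carry energy away from the surface.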
### Bulk aerodynamic formulas
From the theory of boundary-layer turbulence, we suppose that the eddy heat flux is related to boundary layer temperature gradients, as well as the mean wind speed:
$$ \text{SH} = c_p ~\rho ~ C_D ~ U \left( T_s - T_a \right) $$
where $T_s$ is the surface temperature and $T_a$ is the air temperature at some reference height above the surface. $U$ is the wind speed at the reference height, and $C_D$ is a dimensionless aerodynamic drag coefficient.
$C_D$ will depend, among other things, on the roughness of the surface.
Similarly, we assume that the latent heat flux is related to boundary layer moisture gradients:
$$ \text{LE} = L ~\rho ~ C_D ~ U \left( q_s - q_a \right) $$
where $q_s$ is the specific humidity of air immediately above the surface, and $q_a$ is the specific humidity at the reference height.
In general the transfer coefficients $C_D$ could be different for sensible and latent heat flux, but empirically they are found to be very similar to each other. We will assume they are equal here.
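For a concrete sense of magnitudes, here is the bulk formula evaluated with typical tropical-ocean numbers (all the input values below are assumptions for the sketch; $C_D \approx 1.3\times 10^{-3}$ is a common textbook value for the open ocean):

```python
cp = 1004.0   # specific heat of air [J/kg/K]
Lv = 2.5e6    # latent heat of vaporization [J/kg]
rho = 1.2     # air density [kg/m3]
C_D = 1.3e-3  # dimensionless drag coefficient (assumed)
U = 8.0       # wind speed at the reference height [m/s]
Ts, Ta = 300.0, 298.5   # surface and reference-height air temperature [K]
qs, qa = 22e-3, 17e-3   # surface and reference-height specific humidity [kg/kg]

SH = cp * rho * C_D * U * (Ts - Ta)   # sensible heat flux [W/m2]
LE = Lv * rho * C_D * U * (qs - qa)   # latent heat flux [W/m2]
print(round(SH, 1), round(LE, 1))
```

With these values the latent heat flux (about 156 W/m2) dwarfs the sensible heat flux (about 19 W/m2), consistent with the tropical surface budget shown earlier.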
### The Bowen ratio
The **Bowen ratio** is a dimensionless number defined as
$$ B_o = \frac{\text{SH}}{\text{LE}} $$
i.e. the ratio of **sensible heat loss** to **evaporative cooling**.
From the above plots, the Bowen ratio tends to be small in the low latitudes.
### The Bowen ratio for wet surfaces
Over a water surface or a very wet land surface, we may assume that the mixing ratio of water vapor at the surface is equal to the saturation mixing ratio $q^*$ at the temperature of the surface:
$$ q_s = q^*(T_s) $$
Recall that the saturation mixing ratio $q^*$ is a sensitive function of temperature through the Clausius-Clapeyron relation. (It also depends on pressure.)
Let's approximate the mixing ratio for **saturated air** at the reference height through a first-order Taylor series expansion:
$$ q_a^* \approx q_s^*(T_s) + \frac{\partial q^*}{\partial T} \left( T_a - T_s \right) $$
The actual mixing ratio at the reference height can be expressed as
$$ q_a = r ~ q_a^* $$
where $r$ is the relative humidity at that level.
Then we have an approximation for $q_a$ in terms of temperature gradients:
$$ q_a \approx r \left( q_s^*(T_s) + \frac{\partial q^*}{\partial T} \left( T_a - T_s \right) \right) $$
Substituting this into the bulk formula for latent heat flux, we get
$$ \text{LE} \approx L ~\rho ~ C_D ~ U \left( q_s^* - r \left( q_s^* + \frac{\partial q^*}{\partial T} \left( T_a - T_s \right) \right) \right) $$
or, rearranging a bit,
$$ \text{LE} \approx L ~\rho ~ C_D ~ U \left( (1-r) ~ q_s^* + r \frac{\partial q^*}{\partial T} \left( T_s - T_a \right) \right) $$
The Bowen ratio is thus
$$ B_o = \frac{c_p}{ L \left( \frac{(1-r)}{\left( T_s - T_a \right)} q_s^* + r \frac{\partial q^*}{\partial T} \right)} $$
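As a sanity check on the algebra (with assumed values for $T_s$, $T_a$, $r$, and a simple constant-$L$ Clausius-Clapeyron form for $q^*$), the closed-form expression above reproduces the ratio of the bulk SH formula to the Taylor-expanded LE formula exactly, since the common factor $\rho\, C_D\, U$ cancels:

```python
import math

cp, Lv, Rv = 1004.0, 2.5e6, 461.0   # J/kg/K, J/kg, J/kg/K
Ts, Ta, r = 302.0, 300.0, 0.8       # assumed temperatures [K] and relative humidity

def qstar(T, p=1.0e5):
    # Saturation specific humidity from a constant-L Clausius-Clapeyron form
    es = 611.0 * math.exp(Lv / Rv * (1.0 / 273.15 - 1.0 / T))
    return 0.622 * es / p

qs_star = qstar(Ts)
dqdT = qs_star * Lv / (Rv * Ts**2)      # d(q*)/dT evaluated at Ts

# Taylor-expanded humidity at the reference height
qa = r * (qs_star + dqdT * (Ta - Ts))

# Ratio of the bulk formulas (rho * C_D * U cancels)
ratio = cp * (Ts - Ta) / (Lv * (qs_star - qa))

# Closed-form Bowen ratio from the text
Bo = cp / (Lv * ((1 - r) / (Ts - Ta) * qs_star + r * dqdT))

print(abs(ratio - Bo) < 1e-12)
```

The two agree to floating-point round-off, confirming that the rearrangement only divided numerator and denominator by $(T_s - T_a)$.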
### The equilibrium Bowen ratio (for saturated air)
Notice that **if the boundary layer air is saturated**, then $r=1$ and the Bowen ratio takes on a special value
$$ B_e = \frac{c_p}{ L \frac{\partial q^*}{\partial T} } $$
When the surface and the air at the reference level are saturated, the Bowen ratio approaches the value $B_e$, which is called the equilibrium Bowen ratio. We presume that the flux of moisture from the boundary layer to the free atmosphere is sufficient to just balance the upward flux of moisture from the surface so that the humidity at the reference height is in equilibrium at the saturation value.
Recall that from the Clausius-Clapeyron relation, the rate of change of the saturation mixing ratio is itself a strong function of temperature:
$$ \frac{\partial q^*}{\partial T} = q^*(T) \frac{L}{R_v ~ T^2} $$
Here the quasi-exponential dependence of $q^*$ on $T$ far outweighs the inverse square dependence, so the **equilibrium Bowen ratio decreases roughly exponentially with temperature**.
The following code reproduces Figure 4.10 of Hartmann (1994).
```
from climlab.utils.thermo import qsat
T = np.linspace(-40, 40) + const.tempCtoK
qstar = qsat(T, const.ps)  # in kg / kg
def Be(T):
    qstar = qsat(T, const.ps)  # in kg / kg
    dqstardT = qstar * const.Lhvap / const.Rv / T**2
    return const.cp / const.Lhvap / dqstardT
fig, ax = plt.subplots()
ax.semilogy(T + const.tempKtoC, qstar*1000, label='$q^*$')
ax.semilogy(T + const.tempKtoC, Be(T), label='$B_e$')
ax.grid()
ax.set_xlabel('Temperature (degC)')
ax.legend(loc='upper center')
ax.set_title('Saturation specific humidity (g/kg) and equilibrium Bowen ratio');
```
- Equilibrium Bowen ratio is near 1 at 0ºC, and decreases to about 0.2 at 30ºC.
- As relative humidity is decreased from 1 to smaller values, **evaporative cooling increases**.
- The equilibrium Bowen ratio is the **maximum possible Bowen ratio for a wet surface**.
- Actual Bowen ratio over a wet surface will generally be smaller than $B_e$, because the air is usually not saturated.
- Because of the strong temperature dependence of the saturation specific humidity:
- Evaporative cooling (latent heat flux) dominates over sensible cooling of wet surfaces at **tropical** temperatures.
- Sensible heat flux becomes important wherever the surface is either **cold** or **dry**.
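A quick, self-contained check of the quoted numbers, using a simple constant-$L$ Clausius-Clapeyron form for $q^*$ (an assumption; the figure above uses climlab's `qsat`, so the values here differ slightly):

```python
import math

cp, Lv, Rv = 1004.0, 2.5e6, 461.0   # J/kg/K, J/kg, J/kg/K
p = 1.0e5                           # surface pressure [Pa]

def Be(T):
    # Equilibrium Bowen ratio, with q* from a constant-L
    # Clausius-Clapeyron form (an approximation)
    es = 611.0 * math.exp(Lv / Rv * (1.0 / 273.15 - 1.0 / T))
    qstar = 0.622 * es / p
    dqdT = qstar * Lv / (Rv * T**2)
    return cp / (Lv * dqdT)

for Tc in (0.0, 30.0):
    print(Tc, round(Be(273.15 + Tc), 2))
```

This gives roughly 1.5 at 0ºC and 0.25 at 30ºC, in line with the order-one to 0.2 range stated above given the constant-$L$ approximation.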
____________
<a id='section4'></a>
## 4. Bowen ratio in CESM simulations
____________
```
Bo_control = (surface_budget['control'].SHF.mean(dim='time') /
              surface_budget['control'].LHF.mean(dim='time'))
Be_control = Be(atm['control'].TS.mean(dim='time'))
fig, axes = plt.subplots(1, 3, figsize=(16, 4))
cax1 = axes[0].pcolormesh(lon, lat, Bo_control, vmin=0., vmax=5.)
fig.colorbar(cax1, ax=axes[0])
axes[0].set_title('$B_o$ (CESM control)', fontsize=20)
cax2 = axes[1].pcolormesh(lon, lat, Be_control, vmin=0., vmax=5.)
fig.colorbar(cax2, ax=axes[1])
axes[1].set_title('$B_e$ (CESM control)', fontsize=20)
cax3 = axes[2].pcolormesh(lon, lat, (Bo_control - Be_control),
                          cmap='seismic', vmin=-10., vmax=10.)
fig.colorbar(cax3, ax=axes[2])
axes[2].set_title('$B_o - B_e$ (CESM control)', fontsize=20)
for ax in axes:
    ax.set_xlim(0, 360)
    ax.set_ylim(-90, 90)
    ax.contour(lon, lat, topo.variables['LANDFRAC'][:], [0.5], colors='k');
```
On the difference plot, the blue colors indicate where the actual Bowen ratio is smaller than the equilibrium Bowen ratio. This will typically occur for **wet surfaces** with **undersaturated air**.
The red colors indicate where the actual Bowen ratio is larger than the equilibrium Bowen ratio. This typically occurs for **dry surfaces** where there is not enough water available to satisfy the energetic demand for evaporation.
____________
## Credits
This notebook is part of [The Climate Laboratory](https://brian-rose.github.io/ClimateLaboratoryBook), an open-source textbook developed and maintained by [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany. It has been modified by [Nicole Feldl](http://nicolefeldl.com), UC Santa Cruz.
It is licensed for free and open consumption under the
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.
Development of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to Brian Rose. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.
____________
If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets as well as other dependencies. Right now this requires the current master branch of both. Uncomment the following cell and run it.
```
#! pip install git+https://github.com/huggingface/transformers.git
#! pip install git+https://github.com/huggingface/datasets.git
#! pip install rouge-score nltk
```
If you're opening this notebook locally, make sure your environment has the latest versions of those libraries installed.
To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow.
First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!) then uncomment the following cell and input your username and password (this only works on Colab, in a regular notebook, you need to do this in a terminal):
```
from huggingface_hub import notebook_login
notebook_login()
```
Then you need to install Git-LFS and set up Git if you haven't already. Uncomment the following instructions and adapt them with your name and email:
```
# !apt install git-lfs
# !git config --global user.email "you@example.com"
# !git config --global user.name "Your Name"
```
Make sure your version of Transformers is at least 4.8.1 since the functionality was introduced in that version:
```
import transformers
print(transformers.__version__)
```
You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq).
# Fine-tuning a model on a summarization task
In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) model for a summarization task. We will use the [XSum dataset](https://arxiv.org/pdf/1808.08745.pdf) (for extreme summarization) which contains BBC articles accompanied with single-sentence summaries.

We will see how to easily load the dataset for this task using 🤗 Datasets and how to fine-tune a model on it using the `Trainer` API.
```
model_checkpoint = "t5-small"
```
This notebook is built to run with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a sequence-to-sequence version in the Transformers library. Here we picked the [`t5-small`](https://huggingface.co/t5-small) checkpoint.
## Loading the dataset
We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`.
```
from datasets import load_dataset, load_metric
raw_datasets = load_dataset("xsum")
metric = load_metric("rouge")
```
The `raw_datasets` object itself is a [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key each for the training, validation and test sets:
```
raw_datasets
```
To access an actual element, you need to select a split first, then give an index:
```
raw_datasets["train"][0]
```
To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset.
```
import datasets
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=5):
    assert num_examples <= len(
        dataset
    ), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset) - 1)
        while pick in picks:
            pick = random.randint(0, len(dataset) - 1)
        picks.append(pick)
    df = pd.DataFrame(dataset[picks])
    for column, typ in dataset.features.items():
        if isinstance(typ, datasets.ClassLabel):
            df[column] = df[column].transform(lambda i: typ.names[i])
    display(HTML(df.to_html()))

show_random_elements(raw_datasets["train"])
```
The metric is an instance of [`datasets.Metric`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Metric):
```
metric
```
You can call its `compute` method with your predictions and labels, which need to be lists of decoded strings:
```
fake_preds = ["hello there", "general kenobi"]
fake_labels = ["hello there", "general kenobi"]
metric.compute(predictions=fake_preds, references=fake_labels)
```
## Preprocessing the data
Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer` which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put it in a format the model expects, as well as generate the other inputs that the model requires.
To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:
- we get a tokenizer that corresponds to the model architecture we want to use,
- we download the vocabulary used when pretraining this specific checkpoint.
That vocabulary will be cached, so it's not downloaded again the next time we run the cell.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
```
By default, the call above will use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library.
You can directly call this tokenizer on one sentence or a pair of sentences:
```
tokenizer("Hello, this one sentence!")
```
Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later), you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.
Instead of one sentence, we can pass along a list of sentences:
```
tokenizer(["Hello, this one sentence!", "This is another sentence."])
```
To prepare the targets for our model, we need to tokenize them inside the `as_target_tokenizer` context manager. This will make sure the tokenizer uses the special tokens corresponding to the targets:
```
with tokenizer.as_target_tokenizer():
    print(tokenizer(["Hello, this one sentence!", "This is another sentence."]))
```
If you are using one of the five T5 checkpoints, we have to prefix the inputs with "summarize:" (the model can also translate, and it needs the prefix to know which task it has to perform).
```
if model_checkpoint in ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"]:
    prefix = "summarize: "
else:
    prefix = ""
```
We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This ensures that an input longer than what the selected model can handle will be truncated to the model's maximum accepted length. The padding will be dealt with later on (in a data collator), so we pad examples to the longest length in the batch rather than the whole dataset.
```
max_input_length = 1024
max_target_length = 128
def preprocess_function(examples):
    inputs = [prefix + doc for doc in examples["document"]]
    model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
    # Set up the tokenizer for targets
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(
            examples["summary"], max_length=max_target_length, truncation=True
        )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```
This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:
```
preprocess_function(raw_datasets["train"][:2])
```
To apply this function on all the pairs of sentences in our dataset, we just use the `map` method of our `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command.
```
tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)
```
Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to `map` has changed (and thus requires not using the cached data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again.
Note that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently.
## Fine-tuning the model
Now that our data is ready, we can download the pretrained model and fine-tune it. Since our task is of the sequence-to-sequence kind, we use the `TFAutoModelForSeq2SeqLM` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us.
```
from transformers import TFAutoModelForSeq2SeqLM, DataCollatorForSeq2Seq
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
```
Note that we don't get a warning like in our classification example. This means we used all the weights of the pretrained model and there is no randomly initialized head in this case.
Next we set some parameters like the learning rate and the `batch_size`, and customize the weight decay.
The `push_to_hub_model_id` variable sets everything up so we can push the model to the [Hub](https://huggingface.co/models) at the end of training. Remove it if you didn't follow the installation steps at the top of the notebook; otherwise you can change its value to something you would prefer.
```
batch_size = 8
learning_rate = 2e-5
weight_decay = 0.01
num_train_epochs = 1
model_name = model_checkpoint.split("/")[-1]
push_to_hub_model_id = f"{model_name}-finetuned-xsum"
```
Then, we need a special kind of data collator, which will not only pad the inputs to the maximum length in the batch, but also the labels. Note that our data collators are multi-framework, so make sure you set `return_tensors='tf'` so you get `tf.Tensor` objects back and not something else!
```
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, return_tensors="tf")
tokenized_datasets["train"]
```
Now we convert our input datasets to TF datasets using this collator. There's a built-in method for this: `to_tf_dataset()`. Make sure to specify the collator we just created as our `collate_fn`!
```
train_dataset = tokenized_datasets["train"].to_tf_dataset(
    batch_size=batch_size,
    columns=["input_ids", "attention_mask", "labels"],
    shuffle=True,
    collate_fn=data_collator,
)
validation_dataset = tokenized_datasets["validation"].to_tf_dataset(
    batch_size=8,
    columns=["input_ids", "attention_mask", "labels"],
    shuffle=False,
    collate_fn=data_collator,
)
```
Now we initialize our loss and optimizer and compile the model. Note that most Transformers models compute loss internally - we can train on this as our loss value simply by not specifying a loss when we `compile()`.
```
from transformers import AdamWeightDecay
import tensorflow as tf
optimizer = AdamWeightDecay(learning_rate=learning_rate, weight_decay_rate=weight_decay)
model.compile(optimizer=optimizer)
```
Now we can train our model. We can also add a callback to sync up our model with the Hub - this allows us to resume training from other machines and even test the model's inference quality midway through training! If you don't want to push to the Hub, simply remove the `callbacks` argument in the call to `fit()`.
```
from transformers.keras_callbacks import PushToHubCallback
callback = PushToHubCallback(
    output_dir="./summarization_model_save",
    tokenizer=tokenizer,
    hub_model_id=push_to_hub_model_id,
)
model.fit(train_dataset, validation_data=validation_dataset, epochs=1, callbacks=[callback])
```
Hopefully you saw your loss value declining as training continued, but that doesn't really tell us much about the quality of the model. Let's use the ROUGE metric we loaded earlier to quantify our model's ability in more detail. First we need to get the model's predictions for the validation set.
```
import numpy as np
decoded_predictions = []
decoded_labels = []
for batch in validation_dataset:
    labels = batch["labels"]
    predictions = model.predict_on_batch(batch)["logits"]
    predicted_tokens = np.argmax(predictions, axis=-1)
    decoded_predictions.extend(
        tokenizer.batch_decode(predicted_tokens, skip_special_tokens=True)
    )
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels.extend(tokenizer.batch_decode(labels, skip_special_tokens=True))
```
Now we need to prepare the data as the metric expects, with one sentence per line.
```
import nltk
import numpy as np
# Rouge expects a newline after each sentence
decoded_predictions = [
    "\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_predictions
]
decoded_labels = [
    "\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels
]
result = metric.compute(
    predictions=decoded_predictions, references=decoded_labels, use_stemmer=True
)
# Extract a few results
result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
# Add mean generated length
prediction_lens = [
    np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions
]
result["gen_len"] = np.mean(prediction_lens)
print({k: round(v, 4) for k, v in result.items()})
```
If you used the callback above, you can now share this model with all your friends, family, favorite pets: they can all load it with the identifier `"your-username/the-name-you-picked"` so for instance:
```python
from transformers import TFAutoModelForSeq2SeqLM
model = TFAutoModelForSeq2SeqLM.from_pretrained("your-username/my-awesome-model")
```
# BOSS Calibration Tutorial
The purpose of this tutorial is to reconstruct and document the calibration steps from detected electrons to calibrated flux, as described [here](https://trac.sdss3.org/wiki/BOSS/pipeline/FluxToPhotons) (requires SDSS3 login).
```
%pylab inline
import astropy.io.fits as fits
import bossdata
print(bossdata.__version__)
finder = bossdata.path.Finder()
mirror = bossdata.remote.Manager()
```
Define a utility function to take the inverse of an array that might be masked or contain zeros. The result is always an unmasked array with any invalid entries set to zero.
```
def inverse(data):
    if isinstance(data, ma.core.MaskedArray):
        # Copy the mask so we don't modify the caller's array in place,
        # then add any zero entries to it.
        mask = data.mask.copy()
        mask[~data.mask] = (data[~data.mask] == 0)
    else:
        mask = (data == 0)
    inv = np.zeros(data.shape)
    inv[~mask] = 1 / data[~mask]
    return inv
```
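A quick sanity check of the behavior described above (the function is repeated here, with explicit imports, so the cell runs on its own):

```python
import numpy as np
import numpy.ma as ma

def inverse(data):
    # Invalid (masked or zero) entries come back as 0 in a plain array.
    if isinstance(data, ma.MaskedArray):
        mask = np.asarray(data.mask).copy()
        mask[~data.mask] = (data[~data.mask] == 0)
    else:
        mask = (data == 0)
    inv = np.zeros(data.shape)
    inv[~mask] = 1 / data[~mask]
    return inv

print(inverse(np.array([0.0, 2.0, 4.0])))   # zero entries map to 0
x = ma.array([1.0, 0.0, 5.0], mask=[False, False, True])
print(inverse(x))                           # masked and zero entries both map to 0
```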
Catch any warnings since there shouldn't be any:
```
#import warnings
#warnings.simplefilter('error')
```
With the default plate 6641, all files are mirrored from https://dr12.sdss.org/sas/dr12/boss/spectro/redux/v5_7_0/6641/.
```
def plot_calib(plate=6641, mjd=None, fiber=1, expidx=0, band='blue',
               mask=None, save=None):
    """Compare the pipeline and reconstructed calibration for one fiber of one exposure."""
    assert band in ('blue', 'red')
    # Infer the MJD if possible, when none is specified.
    if mjd is None:
        mjds = bossdata.meta.get_plate_mjd_list(plate, finder, mirror)
        if len(mjds) == 0:
            print('Plate {} has never been observed with good quality.'.format(plate))
        elif len(mjds) > 1:
            print('Plate {} observed on multiple MJDs (pick one): {}.'
                  .format(plate, ', '.join(map(str, mjds))))
        else:
            mjd = mjds[0]
            print('Using MJD {}.'.format(mjd))
    if not mjd:
        return
    # Which spectrograph does this fiber belong to?
    num_fibers = bossdata.plate.get_num_fibers(plate)
    spec_num = 1 if fiber <= num_fibers // 2 else 2
    camera = band[0] + str(spec_num)
    print('Fiber {} read out by {} camera {}.'.format(fiber, band, camera))
    # Load the list of exposures used for the science coadd of PLATE-MJD
    # and the associated calibration exposures.
    spec_name = finder.get_spec_path(plate, mjd, fiber, lite=True)
    exposures = bossdata.spec.SpecFile(mirror.get(spec_name)).exposures
    nexp = len(exposures.table)
    if expidx >= nexp:
        print('Invalid exposure index {} (should be 0-{}).'
              .format(expidx, nexp - 1))
        return
    expnum = exposures.table[expidx]['science']
    print('Analyzing exposure[{}] = #{} of {} used in coadd.'
          .format(expidx, expnum, nexp))
    # Load the calibrated flux and wavelength solution and ivars from the spCFrame file.
    name = exposures.get_exposure_name(expidx, camera, 'spCFrame')
    path = mirror.get(finder.get_plate_path(plate, name))
    spCFrame = bossdata.plate.FrameFile(path, calibrated=True)
    data = spCFrame.get_valid_data(
        [fiber], include_sky=True, use_ivar=True, pixel_quality_mask=mask)[0]
    wave, flux, sky, ivar = data['wavelength'], data['flux'], data['sky'], data['ivar']
    # Lookup the metadata for this fiber.
    fiber_index = spCFrame.get_fiber_offsets([fiber])[0]
    info = spCFrame.plug_map[fiber_index]
    objtype = info['OBJTYPE'].rstrip()
    print('Fiber {} objtype is {}.'.format(fiber, objtype))
    # Load the uncalibrated flux in flat-fielded electrons from the spFrame file.
    name = exposures.get_exposure_name(expidx, camera, 'spFrame')
    path = mirror.get(finder.get_plate_path(plate, name))
    spFrame = bossdata.plate.FrameFile(path, calibrated=False)
    data = spFrame.get_valid_data(
        [fiber], include_sky=True, use_ivar=True, pixel_quality_mask=mask)[0]
    ewave, eflux, esky, eivar = data['wavelength'], data['flux'], data['sky'], data['ivar']
    # Look up the trace position on the CCD.
    tracex = spFrame.hdulist[7].read()[fiber_index]
    # Load the fluxcorr for this fiber.
    name = exposures.get_exposure_name(expidx, camera, 'spFluxcorr')
    path = mirror.get(finder.get_plate_path(plate, name))
    with fits.open(path) as spFluxcorr:
        corr = spFluxcorr[0].data[fiber_index]
    # Load the fluxcalib for this fiber.
    name = exposures.get_exposure_name(expidx, camera, 'spFluxcalib')
    path = mirror.get(finder.get_plate_path(plate, name))
    with fits.open(path) as spFluxcalib:
        spcalib = spFluxcalib[0].data[fiber_index]
    # The spFrame uses a TraceSet instead of tabulated log(lambda) values.
    # The b-camera spCFrame, spFluxcorr arrays have 16 extra entries compared
    # with spFrame, so trim those now.
    n = len(ewave)
    wave = wave[:n]
    assert np.allclose(wave, ewave)
    flux = flux[:n]
    sky = sky[:n]
    ivar = ivar[:n]
    corr = corr[:n]
    # Load the superflat from the spFrame file.
    superflat = spFrame.get_superflat([fiber])[0]
    # Load the fiberflat and neff from the spFlat file.
    name = exposures.get_exposure_name(expidx, camera, 'spFlat')
    path = mirror.get(finder.get_plate_path(plate, name))
    with fits.open(path) as spFlat:
        fiberflat = spFlat[0].data[fiber_index]
        neff = bossdata.plate.TraceSet(spFlat[3]).get_y()[fiber_index]
    # Get the flux distortion map for this plate's coadd.
    path = mirror.get(finder.get_fluxdistort_path(plate, mjd))
    with fits.open(path) as spFluxdistort:
        distort_coadd = spFluxdistort[0].data[fiber_index]
        # Build the coadded loglam grid.
        hdr = spFluxdistort[0].header
        loglam0, idx0, dloglam = hdr['CRVAL1'], hdr['CRPIX1'], hdr['CD1_1']
    loglam = loglam0 + (np.arange(len(distort_coadd)) - idx0) * dloglam
    wave_coadd = 10 ** loglam
    # Linearly interpolate the distortion to our wavelength grid.
    distort = np.interp(wave, wave_coadd, distort_coadd)
    # Calculate ratio of dloglam=1e-4 bin sizes to native pixel binsizes.
    R = dloglam / np.gradient(np.log10(wave))
    # Combine the flat-field corrections.
    flat = superflat * fiberflat
    # Calculate the raw electron counts, including the sky. Note that this
    # can be negative due to read noise.
    electrons = flat * (eflux + esky)
    # Lookup the readnoise measured in each amplifier quadrant.
    readnoise_per_quad = np.empty(4)
    for quadrant in range(4):
        readnoise_per_quad[quadrant] = spFrame.header['RDNOISE{}'.format(quadrant)]
    print('Readnoise is {:.2f}/{:.2f}/{:.2f}/{:.2f} electrons'
          .format(*readnoise_per_quad))
    # Get the quadrant of each wavelength pixel along this trace.
    ampsizes = {'blue': (2056, 2048), 'red': (2064, 2057)}
    ysize, xsize = ampsizes[band]
    yamp = 1 * (np.arange(2 * ysize) >= ysize)
    xamp = 2 * (tracex >= xsize)
    quad = xamp + yamp
    # Lookup the read noise for each wavelength pixel along this trace.
    readnoise_per_pixel = readnoise_per_quad[quad]
    mean_readnoise = np.mean(readnoise_per_pixel)
    # Estimate the readnoise per wavelength pixel.
    # Why is scale~2.35 necessary to reproduce the pipeline noise??
    ##scale = np.sqrt(8 * np.log(2))
    scale = (4 * np.pi) ** 0.25
    readnoise = readnoise_per_pixel * neff * scale
    # Calculate the pipeline variance in detected electrons.
    evar = flat ** 2 * inverse(eivar)
    # Predict what the variance in detected electrons should be.
    # Clip bins with electrons < 0 (due to read noise), to match what
    # the pipeline does (in sdssproc).
    evar_pred = np.clip(electrons, a_min=0, a_max=None) + readnoise ** 2
    # Calculate the actual flux / eflux calibration used by the pipeline.
    ecalib1 = flux * inverse(eflux)
    # Calculate the flux / eflux calibration from the components described at
    # https://trac.sdss3.org/wiki/BOSS/pipeline/FluxToPhotons
    ecalib2 = corr * distort * R * inverse(spcalib)
    # Compare the actual and predicted calibrations.
    nonzero = (ecalib1 > 0)
    absdiff = np.abs(ecalib1[nonzero] - ecalib2[nonzero])
    reldiff = absdiff / np.abs(ecalib1[nonzero] + ecalib2[nonzero])
    print('calibration check: max(absdiff) = {:.5f}, max(reldiff) = {:.5f}'
          .format(np.max(absdiff), np.max(reldiff)))
    # Calculate the flux variance.
    var = inverse(ivar)
    # Predict the flux variance by scaling the eflux variance.
    var_pred = ecalib1 ** 2 * inverse(eivar)
    # Limit plots to wavelengths where the flat is nonzero.
    nonzero = np.where(flat > 0)[0]
    wmin, wmax = wave[nonzero[[0, -1]]]
    # Truncate tails.
    evar_max = np.percentile(evar, 99)
    var_max = np.percentile(var, 99)
    # Initialize plots.
    fig, ax = plt.subplots(3, 2, figsize=(8.5, 11))
    ax = ax.flatten()
    ax[0].plot(wave, flux + sky, 'k.', ms=1, label='flux+sky')
    ax[0].plot(wave, var, 'r.', ms=1, label='var')
    ax[0].plot(wave, var_pred, 'b.', ms=1, label='pred')
    ax[0].set_xlim(wmin, wmax)
    ax[0].set_ylim(0, np.percentile(flux + sky, 99))
    ax[0].set_xlabel('Wavelength [A]')
    ax[0].set_ylabel('Flux, Variance [flux]')
    ax[0].legend(ncol=3)
    ax[1].plot(wave, eflux + esky, 'k.', ms=1, label='flux+sky')
    ax[1].plot(wave, evar, 'r.', ms=1, label='var')
    ax[1].plot(wave, evar_pred, 'b.', ms=1, label='pred')
    ax[1].plot(wave, readnoise, 'g-', label='readnoise')
    ax[1].set_xlim(wmin, wmax)
    ax[1].set_ylim(0, evar_max)
    ax[1].set_xlabel('Wavelength [A]')
    ax[1].set_ylabel('Flux, Variance [elec]')
    ax[1].legend(ncol=2)
    ax[2].plot(wave, flat, 'k-', label='both')
    ax[2].plot(wave, superflat, 'r-', label='super')
    ax[2].plot(wave, fiberflat, 'b-', label='fiber')
    ax[2].set_xlim(wmin, wmax)
    ax[2].set_xlabel('Wavelength [A]')
    ax[2].set_ylabel('Flat Field Correction')
    ax[2].legend(ncol=3)
    ax[3].plot(wave, ecalib1, 'k-', label='All')
    ax[3].plot(wave, corr, 'b-', label='corr')
    ax[3].plot(wave, 5 * inverse(spcalib), 'r-', label='5/spcalib')
    ax[3].plot(wave, distort, '-', c='magenta', label='distort')
    ax[3].plot(wave, R, 'g-', label='R')
    ax[3].set_xlim(wmin, wmax)
    ax[3].set_xlabel('Wavelength [A]')
    ax[3].set_ylabel('Flux Calibration [flux/elec]')
    ax[3].legend(ncol=3)
    '''
    ax[4].plot(var_pred, var, 'k.', ms=1)
    ax[4].set_xlim(0, var_max)
    ax[4].set_ylim(0, var_max)
    ax[4].set_xlabel('Predicted Variance [flux]')
    ax[4].set_ylabel('Pipeline Variance [flux]')
    '''
    excess_rms_per_pix = np.sqrt(evar - electrons)
    ax[4].plot(wave, excess_rms_per_pix, 'k.', ms=1)
    ax[4].plot(wave, readnoise, 'r-')
    ax[4].set_xlim(wmin, wmax)
    ax[4].set_ylim(0., np.percentile(excess_rms_per_pix, 95))
    ax[4].set_xlabel('Wavelength [A]')
    ax[4].set_ylabel('(Pipeline Var - Shot Noise)$^{1/2}$ [det elec]')
    ax[5].plot(evar_pred, evar, 'k.', ms=1)
    ax[5].plot([0, evar_max], [0, evar_max], 'r--')
    ax[5].set_xlim(0, evar_max)
    ax[5].set_ylim(0, evar_max)
    ax[5].set_xlabel('Predicted Variance [det elec$^2$]')
    ax[5].set_ylabel('Pipeline Variance [det elec$^2$]')
    title = '{}-{}-{} {}[{}]={} OBJ={} RDNOISE={:.1f}e'.format(
        plate, mjd, fiber, camera, expidx, expnum, objtype, mean_readnoise)
    plt.suptitle(title)
    plt.subplots_adjust(top=0.95, right=0.99)
    if save:
        plt.savefig(save)
plot_calib(fiber=1, band='blue')
plot_calib(fiber=1, band='red')
plot_calib(fiber=486, band='blue')
plot_calib(fiber=486, band='red')
plot_calib(fiber=12, band='red')
```
| github_jupyter |
# Current SARS-CoV-2 Viral Diversity Supports Transmission Rule-Out by Genomic Sequencing
When community transmission levels are high, there will be many coincidences in which individuals in the same workplace, classroom, nursing home, or other institution test positive for SARS-CoV-2 purely by chance. Genomic sequencing can separate such coincidences from true transmission clusters.
Demonstrating that an epidemiologically-linked cluster does not have genomic links provides reassurance to stakeholders that infection control practices are working. If an epidemiologically-linked cluster does have genomic links, transmission in the identified setting is more likely and decision-makers can focus on revising infection control practices or policies to prevent future transmission.
This is possible because the SARS-CoV-2 virus mutates, on average, once every 2 weeks. (See [nextstrain](https://nextstrain.org/ncov/gisaid/global?l=clock) for an up-to-date estimate. As of 10-01-21, the rate estimate was 23.87 substitutions per year, or one every 2.18 weeks.) If two people are part of the same transmission event (A infected B, or some C infected both A and B), then the genome sequences of the virus from each case involved will differ by at most 1 or 2 mutations.
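As a rough sanity check of the 2-SNP threshold (an illustration added here, not part of the original analysis), assume mutations accrue as a Poisson process at the clock rate above, and that directly linked cases are separated by about two weeks; the serial-interval value is an assumption:

```
import math

rate_per_week = 1 / 2.18      # substitutions per week (nextstrain clock, Oct 2021)
weeks_between_cases = 2       # assumed time separating directly linked infections
lam = rate_per_week * weeks_between_cases

# P(N <= 2) for N ~ Poisson(lam): fraction of linked pairs within the 2-SNP threshold
p_within_2 = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(3))
print(f"P(<=2 SNPs | direct link) = {p_within_2:.3f}")
```

Under these assumptions, roughly 93% of directly linked pairs fall within the 2-SNP threshold; assuming a longer interval between cases lowers that fraction.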

The converse is not necessarily true: it is possible for the genomes of virus to match even when the cases are epidemiologically quite distant, especially when superspreader events are involved. (Early in the pandemic we [documented](https://twitter.com/thebasepoint/status/1278057767983448064) instances where a viral genomic sequence was observed identically across dozens of countries, and persisted for months.)
Inspired by a potential program doing sequencing in schools, we asked: at this point in the pandemic, how sensitive is genomic sequencing for ruling out transmission? If two cases are unrelated, if the epi link is a coincidence, will genomic sequencing tell you that?
The more diverse the circulating population of SARS-CoV-2, the more powerful sequencing will be.
### Analysis
To answer this question, we picked an American city with a very high level of genomic sequencing being done: San Diego, California. The [SEARCH Alliance](https://searchcovid.info/) has been sequencing SARS-CoV-2 for almost a year and a half, and over the summer, they regularly sequenced 10-20% of the reported daily cases. We chose to analyze samples from August 2021, because it is after the Delta sweep, and so represents the current pandemic phase, and has a large number of genomes already sequenced and deposited into GISAID.
We downloaded the 2429 high-quality genomes from August 2021 from GISAID, and computed their pairwise SNP distances using `snp-dists` on a `mafft` alignment.

We define a *potential coincidental epi link* to be a pair of samples from the time period, with collection dates less than 2 weeks apart. We say that potential epi link would be ruled out if the genomes are more than 2 SNPs away from each other.
**We find that 99.5% of potential spurious epi links would be ruled out by sequencing!**
One way to think about this: a given person A was infected by one person B. Just 0.5% of the cases in the area have genotypes close enough to A's to plausibly belong to the infector B, so the chances that a coincidentally epi-linked case is also a genomic match are very low.
Analysis below:
```
# Inputs
fasta_file = 'data/1633204764832.sequences.fasta'
meta_file = 'data/1633204764832.metadata.tsv'
reference_file = 'data/ref.fasta'
# Intermediates
working_dir = 'scratch/'
aligned_file = working_dir + 'aligned.aln'
aligned_masked_file = working_dir + 'aligned_masked.aln'
dists_file = working_dir + 'snp-dists.tsv'
dists_masked_file = working_dir + 'snp-dists_masked.tsv'
```
Align whole genomes to reference.
```
# Flags just align to reference
!mafft \
--6merpair --thread 10 --keeplength --addfragments \
{fasta_file} {reference_file} > {aligned_file}
```
Mask sites
```
import pandas as pd
from Bio import AlignIO
import copy
algn = AlignIO.read(aligned_file, "fasta")
masked_algn = copy.deepcopy(algn) # Create copy to test differences
masked_vcf_url = "https://raw.githubusercontent.com/W-L/ProblematicSites_SARS-CoV2/master/problematic_sites_sarsCov2.vcf"
masked_vcf = pd.read_csv(masked_vcf_url, sep="\t", comment="#", names=["region", "pos", "ref", "alt", "x", "y", "mask", "comment"])
masked_sites = masked_vcf[masked_vcf["mask"] == "mask"]["pos"].tolist()
for i in masked_sites:
pos = i-1
for rec in masked_algn:
rec.seq = rec.seq[:pos] + "N" + rec.seq[pos+1:]
AlignIO.write(masked_algn, aligned_masked_file, "fasta")
```
Compute SNP distances between samples.
```
!/home/gk/code/snp-dists/snp-dists -j 20 \
-m {aligned_file} > {dists_file}
!/home/gk/code/snp-dists/snp-dists -j 20 \
-m {aligned_masked_file} > {dists_masked_file}
import datetime
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import tqdm
import seaborn as sns
%matplotlib inline
# Load metadata
meta = pd.read_csv(meta_file, sep='\t')
date_lookup = dict()
for sample, date in zip(meta['strain'], meta['date']):
if pd.isna(date):
print(f"Warning: {sample} is missing collection date.")
date_lookup[sample] = np.nan
else:
date_lookup[sample] = datetime.datetime.strptime(date, '%Y-%m-%d').toordinal()
# Close = SNP dist <= 2, Far = SNP dist > 2, for samples collected
# within 2 weeks of each other
def compute_pairs(dist_matrix_file):
close_pairs = 0
far_pairs = 0
distances = []
with open(dist_matrix_file, 'r') as infile:
for line in tqdm.tqdm(infile):
(sample1, sample2, distance) = line.split()
distance = int(distance)
if sample1 not in date_lookup or sample2 not in date_lookup:
continue
if abs(date_lookup[sample1] - date_lookup[sample2]) > 14:
continue
if sample1 == sample2:
continue
if distance <= 2:
close_pairs += 1
if distance > 2:
far_pairs += 1
distances.append(distance)
return close_pairs, far_pairs, distances
close_pairs, far_pairs, distances = compute_pairs(dists_file)
close_pairs, far_pairs
np.round(100*far_pairs/(close_pairs + far_pairs), 1)
masked_close_pairs, masked_far_pairs, masked_distances = compute_pairs(dists_masked_file)
masked_close_pairs, masked_far_pairs
np.round(100*masked_far_pairs/(masked_close_pairs + masked_far_pairs), 1)
masked_close_pairs, masked_far_pairs
close_pairs, far_pairs
```
## Measuring Diversity Within and Between Lineages
A more detailed histogram of all pairwise SNP distances between samples collected within 2 weeks of one another reveals a trimodal distribution.
Only pairs of samples on the far left (to the left of the 2-SNP red line) could have direct transmission links.
```
plt.subplots(figsize=(10,7.5))
plt.hist(distances, bins = 40)
plt.title("SNP distances between random pairs of samples\n(San Diego, August 2021, n=3830)")
plt.xlabel('SNP distance')
plt.ylabel('count')
plt.axvline(2, color='red')
plt.tight_layout()
plt.savefig('SNP-dist-san-diego.png')
plt.subplots(figsize=(10,7.5))
plt.hist(masked_distances, bins = 40)
plt.title("SNP masked distances between random pairs of samples\n(San Diego, August 2021, n=3830)")
plt.xlabel('SNP distance')
plt.ylabel('count')
plt.axvline(2, color='red')
plt.tight_layout()
plt.savefig('SNP-dist-masked-san-diego.png')
```
We then investigated the source of the three peaks for pairwise SNP distances: around 15, 35, and 70.
Presumably, these are generated by the typical distances within or between certain lineages. For example, the founding genotypes of B.1.617.2 and P.1 (Delta and Gamma) are some distance apart on the tree, and the distance from any Delta to any Gamma should be approximately the same. (With a mutation rate of ~1 mut/2 weeks, the distance should be approximately the number of weeks since Delta and Gamma diverged.)
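That back-of-the-envelope relationship can be written down directly (an illustrative calculation added here; the divergence times are taken from the surrounding text). Both lineages accumulate mutations independently after splitting, so the expected pairwise distance is twice the per-lineage accrual:

```
def expected_snp_distance(weeks_since_divergence, weeks_per_mutation=2.18):
    # Both lineages mutate independently after divergence, so the path length
    # through the common ancestor is twice the per-lineage accumulation.
    return 2 * weeks_since_divergence / weeks_per_mutation

# Delta vs Gamma: diverged ~86 weeks before the August 2021 samples
print(expected_snp_distance(86))   # ~79, vs the observed mean of 71
# AY.3 vs AY.25: most recent common ancestor ~16 weeks before sampling
print(expected_snp_distance(16))   # ~15, vs the observed mean of 16.6
```

Both predictions land close to the observed means reported below, supporting the interpretation of the peaks as between-lineage distances.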
```
meta.groupby('pangolin_lineage')['date'].count().sort_values(ascending=False)
lineages = meta['pangolin_lineage'].unique()
n_lineages = len(lineages)
lin_to_row = dict(zip(lineages, range(n_lineages)))
sample_to_lin = dict(zip(meta['strain'], meta['pangolin_lineage']))
counts = np.zeros((n_lineages, n_lineages))
distances = np.zeros((n_lineages, n_lineages))
with open(dists_file, 'r') as infile:
for line in tqdm.tqdm(infile):
(sample1, sample2, distance) = line.split()
if sample1 == sample2:
continue
distance = int(distance)
if sample1 not in date_lookup or sample2 not in date_lookup:
continue
if abs(date_lookup[sample1] - date_lookup[sample2]) > 14:
continue
lin1 = sample_to_lin[sample1]
lin2 = sample_to_lin[sample2]
# distances.loc[lin1, lin2] += distance
# counts.loc[lin1, lin2] += 1
idx1 = lin_to_row[lin1]
idx2 = lin_to_row[lin2]
distances[idx1, idx2] += distance
counts[idx1, idx2] += 1
distances = pd.DataFrame(index = lineages,
columns = lineages,
data = distances)
counts = pd.DataFrame(index = lineages,
columns = lineages,
data = counts)
mean_distances = distances/counts
```
Remove `None` (uncalled lineages) and AY.10 (too few pairs within the 2-week window).
```
md = mean_distances.drop(['None'], axis = 0).drop(['None'], axis = 1)
within_lineage_dist = (
md
.stack()
.reset_index()
.query('level_0 == level_1')
.sort_values(0)
)
plt.subplots(figsize=(10,7.5))
plt.barh(within_lineage_dist["level_0"].tolist(), within_lineage_dist[0].tolist())
plt.title("Within lineage distance in San Diego from Aug (n=3830)")
plt.tight_layout()
plt.savefig("within_lineage_dist.png", dpi = 300)
sns.clustermap(md.fillna(0), vmin=-5, mask=md.isna())
```
The distance between a typical Delta lineage (say, AY.25) and Gamma (P.1) is 71 SNPs. The lineages diverged in Jan 2020, or 86 weeks before these samples were collected.
```
mean_distances.loc['AY.25', 'P.1']
```
In contrast, the average distance between AY.3 and AY.25 samples is 16.6; these lineages share a most recent common ancestor in April 2021, 16 weeks before the samples were collected.
```
mean_distances.loc['AY.25', 'AY.3']
```
Even within PANGO lineages, there is significant diversity. The average distance within AY.25 is 8, meaning that most pairs could still be ruled out.
```
mean_distances.loc['AY.25', 'AY.25']
```
Inside Delta more broadly (samples still categorized as `B.1.617.2`), the diversity is even greater.
```
mean_distances.loc['B.1.617.2', 'B.1.617.2']
within_lineage_data = []
within_lineage_distances = []
lineage_distribution = {}
with open(dists_file, 'r') as infile:
for line in tqdm.tqdm(infile):
(sample1, sample2, distance) = line.split()
if sample1 == sample2:
continue
distance = int(distance)
if sample1 not in date_lookup or sample2 not in date_lookup:
continue
if abs(date_lookup[sample1] - date_lookup[sample2]) > 14:
continue
lin1 = sample_to_lin[sample1]
lin2 = sample_to_lin[sample2]
row = [sample1, sample2, lin1, lin2, distance]
within_lineage_data.append(row)
if lin1 == lin2:
within_lineage_distances.append(distance)
if lin1 in lineage_distribution:
lineage_distribution[lin1].append(distance)
else:
lineage_distribution[lin1] = [distance]
within_lineage_df = pd.DataFrame(within_lineage_data, columns = ["sample1", "sample2", "sample1_lineage", "sample2_lineage", "snp_distance"])
(np.array(within_lineage_distances) > 2).sum()/len(within_lineage_distances)
```
Even if we restrict ourselves to pairs of samples within the same PANGO lineage, 98.5% of pairs are still more than 2 SNPs apart.
This means that **for transmission cluster rule-out, it is essential to use actual SNP distances, and not just PANGO lineage assignments**. Relying on lineage assignments alone gives up substantial power.
```
lineage_match = 0
lineage_mismatch = 0
with open(dists_file, 'r') as infile:
for line in tqdm.tqdm(infile):
(sample1, sample2, distance) = line.split()
if sample1 == sample2:
continue
distance = int(distance)
if sample1 not in date_lookup or sample2 not in date_lookup:
continue
if abs(date_lookup[sample1] - date_lookup[sample2]) > 14:
continue
lin1 = sample_to_lin[sample1]
lin2 = sample_to_lin[sample2]
if lin1 == lin2:
lineage_match += 1
else:
lineage_mismatch += 1
100 - np.round(lineage_mismatch/(lineage_match + lineage_mismatch)*100,2)
# Within lineage distances
plt.hist(lineage_distribution['B.1.617.2'], bins=20)
plt.title("SNP Distances within B.1.617.2")
plt.axvline(2, color='red')
plt.hist(lineage_distribution['P.1'], bins=20)
plt.title("SNP Distances within P.1")
plt.axvline(2, color='red')
# Within lineage distances
plt.hist(lineage_distribution['B.1.621'], bins=20)
plt.title("SNP Distances within B.1.621")
plt.axvline(2, color='red')
plt.hist(lineage_distribution['B.1.617.2'], bins=20)
```
## Rule-in?
How reliable is genetic confirmation of an epi link?
We can think of this in a Bayesian way:
$$P(\textrm{transmission}| \textrm{epi}, \textrm{genomics}) = \frac{P(\textrm{genomics}| \textrm{epi}, \textrm{transmission})*P(\textrm{transmission} | \textrm{epi})}{P(\textrm{genomics}|\textrm{epi})}.$$
The probability of seeing a genomic link (<= 2 SNPs) given transmission is very high, say 99% (the 1% accounts for a burst of mutations, as in a long latent infection, or sample mixups in the sequencing lab).
The prior probability of seeing transmission given the epi link alone, $P(\textrm{transmission} | \textrm{epi})$ depends on the circumstance, but might reasonably range from 1% (same school but no shared classes) to 90% (same household).
The denominator is a sum of two terms: the probability of seeing a genomic link given transmission, weighted by the prior probability of transmission given the epi data, plus the probability of seeing a genomic link given no transmission, weighted by the prior probability of no transmission given the epi data.
$$P(\textrm{genomics}|\textrm{epi}) = P(\textrm{genomics}| \textrm{epi}, \textrm{transmission})*P(\textrm{transmission} | \textrm{epi}) \\+ P(\textrm{genomics}| \textrm{epi}, \textrm{no transmission})*P(\textrm{no transmission} | \textrm{epi})$$
The key variable that we approximated above is $P(\textrm{genomics}| \textrm{epi}, \textrm{no transmission})$, the probability of seeing a genomic link given the epi circumstance *if transmission did not take place*. That is a measure of the genomic diversity of the community from which these individuals were drawn. We estimated this to be 0.5% in San Diego at this time, but if a smaller community is considered (e.g., a neighborhood, or a socio-economic/demographic group), then data from within that community should be used.
If we say that the probability of a genomic link given no transmission were 2% (accounting for a smaller community with less genomic diversity), then we would have:
$$P(\textrm{transmission}| \textrm{epi}, \textrm{genomics}) = \frac{0.99*p}{0.99*p + 0.02*(1-p)},$$
where $p$ was the prior probability of transmission given the epi evidence alone.
```
def posterior_probability(prior, background=0.02, sequencing_accuracy=0.99):
return sequencing_accuracy*prior/(sequencing_accuracy*prior + background*(1-prior))
```
If the epi evidence were reasonably strong, say, p = 75% (say, in the case of household transmission), the genomics would boost it to 99.3%.
```
posterior_probability(0.75)
```
If the epi evidence were very weak, say, p = 0.1% (say, if the two cases are just in the same neighborhood), then genomics would boost it to 4.7%.
```
posterior_probability(0.001)
```
# Import required libraries
```
import pandas as pd # for manipulating data
import numpy as np # Manipulating arrays
import keras # High level neural network API
import tensorflow as tf # Framework use for dataflow
from sklearn.model_selection import train_test_split # To split the data into train and validation
from tensorflow.keras.models import Sequential # To build neural network
from tensorflow.keras.layers import Dense # To add the dense layer
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score # To calculate the metrics
from sklearn.metrics import classification_report # To print the report
from sklearn.metrics import precision_recall_curve # To print the precision-recall curve
from tensorflow.keras.optimizers import Adam # optimizer
import matplotlib.pyplot as plt # To plot the graph
from tensorflow.keras.models import load_model # To load the model
from keras.utils import CustomObjectScope #Provides a scope that changes to _GLOBAL_CUSTOM_OBJECTS cannot escape.
from keras.initializers import glorot_uniform #Initializations define the way to set the initial random weights of Keras layers.
# Read the file
df = pd.read_csv('musk_csv.csv')
df.head()
```
# Pre-processing
```
df.describe()
print("Length of Musk is :",len(df[df['class']== 1]))
print("Length of Non-Musk is :",len(df[df['class']== 0]))
# Check whether there are any null values
df.isnull().sum()
# Drop unnecessary columns(ID ,molecule_name,conformation_name)
df = df.drop(columns=['ID','molecule_name','conformation_name'])
df.head()
```
### Split the data into train and test
```
train, test= train_test_split(df,test_size=0.20,random_state=6)
print(f"Row in training set:{len(train)}\nRow in testing set:{len(test)} ")
train_X = train[train.columns[:-1]]
train_Y = train[train.columns[-1]]
test_X = test[test.columns[:-1]]
test_Y = test[test.columns[-1]]
# Check the shape of the data
train_X.shape
train_Y.shape
```
# Small model
### Named "small model" because it uses only one hidden layer of 60 nodes with a ReLU activation, followed by an output layer with a single node and a sigmoid activation
### The model will predict whether a given compound is Non-Musk (0) or Musk (1)
```
model = Sequential()
model.add(Dense(60, input_dim=166, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
```
### The model will be fit using the binary cross entropy loss function and we will use the efficient Adam version of stochastic gradient descent. The model will also monitor the classification accuracy metric
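To make the loss concrete, binary cross entropy averages -(y*log(p) + (1-y)*log(1-p)) over the samples. A minimal NumPy sketch, using made-up predictions for illustration only:

```
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip predictions to avoid log(0), as Keras does internally
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.1, 0.8, 0.4])   # hypothetical sigmoid outputs
print(binary_crossentropy(y_true, y_pred))  # ~0.34
```

Confident, correct predictions drive the loss toward 0, while confident wrong ones (like the 0.4 for a positive sample) dominate the average.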
```
# Compile model
model.compile(optimizer='adam', loss='binary_crossentropy',
metrics=['accuracy'])
```
### Fitting the model for 9 epochs with a batch size of 100
```
history = model.fit(train_X,train_Y, epochs = 9, batch_size=100, validation_data=(test_X, test_Y) )
```
### Visualize the model accuracy and model loss
```
def plot_learningCurve(history, epoch):
# Plot training & validation accuracy values
epoch_range = range(1, epoch+1)
plt.plot(epoch_range, history.history['acc'])
plt.plot(epoch_range, history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(epoch_range, history.history['loss'])
plt.plot(epoch_range, history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
plot_learningCurve(history, 9)
```
# Make predictions
```
preds= model.predict(test_X)
preds_classes = model.predict_classes(test_X)
```
# Calculate metrics
### Three metrics, in addition to classification accuracy, that are commonly required for a neural network model on a binary classification problem are:
### Accuracy : (TP + TN) / (TP + TN + FP + FN)
### Precision : TP / (TP + FP)
### Recall : TP / (TP + FN)
### F1 Score : 2 TP / (2 TP + FP + FN)
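These formulas can be checked by hand from the confusion-matrix counts. A small illustrative sketch (hypothetical labels and predictions, not the musk data):

```
import numpy as np

def confusion_counts(y_true, y_pred):
    # Tally the four cells of the binary confusion matrix
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp, tn, fp, fn

y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1])   # hypothetical predictions
tp, tn, fp, fn = confusion_counts(y_true, y_pred)
accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * tp / (2 * tp + fp + fn)
print(accuracy, precision, recall, f1)
```

`classification_report`, used below, computes the same quantities per class.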
```
# Reduce to a 1d array before calculating the metrics
preds = preds[:, 0]
preds_classes =preds_classes[:, 0]
print(classification_report(test_Y, preds_classes))
precision, recall, thresholds = precision_recall_curve(test_Y, preds_classes)
# create plot
plt.plot(precision, recall, label='Precision-recall curve')
_ = plt.xlabel('Precision')
_ = plt.ylabel('Recall')
_ = plt.title('Precision-recall curve')
_ = plt.legend(loc="lower left")
```
# Large model
### Named "large model" because more than one hidden layer is used: 60 nodes in the 1st layer and 30 nodes in the 2nd layer, along with an output layer with a single node
```
model = Sequential()
model.add(Dense(60, input_dim=166, activation='relu'))
model.add(Dense(30, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
# Compile model
model.compile(optimizer='adam', loss='binary_crossentropy',
metrics=['accuracy'])
history = model.fit(train_X,train_Y, epochs = 9, batch_size=100, validation_data=(test_X, test_Y) )
```
### Visualize the model accuracy and model loss
```
def plot_learningCurve(history, epoch):
# Plot training & validation accuracy values
epoch_range = range(1, epoch+1)
plt.plot(epoch_range, history.history['acc'])
plt.plot(epoch_range, history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(epoch_range, history.history['loss'])
plt.plot(epoch_range, history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
plot_learningCurve(history, 9)
```
# Make predictions
```
preds= model.predict(test_X)
preds_classes = model.predict_classes(test_X)
```
# Calculate metrics
```
# Reduce to a 1d array before calculating the metrics
preds = preds[:, 0]
preds_classes =preds_classes[:, 0]
print(classification_report(test_Y, preds_classes))
precision, recall, thresholds = precision_recall_curve(test_Y, preds_classes)
# create plot
plt.plot(precision, recall, label='Precision-recall curve')
_ = plt.xlabel('Precision')
_ = plt.ylabel('Recall')
_ = plt.title('Precision-recall curve')
_ = plt.legend(loc="lower left")
```
# Model save
### This function saves:
- The architecture of the model, allowing it to be re-created
- The weights of the model
- The training configuration (loss, optimizer)
- The state of the optimizer, allowing training to resume exactly where it left off
```
model.save('model.h5')
```
# Load model
```
with CustomObjectScope({'GlorotUniform': glorot_uniform()}):
new_model = load_model('model.h5')
```
<h1 align="center">ML For Defect Analysis</h1>
## 1. Building the Model
```
import warnings
warnings.filterwarnings('ignore')
import os
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
#Split the data into train, validation & test
import splitfolders
input_folder = 'Data'
# Split with a ratio.
# To split into only training and validation sets, pass a 2-tuple as `ratio`, e.g. `(.8, .2)`.
#Train, val, test
splitfolders.ratio(input_folder, output="Data_split",
seed=42, ratio=(.75, .2, .05),
group_prefix=None)
train_dir = os.path.join(os.getcwd(), 'Data_split\\train')
validation_dir = os.path.join(os.getcwd(), 'Data_split\\val')
# Directory with our training 'proper' pictures
train_proper_dir = os.path.join(train_dir, 'proper')
# Directory with our training 'defective' pictures
train_defective_dir = os.path.join(train_dir, 'defective')
# Directory with our validation 'proper' pictures
validation_proper_dir = os.path.join(validation_dir, 'proper')
# Directory with our validation 'defective' pictures
validation_defective_dir = os.path.join(validation_dir, 'defective')
# The model has already been trained. Run this codeblock only to re-train the model.
'''
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
'''
# The model has already been trained. Run this codeblock only to re-train the model.
'''
#Compiling the model
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=1e-4),
metrics=['accuracy'])
'''
# The model has already been trained. Run this codeblock only to re-train the model.
'''
# Image Augmentation
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
'''
# The model has already been trained. Run this codeblock only to re-train the model.
'''
# Flow training images in batches of 16 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=16,
class_mode='binary')
# Flow validation images in batches of 16 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=16,
class_mode='binary')
'''
# The model has already been trained. Run this codeblock only to re-train the model.
'''
history = model.fit(
train_generator,
steps_per_epoch=50, # 2000 images = batch_size * steps
epochs=20,
validation_data=validation_generator,
validation_steps=30, # 1000 images = batch_size * steps
verbose=2)
'''
#This will work only if you have run the previous codeblocks
'''
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
'''
#Saving the model
#model.save('model.h5') Current best model has train_acc = 82.75 and val_acc = 84.93
```
## 2. Loading The Model
```
import warnings
warnings.filterwarnings("ignore")
import os
import tensorflow as tf
from IPython.display import Image, display
from tensorflow.keras.models import load_model
new_model = load_model('model_tacc8275_val8493.h5')
new_model.summary()
img_path = os.path.join(os.getcwd(), 'Data\\Defective\\Defective (5).jpg')
display(Image(filename=img_path))
listOfImageNames = [img_path]
#For multiple images
'''
from IPython.display import Image, display
listOfImageNames = [os.path.join(os.getcwd(), 'Data\\Defective\\Defective (1).jpg'),
os.path.join(os.getcwd(), 'Data\\Defective\\Defective (2).jpg'),
os.path.join(os.getcwd(), 'Data\\Defective\\Defective (3).jpg'),
os.path.join(os.getcwd(), 'Data\\Defective\\Defective (4).jpg'),
os.path.join(os.getcwd(), 'Data\\Defective\\Defective (5).jpg'),
os.path.join(os.getcwd(), 'Data\\Defective\\Defective (6).jpg'),
os.path.join(os.getcwd(), 'Data\\Defective\\Defective (7).jpg'),
]
for imageName in listOfImageNames:
display(Image(filename=imageName))
'''
import numpy as np
from keras.preprocessing import image
#for single image
path = img_path
img = image.load_img(path, target_size=(150, 150))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = new_model.predict(images, batch_size=10)
print(classes[0])
if classes[0]<0.5:
print("Bottle is Defective")
else:
print("Bottle is Proper")
#for multiple images
'''
for fn in listOfImageNames:
path = fn
img = image.load_img(path, target_size=(150, 150))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = new_model.predict(images, batch_size=10)
print(classes[0])
if classes[0]>0.5:
print("Bottle is proper")
else:
print("Bottle is defective")
'''
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
import seaborn as sns
palette = 'muted'
sns.set_palette(palette)
sns.set_color_codes(palette)
# Sharpen figure rendering on Mac (retina displays)
%config InlineBackend.figure_format = 'retina'
mu_params = [-1, 0, 1]
sd_params = [0.5, 1, 1.5]
x = np.linspace(-7, 7, 100)
f, ax = plt.subplots(len(mu_params), len(sd_params), sharex=True, sharey=True)
for i in range(3):
for j in range(3):
mu = mu_params[i]
sd = sd_params[j]
y = stats.norm(mu, sd).pdf(x)
ax[i,j].plot(x, y)
ax[i,j].plot(0, 0,
label="$\\mu$ = {:3.2f}\n$\\sigma$ = {:3.2f}".format(mu, sd), alpha=0)
ax[i,j].legend(fontsize=12)
ax[2,1].set_xlabel('$x$', fontsize=16)
ax[1,0].set_ylabel('$pdf(x)$', fontsize=16)
plt.tight_layout()
plt.savefig('B04958_01_01.png', dpi=300, figsize=(5.5, 5.5))
data = np.genfromtxt('mauna_loa_CO2.csv', delimiter=',')
plt.plot(data[:,0], data[:,1])
plt.xlabel('$year$', fontsize=16)
plt.ylabel('$CO_2 (ppmv)$', fontsize=16)
plt.savefig('B04958_01_02.png', dpi=300, figsize=(5.5, 5.5))
n_params = [1, 2, 4]
p_params = [0.25, 0.5, 0.75]
x = np.arange(0, max(n_params)+1)
f, ax = plt.subplots(len(n_params), len(p_params), sharex=True,
sharey=True)
for i in range(3):
for j in range(3):
n = n_params[i]
p = p_params[j]
y = stats.binom(n=n, p=p).pmf(x)
ax[i,j].vlines(x, 0, y, colors='b', lw=5)
ax[i,j].set_ylim(0, 1)
ax[i,j].plot(0, 0, label="n = {:3.2f}\np = {:3.2f}".format(n, p), alpha=0)
ax[i,j].legend(fontsize=12)
ax[2,1].set_xlabel('$\\theta$', fontsize=14)
ax[1,0].set_ylabel('$p(y|\\theta)$', fontsize=14)
ax[0,0].set_xticks(x)
plt.savefig('B04958_01_03.png', dpi=300, figsize=(5.5, 5.5))
params = [0.5, 1, 2, 3]
x = np.linspace(0, 1, 100)
f, ax = plt.subplots(len(params), len(params), sharex=True,
sharey=True)
for i in range(4):
for j in range(4):
a = params[i]
b = params[j]
y = stats.beta(a, b).pdf(x)
ax[i,j].plot(x, y)
ax[i,j].plot(0, 0, label="$\\alpha$ = {:3.2f}\n$\\beta$ = {:3.2f}".format(a, b), alpha=0)
ax[i,j].legend(fontsize=12)
ax[3,0].set_xlabel('$\\theta$', fontsize=14)
ax[0,0].set_ylabel('$p(\\theta)$', fontsize=14)
plt.savefig('B04958_01_04.png', dpi=300, figsize=(5.5, 5.5))
theta_real = 0.35
trials = [0, 1, 2, 3, 4, 8, 16, 32, 50, 150]
data = [0, 1, 1, 1, 1, 4, 6, 9, 13, 48]
beta_params = [(1, 1), (0.5, 0.5), (20, 20)]
dist = stats.beta
x = np.linspace(0, 1, 100)
for idx, N in enumerate(trials):
if idx == 0:
plt.subplot(4,3, 2)
else:
plt.subplot(4,3, idx+3)
y = data[idx]
for (a_prior, b_prior), c in zip(beta_params, ('b', 'r', 'g')):
p_theta_given_y = dist.pdf(x, a_prior + y, b_prior + N - y)
plt.plot(x, p_theta_given_y, c)
plt.fill_between(x, 0, p_theta_given_y, color=c, alpha=0.6)
plt.axvline(theta_real, ymax=0.3, color='k')
plt.plot(0, 0, label="{:d} experiments\n{:d} heads".format(N, y), alpha=0)
plt.xlim(0,1)
plt.ylim(0,12)
plt.xlabel(r"$\theta$")
plt.legend()
plt.gca().axes.get_yaxis().set_visible(False)
plt.tight_layout()
plt.savefig('B04958_01_05.png', dpi=300, figsize=(5.5, 5.5))
def naive_hpd(post):
sns.kdeplot(post)
HPD = np.percentile(post, [2.5, 97.5])
plt.plot(HPD, [0, 0], label='HPD {:.2f} {:.2f}'.format(*HPD),
linewidth=8, color='k')
plt.legend(fontsize=16);
plt.xlabel(r"$\theta$", fontsize=14)
plt.gca().axes.get_yaxis().set_ticks([])
np.random.seed(1)
post = stats.beta.rvs(5, 11, size=1000)
naive_hpd(post)
plt.xlim(0, 1)
plt.savefig('B04958_01_07.png', dpi=300, figsize=(5.5, 5.5))
np.random.seed(1)
gauss_a = stats.norm.rvs(loc=4, scale=0.9, size=3000)
gauss_b = stats.norm.rvs(loc=-2, scale=1, size=2000)
mix_norm = np.concatenate((gauss_a, gauss_b))
naive_hpd(mix_norm)
plt.savefig('B04958_01_08.png', dpi=300, figsize=(5.5, 5.5))
```
On how to compute the **highest posterior density (HPD) interval**, Chapter 25 of *Doing Bayesian Data Analysis* gives a detailed discussion along with an R implementation.
The original author's logic for computing the HPD interval of a multimodal distribution is: first compute a KDE from the raw data, then discretize it; after sorting, use the `alpha` parameter to filter out the low-probability values, and then, after sorting by the original data values again, read off the corresponding interval(s). See the `hpd.py` file in this directory for details.
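A minimal sketch of that procedure (not the actual `hpd.py` implementation; the grid size, SciPy's `gaussian_kde`, and the run-splitting details are illustrative assumptions):

```python
import numpy as np
from scipy import stats

def hpd_grid(sample, alpha=0.05, n_grid=2000):
    # HPD region(s) from samples: fit a KDE, discretize it on a grid,
    # keep the most probable grid points until they hold 1 - alpha of the
    # mass, then report the contiguous interval(s) those points form.
    sample = np.asarray(sample)
    kde = stats.gaussian_kde(sample)
    x = np.linspace(sample.min(), sample.max(), n_grid)
    y = kde(x)
    y = y / y.sum()                      # normalize to a discrete pmf
    order = np.argsort(y)[::-1]          # most probable grid points first
    keep = np.sort(order[np.cumsum(y[order]) <= 1 - alpha])
    breaks = np.where(np.diff(keep) > 1)[0]
    intervals, start = [], keep[0]
    for b in breaks:                     # one interval per contiguous run
        intervals.append((x[start], x[keep[b]]))
        start = keep[b + 1]
    intervals.append((x[start], x[keep[-1]]))
    return intervals
```

For a clearly bimodal sample this returns two disjoint intervals, one per mode, which is exactly where the single-interval `naive_hpd` above goes wrong.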
```
from plot_post import plot_post
plot_post(mix_norm, roundto=2, alpha=0.05)
plt.legend(loc=0, fontsize=16)
plt.xlabel(r"$\theta$", fontsize=14)
plt.savefig('B04958_01_09.png', dpi=300, figsize=(5.5, 5.5))
```
| github_jupyter |
```
## tensorflow-gpu==2.3.0rc1 has a bug where load_weights fails after calling inference, so pin 2.2.0
!pip install tensorflow-gpu==2.2.0
import yaml
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow_tts.processor.ljspeech import LJSpeechProcessor
from tensorflow_tts.processor.ljspeech import symbols, _symbol_to_id
from tensorflow_tts.inference import AutoConfig
from tensorflow_tts.inference import TFAutoModel
processor = LJSpeechProcessor(None, "english_cleaners")
input_text = "i love you so much."
input_ids = processor.text_to_sequence(input_text)
config = AutoConfig.from_pretrained("../examples/fastspeech2/conf/fastspeech2.v1.yaml")
fastspeech2 = TFAutoModel.from_pretrained(
config=config,
pretrained_path=None, # "../examples/fastspeech2/checkpoints/model-150000.h5",
is_build=False, # don't build model if you want to save it to pb. (TF related bug)
name="fastspeech2"
)
```
# Save to Pb
```
mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
f0_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32),
energy_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32),
)
fastspeech2.load_weights("../examples/fastspeech2/checkpoints/model-150000.h5")
# save model into pb and do inference. Note that signatures should be a tf.function with input_signatures.
tf.saved_model.save(fastspeech2, "./test_saved", signatures=fastspeech2.inference)
```
# Load and Inference
```
fastspeech2 = tf.saved_model.load("./test_saved")
input_text = "There’s a way to measure the acute emotional intelligence that has never gone out of style."
input_ids = processor.text_to_sequence(input_text)
mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
f0_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32),
energy_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32)
)
mel_after = tf.reshape(mel_after, [-1, 80]).numpy()
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title(f'Predicted Mel-after-Spectrogram')
im = ax1.imshow(np.rot90(mel_after), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
plt.close()
```
# Run inference on another input to check dynamic shapes
```
input_text = "The Commission further recommends that the Secret Service coordinate its planning as closely as possible with all of the Federal agencies from which it receives information."
input_ids = processor.text_to_sequence(input_text)
mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
f0_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32),
energy_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32)
)
mel_after = tf.reshape(mel_after, [-1, 80]).numpy()
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title(f'Predicted Mel-after-Spectrogram')
im = ax1.imshow(np.rot90(mel_after), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
plt.close()
```
```
%load_ext autoreload
%autoreload 2
# getting the utils file here
import os, sys
import xbos_services_getter as xsg
import datetime
import calendar
import pytz
import numpy as np
import pandas as pd
import itertools
import time
from pathlib import Path
import pickle
import yaml
pd.set_option('display.max_columns', None)
import process_indoor_data as pid
import create_models as ctm
import matplotlib.pyplot as plt
building_zone_names_stub = xsg.get_building_zone_names_stub()
all_buildings_zones = xsg.get_all_buildings_zones(building_zone_names_stub)
# Get predictions
indoor_temperature_prediction_stub = xsg.get_indoor_temperature_prediction_stub()
t_in = 79.69
t_out = 70
t_prev = 80
action = 1
building = "jesse-turner-center"
zone = "hvac_zone_kitchen"
current_time = datetime.datetime(year=2018, month=7, day=1).replace(tzinfo=pytz.utc)
other_zone_temperatures = {}
for iter_zone in all_buildings_zones[building]:
if iter_zone != zone:
other_zone_temperatures[iter_zone] = 70
xsg.get_indoor_temperature_prediction(indoor_temperature_prediction_stub, building, zone, current_time, action, t_in, t_out, t_prev,
other_zone_temperatures)
prediction_window = "5m"
seconds_prediction_window = xsg.get_window_in_sec(prediction_window)
start = datetime.datetime(year=2018, month=7, day=1).replace(tzinfo=pytz.utc)
end = start + datetime.timedelta(days=10)
data_stub = xsg.get_indoor_historic_stub()
bldg = "jesse-turner-center"
zone = "hvac_zone_kitchen"
a = xsg.get_actions_historic(data_stub, bldg, zone, start, end, "1m")
# b = xsg.get_indoor_temperature_historic(data_stub, bldg, zone, start, end, "1m")
a
data_stub = xsg.get_indoor_historic_stub()
for bldg in all_buildings_zones.keys():
print("Getting bldg", bldg)
for zone in all_buildings_zones[bldg]:
print("getting zone", zone)
s_time = time.time()
a = xsg.get_actions_historic(data_stub, bldg, zone, start, end, "1m")
b = xsg.get_indoor_temperature_historic(data_stub, bldg, zone, start, end, "1m")
print("Temp shape", b.shape)
print("Action shape", a.shape)
print("Took:", time.time() - s_time)
print("")
bldg = "jesse-turner-center"
zone = "hvac_zone_kitchen"
cache_name = "indoor_temperature_historic_cache" # "action_historic_cache"
end = datetime.datetime(year=2019, month=4, day=1).replace(tzinfo=pytz.utc)
start = end - datetime.timedelta(days=10)
window = "1m"
loaded_data = pid.load_data(bldg, zone, cache_name)
err = xsg.check_data(loaded_data, start, end, window)
print(loaded_data)
prediction_window = "5m"
seconds_prediction_window = xsg.get_window_in_sec(prediction_window)
start = datetime.datetime(year=2018, month=7, day=10).replace(tzinfo=pytz.utc)
end = start + datetime.timedelta(days=10)  # day count was missing in the original cell; 10 days matches the earlier cells
bldg = "berkeley-corporate-yard"
zone = "hvac_zone_parks_assembly"
raw_data_granularity = "1m"
train_ratio = 0.7
is_second_order = True
use_occupancy = False
curr_action_timesteps = 0
prev_action_timesteps = -1
method = "OLS"
at = time.time()
ctm.create_model(bldg, zone, start, end, prediction_window, raw_data_granularity, train_ratio, is_second_order,
use_occupancy,
curr_action_timesteps, prev_action_timesteps, method, True)
print(time.time() - at)
```
# Data Getters
Ground truth for how to store and retrieve data. Files are named `building_zone.pkl`; no distinction is made between training and test sets.
```
def store_data(data, building, zone):
data_dir = Path.cwd() / "services_data"
if not os.path.isdir(data_dir):
os.makedirs(data_dir)
file_path = data_dir / (building + "_" + zone + ".pkl")
    # note: the original returned None here when the file did *not* exist,
    # which prevented the cache from ever being written for a new zone
with open(str(file_path), "wb") as f:
pickle.dump(data, f)
def load_data(building, zone):
data_dir = Path.cwd() / "services_data"
if not os.path.isdir(data_dir):
return None
file_path = data_dir / (building + "_" + zone + ".pkl")
if not os.path.isfile(file_path):
return None
with open(str(file_path), "rb") as f:
return pickle.load(f)
```
# Get the data
num_start and num_end are hyperparameters.
```
building = "avenal-animal-shelter"
zone = all_buildings_zones[building][0]
prediction_window = "5m"
seconds_prediction_window = xsg.get_window_in_sec(prediction_window)
# daterange
start = datetime.datetime(year=2018, month=7, day=1).replace(tzinfo=pytz.utc)
end = start + datetime.timedelta(days=130)
# TODO add check that the data we have stored is at least as long and has right prediction_window
# TODO Fix how we deal with nan's. some zone temperatures might get set to -1.
loaded_data = load_data(building, zone)
if loaded_data is None:
processed_data = pid.get_preprocessed_data(building, zone, start, end, prediction_window)
store_data(processed_data, building, zone)
else:
processed_data = loaded_data
processed_data.shape
processed_data.head()
```
### Add features to prepocessed_data
```
processed_data = pid.indoor_data_cleaning(processed_data)
processed_data = pid.add_feature_last_temperature(processed_data)
processed_data = pid.convert_categorical_action(processed_data, num_start=4, num_end=4, interval_thermal=seconds_prediction_window)
processed_data.head()
```
# Linear Regressor with all features
This will use all available features and make it into a linear regressor.
### Get training and test data
```
train_ratio = 0.7 # how much training data to take from given data
N = processed_data.shape[0] # number of datapoints
train_data = processed_data.iloc[:int(N*train_ratio)]
test_data = processed_data.iloc[int(N*train_ratio):]
columns_to_drop = ["action", "action_prev", "dt", "action_duration"]
# Training data
train_data = train_data[train_data["dt"] == seconds_prediction_window]
train_data = train_data.drop(columns_to_drop, axis=1)
train_y = train_data["t_next"].interpolate(method="time")
train_X = train_data.drop(["t_next"], axis=1).interpolate(method="time")
# Test data
test_data = test_data[test_data["dt"] == seconds_prediction_window]
test_data = test_data.drop(columns_to_drop, axis=1)
test_y = test_data["t_next"].interpolate(method="time")
test_X = test_data.drop(["t_next"], axis=1).interpolate(method="time")
train_X.head()
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
reg = LinearRegression().fit(train_X, train_y)
reg.score(train_X, train_y)
```
# First Order Linear Regressor
```
train_ratio = 0.7 # how much training data to take from given data
N = processed_data.shape[0] # number of datapoints
train_data = processed_data.iloc[:int(N*train_ratio)]
test_data = processed_data.iloc[int(N*train_ratio):]
action_to_drop = []
for c in train_data.columns:
    if "action" in c and c != "action":  # fixed: the original compared len("action") against the string c
action_to_drop.append(c)
columns_to_drop = action_to_drop + ["t_prev", "action_prev", "dt", "action_duration"]
# train data
train_data = train_data[train_data["dt"] == seconds_prediction_window]
train_data = train_data.drop(columns_to_drop, axis=1)
train_y = train_data["t_next"].interpolate(method="time")
train_X = train_data.drop(["t_next"], axis=1).interpolate(method="time")
# test data
test_data = test_data[test_data["dt"] == seconds_prediction_window]
test_data = test_data.drop(columns_to_drop, axis=1)
test_y = test_data["t_next"].interpolate(method="time")
test_X = test_data.drop(["t_next"], axis=1).interpolate(method="time")
train_X.head()
lin_reg = LinearRegression().fit(train_X, train_y)
```
# Create Tests for Regressor
This will try to do forecasting; if the required data is missing, the function returns `None`.
```
def forecasting(thermal_model, data, start, duration, seconds_prediction_window, dt, is_second_order=False):
true_data = data.loc[start:]
if true_data.index[-1] < start + datetime.timedelta(seconds=duration) or true_data.index[0] != start:
return None
forecast = []
curr_time = true_data.index[0]
while curr_time <= start + datetime.timedelta(seconds=duration):
if curr_time not in true_data.index:
return None
curr_row = true_data.loc[curr_time].to_frame().T
if (len(forecast) < 2 and is_second_order) or (len(forecast) < 1):
forecast.append(float(curr_row["t_in"].values))
else:
curr_row["t_in"] = forecast[-1]
if is_second_order:
curr_row["t_prev"] = forecast[-2]
forecast.append(thermal_model.predict(curr_row)[0])
curr_time += datetime.timedelta(seconds=float(dt.loc[curr_time]))
forecast = forecast[:-1] # otherwise might predict beyond the set end
return pd.Series(index=true_data.index[:len(forecast)], data=forecast)
dt = processed_data.loc[test_X.index[0]:test_X.index[-1]]["dt"]
N = test_X.shape[0]
forecasts = []
for i in range(N):
start_time = test_X.index[i]
forecast = forecasting(reg, test_X, start_time, 6*60*60, 5*60, dt, is_second_order=True)
if forecast is not None:
forecasts.append(forecast)
if i % 500 == 0:
print("Iteration:", i)
print("Successful Forecasts:", len(forecasts))
```
# Get RMSE plot
A rough evaluation: each forecast contains at least duration / interval predictions, so we keep only that many points per forecast; otherwise the array dimensions don't line up.
```
errs = []
least_points = int(6*60*60 / (5*60))
for i in range(len(forecasts)):
forecast = forecasts[i]
real_data = test_X.loc[forecast.index]["t_in"]
if real_data.shape[0] >= least_points:
errs.append((forecast - real_data).values[:least_points])
print("Num before", len(forecasts))
print("Num remaining", len(errs))
errs = np.vstack(errs)
errs = np.square(errs)
errs = np.mean(errs, axis=0)
errs = np.sqrt(errs)
errs.shape
errs = []
for i in range(len(forecasts)):
forecast = forecasts[i]
real_data = test_X.loc[forecast.index]["t_in"]
errs.append(forecast - real_data) # assuming all forecasts have the same length.
errs = np.vstack(errs)
errs = np.square(errs)
errs = np.mean(errs, axis=0)
errs = np.sqrt(errs)
plt.plot(errs)
plt.plot(errs)
# occ_plot = pd.Series(index=date_range, data=test_data["occ"][date_range])
real_plot = test_X.loc[forecast.index]["t_in"]
# real_outside_plot = new_pred_horizon.loc[date_range]["t_out"]
# real_action_plt = new_pred_horizon.loc[date_range]["action"]
# real_action_plt *= 5
# real_action_plt += 65
# real_action_plt.plot(label="action", color="darkblue")
# real_outside_plot.plot(label="t_out")
real_plot.plot(label="real", color="goldenrod")
forecast.plot(label="forecast", color="firebrick")
# first_tm_plot.plot(label="First Order TM", color="firebrick")
# plt.show()
# # second_tm_plot.plot(label="Second order TM", color="steelblue")
# lti_tm_plot.plot(label="LTI TM", color="mediumpurple")
# plt.show()
# random_forest_plot.plot(label="Random Forest", color="black")
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
# print("plot")
# print(occ_plot)
# occ_plot.plot()
# plt.show()
def get_rmse_plot(pred, real):
pred = pred[:len(real)]
real = real[:len(pred)]
diff = pred - real
diff = np.square(diff)
return np.sqrt(diff.mean())
# print("lag", get_rmse_plot(tm_pred_plot, real_plot))
# print("old", get_rmse_plot(old_pred_plot, real_plot))
# print("occ lag", get_rmse_plot(lag_tm_plot, real_plot))
# print("sec order", get_rmse_plot(second_tm_pred, real_plot))
# print("sec order occ", get_rmse_plot(second_occ_tm_pred, real_plot))
def save_results(building, zone, start, end, prediction_window, raw_data_granularity, train_ratio, is_second_order,
curr_action_timesteps, prev_action_timesteps, method, rmse_series, num_forecasts, forecasting_horizon):
    """Stores the results and the method configuration as a pickle file. The file is stored in a way
    that it can be used to configure the exact same model and reproduce the results.
:param building: (string) building name
:param zone: (string) zone name
:param start: (datetime timezone aware) start of the dataset used
:param end: (datetime timezone aware) start of the dataset used
:param prediction_window: (int seconds) number of seconds between predictions
:param raw_data_granularity: (int seconds) the window size of the raw data. needs to be less than prediction_window.
:param train_ratio: (float) in (0, 1). the ratio in which to split train and test set from the given dataset. The train set comes before test set in time.
:param is_second_order: (bool) Whether we are using second order in temperature.
:param curr_action_timesteps: (int) The order of the current action. Set 0 if there should only be one action.
:param prev_action_timesteps: (int) The order of the previous action. Set 0 if it should not be used at all.
:param method: (str) ["OLS", "random_forest", "LSTM"] are the available methods so far
:param rmse_series: np.array the rmse of the forecasting procedure.
:param num_forecasts: (int) The number of forecasts which contributed to the RMSE.
:param forecasting_horizon: (int seconds) The horizon used when forecasting.
:return:
"""
to_store = {"building": building,
"zone": zone,
"start": start,
"end": end,
"raw_data_granularity": raw_data_granularity,
"prediction_window": prediction_window,
"train_ratio": train_ratio,
"is_second_order": is_second_order,
"curr_action_timesteps": curr_action_timesteps,
"prev_action_timesteps": prev_action_timesteps,
"method": method,
"rmse_series": rmse_series,
"num_forecasts": num_forecasts,
"forecasting_horizon": forecasting_horizon
}
data_dir = Path.cwd() / "model_results"
if not os.path.isdir(data_dir):
os.makedirs(data_dir)
file_path = data_dir / (building + "_" + zone + ".pkl")
if os.path.isfile(file_path):
try:
with open(str(file_path), "rb") as f:
loaded_results = pickle.load(f)
loaded_results.append(to_store)
to_store = loaded_results
except:
to_store = [to_store]
else:
to_store = [to_store]
with open(str(file_path), "wb") as f:
pickle.dump(to_store, f)
def load_results(building, zone):
data_dir = Path.cwd() / "model_results"
if not os.path.isdir(data_dir):
return None
file_path = data_dir / (building + "_" + zone + ".pkl")
if not os.path.isfile(file_path):
return None
with open(str(file_path), "rb") as f:
return pickle.load(f)
save_results(building, zone, start, end, prediction_window, 60, train_ratio, True,
4, 4, "OLS", errs, 6167, 6*60*60)
load_results(building, zone)
```
# Create Model and run on all Buildings/Zones
```
import create_test_models as ctm
import datetime
import pytz
import xbos_services_getter as xsg
building_zone_names_stub = xsg.get_building_zone_names_stub()
all_building_zones = xsg.get_all_buildings_zones(building_zone_names_stub)
start = datetime.datetime(year=2018, month=7, day=1).replace(tzinfo=pytz.utc)
end = start + datetime.timedelta(days=10)
for bldg in list(not_working.keys())[1:]:  # NOTE: not_working is defined in the cell below; run that cell first
print("Building", bldg)
for zone in all_building_zones[bldg]:
print("Zone", zone)
reg, p_data, test_X, test_y = ctm.create_model(bldg, zone,
start, end, "5m", "1m", 0.7, True,
1, 1, "OLS")
# try:
# reg, p_data, test_X, test_y = ctm.create_model(bldg, zone,
# start, end, "5m", "1m", 0.7, True,
# 1, 1, "OLS")
# print("Score", reg.score(test_X, test_y))
# except:
# if bldg not in not_working:
# not_working[bldg] = []
# not_working[bldg].append(zone)
print("")
not_working = {'hayward-station-1': ['HVAC_Zone_AC-7',
'HVAC_Zone_AC-6',
'HVAC_Zone_AC-5',
'HVAC_Zone_AC-4',
'HVAC_Zone_AC-3',
'HVAC_Zone_AC-2',
'HVAC_Zone_AC-1'],
'hayward-station-8': ['HVAC_Zone_F-2', 'HVAC_Zone_F-3', 'HVAC_Zone_F-1'],
'north-berkeley-senior-center': ['HVAC_Zone_AC-5',
'HVAC_Zone_AC-3',
'HVAC_Zone_AC-1'],
'csu-dominguez-hills': ['HVAC_Zone_SAC_2134',
'HVAC_Zone_SAC_2113A',
'HVAC_Zone_SAC_2149',
'HVAC_Zone_SAC_2103',
'HVAC_Zone_SAC_2107',
'HVAC_Zone_SAC_2144',
'HVAC_Zone_Sac_2_Corridor',
'HVAC_Zone_SAC_2114',
'HVAC_Zone_SAC_2113',
'HVAC_Zone_SAC-2106',
'HVAC_Zone_SAC-2104',
'HVAC_Zone_SAC-2102',
'HVAC_Zone_SAC_2150',
'HVAC_Zone_SAC_2105',
'HVAC_Zone_SAC_2101',
'HVAC_Zone_SAC_2129',
'HVAC_Zone_SAC_2126'],
'orinda-community-center': ['HVAC_Zone_RM2',
'HVAC_Zone_RM1',
'HVAC_Zone_RM6',
'HVAC_Zone_RM7',
'HVAC_Zone_AC-8',
'HVAC_Zone_AC-7',
'HVAC_Zone_AC-6',
'HVAC_Zone_AC-5',
'HVAC_Zone_AC-4',
'HVAC_Zone_AC-3',
'HVAC_Zone_AC-2',
'HVAC_Zone_AC-1',
'HVAC_Zone_Kinder_GYM',
'HVAC_Zone_FRONT_OFFICE'],
'avenal-veterans-hall': ['HVAC_Zone_AC-6',
'HVAC_Zone_AC-5',
'HVAC_Zone_AC-4',
'HVAC_Zone_AC-3',
'HVAC_Zone_AC-2',
'HVAC_Zone_AC-1'],
'south-berkeley-senior-center': ['HVAC_Zone_AC-3',
'HVAC_Zone_Front_Office',
'HVAC_Zone_AC-2']}
```
# Explore Data for South Berkeley Senior Center
```
building_zone_names_stub = xsg.get_building_zone_names_stub()
all_buildings_zones = xsg.get_all_buildings_zones(building_zone_names_stub)
building = "south-berkeley-senior-center"
zones = all_buildings_zones[building]
# get data for all zones
end = datetime.datetime.utcnow().replace(tzinfo=pytz.utc).astimezone(pytz.timezone('US/Pacific'))
start = end - datetime.timedelta(days=365)
prediction_window = "5m"
raw_data_granularity = "1m"
is_second_order = True
train_ratio = 1
use_occupancy = True
curr_action_timesteps = 0
prev_action_timesteps = -1
check_data = True
zones_data = {}
print(end)
print(start)
for iter_zone in zones:
print("zone", iter_zone)
a = time.time()
zones_data[iter_zone] = ctm.get_train_test(building, iter_zone, start, end, prediction_window, raw_data_granularity, train_ratio, is_second_order,
use_occupancy,
curr_action_timesteps, prev_action_timesteps, check_data=False)
print("TIME", time.time() - a)
def foo():
xsg.get_occupancy_stub()
_INTERVAL = "5m" # minutes # TODO allow for getting multiples of 5. Prediction horizon.
building = 'avenal-movie-theatre'
zone = 'hvac_zone_lobby'
end = datetime.datetime(year=2019, month=4, day=1).replace(
tzinfo=pytz.utc) # datetime.datetime.utcnow().replace(tzinfo=pytz.utc) # TODO make environ var.
start = end - datetime.timedelta(days=365)
ctm.get_train_test(building=building,
zone=zone,
start=start,
end=end,
prediction_window=_INTERVAL,
raw_data_granularity="1m",
train_ratio=1,
is_second_order=True,
use_occupancy=False,
curr_action_timesteps=0,
prev_action_timesteps=-1,
check_data=False)
```
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_02_callbacks import *
```
# Initial Setup
```
x_train, y_train, x_valid, y_valid = get_data(url=MNIST_URL)
train_ds = Dataset(x=x_train, y=y_train)
valid_ds = Dataset(x=x_valid, y=y_valid)
nh = 50
bs = 16
c = y_train.max().item() + 1
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs=bs), c=c)
#export
def create_learner(model_func, loss_func, data):
return Learner(*model_func(data), loss_func, data)
learner = create_learner(get_model, loss_func, data)
run = Runner(cbs=[AvgStatsCallback(metrics=[accuracy])])
run.fit(epochs=3, learner=learner)
learner = create_learner(partial(get_model, lr=0.3), loss_func, data)
run = Runner(cbs=[AvgStatsCallback(metrics=[accuracy])])
run.fit(epochs=3, learner=learner)
#export
def get_model_func(lr=0.5):
return partial(get_model, lr=lr)
```
# Annealing
We define two new callbacks:

1. a `Recorder`: to keep track of the loss and our scheduled learning rate
2. a `ParamScheduler`: that can schedule any hyperparameter, as long as it is registered in the `state_dict` of the optimizer
```
#export
class Recorder(Callback):
def begin_fit(self):
self.lrs = []
self.losses = []
def after_batch(self):
if not self.in_train:
return
self.lrs.append(self.opt.param_groups[-1]["lr"])
self.losses.append(self.loss.detach().cpu())
def plot_lr(self):
plt.plot(self.lrs)
def plot_loss(self):
plt.plot(self.losses)
class ParamScheduler(Callback):
_order = 1
def __init__(self, pname, sched_func):
self.pname = pname
self.sched_func = sched_func
def set_param(self):
for pg in self.opt.param_groups:
### print(self.sched_func, self.n_epochs, self.epochs)
pg[self.pname] = self.sched_func(self.n_epochs/self.epochs)
def begin_batch(self):
if self.in_train:
self.set_param()
```
Let's start with a simple linear schedule going from start to end.

It returns a function that takes a `pos` argument (going from 0 to 1) such that this function goes from `start` (at `pos=0`) to `end` (at `pos=1`) in a linear fashion.
```
def sched_linear(start, end):
    # start and end are baked in via partial; the returned function takes pos alone
    def _inner(start, end, pos):
        return start + (end - start) * pos
    return partial(_inner, start, end)
```
We can refactor the above `sched_linear` function using a decorator, so that we do not need to write the inner `partial` boilerplate separately for every scheduler.
```
#export
def annealer(f):
def _inner(start, end):
return partial(f, start, end)
return _inner
@annealer
def sched_linear(start, end, pos):
return start + (end-start)*pos
f = sched_linear(1,2)
f
f(pos=0.3)
f(0.3)
f(0.5)
```
Some more important scheduler functions:
```
#export
@annealer
def sched_cos(start, end, pos):
return start + (end-start) * (1 + math.cos(math.pi*(1-pos))) / 2.
@annealer
def sched_no(start, end, pos):
return start
@annealer
def sched_exp(start, end, pos):
return start * ((end/start) ** pos)
annealings = "NO LINEAR COS EXP".split(" ")
a = torch.arange(start=0, end=100)
p = torch.linspace(start=0.01, end=1, steps=100)
fns = [sched_no, sched_linear, sched_cos, sched_exp]
for fn, t in zip(fns, annealings):
f = fn(start=2, end=1e-2)
plt.plot(a, [f(i) for i in p], label=t)
plt.legend();
### in earlier versions of PyTorch, a Tensor object did not have an "ndim" attribute
### we can add any attribute to any Python object using the property() function
### here we are adding an "ndim" attribute to the Tensor class via the monkey-patching below
# torch.Tensor.ndim = property(lambda x: len(x.shape))
```
In practice we will want to use multiple schedulers, and the function below helps us do that.
```
#export
def combine_scheds(pcts, scheds):
"""
pcts : list of %ages of each scheduler
scheds: list of all schedulers
"""
assert sum(pcts) == 1
pcts = torch.tensor([0] + listify(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(input=pcts, dim=0)
def _inner(pos):
"""pos is a value b/w (0,1)"""
idx = (pos >= pcts).nonzero().max()
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](pos=actual_pos)
return _inner
### Example of a learning rate scheduler annealing:
### using 30% of training budget to go from 0.3 to 0.6 using cosine scheduler
### using the remaining 70% of the training budget to go from 0.6 to 0.2 using another cosine scheduler
sched = combine_scheds(pcts=[0.3, 0.7], scheds=[sched_cos(start=0.3, end=0.6), sched_cos(start=0.6, end=0.2)])
plt.plot(a, [sched(i) for i in p])
```
We can use it for training quite easily.
```
cbfs = [Recorder,
partial(AvgStatsCallback, metrics=accuracy),
partial(ParamScheduler, pname="lr", sched_func=sched)]
cbfs
bs=512
data = DataBunch(*get_dls(train_ds, valid_ds, bs), c=c)
learner = create_learner(model_func=get_model_func(lr=0.3), loss_func=loss_func, data=data)
run = Runner(cb_funcs=cbfs)
run.fit(epochs=2, learner=learner)
run.recorder.plot_lr()
run.recorder.plot_loss()
```
# Export
```
!python notebook_to_script.py imflash217__02_anneal.ipynb
pct = [0.3, 0.7]
pct = torch.tensor([0] + listify(pct))
pct = torch.cumsum(pct, 0)
pos = 2
(pos >= pct).nonzero().max()
```
TSG061 - Get tail of all container logs for pods in BDC namespace
=================================================================
Steps
-----
### Parameters
```
since_seconds = 60 * 60 * 1 # the last hour
coalesce_duplicates = True
```
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown
try:
from kubernetes import client, config
from kubernetes.stream import stream
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except:
display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
except ImportError:
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Get logs for all containers in Big Data Cluster namespace
```
pod_list = api.list_namespaced_pod(namespace)
pod_names = [pod.metadata.name for pod in pod_list.items]
print('Scanning pods: ' + ', '.join(pod_names))
for pod in pod_list.items:
print("*** %s\t%s\t%s" % (pod.metadata.name,
pod.status.phase,
pod.status.pod_ip))
container_names = [container.name for container in pod.spec.containers]
for container in container_names:
print (f"POD: {pod.metadata.name} / CONTAINER: {container}")
try:
logs = api.read_namespaced_pod_log(pod.metadata.name, namespace, container=container, since_seconds=since_seconds)
if coalesce_duplicates:
previous_line = ""
duplicates = 1
for line in logs.split('\n'):
if line[27:] != previous_line[27:]:
if duplicates != 1:
print(f"\t{previous_line} (x{duplicates})")
print(f"\t{line}")
duplicates = 1
else:
duplicates = duplicates + 1
previous_line = line
else:
print(logs)
except Exception:
print (f"Failed to get LOGS for CONTAINER: {container} in POD: {pod.metadata.name}")
print('Notebook execution complete.')
```
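The duplicate-coalescing loop above can be tried in isolation. The sketch below mirrors its logic (the fixed 27-character timestamp prefix is the same assumption the notebook makes), collapsing consecutive lines that differ only in their timestamp and appending an `(xN)` count:

```python
def coalesce(lines, ts_len=27):
    # Collapse consecutive log lines that are identical once the leading
    # fixed-width timestamp (ts_len characters) is ignored.
    out = []
    previous, duplicates = None, 1
    for line in lines:
        if previous is not None and line[ts_len:] == previous[ts_len:]:
            duplicates += 1
        else:
            if previous is not None:
                out.append(previous if duplicates == 1 else f"{previous} (x{duplicates})")
            duplicates = 1
        previous = line
    if previous is not None:
        out.append(previous if duplicates == 1 else f"{previous} (x{duplicates})")
    return out
```

For three lines where the first two share a message, this yields two output lines, the first annotated with `(x2)`.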
Related
-------
- [TSG062 - Get tail of all previous container logs for pods in BDC
namespace](../log-files/tsg062-tail-bdc-previous-container-logs.ipynb)
Write a function to draw a circular smiley face with eyes, a nose, and a mouth. One argument should set the overall size of the face (the circle radius). Optional arguments should allow the user to specify the `(x, y)` position of the face, whether the face is smiling or frowning, and the color of the lines. The default should
be a smiling blue face centered at `(0, 0)`. Once you write your function, write a program that calls it several times to produce a plot like the one below (creative improvisation is encouraged!). In producing your plot, you may find the call `plt.axes().set_aspect(1)` useful so that circles appear as circles and not ovals. You should only use MatPlotLib functions introduced in this text. To create a circle you can create an array of angles that goes from 0 to 2π and then produce the `x` and `y` arrays for your circle by taking the cosine and sine, respectively, of the array. Hint: You can use the same `(x, y)` arrays to make the smile and frown as you used to make the circle by plotting appropriate slices of those arrays. You do not need to create new arrays.
```
import numpy as np
import matplotlib.pyplot as plt

def semi_circle(r, x0=0.0, y0=0.0, n=50, in_ang=0.0, end_ang=np.pi):
    theta = np.linspace(in_ang, end_ang, n, endpoint=False)
    x = r * np.cos(theta)
    y = r * np.sin(theta)
    return x0 + x, y0 + y

def circle(r, x0=0.0, y0=0.0, n=12):
    theta = np.linspace(0., 2. * np.pi, n, endpoint=True)
    x = r * np.cos(theta)
    y = r * np.sin(theta)
    return x0 + x, y0 + y
######################### face centered
############## circle
x0, y0 = 0, 0
r = 4
n = 100
x, y = circle(r, x0, y0, n)
plt.plot(x, y, 'r-')
########### eyes
x0, y0 = -2, 2
r = 0.5
n = 50
x, y = circle(r, x0, y0, n)
plt.plot(x, y, 'r-')
x0, y0 = 2, 2
r = 0.5
n = 50
x, y = circle(r, x0, y0, n)
plt.plot(x, y, 'r-')
########## nose
x0, y0 = 0, 0
r = 0.3
n = 50
x, y = circle(r, x0, y0, n)
plt.plot(x, y, 'r-')
########## smile
x0, y0 = 0, -2.0
r = 1
n = 50
x, y = semi_circle(r, x0, y0, n, np.pi, 2 * np.pi)
plt.plot(x, y, 'r-')
######################### face up right
############## circle
x0, y0 = 8, 8
r = 3
n = 100
x, y = circle(r, x0, y0, n)
plt.plot(x, y, 'g-')
########### eyes
x0, y0 = 7, 9.5
r = 0.5
n = 50
x, y = circle(r, x0, y0, n)
plt.plot(x, y, 'g-')
x0, y0 = 9, 9.5
r = 0.5
n = 50
x, y = circle(r, x0, y0, n)
plt.plot(x, y, 'g-')
########## nose
x0, y0 = 8, 8
r = 0.3
n = 50
x, y = circle(r, x0, y0, n)
plt.plot(x, y, 'g-')
########## smile
x0, y0 = 8, 7.5
r = 2
n = 50
x, y = semi_circle(r, x0, y0, n, np.pi, 2 * np.pi)
plt.plot(x, y, 'g-')
######################### face down left
############## circle
x0, y0 = -8, -8
r = 3
n = 100
x, y = circle(r, x0, y0, n)
plt.plot(x, y, 'y-')
########### eyes
x0, y0 = -9, -6
r = 0.5
n = 50
x, y = circle(r, x0, y0, n)
plt.plot(x, y, 'y-')
x0, y0 = -7, -6
r = 0.5
n = 50
x, y = circle(r, x0, y0, n)
plt.plot(x, y, 'y-')
########## nose
x0, y0 = -8, -8
r = 0.3
n = 50
x, y = circle(r, x0, y0, n)
plt.plot(x, y, 'y-')
########## frown
x0, y0 = -8, -11.
r = 2
n = 50
x, y = semi_circle(r, x0, y0, n, np.pi / 6, 5 * np.pi / 6)
plt.plot(x, y, 'y-')
################################################# no more faces
plt.axes().set_aspect('equal')
plt.show()
```
# SageMaker/DeepAR demo on electricity dataset
This notebook complements the [DeepAR introduction notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/deepar_synthetic/deepar_synthetic.ipynb).
Here, we will consider a real use case and show how to use DeepAR on SageMaker for predicting energy consumption of 370 customers over time, based on a [dataset](https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014) that was used in the academic papers [[1](https://media.nips.cc/nipsbooks/nipspapers/paper_files/nips29/reviews/526.html)] and [[2](https://arxiv.org/abs/1704.04110)].
In particular, we will see how to:
* Prepare the dataset
* Use the SageMaker Python SDK to train a DeepAR model and deploy it
* Make requests to the deployed model to obtain forecasts interactively
* Illustrate advanced features of DeepAR: missing values, additional time features, non-regular frequencies and category information
For more information see the DeepAR [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html) or [paper](https://arxiv.org/abs/1704.04110).
### Lab time
Running this notebook takes around 35 to 40 minutes on an ml.c4.2xlarge instance for training, and inference is done on an ml.m4.xlarge (the usage time will depend on how long you leave your served model running).
```
import timeit
start_time = timeit.default_timer()
%matplotlib inline
import sys
from urllib.request import urlretrieve
import zipfile
from dateutil.parser import parse
import json
from random import shuffle
import random
import datetime
import os
import boto3
import s3fs
import sagemaker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from ipywidgets import IntSlider, FloatSlider, Checkbox
# set random seeds for reproducibility
np.random.seed(42)
random.seed(42)
sagemaker_session = sagemaker.Session()
```
Before starting, we can override the default values for the following:
- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these.
```
# s3_bucket = sagemaker.Session().default_bucket() # replace with an existing bucket if needed
s3_bucket='<put your S3 bucket name>' # customize to your bucket
s3_prefix = 'deepar-electricity-demo-notebook' # prefix used for all data stored within the bucket
role = sagemaker.get_execution_role() # IAM role to use by SageMaker
region = sagemaker_session.boto_region_name
s3_data_path = "s3://{}/{}/data".format(s3_bucket, s3_prefix)
s3_output_path = "s3://{}/{}/output".format(s3_bucket, s3_prefix)
```
Next, we configure the container image to be used for the region that we are running in.
```
image_name = sagemaker.amazon.amazon_estimator.get_image_uri(region, "forecasting-deepar", "latest")
```
### Import the electricity dataset and upload it to S3 to make it available to SageMaker
As a first step, we need to download the original dataset from the UCI Machine Learning Repository.
```
DATA_HOST = "https://archive.ics.uci.edu"
DATA_PATH = "/ml/machine-learning-databases/00321/"
ARCHIVE_NAME = "LD2011_2014.txt.zip"
FILE_NAME = '/tmp/' + ARCHIVE_NAME[:-4] # Modified to use '/tmp' directory, Pil 8thOct2018
def progress_report_hook(count, block_size, total_size):
    mb = int(count * block_size // 1e6)
    if count % 500 == 0:
        sys.stdout.write("\r{} MB downloaded".format(mb))
        sys.stdout.flush()

if not os.path.isfile(FILE_NAME):
    print("downloading dataset (258MB), can take a few minutes depending on your connection")
    urlretrieve(DATA_HOST + DATA_PATH + ARCHIVE_NAME, '/tmp/' + ARCHIVE_NAME, reporthook=progress_report_hook)
    print("\nextracting data archive")
    zip_ref = zipfile.ZipFile('/tmp/' + ARCHIVE_NAME, 'r')
    zip_ref.extractall("/tmp")
    zip_ref.close()
else:
    print("File found, skipping download")
```
Then, we load and parse the dataset and convert it to a collection of Pandas time series, which makes common time series operations such as indexing by time periods or resampling much easier. The data is originally recorded at 15-minute intervals, which we could use directly. However, since we want to forecast longer periods (one week), we resample the data to a granularity of 2 hours.
```
data = pd.read_csv(FILE_NAME, sep=";", index_col=0, parse_dates=True, decimal=',')
num_timeseries = data.shape[1]
# each 2-hour bin contains 8 fifteen-minute samples; dividing the sum by 8 gives the mean kW
data_kw = data.resample('2H').sum() / 8
timeseries = []
for i in range(num_timeseries):
    timeseries.append(np.trim_zeros(data_kw.iloc[:, i], trim='f'))
```
Let us plot the resulting time series for the first ten customers for the time period spanning the first two weeks of 2014.
```
fig, axs = plt.subplots(5, 2, figsize=(20, 20), sharex=True)
axx = axs.ravel()
for i in range(0, 10):
    timeseries[i].loc["2014-01-01":"2014-01-14"].plot(ax=axx[i])
    axx[i].set_xlabel("date")
    axx[i].set_ylabel("kW consumption")
    axx[i].grid(which='minor', axis='x')
```
### Train and Test splits
Often one is interested in evaluating the model or tuning its hyperparameters by looking at error metrics on a hold-out test set. Here we split the available data into train and test sets for evaluating the trained model. For standard machine learning tasks such as classification and regression, one typically obtains this split by randomly separating examples into train and test sets. However, in forecasting it is important to do this train/test split based on time rather than by time series.
In this example, we will reserve the last section of each time series for evaluation purposes and use only the first part as training data.
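As a toy illustration of a time-based split (hypothetical data; the real split for this dataset is defined below via `start_dataset` and `end_training`):

```python
import numpy as np
import pandas as pd

# a small hourly series standing in for one customer's consumption
index = pd.date_range("2024-01-01", periods=48, freq="h")
ts = pd.Series(np.arange(48.0), index=index)

# split by time: everything strictly before the cutoff is training data,
# the remainder is held out for evaluation
cutoff = pd.Timestamp("2024-01-02")
train, test = ts[:cutoff][:-1], ts[cutoff:]  # pandas label slicing includes the upper bound
print(len(train), len(test))  # 24 24
```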
```
# we use 2 hour frequency for the time series
freq = '2H'
# we predict for 7 days
prediction_length = 7 * 12
# we also use 7 days as context length, this is the number of state updates accomplished before making predictions
context_length = 7 * 12
```
We specify here the portion of the data that is used for training: the model sees data from 2014-01-01 to 2014-09-01 for training.
```
start_dataset = pd.Timestamp("2014-01-01 00:00:00", freq=freq)
end_training = pd.Timestamp("2014-09-01 00:00:00", freq=freq)
```
The DeepAR JSON input format represents each time series as a JSON object. In the simplest case each time series just consists of a start time stamp (``start``) and a list of values (``target``). For more complex cases, DeepAR also supports the fields ``dynamic_feat`` for time-series features and ``cat`` for categorical features, which we will use later.
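For concreteness, a single (made-up) record in this format can be built and serialized as follows; only `start` and `target` are required, while `cat` and `dynamic_feat` are optional:

```python
import json

# a minimal, illustrative record in the DeepAR JSON input format
record = {
    "start": "2014-01-01 00:00:00",          # timestamp of the first target value
    "target": [13.2, 14.1, 12.8, 13.9],      # the time series values
    "cat": [0],                              # optional categorical feature(s)
    "dynamic_feat": [[0.0, 1.0, 0.0, 0.0]],  # optional feature series, same length as target
}
print(json.dumps(record))
```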
```
training_data = [
{
"start": str(start_dataset),
"target": ts[start_dataset:end_training - 1].tolist() # We use -1, because pandas indexing includes the upper bound
}
for ts in timeseries
]
print(len(training_data))
```
As test data, we will consider time series extending beyond the training range: these will be used for computing test scores, by using the trained model to forecast their trailing 7 days, and comparing predictions with actual values.
To evaluate our model performance on more than one week, we generate test data that extends to 1, 2, 3, 4 weeks beyond the training range. This way we perform *rolling evaluation* of our model.
```
num_test_windows = 4
test_data = [
{
"start": str(start_dataset),
"target": ts[start_dataset:end_training + k * prediction_length].tolist()
}
for k in range(1, num_test_windows + 1)
for ts in timeseries
]
print(len(test_data))
```
Let's now write the dictionaries to the `jsonlines` file format that DeepAR understands (it also supports gzipped jsonlines and parquet).
```
def write_dicts_to_file(path, data):
    with open(path, 'wb') as fp:
        for d in data:
            fp.write(json.dumps(d).encode("utf-8"))
            fp.write("\n".encode('utf-8'))
%%time
write_dicts_to_file("/tmp/train.json", training_data)
write_dicts_to_file("/tmp/test.json", test_data)
```
Now that we have the data files locally, let us copy them to S3 where DeepAR can access them. Depending on your connection, this may take a couple of minutes.
```
s3 = boto3.resource('s3')

def copy_to_s3(local_file, s3_path, override=False):
    assert s3_path.startswith('s3://')
    split = s3_path.split('/')
    bucket = split[2]
    path = '/'.join(split[3:])
    buk = s3.Bucket(bucket)
    if len(list(buk.objects.filter(Prefix=path))) > 0:
        if not override:
            print('File s3://{}/{} already exists.\nSet override to upload anyway.\n'.format(s3_bucket, s3_path))
            return
        else:
            print('Overwriting existing file')
    with open(local_file, 'rb') as data:
        print('Uploading file to {}'.format(s3_path))
        buk.put_object(Key=path, Body=data)
%%time
copy_to_s3("/tmp/train.json", s3_data_path + "/train/train.json")
copy_to_s3("/tmp/test.json", s3_data_path + "/test/test.json")
```
Let's have a look at what we just wrote to S3.
```
s3filesystem = s3fs.S3FileSystem()
with s3filesystem.open(s3_data_path + "/train/train.json", 'rb') as fp:
    print(fp.readline().decode("utf-8")[:100] + "...")
```
We are all set with our dataset processing; we can now call DeepAR to train a model and generate predictions.
### Train a model
Here we define the estimator that will launch the training job.
```
estimator = sagemaker.estimator.Estimator(
sagemaker_session=sagemaker_session,
image_name=image_name,
role=role,
train_instance_count=1,
train_instance_type='ml.c4.2xlarge',
base_job_name='deepar-electricity-demo',
output_path=s3_output_path
)
```
Next we need to set the hyperparameters for the training job, for example the frequency of the time series, the number of past data points the model will look at, and the number of data points to predict. The other hyperparameters concern the model to train (number of layers, number of cells per layer, likelihood function) and the training options (number of epochs, batch size, learning rate, ...). We use default values for every optional parameter in this case (you can always use [SageMaker Automated Model Tuning](https://aws.amazon.com/blogs/aws/sagemaker-automatic-model-tuning/) to tune them).
```
hyperparameters = {
"time_freq": freq,
"epochs": "400",
"early_stopping_patience": "40",
"mini_batch_size": "64",
"learning_rate": "5E-4",
"context_length": str(context_length),
"prediction_length": str(prediction_length)
}
estimator.set_hyperparameters(**hyperparameters)
```
We are ready to launch the training job. SageMaker will start an EC2 instance, download the data from S3, start training the model and save the trained model.
If you provide the `test` data channel, as we do in this example, DeepAR will also calculate accuracy metrics for the trained model on this test set. This is done by predicting the last `prediction_length` points of each time series in the test set and comparing them to the actual values.
**Note:** the next cell may take a few minutes to complete, depending on data size, model complexity, training options.
```
%%time
data_channels = {
"train": "{}/train/".format(s3_data_path),
"test": "{}/test/".format(s3_data_path)
}
estimator.fit(inputs=data_channels, wait=True)
```
Since you pass a test set in this example, accuracy metrics for the forecast are computed and logged (see bottom of the log).
You can find the definition of these metrics from [our documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html). You can use these to optimize the parameters and tune your model or use SageMaker's [Automated Model Tuning service](https://aws.amazon.com/blogs/aws/sagemaker-automatic-model-tuning/) to tune the model for you.
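As a rough sketch (our own helper, not part of the SageMaker API), the pinball loss underlying the reported weighted quantile losses can be computed like this:

```python
import numpy as np

def quantile_loss(y_true, y_pred, q):
    """Pinball/quantile loss between actuals and a predicted q-quantile (illustrative helper)."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    diff = y_true - y_pred
    # under-prediction is weighted by q, over-prediction by (1 - q)
    return np.sum(np.maximum(q * diff, (q - 1) * diff))

# example: penalize a 0.9-quantile forecast that sits below the actuals more heavily
print(round(quantile_loss([10.0, 12.0], [11.0, 11.0], 0.9), 6))  # 1.0
```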
### Create endpoint and predictor
Now that we have a trained model, we can use it to perform predictions by deploying it to an endpoint.
**Note: Remember to delete the endpoint after running this experiment. A cell at the very bottom of this notebook will do that: make sure you run it at the end.**
To query the endpoint and perform predictions, we can define the following utility class: this allows making requests using `pandas.Series` objects rather than raw JSON strings.
```
class DeepARPredictor(sagemaker.predictor.RealTimePredictor):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, content_type=sagemaker.content_types.CONTENT_TYPE_JSON, **kwargs)

    def predict(self, ts, cat=None, dynamic_feat=None,
                num_samples=100, return_samples=False, quantiles=["0.1", "0.5", "0.9"]):
        """Requests the prediction for the time series listed in `ts`, each with the (optional)
        corresponding category listed in `cat`.

        ts -- `pandas.Series` object, the time series to predict
        cat -- integer, the group associated to the time series (default: None)
        num_samples -- integer, number of samples to compute at prediction time (default: 100)
        return_samples -- boolean indicating whether to include samples in the response (default: False)
        quantiles -- list of strings specifying the quantiles to compute (default: ["0.1", "0.5", "0.9"])

        Return value: list of `pandas.DataFrame` objects, each containing the predictions
        """
        prediction_time = ts.index[-1] + 1
        quantiles = [str(q) for q in quantiles]
        req = self.__encode_request(ts, cat, dynamic_feat, num_samples, return_samples, quantiles)
        res = super(DeepARPredictor, self).predict(req)
        return self.__decode_response(res, ts.index.freq, prediction_time, return_samples)

    def __encode_request(self, ts, cat, dynamic_feat, num_samples, return_samples, quantiles):
        instance = series_to_dict(ts, cat if cat is not None else None, dynamic_feat if dynamic_feat else None)
        configuration = {
            "num_samples": num_samples,
            "output_types": ["quantiles", "samples"] if return_samples else ["quantiles"],
            "quantiles": quantiles
        }
        http_request_data = {
            "instances": [instance],
            "configuration": configuration
        }
        return json.dumps(http_request_data).encode('utf-8')

    def __decode_response(self, response, freq, prediction_time, return_samples):
        # we only sent one time series, so we only receive one in return;
        # however, when possible one should pass multiple time series, as predictions will then be faster
        predictions = json.loads(response.decode('utf-8'))['predictions'][0]
        prediction_length = len(next(iter(predictions['quantiles'].values())))
        prediction_index = pd.DatetimeIndex(start=prediction_time, freq=freq, periods=prediction_length)
        if return_samples:
            dict_of_samples = {'sample_' + str(i): s for i, s in enumerate(predictions['samples'])}
        else:
            dict_of_samples = {}
        return pd.DataFrame(data={**predictions['quantiles'], **dict_of_samples}, index=prediction_index)

    def set_frequency(self, freq):
        self.freq = freq


def encode_target(ts):
    return [x if np.isfinite(x) else "NaN" for x in ts]


def series_to_dict(ts, cat=None, dynamic_feat=None):
    """Given a pandas.Series object, returns a dictionary encoding the time series.

    ts -- a pandas.Series object with the target time series
    cat -- an integer indicating the time series category

    Return value: a dictionary
    """
    obj = {"start": str(ts.index[0]), "target": encode_target(ts)}
    if cat is not None:
        obj["cat"] = cat
    if dynamic_feat is not None:
        obj["dynamic_feat"] = dynamic_feat
    return obj
```
Now we can deploy the model and create an endpoint that can be queried using our custom `DeepARPredictor` class.
```
predictor = estimator.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge',
predictor_cls=DeepARPredictor)
```
### Make predictions and plot results
Now we can use the `predictor` object to generate predictions.
```
predictor.predict(ts=timeseries[120], quantiles=[0.10, 0.5, 0.90]).head()
```
Below we define a plotting function that queries the model and displays the forecast.
```
def plot(
    predictor,
    target_ts,
    cat=None,
    dynamic_feat=None,
    forecast_date=end_training,
    show_samples=False,
    plot_history=7 * 12,
    confidence=80
):
    print("calling served model to generate predictions starting from {}".format(str(forecast_date)))
    assert(confidence > 50 and confidence < 100)
    low_quantile = 0.5 - confidence * 0.005
    up_quantile = confidence * 0.005 + 0.5

    # we first construct the argument to call our model
    args = {
        "ts": target_ts[:forecast_date],
        "return_samples": show_samples,
        "quantiles": [low_quantile, 0.5, up_quantile],
        "num_samples": 100
    }

    if dynamic_feat is not None:
        args["dynamic_feat"] = dynamic_feat
        fig = plt.figure(figsize=(20, 6))
        ax = plt.subplot(2, 1, 1)
    else:
        fig = plt.figure(figsize=(20, 3))
        ax = plt.subplot(1, 1, 1)

    if cat is not None:
        args["cat"] = cat
        ax.text(0.9, 0.9, 'cat = {}'.format(cat), transform=ax.transAxes)

    # call the endpoint to get the prediction
    prediction = predictor.predict(**args)

    # plot the samples
    if show_samples:
        for key in prediction.keys():
            if "sample" in key:
                prediction[key].plot(color='lightskyblue', alpha=0.2, label='_nolegend_')

    # plot the target
    target_section = target_ts[forecast_date - plot_history:forecast_date + prediction_length]
    target_section.plot(color="black", label='target')

    # plot the confidence interval and the median predicted
    ax.fill_between(
        prediction[str(low_quantile)].index,
        prediction[str(low_quantile)].values,
        prediction[str(up_quantile)].values,
        color="b", alpha=0.3, label='{}% confidence interval'.format(confidence)
    )
    prediction["0.5"].plot(color="b", label='P50')
    ax.legend(loc=2)

    # fix the scale as the samples may change it
    ax.set_ylim(target_section.min() * 0.5, target_section.max() * 1.5)

    if dynamic_feat is not None:
        for i, f in enumerate(dynamic_feat, start=1):
            ax = plt.subplot(len(dynamic_feat) * 2, 1, len(dynamic_feat) + i, sharex=ax)
            feat_ts = pd.Series(
                index=pd.DatetimeIndex(start=target_ts.index[0], freq=target_ts.index.freq, periods=len(f)),
                data=f
            )
            feat_ts[forecast_date - plot_history:forecast_date + prediction_length].plot(ax=ax, color='g')
```
We can interact with the previously defined function to look at the forecast of any customer at any point in (future) time.
For each request, the predictions are obtained by calling our served model on the fly.
Here we forecast the consumption of an office after a weekend (note the lower weekend consumption).
You can select any time series and any forecast date; just click `Run Interact` to generate the predictions from our served endpoint and see the plot.
```
style = {'description_width': 'initial'}

@interact_manual(
    customer_id=IntSlider(min=0, max=369, value=91, style=style),
    forecast_day=IntSlider(min=0, max=100, value=51, style=style),
    confidence=IntSlider(min=60, max=95, value=80, step=5, style=style),
    history_weeks_plot=IntSlider(min=1, max=20, value=1, style=style),
    show_samples=Checkbox(value=False),
    continuous_update=False
)
def plot_interact(customer_id, forecast_day, confidence, history_weeks_plot, show_samples):
    plot(
        predictor,
        target_ts=timeseries[customer_id],
        forecast_date=end_training + datetime.timedelta(days=forecast_day),
        show_samples=show_samples,
        plot_history=history_weeks_plot * 12 * 7,
        confidence=confidence
    )
```
# Additional features
We have seen how to prepare a dataset and run DeepAR for a simple example.
In addition, DeepAR supports the following features:
* Missing values: DeepAR can handle missing values in the time series, both during training and at inference.
* Additional time features: DeepAR provides a set of default time series features such as hour of day. However, you can provide additional feature time series via the `dynamic_feat` field.
* Generalized frequencies: any integer multiple of the previously supported base frequencies (minutes `min`, hours `H`, days `D`, weeks `W`, months `M`) is now allowed; e.g., `15min`. We already demonstrated this above by using the `2H` frequency.
* Categories: if your time series belong to different groups (e.g., types of product, regions, etc.), this information can be encoded as one or more categorical features using the `cat` field.
We will now demonstrate support for missing values and custom time features. For this part we will reuse the electricity dataset but make some artificial changes to demonstrate these features:
* We will randomly mask parts of the time series to demonstrate the missing-value support.
* We will include a "special-day" feature that occurs on different days for different time series; during this day we introduce a strong up-lift.
* We will train the model on this dataset, giving "special-day" as a custom time series feature.
## Prepare dataset
As discussed above, we will create a "special-day" feature and an up-lift for the time series during that day. This simulates real-world applications where you may have things like promotions of a product for a certain time, or a special event that influences your time series.
```
def create_special_day_feature(ts, fraction=0.05):
    # first select a random `fraction` of the day indices (plus the forecast day)
    num_days = (ts.index[-1] - ts.index[0]).days
    rand_indices = list(np.random.randint(0, num_days, int(num_days * fraction))) + [num_days]
    feature_value = np.zeros_like(ts)
    for i in rand_indices:
        feature_value[i * 12: (i + 1) * 12] = 1.0
    feature = pd.Series(index=ts.index, data=feature_value)
    return feature

def drop_at_random(ts, drop_probability=0.1):
    assert(0 <= drop_probability < 1)
    random_mask = np.random.random(len(ts)) < drop_probability
    return ts.mask(random_mask)
special_day_features = [create_special_day_feature(ts) for ts in timeseries]
```
We now create the up-lifted time series and randomly remove time points.
The figures below show some example time series and the `special_day` feature value in green.
```
timeseries_uplift = [ts * (1.0 + feat) for ts, feat in zip(timeseries, special_day_features)]
time_series_processed = [drop_at_random(ts) for ts in timeseries_uplift]

fig, axs = plt.subplots(5, 2, figsize=(20, 20), sharex=True)
axx = axs.ravel()
for i in range(0, 10):
    ax = axx[i]
    ts = time_series_processed[i][:400]
    ts.plot(ax=ax)
    ax.set_ylim(-0.1 * ts.max(), ts.max())
    ax2 = ax.twinx()
    special_day_features[i][:400].plot(ax=ax2, color='g')
    ax2.set_ylim(-0.2, 7)
%%time
training_data_new_features = [
{
"start": str(start_dataset),
"target": encode_target(ts[start_dataset:end_training]),
"dynamic_feat": [special_day_features[i][start_dataset:end_training].tolist()]
}
for i, ts in enumerate(time_series_processed)
]
print(len(training_data_new_features))
# as in our previous example, we do a rolling evaluation over the next 7 days
num_test_windows = 7
test_data_new_features = [
{
"start": str(start_dataset),
"target": encode_target(ts[start_dataset:end_training + 2*k*prediction_length]),
"dynamic_feat": [special_day_features[i][start_dataset:end_training + 2*k*prediction_length].tolist()]
}
for k in range(1, num_test_windows + 1)
for i, ts in enumerate(timeseries_uplift)
]
def check_dataset_consistency(train_dataset, test_dataset=None):
    d = train_dataset[0]
    has_dynamic_feat = 'dynamic_feat' in d
    if has_dynamic_feat:
        num_dynamic_feat = len(d['dynamic_feat'])
    has_cat = 'cat' in d
    if has_cat:
        num_cat = len(d['cat'])

    def check_ds(ds):
        for i, d in enumerate(ds):
            if has_dynamic_feat:
                assert 'dynamic_feat' in d
                assert num_dynamic_feat == len(d['dynamic_feat'])
                for f in d['dynamic_feat']:
                    assert len(d['target']) == len(f)
            if has_cat:
                assert 'cat' in d
                assert len(d['cat']) == num_cat

    check_ds(train_dataset)
    if test_dataset is not None:
        check_ds(test_dataset)

check_dataset_consistency(training_data_new_features, test_data_new_features)
%%time
write_dicts_to_file("/tmp/train_new_features.json", training_data_new_features)
write_dicts_to_file("/tmp/test_new_features.json", test_data_new_features)
%%time
s3_data_path_new_features = "s3://{}/{}-new-features/data".format(s3_bucket, s3_prefix)
s3_output_path_new_features = "s3://{}/{}-new-features/output".format(s3_bucket, s3_prefix)
print('Uploading to S3 this may take a few minutes depending on your connection.')
copy_to_s3("/tmp/train_new_features.json", s3_data_path_new_features + "/train/train_new_features.json", override=True)
copy_to_s3("/tmp/test_new_features.json", s3_data_path_new_features + "/test/test_new_features.json", override=True)
%%time
estimator_new_features = sagemaker.estimator.Estimator(
sagemaker_session=sagemaker_session,
image_name=image_name,
role=role,
train_instance_count=1,
train_instance_type='ml.c4.2xlarge',
base_job_name='deepar-electricity-demo-new-features',
output_path=s3_output_path_new_features
)
hyperparameters = {
"time_freq": freq,
"context_length": str(context_length),
"prediction_length": str(prediction_length),
"epochs": "400",
"learning_rate": "5E-4",
"mini_batch_size": "64",
"early_stopping_patience": "40",
"num_dynamic_feat": "auto", # this will use the `dynamic_feat` field if it's present in the data
}
estimator_new_features.set_hyperparameters(**hyperparameters)
estimator_new_features.fit(
inputs={
"train": "{}/train/".format(s3_data_path_new_features),
"test": "{}/test/".format(s3_data_path_new_features)
},
wait=True
)
```
As before, we spawn an endpoint to visualize our forecasts on examples we send on the fly.
```
%%time
predictor_new_features = estimator_new_features.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge',
predictor_cls=DeepARPredictor)
customer_id = 120
predictor_new_features.predict(
ts=time_series_processed[customer_id][:-prediction_length],
dynamic_feat=[special_day_features[customer_id].tolist()],
quantiles=[0.1, 0.5, 0.9]
).head()
```
As before, we can query the endpoint to see predictions for arbitrary time series and time points.
```
@interact_manual(
    customer_id=IntSlider(min=0, max=369, value=13, style=style),
    forecast_day=IntSlider(min=0, max=100, value=21, style=style),
    confidence=IntSlider(min=60, max=95, value=80, step=5, style=style),
    missing_ratio=FloatSlider(min=0.0, max=0.95, value=0.2, step=0.05, style=style),
    show_samples=Checkbox(value=False),
    continuous_update=False
)
def plot_interact(customer_id, forecast_day, confidence, missing_ratio, show_samples):
    forecast_date = end_training + datetime.timedelta(days=forecast_day)
    target = time_series_processed[customer_id][start_dataset:forecast_date + prediction_length]
    target = drop_at_random(target, missing_ratio)
    dynamic_feat = [special_day_features[customer_id][start_dataset:forecast_date + prediction_length].tolist()]
    plot(
        predictor_new_features,
        target_ts=target,
        dynamic_feat=dynamic_feat,
        forecast_date=forecast_date,
        show_samples=show_samples,
        plot_history=7 * 12,
        confidence=confidence
    )
```
### Delete endpoints
```
predictor.delete_endpoint()
predictor_new_features.delete_endpoint()
elapsed = timeit.default_timer() - start_time
print(elapsed / 60)  # total notebook runtime, in minutes
```
# Workshop 1. Introduction to Python with Numpy
Welcome to the first workshop. It contains exercises for a brief introduction to Python. If you have used Python before, this workshop will help you get familiar with the functions we will need.
**Instructions:**
- We will use Python 3.
- Avoid using for-loops and while-loops unless you are explicitly asked to use them.
- Do not modify the (# FUNCIÓN A CALIFICAR [function name]) comment in the cells that have it. It is needed for grading. Each cell containing that comment must contain only one function.
- After coding your function, verify that its result is correct.
**After this workshop you will be able to:**
- Use iPython Notebooks
- Use numpy functions and numpy operations on matrices/vectors
- Understand the concept of "broadcasting"
- Vectorize code
Let's get to work!
## About iPython Notebooks ##
iPython Notebooks are interactive coding environments embedded in a web page. We will use iPython notebooks in this class. You only need to write code between the ### EMPIEZE EL CÓDIGO AQUÍ ### and ### TERMINE EL CÓDIGO AQUÍ ### comments. After writing your code, you can run the cell by pressing "SHIFT"+"ENTER" or by clicking "Run" (the "play" symbol) in the notebook's top bar.
The comments will indicate approximately how many lines of code you need to write: "(≈ X lineas de codigo)". This is only a guide; it is fine to write fewer or more lines as long as the code does what it should.
**Exercise**: Set `test` to `"Hola Mundo"` in the cell below so that it prints "test: Hola Mundo", and run the two cells below.
```
### EMPIEZE EL CÓDIGO AQUÍ ### (≈ 1 linea de código)
test =
### TERMINE EL CÓDIGO AQUÍ ###
print ("test: " + test)
```
**Expected output**:
test: Hola Mundo
## 1 - Building basic functions with numpy ##
Numpy is the main package for scientific computing in Python (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions in future workshops.
### 1.1 - Sigmoid function, np.exp() ###
Before using np.exp(), we will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
**Hint**:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is also known as the logistic function. It is a non-linear function used both in Machine Learning (Logistic Regression) and in Deep Learning.
To refer to a function from a certain package, you can call it using package_name.function(). Run the code below to work with math.exp().
```
# FUNCIÓN A CALIFICAR basic_sigmoid
import math
def basic_sigmoid(x):
    """
    Compute the sigmoid of x
    Input:
        x: scalar
    Output:
        s: sigmoid(x)
    """
    ### EMPIEZE EL CÓDIGO AQUÍ ### (≈ 1 linea de código)
    s =
    ### TERMINE EL CÓDIGO AQUÍ ###
    return s

basic_sigmoid(3)
```
**Expected output**:
<table style="width:40%">
    <tr>
        <td><code>basic_sigmoid(3)</code></td>
        <td>0.9525741268224334</td>
    </tr>
</table>
In fact, the "math" library is rarely used in deep learning, because the inputs of its functions are real numbers. Deep learning mostly works with matrices and vectors. This is why numpy is more useful.
```
### One reason to use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # this raises an error, because x is a vector
```
De hecho, $ x = (x_1, x_2, ..., x_n)$ es un vector fila, donde $np.exp(x)$ aplica la función exponencial a cada elemento de x. La salida será: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
```
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x))
```
If x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ outputs s as a vector of the same size as x.
```
# example of a vectorized operation
x = np.array([1, 2, 3])
print (x + 3)
```
More information on this numpy function is available in the [official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html).
You can also type `np.exp?` in a new cell to access the documentation.
**Exercise**: Implement the sigmoid function using numpy.
**Instructions**: x can be a real number, a vector, or a matrix. The numpy data structures used to represent these shapes (vectors, matrices, ...) are called numpy arrays.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
```
# GRADED FUNCTION: sigmoid
import numpy as np # this lets you access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
    """
    Compute the sigmoid of x
    Input:
    x: a scalar or numpy array of any size
    Output:
    s: sigmoid(x)
    """
    ### START CODE HERE ### (≈ 1 line of code)
    s =
    ### END CODE HERE ###
    return s
x = np.array([1, 2, 3])
sigmoid(x)
```
**Expected output**:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
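A possible numpy version of the blank above, shown for reference (np.exp handles scalars and arrays alike):

```python
import numpy as np

def sigmoid_ref(x):
    # np.exp is applied element-wise, so this works for scalars, vectors, and matrices
    return 1 / (1 + np.exp(-x))

print(sigmoid_ref(np.array([1, 2, 3])))  # ≈ [0.73105858 0.88079708 0.95257413]
```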
### 1.2 - Sigmoid gradient
As we have seen, gradients must be computed to optimize cost functions using backpropagation. Below we implement the gradient function.
**Exercise**: Implement the function sigmoid_derivative() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
This function can be programmed in two steps:
1. Define s as the sigmoid of x. You can use the sigmoid(x) function implemented above.
2. Compute $\sigma'(x) = s(1-s)$
```
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
    """
    Compute the gradient (or derivative) of the sigmoid function with respect to the input x.
    You can store the output of the sigmoid in a variable and then use it to compute the gradient.
    Input:
    x: a scalar or numpy array
    Output:
    ds: the computed gradient.
    """
    ### START CODE HERE ### (≈ 2 lines of code)
    s =
    ds =
    ### END CODE HERE ###
    return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
```
**Expected output**:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
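One way to fill in the two blanks above, shown here as a self-contained reference sketch:

```python
import numpy as np

def sigmoid_derivative_ref(x):
    s = 1 / (1 + np.exp(-x))  # step 1: s = sigmoid(x)
    ds = s * (1 - s)          # step 2: formula (2)
    return ds

print(sigmoid_derivative_ref(np.array([1, 2, 3])))  # ≈ [0.19661193 0.10499359 0.04517666]
```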
### 1.3 - Reshaping arrays ###
Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html).
- X.shape is used to get the shape (dimensions) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when an image is read as the input of an algorithm, it is converted into a vector of shape $(length*height*3, 1)$. In other words, the 3D array is "unrolled", or reshaped, into a 1D vector.
**Exercise**: Implement the function `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you wanted to reshape an array v of shape (a, b, c) into a vector of shape (a*b, c), you would write:
``` python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```
- Note that each image has its own dimensions, which you can find with `image.shape[0]`, etc.
```
# GRADED FUNCTION: image2vector
def image2vector(image):
    """
    Input:
    image: a numpy array of shape (length, height, depth)
    Output:
    v: a vector of shape (length*height*depth, 1)
    """
    ### START CODE HERE ### (≈ 1 line of code)
    v =
    ### END CODE HERE ###
    return v
# This is a 3 by 3 by 2 array; typically images are (num_px_x, num_px_y, 3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
```
**Expected output**:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
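A reference sketch of the reshape above (numpy's default C-order flattening is what produces the element order shown in the expected output):

```python
import numpy as np

def image2vector_ref(image):
    # Unroll (length, height, depth) into a (length*height*depth, 1) column vector
    return image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)

img = np.arange(18, dtype=float).reshape(3, 3, 2)
print(image2vector_ref(img).shape)  # → (18, 1)
```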
### 1.4 - Row normalization
Another technique widely used in Machine Learning and Deep Learning is normalizing the data. It usually leads to better performance because gradient descent converges faster after normalization. By normalization we mean here transforming x according to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$ and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different shapes and it works fine: this is called "broadcasting" and we will cover it later.
**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to a matrix x, each row of x should be a unit-length vector (length 1).
```
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
    """
    Implement a function that normalizes each row of the matrix x (to have unit length).
    Input:
    x: A numpy array of shape (n, m)
    Output:
    x: The row-normalized numpy matrix.
    """
    ### START CODE HERE ### (≈ 2 lines of code)
    # Compute x_norm as the 2-norm of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
    x_norm =
    # Divide x by its norm.
    x =
    ### END CODE HERE ###
    return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
```
**Expected output**:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
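For reference, the two blanks above can be filled in as follows (note keepdims=True, which keeps x_norm as an (n, 1) column so the division broadcasts row by row):

```python
import numpy as np

def normalizeRows_ref(x):
    x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)  # (n, 1) row norms
    return x / x_norm                                         # broadcast divide

x = np.array([[0, 3, 4], [1, 6, 4]])
print(normalizeRows_ref(x))  # first row → [0.  0.6 0.8]
```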
**Note**:
When computing x_norm, you take the norm of each row of x. Hence x_norm has the same number of rows as normalizeRows(x) but only one column. When x is divided by x_norm, "broadcasting" is applied, which we cover next.
### 1.5 - Broadcasting and the softmax function ####
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For more details, see the official [broadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) documentation.
**Exercise**: Implement the softmax function using numpy. You can think of softmax as a normalization function used when your algorithm needs to classify two or more classes. You will learn more about softmax in later exercises.
**Instructions**:
- $ \text{For } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
- $\text{For a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ denotes the element in the $i\text{-th}$ row and $j\text{-th}$ column of $x$, so we get: }$
$$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$
```
# GRADED FUNCTION: softmax
def softmax(x):
    """
    Compute the softmax of each row of the input x.
    The code should work both for a row vector and for matrices of shape (n, m).
    Input:
    x: a numpy array of shape (n, m)
    Output:
    s: A numpy matrix equal to the softmax of x, of shape (n, m)
    """
    ### START CODE HERE ### (≈ 3 lines of code)
    # Apply exp() to each element of x. Use np.exp(...).
    x_exp =
    # Define the vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
    x_sum =
    # Compute softmax(x) by dividing x_exp by x_sum. numpy broadcasting should apply automatically.
    s =
    ### END CODE HERE ###
    return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
```
**Expected output**:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
**Note**:
- If you inspect the shapes of x_exp, x_sum and s, you will see that x_sum has shape (2,1) while x_exp and s have shape (2,5). **x_exp/x_sum** works thanks to numpy broadcasting.
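For reference, one way to fill in the three blanks above; the shapes noted in the comments are for the (2, 5) example input:

```python
import numpy as np

def softmax_ref(x):
    x_exp = np.exp(x)                             # (2, 5): element-wise exponential
    x_sum = np.sum(x_exp, axis=1, keepdims=True)  # (2, 1): one sum per row
    s = x_exp / x_sum                             # (2, 5): broadcasting divides each row
    return s

x = np.array([[9, 2, 5, 0, 0], [7, 5, 0, 0, 0]])
print(softmax_ref(x).sum(axis=1))  # each row sums to 1
```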
You now have a basic understanding of numpy and have implemented several useful functions used in deep learning.
## 2) Vectorization
In deep learning, you work with very large datasets. A computationally inefficient function can therefore become a huge bottleneck in your algorithm and make the model take far too long to run. To make sure your code is computationally efficient, we use vectorization. For example, compare the following implementations of the dot product, the outer product, and element-wise multiplication.
```
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC IMPLEMENTATION OF THE DOT PRODUCT OF TWO VECTORS ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
    dot += x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC IMPLEMENTATION OF THE OUTER PRODUCT ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # zero matrix of shape len(x1)*len(x2)
for i in range(len(x1)):
    for j in range(len(x2)):
        outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENT-WISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
    mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("element-wise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # random numpy array of shape 3*len(x1)
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
    for j in range(len(x1)):
        gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENT-WISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("element-wise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
```
As you can see, the vectorized implementation is cleaner and more efficient. For bigger vectors/matrices, the differences in computation time become even larger.
**Note:**
`np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator, which perform element-wise multiplication.
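A quick self-contained illustration of that difference (toy vectors, not part of the graded cells):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print(np.multiply(a, b))  # element-wise: [ 4 10 18]
print(a * b)              # same element-wise product
print(np.dot(a, b))       # inner product: 4 + 10 + 18 = 32
```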
### 2.1 Implementing the L1 and L2 loss functions
**Exercise**: Implement the vectorized numpy version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
**Hint**:
- The loss (cost) function is used to evaluate the performance of the model. The bigger the loss, the bigger the difference between the predictions ($ \hat{y} $) and the true values ($y$). In deep learning, optimization algorithms such as Gradient Descent are used to train the model and minimize the loss.
- The L1 loss is defined as:
$$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$
```
# GRADED FUNCTION: L1
def L1(yhat, y):
    """
    Input:
    yhat: vector of size m (predicted labels)
    y: vector of size m (true labels)
    Output:
    loss: the value of the L1 loss defined above
    """
    ### START CODE HERE ### (≈ 1 line of code)
    loss =
    ### END CODE HERE ###
    return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
```
**Expected output**:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
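For reference, the blank above can be filled in with a single vectorized line:

```python
import numpy as np

def L1_ref(yhat, y):
    # formula (6): sum of absolute differences
    return np.sum(np.abs(y - yhat))

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print(L1_ref(yhat, y))  # ≈ 1.1
```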
**Exercise**: Implement the vectorized numpy version of the L2 loss. There are several ways to implement it, but you may find np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=0}^n x_j^{2}$.
- The L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$
```
# GRADED FUNCTION: L2
def L2(yhat, y):
    """
    Input:
    yhat: vector of size m (predicted labels)
    y: vector of size m (true labels)
    Output:
    loss: the value of the L2 loss defined above
    """
    ### START CODE HERE ### (≈ 1 line of code)
    loss =
    ### END CODE HERE ###
    return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
```
**Expected output**:
<table style="width:20%">
<tr>
<td> **L2** </td>
<td> 0.43 </td>
</tr>
</table>
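For reference, a one-line solution using the np.dot() hint:

```python
import numpy as np

def L2_ref(yhat, y):
    # formula (7): np.dot(d, d) is the sum of squared entries of d
    d = y - yhat
    return np.dot(d, d)

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print(L2_ref(yhat, y))  # ≈ 0.43
```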
## 1. Bitcoin and Cryptocurrencies: Full dataset, filtering, and reproducibility
<p>Since the <a href="https://newfronttest.bitcoin.com/bitcoin.pdf">launch of Bitcoin in 2008</a>, hundreds of similar projects based on the blockchain technology have emerged. We call these cryptocurrencies (also coins or cryptos in Internet slang). Some are extremely valuable nowadays, and others may have the potential to become extremely valuable in the future<sup>1</sup>. In fact, on the 6th of December of 2017, Bitcoin had a <a href="https://en.wikipedia.org/wiki/Market_capitalization">market capitalization</a> above $200 billion. </p>
<p><center>
<img src="https://assets.datacamp.com/production/project_82/img/bitcoint_market_cap_2017.png" style="width:500px"> <br>
<em>The astonishing increase of Bitcoin market capitalization in 2017.</em></center></p>
<p>*<sup>1</sup> <strong>WARNING</strong>: The cryptocurrency market is exceptionally volatile<sup>2</sup> and any money you put in might disappear into thin air. Cryptocurrencies mentioned here <strong>might be scams</strong> similar to <a href="https://en.wikipedia.org/wiki/Ponzi_scheme">Ponzi Schemes</a> or have many other issues (overvaluation, technical, etc.). <strong>Please do not mistake this for investment advice</strong>. *</p>
<p><em><sup>2</sup> <strong>Update on March 2020</strong>: Well, it turned out to be volatile indeed :D</em></p>
<p>That said, let's get to business. We will start with a CSV we conveniently downloaded on the 6th of December of 2017 using the coinmarketcap API (NOTE: The public API went private in 2020 and is no longer available) named <code>datasets/coinmarketcap_06122017.csv</code>. </p>
```
# Importing pandas
import pandas as pd
# Importing matplotlib and setting aesthetics for plotting later.
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
plt.style.use('fivethirtyeight')
# Reading datasets/coinmarketcap_06122017.csv into pandas
dec6 = pd.read_csv('datasets/coinmarketcap_06122017.csv')
# Selecting the 'id' and the 'market_cap_usd' columns
market_cap_raw = dec6[['id','market_cap_usd']]
# Counting the number of values
print(market_cap_raw.count())
# ... YOUR CODE FOR TASK 2 ...
```
## 2. Discard the cryptocurrencies without a market capitalization
<p>Why do the <code>count()</code> for <code>id</code> and <code>market_cap_usd</code> differ above? It is because some cryptocurrencies listed in coinmarketcap.com have no known market capitalization, this is represented by <code>NaN</code> in the data, and <code>NaN</code>s are not counted by <code>count()</code>. These cryptocurrencies are of little interest to us in this analysis, so they are safe to remove.</p>
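A tiny self-contained illustration of how `count()` skips `NaN`s (toy data, not the real dataset):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'id': ['btc', 'eth', 'doge'],
                    'market_cap_usd': [1.0, np.nan, 3.0]})
print(toy.count())  # id: 3, market_cap_usd: 2; the NaN row is not counted
```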
```
# Filtering out rows without a market capitalization
cap = market_cap_raw.query('market_cap_usd > 0')
# Counting the number of values again
print(cap.count())
# ... YOUR CODE FOR TASK 3 ...
```
## 3. How big is Bitcoin compared with the rest of the cryptocurrencies?
<p>At the time of writing, Bitcoin is under serious competition from other projects, but it is still dominant in market capitalization. Let's plot the market capitalization for the top 10 coins as a barplot to better visualize this.</p>
```
#Declaring these now for later use in the plots
TOP_CAP_TITLE = 'Top 10 market capitalization'
TOP_CAP_YLABEL = '% of total cap'
# Selecting the first 10 rows and setting the index
cap10 = cap.head(10).set_index('id')
# Calculating market_cap_perc
cap10 = cap10.assign(market_cap_perc =
lambda x: (x.market_cap_usd / cap.market_cap_usd.sum()) * 100)
# Plotting the barplot with the title defined above
ax = cap10.market_cap_perc.plot.bar(title=TOP_CAP_TITLE)
# Annotating the y axis with the label defined above
ax.set_ylabel(TOP_CAP_YLABEL)
# ... YOUR CODE FOR TASK 4 ...
```
## 4. Making the plot easier to read and more informative
<p>While the plot above is informative enough, it can be improved. Bitcoin is too big, and the other coins are hard to distinguish because of this. Instead of the percentage, let's use a log<sub>10</sub> scale of the "raw" capitalization. Plus, let's use color to group similar coins and make the plot more informative<sup>1</sup>. </p>
<p>For the colors rationale: bitcoin-cash and bitcoin-gold are forks of the bitcoin <a href="https://en.wikipedia.org/wiki/Blockchain">blockchain</a><sup>2</sup>. Ethereum and Cardano both offer Turing Complete <a href="https://en.wikipedia.org/wiki/Smart_contract">smart contracts</a>. Iota and Ripple are not minable. Dash, Litecoin, and Monero get their own color.</p>
<p><sup>1</sup> <em>This coloring is a simplification. There are more differences and similarities that are not being represented here.</em></p>
<p><sup>2</sup> <em>The bitcoin forks are actually <strong>very</strong> different, but it is out of scope to talk about them here. Please see the warning above and do your own research.</em></p>
```
# Colors for the bar plot
COLORS = ['orange', 'green', 'orange', 'cyan', 'cyan', 'blue', 'silver', 'orange', 'red', 'green']
# Plotting market_cap_usd as before but adding the colors and scaling the y-axis
ax = cap10.market_cap_usd.head(10).plot(kind = 'bar', title = TOP_CAP_TITLE, color = COLORS, logy = True)
# Annotating the y axis with 'USD'
ax.set_ylabel('USD')
# ... YOUR CODE FOR TASK 5 ...
# Final touch! Removing the xlabel as it is not very informative
ax.set_xlabel('')
# ... YOUR CODE FOR TASK 5 ...
```
## 5. What is going on?! Volatility in cryptocurrencies
<p>The cryptocurrencies market has been spectacularly volatile since the first exchange opened. This notebook didn't start with a big, bold warning for nothing. Let's explore this volatility a bit more! We will begin by selecting and plotting the 24 hours and 7 days percentage change, which we already have available.</p>
```
# Selecting the id, percent_change_24h and percent_change_7d columns
volatility = dec6[['id','percent_change_24h','percent_change_7d']]
# Setting the index to 'id' and dropping all NaN rows
volatility = volatility.set_index('id').dropna()
# Sorting the DataFrame by percent_change_24h in ascending order
volatility = volatility.sort_values(by=['percent_change_24h'])
# Checking the first few rows
print(volatility.head())
# ... YOUR CODE FOR TASK 6 ...
```
## 6. Well, we can already see that things are *a bit* crazy
<p>It seems you can lose a lot of money quickly on cryptocurrencies. Let's plot the top 10 biggest gainers and top 10 losers in market capitalization.</p>
```
#Defining a function with 2 parameters, the series to plot and the title
def top10_subplot(volatility_series, title):
# Making the subplot and the figure for two side by side plots
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10, 6))
# Plotting with pandas the barchart for the top 10 losers
ax = volatility_series[:10].plot.bar(ax=axes[0], color='darkred')
# Setting the figure's main title to the text passed as parameter
# ... YOUR CODE FOR TASK 7 ...
fig.suptitle(t = title)
# Setting the ylabel to '% change'
# ... YOUR CODE FOR TASK 7 ...
ax.set_ylabel('% change')
# Same as above, but for the top 10 winners
ax = volatility_series[-10:].plot.bar(ax=axes[1], color='darkblue')
# Returning this for good practice, might use later
return fig, ax
DTITLE = "24 hours top losers and winners"
# Calling the function above with the 24 hours period series and title DTITLE
fig, ax = top10_subplot(volatility.percent_change_24h, DTITLE)
```
## 7. Ok, those are... interesting. Let's check the weekly Series too.
<p>800% daily increase?! Why are we doing this tutorial and not buying random coins?<sup>1</sup></p>
<p>After calming down, let's reuse the function defined above to see what is going on weekly instead of daily.</p>
<p><em><sup>1</sup> Please take a moment to understand the implications of the red plots on how much value some cryptocurrencies lose in such short periods of time</em></p>
```
# Sorting in ascending order
volatility7d = volatility[['percent_change_7d']].sort_values(by=['percent_change_7d'])
WTITLE = "Weekly top losers and winners"
# Calling the top10_subplot function
fig, ax = top10_subplot(volatility7d, WTITLE)
```
## 8. How small is small?
<p>The names of the cryptocurrencies above are quite unknown, and there is a considerable fluctuation between the 1 and 7 days percentage changes. As with stocks, and many other financial products, the smaller the capitalization, the bigger the risk and reward. Smaller cryptocurrencies are less stable projects in general, and therefore even riskier investments than the bigger ones<sup>1</sup>. Let's classify our dataset based on Investopedia's capitalization <a href="https://www.investopedia.com/video/play/large-cap/">definitions</a> for company stocks. </p>
<p><sup>1</sup> <em>Cryptocurrencies are a new asset class, so they are not directly comparable to stocks. Furthermore, there are no limits set in stone for what a "small" or "large" stock is. Finally, some investors argue that bitcoin is similar to gold, this would make them more comparable to a <a href="https://www.investopedia.com/terms/c/commodity.asp">commodity</a> instead.</em></p>
```
# Selecting everything bigger than 10 billion
largecaps = cap.query('market_cap_usd>10000000000')
# Printing out largecaps
print(largecaps)
```
## 9. Most coins are tiny
<p>Note that many coins are not comparable to large companies in market cap, so let's divert from the original Investopedia definition by merging categories.</p>
<p><em>This is all for now. Thanks for completing this project!</em></p>
```
# Making a nice function for counting different marketcaps from the
# "cap" DataFrame. Returns an int.
# INSTRUCTORS NOTE: Since you made it to the end, consider it a gift :D
def capcount(query_string):
return cap.query(query_string).count().id
# Labels for the plot
LABELS = ["biggish", "micro", "nano"]
# Using capcount count the biggish cryptos
biggish = capcount('(market_cap_usd>10000000000) or (market_cap_usd>2000000000 and market_cap_usd<10000000000) or (market_cap_usd>300000000 and market_cap_usd<2000000000)')
# Same as above for micro ...
micro = capcount('market_cap_usd>50000000 and market_cap_usd<300000000')
# ... and for nano
nano = capcount('market_cap_usd<50000000')
# Making a list with the 3 counts
values = [biggish,micro,nano]
# Plotting them with matplotlib
plt.bar(x=LABELS, height=values)
```
```
import pvl
import struct
import matplotlib.pyplot as plt
import numpy as np
import datetime
import os.path
import binascii
chan_file = '/home/arsanders/testData/chandrayaan/forwardDescending/input/M3G20081129T171431_V03_L1B.LBL'
image_file = chan_file
header = pvl.load(chan_file)
# chan1m32isis requires 4 different files
rdn_file = os.path.dirname(chan_file) + "/"+ header['RDN_FILE']['^RDN_IMAGE']
obs_file = os.path.dirname(chan_file) + "/"+ header['OBS_FILE']['^OBS_IMAGE']
loc_file = os.path.dirname(chan_file) + "/"+ header['LOC_FILE']['^LOC_IMAGE']
tab_file = os.path.dirname(chan_file) + "/"+ header['UTC_FILE']['^UTC_TIME_TABLE']
with open(rdn_file, 'rb') as f:
# From the end, seek n_records * record_size backwards
f.seek(-header['RDN_FILE']['RECORD_BYTES'] * header['RDN_FILE']['FILE_RECORDS'], 2)
b_image_data = f.read()
n_lines = 5
line_length = header['RDN_FILE']['RDN_IMAGE']['LINE_SAMPLES'] * (header['RDN_FILE']['RDN_IMAGE']['SAMPLE_BITS']//8)
def read_chandrayaan(b_image_data, line_length, n_lines, n_bands):
image_data = []
for j in range(n_lines*n_bands):
image_sample = np.frombuffer(b_image_data[j*line_length:(j+1)*line_length],
dtype=np.float32, count=int(line_length/4))
image_data.append(image_sample)
return np.array(image_data)
n_bands = header['RDN_FILE']['RDN_IMAGE']['BANDS']
n_output_bands = 3
image_data = read_chandrayaan(b_image_data, line_length, n_lines, n_bands)
cropped_image_data = image_data[np.where(np.arange(image_data.shape[0]) % n_bands < n_output_bands)]
plt.imshow(cropped_image_data[0::n_output_bands])
with open(obs_file, 'rb') as f:
# From the end, seek n_records * record_size backwards
f.seek(-header['OBS_FILE']['RECORD_BYTES'] * header['OBS_FILE']['FILE_RECORDS'], 2)
b_image_data = f.read()
n_bands = header['OBS_FILE']['OBS_IMAGE']['BANDS']
obs_image_data = read_chandrayaan(b_image_data, line_length, n_lines, n_bands)
plt.imshow(obs_image_data[1::10])
with open(loc_file, 'rb') as f:
# From the end, seek n_records * record_size backwards
f.seek(-header['LOC_FILE']['RECORD_BYTES'] * header['LOC_FILE']['FILE_RECORDS'], 2)
b_image_data = f.read()
line_length = header['LOC_FILE']['LOC_IMAGE']['LINE_SAMPLES'] * (header['LOC_FILE']['LOC_IMAGE']['SAMPLE_BITS']//8)
n_bands = header['LOC_FILE']['LOC_IMAGE']['BANDS']
image_data = []
for j in range(n_lines*n_bands):
image_sample = np.frombuffer(b_image_data[j*line_length:(j+1)*line_length],
dtype=np.float64, count=int(line_length/8))
image_data.append(image_sample)
loc_image_data = np.array(image_data)
plt.imshow(loc_image_data[0::n_bands])
# Set up files names for each of the four files
rdn_fn, rdn_ext = os.path.splitext(rdn_file)
obs_fn, obs_ext = os.path.splitext(obs_file)
loc_fn, loc_ext = os.path.splitext(loc_file)
tab_fn, tab_ext = os.path.splitext(tab_file)
crop = '_cropped'
mini_rdn_fn = rdn_fn + crop + rdn_ext
mini_rdn_bn = os.path.basename(mini_rdn_fn)
mini_obs_fn = obs_fn + crop + obs_ext
mini_obs_bn = os.path.basename(mini_obs_fn)
mini_loc_fn = loc_fn + crop + loc_ext
mini_loc_bn = os.path.basename(mini_loc_fn)
mini_tab_fn = tab_fn + crop + tab_ext
mini_tab_bn = os.path.basename(mini_tab_fn)
header['RDN_FILE']['^RDN_IMAGE'] = mini_rdn_bn
header['RDN_FILE']['FILE_RECORDS'] = n_lines
header['RDN_FILE']['RDN_IMAGE']['LINES'] = n_lines
header['RDN_FILE']['RDN_IMAGE']['BANDS'] = n_output_bands
header['RDN_FILE']['RECORD_BYTES'] = int(n_output_bands * (header['RDN_FILE']['RDN_IMAGE']['SAMPLE_BITS']/8) *header['RDN_FILE']['RDN_IMAGE']['LINE_SAMPLES'])
header['LOC_FILE']['^LOC_IMAGE'] = mini_loc_bn
header['LOC_FILE']['FILE_RECORDS'] = n_lines
header['LOC_FILE']['LOC_IMAGE']['LINES'] = n_lines
header['OBS_FILE']['^OBS_IMAGE'] = mini_obs_bn
header['OBS_FILE']['FILE_RECORDS'] = n_lines
header['OBS_FILE']['OBS_IMAGE']['LINES'] = n_lines
header['UTC_FILE']['^UTC_TIME_TABLE'] = mini_tab_bn
header['UTC_FILE']['FILE_RECORDS'] = n_lines
header['UTC_FILE']['UTC_TIME_TABLE']['ROWS'] = n_lines
label_fn, label_ext = os.path.splitext(chan_file)
out_label = label_fn + crop + label_ext
grammar = pvl.grammar.ISISGrammar()
grammar.comments+=(("#", "\n"), )
encoder = pvl.encoder.ISISEncoder()
pvl.dump(header, out_label, encoder=encoder, grammar=grammar)
with open(mini_rdn_fn, 'wb+') as f:
b_reduced_image_data = cropped_image_data.tobytes()
f.seek(0, 2)
f.write(b_reduced_image_data)
with open(mini_loc_fn, 'wb+') as f:
b_reduced_image_data = loc_image_data.tobytes()
f.seek(0, 2)
f.write(b_reduced_image_data)
with open(mini_obs_fn, 'wb+') as f:
b_reduced_image_data = obs_image_data.tobytes()
f.seek(0, 2)
f.write(b_reduced_image_data)
with open(tab_file) as f:
head = [next(f) for x in range(n_lines)]
head = "".join(head)
with open(mini_tab_fn, 'w+') as f:
f.write(head)
```
# Scalar uniform quantisation of random variables
This tutorial considers scalar quantisation implemented with a uniform quantiser and applied to random variables with different Probability Mass Functions (PMFs). In particular, we will consider uniform- and Gaussian-distributed random variables so as to comment on the optimality of such a simple quantiser.
## Preliminary remarks
Quantisation is an irreversible operation which reduces the precision used to represent the data to be encoded. This precision reduction translates into fewer bits used to transmit the information. Accordingly, the whole dynamic range associated with the input data ($X$) is divided into intervals denoted as *quantisation bins*, each having a given width. Each quantisation bin $b_i$ is also associated with a reproduction level $l_i$, which is the value used to represent all original data values belonging to $b_i$. From this description, it is easy to see why quantisation is an irreversible process: it is a *many-to-one* mapping, hence after a value $x$ is quantised it cannot be recovered. Usually a quantiser is characterised by its number of bits $qb$, which determines the number of reproduction levels, given as $2^{qb}$. If scalar (1D) quantities are presented to the quantiser as input, we talk about *scalar quantisation* (the subject of this tutorial); if groups of samples are considered together as input, we talk about *vector quantisation*.
During encoding the quantiser outputs the index $i$ of the quantisation bin $b_i$ to which each input sample belongs. The decoder receives these indices and writes to the output the corresponding reproduction level $l_i$. The mapping $\{b_i \leftrightarrow l_i\}$ must be known at the decoder side. Working out the optimal partitioning of the input data range (i.e. the width of each $b_i$) and the associated set of $\{l_i\}$ can be a computationally intensive process, although it can provide significant gains in the overall rate-distortion performance of our coding system.
A widely used and well-known quantiser is the so-called *uniform quantiser*, characterised by each $b_i$ having the same width and the reproduction level $l_i$ placed at the mid-value of the quantisation bin, that is:
$$
\large
l_i = \frac{b_i + b_{i+1}}{2}.
$$
The width of each quantisation bin is usually denoted as the quantisation step $\Delta$, given as:
$$
\large
\Delta = \frac{\max(X) - \min(X)}{2^{qb}}.
$$
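As a minimal numerical sketch of the two formulas above (the range $[0, 16)$, the sample values, and $qb = 2$ are made-up choices purely for illustration), the encoder maps each sample to a bin index and the decoder maps the index back to the mid-bin reproduction level:

```python
import numpy as np

qb = 2
x = np.array([0.7, 3.9, 8.2, 15.1])        # toy input samples in [0, 16)
delta = (16.0 - 0.0) / 2**qb               # quantisation step, here 4.0
indices = np.floor(x / delta).astype(int)  # encoder output: bin index i
levels = delta * indices + delta / 2       # decoder output: mid-bin level l_i
print(indices)  # [0 0 2 3]
print(levels)   # [ 2.  2. 10. 14.]
```

Note how both 0.7 and 3.9 fall in the same bin and are reproduced by the same level, which is exactly the many-to-one mapping described above.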
Using the Mean Square Error (MSE) as distortion measure for the quantisation error ($e$) and considering the input data ($X$) to have a uniform PMF, the variance of $e$, $\sigma^2_e$ is given by:
$$
\large
\sigma^2_e = \frac{\Delta^2}{12}.
$$
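This expression is easy to verify empirically: when a uniform input spans many bins, the quantisation error itself behaves like a uniform variable over $[-\Delta/2, \Delta/2]$, and its sample variance matches $\Delta^2/12$. A quick Monte Carlo sketch (the step and range are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.5
x = rng.uniform(0.0, 64.0, size=200_000)  # uniform input spanning many bins
xq = delta * np.round(x / delta)          # uniform quantisation with step delta
err_var = np.var(x - xq)
print(err_var, delta**2 / 12)             # the two values should be close
```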
If $X$ ranges in $\left[-\frac{M\Delta}{2},\frac{M\Delta}{2}\right]$ with $M = 2^{qb}$, and if we consider the Signal-to-Noise Ratio (SNR) as an alternative measure of reproduction quality, then we have the so-called ***six dB rule***:
$$
SNR = 6 \cdot qb\quad[dB],
$$
that is, each bit added to increase the number of reproduction levels provides a 6 dB improvement in reconstructed quality. More details about rate-distortion theory and quantisation are provided in these two good references:
* Allen Gersho and Robert M. Gray. Vector Quantization and Signal Compression. Kluwer Academic Press, 732 pages, 1992.
* David S. Taubman and Michael W. Marcellin, "JPEG2000: Image compression fundamentals, standards and practice", Kluwer Academic Press, 773 pages, 2002.
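For reference, the six dB rule follows directly from the two expressions above: for a uniform input spanning the full range $M\Delta$ with $M = 2^{qb}$, the signal variance is $\sigma_X^2 = (2^{qb}\Delta)^2/12$, so

$$
\large
SNR = 10\log_{10}\frac{\sigma_X^2}{\sigma_e^2}
    = 10\log_{10}\frac{(2^{qb}\Delta)^2/12}{\Delta^2/12}
    = 20\,qb\,\log_{10}2 \approx 6.02\,qb\quad[dB],
$$

where the rule as quoted above rounds $6.02$ down to $6$.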
## Rate distortion performance of a uniform quantiser
We now demonstrate the rate-distortion performance of a uniform quantiser and see how closely it follows the six dB rule. We also note from the remarks above that a uniform quantiser is a low-complexity solution to quantisation. In fact, encoding reduces to a simple integer division by $\Delta$ rather than a comparison of each input sample against the extrema of the different quantisation bins. Moreover, at the decoder side, the only additional information required is $\Delta$. Accordingly, it is interesting to verify whether a uniform quantiser can attain the six dB rule for other PMFs as well, most notably because, when a transform is used in our coding scheme, the distribution of the coefficients tends to be Laplacian or Gaussian.
The following Python code cell generates two inputs: one with a uniform PMF and another with a Gaussian PMF (mean 128 and variance $\sigma^2$ equal to four). Uniform quantisation is applied to both inputs and the SNR is computed on the reconstructed values. A plot of the rate-distortion performance is shown along with the straight line associated with the six dB rule.
```
import numpy as np
import numpy.random as rnd
import matplotlib.pyplot as plt
# Total samples
N = 1000
# Quantiser's bits
qb = np.arange(0, 8, 1)
# Generate a random variable uniformly distributed in [0, 255]
X = np.round(255*rnd.rand(N, 1)).astype(np.int32)
var_X = np.var(X)
# Generate a random Gaussian variable with mean 128 and variance 4
Xg = 2.0*rnd.randn(N, 1) + 128
var_Xg = np.var(Xg)
B = np.max(Xg) - np.min(Xg)
snr_data = np.zeros(len(qb))
snr_data_g = np.zeros(len(qb))
for i, b in enumerate(qb):
levels = 2**b
Q = 256.0 / float(levels)
Qg = B / float( levels)
Y = Q * np.round(X / Q)
Yg = Qg*np.round(Xg / Qg)
mse = np.mean(np.square(X - Y))
mse_g = np.mean(np.square(Xg - Yg))
snr_data[i] = 10*np.log10(var_X / mse)
snr_data_g[i] = 10*np.log10(var_Xg / mse_g)
six_dB_rule = 6.0 * qb
# Plot the results and verify the 6dB rule
plt.figure(figsize=(8,8))
plt.plot(qb, snr_data, 'b-o', label='Uniform quantiser with uniform variable')
plt.xlabel('Quantiser bits', fontsize=16)
plt.ylabel('Signal-to-Noise-Ratio SNR [dB]', fontsize=16)
plt.grid()
plt.plot(qb, snr_data_g, 'k-+', label='Uniform quantiser with Gaussian variable')
plt.plot(qb, six_dB_rule, 'r-*', label='Six dB rule')
plt.legend();
```
As expected, the uniform quantiser applied to a uniformly distributed input provides a rate-distortion performance which follows the six dB rule. Conversely, when the input is Gaussian, the performance is offset by approximately 4 dB. This suboptimal performance is due to the fact that the reproduction levels are placed at the midpoint of each interval, which for a uniform PMF is absolutely fine since each value in a given bin $b_i$ has an equal chance to appear. This is not the case for a Gaussian PMF, where in each bin some values are more likely to appear than others. Accordingly, it would make sense to place the reproduction levels around those values which are more likely to appear. The procedure which does this automatically is the subject of the next section of our tutorial.
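This intuition can be checked directly: for Gaussian samples falling inside a single bin, a reproduction level placed at the conditional mean of the bin beats the bin midpoint in MSE. A small sketch (the bin $[1, 2)$ and $\sigma = 2$ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, size=500_000)     # zero-mean Gaussian, sigma = 2
in_bin = x[(x >= 1.0) & (x < 2.0)]         # samples in an example bin [1, 2)
midpoint = 1.5
centroid = in_bin.mean()                   # empirical E[X | X in bin]
mse_mid = np.mean((in_bin - midpoint) ** 2)
mse_cen = np.mean((in_bin - centroid) ** 2)
# The density decreases across the bin, so the centroid sits below 1.5
# and always achieves an MSE no larger than the midpoint's
print(centroid, mse_mid, mse_cen)
```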
## Towards an optimal quantiser: The Lloyd-Max algorithm
As mentioned above, we want a procedure which adjusts the reproduction levels to fit the underlying PMF of the data. In particular, using again the MSE as distortion measure, one can show that the reproduction level which minimises the MSE in each quantisation bin is given by:
$$
\large
l_i = E[X|X\in b_i]= \frac{\sum_{x\in b_i}x\cdot P_X(x)}{\sum_{x\in b_i}P_X(x)},
$$
where $P_X$ denotes the PMF of the input $X$. The condition above is usually denoted as the *centroid condition* and, for a continuous variable, becomes:
$$
\large
l_i = E[X|X\in b_i]= \frac{\int_{x\in b_i}x\cdot f_X(x)\,dx}{\int_{x\in b_i}f_X(x)\,dx},
$$
where now $f_X$ denotes the Probability Density Function (PDF). Note that the centroid condition above requires knowing the partitioning of the input data into quantisation bins $b_i$. Given that we do not know beforehand what such a partitioning would look like, we can assume an initial partitioning with equal widths (as for a uniform quantiser) and then compute the reproduction levels according to the centroid condition above. Once all reproduction levels have been computed, we can derive a new set of quantisation bins whereby the extrema of each bin are given by the midpoints of the reproduction levels just computed. We then compute a new set of reproduction levels and continue to iterate until convergence is reached. More precisely, the following pseudo-code describes this workflow:
* k = 0
* set $b_i^k$ equal to the bins associated with a uniform quantiser with $qb$ bits
* apply uniform quantisation over the input data, compute the associated MSE and set it to $MSE_{old}$
* set $\gamma$ = $\infty$
* while $\gamma > \epsilon$:
* compute $l_i^k$ using the centroid condition for each bin $b_i^k$
* derive the new quantisation bins as $b_i^{k+1} = \frac{l_{i}^k + l_{i+1}^k}{2}$
* apply the quantiser derived by these new bins and reproduction levels and compute the MSE
* compute $\gamma = \frac{MSE_{old} - MSE}{MSE_{old}}$
* set $k = k + 1$
where $\epsilon$ denotes a given tolerance threshold. The iterative procedure described above is also known as the [Lloyd-Max algorithm](https://en.wikipedia.org/wiki/Lloyd%27s_algorithm). The next code cell provides an implementation of the Lloyd-Max algorithm, conveniently wrapped as a function so we can compare its rate-distortion performance with the uniform quantiser analysed before.
```
from typing import Tuple
import numpy.typing as npt
def lloydmax(X: npt.NDArray[np.float64], qb: int) -> Tuple[npt.NDArray[np.float64], npt.NDArray[np.float64], npt.NDArray[np.float64]]:
levels = 1 << qb
delta = (np.max(X) - np.min(X)) / levels
pmf = np.zeros(X.shape)
# Quantisation bins
bins = np.array([np.min(X) + float(i * delta) for i in range(levels + 1)], np.float64)
# Add a small quantity to the last bin to avoid empty cells
bins[-1] += 0.1
# pmf calculation
for i in range(levels):
index = (bins[i] <= X) & (X < bins[i + 1])
pmf[index] = np.sum(index) / X.size
# Reproduction levels
rl = (bins[:levels] + bins[1:levels + 1]) / 2
# Codebook initialization with a uniform scalar quantiser
XQ = np.zeros(X.shape)
for i in range(rl.size):
index = (bins[i] <= X) & (X < bins[i + 1])
XQ[index] = rl[i]
error = np.square(X - XQ)
MSE_old = np.average(error)
epsilon, variation, step = 1e-5, 1, 1
# Lloyd-Max Iteration over all decision thresholds and reproduction levels
bins_next, rl_next = np.zeros(bins.shape), np.zeros(rl.shape)
while variation > epsilon:
# Loop over all reproduction levels in order to adjust them wrt
# centroid condition
for i in range(levels):
index = (bins[i] <= X) & (X < bins[i + 1])
            if np.all(~index):  # empty bin: keep the same reproduction level for the next step
rl_next[i] = rl[i]
else: # centroid condition
rl_next[i] = np.sum(np.multiply(X[index], pmf[index])) / np.sum(pmf[index])
# New decision threshold: they are at the mid point of two
# reproduction levels
bins_next[1:levels] = (rl_next[:levels - 1] + rl_next[1:levels]) / 2
bins_next[0], bins_next[-1] = bins[0], bins[-1]
# New MSE calculation
XQ[:] = 0
for i in range(rl_next.size):
index = (bins_next[i] <= X) & (X < bins_next[i + 1])
XQ[index] = rl_next[i]
MSE = np.average(np.square(X - XQ))
variation = (MSE_old - MSE) / (MSE_old)
# Recompute the pmf weights
for i in range(levels):
index = (bins_next[i] <= X) & (X < bins_next[i + 1])
pmf[index] = np.sum(index) / X.size
        # Adopt the updated variables (copy to avoid aliasing across iterations)
        bins, rl, MSE_old = bins_next.copy(), rl_next.copy(), MSE
step += 1
return bins, rl, XQ
```
The code above contains some comments to help the reader follow the flow. We are now ready to try this non-uniform quantiser and measure its performance. The following Python code cell runs the Lloyd-Max quantiser for each of the tested quantiser bit values and computes the associated SNR.
```
snr_data_lm = np.zeros((len(qb)))
for i, b in enumerate(qb):
_, _, xq_lm = lloydmax(Xg, b)
mse = np.average(np.square(Xg - xq_lm))
snr_data_lm[i] = 10 * np.log10(var_Xg / mse)
# Plot the results, including the 6dB rule
plt.figure(figsize=(8,8))
plt.plot(qb, snr_data, 'b-o', label='Uniform quantiser applied to uniform PMF')
plt.xlabel('Quantiser bits', fontsize=16)
plt.ylabel('Signal-to-Noise-Ratio SNR [dB]', fontsize=16)
plt.grid()
plt.plot(qb, snr_data_g, 'k-+', label='Uniform quantiser applied to Gaussian PMF')
plt.plot(qb, snr_data_lm, 'g-x', label='Lloyd-Max quantiser applied to Gaussian PMF')
plt.plot(qb, six_dB_rule, 'r-*', label='Six dB rule')
plt.legend();
```
We can observe from the graph above that the Lloyd-Max algorithm provides a better SNR at low bitrates and then tends to converge to the same performance as the uniform quantiser applied to a Gaussian variable. This result might be surprising at first sight, but it is not: as the number of quantiser bits grows, the quantisation step of the Lloyd-Max quantiser gets smaller and the PMF enclosed in each quantisation bin resembles a uniform one. In that case, the best the Lloyd-Max algorithm can do is to place each reproduction level at approximately the midpoint of its quantisation bin, which is exactly what a uniform quantiser does. Finally, we also note that the Lloyd-Max algorithm requires sending the reproduction levels and quantisation bins to the decoder, so some additional rate must be added to the bits used by the quantiser.
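To get a rough feel for that extra rate, a back-of-the-envelope sketch (assuming, purely for illustration, that each reproduction level and each interior bin edge is transmitted as a 32-bit float):

```python
N = 1000  # samples to encode, as in the experiment above
for qb in (1, 4, 8):
    levels = 2 ** qb
    # side information: 'levels' reproduction levels plus 'levels - 1' interior bin edges
    side_info_bits = 32 * (levels + (levels - 1))
    print(qb, side_info_bits / N)  # extra bits per encoded sample
```

For small codebooks the overhead is negligible, but at 8 bits it already exceeds 16 bits per sample for this toy block size, which is one reason the uniform quantiser (needing only $\Delta$) remains attractive.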
## Concluding remarks
In this short tutorial we investigated the rate-distortion performance of two types of scalar quantiser applied to random variables with a given probability mass function. We showed that a uniform quantiser follows the six dB rule when applied to a uniformly distributed random variable but is sub-optimal in the case of a Gaussian PMF. We then introduced the Lloyd-Max algorithm, which provides better rate-distortion performance, most notably when the number of bits allocated to the quantiser is small. The price to pay for this improved rate-distortion tradeoff is the additional complexity associated with the iterative Lloyd-Max procedure.
| github_jupyter |
<a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
# Setting Boundary Conditions on the Perimeter of a Raster.
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
### This tutorial illustrates how to modify the boundary conditions along the perimeter of a rectangular raster.
```
from landlab import RasterModelGrid
from landlab.plot.imshow import imshow_grid
import numpy as np
%matplotlib inline
```
- Instantiate a grid.
```
mg = RasterModelGrid((4, 4))
```
The node boundary condition options are:
- mg.BC_NODE_IS_CORE (status value = 0; a core node, on which model operations are performed)
- mg.BC_NODE_IS_FIXED_VALUE (status value = 1; a boundary node with a fixed value)
- mg.BC_NODE_IS_FIXED_GRADIENT (status value = 2; a boundary node with a fixed gradient)
- mg.BC_NODE_IS_LOOPED (status value = 3; a boundary node that is wrap-around)
- mg.BC_NODE_IS_CLOSED (status value = 4; a closed boundary: no flux can cross this node, or more accurately, the faces around the node)
(Note that these options are designed for convenience in writing Landlab applications, and are not "automatically enforced" in internal Landlab functions. For example, if you add two node fields together, as in `my_field1 + my_field2`, *all* values will be added, not just those at core nodes; to take advantage of the boundary coding, you would use a syntax like `my_field1[grid.core_nodes] + my_field2[grid.core_nodes]`.)
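Since `status_at_node` is just a NumPy integer array, the masking idiom above is ordinary boolean indexing. A minimal Landlab-free sketch (the status values below simply mirror Landlab's core and closed codes; the field values are made up):

```python
import numpy as np

CORE, CLOSED = 0, 4                          # mirror Landlab's node-status codes
status = np.array([CLOSED, CORE, CORE, CLOSED, CORE])
field1 = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
field2 = np.ones(5)
core_nodes = np.flatnonzero(status == CORE)  # analogue of grid.core_nodes
print(field1[core_nodes] + field2[core_nodes])  # only core-node values combined
```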
Check the status of boundaries immediately after instantiating the grid:
```
mg.status_at_node
```
The default conditions are for the perimeter to be fixed value (status of 1) and the interior nodes to be core (status of 0).
This is a bit easier to see graphically.
```
imshow_grid(mg, mg.status_at_node)
```
Now let's choose one node on the perimeter to be closed.
Note that `imshow_grid` by default does not illustrate values for closed nodes, so we override that below and show them in blue.
```
mg.status_at_node[2] = mg.BC_NODE_IS_CLOSED
imshow_grid(mg, mg.status_at_node, color_for_closed='blue')
```
We could set the boundary condition at each node individually, or at groups of nodes (e.g. where the `x_of_node` value is greater than some specified value). But in many cases we just want to set the edges in one way or another. There are some functions for setting the boundary conditions around the perimeter of a raster. (Remember that initially all of the perimeter nodes are mg.BC_NODE_IS_FIXED_VALUE by default.)
A generic way to do this is to use **set_status_at_node_on_edges**.
Note that this method takes a node-status value for each edge; the keyword order is **right, top, left, bottom**.
You can pass it, for example, mg.BC_NODE_IS_CLOSED, or 4, which is the value of the mg.BC_NODE_IS_CLOSED status.
Below we set the right and left edges as closed and the top and bottom as fixed_value.
```
mg.set_status_at_node_on_edges(right=mg.BC_NODE_IS_CLOSED,
top=mg.BC_NODE_IS_FIXED_VALUE,
left=mg.BC_NODE_IS_CLOSED,
bottom=mg.BC_NODE_IS_FIXED_VALUE)
#the same thing could be done as ...
#mg.set_status_at_node_on_edges(right=4, top=1, left=4, bottom=1)
imshow_grid(mg, mg.status_at_node, color_for_closed='blue')
```
There are multiple ways to set edge boundary conditions. If above isn't intuitive to you, keep reading.
Now let's set the right and left edges as closed boundaries using **set_closed_boundaries_at_grid_edges.**
Note that this method takes boolean values for whether a boundary should be closed. The order is
**right, top, left, bottom**.
Note that here we instantiate a new grid.
```
mg1 = RasterModelGrid((4, 4), 1.)
mg1.set_closed_boundaries_at_grid_edges(True, False, True, False)
imshow_grid(mg1, mg1.status_at_node, color_for_closed='blue')
```
Now let's try setting looped boundaries using **set_looped_boundaries.**
Note that this method takes boolean values for whether the top and bottom (first) or right and left (second) are looped.
We will set the top and bottom to be looped (status value of 3)
```
mg2 = RasterModelGrid((4, 4), 1.)
mg2.set_looped_boundaries(True, False)
imshow_grid(mg2, mg2.status_at_node)
```
Note that this has the right and left edges as mg.BC_NODE_IS_FIXED_VALUE (status value of 1).
We can change those to closed if we want.
```
mg2.set_closed_boundaries_at_grid_edges(True, False, True, False)
imshow_grid(mg2, mg2.status_at_node, color_for_closed='Blue')
```
Note that there are no dedicated methods for setting mg.BC_NODE_IS_FIXED_GRADIENT conditions on the boundary edges. But we can still do it: we could use **set_status_at_node_on_edges**, or index into `status_at_node` directly, as shown below.
Remember that mg.BC_NODE_IS_FIXED_GRADIENT has a status value of 2.
We will set the top and bottom to be fixed gradient.
```
mg3 = RasterModelGrid((4, 4), 1.)
mg3.status_at_node[mg3.y_of_node == 0] = mg3.BC_NODE_IS_FIXED_GRADIENT
mg3.status_at_node[mg3.y_of_node == 3] = mg3.BC_NODE_IS_FIXED_GRADIENT
imshow_grid(mg3, mg3.status_at_node, color_for_closed='Blue')
#there are no closed boundaries so we didn't need the color_for_closed option,
#but no problem if you accidentally include it!
```
### Click here for more <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">Landlab tutorials</a>
| github_jupyter |
## Part 3 - Deploy the model
In the second notebook we created a basic model and exported it to a file. In this notebook we'll use that same model file to create a REST API with Microsoft ML Server. The Ubuntu DSVM has an installation of ML Server for testing deployments. We'll create a REST API with our model and test it with the same truck image we used in notebook 2 to evaluate the model.
There are two variables you must set before running this notebook. The first is the password for your ML Server instance. At MLADS we've already set this for you. If you're following this tutorial on your own, you should configure your ML Server instance for [one-box deployment](https://docs.microsoft.com/en-us/machine-learning-server/operationalize/configure-machine-learning-server-one-box). The second variable is the name of the deployed web service. This needs to be unique on the VM. We recommend that you use your username and a number, like *username5*.
```
# choose a unique service name. We recommend you use your username and a number, like alias3
service_name = ____SET_ME_TO_A_UNIQUE_VALUE_____
# set the ML Server admin password. This is NOT your login password; it is the admin password for Machine Learning Server. If you deployed this tutorial using the ARM template in GitHub, this password is Dsvm@123
ml_server_password = ____SET_ME_____
```
## Microsoft ML Server Operationalization
ML Server Operationalization provides the ability to easily convert a model into a REST API and call it from many languages.
```
from IPython.display import Image as ShowImage
ShowImage(url="https://docs.microsoft.com/en-us/machine-learning-server/media/what-is-operationalization/data-scientist-easy-deploy.png", width=800, height=800)
```
ML Server runs one or more web nodes as the front end for REST API calls and one or more compute nodes to perform the calculations for the deployed services. This VM was configured for ML Server Operationalization when it was created. Here we run a single web node and single compute node on this VM in a *one-box* configuration.
ML Server provides the azureml.deploy Python package to deploy new REST API endpoints and call them.
```
from IPython.display import Image as ShowImage
ShowImage(url="https://docs.microsoft.com/en-us/machine-learning-server/operationalize/media/configure-machine-learning-server-one-box/setup-onebox.png", width=800, height=800)
```
More details are available in [the ML Server documentation](https://docs.microsoft.com/en-us/machine-learning-server/operationalize/configure-machine-learning-server-one-box-9-2).
```
from azureml.deploy import DeployClient
from azureml.deploy.server import MLServer
HOST = 'http://localhost:12800'
context = ('admin', ml_server_password)
client = DeployClient(HOST, use=MLServer, auth=context)
```
Retrieve the truck image for testing our deployed service.
```
from PIL import Image
import pandas as pd
import numpy as np
from matplotlib.pyplot import imshow
from IPython.display import Image as ImageShow
try:
from urllib.request import urlopen
except ImportError:
from urllib import urlopen
url = "https://cntk.ai/jup/201/00014.png"
myimg = np.array(Image.open(urlopen(url)), dtype=np.float32)
flattened = myimg.ravel()
ImageShow(url=url, width=64, height=64)
```
## Deploy the model
We need two functions to deploy a model in ML Server. The *init* function handles service initialization. The *eval* function evaluates a single input value and returns the result. *eval* will be called by the server when we call the REST API.
Our *eval* function accepts a single input: a 1D numpy array with the image to evaluate. It needs to (1) reshape the input data from a 1D array to a 3D image, (2) subtract the image mean, to mimic the inputs to the model during training, (3) evaluate the model on the image, and (4) return the results as a pandas DataFrame. Alternatively we could return just the top result or the top three results.
```
import cntk
with open('model.cntk', mode='rb') as file: # b is important -> binary
binary_model = file.read()
# --Define an `init` function to handle service initialization --
def init():
import cntk
global loaded_model
loaded_model = cntk.ops.functions.load_model(binary_model)
# define an eval function to handle scoring
def eval(image_data):
import numpy as np
import cntk
from pandas import DataFrame
image_data = image_data.copy().reshape((32, 32, 3))
image_mean = 133.0
image_data -= image_mean
image_data = np.ascontiguousarray(np.transpose(image_data, (2, 0, 1)))
results = loaded_model.eval({loaded_model.arguments[0]:[image_data]})
return DataFrame(results)
# create the API
service = client.service(service_name)\
.version('1.0')\
.code_fn(eval, init)\
.inputs(image_data=np.array)\
.outputs(results=pd.DataFrame)\
.models(binary_model=binary_model)\
.description('My CNTK model')\
.deploy()
print(help(service))
service.capabilities()
```
Now call our newly created API with our truck image.
```
res = service.eval(flattened)
# -- Pluck out the named output `results` as defined during publishing and print --
print(res.output('results'))
# get the top 3 predictions
result = res.output('results')
result = result.to_numpy()[0]
top_count = 3
result_indices = (-np.array(result)).argsort()[:top_count]
label_lookup = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
print("Top 3 predictions:")
for i in range(top_count):
print("\tLabel: {:10s}, confidence: {:.2f}%".format(label_lookup[result_indices[i]], result[result_indices[i]] * 100))
# -- Retrieve the URL of the swagger file for this service.
cap = service.capabilities()
swagger_URL = cap['swagger']
print(swagger_URL)
print(service.swagger())
```
| github_jupyter |
# Download and process the Bay Area's walkable network
```
import time
import os, zipfile, requests, pandas as pd, geopandas as gpd, osmnx as ox, networkx as nx
ox.config(use_cache=True, log_console=True)
print('ox {}\nnx {}'.format(ox.__version__, nx.__version__))
start_time = time.time()
# point to the shapefile for counties
counties_shapefile_url = 'http://www2.census.gov/geo/tiger/GENZ2016/shp/cb_2016_us_county_500k.zip'
# identify bay area counties by fips code
bayarea = {'Alameda':'001',
'Contra Costa':'013',
'Marin':'041',
'Napa':'055',
'San Francisco':'075',
'San Mateo':'081',
'Santa Clara':'085',
'Solano':'095',
'Sonoma':'097'}
```
## Download and extract the counties shapefile if it doesn't already exist, then load it
To use OSMnx, we need a polygon of the Bay Area's nine counties. So, we'll download a shapefile from the census, extract our counties, and take the union to form a polygon. Also, project the polygon so we can calculate its area for density stats.
```
counties_shapefile_zip = counties_shapefile_url[counties_shapefile_url.rfind('/') + 1 :]
counties_shapefile_dir = counties_shapefile_zip[: counties_shapefile_zip.rfind('.zip')]
if not os.path.exists(counties_shapefile_dir):
response = requests.get(counties_shapefile_url)
with open(counties_shapefile_zip, 'wb') as f:
f.write(response.content)
with zipfile.ZipFile(counties_shapefile_zip, 'r') as zip_file:
zip_file.extractall(counties_shapefile_dir)
os.remove(counties_shapefile_zip)
counties = gpd.read_file(counties_shapefile_dir)
len(counties)
# retain only those tracts that are in the bay area counties
mask = (counties['STATEFP'] == '06') & (counties['COUNTYFP'].isin(bayarea.values()))
gdf_bay = counties[mask]
len(gdf_bay)
bayarea_polygon = gdf_bay.unary_union
bayarea_polygon
# get the convex hull, otherwise we'll cut out bridges over the bay
bayarea_polygon_hull = bayarea_polygon.convex_hull
bayarea_polygon_hull_proj, crs = ox.project_geometry(bayarea_polygon_hull)
# buffer by a mile to retain network connectivity surrounding our O-Ds
bayarea_polygon_hull_proj_buff = bayarea_polygon_hull_proj.buffer(1600) #1 mile in meters
bayarea_polygon_hull_buff, crs = ox.project_geometry(bayarea_polygon_hull_proj_buff, crs=crs, to_latlong=True)
```
## Download the street network
Now we've got our polygon of the buffered convex hull around the nine county bay area. Use OSMnx to download the street network (walkable paths) and simplify its topology. Setting `retain_all=False` means we only keep the largest (weakly) connected component. In the bay area, this means we keep only the southern 3/4 of the network. The northern part is not connected.
```
# download full walkable network and simplify its topology
G = ox.graph_from_polygon(bayarea_polygon_hull_buff, network_type='walk',
simplify=True, retain_all=False)
# verify that our graph is strongly connected, should be if it's a walkable network and we
# already retained only the largest weakly connected component
assert nx.is_strongly_connected(G)
# create a unique ID for each edge because osmid can hold multiple values due to topology simplification
i = 0
for u, v, k, d in G.edges(data=True, keys=True):
d['uniqueid'] = i
i += 1
```
## See some descriptive stats then save to disk
```
print(len(G.nodes()))
print(len(G.edges()))
# see some basic network stats
# note, areas/densities include water
pd.Series(ox.basic_stats(G, area=bayarea_polygon_hull_proj_buff.area))
# save nodes and edges list as csv
nodes, edges = ox.graph_to_gdfs(G, node_geometry=False, fill_edge_geometry=False)
ecols = ['uniqueid', 'u', 'v', 'key', 'oneway', 'highway', 'name', 'length',
'lanes', 'width', 'est_width', 'maxspeed', 'access', 'service',
'bridge', 'tunnel', 'area', 'junction', 'osmid', 'ref']
edges = edges.drop(columns=['geometry']).reindex(columns=ecols)
nodes = nodes.reindex(columns=['osmid', 'x', 'y', 'ref', 'highway'])
nodes.to_csv('data/walk_network_connected/bay_area_walk_nodes.csv', index=False, encoding='utf-8')
edges.to_csv('data/walk_network_connected/bay_area_walk_edges.csv', index=False, encoding='utf-8')
# save as graphml for re-using later
ox.save_graphml(G, filename='bayarea_walk_simplified.graphml', folder='data/walk_network_connected')
# save as shapefile for GIS
ox.save_graph_shapefile(G, filename='bayarea_walk_simplified', folder='data/walk_network_connected')
# visualize the network
fig, ax = ox.plot_graph(G, node_size=0, edge_linewidth=0.2)
time.time() - start_time
```
| github_jupyter |
### Developing Command-Line Tools with Python
As a scripting language, Python is very well suited to developing command-line tools for operating systems (especially \*nix systems). Python itself also bundles several standard libraries dedicated to handling command-line-related problems.
#### General structure of a command-line tool

**1. Standard input and output**
On \*nix systems everything is a file, so standard input and output can be treated entirely as file operations. Standard input can be passed in through a pipe or a redirect:
```
# script reverse.py
#!/usr/bin/env python
import sys
for l in sys.stdin.readlines():
sys.stdout.write(l[::-1])
```
Save the script as `reverse.py` and pass input through a pipe `|`:
```sh
chmod +x reverse.py
cat reverse.py | ./reverse.py
nohtyp vne/nib/rsu/!#
sys tropmi
:)(senildaer.nidts.sys ni l rof
)]1-::[l(etirw.tuodts.sys
```
Pass input through a redirect `<`:
```sh
./reverse.py < reverse.py
# same output as above
```
**2. Command-line arguments**
Arguments appended on the command line can be retrieved through `sys.argv`, a list whose first element is the file name of the current script:
```
# script argv.py
#!/usr/bin/env python
import sys
print(sys.argv)  # the output below is the result of running under Jupyter
```
Run the script above:
```sh
chmod +x argv.py
./argv.py hello world
python argv.py hello world
# both invocations return the same result
# ['./argv.py', 'hello', 'world']
```
For more complex command-line arguments, such as option arguments passed via `--option`, parsing `sys.argv` item by item quickly becomes tedious. Python provides the standard library [`argparse`](https://docs.python.org/3/library/argparse.html) (the older `optparse` library is no longer maintained) specifically for parsing command-line arguments:
```
# script convert.py
#!/usr/bin/env python
import argparse as apa
def loadConfig(config):
print("Load config from: {}".format(config))
def setTheme(theme):
print("Set theme: {}".format(theme))
def main():
    parser = apa.ArgumentParser(prog="convert")  # set the program name used in help messages
parser.add_argument("-c", "--config", required=False, default="config.ini")
parser.add_argument("-t", "--theme", required=False, default="default.theme")
parser.add_argument("-f") # Accept Jupyter runtime option
args = parser.parse_args()
loadConfig(args.config)
setTheme(args.theme)
if __name__ == "__main__":
main()
```
With `argparse` it is easy to parse option arguments, to define the relevant properties of each argument (whether it is required, its default value, etc.), and to generate help text automatically. Run the script above:
```sh
./convert.py -h
usage: convert [-h] [-c CONFIG] [-t THEME]
optional arguments:
-h, --help show this help message and exit
-c CONFIG, --config CONFIG
-t THEME, --theme THEME
```
**3. Executing system commands**
Once Python can accurately interpret the input or the arguments, you can use it to do just about anything. Here we focus on invoking system commands from Python, i.e. replacing `Shell` scripts for system-administration tasks. My old habit was to execute command-line instructions through `os.system(command)`, but the better approach is the [`subprocess`](https://docs.python.org/3.5/library/subprocess.html) standard library, which exists precisely to replace the older `os.system` and `os.spawn*` functions.
The `subprocess` module provides the simple `call()` method for invoking system commands directly, as well as the more elaborate `Popen` object, which lets the user interact with system commands in greater depth.
```
# script list_files.py
#!/usr/bin/env python
import subprocess as sb
res = sb.check_output("ls -lh ./*.ipynb", shell=True)  # commands are not run through the system shell by default (for safety), so shell=True must be set here
print(res.decode())  # the return value is bytes by default, so it needs decoding
```
If simply executing a system command does not meet your needs, you can use `subprocess.Popen` to interact further with the spawned child process:
```
import subprocess as sb
p = sb.Popen(['grep', 'communicate'], stdin=sb.PIPE, stdout=sb.PIPE)
res, err = p.communicate(sb.check_output('cat ./*', shell=True))
if not err:
print(res.decode())
```
| github_jupyter |
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Name" data-toc-modified-id="Name-1"><span class="toc-item-num">1 </span>Name</a></span></li><li><span><a href="#Search" data-toc-modified-id="Search-2"><span class="toc-item-num">2 </span>Search</a></span><ul class="toc-item"><li><span><a href="#Load-Cached-Results" data-toc-modified-id="Load-Cached-Results-2.1"><span class="toc-item-num">2.1 </span>Load Cached Results</a></span></li><li><span><a href="#Run-From-Scratch" data-toc-modified-id="Run-From-Scratch-2.2"><span class="toc-item-num">2.2 </span>Run From Scratch</a></span></li></ul></li><li><span><a href="#Analysis" data-toc-modified-id="Analysis-3"><span class="toc-item-num">3 </span>Analysis</a></span><ul class="toc-item"><li><span><a href="#Gender-Breakdown" data-toc-modified-id="Gender-Breakdown-3.1"><span class="toc-item-num">3.1 </span>Gender Breakdown</a></span></li><li><span><a href="#Face-Sizes" data-toc-modified-id="Face-Sizes-3.2"><span class="toc-item-num">3.2 </span>Face Sizes</a></span></li><li><span><a href="#Appearances-on-a-Single-Show" data-toc-modified-id="Appearances-on-a-Single-Show-3.3"><span class="toc-item-num">3.3 </span>Appearances on a Single Show</a></span></li><li><span><a href="#Screen-Time-Across-All-Shows" data-toc-modified-id="Screen-Time-Across-All-Shows-3.4"><span class="toc-item-num">3.4 </span>Screen Time Across All Shows</a></span></li></ul></li><li><span><a href="#Persist-to-Cloud" data-toc-modified-id="Persist-to-Cloud-4"><span class="toc-item-num">4 </span>Persist to Cloud</a></span><ul class="toc-item"><li><span><a href="#Save-Model-to-GCS" data-toc-modified-id="Save-Model-to-GCS-4.1"><span class="toc-item-num">4.1 </span>Save Model to GCS</a></span><ul class="toc-item"><li><span><a href="#Make-sure-the-GCS-file-is-valid" data-toc-modified-id="Make-sure-the-GCS-file-is-valid-4.1.1"><span class="toc-item-num">4.1.1 </span>Make sure the GCS file is 
valid</a></span></li></ul></li><li><span><a href="#Save-Labels-to-DB" data-toc-modified-id="Save-Labels-to-DB-4.2"><span class="toc-item-num">4.2 </span>Save Labels to DB</a></span><ul class="toc-item"><li><span><a href="#Commit-the-person-and-labeler" data-toc-modified-id="Commit-the-person-and-labeler-4.2.1"><span class="toc-item-num">4.2.1 </span>Commit the person and labeler</a></span></li><li><span><a href="#Commit-the-FaceIdentity-labels" data-toc-modified-id="Commit-the-FaceIdentity-labels-4.2.2"><span class="toc-item-num">4.2.2 </span>Commit the FaceIdentity labels</a></span></li></ul></li></ul></li></ul></div>
```
from esper.prelude import *
from esper.identity import *
from esper import embed_google_images
```
# Name
```
name = 'Chris Matthews'
```
# Search
## Load Cached Results
```
assert name != ''
results = FaceIdentityModel.load(name=name)
imshow(np.hstack([cv2.resize(x[1][0], (200, 200)) for x in results.model_params['images']]))
plt.show()
plot_precision_and_cdf(results)
```
## Run From Scratch
Run this section if you do not have a cached model and precision curve estimates.
```
assert name != ''
img_dir = embed_google_images.fetch_images(name)
face_imgs = load_and_select_faces_from_images(img_dir)
face_embs = embed_google_images.embed_images(face_imgs)
assert(len(face_embs) == len(face_imgs))
imshow(np.hstack([cv2.resize(x[0], (200, 200)) for x in face_imgs if x]))
plt.show()
face_ids_by_bucket, face_ids_to_score = face_search_by_embeddings(face_embs)
precision_model = PrecisionModel(face_ids_by_bucket)
print('Select all MISTAKES. Ordered by DESCENDING score. Expecting {} frames'.format(precision_model.get_lower_count()))
lower_widget = precision_model.get_lower_widget()
lower_widget
print('Select all NON-MISTAKES. Ordered by ASCENDING distance. Expecting {} frames'.format(precision_model.get_upper_count()))
upper_widget = precision_model.get_upper_widget()
upper_widget
```
Run the following cell after labelling.
```
lower_precision = precision_model.compute_precision_for_lower_buckets(lower_widget.selected)
upper_precision = precision_model.compute_precision_for_upper_buckets(upper_widget.selected)
precision_by_bucket = {**lower_precision, **upper_precision}
results = FaceIdentityModel(
    name=name,
    face_ids_by_bucket=face_ids_by_bucket,
    face_ids_to_score=face_ids_to_score,
    precision_by_bucket=precision_by_bucket,
    model_params={
        'images': list(zip(face_embs, face_imgs))
    }
)
plot_precision_and_cdf(results)
# Save the model
results.save()
```
# Analysis
## Gender Breakdown
```
gender_breakdown = compute_gender_breakdown(results)
print('Raw counts:')
for k, v in gender_breakdown.items():
    print(' ', k, ':', v)
print()
print('Proportions:')
denominator = sum(v for v in gender_breakdown.values())
for k, v in gender_breakdown.items():
    print(' ', k, ':', v / denominator)
print()
print('Showing examples:')
show_gender_examples(results)
```
## Face Sizes
```
plot_histogram_of_face_sizes(results)
```
## Appearances on a Single Show
```
show_name = 'Hardball'
screen_time_by_video_id = compute_screen_time_by_video(results, show_name)
plot_histogram_of_screen_times_by_video(name, show_name, screen_time_by_video_id)
plot_screentime_over_time(name, show_name, screen_time_by_video_id)
plot_distribution_of_appearance_times_by_video(results, show_name)
```
## Screen Time Across All Shows
```
screen_time_by_show = get_screen_time_by_show(results)
plot_screen_time_by_show(name, screen_time_by_show)
```
# Persist to Cloud
## Save Model to GCS
```
gcs_model_path = results.save_to_gcs()
```
### Make sure the GCS file is valid
```
gcs_results = FaceIdentityModel.load_from_gcs(name=name)
plot_precision_and_cdf(gcs_results)
```
## Save Labels to DB
```
from django.core.exceptions import ObjectDoesNotExist
def standardize_name(name):
    return name.lower()
person_type = ThingType.objects.get(name='person')
try:
    person = Thing.objects.get(name=standardize_name(name), type=person_type)
    print('Found person:', person.name)
except ObjectDoesNotExist:
    person = Thing(name=standardize_name(name), type=person_type)
    print('Creating person:', person.name)
labeler = Labeler(name='face-identity-{}'.format(person.name), data_path=gcs_model_path)
```
### Commit the person and labeler
```
person.save()
labeler.save()
```
### Commit the FaceIdentity labels
```
commit_face_identities_to_db(results, person, labeler)
print('Committed {} labels to the db'.format(FaceIdentity.objects.filter(labeler=labeler).count()))
```
| github_jupyter |
# An introduction to geocoding
Geocoders are tools that take an address or a place of interest and return the coordinates of that place.
The **`arcgis.geocoding`** module provides types and functions for geocoding, batch geocoding and reverse geocoding.
```
from arcgis.gis import GIS
from arcgis import geocoding
from getpass import getpass
password = getpass()
gis = GIS("http://www.arcgis.com", "arcgis_python", password)
```
## Geocoding addresses
All geocoding operations are handled by the `geocode()` function. It can geocode:
1. single line address
2. multi field address
3. points of interest
4. administrative place names
5. postal codes
```
results = geocoding.geocode('San Diego, CA')
results
len(results)
results[0]
map1 = gis.map('San Diego, CA')
map1
map1.draw(results[0]['location'])
```
### Geocode single line addresses
```
r2 = geocoding.geocode('San Diego Convention center')
len(r2)
[str(r['score']) +" : "+ r['attributes']['Match_addr'] for r in r2]
```
### Geocode multi field address
```
multi_field_address = {
    "Address": "111 Harbor Dr",
    "City": "San Diego",
    "Region": "CA",
    "Subregion": "San Diego",
    "Postal": 92103,
    "Country": "USA"
}
r_multi_fields = geocoding.geocode(multi_field_address)
len(r_multi_fields)
```
### Searching within an extent
Find grocery stores within walking distance of the San Diego Convention Center.
```
# first create an extent.
conv_center = geocoding.geocode('San Diego Convention Center, California')[0]
map3 = gis.map('San Diego Convention Center', zoomlevel=17)
map3
restaurants = geocoding.geocode('grocery', search_extent=conv_center['extent'],
                                max_locations=15)
len(restaurants)
map3.clear_graphics()
for shop in restaurants:
    popup = {
        "title": shop['attributes']['PlaceName'],
        "content": shop['attributes']['Place_addr']
    }
    map3.draw(shop['location'], popup)
```
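The `search_extent` above is just a bounding box around the geocoded location. As a rough sketch of how such an extent relates to walking distance (assuming a spherical Earth; `walking_extent` is a hypothetical helper, not part of `arcgis`):

```python
import math

def walking_extent(lon, lat, radius_m):
    """Approximate a bounding box reaching radius_m metres in every
    direction from a point (spherical-Earth approximation)."""
    deg_lat = radius_m / 111_320.0                                  # metres per degree of latitude
    deg_lon = radius_m / (111_320.0 * math.cos(math.radians(lat)))  # shrinks with latitude
    return {"xmin": lon - deg_lon, "ymin": lat - deg_lat,
            "xmax": lon + deg_lon, "ymax": lat + deg_lat}

# ~500 m walking radius around the San Diego Convention Center
extent = walking_extent(-117.1611, 32.7064, 500)
```

A dict of this shape (plus a `spatialReference`) is roughly what the geocoder expects for `search_extent`.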
## Batch geocoding
```
addresses = ["380 New York St, Redlands, CA",
             "1 World Way, Los Angeles, CA",
             "1200 Getty Center Drive, Los Angeles, CA",
             "5905 Wilshire Boulevard, Los Angeles, CA",
             "100 Universal City Plaza, Universal City, CA 91608",
             "4800 Oak Grove Dr, Pasadena, CA 91109"]
results = geocoding.batch_geocode(addresses)
len(results)
map5 = gis.map('Los Angeles, CA')
map5
for address in results:
    map5.draw(address['location'])
```
## Reverse geocoding
```
reverse_geocode_results = geocoding.reverse_geocode([2.2945, 48.8583])
reverse_geocode_results
map6 = gis.map('San Diego convention center, San Diego, CA', 16)
map6
def find_addr(map6, g):
    try:
        map6.draw(g)
        geocoded = geocoding.reverse_geocode(g)
        print(geocoded['address']['Match_addr'])
    except Exception:
        print("Couldn't match address. Try another place...")
map6.on_click(find_addr)
```
This is a simple NLP project that predicts the sentiment of movie reviews from the IMDB dataset.
```
import numpy as np
import pandas as pd
import os
import glob
import csv
import random
```
Gathering the datasets and combining them into a single CSV file
```
# All the reviews and sentiments are stored as individual txt files,
# so this function combines them into an array
def txt_tocsv(path_of_files, arr, dir_list):
    for filename in dir_list:
        fpath = os.path.join(path_of_files, filename)
        with open(fpath, 'r', encoding="utf8") as file:
            arr.append(file.read())

# Converting the combined reviews to a DataFrame using pandas
def arr_todf(dic, arr, data_f, rating):
    dic = {"review": arr,
           "rating": [rating] * len(arr)}
    data_f = pd.DataFrame(dic)
    print(data_f.head())
    return data_f
```
Converting the Positive training dataset to a csv file
```
pos_arr = []
path = r"datasets\train_pos"  # directory of positive training-review txt files (os.listdir needs a directory, not a csv)
dir_list_pos = os.listdir(path)
txt_tocsv(path, pos_arr, dir_list_pos)
pos_dic = {}
pos_df = pd.DataFrame()
pos_df = arr_todf(pos_dic, pos_arr, pos_df, 1)
pos_df.to_csv("train_pos.csv", index=False)
```
Converting the negative training datasets to csv files
```
neg_arr = []
path = r"datasets\train_neg"  # directory of negative training-review txt files
dir_list_neg = os.listdir(path)
txt_tocsv(path, neg_arr, dir_list_neg)
neg_dic = {}
neg_df = pd.DataFrame()
neg_df = arr_todf(neg_dic, neg_arr, neg_df, 0)
neg_df.to_csv("train_neg.csv", index=False)
```
Merging both the training dataset
```
pos_rev = pd.read_csv("train_pos.csv")
neg_rev = pd.read_csv("train_neg.csv")
training_data = pd.concat([pos_rev, neg_rev])
training_df = pd.DataFrame(training_data)
training_df = training_df.sample(frac=1)
training_df.head(10)
training_df.to_csv("training_data.csv", index=False)
```
Test data
Converting all the Positive Test Data to csv file
```
path = r'datasets\test_pos'  # directory of positive test-review txt files
dir_list_pos_test = os.listdir(path)
pos_arr_test = []
txt_tocsv(path, pos_arr_test, dir_list_pos_test)
pos_dic_test = {}
pos_df_test = pd.DataFrame()
pos_df_test = arr_todf(pos_dic_test, pos_arr_test, pos_df_test, 1)
pos_df_test.to_csv("test_pos.csv", index=False)
```
Converting all the negative test data to csv file
```
path = r"datasets\test_neg"  # directory of negative test-review txt files
dir_list_neg_test = os.listdir(path)
neg_arr_test = []
txt_tocsv(path, neg_arr_test, dir_list_neg_test)
neg_dic_test = {}
neg_df_test = pd.DataFrame()
neg_df_test = arr_todf(neg_dic_test, neg_arr_test, neg_df_test, 0)
neg_df_test.to_csv("test_neg.csv", index=False)
pos_rev_test = pd.read_csv("test_pos.csv")
neg_rev_test = pd.read_csv("test_neg.csv")
test_data = pd.concat([pos_rev_test, neg_rev_test])
testing_df = pd.DataFrame(test_data)
testing_df = testing_df.sample(frac=1)
testing_df.head(10)
testing_df.to_csv("testing_data.csv", index=False)
```
Merging both the training and testing dataset to get the full dataset
```
train_csv = pd.read_csv("training_data.csv")
test_csv = pd.read_csv("testing_data.csv")
dataset = pd.concat([train_csv, test_csv])
dataset_df = pd.DataFrame(dataset)
dataset_df = dataset_df.sample(frac=1)
dataset_df.head()
dataset_df.to_csv("imdb_dataset.csv", index=False)
import seaborn as sns
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import LabelBinarizer
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from wordcloud import WordCloud, STOPWORDS
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize, sent_tokenize
from bs4 import BeautifulSoup
import spacy
import re
import string
import unicodedata
from nltk.tokenize.toktok import ToktokTokenizer
from nltk.stem import LancasterStemmer, WordNetLemmatizer
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from textblob import TextBlob
from textblob import Word
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
import warnings
```
This part deals with data cleaning and model training
```
# importing the csv file to a dataframe
df_imdb_train = pd.read_csv('imdb_dataset.csv')
df_imdb_train = df_imdb_train.sample(frac=1)
# df_imdb_train.head(20)
df_imdb_train.shape
df_imdb_train.describe()
df_imdb_train['rating'].value_counts()
# tokenization
tokenizer = ToktokTokenizer()
stopword_list = nltk.corpus.stopwords.words('english')
```
Data cleaning
```
# removing the html markup
def html_rem(text):
    soup = BeautifulSoup(text, "html.parser")
    return soup.get_text()

# removing text in square brackets
def brac_rem(text):
    return re.sub(r'\[[^]]*\]', '', text)

def denoise_txt(text):
    '''removes noisy markup from the text'''
    text = html_rem(text)
    text = brac_rem(text)
    return text
df_imdb_train['review'] = df_imdb_train['review'].apply(denoise_txt)
def rem_spe_char(text, rem_dig=True):
    '''removing special characters'''
    pattern = r'[^a-zA-Z0-9\s]'
    text = re.sub(pattern, '', text)
    return text
df_imdb_train['review'] = df_imdb_train['review'].apply(rem_spe_char)
def simple_stemmer(text):
    '''eliminates the affixes from words in order to retrieve the base form'''
    ps = nltk.porter.PorterStemmer()
    text = ' '.join([ps.stem(word) for word in text.split()])
    return text
df_imdb_train['review'] = df_imdb_train['review'].apply(simple_stemmer)
stop_words = set(stopwords.words('english'))
print(stop_words)
def rem_stop_words(text, is_lower_case=False):
    '''removes all the words that have little or no meaning'''
    tokens = tokenizer.tokenize(text)
    tokens = [token.strip() for token in tokens]
    if is_lower_case:
        filter_tokens = [
            token for token in tokens if token not in stopword_list]
    else:
        filter_tokens = [
            token for token in tokens if token.lower() not in stopword_list]
    filtered_text = ' '.join(filter_tokens)
    return filtered_text
df_imdb_train['review'] = df_imdb_train['review'].apply(rem_stop_words)
norm_train_rev = df_imdb_train.review[:40000]
norm_train_rev[0]
norm_test_rev = df_imdb_train.review[40000:]
# norm_test_rev.count()
```
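To see what the cleaning steps above do, here is a small self-contained illustration. It uses a regex tag stripper as a stand-in for BeautifulSoup, so it is only an approximation of `html_rem`; the other two patterns are the same ones used above:

```python
import re

sample = "This movie <br /><b>rocks</b> [spoiler removed] -- 10/10!"

no_html = re.sub(r'<[^>]+>', '', sample)                 # crude stand-in for html_rem
no_brackets = re.sub(r'\[[^]]*\]', '', no_html)          # same pattern as brac_rem
no_special = re.sub(r'[^a-zA-Z0-9\s]', '', no_brackets)  # same pattern as rem_spe_char

print(no_special.split())
```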
Bag-of-words model
```
# use float bounds: max_df=1 (an int) would keep only terms appearing in a single document
cv = CountVectorizer(min_df=0.0, max_df=1.0, binary=False, ngram_range=(1, 3))
cv_train_rev = cv.fit_transform(norm_train_rev)
cv_test_rev = cv.transform(norm_test_rev)
print("Bag of words for training dataset:", cv_train_rev.shape)
print("Bag of words of test dataset:", cv_test_rev.shape)
```
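Conceptually, the bag-of-words representation is just n-gram counting. A minimal stdlib illustration of what `ngram_range=(1, 3)` counts (scikit-learn additionally applies its own tokenizer and builds a sparse matrix):

```python
from collections import Counter

def ngram_counts(text, n_range=(1, 3)):
    """Count all n-grams for n in the inclusive range, like ngram_range=(1, 3)."""
    tokens = text.lower().split()
    counts = Counter()
    for n in range(n_range[0], n_range[1] + 1):
        for i in range(len(tokens) - n + 1):
            counts[' '.join(tokens[i:i + n])] += 1
    return counts

bow = ngram_counts("great movie great acting")
print(bow['great'])        # the unigram 'great' appears twice
print(bow['great movie'])  # the bigram appears once
```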
TFIDF model
```
# float bounds again: max_df=1 as an int would discard almost every term
tf = TfidfVectorizer(min_df=0.0, max_df=1.0, use_idf=True, ngram_range=(1, 3))
tf_train_rev = tf.fit_transform(norm_train_rev)
tf_test_rev = tf.transform(norm_test_rev)
print("TFIDF of training dataset:", tf_train_rev.shape)
print("TFIDF test data:", tf_test_rev.shape)
lb = LabelBinarizer()
rating_data = lb.fit_transform(df_imdb_train['rating'])
train_rating = rating_data[:40000]
test_rating = rating_data[40000:]
print(train_rating)
print(test_rating)
```
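TF-IDF reweights those counts by an inverse document frequency that down-weights terms occurring in many documents. A simplified sketch of the idea (scikit-learn's exact formula additionally smooths the idf and normalizes each row):

```python
import math

docs = [["great", "movie"], ["terrible", "movie"], ["great", "acting"]]

def tfidf(term, doc, docs):
    tf = doc.count(term) / len(doc)           # term frequency within one document
    df = sum(term in d for d in docs)         # number of documents containing the term
    idf = math.log(len(docs) / df)            # plain idf, no smoothing
    return tf * idf

# 'movie' occurs in 2 of 3 docs, so it is down-weighted relative to 'terrible'
print(tfidf("movie", docs[1], docs))
print(tfidf("terrible", docs[1], docs))
```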
Modelling the data(multinomial naive bayes)
```
mnb = MultinomialNB()
# bag of words
mnb_bow = mnb.fit(cv_train_rev, np.ravel(train_rating))
print(mnb_bow)
# tfidf: fit a separate estimator so it does not overwrite the bag-of-words model
mnb_tfidf = MultinomialNB().fit(tf_train_rev, np.ravel(train_rating))
print(mnb_tfidf)
mb_bow_predict = mnb_bow.predict(cv_test_rev)
print(mb_bow_predict)
mb_tfidf_predict = mnb_tfidf.predict(tf_test_rev)
print(mb_tfidf_predict)
mb_bow_acc = accuracy_score(test_rating, mb_bow_predict)
print(mb_bow_acc)
mb_tfidf_acc = accuracy_score(test_rating, mb_tfidf_predict)
print(mb_tfidf_acc)
# target_names follow sorted label order: 0 = Negative, 1 = Positive
mb_bow_report = classification_report(
    test_rating, mb_bow_predict, target_names=['Negative', 'Positive'])
print(mb_bow_report)
mb_tfidf_report = classification_report(
    test_rating, mb_tfidf_predict, target_names=['Negative', 'Positive'])
print(mb_tfidf_report)
cm_bow = confusion_matrix(test_rating, mb_bow_predict, labels=[1, 0])
print(cm_bow)
cm = pd.DataFrame(cm_bow)
sns.heatmap(cm, annot=True, fmt="d")
cm_tfidf = confusion_matrix(test_rating, mb_tfidf_predict, labels=[1, 0])
print(cm_tfidf)
cm_t = pd.DataFrame(cm_tfidf)
sns.heatmap(cm_t, annot=True, fmt="d")
predict = mnb.predict(cv.transform(["When you make a film with a killer-kids premise, there are two effective ways to approach it; you can either make it as realistic as possible, creating believable characters and situations, or you can make it as fun as possible by playing it for laughs (something which the makers of Silent Night, Deadly Night did, for example, on an equally controversial subject: a killer Santa). The people who made Bloody Birthday, however, do neither of those things; they simply rely on the shock value of the image of a kid with a gun (or a knife, or a noose, or an arrow) in his/her hand. The result is both offensive and stupid. The whole film looks like a bad idea that was rushed through production (and then kept from release for several years). It's redeemed a tiny bit by good performances from the kids, but it's VERY sloppily made. (*1/2)"]))
if predict == 1:
    print("The sentiment is positive")
else:
    print("The sentiment is negative")
predict = mnb.predict(cv.transform(["I went and saw this movie last night after being coaxed to by a few friends of mine. I'll admit that I was reluctant to see it because from what I knew of Ashton Kutcher he was only able to do comedy. I was wrong. Kutcher played the character of Jake Fischer very well, and Kevin Costner played Ben Randall with such professionalism. The sign of a good movie is that it can toy with our emotions. This one did exactly that. The entire theater(which was sold out) was overcome by laughter during the first half of the movie, and were moved to tears during the second half. While exiting the theater I not only saw many women in tears, but many full grown men as well, trying desperately not to let anyone see them crying. This movie was great, and I suggest that you go see it before you judge."]))
if predict == 1:
    print("The sentiment is positive")
else:
    print("The sentiment is negative")
```
<p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
Regression with scikit-learn using the Soccer Dataset
<br></p>
We will again be using the open dataset from the popular site <a href="https://www.kaggle.com">Kaggle</a> that we used in Week 1 for our example.
Recall that this <a href="https://www.kaggle.com/hugomathien/soccer">European Soccer Database</a> has more than 25,000 matches and more than 10,000 players for European professional soccer seasons from 2008 to 2016.
**Note:** Please download the file *database.sqlite* if you don't yet have it in your *Week-7-MachineLearning* folder.
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Import Libraries<br><br></p>
```
import sqlite3
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from math import sqrt
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Read Data from the Database into pandas
<br><br></p>
```
# Create your connection.
cnx = sqlite3.connect('../../Week 1/database.sqlite')
df = pd.read_sql_query("SELECT * FROM Player_Attributes", cnx)
df.head()
df.shape
df.columns
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Declare the Columns You Want to Use as Features
<br><br></p>
```
features = [
    'potential', 'crossing', 'finishing', 'heading_accuracy',
    'short_passing', 'volleys', 'dribbling', 'curve', 'free_kick_accuracy',
    'long_passing', 'ball_control', 'acceleration', 'sprint_speed',
    'agility', 'reactions', 'balance', 'shot_power', 'jumping', 'stamina',
    'strength', 'long_shots', 'aggression', 'interceptions', 'positioning',
    'vision', 'penalties', 'marking', 'standing_tackle', 'sliding_tackle',
    'gk_diving', 'gk_handling', 'gk_kicking', 'gk_positioning',
    'gk_reflexes']
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Specify the Prediction Target
<br><br></p>
```
target = ['overall_rating']
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Clean the Data<br><br></p>
```
df = df.dropna()
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Extract Features and Target ('overall_rating') Values into Separate Dataframes
<br><br></p>
```
X = df[features]
y = df[target]
```
Let us look at a typical row from our features:
```
X.iloc[2]
```
Let us also display our target values:
```
y
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Split the Dataset into Training and Test Datasets
<br><br></p>
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=324)
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
(1) Linear Regression: Fit a model to the training set
<br><br></p>
```
regressor = LinearRegression()
regressor.fit(X_train, y_train)
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Perform Prediction using Linear Regression Model
<br><br></p>
```
y_prediction = regressor.predict(X_test)
y_prediction
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
What is the mean of the expected target value in the test set?
<br><br></p>
```
y_test.describe()
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Evaluate Linear Regression Accuracy using Root Mean Square Error
<br><br></p>
```
RMSE = sqrt(mean_squared_error(y_true=y_test, y_pred=y_prediction))
print(RMSE)
```
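RMSE is simply the square root of the mean squared residual. A quick sanity check by hand, on made-up toy ratings rather than the soccer data:

```python
from math import sqrt

y_true = [70, 65, 80]
y_pred = [68, 66, 77]

# mean squared error, then its square root
mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
rmse = sqrt(mse)
print(rmse)  # sqrt((4 + 1 + 9) / 3)
```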
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
(2) Decision Tree Regressor: Fit a new regression model to the training set
<br><br></p>
```
regressor = DecisionTreeRegressor(max_depth=20)
regressor.fit(X_train, y_train)
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Perform Prediction using Decision Tree Regressor
<br><br></p>
```
y_prediction = regressor.predict(X_test)
y_prediction
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
For comparison: what is the mean of the expected target value in the test set?
<br><br></p>
```
y_test.describe()
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
Evaluate Decision Tree Regression Accuracy using Root Mean Square Error
<br><br></p>
```
RMSE = sqrt(mean_squared_error(y_true=y_test, y_pred=y_prediction))
print(RMSE)
```
# Explainable fraud detection model
In this example we develop a small fraud detection model for credit card transactions based on XGBoost, export it to TorchScript using Hummingbird (https://github.com/microsoft/hummingbird), and run Shapley Value Sampling explanations (see https://captum.ai/api/shapley_value_sampling.html for reference) on it via TorchScript.
We load both the original model and the explainability script in RedisAI and trigger them in a DAG.
## Data
For this example we use a dataset of transactions made by credit cards in September 2013 by European cardholders.
The dataset presents transactions that occurred in two days, with 492 frauds out of 284,807 transactions.
The dataset is available at https://www.kaggle.com/mlg-ulb/creditcardfraud. For anonymity purposes, the features are 28 PCA features (V1 to V28), along with transaction Time and Amount.
__In order to run this notebook please download the `creditcard.csv` file from Kaggle and place it in the `data/` directory.__
Once the file is in place, we start by importing Pandas and reading the data. We create a dataframe of covariates and a dataframe of targets.
```
import pandas as pd
import numpy as np
df = pd.read_csv('data/creditcard.csv')
X = df.drop(['Class'], axis=1)
Y = df['Class']
```
## Model
We start off by randomly splitting train and test datasets.
```
from sklearn.model_selection import train_test_split
seed = 7
test_size = 0.33
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
```
Next we use XGBoost to classify the transactions. Note that we convert the arguments to `fit` to NumPy arrays.
```
from xgboost import XGBClassifier
model = XGBClassifier(use_label_encoder=False)
model.fit(X_train.to_numpy(), y_train.to_numpy())
```
We now obtain predictions on the test dataset and binarize the output probabilities to get a target.
```
y_pred = model.predict(X_test.to_numpy())
predictions = [round(value) for value in y_pred]
```
We evaluate the accuracy of our model on the test set (this is just an example: the dataset is heavily unbalanced so accuracy is not a fair characterization in this case).
```
from sklearn.metrics import accuracy_score, confusion_matrix
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
```
Looking at the confusion matrix gives a clearer representation.
```
confusion_matrix(y_test, predictions)
```
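For a problem this unbalanced, precision and recall on the fraud class say much more than accuracy. With scikit-learn's layout `[[tn, fp], [fn, tp]]` for labels 0/1, they can be read straight off the confusion matrix; the counts below are made up for illustration:

```python
# hypothetical counts, not the actual output of the cell above
tn, fp, fn, tp = 93_800, 15, 40, 130

precision = tp / (tp + fp)   # of predicted frauds, how many were real
recall = tp / (tp + fn)      # of real frauds, how many were caught
accuracy = (tp + tn) / (tn + fp + fn + tp)

print(round(precision, 3), round(recall, 3), round(accuracy, 5))
```

Note how accuracy stays above 99.9% even while a third of the frauds slip through.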
We are particularly interested in the cases of fraud, so we extract them from the test set.
```
X_test_fraud = X_test[y_test == 1].to_numpy()
```
We verify how many times we are getting it right.
```
model.predict(X_test_fraud) == 1
```
## Exporting to TorchScript with Hummingbird
From the project page (https://github.com/microsoft/hummingbird):
> Hummingbird is a library for compiling trained traditional ML models into tensor computations. Hummingbird allows users to seamlessly leverage neural network frameworks (such as PyTorch) to accelerate traditional ML models.
Hummingbird can take scikit-learn, XGBoost or LightGBM models and export them to PyTorch, TorchScript, ONNX and TVM. This works very well for running ML models on RedisAI, taking advantage of vectorized CPU instructions or GPUs.
We choose to convert the boosted tree to tensor computations using the `gemm` implementation.
```
from hummingbird.ml import convert, load
extra_config = {
    "tree_implementation": "gemm"
}
hummingbird_model = convert(model, 'torchscript', test_input=X_test_fraud, extra_config=extra_config)
```
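Conceptually, the `gemm` strategy evaluates every internal node of a tree at once and then picks the reached leaf with matrix products. A simplified pure-Python sketch of that idea for a single depth-2 regression tree (an illustration of the technique, not Hummingbird's actual code, which does this with batched tensor GEMMs):

```python
# one tree: node0 tests x[0] < 0.5; its children node1/node2 test x[1]
thresholds = [0.5, 0.5, 0.3]   # per internal node
features = [0, 1, 1]           # which feature each node tests
# C[j][l]: +1 if leaf l is in the left subtree of node j, -1 if right, 0 otherwise
C = [[ 1,  1, -1, -1],
     [ 1, -1,  0,  0],
     [ 0,  0,  1, -1]]
k = [2, 1, 1, 0]               # per leaf: number of ancestors it sits left of
leaf_values = [10.0, 20.0, 30.0, 40.0]

def tree_gemm(x):
    # step 1: evaluate all node conditions at once (a matrix product in Hummingbird)
    d = [1 if x[features[j]] < thresholds[j] else 0 for j in range(3)]
    # step 2: exactly one leaf satisfies sum_j C[j][l] * d[j] == k[l]
    for l in range(4):
        if sum(C[j][l] * d[j] for j in range(3)) == k[l]:
            return leaf_values[l]

print(tree_gemm([0.2, 0.7]))  # left at node0, right at node1 -> leaf 1
```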
At this point, `hummingbird_model` is an object containing a TorchScript model that is ready to be exported.
```
import torch
torch.jit.save(hummingbird_model.model, "models/fraud_detection_model.pt")
```
We can verify everything works by loading the model and running a prediction. The model outputs a tuple containing the predicted classes and the output probabilities.
```
loaded_model = torch.jit.load("models/fraud_detection_model.pt")
X_test_fraud_tensor = torch.from_numpy(X_test_fraud)
loaded_output_classes, loaded_output_probs = loaded_model(X_test_fraud_tensor)
```
We can now compare against the original output from the XGBoost model.
```
xgboost_output_classes = torch.from_numpy(model.predict(X_test_fraud))
torch.equal(loaded_output_classes, xgboost_output_classes)
```
## Explainer Script
The script `torch_shapley.py` is a TorchScript script designed specifically to run on RedisAI. It uses the RedisAI extension for TorchScript, which makes it possible to run any model stored in RedisAI from within the script. Let's go over the details:
In RedisAI, each entry point (function in script) should have the signature:
`function_name(tensors: List[Tensor], keys: List[str], args: List[str]):`
In our case the entry point is `shapley_sample(tensors: List[Tensor], keys: List[str], args: List[str])`, and the parameters are:
```
Tensors:
    tensors[0] - x : input tensor to the model
    tensors[1] - baselines : optional reference values which replace each feature
                 when ablated; if no baselines are provided, they are set
                 to all zeros
Keys:
    keys[0] - model_key : Redis key under which the model is stored as a RedisAI model
Args:
    args[0] - n_samples : number of random feature permutations performed
    args[1] - number_of_outputs : number of model outputs
    args[2] - output_tensor_index : index of the tested output tensor
    args[3] - target (optional) : output indices for which Shapley Value Sampling is
              computed; if the model returns a single scalar, target can be
              None
```
The script creates `n_samples` permutations of the input features. For each permutation, it estimates every feature's contribution to the result by repeatedly running the model on a new subset of the input features.
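The sampling procedure itself can be sketched in plain Python (an illustration of Shapley Value Sampling, not the actual contents of `torch_shapley.py`). Under a random feature permutation, a feature's marginal contribution is the change in model output when it is switched from its baseline to its true value, with the features before it in the permutation already switched; averaging over permutations estimates the Shapley value. For the toy linear model below the estimate is exact:

```python
import random

def shapley_sample(model, x, baseline, n_samples=20):
    n = len(x)
    contrib = [0.0] * n
    for _ in range(n_samples):
        perm = random.sample(range(n), n)   # one random feature permutation
        current = list(baseline)
        prev = model(current)
        for i in perm:                      # switch features on one at a time
            current[i] = x[i]
            out = model(current)
            contrib[i] += out - prev        # marginal contribution of feature i
            prev = out
    return [c / n_samples for c in contrib]

# toy linear "model": its Shapley value for feature i is exactly w[i] * (x[i] - baseline[i])
w = [1.0, -2.0, 0.5]
model = lambda v: sum(wi * vi for wi, vi in zip(w, v))
phi = shapley_sample(model, x=[1.0, 1.0, 2.0], baseline=[0.0, 0.0, 0.0])
print(phi)
```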
## Serving model and explainer in RedisAI
At this point we can load the model we exported into RedisAI and serve it from there. We will also load the `torch_shapley.py` script, which calculates Shapley values of a model from within RedisAI. After making sure RedisAI is running, we initialize the client.
```
import redisai
rai = redisai.Client()
```
We read the model and the script.
```
with open("models/fraud_detection_model.pt", "rb") as f:
    fraud_detection_model_blob = f.read()
with open("torch_shapley.py", "rb") as f:
    shapley_script = f.read()
```
We load both the model and the script into RedisAI.
```
rai.modelstore("fraud_detection_model", "TORCH", "CPU", fraud_detection_model_blob)
rai.scriptstore("shapley_script", device='CPU', script=shapley_script, entry_points=["shapley_sample"])
```
All set, it's now test time. We reuse the `X_test_fraud` NumPy array we created previously, set it as a RedisAI tensor, run the Shapley script, and read the explanations back as arrays.
```
rai.tensorset("fraud_input", X_test_fraud, dtype="float")
rai.scriptexecute("shapley_script", "shapley_sample", inputs = ["fraud_input"], keys = ["fraud_detection_model"], args = ["20", "2", "0"], outputs=["fraud_explanations"])
rai_expl = rai.tensorget("fraud_explanations")
winning_feature_redisai = np.argmax(rai_expl[0], axis=0)
print("Winning feature: %d" % winning_feature_redisai)
```
Alternatively, we can set up a RedisAI DAG and run everything in a single round trip.
```
dag = rai.dag(routing ="fraud_detection_model")
dag.tensorset("fraud_input", X_test_fraud, dtype="float")
dag.modelexecute("fraud_detection_model", "fraud_input", ["fraud_pred", "fraud_prob"])
dag.scriptexecute("shapley_script", "shapley_sample", inputs = ["fraud_input"], keys = ["fraud_detection_model"], args = ["20", "2", "0"], outputs=["fraud_explanations"])
dag.tensorget("fraud_pred")
dag.tensorget("fraud_explanations")
```
We now set the input and request a DAG execution, which will produce the desired outputs.
```
# rai.tensorset("fraud_input", X_test_fraud, dtype="float")
_, _, _, dag_pred, dag_expl = dag.execute()
dag_pred
```
We can now check that the winning feature matches what we computed earlier for the first sample in the test batch.
```
winning_feature_redisai_dag = np.argmax(dag_expl[0])
print("Winning feature: %d" % winning_feature_redisai_dag)
dag_expl[1]
```
```
import numpy as np
import importlib
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from cp2k_spm_tools import cp2k_grid_orbitals, cp2k_ftsts, qe_utils
lat_param = 4.37 # angstrom
wfn_file = "./examples/polyphenylene_cp2k_scf/PROJ-RESTART.wfn"
xyz_file = "./examples/polyphenylene_cp2k_scf/ppp_12uc-opt.xyz"
cp2k_inp = "./examples/polyphenylene_cp2k_scf/cp2k.inp"
basis_file = "./examples/BASIS_MOLOPT"
# define global energy limits (eV)
emin = -3.5
emax = 3.5
cp2k_grid_orb = cp2k_grid_orbitals.Cp2kGridOrbitals()
cp2k_grid_orb.read_cp2k_input(cp2k_inp)
cp2k_grid_orb.read_xyz(xyz_file)
cp2k_grid_orb.ase_atoms.center()
cp2k_grid_orb.read_basis_functions(basis_file)
cp2k_grid_orb.load_restart_wfn_file(wfn_file, emin=emin, emax=emax)
# define evaluation region (plane)
plane_h = 3.5 # ang
atoms_max_z = np.max(cp2k_grid_orb.ase_atoms.positions[:, 2]) # ang
plane_z = atoms_max_z+plane_h
eval_reg = [None, None, [plane_z, plane_z]]
cp2k_grid_orb.calc_morbs_in_region(0.10,
                                   x_eval_region=eval_reg[0],
                                   y_eval_region=eval_reg[1],
                                   z_eval_region=eval_reg[2],
                                   reserve_extrap=0.0,
                                   pbc=(True, True, False),
                                   eval_cutoff=12.0)
```
# QE bands (optional)
```
qe_scf_xml = "./examples/polyphenylene_qe_bands/scf.xml"
qe_bands_xml = "./examples/polyphenylene_qe_bands/bands.xml"
qe_kpts = None
qe_bands = None
if qe_scf_xml is not None and qe_bands_xml is not None:
    qe_kpts, qe_bands, _ = qe_utils.read_band_data(qe_bands_xml)
    qe_fermi_en = qe_utils.read_scf_data(qe_scf_xml)
    qe_gap_middle = qe_utils.gap_middle(qe_bands[0], qe_fermi_en)
    qe_bands -= qe_gap_middle
```
# FT-STS
```
de = 0.01
fwhm = 0.1
ftsts = cp2k_ftsts.FTSTS(cp2k_grid_orb)
ftsts.project_orbitals_1d(gauss_pos=0.0, gauss_fwhm=3.0)
borders = ftsts.take_fts(crop_padding=True, crop_edges=1.2, remove_row_avg=True, padding=3.0)
ftsts.make_ftldos(emin, emax, de, fwhm)
imshow_kwargs = {'aspect': 'auto',
                 'origin': 'lower',
                 # 'cmap': 'jet',
                 'cmap': 'gist_ncar',
                 }
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 12), gridspec_kw={'width_ratios': [3, 1]})
# left side: LDOS
ax1.imshow(ftsts.ldos.T, extent=ftsts.ldos_extent, vmax=1.0*np.max(ftsts.ldos), **imshow_kwargs)
ax1.axvline(borders[0], color='r')
ax1.axvline(borders[1], color='r')
ax1.set_ylabel("Energy (eV)")
ax1.set_xlabel("x (Å)")
# right side: FT-LDOS
ftldos, extent = ftsts.get_ftldos_bz(2, lat_param)  # number of Brillouin zones
ax2.imshow(ftldos.T, extent=extent, vmax=1.0*np.max(ftldos), **imshow_kwargs)
# add also QE bands
if qe_bands is not None:
for qe_band in qe_bands[0]:
ax2.plot(2*qe_kpts[:, 0]*2*np.pi/lat_param, qe_band, '-', color='r', linewidth=2.0)
ax2.set_ylim([emin, emax])
ax2.set_xlabel("2k (Å$^{-1}$)")
plt.show()
```
# Plot individual orbitals
```
# select orbitals with respect to the HOMO
index_start = -5
index_end = 6
i_spin = 0
for i_mo_wrt_homo in range(index_end, index_start-1, -1):
i_mo = cp2k_grid_orb.i_homo_glob[i_spin] + i_mo_wrt_homo
global_i = cp2k_grid_orb.cwf.global_morb_indexes[i_spin][i_mo]
print("%d HOMO%+d, E=%.3f eV" % (global_i, i_mo_wrt_homo, cp2k_grid_orb.morb_energies[i_spin][i_mo]))
morb = (cp2k_grid_orb.morb_grids[i_spin][i_mo, :, :, 0]).astype(np.float64)
morb_amax = np.max(np.abs(morb))
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(16, 3))
plt.subplots_adjust(wspace=0, hspace=0)
ax1.imshow(morb.T, vmin=-morb_amax, vmax=morb_amax,origin='lower', cmap='seismic')
ax1.set_xticks([])
ax1.set_yticks([])
ax2.imshow((morb**2).T,origin='lower', cmap='seismic')
ax2.set_xticks([])
ax2.set_yticks([])
plt.show()
```
```
# from utils_torsion_dataset_generator import *
from util_2nd_round_generator import *
%%capture cap1 --no-stderr
# Create force field object
forcefield = ForceField('param_valence.offxml', allow_cosmetic_attributes=True)
# Create dictionaries storing molecules and attributes
molecules_list_dict, molecule_attributes = read_aggregate_molecules("roche_optimization_inputs.json")
# List torsion parameters and effective rotations matched to each parameter from input molecule set
tid_molecules_list = gen_tid_molecules_list(molecule_attributes, molecules_list_dict, forcefield )
# Read pickle file containing data downloaded from qcarchive for reuse
gen2_torsiondrive_data = download_torsiondrive_data('OpenFF Gen 2 Torsion Set 1 Roche', output_pickle='roche_gen2_torsiondrive_data.pickle')
# List up pre-calculated torsions for re-use
gen2_tid_calculated_molecules_list, gen2_molecules_list_dict_from_td = gen_tid_calculated_molecules_list(gen2_torsiondrive_data, forcefield)
# Read pickle file containing data downloaded from qcarchive for reuse
with open('roche_torsiondrive_data.pickle', 'rb') as pfile:
torsiondrive_data = pickle.load(pfile)
tid_calculated_molecules_list, molecules_list_dict_from_td = gen_tid_calculated_molecules_list(torsiondrive_data, forcefield)
%%capture cap2 --no-stderr
# clustering each list of molecules and tid_molecules_list -> tid_clusters_list
# output: `tid_clusters_list[tid] = [..., {'cluster_label': N, 'torsions': [...]}, ...] `
tid_clusters_list = gen_tid_clusters_list_mod(tid_molecules_list, fptype=OEFPType_MACCS166)
# find if any cluster has pre-calculated torsion and add one more information 'reusable' in the dictionary
# if 'reusable' == False, no reusable torsion detected
# tid_clusters_list_detailed[tid] = [ {'cluster_label': N, 'torsions': [...], 'reusable': False or torsion_info}, ...]
tid_clusters_list_detailed = find_reusable_cluster(tid_clusters_list, gen2_tid_calculated_molecules_list)
# Convert linear dependency (data degeneracy) into a graph representation
graph_reusable_set, graph_single_coverage_set, graph_multiple_coverage_sets = gen_graph_for_2nd_round(tid_clusters_list_detailed)
# randomized optimization procedure for minimization of data-degeneracy
selected , final_coverage, final_overlap, coverage_history, overlap_history = find_minimum_degeneracy_for_2nd_round(graph_reusable_set, graph_single_coverage_set, graph_multiple_coverage_sets)
%%capture cap3 --no-stderr
selected_rotations, molecules_list_dict_updated = select_rotations_for_2nd_round(tid_clusters_list_detailed, selected, molecules_list_dict, tid_calculated_molecules_list=tid_calculated_molecules_list, molecules_list_dict_from_td=molecules_list_dict_from_td, first_round_tid_calculated_molecules_list=gen2_tid_calculated_molecules_list, first_round_molecules_list_dict_from_td=gen2_molecules_list_dict_from_td)
# Store selected molecules into json file
gen_json_for_2nd_round(selected_rotations, molecule_attributes, molecules_list_dict_updated, output_json='roche_2_selected_torsions.json')
with open('select.log', 'w') as f:
f.write(cap1.stdout)
f.write(cap2.stdout)
f.write(cap3.stdout)
```
# HiddenLayer Training Demo - PyTorch
```
import os
import time
import random
import numpy as np
import torch
import torchvision.models
import torch.nn as nn
from torchvision import datasets, transforms
import hiddenlayer as hl
```
## Basic Use Case
To track your training, you use two classes: `History` to store the metrics, and `Canvas` to draw them.
This example simulates a training loop.
```
# A History object to store metrics
history1 = hl.History()
# A Canvas object to draw the metrics
canvas1 = hl.Canvas()
# Simulate a training loop with two metrics: loss and accuracy
loss = 1
accuracy = 0
for step in range(800):
# Fake loss and accuracy
loss -= loss * np.random.uniform(-.09, 0.1)
accuracy = max(0, accuracy + (1 - accuracy) * np.random.uniform(-.09, 0.1))
# Log metrics and display them at certain intervals
if step % 10 == 0:
# Store metrics in the history object
history1.log(step, loss=loss, accuracy=accuracy)
# Plot the two metrics in one graph
canvas1.draw_plot([history1["loss"], history1["accuracy"]])
time.sleep(0.1)
```
## Comparing Experiments
Often you'll want to compare experiments against each other.
```
# New history and canvas objects
history2 = hl.History()
canvas2 = hl.Canvas()
# Simulate a training loop with two metrics: loss and accuracy
loss = 1
accuracy = 0
for step in range(800):
# Fake loss and accuracy
loss -= loss * np.random.uniform(-.09, 0.1)
accuracy = max(0, accuracy + (1 - accuracy) * np.random.uniform(-.09, 0.1))
# Log metrics and display them at certain intervals
if step % 10 == 0:
history2.log(step, loss=loss, accuracy=accuracy)
# Draw two plots
# Enclose them in a "with" context to ensure they render together
with canvas2:
canvas2.draw_plot([history1["loss"], history2["loss"]],
labels=["Loss 1", "Loss 2"])
canvas2.draw_plot([history1["accuracy"], history2["accuracy"]],
labels=["Accuracy 1", "Accuracy 2"])
time.sleep(0.1)
```
## Saving and Loading Histories
The `History` object stores the metrics in RAM, which is often good enough for simple
experiments. To keep a history around, you can save and load it as follows.
```
# Save experiments 1 and 2
history1.save("experiment1.pkl")
history2.save("experiment2.pkl")
# Load them again. To verify it's working, load them into new objects.
h1 = hl.History()
h2 = hl.History()
h1.load("experiment1.pkl")
h2.load("experiment2.pkl")
```
Verify the data is loaded
```
# Show a summary of the experiment
h1.summary()
# Draw a plot of experiment 2
hl.Canvas().draw_plot(h2["accuracy"])
```
## Custom Visualizations
Adding new custom visualizations is pretty easy. Derive a new class from `Canvas` and add your new method to it. You can use any of the drawing functions provided by `matplotlib`.
Here is an example to display the accuracy metric as a pie chart.
```
class MyCanvas(hl.Canvas):
"""Extending Canvas to add a pie chart method."""
def draw_pie(self, metric):
# Method name must start with 'draw_' for the Canvas to automatically manage it
# Use the provided matplotlib Axes in self.ax
self.ax.axis('equal') # set square aspect ratio
# Get latest value of the metric
value = np.clip(metric.data[-1], 0, 1)
# Draw pie chart
self.ax.pie([value, 1-value], labels=["Accuracy", ""])
```
In addition to the pie chart, let's use image visualizations (which are built-in).
```
history3 = hl.History()
canvas3 = MyCanvas() # My custom Canvas
# Simulate a training loop
loss = 1
accuracy = 0
for step in range(400):
# Fake loss and accuracy
loss -= loss * np.random.uniform(-.09, 0.1)
accuracy = max(0, accuracy + (1 - accuracy) * np.random.uniform(-.09, 0.1))
if step % 10 == 0:
# Log loss and accuracy
history3.log(step, loss=loss, accuracy=accuracy)
# Log a fake image metric (e.g. image generated by a GAN)
image = np.sin(np.sum(((np.indices([32, 32]) - 16) * 0.5 * accuracy) ** 2, 0))
history3.log(step, image=image)
# Display
with canvas3:
canvas3.draw_pie(history3["accuracy"])
canvas3.draw_plot([history3["accuracy"], history3["loss"]])
canvas3.draw_image(history3["image"])
time.sleep(0.1)
```
## Running without a GUI
If the training loop is running on a server without a GUI, then use the `History` `progress()` method to print a text status.
```
# Print the metrics of the last step.
history1.progress()
```
You can also periodically save a snapshot of the graphs to disk to view later. See `demos/history_demo.py` for an example.
First, set matplotlib backend to Agg.
```
# Set matplotlib backend to Agg. MUST be done BEFORE importing hiddenlayer
import matplotlib
matplotlib.use("Agg")
```
Then, in the training loop:
```
# Print a text progress status in the loop
history.progress()
# Occasionally, save a snapshot of the graphs.
canvas.draw_plot([h["loss"], h["accuracy"]])
canvas.save("training_graph.png")
```
## Real Training Example
```
# Create data directory in project root (to download dataset to)
ROOT_DIR = os.path.dirname(os.path.abspath(os.getcwd()))
DATA_DIR = os.path.join(ROOT_DIR, "test_data")
# CIFAR10 Dataset
t = transforms.Compose([transforms.ToTensor()])
train_dataset = datasets.CIFAR10(DATA_DIR, train=True, download=True, transform=t)
test_dataset = datasets.CIFAR10(DATA_DIR, train=False, download=True, transform=t)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=50, shuffle=True)
testloader = torch.utils.data.DataLoader(test_dataset, batch_size=50, shuffle=True)
# Print dataset stats (recent torchvision exposes these as .data / .targets)
hl.write("train_dataset.data", train_dataset.data)
hl.write("train_dataset.labels", train_dataset.targets)
hl.write("test_dataset.data", test_dataset.data)
hl.write("test_dataset.labels", test_dataset.targets)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("device = ", device)
# Simple Convolutional Network
class CifarModel(nn.Module):
def __init__(self):
super(CifarModel, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, padding=1),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, padding=1),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(2, 2),
nn.Conv2d(16, 32, kernel_size=3, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Conv2d(32, 32, kernel_size=3, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(2, 2),
nn.Conv2d(32, 32, kernel_size=3, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Conv2d(32, 32, kernel_size=3, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.AdaptiveMaxPool2d(1)
)
self.classifier = nn.Sequential(
nn.Linear(32, 32),
# TODO: nn.BatchNorm2d(32),
nn.ReLU(),
nn.Linear(32, 10))
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return x
model = CifarModel().to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
hl.build_graph(model, torch.zeros([1, 3, 32, 32]).to(device))
def activations_hook(self, inputs, output):
"""Intercepts the forward pass and logs activations.
"""
batch_ix = step[1]
if batch_ix and batch_ix % 100 == 0:
# The output of this layer is of shape [batch_size, 16, 32, 32]
# Take a slice that represents one feature map
cifar_history.log(step, conv1_output=output.data[0, 0])
# A hook to extract the activations of intermediate layers
model.features[0].register_forward_hook(activations_hook);
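# A forward hook fires on every forward pass of the module it is registered on.
# Minimal self-contained illustration of the mechanism (toy layer, unrelated to CIFAR):
_captured = []
_lin = nn.Linear(4, 2)
_handle = _lin.register_forward_hook(lambda mod, inp, out: _captured.append(out.shape))
_lin(torch.zeros(3, 4))   # triggers the hook
_handle.remove()          # detach the hook again
assert _captured == [torch.Size([3, 2])]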
step = (0, 0) # tuple of (epoch, batch_ix)
cifar_history = hl.History()
cifar_canvas = hl.Canvas()
# Training loop
for epoch in range(2):
train_iter = iter(train_loader)
for batch_ix, (inputs, labels) in enumerate(train_iter):
# Update global step counter
step = (epoch, batch_ix)
optimizer.zero_grad()
inputs = inputs.to(device)
labels = labels.to(device)
# forward + backward + optimize
outputs = model(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# Print statistics
if batch_ix and batch_ix % 100 == 0:
# Compute accuracy
pred_labels = np.argmax(outputs.detach().cpu().numpy(), 1)
accuracy = np.mean(pred_labels == labels.detach().cpu().numpy())
# Log metrics to history
cifar_history.log((epoch, batch_ix),
loss=loss, accuracy=accuracy,
conv1_weight=model.features[0].weight)
# Visualize metrics
with cifar_canvas:
cifar_canvas.draw_plot([cifar_history["loss"], cifar_history["accuracy"]])
cifar_canvas.draw_image(cifar_history["conv1_output"])
cifar_canvas.draw_hist(cifar_history["conv1_weight"])
```
```
%matplotlib notebook
from pydub import AudioSegment
import tqdm
import json
import os
import statistics
import argparse
from utils import get_msecs, video_to_wav, extract_features, read_audio_file, get_wav
from models import model_torch
import librosa
import torch
import numpy as np
import sed_vis
import dcase_util
from utils import video_to_wav
def librosa_get_data_chunk(X, sr, json_data, labels_exist=True):
labeled_wav_list = []
if labels_exist:
for descr in json_data['sound_results']:
st_time = int(get_msecs(descr['start_time']) * sr / 1000)
end_time = int(get_msecs(descr['end_time']) * sr / 1000)
label = descr['sound_type']
labeled_wav_list.append((X[st_time:end_time+1], label, get_msecs(descr['start_time']), get_msecs(descr['end_time'])))
else:
data_length = len(X)*1000//sr
for time in range(0, data_length-1000, 1000):
st_time = int(time * sr / 1000)
end_time = int((time+1000) * sr / 1000)
label = None
labeled_wav_list.append((X[st_time:end_time+1], label, time, time+1000))
return labeled_wav_list
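# The unlabeled branch above slides a 1-second window with a 1-second hop and
# drops the final partial second. Minimal self-contained sketch of that windowing
# (hypothetical helper mirroring librosa_get_data_chunk without labels):
def _chunk_unlabeled(X, sr, chunk_ms=1000):
    out = []
    total_ms = len(X) * 1000 // sr
    for t in range(0, total_ms - chunk_ms, chunk_ms):
        s = int(t * sr / 1000)
        e = int((t + chunk_ms) * sr / 1000)
        out.append((X[s:e + 1], None, t, t + chunk_ms))
    return out

_demo = _chunk_unlabeled(np.zeros(3000), 1000)  # 3 s of silence at a toy 1 kHz rate
assert len(_demo) == 2 and _demo[0][2:] == (0, 1000)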
def get_model(features):
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print("Executing model on:", device)
model = model_torch(40, 9)
print("Loading model weights...")
PATH = "../checkpoint/torch_model.pt"
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
epoch = checkpoint['epoch']
prev_loss = checkpoint['loss']
model.to(device)
model.eval()
print("Loading finished.")
return model, device
def get_time(start_time, end_time):
st_secs = start_time//1000
ed_secs = end_time//1000
return st_secs, ed_secs
def _convert_to_dcase_format(ann_dict):
ev_list = []
sound_results = ann_dict['sound_results']
for ev in sound_results:
d = {}
d['onset'], d['offset'] = get_time(ev['start_time'], ev['end_time'])
d['file_name'] = ann_dict['file_name']
d['event_label'] = ev['sound_type']
ev_list.append(d)
return ev_list
def visualize(pred, gt, path):
truth = _convert_to_dcase_format(gt)
prediction = _convert_to_dcase_format(pred)
audio_container = dcase_util.containers.AudioContainer().load(path, mono=True)
truth_list = dcase_util.containers.MetaDataContainer(truth)
pred_list = dcase_util.containers.MetaDataContainer(prediction)
event_lists = {'reference': truth_list,
'estimated': pred_list}
# Visualize the predicted and ground-truth sound events
vis = sed_vis.visualization.EventListVisualizer(event_lists=event_lists,
audio_signal=audio_container.data,
sampling_rate=audio_container.fs)
vis.show()
#vis.save('../figures/prediction_visualization.png')
return
def predict(args):
labels = None
if args.gt is not None:
print("Loading json label files...")
path = args.gt
with open(path) as js_file:
js_data = json.load(js_file)
print("JS_data of ", js_data['file_name'], 'is loaded')
labels = js_data
print("Loading input audio file...")
path = args.input
#(audio, _) = get_wav(path)
X, sr = read_audio_file(path)
#print('Extracted.', len(audio))
print("shape and sr:", X.shape, sr)
classes = np.load('./classes.npy', allow_pickle=True)
for it in range(len(classes)):
cl = classes[it]
classes[it] = cl.replace('-','/')
print("\nClasses are ", classes, '\n')
if labels is not None:
print("Dividing whole video into chunks with GT labels...")
labeled_wav = None
labeled_wav = librosa_get_data_chunk(X, sr, labels)
else:
print("Dividing whole video into chunks...")
labeled_wav = None
labeled_wav = librosa_get_data_chunk(X, sr, labels, labels_exist=False)
# load pre-trained model
model, device = get_model(40)
#prediction json dictionary
json_dict = {}
json_dict["file_name"] = args.input.split('/')[-1]
json_dict["sound_results"] = list()
#ground truth json dictionary
gt_labels = {}
gt_labels["file_name"] = args.input.split('/')[-1]
gt_labels["sound_results"] = list()
for sample in tqdm.tqdm(labeled_wav):
if len(sample[0]) < 1000:
continue
features = extract_features(sample[0], sr)
X_test = np.expand_dims(np.array(features), axis=0)
x_testcnn = np.expand_dims(X_test, axis=2)
y_pred = model(torch.from_numpy(x_testcnn).float().to(device)).detach().cpu().numpy()
predicted = np.argmax(y_pred, 1)
js_entry = {}
js_entry["start_time"] = sample[-2]
js_entry["end_time"] = sample[-1]
js_entry["sound_type"] = classes[predicted][0]
json_dict["sound_results"].append(js_entry)
if labels is not None and js_entry["sound_type"] != sample[1]:
js_entry2 = {"start_time": sample[-2], "end_time": sample[-1], "sound_type": sample[1]}
gt_labels["sound_results"].append(js_entry2)
else:
js_entry2 = {"start_time": sample[-2], "end_time": sample[-1], "sound_type": 'unlabeled'}
gt_labels["sound_results"].append(js_entry2)
save_directory = args.output[:args.output.rfind("/")]
if not os.path.exists(save_directory):
os.makedirs(save_directory)
with open(args.output, 'w') as outfile:
json.dump(json_dict, outfile, indent=2)
print("Predictions are written to JSON file!")
with open(save_directory+'/'+'gt_labels.json', 'w') as outfile:
json.dump(gt_labels, outfile, indent=2)
print("GT_labels are written to JSON file!")
return json_dict, gt_labels, path
def parse_args():
parser = argparse.ArgumentParser(description="Run data preprocessing.")
parser.add_argument('--checkpoint', nargs='?', default='checkpoint/torch_model.pt',
help="Path to the model's checkpoint.")
parser.add_argument('--input', nargs='?', default=None,
help='Path to input video.')
parser.add_argument('--gt', nargs='?', default=None,
help='Path to JSON file with GT labels.')
parser.add_argument('--output', nargs='?', default=None,
help='Path to output JSON file with predicted labels.')
parser.add_argument('--sr', nargs='?', default=48000,
help='Sampling rate.')
return parser.parse_args()
import sys,os,argparse
#input wav file and prediction output
sys.argv = ['demo.py', '--input', '../test_kor.mp4', '--output', '../predictions/prediction.json']
args = parse_args()
if args.input[-3:] != 'wav':
print("Video file is received! Converting to wav...")
args.input = video_to_wav(args.input, save_directory=args.input.rsplit('/', 1)[0]+'/', video_format=args.input.rsplit('.', 1)[1])
print("Conversion finished! New wav file: ", args.input)
print("Inference stage...")
json_dict, gt_labels, path = predict(args)
#Visualization
print("Visualization of results...")
with open(args.output) as js_file:
json_dict = json.load(js_file)
save_directory = args.output[:args.output.rfind("/")]
with open(save_directory+'/'+'gt_labels.json') as js_file:
gt_labels = json.load(js_file)
path = args.input
print("Predictions and GT labels are loaded.")
print("Begin visualization...")
visualize(json_dict, gt_labels, path)
args.output
```
```
from google.colab import drive
drive.mount('/content/drive')
import torch
print(torch.__version__)
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
GDRIVE = '/content/drive/MyDrive/2516'
models = [
'RDN_50epoch_Baseline_Bicubic',
'RDN_50epoch_AblateCMLRLGFF_Bicubic',
'RDN_50epoch_AblateLRLGFF_Bicubic',
'RDN_50epoch_AblateCMGFF_Bicubic',
'RDN_50epoch_AblateCMLRL_Bicubic',
'RDN_50epoch_AblateGFF_Bicubic',
'RDN_50epoch_AblateLRL_Bicubic',
'RDN_50epoch_AblateCM_Bicubic',
'RDN_50epoch_ShortSkipConn_Bicubic',
'RDN_50epoch_Laplacian_Bicubic',
'RDN_50epoch_ResidualBlock_Bicubic',
'RDN_50epoch_CascadingBlock_Bicubic',
'RDN_50epoch_BaselineD16C8G64_Bicubic',
'RDN_50epoch_AblateCMLRLGFFD16C8G64_Bicubic',
'DRLN_50epoch_Baseline_Model',
'DRLN_50epoch_No_Everything_Model',
'DRLN_50epoch_No-Laplacian_Model',
'DRLN_50epoch_No-LONG-No-Laplacian_Model',
'DRLN_50epoch_No_Long_Skip_Conn_Model',
'DRLN_50epoch_NO-Medium-Con-No-Long-Con_Model',
'DRLN_50epoch_NO-Medium-No-Laplacian_Model',
'DRLN_50epoch_no_medium_skip_conn_Model',
]
```
# Download Models - Deprecated
```
for model in models:
print(model)
!cp {GDRIVE}/{model}.zip /content/
!unzip -q {model}.zip
```
# Process DRLN Zips - Deprecated
```
weird_models = [
'DRLN_50epoch_No_Everything_Model',
'DRLN_50epoch_No-Laplacian_Model',
'DRLN_50epoch_No-LONG-No-Laplacian_Model',
'DRLN_50epoch_No_Long_Skip_Conn_Model',
'DRLN_50epoch_NO-Medium-Con-No-Long-Con_Model',
'DRLN_50epoch_NO-Medium-No-Laplacian_Model',
'DRLN_50epoch_no_medium_skip_conn_Model']
drln_models = [
'DRLN_50epoch_Baseline_Model',
'DRLN_50epoch_No_Everything_Model',
'DRLN_50epoch_No-Laplacian_Model',
'DRLN_50epoch_No-LONG-No-Laplacian_Model',
'DRLN_50epoch_No_Long_Skip_Conn_Model',
'DRLN_50epoch_NO-Medium-Con-No-Long-Con_Model',
'DRLN_50epoch_NO-Medium-No-Laplacian_Model',
'DRLN_50epoch_no_medium_skip_conn_Model',
]
for model in weird_models:
!mkdir /content/{model}
!mv /content/content/{model}/* /content/{model}
!rm -rf /content/content
for model in drln_models:
!mv /content/{model}/DRLN_epoch50/* /content/{model}
```
# Extract logs from model.zips
Just need to load `model_logs.zip` next time for plots
```
!mkdir /content/model_logs
to_cps = ['log.txt', 'loss.pt', 'loss_L1.png', 'loss_log.pt', 'psnr_log.pt', 'test_DIV2K.png']
for model in models:
!mkdir /content/model_logs/{model}
for to_cp in to_cps:
!cp /content/{model}/{to_cp} /content/model_logs/{model}/{to_cp}
!zip -r model_logs.zip model_logs
!cp model_logs.zip {GDRIVE}
```
# Load logs
```
!cp {GDRIVE}/model_logs.zip model_logs.zip
!unzip -q model_logs.zip
for model in models:
!mv model_logs/{model} {model}
```
# Plot Logs
```
FILE_EXPNAME = {
'RDN_50epoch_Baseline_Bicubic': 'RDN Baseline',
'RDN_50epoch_AblateCMLRLGFF_Bicubic': 'RDN w/o CM,LRL,GFF',
'RDN_50epoch_AblateLRLGFF_Bicubic':'RDN w/o LRL,GFF',
'RDN_50epoch_AblateCMGFF_Bicubic':'RDN w/o CM,GFF',
'RDN_50epoch_AblateCMLRL_Bicubic':'RDN w/o CM,LRL',
'RDN_50epoch_AblateGFF_Bicubic':'RDN w/o GFF',
'RDN_50epoch_AblateLRL_Bicubic':'RDN w/o LRL',
'RDN_50epoch_AblateCM_Bicubic':'RDN w/o CM',
'RDN_50epoch_ShortSkipConn_Bicubic':'RDN + SSC',
'RDN_50epoch_Laplacian_Bicubic':'RDN + LA',
'RDN_50epoch_ResidualBlock_Bicubic':'RDN + Residual Blocks',
'RDN_50epoch_CascadingBlock_Bicubic':'RDN + Global Cascading',
'RDN_50epoch_BaselineD16C8G64_Bicubic': 'RDN Baseline D16C8G64',
'RDN_50epoch_AblateCMLRLGFFD16C8G64_Bicubic': 'RDN w/o CM,LRL,GFF D16C8G64',
'DRLN_50epoch_Baseline_Model':'DRLN Baseline',
'DRLN_50epoch_No_Everything_Model':'DRLN w/o SSC,LSC,LA',
'DRLN_50epoch_No-Laplacian_Model':'DRLN w/o LA',
'DRLN_50epoch_No-LONG-No-Laplacian_Model':'DRLN w/o LSC,LA',
'DRLN_50epoch_No_Long_Skip_Conn_Model':'DRLN w/o LSC',
'DRLN_50epoch_NO-Medium-Con-No-Long-Con_Model':'DRLN w/o SSC,LSC',
'DRLN_50epoch_NO-Medium-No-Laplacian_Model':'DRLN w/o SSC,LA',
'DRLN_50epoch_no_medium_skip_conn_Model':'DRLN w/o SSC',
}
fig_dir = '/content/figures'
!mkdir {fig_dir}
baseline_args = {
'linestyle':'dashed',
'color':'r'
}
none_args = {
'color': None
}
# figsize=(10, 8), dpi=80
def plot_psnr(epoch, model_names, figname):
logs = []
for model in model_names:
log = torch.load(model + '/psnr_log.pt')[:, 0].numpy()
logs.append(log)
axis = np.linspace(1, epoch, epoch)
# label = 'SR on DIV2K'
fig = plt.figure()
# plt.title(label)
for i, log in enumerate(logs):
args = baseline_args if i == 0 else none_args
plt.plot(
axis,
log,
label=FILE_EXPNAME[model_names[i]],
**args
)
plt.legend()
plt.xlabel('Epochs')
plt.ylabel('PSNR')
plt.grid(True)
plt.savefig('{}/convergence_{}.png'.format(fig_dir, figname))
plt.show()
plt.close(fig)
```
## Start plotting
```
plot_psnr(50, [
'DRLN_50epoch_Baseline_Model',
'DRLN_50epoch_No-Laplacian_Model',
'DRLN_50epoch_No_Long_Skip_Conn_Model',
'DRLN_50epoch_no_medium_skip_conn_Model',
'DRLN_50epoch_No-LONG-No-Laplacian_Model',
'DRLN_50epoch_NO-Medium-No-Laplacian_Model',
'DRLN_50epoch_NO-Medium-Con-No-Long-Con_Model',
], 'DRLN_BaselinesAblateComponents')
plot_psnr(50, [
'DRLN_50epoch_Baseline_Model',
'DRLN_50epoch_No_Everything_Model',
], 'DRLN_BaselinesAblateAll')
plot_psnr(50, [
'RDN_50epoch_Baseline_Bicubic',
'DRLN_50epoch_Baseline_Model',
], 'RDN_DRLN_Baselines')
plot_psnr(50, [
'RDN_50epoch_BaselineD16C8G64_Bicubic',
'RDN_50epoch_AblateCMLRLGFFD16C8G64_Bicubic',
], 'RDN_BaselineAblateAll_ConfigB')
plot_psnr(50, [
'RDN_50epoch_Baseline_Bicubic',
'RDN_50epoch_BaselineD16C8G64_Bicubic',
], 'RDN_BaselineModelSize')
plot_psnr(50, [
'RDN_50epoch_Baseline_Bicubic',
'RDN_50epoch_ShortSkipConn_Bicubic',
'RDN_50epoch_Laplacian_Bicubic',
'RDN_50epoch_ResidualBlock_Bicubic',
'RDN_50epoch_CascadingBlock_Bicubic',
], 'RDN_BaselineAddComponents')
plot_psnr(50, [
'RDN_50epoch_Baseline_Bicubic',
'RDN_50epoch_AblateLRLGFF_Bicubic',
'RDN_50epoch_AblateCMGFF_Bicubic',
'RDN_50epoch_AblateCMLRL_Bicubic',
'RDN_50epoch_AblateGFF_Bicubic',
'RDN_50epoch_AblateLRL_Bicubic',
'RDN_50epoch_AblateCM_Bicubic',
], 'RDN_BaselineAblateComponents')
plot_psnr(50, [
'RDN_50epoch_Baseline_Bicubic',
'RDN_50epoch_AblateCMLRLGFF_Bicubic',
], 'RDN_BaselineAblateAll')
plot_psnr(50, [
'RDN_50epoch_Baseline_Bicubic',
'RDN_50epoch_AblateCMLRLGFF_Bicubic',
'RDN_50epoch_AblateLRLGFF_Bicubic',
'RDN_50epoch_AblateCMGFF_Bicubic',
'RDN_50epoch_AblateCMLRL_Bicubic',
'RDN_50epoch_AblateGFF_Bicubic',
'RDN_50epoch_AblateLRL_Bicubic',
'RDN_50epoch_AblateCM_Bicubic',
], 'RDN_BaselineAblateAllandComponents')
plot_psnr(50, [
'DRLN_50epoch_Baseline_Model',
'DRLN_50epoch_No_Everything_Model',
'DRLN_50epoch_No-Laplacian_Model',
'DRLN_50epoch_No_Long_Skip_Conn_Model',
'DRLN_50epoch_no_medium_skip_conn_Model',
'DRLN_50epoch_No-LONG-No-Laplacian_Model',
'DRLN_50epoch_NO-Medium-No-Laplacian_Model',
'DRLN_50epoch_NO-Medium-Con-No-Long-Con_Model',
], 'DRLN_BaselinesAblateAllandComponents')
```
## Save figures
```
!zip -r figures.zip figures
!cp figures.zip {GDRIVE}
```
This notebook is written by pythonash.
It is meant to find good hyperparameters: the learning rate, dropout rate, and so on.
It will be updated until I either settle on an optimal structure or this competition ends (or work leaves me no time for it).
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
import ubiquant
from sklearn.model_selection import KFold
import random
df = pd.read_parquet('../input/ubiquant-parquet/train_low_mem.parquet')
df
df.isnull().sum().sum()
# pd.read_parquet('../input/ubiquant-parquet/example_sample_submission.parquet')
df.columns
f_col = df.drop(['row_id','time_id','investment_id','target'],axis=1).columns
f_col
scaler = MinMaxScaler()
scaler.fit(pd.DataFrame(df[f_col]))
def make_dataset(df):
f_df = df[f_col]
scaled_f = scaler.transform(pd.DataFrame(f_df))
data_x = pd.DataFrame(scaled_f)
data_x.columns = f_df.columns
del f_df
data_x = data_x.astype('float16')
return data_x
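# MinMaxScaler rescales each feature column to [0, 1]: x' = (x - min) / (max - min).
# Tiny self-contained check of that formula on toy data (independent of df):
_toy = MinMaxScaler().fit_transform(np.array([[0.0], [5.0], [10.0]]))
assert _toy[1, 0] == 0.5 and _toy[2, 0] == 1.0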
# df=df.astype('float16')
df_x = make_dataset(df)
df_x
df_y = pd.DataFrame(df['target'])
df_y
del df
sns.histplot(df_y)  # distplot is deprecated in recent seaborn
def pythonash_model():
neurons = random.randint(64, 1023)
# leaky_rate = random.randint(1,6)/10
drop_rate = random.randint(1,6)/10
lr_rate = random.uniform(0, 0.005)
# decay_st = random.randint(5000, 100000)
# decay_ra = random.randint(97,100) /100
inputs_ = tf.keras.Input(shape = [df_x.shape[1]])
x = tf.keras.layers.Dense(neurons, kernel_initializer = 'he_normal')(inputs_)
batch = tf.keras.layers.BatchNormalization()(x)
leaky = tf.keras.layers.PReLU()(batch)
x = tf.keras.layers.Dense(neurons, kernel_initializer = 'he_normal')(leaky)
batch = tf.keras.layers.BatchNormalization()(x)
leaky = tf.keras.layers.PReLU()(batch)
x = tf.keras.layers.Dense(neurons, kernel_initializer = 'he_normal')(leaky)
batch = tf.keras.layers.BatchNormalization()(x)
leaky = tf.keras.layers.PReLU()(batch)
x = tf.keras.layers.Dense(neurons, kernel_initializer = 'he_normal')(leaky)
batch = tf.keras.layers.BatchNormalization()(x)
leaky = tf.keras.layers.PReLU()(batch)
x = tf.keras.layers.Dense(neurons, kernel_initializer = 'he_normal')(leaky)
batch = tf.keras.layers.BatchNormalization()(x)
leaky = tf.keras.layers.PReLU()(batch)
drop = tf.keras.layers.Dropout(drop_rate)(leaky)
x = tf.keras.layers.Dense(neurons, kernel_initializer = 'he_normal')(drop)
batch = tf.keras.layers.BatchNormalization()(x)
leaky = tf.keras.layers.PReLU()(batch)
x = tf.keras.layers.Dense(neurons, kernel_initializer = 'he_normal')(leaky)
batch = tf.keras.layers.BatchNormalization()(x)
leaky = tf.keras.layers.PReLU()(batch)
drop = tf.keras.layers.Dropout(drop_rate)(leaky)
outputs_ = tf.keras.layers.Dense(1)(drop)
model = tf.keras.Model(inputs = inputs_, outputs = outputs_)
rmse = tf.keras.metrics.RootMeanSquaredError()
learning_sch = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate = lr_rate,
decay_steps = 10000,
decay_rate = 0.97)
adam = tf.keras.optimizers.Adam(learning_rate = learning_sch)
model.compile(loss = 'mse', metrics = rmse, optimizer = adam)
opt_name = str(model.optimizer).split('.')[3].split()[0]
print('Current set is \n neurons: {0},\n Drop rate: {1}, \n learning_rate: {2}'.format(neurons, drop_rate, lr_rate))
return neurons, drop_rate, lr_rate, model
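# The schedule above is Keras ExponentialDecay with staircase=False, i.e.
# lr(step) = initial_lr * decay_rate ** (step / decay_steps) -- a smooth 3% drop
# every 10k steps with the values used here. Self-contained check of the formula:
def _decayed_lr(initial_lr, step, decay_steps=10000, decay_rate=0.97):
    return initial_lr * decay_rate ** (step / decay_steps)

assert abs(_decayed_lr(0.001, 10000) - 0.001 * 0.97) < 1e-15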
simulation_log = []
num_iter = 1
for iteration in np.arange(1,20):
num_fold = 1
kfold_generator = KFold(n_splits =5, shuffle=True)
callbacks = tf.keras.callbacks.ModelCheckpoint('pythonash_model.h5', save_best_only = True)
neurons, drop_rate, lr_rate, model = pythonash_model()
model.save('fold_model.h5')  # model.save returns None; save the fresh weights so each fold starts from the same state
del model
for train_index, val_index in kfold_generator.split(df_x, df_y):
fold_model = tf.keras.models.load_model('fold_model.h5')
# Split training dataset.
train_x, train_y = df_x.iloc[train_index], df_y.iloc[train_index]
# Split validation dataset.
val_x, val_y = df_x.iloc[val_index], df_y.iloc[val_index]
# Make tensor dataset.
tf_train = tf.data.Dataset.from_tensor_slices((train_x, train_y)).shuffle(2022).batch(1024, drop_remainder=True).prefetch(1)
tf_val = tf.data.Dataset.from_tensor_slices((val_x, val_y)).shuffle(2022).batch(1024, drop_remainder=True).prefetch(1)
# Load model
###############################################################################################
print('======================================Fold %d Start!======================================='%num_fold)
fit_history = fold_model.fit(tf_train, callbacks = callbacks, epochs = 5,  # increase the epoch count for real training
validation_data = tf_val, shuffle=True, verbose = 1)
min_loss = np.array(fit_history.history['val_loss']).min()
print('===========================================================================================')
print('Model achieves %f in validation set.' %min_loss)
print('===========================================================================================')
simulation_log.append([num_iter, num_fold, neurons, drop_rate, lr_rate, min_loss])
log_df = pd.DataFrame(simulation_log)
log_df.columns = ['num_iter','num_fold','neurons', 'drop_rate', 'lr_rate', 'min_loss']
print(log_df)
log_df.to_csv('./Parameter finder log.csv', encoding = 'utf-8-sig', index = False)
print('===========================================================================================')
        # Delete tensor datasets and model to avoid memory blow-up.
del tf_train
del tf_val
del fit_history
del fold_model
num_fold += 1
# del model
del neurons
del drop_rate
del lr_rate
del min_loss
    print('%d iteration is over.' %num_iter)
print('===========================================================================================')
num_iter+=1
best_model = tf.keras.models.load_model('pythonash_model.h5')
best_model.summary()
best_model.layers
kfold_generator = KFold(n_splits = 5, shuffle=True, random_state = 2022)
kfold_generator
# Write your model name down in 'pythonash_model.h5'.
callbacks = tf.keras.callbacks.ModelCheckpoint('pythonash_model.h5', save_best_only = True)
for train_index, val_index in kfold_generator.split(df_x, df_y):
# Split training dataset.
train_x, train_y = df_x.iloc[train_index], df_y.iloc[train_index]
# Split validation dataset.
val_x, val_y = df_x.iloc[val_index], df_y.iloc[val_index]
# Make tensor dataset.
tf_train = tf.data.Dataset.from_tensor_slices((train_x, train_y)).shuffle(2022).batch(1024, drop_remainder=True).prefetch(1)
tf_val = tf.data.Dataset.from_tensor_slices((val_x, val_y)).shuffle(2022).batch(1024, drop_remainder=True).prefetch(1)
# Load model
    _, _, _, model = pythonash_model()  # pythonash_model returns (neurons, drop_rate, lr_rate, model)
# Model fitting
    ## Only 5 epochs here for a quick checkpoint save; increase for a real run.
    model.fit(tf_train, callbacks = callbacks, epochs = 5,
              validation_data = tf_val, shuffle=True)
    # Delete tensor datasets and model to avoid memory blow-up.
del tf_train
del tf_val
del model
best_model = tf.keras.models.load_model('pythonash_model.h5')
env = ubiquant.make_env()
iter_test = env.iter_test()
for (test_df, sample_prediction_df) in iter_test:
test_df = make_dataset(test_df)
sample_prediction_df['target'] = best_model.predict(test_df)
env.predict(sample_prediction_df)
```
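The loop above re-splits `df_x`/`df_y` with sklearn's `KFold` on every iteration. The index bookkeeping `KFold` performs can be sketched in plain Python — a simplified, unshuffled stand-in; `kfold_indices` is a hypothetical helper, not part of the notebook:

```python
def kfold_indices(n_samples, n_splits):
    """Yield (train, validation) index lists; each sample validates exactly once."""
    # Distribute the remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // n_splits + (1 if i < n_samples % n_splits else 0)
                  for i in range(n_splits)]
    indices = list(range(n_samples))
    current = 0
    for size in fold_sizes:
        val_idx = indices[current:current + size]
        train_idx = indices[:current] + indices[current + size:]
        yield train_idx, val_idx
        current += size

# Every index lands in a validation set exactly once across the folds.
seen = []
for train_idx, val_idx in kfold_indices(12, 5):
    seen.extend(val_idx)
print(sorted(seen) == list(range(12)))  # True
```

Unlike this sketch, the notebook's `KFold(n_splits=5, shuffle=True)` also permutes the indices before splitting.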
### import libraries
```
! pip install netCDF4
import netCDF4 # python API to work with netcdf (.nc) files
import os
import datetime
from osgeo import gdal, ogr, osr
import numpy as np # library to work with matrices and computations in general
import matplotlib.pyplot as plt # plotting library
from auxiliary_classes import convert_time,convert_time_reverse,kelvin_to_celsius,kelvin_to_celsius_vector,Grid,Image,subImage
import json
import geojson, subprocess  # gdal is already imported above
```
### auxiliary functions
```
def print_geojson(tname, tvalue, fname, longitude, latitude, startdoc, position, endloop):
    """Incrementally write a GeoJSON file: header, geometry, attributes, footer."""
    fname = fname + ".geojson"
    if startdoc == 1:  # start of the FeatureCollection
        with open(fname, mode="w", encoding='utf-8') as f1:
            print("{\n\"type\": \"FeatureCollection\",\n\"features\": [", file=f1)
        return
    with open(fname, mode="a", encoding='utf-8') as f1:
        if position == 0:  # geometry with longitude, latitude
            print("\"type\": \"Feature\",\n\"geometry\": {\n\"type\": \"Point\",\n\"coordinates\": ["
                  + str(longitude) + "," + str(latitude) + "]\n},\n\"properties\": {", file=f1)
        elif position == 1:  # start of point attributes
            print("{", file=f1)
        elif position == 2:  # print attribute (not last)
            print("\"" + str(tname) + "\": \"" + str(tvalue) + "\",", file=f1)
        elif position == 3:  # print last attribute
            print("\"" + str(tname) + "\": \"" + str(tvalue) + "\"", file=f1)
        elif position == 4:  # end of point attributes / end of document
            if endloop == 0:
                print("}\n},", file=f1)
            else:  # end of the FeatureCollection
                print("}\n}\n]\n}", file=f1)
def trend(inputlist, nametrend, namediff, fname):
listlong = len(inputlist)
if listlong <= 1:
trendcoef = 0
timediff = 0
else:
x = np.arange(0,len(inputlist))
y = inputlist
z = np.polyfit(x,y,1)
trendcoef=z[0]
timediff=int(trendcoef*(listlong-1))
print_geojson(nametrend, trendcoef, fname, 0, 0, 0, 2, 0)
print_geojson(namediff, timediff, fname, 0, 0, 0, 3, 0)
def trend2(inputlist, nametrend, namediff, endyear, startyear, fname,fnameavg):
listlong = endyear-startyear+1
numberweeks = len(inputlist[0])
for j in range(0, numberweeks,1):
tempweek = j +1
if listlong <= 1:
trendcoef = 0
timediff = 0
else:
x = np.arange(0,listlong)
y = []
for i in range(0, listlong, 1):
y.append( inputlist[i][j])
z = np.polyfit(x,y,1)
trendcoef=z[0]
timediff=int(trendcoef*(listlong-1))
nametrend2 = nametrend + str(tempweek)
namediff2 = namediff + str(tempweek)
print_geojson(nametrend2, trendcoef, fname, 0, 0, 0, 2, 0)
print_geojson(nametrend2, trendcoef, fnameavg, 0, 0, 0, 2, 0)
if j == (numberweeks-1):
print_geojson(namediff2, timediff, fname, 0, 0, 0, 3, 0)
print_geojson(namediff2, timediff, fnameavg, 0, 0, 0, 3, 0)
else:
print_geojson(namediff2, timediff, fname, 0, 0, 0, 2, 0)
print_geojson(namediff2, timediff, fnameavg, 0, 0, 0, 2, 0)
def avg2Dlist(inputlist,startyear,endyear): #average for 2D list ->1D list # inputs: inputlist = 2D list, output: avglist = 1D list with avg values
numberyear = endyear-startyear+1
listlen = len(inputlist[0])
templist = []
avglist = []
for i in range(0, listlen,1):
for j in range(0, numberyear,1):
templist.append(inputlist[j][i])
tempvalue=sum(templist)/len(templist)
avglist.append(tempvalue)
templist = []
return avglist
def acumulatelist(inputlist): # accumulate list values in place (running/prefix sums) # input: 1D list, output: same list with cumulative values
listlen = len(inputlist)
for i in range (0,listlen-1,1):
inputlist[i+1] += inputlist[i]
return inputlist
def printlistasweekgeojson(inputlist,name,fname,fnameavg,endloop): # from list of week values print geojson
listlen = len(inputlist)
for i in range(0, listlen,1):
tempvalue=inputlist[i]
tvarname = name + str(i+1)
if endloop==1 and i == (listlen-1):
print_geojson(tvarname, tempvalue, fname, 0, 0, 0, 3, 0)
print_geojson(tvarname, tempvalue, fnameavg, 0, 0, 0, 2, 0)
else:
print_geojson(tvarname, tempvalue, fname, 0, 0, 0, 2, 0)
print_geojson(tvarname, tempvalue, fnameavg, 0, 0, 0, 2, 0)
```
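`print_geojson` hand-assembles a GeoJSON FeatureCollection through its position codes (header, geometry, attributes, footer). For reference, the same target structure built with the standard `json` module — the coordinates and attribute values here are made up, and the attribute keys only mirror the `W<year>_<week>` naming used later:

```python
import json

# One point feature with week-style attributes (hypothetical values).
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [14.42, 50.08]},
    "properties": {"W2010_1": "12.5", "W2010_2": "8.1"},
}
collection = {"type": "FeatureCollection", "features": [feature]}

# Round-trip through text, as the incremental writer effectively does.
text = json.dumps(collection)
parsed = json.loads(text)
print(parsed["features"][0]["geometry"]["coordinates"])  # [14.42, 50.08]
```

Building the dict first and dumping once avoids the trailing-comma bookkeeping that the `position`/`endloop` flags exist to manage.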
### Deficits: function for one place
```
from datetime import date, timedelta
def findprecipitation(allweekevapolist,allmonthevapolist,yearevapolist,allweekrunlist,allmonthrunlist,yearrunlist,allweekdeficlist,allmonthdeficlist,yeardeficlist,latitude,longitude,year,endyear,im,enddate, startdate, fnamepreci, allweekprecilist,precipitationparam, fnameannualpreci, yearprecilist, unitcoeff,fnamepreciaccum,fnamempreciaccum, fnamempreci,allmonthprecilist,fnameevapo,fnameevapoaccum,fnamemevapo,fnamemevapoaccum,fnameavgevaporation,fnameavgmevaporation,fnameannualevapo,fnamerun,fnamerunaccum,fnamemrun,fnamemrunaccum,fnameavgrunoff,fnameavgmrunoff,fnameannualrun,fnamedefic,fnamedeficaccum,fnamemdefic,fnamemdeficaccum,fnameavgdeficits,fnameavgmdeficits,fnameannualdefic):
    sdate = startdate  # start date of the analysed period
    edate = enddate    # end date of the analysed period
    delta = edate - sdate  # period length as timedelta
sevendays=0 # for determination of new week (1-7)
currentweek=1 # for determination of weeks
starthourday = 0
endhourday = 23
sdaylong = str(sdate)
tmonth = int(sdaylong[5:7])
currentmonth = tmonth
    # Weekly / monthly / yearly accumulators for each variable.
    weekprecilist = []
    monthprecilist = []
    weekprecisum = 0
    yearprecisum = 0
    monthprecisum = 0
    weekevapolist = []
    monthevapolist = []
    weekevaposum = 0
    yearevaposum = 0
    monthevaposum = 0
    weekrunlist = []
    monthrunlist = []
    weekrunsum = 0
    yearrunsum = 0
    monthrunsum = 0
    weekdeficlist = []
    monthdeficlist = []
    weekdeficsum = 0
    yeardeficsum = 0
    monthdeficsum = 0
for i in range(delta.days+1):
daylong = sdate + timedelta(days=i)
sdaylong = str(daylong)
tday = int(sdaylong[8:10])
tmonth = int(sdaylong[5:7])
tyear = int(sdaylong[0:4])
dayprecisum = 0 # start value
dayevaposum = 0 # start value
dayrunsum = 0 # start value
daydeficsum = 0 # start value
sevendays+=1
for hour in range(starthourday, endhourday+1, 1): # for specific hours (all day,only sunrise hours,..)
time=convert_time_reverse(datetime.datetime(tyear, tmonth, tday, hour, 0))
slice_dictionary={'lon':[longitude,],'lat':[latitude],'time':[int(time)]}
currentpreci=float(im.slice(precipitationparam,slice_dictionary))*unitcoeff
            currentevapo=float(im.slice(evaporationparam,slice_dictionary))*unitcoeff  # evaporationparam is read from the global scope
            currentrun=float(im.slice(runoffparam,slice_dictionary))*unitcoeff*0.0001  # runoffparam is read from the global scope; extra unit factor
currentdefic=currentpreci+currentevapo-currentrun
dayprecisum += currentpreci
yearprecisum += currentpreci
dayevaposum += currentevapo
yearevaposum += currentevapo
dayrunsum += currentrun
yearrunsum += currentrun
daydeficsum += currentdefic
yeardeficsum += currentdefic
        if daylong == edate:  # close the month totals on the last day of the period
            tvarname = "M" + str(year) + "_" + str(tmonth)
            monthprecisum+=dayprecisum
            monthprecilist.append(monthprecisum)
            print_geojson(tvarname, monthprecisum, fnamempreci, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearprecisum, fnamempreciaccum, 0, 0, 0, 2, 0)
            monthevaposum+=dayevaposum
            monthevapolist.append(monthevaposum)
            print_geojson(tvarname, monthevaposum, fnamemevapo, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearevaposum, fnamemevapoaccum, 0, 0, 0, 2, 0)
            monthrunsum+=dayrunsum
            monthrunlist.append(monthrunsum)
            print_geojson(tvarname, monthrunsum, fnamemrun, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearrunsum, fnamemrunaccum, 0, 0, 0, 2, 0)
            monthdeficsum+=daydeficsum
            monthdeficlist.append(monthdeficsum)
            print_geojson(tvarname, monthdeficsum, fnamemdefic, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yeardeficsum, fnamemdeficaccum, 0, 0, 0, 2, 0)
elif tmonth == currentmonth:
monthprecisum+=dayprecisum
monthevaposum+=dayevaposum
monthrunsum+=dayrunsum
monthdeficsum+=daydeficsum
        else:  # close the finished month (labelled with the old month) and start a new one with today's values
            tvarname = "M" + str(year) + "_" + str(currentmonth)
            monthprecilist.append(monthprecisum)
            print_geojson(tvarname, monthprecisum, fnamempreci, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearprecisum, fnamempreciaccum, 0, 0, 0, 2, 0)
            monthprecisum=dayprecisum
            monthevapolist.append(monthevaposum)
            print_geojson(tvarname, monthevaposum, fnamemevapo, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearevaposum, fnamemevapoaccum, 0, 0, 0, 2, 0)
            monthevaposum=dayevaposum
            monthrunlist.append(monthrunsum)
            print_geojson(tvarname, monthrunsum, fnamemrun, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearrunsum, fnamemrunaccum, 0, 0, 0, 2, 0)
            monthrunsum=dayrunsum
            monthdeficlist.append(monthdeficsum)
            print_geojson(tvarname, monthdeficsum, fnamemdefic, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yeardeficsum, fnamemdeficaccum, 0, 0, 0, 2, 0)
            monthdeficsum=daydeficsum
            currentmonth=tmonth
        if daylong == edate:  # close the week totals on the last day of the period
            tvarname = "W" + str(year) + "_" + str(currentweek)
            weekprecisum+=dayprecisum
            weekprecilist.append(weekprecisum)
            print_geojson(tvarname, weekprecisum, fnamepreci, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearprecisum, fnamepreciaccum, 0, 0, 0, 2, 0)
            weekevaposum+=dayevaposum
            weekevapolist.append(weekevaposum)
            print_geojson(tvarname, weekevaposum, fnameevapo, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearevaposum, fnameevapoaccum, 0, 0, 0, 2, 0)
            weekrunsum+=dayrunsum
            weekrunlist.append(weekrunsum)
            print_geojson(tvarname, weekrunsum, fnamerun, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearrunsum, fnamerunaccum, 0, 0, 0, 2, 0)
            weekdeficsum+=daydeficsum
            weekdeficlist.append(weekdeficsum)
            print_geojson(tvarname, weekdeficsum, fnamedefic, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yeardeficsum, fnamedeficaccum, 0, 0, 0, 2, 0)
elif sevendays<=7: # new week?
weekprecisum+=dayprecisum
weekevaposum+=dayevaposum
weekrunsum+=dayrunsum
weekdeficsum+=daydeficsum
        else:  # close the finished week (labelled with the old week) and start a new one with today's values
            tvarname = "W" + str(year) + "_" + str(currentweek)
            weekprecilist.append(weekprecisum)
            print_geojson(tvarname, weekprecisum, fnamepreci, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearprecisum, fnamepreciaccum, 0, 0, 0, 2, 0)
            weekprecisum=dayprecisum
            weekevapolist.append(weekevaposum)
            print_geojson(tvarname, weekevaposum, fnameevapo, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearevaposum, fnameevapoaccum, 0, 0, 0, 2, 0)
            weekevaposum=dayevaposum
            weekrunlist.append(weekrunsum)
            print_geojson(tvarname, weekrunsum, fnamerun, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearrunsum, fnamerunaccum, 0, 0, 0, 2, 0)
            weekrunsum=dayrunsum
            weekdeficlist.append(weekdeficsum)
            print_geojson(tvarname, weekdeficsum, fnamedefic, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yeardeficsum, fnamedeficaccum, 0, 0, 0, 2, 0)
            weekdeficsum=daydeficsum
            sevendays=1  # today already belongs to the new week
            currentweek+=1
allweekprecilist.append(weekprecilist)
allmonthprecilist.append(monthprecilist)
yearprecilist.append(yearprecisum)
tvarname = "Pr" + str(year)
print_geojson(tvarname, yearprecisum, fnameannualpreci, 0, 0, 0, 2, 0)
allweekevapolist.append(weekevapolist)
allmonthevapolist.append(monthevapolist)
yearevapolist.append(yearevaposum)
tvarname = "Ev" + str(year)
print_geojson(tvarname, yearevaposum, fnameannualevapo, 0, 0, 0, 2, 0)
allweekrunlist.append(weekrunlist)
allmonthrunlist.append(monthrunlist)
yearrunlist.append(yearrunsum)
tvarname = "Run" + str(year)
print_geojson(tvarname, yearrunsum, fnameannualrun, 0, 0, 0, 2, 0)
allweekdeficlist.append(weekdeficlist)
allmonthdeficlist.append(monthdeficlist)
yeardeficlist.append(yeardeficsum)
    tvarname = "D" + str(year)
print_geojson(tvarname, yeardeficsum, fnameannualdefic, 0, 0, 0, 2, 0)
```
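`findprecipitation` buckets consecutive days into 7-day weeks with the `sevendays` counter, flushing a partial week on the last day of the period. The same bucketing in isolation, over a made-up 20-day range:

```python
from datetime import date, timedelta

# A 20-day period splits into weeks of 7, 7 and 6 days.
sdate, edate = date(2010, 1, 1), date(2010, 1, 20)
weeks, current = [], []
for i in range((edate - sdate).days + 1):
    current.append(sdate + timedelta(days=i))
    if len(current) == 7:      # week is full: flush it
        weeks.append(current)
        current = []
if current:                    # flush the trailing partial week
    weeks.append(current)
print([len(w) for w in weeks])  # [7, 7, 6]
```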
### Find deficits: function for selected years
```
def precipitationyearly(latorder,lonorder,startyear,endyear,endloop,datafolder,fnamepreci,enddatem, startdatem,enddated, startdated,precipitationparam, fnameannualpreci, unitcoeff,fnamepreciaccum,fnameavgprecipitation,fnamempreciaccum, fnamempreci,fnameavgmprecipitation,fnameevapo,fnameevapoaccum,fnamemevapo,fnamemevapoaccum,fnameavgevaporation,fnameavgmevaporation,fnameannualevapo,fnamerun,fnamerunaccum,fnamemrun,fnamemrunaccum,fnameavgrunoff,fnameavgmrunoff,fnameannualrun,fnamedefic,fnamedeficaccum,fnamemdefic,fnamemdeficaccum,fnameavgdeficits,fnameavgmdeficits,fnameannualdefic):
print_geojson("", "", fnamepreci, 0, 0, 0, 1,0)
print_geojson("", "", fnamepreciaccum, 0, 0, 0, 1,0)
print_geojson("", "", fnameannualpreci, 0, 0, 0, 1,0)
print_geojson("", "", fnameavgprecipitation, 0, 0, 0, 1,0)
print_geojson("", "", fnamempreci, 0, 0, 0, 1,0)
print_geojson("", "", fnamempreciaccum, 0, 0, 0, 1,0)
print_geojson("", "", fnameavgmprecipitation, 0, 0, 0, 1,0)
print_geojson("", "", fnameevapo, 0, 0, 0, 1,0)
print_geojson("", "", fnameevapoaccum, 0, 0, 0, 1,0)
print_geojson("", "", fnameannualevapo, 0, 0, 0, 1,0)
print_geojson("", "", fnameavgevaporation, 0, 0, 0, 1,0)
print_geojson("", "", fnamemevapo, 0, 0, 0, 1,0)
print_geojson("", "", fnamemevapoaccum, 0, 0, 0, 1,0)
print_geojson("", "", fnameavgmevaporation, 0, 0, 0, 1,0)
print_geojson("", "", fnamerun, 0, 0, 0, 1,0)
print_geojson("", "", fnamerunaccum, 0, 0, 0, 1,0)
print_geojson("", "", fnameannualrun, 0, 0, 0, 1,0)
print_geojson("", "", fnameavgrunoff, 0, 0, 0, 1,0)
print_geojson("", "", fnamemrun, 0, 0, 0, 1,0)
print_geojson("", "", fnamemrunaccum, 0, 0, 0, 1,0)
print_geojson("", "", fnameavgmrunoff, 0, 0, 0, 1,0)
print_geojson("", "", fnamedefic, 0, 0, 0, 1,0)
print_geojson("", "", fnamedeficaccum, 0, 0, 0, 1,0)
print_geojson("", "", fnameannualdefic, 0, 0, 0, 1,0)
print_geojson("", "", fnameavgdeficits, 0, 0, 0, 1,0)
print_geojson("", "", fnamemdefic, 0, 0, 0, 1,0)
print_geojson("", "", fnamemdeficaccum, 0, 0, 0, 1,0)
print_geojson("", "", fnameavgmdeficits, 0, 0, 0, 1,0)
endloopyear =0
    allweekprecilist=[]   # 2D list: weekly precipitation for all years
    allmonthprecilist=[]  # 2D list: monthly precipitation for all years
    yearprecilist=[]      # 1D list: annual precipitation sums
    allweekevapolist=[]   # 2D list: weekly evaporation for all years
    allmonthevapolist=[]  # 2D list: monthly evaporation for all years
    yearevapolist=[]      # 1D list: annual evaporation sums
    allweekrunlist=[]     # 2D list: weekly runoff for all years
    allmonthrunlist=[]    # 2D list: monthly runoff for all years
    yearrunlist=[]        # 1D list: annual runoff sums
    allweekdeficlist=[]   # 2D list: weekly deficits for all years
    allmonthdeficlist=[]  # 2D list: monthly deficits for all years
    yeardeficlist=[]      # 1D list: annual deficit sums
for year in range(startyear, endyear+1, 1):
source = datafolder + '/' + str(year) + '.nc'
im=Image(netCDF4.Dataset(source,'r'))
longlist = im.get_data().variables['lon'][:]
latlist= im.get_data().variables['lat'][:]
longitude = longlist [lonorder]
latitude = latlist[latorder]
if year == startyear:
print_geojson("", "", fnamepreci, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameannualpreci, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamepreciaccum, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameavgprecipitation, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamempreci, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamempreciaccum, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameavgmprecipitation, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameevapo, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameannualevapo, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameevapoaccum, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameavgevaporation, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamemevapo, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamemevapoaccum, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameavgmevaporation, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamerun, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameannualrun, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamerunaccum, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameavgrunoff, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamemrun, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamemrunaccum, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameavgmrunoff, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamedefic, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameannualdefic, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamedeficaccum, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameavgdeficits, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamemdefic, longitude, latitude, 0, 0,0)
print_geojson("", "", fnamemdeficaccum, longitude, latitude, 0, 0,0)
print_geojson("", "", fnameavgmdeficits, longitude, latitude, 0, 0,0)
if year == endyear:
endloopyear=1
enddate=date(year, enddatem, enddated)
startdate=date(year, startdatem, startdated)
findprecipitation(allweekevapolist,allmonthevapolist,yearevapolist,allweekrunlist,allmonthrunlist,yearrunlist,allweekdeficlist,allmonthdeficlist,yeardeficlist,latitude,longitude,year,endyear,im,enddate, startdate, fnamepreci, allweekprecilist,precipitationparam, fnameannualpreci, yearprecilist, unitcoeff,fnamepreciaccum,fnamempreciaccum, fnamempreci,allmonthprecilist,fnameevapo,fnameevapoaccum,fnamemevapo,fnamemevapoaccum,fnameavgevaporation,fnameavgmevaporation,fnameannualevapo,fnamerun,fnamerunaccum,fnamemrun,fnamemrunaccum,fnameavgrunoff,fnameavgmrunoff,fnameannualrun,fnamedefic,fnamedeficaccum,fnamemdefic,fnamemdeficaccum,fnameavgdeficits,fnameavgmdeficits,fnameannualdefic)
avgweekprecilist = avg2Dlist(allweekprecilist,startyear,endyear)
avgmonthprecilist = avg2Dlist(allmonthprecilist,startyear,endyear)
printlistasweekgeojson(avgweekprecilist,"PrW",fnamepreci,fnameavgprecipitation, 0)
printlistasweekgeojson(avgmonthprecilist,"PrM",fnamempreci,fnameavgmprecipitation, 0)
avgweekacuprecilist = acumulatelist(avgweekprecilist)
avgmonthacuprecilist = acumulatelist(avgmonthprecilist)
printlistasweekgeojson(avgweekacuprecilist,"APrW",fnamepreciaccum,fnameavgprecipitation, endloopyear)
printlistasweekgeojson(avgmonthacuprecilist,"APrM",fnamempreciaccum,fnameavgmprecipitation, endloopyear)
avgpreciyear=sum(yearprecilist)/len(yearprecilist)
print_geojson("AvgPre", avgpreciyear, fnameannualpreci, 0, 0, 0, 2, 0)
nametrend = "AnTrCo"
namediff = "Andiff"
trend(yearprecilist, nametrend, namediff,fnameannualpreci)
nametrend = "TrCo"
namediff = "Diff"
trend2(allweekprecilist, nametrend, namediff, endyear, startyear, fnamepreci,fnameavgprecipitation)
trend2(allmonthprecilist, nametrend, namediff, endyear, startyear, fnamempreci,fnameavgmprecipitation)
print_geojson("", "", fnamepreci, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameannualpreci, 0, 0, 0, 4,endloop)
print_geojson("", "", fnamepreciaccum, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameavgprecipitation, 0, 0, 0, 4,endloop)
print_geojson("", "", fnamempreci, 0, 0, 0, 4,endloop)
print_geojson("", "", fnamempreciaccum, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameavgmprecipitation, 0, 0, 0, 4,endloop)
avgweekevapolist = avg2Dlist(allweekevapolist,startyear,endyear)
avgmonthevapolist = avg2Dlist(allmonthevapolist,startyear,endyear)
printlistasweekgeojson(avgweekevapolist,"EvW",fnameevapo,fnameavgevaporation, 0)
printlistasweekgeojson(avgmonthevapolist,"EvM",fnamemevapo,fnameavgmevaporation, 0)
avgweekacuevapolist = acumulatelist(avgweekevapolist)
avgmonthacuevapolist = acumulatelist(avgmonthevapolist)
printlistasweekgeojson(avgweekacuevapolist,"AEvW",fnameevapoaccum,fnameavgevaporation, endloopyear)
printlistasweekgeojson(avgmonthacuevapolist,"AEvM",fnamemevapoaccum,fnameavgmevaporation, endloopyear)
avgevapoyear=sum(yearevapolist)/len(yearevapolist)
print_geojson("AvgEv", avgevapoyear, fnameannualevapo, 0, 0, 0, 2, 0)
nametrend = "AnTrCo"
namediff = "Andiff"
trend(yearevapolist, nametrend, namediff,fnameannualevapo)
nametrend = "TrCo"
namediff = "Diff"
trend2(allweekevapolist, nametrend, namediff, endyear, startyear, fnameevapo,fnameavgevaporation)
trend2(allmonthevapolist, nametrend, namediff, endyear, startyear, fnamemevapo,fnameavgmevaporation)
print_geojson("", "", fnameevapo, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameannualevapo, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameevapoaccum, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameavgevaporation, 0, 0, 0, 4,endloop)
print_geojson("", "", fnamemevapo, 0, 0, 0, 4,endloop)
print_geojson("", "", fnamemevapoaccum, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameavgmevaporation, 0, 0, 0, 4,endloop)
avgweekrunlist = avg2Dlist(allweekrunlist,startyear,endyear)
avgmonthrunlist = avg2Dlist(allmonthrunlist,startyear,endyear)
printlistasweekgeojson(avgweekrunlist,"RW",fnamerun,fnameavgrunoff, 0)
printlistasweekgeojson(avgmonthrunlist,"RM",fnamemrun,fnameavgmrunoff, 0)
avgweekacurunlist = acumulatelist(avgweekrunlist)
avgmonthacurunlist = acumulatelist(avgmonthrunlist)
printlistasweekgeojson(avgweekacurunlist,"ARunW",fnamerunaccum,fnameavgrunoff, endloopyear)
printlistasweekgeojson(avgmonthacurunlist,"ARunM",fnamemrunaccum,fnameavgmrunoff, endloopyear)
avgrunyear=sum(yearrunlist)/len(yearrunlist)
print_geojson("AvgR", avgrunyear, fnameannualrun, 0, 0, 0, 2, 0)
nametrend = "AnTrCo"
namediff = "Andiff"
trend(yearrunlist, nametrend, namediff,fnameannualrun)
nametrend = "TrCo"
namediff = "Diff"
trend2(allweekrunlist, nametrend, namediff, endyear, startyear, fnamerun,fnameavgrunoff)
trend2(allmonthrunlist, nametrend, namediff, endyear, startyear, fnamemrun,fnameavgmrunoff)
print_geojson("", "", fnamerun, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameannualrun, 0, 0, 0, 4,endloop)
print_geojson("", "", fnamerunaccum, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameavgrunoff, 0, 0, 0, 4,endloop)
print_geojson("", "", fnamemrun, 0, 0, 0, 4,endloop)
print_geojson("", "", fnamemrunaccum, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameavgmrunoff, 0, 0, 0, 4,endloop)
avgweekdeficlist = avg2Dlist(allweekdeficlist,startyear,endyear)
avgmonthdeficlist = avg2Dlist(allmonthdeficlist,startyear,endyear)
printlistasweekgeojson(avgweekdeficlist,"DW",fnamedefic,fnameavgdeficits, 0)
printlistasweekgeojson(avgmonthdeficlist,"DM",fnamemdefic,fnameavgmdeficits, 0)
avgweekacudeficlist = acumulatelist(avgweekdeficlist)
avgmonthacudeficlist = acumulatelist(avgmonthdeficlist)
printlistasweekgeojson(avgweekacudeficlist,"ADefW",fnamedeficaccum,fnameavgdeficits, endloopyear)
printlistasweekgeojson(avgmonthacudeficlist,"ADefM",fnamemdeficaccum,fnameavgmdeficits, endloopyear)
avgdeficyear=sum(yeardeficlist)/len(yeardeficlist)
print_geojson("AvgD", avgdeficyear, fnameannualdefic, 0, 0, 0, 2, 0)
nametrend = "AnTrCo"
namediff = "Andiff"
trend(yeardeficlist, nametrend, namediff,fnameannualdefic)
nametrend = "TrCo"
namediff = "Diff"
trend2(allweekdeficlist, nametrend, namediff, endyear, startyear, fnamedefic,fnameavgdeficits)
trend2(allmonthdeficlist, nametrend, namediff, endyear, startyear, fnamemdefic,fnameavgmdeficits)
print_geojson("", "", fnamedefic, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameannualdefic, 0, 0, 0, 4,endloop)
print_geojson("", "", fnamedeficaccum, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameavgdeficits, 0, 0, 0, 4,endloop)
print_geojson("", "", fnamemdefic, 0, 0, 0, 4,endloop)
print_geojson("", "", fnamemdeficaccum, 0, 0, 0, 4,endloop)
print_geojson("", "", fnameavgmdeficits, 0, 0, 0, 4,endloop)
```
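`trend` and `trend2` above estimate the trend coefficient as the slope of a degree-1 `np.polyfit`. A quick check on synthetic data (values rising by exactly 2 per step) recovers that slope:

```python
import numpy as np

years = np.arange(6.0)           # 6 synthetic "years" of observations
values = 10.0 + 2.0 * years      # perfectly linear series, slope 2
slope, intercept = np.polyfit(years, values, 1)
print(round(slope, 6), round(intercept, 6))  # 2.0 10.0
```

Note that `trend` then reports `int(trendcoef*(listlong-1))` as the total change over the period; truncating with `int` can lose a unit to floating-point error, so rounding first would be slightly safer.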
### Find deficits: function for selected latitudes, longitudes
```
def precipitationplaces(startlat, startlon, endlat, endlon, startyear,endyear,exportfolder,datafolder,fnamepreci1,enddatem, startdatem,enddated, startdated, alllatlonfile, precipitationparam, fnameannualpreci1, unitcoeff,fnamepreciaccum1,fnameavgprecipitation1,fnamempreciaccum1, fnamempreci1,fnameavgmprecipitation1,fnameevapo1,fnameevapoaccum1,fnamemevapo1,fnamemevapoaccum1,fnameavgevaporation1,fnameavgmevaporation1,fnameannualevapo1,fnamerun1,fnamerunaccum1,fnamemrun1,fnamemrunaccum1,fnameavgrunoff1,fnameavgmrunoff1,fnameannualrun1,fnamedefic1,fnamedeficaccum1,fnamemdefic1,fnamemdeficaccum1,fnameavgdeficits1,fnameavgmdeficits1,fnameannualdefic1):
fnamepreci= exportfolder + "/" +fnamepreci1
fnamepreciaccum= exportfolder + "/" +fnamepreciaccum1
fnameannualpreci= exportfolder + "/" +fnameannualpreci1
fnameavgprecipitation= exportfolder + "/" +fnameavgprecipitation1
fnamempreci= exportfolder + "/" +fnamempreci1
fnamempreciaccum= exportfolder + "/" +fnamempreciaccum1
fnameavgmprecipitation= exportfolder + "/" +fnameavgmprecipitation1
fnameevapo= exportfolder + "/" +fnameevapo1
fnameevapoaccum= exportfolder + "/" +fnameevapoaccum1
fnameannualevapo= exportfolder + "/" +fnameannualevapo1
fnameavgevaporation= exportfolder + "/" +fnameavgevaporation1
fnamemevapo= exportfolder + "/" +fnamemevapo1
fnamemevapoaccum= exportfolder + "/" +fnamemevapoaccum1
fnameavgmevaporation= exportfolder + "/" +fnameavgmevaporation1
fnamerun= exportfolder + "/" +fnamerun1
fnamerunaccum= exportfolder + "/" +fnamerunaccum1
fnameannualrun= exportfolder + "/" +fnameannualrun1
fnameavgrunoff= exportfolder + "/" +fnameavgrunoff1
fnamemrun= exportfolder + "/" +fnamemrun1
fnamemrunaccum= exportfolder + "/" +fnamemrunaccum1
fnameavgmrunoff= exportfolder + "/" +fnameavgmrunoff1
fnamedefic= exportfolder + "/" +fnamedefic1
fnamedeficaccum= exportfolder + "/" +fnamedeficaccum1
fnameannualdefic= exportfolder + "/" +fnameannualdefic1
fnameavgdeficits= exportfolder + "/" +fnameavgdeficits1
fnamemdefic= exportfolder + "/" +fnamemdefic1
fnamemdeficaccum= exportfolder + "/" +fnamemdeficaccum1
fnameavgmdeficits= exportfolder + "/" +fnameavgmdeficits1
#start in geojson files:
print_geojson("", "", fnamepreci, 0, 0, 1, 0,0)
print_geojson("", "", fnamepreciaccum, 0, 0, 1, 0,0)
print_geojson("", "", fnameannualpreci, 0, 0, 1, 0,0)
print_geojson("", "", fnameavgprecipitation, 0, 0, 1, 0,0)
print_geojson("", "", fnamempreci, 0, 0, 1, 0,0)
print_geojson("", "", fnamempreciaccum, 0, 0, 1, 0,0)
print_geojson("", "", fnameavgmprecipitation, 0, 0, 1, 0,0)
print_geojson("", "", fnameevapo, 0, 0, 1, 0,0)
print_geojson("", "", fnameevapoaccum, 0, 0, 1, 0,0)
print_geojson("", "", fnameannualevapo, 0, 0, 1, 0,0)
print_geojson("", "", fnameavgevaporation, 0, 0, 1, 0,0)
print_geojson("", "", fnamemevapo, 0, 0, 1, 0,0)
print_geojson("", "", fnamemevapoaccum, 0, 0, 1, 0,0)
print_geojson("", "", fnameavgmevaporation, 0, 0, 1, 0,0)
print_geojson("", "", fnamerun, 0, 0, 1, 0,0)
print_geojson("", "", fnamerunaccum, 0, 0, 1, 0,0)
print_geojson("", "", fnameannualrun, 0, 0, 1, 0,0)
print_geojson("", "", fnameavgrunoff, 0, 0, 1, 0,0)
print_geojson("", "", fnamemrun, 0, 0, 1, 0,0)
print_geojson("", "", fnamemrunaccum, 0, 0, 1, 0,0)
print_geojson("", "", fnameavgmrunoff, 0, 0, 1, 0,0)
print_geojson("", "", fnamedefic, 0, 0, 1, 0,0)
print_geojson("", "", fnamedeficaccum, 0, 0, 1, 0,0)
print_geojson("", "", fnameannualdefic, 0, 0, 1, 0,0)
print_geojson("", "", fnameavgdeficits, 0, 0, 1, 0,0)
print_geojson("", "", fnamemdefic, 0, 0, 1, 0,0)
print_geojson("", "", fnamemdeficaccum, 0, 0, 1, 0,0)
print_geojson("", "", fnameavgmdeficits, 0, 0, 1, 0,0)
endloop=0
if alllatlonfile==1: # if it is calculated for all latitudes and longitudes in input file
source = datafolder + '/' + str(startyear) + '.nc'
im=Image(netCDF4.Dataset(source,'r'))
arraylon = im.get_data().variables['lon'][0::]
arraylat = im.get_data().variables['lat'][0::]
startlat=0
startlon=0
endlon= len(arraylon)-1
endlat= len(arraylat)-1
for latorder in range(startlat, endlat+1, 1):
for lonorder in range(startlon, endlon+1, 1):
if latorder==endlat and lonorder==endlon:
endloop=1
precipitationyearly(latorder,lonorder,startyear,endyear,endloop,datafolder,fnamepreci,enddatem, startdatem,enddated, startdated,precipitationparam, fnameannualpreci, unitcoeff,fnamepreciaccum,fnameavgprecipitation,fnamempreciaccum, fnamempreci,fnameavgmprecipitation,fnameevapo,fnameevapoaccum,fnamemevapo,fnamemevapoaccum,fnameavgevaporation,fnameavgmevaporation,fnameannualevapo,fnamerun,fnamerunaccum,fnamemrun,fnamemrunaccum,fnameavgrunoff,fnameavgmrunoff,fnameannualrun,fnamedefic,fnamedeficaccum,fnamemdefic,fnamemdeficaccum,fnameavgdeficits,fnameavgmdeficits,fnameannualdefic)
```
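`precipitationplaces` walks the full latitude/longitude grid and raises `endloop` only for the very last cell, so the closing GeoJSON footer is written exactly once. The flag logic in isolation, on a made-up 2×3 grid:

```python
# Flag only the final (lat, lon) cell of a small hypothetical grid.
nlat, nlon = 2, 3
flags = []
for latorder in range(nlat):
    for lonorder in range(nlon):
        endloop = 1 if (latorder == nlat - 1 and lonorder == nlon - 1) else 0
        flags.append(endloop)
print(sum(flags), len(flags))  # 1 6
```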
## <font color=red>Find deficits: input parameters and launch</font>
```
#Time definition:
startyear=2010 #start year (integer)
endyear=2019 #end year (integer)
enddatem = 12 # end date (month) each year
enddated = 31 # end date (day) each year
startdatem = 1 # start date (month) each year
startdated = 1 # start date (day) each year
#Optimization:
starthourday=0 # integer 0-23
endhourday=23 # integer 0-23
#Balance unit:
units = 3 # 1 = m (default), 2 = cm, 3 = mm
#Files/Folders name:
datafolder = "data" #folder with data files (named by year) for each year #string
exportfolder = "export-preci_evapo_runoff" #export folder for all created files (must already exist; to place each file in its own subfolder, prepend the subfolder name to the file name) #string
fnamepreci ="weekly_precipitation" #name of created files with weekly precipitation #string
fnamepreciaccum ="weekly_accum_precipitation" #name of created files with weekly accumulated precipitation #string
fnamempreci ="monthly_precipitation" #name of created files with monthly precipitation #string
fnamempreciaccum ="monthly_accum_precipitation" #name of created files with monthly accumulated precipitation #string
fnameavgprecipitation ="weekly_avg_precipitation" #name of created files with weekly average precipitation #string
fnameavgmprecipitation ="monthly_avg_precipitation" #name of created files with monthly average precipitation #string
fnameannualpreci ="annualsum_precipitation" #name of created files with annual/seasonal/defined-period precipitation #string
fnameevapo ="weekly_evaporation" #name of created files with weekly evaporation #string
fnameevapoaccum ="weekly_accum_evaporation" #name of created files with weekly accumulated evaporation #string
fnamemevapo ="monthly_evaporation" #name of created files with monthly evaporation #string
fnamemevapoaccum ="monthly_accum_evaporation" #name of created files with monthly accumulated evaporation #string
fnameavgevaporation ="weekly_avg_evaporation" #name of created files with weekly average evaporation #string
fnameavgmevaporation ="monthly_avg_evaporation" #name of created files with monthly average evaporation #string
fnameannualevapo ="annualsum_evaporation" #name of created files with annual/seasonal/defined-period evaporation #string
fnamerun ="weekly_runoff" #name of created files with weekly runoff #string
fnamerunaccum ="weekly_accum_runoff" #name of created files with weekly accumulated runoff #string
fnamemrun ="monthly_runoff" #name of created files with monthly runoff #string
fnamemrunaccum ="monthly_accum_runoff" #name of created files with monthly accumulated runoff #string
fnameavgrunoff ="weekly_avg_runoff" #name of created files with weekly average runoff #string
fnameavgmrunoff ="monthly_avg_runoff" #name of created files with monthly average runoff #string
fnameannualrun ="annualsum_runoff" #name of created files with annual/seasonal/defined-period runoff #string
fnamedefic ="weekly_deficits" #name of created files with weekly deficits #string
fnamedeficaccum ="weekly_accum_deficits" #name of created files with weekly accumulated deficits #string
fnamemdefic ="monthly_deficits" #name of created files with monthly deficits #string
fnamemdeficaccum ="monthly_accum_deficits" #name of created files with monthly accumulated deficits #string
fnameavgdeficits ="weekly_avg_deficits" #name of created files with weekly average deficits #string
fnameavgmdeficits ="monthly_avg_deficits" #name of created files with monthly average deficits #string
fnameannualdefic ="annualsum_deficits" #name of created files with annual/seasonal/defined-period deficits #string
#Area definition:
alllatlonfile=0 #calculate all latitudes and longitudes in input file (1=yes, 0=no)
# if alllatlonfile==0, set the index ranges manually:
startlat=4 # start number of list of latitudes from used netCDF4 file
startlon=4 # start number of list of longitudes from used netCDF4 file
endlat=17 # end number of list of latitudes from used netCDF4 file
endlon=19 # end number of list of longitudes from used netCDF4 file
# data parameter:
precipitationparam = 'tprate'
evaporationparam = 'eow_lwe'
runoffparam = 'ro_NON_CDM'
unitcoeff = 1000
if units == 2:
unitcoeff = 10000
elif units == 3:
unitcoeff = 1000000
precipitationplaces(startlat, startlon, endlat, endlon, startyear,endyear,exportfolder,datafolder,fnamepreci,enddatem, startdatem,enddated, startdated,alllatlonfile,precipitationparam, fnameannualpreci, unitcoeff, fnamepreciaccum,fnameavgprecipitation,fnamempreciaccum, fnamempreci,fnameavgmprecipitation,fnameevapo,fnameevapoaccum,fnamemevapo,fnamemevapoaccum,fnameavgevaporation,fnameavgmevaporation,fnameannualevapo,fnamerun,fnamerunaccum,fnamemrun,fnamemrunaccum,fnameavgrunoff,fnameavgmrunoff,fnameannualrun,fnamedefic,fnamedeficaccum,fnamemdefic,fnamemdeficaccum,fnameavgdeficits,fnameavgmdeficits,fnameannualdefic)
```
## From geojson to shp
```
args = ['ogr2ogr', '-f', 'ESRI Shapefile', 'export-preci_evapo_runoff/shp/annualsum_evapo_10.shp', 'export-preci_evapo_runoff/annualsum_evaporation.geojson']
subprocess.Popen(args)
args = ['ogr2ogr', '-f', 'ESRI Shapefile', 'export/shp/weekly_accum_precipitation.shp', 'export/weekly_accum_precipitation.geojson']
subprocess.Popen(args)
args = ['ogr2ogr', '-f', 'ESRI Shapefile', 'export/shp/annualsum_precipitation.shp', 'export/annualsum_precipitation.geojson']
subprocess.Popen(args)
args = ['ogr2ogr', '-f', 'ESRI Shapefile', 'export/shp/weekly_avg_precipitation.shp', 'export/weekly_avg_precipitation.geojson']
subprocess.Popen(args)
args = ['ogr2ogr', '-f', 'ESRI Shapefile', 'export/shp/monthly_precipitation.shp', 'export/monthly_precipitation.geojson']
subprocess.Popen(args)
args = ['ogr2ogr', '-f', 'ESRI Shapefile', 'export/shp/monthly_accum_precipitation.shp', 'export/monthly_accum_precipitation.geojson']
subprocess.Popen(args)
args = ['ogr2ogr', '-f', 'ESRI Shapefile', 'export/shp/monthly_avg_precipitation.shp', 'export/monthly_avg_precipitation.geojson']
subprocess.Popen(args)
```
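The repeated calls above can be collapsed into a loop. A sketch (the `ogr2ogr_cmd` helper is our own, and `subprocess.run` is suggested instead of `Popen` so each conversion finishes before the next starts; adjust the folder names to your layout):

```python
import subprocess

def ogr2ogr_cmd(folder, name):
    # Build the ogr2ogr argument list for one GeoJSON -> Shapefile conversion
    return ['ogr2ogr', '-f', 'ESRI Shapefile',
            f'{folder}/shp/{name}.shp', f'{folder}/{name}.geojson']

names = ['weekly_accum_precipitation', 'annualsum_precipitation',
         'weekly_avg_precipitation', 'monthly_precipitation',
         'monthly_accum_precipitation', 'monthly_avg_precipitation']
for name in names:
    cmd = ogr2ogr_cmd('export', name)
    # subprocess.run(cmd, check=True)  # uncomment to execute; requires GDAL's ogr2ogr
```

Unlike `subprocess.Popen`, `subprocess.run(cmd, check=True)` waits for each conversion and raises if ogr2ogr exits with an error, so failed conversions are not silently ignored.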
# Implementation of a Devito self adjoint variable density visco-acoustic isotropic modeling operator <br>-- Nonlinear Ops --
## This operator is contributed by Chevron Energy Technology Company (2020)
This operator is based on simplifications of the systems presented in:
<br>**Self-adjoint, energy-conserving second-order pseudoacoustic systems for VTI and TTI media for reverse time migration and full-waveform inversion** (2016)
<br>Kenneth Bube, John Washbourne, Raymond Ergas, and Tamas Nemeth
<br>SEG Technical Program Expanded Abstracts
<br>https://library.seg.org/doi/10.1190/segam2016-13878451.1
## Introduction
The goal of this tutorial set is to generate and prove correctness of modeling and inversion capability in Devito for variable density visco-acoustics using an energy conserving form of the wave equation. We describe how the linearization of the energy conserving *self adjoint* system with respect to modeling parameters allows using the same modeling system for all nonlinear and linearized forward and adjoint finite difference evolutions. There are three notebooks in this series:
##### 1. Implementation of a Devito self adjoint variable density visco-acoustic isotropic modeling operator -- Nonlinear Ops
- Implement the nonlinear modeling operations.
- [sa_01_iso_implementation1.ipynb](sa_01_iso_implementation1.ipynb)
##### 2. Implementation of a Devito self adjoint variable density visco-acoustic isotropic modeling operator -- Linearized Ops
- Implement the linearized (Jacobian) ```forward``` and ```adjoint``` modeling operations.
- [sa_02_iso_implementation2.ipynb](sa_02_iso_implementation2.ipynb)
##### 3. Implementation of a Devito self adjoint variable density visco-acoustic isotropic modeling operator -- Correctness Testing
- Tests the correctness of the implemented operators.
- [sa_03_iso_correctness.ipynb](sa_03_iso_correctness.ipynb)
There are similar series of notebooks implementing and testing operators for VTI and TTI anisotropy ([README.md](README.md)).
Below we introduce the *self adjoint* form of the scalar isotropic variable density visco-acoustic wave equation with a simple form of dissipation-only Q attenuation. This dissipation-only (no dispersion) attenuation term $\left (\frac{\displaystyle \omega}{Q}\ \partial_t\ u \right)$ is an approximation of a [Maxwell Body](https://en.wikipedia.org/wiki/Maxwell_material) -- that is to say viscoelasticity approximated with a spring and dashpot in series. In practice this approach for attenuating outgoing waves is very similar to the Cerjan-style damping in absorbing boundaries used elsewhere in Devito ([References](#nl_refs)).
The derivation of the attenuation model is not in scope for this tutorial, but one important point is that the physics in the absorbing boundary region and the interior of the model are *unified*, allowing the same modeling equations to be used everywhere, with physical Q values in the interior tapering to non-physical small Q at the boundaries to attenuate outgoing waves.
## Outline
1. Define symbols
2. Introduce the SA wave equation
3. Show generation of skew symmetric derivatives and prove correctness with unit test
4. Derive the time update equation used to implement the nonlinear forward modeling operator
5. Create the Devito grid and model fields
6. Define a function to implement the attenuation profile ($\omega\ /\ Q$)
7. Create the Devito operator
8. Run the Devito operator
9. Plot the resulting wavefields
10. References
## Table of symbols
| Symbol | Description | Dimensionality |
| :--- | :--- | :--- |
| $\omega_c = 2 \pi f_c$ | center angular frequency | constant |
| $m(x,y,z)$ | P wave velocity | function of space |
| $b(x,y,z)$ | buoyancy $(1 / \rho)$ | function of space |
| $Q(x,y,z)$ | Attenuation at frequency $\omega_c$ | function of space |
| $u(t,x,y,z)$ | Pressure wavefield | function of time and space |
| $q(t,x,y,z)$ | Source term | function of time, localized in space |
| $\overleftarrow{\partial_t}$ | shifted first derivative wrt $t$ | shifted 1/2 sample backward in time |
| $\partial_{tt}$ | centered second derivative wrt $t$ | centered in time |
| $\overrightarrow{\partial_x},\ \overrightarrow{\partial_y},\ \overrightarrow{\partial_z}$ | + shifted first derivative wrt $x,y,z$ | shifted 1/2 sample forward in space |
| $\overleftarrow{\partial_x},\ \overleftarrow{\partial_y},\ \overleftarrow{\partial_z}$ | - shifted first derivative wrt $x,y,z$ | shifted 1/2 sample backward in space |
| $\Delta_t, \Delta_x, \Delta_y, \Delta_z$ | sampling rates for $t, x, y , z$ | $t, x, y , z$ |
## A word about notation
We use the arrow symbols over derivatives $\overrightarrow{\partial_x}$ as a shorthand notation to indicate that the derivative is taken at a shifted location. For example:
- $\overrightarrow{\partial_x}\ u(t,x,y,z)$ indicates that the $x$ derivative of $u(t,x,y,z)$ is taken at $u(t,x+\frac{\Delta x}{2},y,z)$.
- $\overleftarrow{\partial_z}\ u(t,x,y,z)$ indicates that the $z$ derivative of $u(t,x,y,z)$ is taken at $u(t,x,y,z-\frac{\Delta z}{2})$.
- $\overleftarrow{\partial_t}\ u(t,x,y,z)$ indicates that the $t$ derivative of $u(t,x,y,z)$ is taken at $u(t-\frac{\Delta_t}{2},x,y,z)$.
We usually drop the $(t,x,y,z)$ notation from wavefield variables unless required for clarity of exposition, so that $u(t,x,y,z)$ becomes $u$.
## Self adjoint variable density visco-acoustic wave equation
Our self adjoint wave equation is written:
$$
\frac{b}{m^2} \left( \frac{\omega_c}{Q} \overleftarrow{\partial_t}\ u + \partial_{tt}\ u \right) =
\overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ u \right) +
\overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ u \right) +
\overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ u \right) + q
$$
An advantage of this form is that the same system can be used to provide *stable modes of propagation* for all operations needed in quasi-Newton optimization:
- the nonlinear forward
- the linearized forward (Jacobian forward)
- the linearized adjoint (Jacobian adjoint)
This advantage is more important for anisotropic operators, where widely utilized non energy conserving formulations can provide unstable adjoints and thus unstable gradients for anisotropy parameters.
The *self adjoint* formulation is evident in the shifted spatial derivatives, with the derivative on the right side $\overrightarrow{\partial}$ shifting forward in space one-half cell, and the derivative on the left side $\overleftarrow{\partial}$ shifting backward in space one-half cell.
$\overrightarrow{\partial}$ and $\overleftarrow{\partial}$ are anti-symmetric (also known as skew symmetric), meaning that for two random vectors $x_1$ and $x_2$, correctly implemented numerical derivatives will have the following property:
$$
x_2 \cdot \left( \overrightarrow{\partial_x}\ x_1 \right) \approx -\
x_1 \cdot \left( \overleftarrow{\partial_x}\ x_2 \right)
$$
Below we will demonstrate this skew symmetry with a simple unit test on Devito generated derivatives.
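The same identity is easy to check in plain NumPy with the simplest (second-order, periodic) staggered derivatives, independent of Devito; all names here are local to this sketch:

```python
import numpy as np

# Forward and backward half-cell-shifted first derivatives on a periodic grid
n, dx = 128, 1.0
rng = np.random.default_rng(0)
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)

def d_forward(u):   # derivative at i+1/2: (u[i+1] - u[i]) / dx
    return (np.roll(u, -1) - u) / dx

def d_backward(u):  # derivative at i-1/2: (u[i] - u[i-1]) / dx
    return (u - np.roll(u, 1)) / dx

lhs = np.dot(x2, d_forward(x1))
rhs = -np.dot(x1, d_backward(x2))
assert np.isclose(lhs, rhs)  # x2 . (forward d x1) = - x1 . (backward d x2)
```

With periodic boundaries the equality is exact up to floating-point rounding, which is why the dot product test is such a sharp check on an implementation.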
In the following notebooks in this series, material parameters *sandwiched* between the derivatives -- including anisotropy parameters -- become much more interesting, but here buoyancy $b$ is the only material parameter between derivatives in our self adjoint (SA) wave equation.
## Imports
We have grouped all imports used in this notebook here for consistency.
```
import numpy as np
from examples.seismic import RickerSource, Receiver, TimeAxis
from devito import (Grid, Function, TimeFunction, SpaceDimension, Constant,
Eq, Operator, solve, configuration, norm)
from devito.finite_differences import Derivative
from devito.builtins import gaussian_smooth
from examples.seismic.self_adjoint import setup_w_over_q
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm
from timeit import default_timer as timer
# These lines force images to be displayed in the notebook, and scale up fonts
%matplotlib inline
mpl.rc('font', size=14)
# Make white background for plots, not transparent
plt.rcParams['figure.facecolor'] = 'white'
# Set logging to debug, captures statistics on the performance of operators
configuration['log-level'] = 'DEBUG'
```
## Unit test demonstrating skew symmetry for shifted derivatives
As noted above, we prove with a small 1D unit test and 8th order spatial operator that the Devito shifted first derivatives are skew symmetric. This anti-symmetry can be demonstrated for the forward and backward half cell shift first derivative operators $\overrightarrow{\partial}$ and $\overleftarrow{\partial}$ with two random vectors $x_1$ and $x_2$ by verifying the *dot product test* as written above.
We will use Devito to implement the following two equations with an ```Operator```:
$$
\begin{aligned}
f_2 = \overrightarrow{\partial_x}\ f_1 \\[5pt]
g_2 = \overleftarrow{\partial_x}\ g_1
\end{aligned}
$$
And then verify the dot products are equivalent. Recall that skew symmetry introduces a minus sign in this equality:
$$
f_1 \cdot g_2 \approx - g_1 \cdot f_2
$$
We use the following test for relative error (note the flipped signs in numerator and denominator due to anti-symmetry):
$$
\frac{\displaystyle f_1 \cdot g_2 + g_1 \cdot f_2}
{\displaystyle f_1 \cdot g_2 - g_1 \cdot f_2}\ <\ \epsilon
$$
```
# NBVAL_IGNORE_OUTPUT
# Make 1D grid to test derivatives
n = 101
d = 1.0
shape = (n, )
spacing = (1 / (n-1), )
origin = (0., )
extent = (d * (n-1), )
dtype = np.float64
# Initialize Devito grid and Functions for input(f1,g1) and output(f2,g2)
# Note that space_order=8 allows us to use an 8th order finite difference
# operator by properly setting up grid accesses with halo cells
grid1d = Grid(shape=shape, extent=extent, origin=origin, dtype=dtype)
x = grid1d.dimensions[0]
f1 = Function(name='f1', grid=grid1d, space_order=8)
f2 = Function(name='f2', grid=grid1d, space_order=8)
g1 = Function(name='g1', grid=grid1d, space_order=8)
g2 = Function(name='g2', grid=grid1d, space_order=8)
# Fill f1 and g1 with random values in [-1,+1]
f1.data[:] = -1 + 2 * np.random.rand(n,)
g1.data[:] = -1 + 2 * np.random.rand(n,)
# Equation defining: [f2 = forward 1/2 cell shift derivative applied to f1]
equation_f2 = Eq(f2, f1.dx(x0=x+0.5*x.spacing))
# Equation defining: [g2 = backward 1/2 cell shift derivative applied to g1]
equation_g2 = Eq(g2, g1.dx(x0=x-0.5*x.spacing))
# Define an Operator to implement these equations and execute
op = Operator([equation_f2, equation_g2])
op()
# Compute the dot products and the relative error
f1g2 = np.dot(f1.data, g2.data)
g1f2 = np.dot(g1.data, f2.data)
diff = np.abs((f1g2 + g1f2) / (f1g2 - g1f2))
tol = 100 * np.finfo(dtype).eps
print("f1g2, g1f2, diff, tol; %+.6e %+.6e %+.6e %+.6e" % (f1g2, g1f2, diff, tol))
# At last the unit test
# Assert these dot products are float epsilon close in relative error
assert diff < tol
```
## Show the finite difference operators and generated code
You can inspect the finite difference coefficients and locations for evaluation with the code shown below.
For your reference, the finite difference coefficients seen in the first two stanzas below are exactly the coefficients generated in Table 2 of Fornberg's paper **Generation of Finite Difference Formulas on Arbitrarily Spaced Grids** linked below ([References](#nl_refs)).
Note that you don't need to inspect the generated code, but this does provide the option to use this highly optimized code in applications that do not need or require python. If you inspect the code you will notice hallmarks of highly optimized c code, including ```pragmas``` for vectorization, and ```decorations``` for pointer restriction and alignment.
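Fornberg's recurrence itself is compact enough to sketch in plain NumPy if you want to cross-check the coefficients independently (this `fornberg_weights` helper is our own illustration of the published algorithm, not a Devito API):

```python
import numpy as np

def fornberg_weights(z, x, m):
    """Weights for derivatives 0..m at point z from samples at points x
    (Fornberg's algorithm for arbitrarily spaced grids)."""
    n = len(x)
    c = np.zeros((n, m + 1))
    c1, c4 = 1.0, x[0] - z
    c[0, 0] = 1.0
    for i in range(1, n):
        mn = min(i, m)
        c2, c5, c4 = 1.0, c4, x[i] - z
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:
                for k in range(mn, 0, -1):
                    c[i, k] = c1 * (k * c[i - 1, k - 1] - c5 * c[i - 1, k]) / c2
                c[i, 0] = -c1 * c5 * c[i - 1, 0] / c2
            for k in range(mn, 0, -1):
                c[j, k] = (c4 * c[j, k] - k * c[j, k - 1]) / c3
            c[j, 0] = c4 * c[j, 0] / c3
        c1 = c2
    return c

# 8th-order first derivative evaluated half a cell forward (x0 = x + 1/2)
pts = np.arange(-3.0, 5.0)  # the 8 integer grid points used by the stencil
w = fornberg_weights(0.5, pts, 1)[:, 1]
print(w)
```

An 8-point stencil for the first derivative is exact for polynomials up to degree 7, and the half-cell-shifted weights are antisymmetric about the evaluation point; both properties make handy sanity checks on the printed values.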
```
# NBVAL_IGNORE_OUTPUT
# Show the FD coefficients generated by Devito
# for the forward 1/2 cell shifted first derivative operator
print("\n\nForward +1/2 cell shift;")
print("..................................")
print(f1.dx(x0=x+0.5*x.spacing).evaluate)
# Show the FD coefficients generated by Devito
# for the backward 1/2 cell shifted first derivative operator
print("\n\nBackward -1/2 cell shift;")
print("..................................")
print(f1.dx(x0=x-0.5*x.spacing).evaluate)
# Show code generated by Devito for applying the derivatives
print("\n\nGenerated c code;")
print("..................................")
print(op.ccode)
```
## Doing some algebra to solve for the time update
The next step in implementing our Devito modeling operator is to define the equation used to update the pressure wavefield as a function of time. What follows is a bit of algebra using the wave equation and finite difference approximations to time derivatives to express the pressure wavefield forward in time $u(t+\Delta_t)$ as a function of the current $u(t)$ and previous $u(t-\Delta_t)$ pressure wavefields.
#### 1. Numerical approximation for $\partial_{tt}\ u$, solved for $u(t+\Delta_t)$
The second order accurate centered approximation to the second time derivative involves three wavefields: $u(t-\Delta_t)$, $u(t)$, and $u(t+\Delta_t)$. In order to advance our finite difference solution in time, we solve for $u(t+\Delta_t)$.
$$
\begin{aligned}
\partial_{tt}\ u &= \frac{\displaystyle u(t+\Delta_t)
- 2\ u(t) + u(t-\Delta_t)}{\displaystyle \Delta_t^2} \\[5pt]
u(t+\Delta_t)\ &= \Delta_t^2\ \partial_{tt}\ u + 2\ u(t) - u(t-\Delta_t)
\end{aligned}
$$
#### 2. Numerical approximation for $\overleftarrow{\partial_{t}}\ u$
The argument for using a backward approximation is a bit hand-wavy, but goes like this: a centered or forward approximation for $\partial_{t}\ u$ would involve the term $u(t+\Delta_t)$, and hence $u(t+\Delta_t)$ would appear at two places in our time update equation below, essentially making the form implicit (although it would be easy to solve for $u(t+\Delta_t)$).
We are interested in explicit time stepping and the correct behavior of the attenuation term, and so prefer the backward approximation for $\overleftarrow{\partial_{t}}\ u$. Our experience is that the use of the backward difference is more stable than forward or centered.
The first order accurate backward approximation to the first time derivative involves two wavefields: $u(t-\Delta_t)$, and $u(t)$. We can use this expression as is.
$$
\overleftarrow{\partial_{t}}\ u = \frac{\displaystyle u(t) - u(t-\Delta_t)}{\displaystyle \Delta_t}
$$
#### 3. Solve the wave equation for $\partial_{tt}$
$$
\begin{aligned}
\frac{b}{m^2} \left( \frac{\omega_c}{Q} \overleftarrow{\partial_{t}}\ u +
\partial_{tt}\ u \right) &=
\overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ u \right) +
\overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ u \right) +
\overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ u \right) + q\\[10pt]
\partial_{tt}\ u &=
\frac{m^2}{b} \left[
\overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ u \right) +
\overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ u \right) +
\overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ u \right) + q
\right]
- \frac{\omega_c}{Q} \overleftarrow{\partial_{t}}\ u
\end{aligned}
$$
#### 4. Plug in $\overleftarrow{\partial_t} u$ and $\partial_{tt} u$ into the time update equation
Next we plug in the right hand sides for $\partial_{tt}\ u$ and $\overleftarrow{\partial_{t}}\ u$ into the time update expression for $u(t+\Delta_t)$ from step 1.
$$
\begin{aligned}
u(t+\Delta_t) &= \Delta_t^2
\frac{m^2}{b} \left[
\overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ u \right) +
\overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ u \right) +
\overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ u \right) + q
\right] \\[10pt]
& \quad -\ \Delta_t^2 \frac{\omega_c}{Q}
\left( \frac{\displaystyle u(t) - u(t-\Delta_t)}
{\displaystyle \Delta_t} \right) + 2\ u(t) - u(t-\Delta_t)
\end{aligned}
$$
#### 5. Simplify ...
$$
\begin{aligned}
u(t+\Delta_t) &= \Delta_t^2
\frac{m^2}{b} \left[
\overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ u \right) +
\overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ u \right) +
\overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ u \right) + q
\right] \\[10pt]
& \quad +\left(2 -\ \Delta_t\ \frac{\omega_c}{Q} \right) u(t)
+ \left(\Delta_t\ \frac{\omega_c}{Q} - 1 \right) u(t-\Delta_t)
\end{aligned}
$$
#### 6. et voila ...
The last equation is how we update the pressure wavefield at each time step, and depends on $u(t)$ and $u(t-\Delta_t)$.
The main work of the finite difference explicit time stepping is evaluating the nested spatial derivative operators on the RHS of this equation. A particular advantage of Devito's symbolic optimization is that it can generate and simplify the complicated expressions that result from substituting the discrete forms of high order numerical finite difference approximations for these nested spatial derivatives.
We have now completed the maths required to implement the modeling operator. The remainder of this notebook deals with setting up and using the required Devito objects.
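To make the update concrete before moving to Devito, here is a deliberately simplified plain-NumPy sketch of the same time loop in 1D, using second-order staggered stencils instead of the 8th-order operators Devito generates. All names and parameter values here are illustrative, not taken from the notebook:

```python
import numpy as np

# 1D toy of the derived update; 2nd-order staggered stencils for brevity
nx, dx, dt = 201, 10.0, 0.002          # grid points, spacing (m), step (s)
v = np.full(nx, 1500.0)                # velocity m (m/s)
b = np.ones(nx)                        # buoyancy 1/rho
wOverQ = np.full(nx, 2 * np.pi * 10.0 / 100.0)  # omega_c / Q for Q = 100
u_prev, u_now = np.zeros(nx), np.zeros(nx)
f0, t0, isrc = 10.0, 0.1, nx // 2      # Ricker frequency (Hz), delay, source index

for it in range(300):
    t = it * dt
    # backward-shift derivative of (b * forward-shift derivative of u)
    du = (u_now[1:] - u_now[:-1]) / dx          # at half cells i+1/2
    bh = 0.5 * (b[1:] + b[:-1])                 # b averaged to half cells
    div = np.zeros(nx)
    div[1:-1] = (bh[1:] * du[1:] - bh[:-1] * du[:-1]) / dx
    q = np.zeros(nx)
    a = (np.pi * f0 * (t - t0)) ** 2
    q[isrc] = (1 - 2 * a) * np.exp(-a)          # Ricker wavelet source
    # the time update equation derived above
    u_next = (dt**2 * v**2 / b * (div + q)
              + (2 - dt * wOverQ) * u_now
              + (dt * wOverQ - 1) * u_prev)
    u_prev, u_now = u_now, u_next

assert np.all(np.isfinite(u_now)) and np.abs(u_now).max() > 0
```

Note the two attenuation-modified coefficients $(2 - \Delta_t\,\omega_c/Q)$ and $(\Delta_t\,\omega_c/Q - 1)$ on the two stored wavefields: setting $\omega_c/Q = 0$ recovers the familiar lossless leapfrog update.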
## Instantiate the Devito grid for a two dimensional problem
Define the dimensions and coordinates for the model. The computational domain of the model is surrounded by an *absorbing boundary region* where we implement boundary conditions to eliminate outgoing waves. We define the sizes for the interior of the model ```nx``` and ```nz```, the width of the absorbing boundary region ```npad```, and the sizes for the entire model padded with absorbing boundaries become ```nxpad = nx + 2*npad``` and ```nzpad = nz + 2*npad```.
```
# Define dimensions for the interior of the model
nx,nz = 751,751
dx,dz = 10.0,10.0 # Grid spacing in m
shape = (nx, nz) # Number of grid points
spacing = (dx, dz) # Domain size is now 5 km by 5 km
origin = (0., 0.) # Origin of coordinate system, specified in m.
extent = tuple([s*(n-1) for s, n in zip(spacing, shape)])
# Define dimensions for the model padded with absorbing boundaries
npad = 50 # number of points in absorbing boundary region (all sides)
nxpad,nzpad = nx+2*npad, nz+2*npad
shape_pad = np.array(shape) + 2 * npad
origin_pad = tuple([o - s*npad for o, s in zip(origin, spacing)])
extent_pad = tuple([s*(n-1) for s, n in zip(spacing, shape_pad)])
# Define the dimensions
# Note if you do not specify dimensions, you get in order x,y,z
x = SpaceDimension(name='x', spacing=Constant(name='h_x',
value=extent_pad[0]/(shape_pad[0]-1)))
z = SpaceDimension(name='z', spacing=Constant(name='h_z',
value=extent_pad[1]/(shape_pad[1]-1)))
# Initialize the Devito grid
grid = Grid(extent=extent_pad, shape=shape_pad, origin=origin_pad,
dimensions=(x, z), dtype=dtype)
print("shape; ", shape)
print("origin; ", origin)
print("spacing; ", spacing)
print("extent; ", extent)
print("")
print("shape_pad; ", shape_pad)
print("origin_pad; ", origin_pad)
print("extent_pad; ", extent_pad)
print("")
print("grid.shape; ", grid.shape)
print("grid.extent; ", grid.extent)
print("grid.spacing_map;", grid.spacing_map)
```
## Define velocity and buoyancy model parameters
We have the following constants and fields from our self adjoint wave equation that we define as time invariant using ```Functions```:
| Symbol | Description |
| :---: | :--- |
| $$m(x,z)$$ | Acoustic velocity |
| $$b(x,z)=\frac{1}{\rho(x,z)}$$ | Buoyancy (reciprocal density) |
```
# Create the velocity and buoyancy fields.
# - We use a wholespace velocity of 1500 m/s
# - We use a wholespace density of 1 g/cm^3
# - These are scalar fields so we use Function to define them
# - We specify space_order to establish the appropriate size halo on the edges
space_order = 8
# Wholespace velocity
m = Function(name='m', grid=grid, space_order=space_order)
m.data[:] = 1.5
# Constant density
b = Function(name='b', grid=grid, space_order=space_order)
b.data[:,:] = 1.0 / 1.0
```
## Define the simulation time range
In this notebook we run 2 seconds of simulation using the sample rate related to the CFL condition as implemented in ```examples/seismic/self_adjoint/utils.py```.
**Important note:** smaller Q values in highly viscous media may require smaller temporal sampling rates than a non-viscous medium to achieve dispersion-free propagation. This is a cost of the visco-acoustic modification we use here.
We also use the convenience ```TimeRange``` as defined in ```examples/seismic/source.py```.
```
def compute_critical_dt(v):
"""
Determine the temporal sampling to satisfy CFL stability.
This method replicates the functionality in the Model class.
Note we add a safety factor, reducing dt by a factor 0.75 due to the
w/Q attenuation term.
Parameters
----------
v : Function
velocity
"""
coeff = 0.38 if len(v.grid.shape) == 3 else 0.42
dt = 0.75 * v.dtype(coeff * np.min(v.grid.spacing) / (np.max(v.data)))
return v.dtype("%.5e" % dt)
t0 = dtype(0.) # Simulation time start
tn = dtype(2000.) # Simulation time end (2000 msec = 2 sec)
dt = compute_critical_dt(m)
time_range = TimeAxis(start=t0, stop=tn, step=dt)
print("Time min, max, dt, num; %10.6f %10.6f %10.6f %d" % (t0, tn, dt, int(tn//dt) + 1))
print("time_range; ", time_range)
```
## Define the acquisition geometry: locations of sources and receivers
**source**:
- X coordinate: center of the model: dx*(nx//2)
- Z coordinate: center of the model: dz*(nz//2)
- We use a 10 Hz center frequency [RickerSource](https://github.com/devitocodes/devito/blob/master/examples/seismic/source.py#L280) wavelet as defined in ```examples/seismic/source.py```
**receivers**:
- X coordinate: center of the model: dx*(nx//2)
- Z coordinate: vertical line from top to bottom of model
- We use a vertical line of [Receivers](https://github.com/devitocodes/devito/blob/master/examples/seismic/source.py#L80) as defined with a ```PointSource``` in ```examples/seismic/source.py```
```
# Source in the center of the model at 10 Hz center frequency
fpeak = 0.010
src = RickerSource(name='src', grid=grid, f0=fpeak, npoint=1, time_range=time_range)
src.coordinates.data[0,0] = dx * (nx//2)
src.coordinates.data[0,1] = dz * (nz//2)
# vertical line of receivers at the horizontal center of the model
rec = Receiver(name='rec', grid=grid, npoint=nz, time_range=time_range)
rec.coordinates.data[:,0] = dx * (nx//2)
rec.coordinates.data[:,1] = np.linspace(0.0, dz*(nz-1), nz)
print("src_coordinate X; %+12.4f" % (src.coordinates.data[0,0]))
print("src_coordinate Z; %+12.4f" % (src.coordinates.data[0,1]))
print("rec_coordinates X min/max; %+12.4f %+12.4f" % \
(np.min(rec.coordinates.data[:,0]), np.max(rec.coordinates.data[:,0])))
print("rec_coordinates Z min/max; %+12.4f %+12.4f" % \
(np.min(rec.coordinates.data[:,1]), np.max(rec.coordinates.data[:,1])))
# We can plot the time signature to see the wavelet
src.show()
```
## Plot velocity and density models
Next we plot the velocity and density models for illustration.
- The demarcation between interior and absorbing boundary is shown with a dotted white line
- The source is shown as a large red asterisk
- The extent of the receiver array is shown with a thick black line
```
# note: flip sense of second dimension to make the plot positive downwards
plt_extent = [origin_pad[0], origin_pad[0] + extent_pad[0],
origin_pad[1] + extent_pad[1], origin_pad[1]]
vmin, vmax = 1.4, 1.7
dmin, dmax = 0.9, 1.1
plt.figure(figsize=(12,8))
plt.subplot(1, 2, 1)
plt.imshow(np.transpose(m.data), cmap=cm.jet,
vmin=vmin, vmax=vmax, extent=plt_extent)
plt.colorbar(orientation='horizontal', label='Velocity (m/msec)')
plt.plot([origin[0], origin[0], extent[0], extent[0], origin[0]],
[origin[1], extent[1], extent[1], origin[1], origin[1]],
'white', linewidth=4, linestyle=':', label="Absorbing Boundary")
plt.plot(rec.coordinates.data[:, 0], rec.coordinates.data[:, 1], \
'black', linestyle='-', label="Receiver")
plt.plot(src.coordinates.data[:, 0], src.coordinates.data[:, 1], \
'red', linestyle='None', marker='*', markersize=15, label="Source")
plt.xlabel("X Coordinate (m)")
plt.ylabel("Z Coordinate (m)")
plt.title("Velocity w/ absorbing boundary")
plt.subplot(1, 2, 2)
plt.imshow(np.transpose(1 / b.data), cmap=cm.jet,
vmin=dmin, vmax=dmax, extent=plt_extent)
plt.colorbar(orientation='horizontal', label='Density (g/cm^3)')
plt.plot([origin[0], origin[0], extent[0], extent[0], origin[0]],
[origin[1], extent[1], extent[1], origin[1], origin[1]],
'white', linewidth=4, linestyle=':', label="Absorbing Boundary")
plt.plot(rec.coordinates.data[:, 0], rec.coordinates.data[:, 1], \
'black', linestyle='-', label="Receiver")
plt.plot(src.coordinates.data[:, 0], src.coordinates.data[:, 1], \
'red', linestyle='None', marker='*', markersize=15, label="Source")
plt.xlabel("X Coordinate (m)")
plt.ylabel("Z Coordinate (m)")
plt.title("Density w/ absorbing boundary")
plt.tight_layout()
None
```
## Create and plot the $\frac{\omega_c}{Q}$ model used for dissipation only attenuation
We have two remaining constants and fields from our SA wave equation that we need to define:
| Symbol | Description |
| :---: | :--- |
| $$\omega_c = 2 \pi f_c$$ | Center angular frequency |
| $$\frac{1}{Q(x,z)}$$ | Inverse Q model used in the modeling system |
The absorbing boundary condition strategy we use is designed to eliminate any corners or edges in the attenuation profile. We do this by making Q a function of *distance from the nearest boundary*.
We have implemented the function ```setup_w_over_q``` for 2D and 3D fields in the file ```utils.py```, and will use it below. In Devito these fields are type ```Function```, a concrete implementation of ```AbstractFunction```.
Feel free to inspect the source at [utils.py](utils.py), which uses Devito's symbolic math to write a nonlinear equation describing the absorbing boundary for dispatch to automatic code generation.
Note that we will generate two Q models, one with strong attenuation (a Q value of 25) and one with moderate attenuation (a Q value of 100) -- in order to demonstrate the impact of attenuation in the plots near the end of this notebook.
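The distance-based idea can be sketched in plain NumPy. This hypothetical `w_over_q_profile` (with a simple cosine taper) is only an illustration of the concept; the notebook uses the real `setup_w_over_q` from `utils.py`, whose taper shape may differ:

```python
import numpy as np

def w_over_q_profile(shape, npad, w, qmin, qmax):
    """Illustrative w/Q field: Q tapers from qmin at the outer edge to qmax
    in the interior, as a function of distance to the nearest boundary,
    so the profile has no corners or edges."""
    nx, nz = shape
    ix = np.arange(nx).reshape(-1, 1)
    iz = np.arange(nz).reshape(1, -1)
    # distance (in cells) to the nearest boundary, broadcast to (nx, nz)
    d = np.minimum(np.minimum(ix, nx - 1 - ix), np.minimum(iz, nz - 1 - iz))
    d = np.minimum(d, npad) / npad      # 0 at the edge, 1 in the interior
    q = qmin + (qmax - qmin) * 0.5 * (1 - np.cos(np.pi * d))  # smooth taper
    return w / q

woq = w_over_q_profile((101, 101), 10, 2 * np.pi * 0.010, 0.1, 100.0)
```

By construction w/Q is largest (strongest damping) on the boundary ring and settles to the physical interior value `w/qmax` once the distance exceeds `npad` cells.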
```
# NBVAL_IGNORE_OUTPUT
# Initialize the attenuation profile for Q=25 and Q=100 models
w = 2.0 * np.pi * fpeak
print("w,fpeak; ", w, fpeak)
qmin = 0.1
wOverQ_025 = Function(name='wOverQ_025', grid=grid, space_order=space_order)
wOverQ_100 = Function(name='wOverQ_100', grid=grid, space_order=space_order)
setup_w_over_q(wOverQ_025, w, qmin, 25.0, npad)
setup_w_over_q(wOverQ_100, w, qmin, 100.0, npad)
# Plot the log of the generated Q profile
q025 = np.log10(w / wOverQ_025.data)
q100 = np.log10(w / wOverQ_100.data)
lmin, lmax = np.log10(qmin), np.log10(100)
plt.figure(figsize=(12,8))
plt.subplot(1, 2, 1)
plt.imshow(np.transpose(q025), cmap=cm.jet,
vmin=lmin, vmax=lmax, extent=plt_extent)
plt.colorbar(orientation='horizontal', label='log10(Q)')
plt.plot([origin[0], origin[0], extent[0], extent[0], origin[0]],
[origin[1], extent[1], extent[1], origin[1], origin[1]],
'white', linewidth=4, linestyle=':', label="Absorbing Boundary")
plt.plot(rec.coordinates.data[:, 0], rec.coordinates.data[:, 1], \
'black', linestyle='-', label="Receiver")
plt.plot(src.coordinates.data[:, 0], src.coordinates.data[:, 1], \
'red', linestyle='None', marker='*', markersize=15, label="Source")
plt.xlabel("X Coordinate (m)")
plt.ylabel("Z Coordinate (m)")
plt.title("log10 of $Q=25$ model")
plt.subplot(1, 2, 2)
plt.imshow(np.transpose(q100.data), cmap=cm.jet,
vmin=lmin, vmax=lmax, extent=plt_extent)
plt.colorbar(orientation='horizontal', label='log10(Q)')
plt.plot([origin[0], origin[0], extent[0], extent[0], origin[0]],
[origin[1], extent[1], extent[1], origin[1], origin[1]],
'white', linewidth=4, linestyle=':', label="Absorbing Boundary")
plt.plot(rec.coordinates.data[:, 0], rec.coordinates.data[:, 1], \
'black', linestyle='-', label="Receiver")
plt.plot(src.coordinates.data[:, 0], src.coordinates.data[:, 1], \
'red', linestyle='None', marker='*', markersize=15, label="Source")
plt.xlabel("X Coordinate (m)")
plt.ylabel("Z Coordinate (m)")
plt.title("log10 of $Q=100$ model")
plt.tight_layout()
None
```
## Define the pressure wavefield as a ```TimeFunction```
We specify the time_order as 2, which allocates 3 time steps in the pressure wavefield. As described elsewhere, Devito will use "cyclic indexing" to index into this multi-dimensional array, meaning that via the *modulo operator* the time indices $[0, 1, 2, 3, 4, 5, ...]$ are mapped into the modulo indices $[0, 1, 2, 0, 1, 2, ...]$.
This [FAQ entry](https://github.com/devitocodes/devito/wiki/FAQ#as-time-increases-in-the-finite-difference-evolution-are-wavefield-arrays-swapped-as-you-might-see-in-cc-code) explains in more detail.
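The cyclic-indexing mapping described above is just the modulo operator in action:

```python
# With time_order=2, three time slots are allocated and every absolute
# time index wraps into one of them via the modulo operator.
num_slots = 3  # time_order + 1
time_indices = list(range(6))
modulo_indices = [t % num_slots for t in time_indices]
print(time_indices)    # [0, 1, 2, 3, 4, 5]
print(modulo_indices)  # [0, 1, 2, 0, 1, 2]
```

This is why only three slices of the wavefield are ever stored, no matter how many time steps are run.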
```
# Define the TimeFunction
u = TimeFunction(name="u", grid=grid, time_order=2, space_order=space_order)
# Get the symbols for dimensions for t, x, z
# We need these below in order to write the source injection and the stencil expressions
t,x,z = u.dimensions
```
## Define the source injection and receiver extraction
If you examine the equation for the time update we derived above, you will see that the source $q$ is scaled by the term $(\Delta_t^2 m^2\ /\ b)$. The same scaling term appears in the source injection below. For $\Delta_t^2$ we use the time dimension spacing symbol ```t.spacing**2```.
Note that source injection and receiver extraction are accomplished via linear interpolation, as implemented in ```SparseTimeFunction``` in [sparse.py](https://github.com/devitocodes/devito/blob/master/devito/types/sparse.py#L747).
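Devito's implementation lives in ```SparseTimeFunction```; purely for intuition, the 2D bilinear interpolation weights for a point located between grid nodes can be sketched as below (the helper name and standalone formulation are illustrative, not Devito's API):

```python
def bilinear_weights(x, z, dx, dz):
    # lower-left grid node surrounding the point (x, z)
    ix, iz = int(x // dx), int(z // dz)
    # fractional offsets within the cell, each in [0, 1)
    fx, fz = (x - ix * dx) / dx, (z - iz * dz) / dz
    # weights over the four surrounding nodes; they sum to 1
    return {(ix,     iz    ): (1 - fx) * (1 - fz),
            (ix + 1, iz    ): fx * (1 - fz),
            (ix,     iz + 1): (1 - fx) * fz,
            (ix + 1, iz + 1): fx * fz}

w = bilinear_weights(x=12.5, z=7.5, dx=10.0, dz=5.0)
```

Source injection distributes the source amplitude over the four nodes with these weights; receiver extraction uses the same weights to read the wavefield at an off-grid location.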
```
# Source injection, with appropriate scaling
src_term = src.inject(field=u.forward, expr=src * t.spacing**2 * m**2 / b)
# Receiver extraction
rec_term = rec.interpolate(expr=u.forward)
```
## Finally, the Devito operator
We next transcribe the time update expression we derived above into a Devito ```Eq```. Then we combine that expression with the source injection and receiver extraction and build an ```Operator``` that will generate the C code for performing the modeling.
We copy the time update expression from above for clarity. Note we omit $q$ because we will be explicitly injecting the source using ```src_term``` defined immediately above. However, for the linearized *Born forward modeling* operation the $q$ term is an appropriately scaled field, as shown in the next notebook in this series.
$$
\begin{aligned}
u(t+\Delta_t) &= \Delta_t^2
\frac{m^2}{b} \left[
\overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ u \right) +
\overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ u \right) +
\overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ u \right) + q
\right] \\[10pt]
& \quad + \left(2 -\ \Delta_t\ \frac{\omega_c}{Q} \right) u(t)
+ \left(\Delta_t\ \frac{\omega_c}{Q} - 1 \right) u(t-\Delta_t)
\end{aligned}
$$
```
# NBVAL_IGNORE_OUTPUT
# Generate the time update equation and operator for Q=25 model
eq_time_update = (t.spacing**2 * m**2 / b) * \
((b * u.dx(x0=x+x.spacing/2)).dx(x0=x-x.spacing/2) + \
(b * u.dz(x0=z+z.spacing/2)).dz(x0=z-z.spacing/2)) + \
(2 - t.spacing * wOverQ_025) * u + \
(t.spacing * wOverQ_025 - 1) * u.backward
stencil = Eq(u.forward, eq_time_update)
# Update the dimension spacing_map to include the time dimension
# These symbols will be replaced with the relevant scalars by the Operator
spacing_map = grid.spacing_map
spacing_map.update({t.spacing : dt})
print("spacing_map; ", spacing_map)
# op = Operator([stencil] + src_term + rec_term)
op = Operator([stencil] + src_term + rec_term, subs=spacing_map)
```
## Impact of hardwiring the grid spacing on operation count
The argument ```subs=spacing_map``` passed to the operator substitutes values for the temporal and spatial dimensions into the expressions before code generation. This reduces the number of floating point operations executed by the kernel by pre-evaluating certain coefficients, and possibly absorbing the spacing scalars from the denominators of the numerical finite difference approximations into the finite difference coefficients.
If you run the two cases of passing/not passing the ```subs=spacing_map``` argument by commenting/un-commenting the last two lines of the cell immediately above, you can inspect the difference in computed flop count for the operator. This is reported by setting Devito logging ```configuration['log-level'] = 'DEBUG'``` and is reported during Devito symbolic optimization with the output line ```Flops reduction after symbolic optimization```. Note also if you inspect the generated code for the two cases, you will see extra calling parameters are required for the case without the substitution. We have compiled the flop count for 2D and 3D operators into the table below.
| Dimensionality | Passing subs | Flops reduction | Delta |
|:---:|:---:|:---:|:---:|
| 2D | False | 588 --> 81 | |
| 2D | True | 300 --> 68 | 13.7% |
| 3D | False | 875 --> 116 | |
| 3D | True | 442 --> 95 | 18.1% |
Note the reduction in operation count is around 14% for this example in 2D, and around 18% in 3D.
## Print the arguments to the Devito operator
We use ```op.arguments()``` to print the arguments to the operator. As noted above depending on the use of ```subs=spacing_map``` you will see different arguments here. In the case of no ```subs=spacing_map``` argument to the operator, you will see arguments for the dimensional spacing constants as parameters to the operator, including ```h_x```, ```h_z```, and ```dt```.
```
# NBVAL_IGNORE_OUTPUT
op.arguments()
```
## Print the generated c code for review
We use ```print(op)``` to output the generated c code for review.
```
# NBVAL_IGNORE_OUTPUT
print(op)
```
## Run the operator for the Q=25 and Q=100 models
By setting Devito logging ```configuration['log-level'] = 'DEBUG'``` we have enabled output of statistics related to the performance of the operator, which you will see below when the operator runs.
We will run the Operator once with the Q model as defined, ```wOverQ_025```, and then run a second time passing the ```wOverQ_100``` Q model. For the second run with the different Q model, we take advantage of the *placeholder design pattern* in the Devito ```Operator```.
For more information on this see the [FAQ](https://github.com/devitocodes/devito/wiki/FAQ#how-are-abstractions-used-in-the-seismic-examples) entry.
```
# NBVAL_IGNORE_OUTPUT
# Run the operator for the Q=25 model
print("m min/max; %+12.6e %+12.6e" % (np.min(m.data), np.max(m.data)))
print("b min/max; %+12.6e %+12.6e" % (np.min(b.data), np.max(b.data)))
print("wOverQ_025 min/max; %+12.6e %+12.6e" % (np.min(wOverQ_025.data), np.max(wOverQ_025.data)))
print("wOverQ_100 min/max; %+12.6e %+12.6e" % (np.min(wOverQ_100.data), np.max(wOverQ_100.data)))
print(time_range)
u.data[:] = 0
op(time=time_range.num-1)
# summary = op(time=time_range.num-1, h_x=dx, h_z=dz, dt=dt)
# Save the Q=25 results and run the Q=100 case
import copy
uQ25 = copy.copy(u)
recQ25 = copy.copy(rec)
u.data[:] = 0
op(time=time_range.num-1, wOverQ_025=wOverQ_100)
print("Q= 25 receiver data min/max; %+12.6e %+12.6e" %\
(np.min(recQ25.data[:]), np.max(recQ25.data[:])))
print("Q=100 receiver data min/max; %+12.6e %+12.6e" %\
(np.min(rec.data[:]), np.max(rec.data[:])))
# Continuous integration hooks
# We ensure the norm of these computed wavefields is repeatable
assert np.isclose(norm(uQ25), 26.749, atol=0, rtol=1e-3)
assert np.isclose(norm(u), 161.131, atol=0, rtol=1e-3)
assert np.isclose(norm(recQ25), 368.153, atol=0, rtol=1e-3)
assert np.isclose(norm(rec), 413.414, atol=0, rtol=1e-3)
```
## Plot the computed Q=25 and Q=100 wavefields
```
# NBVAL_IGNORE_OUTPUT
# Plot the two wavefields, normalized to Q=100 (the larger amplitude)
amax_Q25 = 1.0 * np.max(np.abs(uQ25.data[1,:,:]))
amax_Q100 = 1.0 * np.max(np.abs(u.data[1,:,:]))
print("amax Q= 25; %12.6f" % (amax_Q25))
print("amax Q=100; %12.6f" % (amax_Q100))
plt.figure(figsize=(12,8))
plt.subplot(1, 2, 1)
plt.imshow(np.transpose(uQ25.data[1,:,:] / amax_Q100), cmap="seismic",
vmin=-1, vmax=+1, extent=plt_extent)
plt.colorbar(orientation='horizontal', label='Amplitude')
plt.plot([origin[0], origin[0], extent[0], extent[0], origin[0]],
[origin[1], extent[1], extent[1], origin[1], origin[1]],
'black', linewidth=4, linestyle=':', label="Absorbing Boundary")
plt.plot(src.coordinates.data[:, 0], src.coordinates.data[:, 1], \
'red', linestyle='None', marker='*', markersize=15, label="Source")
plt.xlabel("X Coordinate (m)")
plt.ylabel("Z Coordinate (m)")
plt.title("Data for $Q=25$ model")
plt.subplot(1, 2, 2)
plt.imshow(np.transpose(u.data[1,:,:] / amax_Q100), cmap="seismic",
vmin=-1, vmax=+1, extent=plt_extent)
plt.colorbar(orientation='horizontal', label='Amplitude')
plt.plot([origin[0], origin[0], extent[0], extent[0], origin[0]],
[origin[1], extent[1], extent[1], origin[1], origin[1]],
'black', linewidth=4, linestyle=':', label="Absorbing Boundary")
plt.plot(src.coordinates.data[:, 0], src.coordinates.data[:, 1], \
'red', linestyle='None', marker='*', markersize=15, label="Source")
plt.xlabel("X Coordinate (m)")
plt.ylabel("Z Coordinate (m)")
plt.title("Data for $Q=100$ model")
plt.tight_layout()
None
```
## Plot the computed Q=25 and Q=100 receiver gathers
```
# NBVAL_IGNORE_OUTPUT
# Plot the two receiver gathers, normalized to Q=100 (the larger amplitude)
amax_Q25 = 0.1 * np.max(np.abs(recQ25.data[:]))
amax_Q100 = 0.1 * np.max(np.abs(rec.data[:]))
print("amax Q= 25; %12.6f" % (amax_Q25))
print("amax Q=100; %12.6f" % (amax_Q100))
plt.figure(figsize=(12,8))
plt.subplot(1, 2, 1)
plt.imshow(recQ25.data[:,:] / amax_Q100, cmap="seismic",
vmin=-1, vmax=+1, extent=plt_extent, aspect="auto")
plt.colorbar(orientation='horizontal', label='Amplitude')
plt.xlabel("X Coordinate (m)")
plt.ylabel("Z Coordinate (m)")
plt.title("Receiver gather for $Q=25$ model")
plt.subplot(1, 2, 2)
plt.imshow(rec.data[:,:] / amax_Q100, cmap="seismic",
vmin=-1, vmax=+1, extent=plt_extent, aspect="auto")
plt.colorbar(orientation='horizontal', label='Amplitude')
plt.xlabel("X Coordinate (m)")
plt.ylabel("Z Coordinate (m)")
plt.title("Receiver gather for $Q=100$ model")
plt.tight_layout()
None
```
## Show the output from Devito solving for the stencil
Note this takes a **long time** (about 50 seconds), but it obviates the need to derive the time update expression by hand as we did above.
If you would like to see the time update equation as generated by Devito symbolic optimization, uncomment the lines for the solve below.
```
# NBVAL_IGNORE_OUTPUT
# Define the partial_differential equation
# Note the backward shifted time derivative is obtained via u.dt(x0=t-0.5*t.spacing)
pde = (b / m**2) * (wOverQ_100 * u.dt(x0=t-0.5*t.spacing) + u.dt2) -\
(b * u.dx(x0=x+0.5*x.spacing)).dx(x0=x-0.5*x.spacing) -\
(b * u.dz(x0=z+0.5*z.spacing)).dz(x0=z-0.5*z.spacing)
# Uncomment the next 5 lines to see the equation as generated by Devito
# t1 = timer()
# stencil = Eq(u.forward, solve(pde, u.forward))
# t2 = timer()
# print("solve ran in %.4f seconds." % (t2-t1))
# stencil
```
## Discussion
This concludes the implementation of the nonlinear forward operator. This series continues in the next notebook that describes the implementation of the Jacobian linearized forward and adjoint operators.
[sa_02_iso_implementation2.ipynb](sa_02_iso_implementation2.ipynb)
## References
- **A nonreflecting boundary condition for discrete acoustic and elastic wave equations** (1985)
<br>Charles Cerjan, Dan Kosloff, Ronnie Kosloff, and Moshe Reshef
<br>Geophysics, Vol. 50, No. 4
- **Generation of Finite Difference Formulas on Arbitrarily Spaced Grids** (1988)
<br>Bengt Fornberg
<br>Mathematics of Computation, Vol. 51, No. 184
<br>http://dx.doi.org/10.1090/S0025-5718-1988-0935077-0
<br>https://web.njit.edu/~jiang/math712/fornberg.pdf
- **Self-adjoint, energy-conserving second-order pseudoacoustic systems for VTI and TTI media for reverse time migration and full-waveform inversion** (2016)
<br>Kenneth Bube, John Washbourne, Raymond Ergas, and Tamas Nemeth
<br>SEG Technical Program Expanded Abstracts
<br>https://library.seg.org/doi/10.1190/segam2016-13878451.1
```
#hide
#skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
#export
from fastai.data.all import *
from fastai.text.core import *
#hide
from nbdev.showdoc import *
#default_exp text.models.awdlstm
#default_cls_lvl 3
```
# AWD-LSTM
> AWD LSTM from [Smerity et al.](https://arxiv.org/pdf/1708.02182.pdf)
## Basic NLP modules
On top of the PyTorch or fastai [`layers`](/layers.html#layers), the language models use some custom layers specific to NLP.
```
#export
def dropout_mask(x, sz, p):
"Return a dropout mask of the same type as `x`, size `sz`, with probability `p` to cancel an element."
return x.new_empty(*sz).bernoulli_(1-p).div_(1-p)
t = dropout_mask(torch.randn(3,4), [4,3], 0.25)
test_eq(t.shape, [4,3])
assert ((t == 4/3) + (t==0)).all()
#export
class RNNDropout(Module):
"Dropout with probability `p` that is consistent on the seq_len dimension."
def __init__(self, p=0.5): self.p=p
def forward(self, x):
if not self.training or self.p == 0.: return x
return x * dropout_mask(x.data, (x.size(0), 1, *x.shape[2:]), self.p)
dp = RNNDropout(0.3)
tst_inp = torch.randn(4,3,7)
tst_out = dp(tst_inp)
for i in range(4):
for j in range(7):
if tst_out[i,0,j] == 0: assert (tst_out[i,:,j] == 0).all()
else: test_close(tst_out[i,:,j], tst_inp[i,:,j]/(1-0.3))
```
It also supports dropout over a sequence of images, where the time dimension is the first axis: for example, 10 images of 3 channels, each 32 by 32.
```
_ = dp(torch.rand(4,10,3,32,32))
#export
class WeightDropout(Module):
"A module that wraps another layer in which some weights will be replaced by 0 during training."
def __init__(self, module, weight_p, layer_names='weight_hh_l0'):
self.module,self.weight_p,self.layer_names = module,weight_p,L(layer_names)
for layer in self.layer_names:
#Makes a copy of the weights of the selected layers.
w = getattr(self.module, layer)
delattr(self.module, layer)
self.register_parameter(f'{layer}_raw', nn.Parameter(w.data))
setattr(self.module, layer, w.clone())
if isinstance(self.module, (nn.RNNBase, nn.modules.rnn.RNNBase)):
self.module.flatten_parameters = self._do_nothing
def _setweights(self):
"Apply dropout to the raw weights."
for layer in self.layer_names:
raw_w = getattr(self, f'{layer}_raw')
if self.training: w = F.dropout(raw_w, p=self.weight_p)
else: w = raw_w.clone()
setattr(self.module, layer, w)
def forward(self, *args):
self._setweights()
with warnings.catch_warnings():
# To avoid the warning that comes because the weights aren't flattened.
warnings.simplefilter("ignore", category=UserWarning)
return self.module(*args)
def reset(self):
for layer in self.layer_names:
raw_w = getattr(self, f'{layer}_raw')
setattr(self.module, layer, raw_w.clone())
if hasattr(self.module, 'reset'): self.module.reset()
def _do_nothing(self): pass
module = nn.LSTM(5,7)
dp_module = WeightDropout(module, 0.4)
wgts = dp_module.module.weight_hh_l0
tst_inp = torch.randn(10,20,5)
h = torch.zeros(1,20,7), torch.zeros(1,20,7)
dp_module.reset()
x,h = dp_module(tst_inp,h)
loss = x.sum()
loss.backward()
new_wgts = getattr(dp_module.module, 'weight_hh_l0')
test_eq(wgts, getattr(dp_module, 'weight_hh_l0_raw'))
assert 0.2 <= (new_wgts==0).sum().float()/new_wgts.numel() <= 0.6
assert dp_module.weight_hh_l0_raw.requires_grad
assert dp_module.weight_hh_l0_raw.grad is not None
assert ((dp_module.weight_hh_l0_raw.grad == 0.) & (new_wgts == 0.)).any()
#export
class EmbeddingDropout(Module):
"Apply dropout with probability `embed_p` to an embedding layer `emb`."
def __init__(self, emb, embed_p):
self.emb,self.embed_p = emb,embed_p
def forward(self, words, scale=None):
if self.training and self.embed_p != 0:
size = (self.emb.weight.size(0),1)
mask = dropout_mask(self.emb.weight.data, size, self.embed_p)
masked_embed = self.emb.weight * mask
else: masked_embed = self.emb.weight
if scale: masked_embed.mul_(scale)
return F.embedding(words, masked_embed, ifnone(self.emb.padding_idx, -1), self.emb.max_norm,
self.emb.norm_type, self.emb.scale_grad_by_freq, self.emb.sparse)
enc = nn.Embedding(10, 7, padding_idx=1)
enc_dp = EmbeddingDropout(enc, 0.5)
tst_inp = torch.randint(0,10,(8,))
tst_out = enc_dp(tst_inp)
for i in range(8):
assert (tst_out[i]==0).all() or torch.allclose(tst_out[i], 2*enc.weight[tst_inp[i]])
#export
class AWD_LSTM(Module):
"AWD-LSTM inspired by https://arxiv.org/abs/1708.02182"
initrange=0.1
def __init__(self, vocab_sz, emb_sz, n_hid, n_layers, pad_token=1, hidden_p=0.2, input_p=0.6, embed_p=0.1,
weight_p=0.5, bidir=False):
store_attr('emb_sz,n_hid,n_layers,pad_token')
self.bs = 1
self.n_dir = 2 if bidir else 1
self.encoder = nn.Embedding(vocab_sz, emb_sz, padding_idx=pad_token)
self.encoder_dp = EmbeddingDropout(self.encoder, embed_p)
self.rnns = nn.ModuleList([self._one_rnn(emb_sz if l == 0 else n_hid, (n_hid if l != n_layers - 1 else emb_sz)//self.n_dir,
bidir, weight_p, l) for l in range(n_layers)])
self.encoder.weight.data.uniform_(-self.initrange, self.initrange)
self.input_dp = RNNDropout(input_p)
self.hidden_dps = nn.ModuleList([RNNDropout(hidden_p) for l in range(n_layers)])
self.reset()
def forward(self, inp, from_embeds=False):
bs,sl = inp.shape[:2] if from_embeds else inp.shape
if bs!=self.bs: self._change_hidden(bs)
output = self.input_dp(inp if from_embeds else self.encoder_dp(inp))
new_hidden = []
for l, (rnn,hid_dp) in enumerate(zip(self.rnns, self.hidden_dps)):
output, new_h = rnn(output, self.hidden[l])
new_hidden.append(new_h)
if l != self.n_layers - 1: output = hid_dp(output)
self.hidden = to_detach(new_hidden, cpu=False, gather=False)
return output
def _change_hidden(self, bs):
self.hidden = [self._change_one_hidden(l, bs) for l in range(self.n_layers)]
self.bs = bs
def _one_rnn(self, n_in, n_out, bidir, weight_p, l):
"Return one of the inner rnn"
rnn = nn.LSTM(n_in, n_out, 1, batch_first=True, bidirectional=bidir)
return WeightDropout(rnn, weight_p)
def _one_hidden(self, l):
"Return one hidden state"
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return (one_param(self).new_zeros(self.n_dir, self.bs, nh), one_param(self).new_zeros(self.n_dir, self.bs, nh))
def _change_one_hidden(self, l, bs):
if self.bs < bs:
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return tuple(torch.cat([h, h.new_zeros(self.n_dir, bs-self.bs, nh)], dim=1) for h in self.hidden[l])
if self.bs > bs: return (self.hidden[l][0][:,:bs].contiguous(), self.hidden[l][1][:,:bs].contiguous())
return self.hidden[l]
def reset(self):
"Reset the hidden states"
[r.reset() for r in self.rnns if hasattr(r, 'reset')]
self.hidden = [self._one_hidden(l) for l in range(self.n_layers)]
```
This is the core of an AWD-LSTM model, with embeddings from `vocab_sz` and `emb_sz`, `n_layers` LSTMs potentially `bidir` stacked, the first one going from `emb_sz` to `n_hid`, the last one from `n_hid` to `emb_sz` and all the inner ones from `n_hid` to `n_hid`. `pad_token` is passed to the PyTorch embedding layer. The dropouts are applied as such:
- the embeddings are wrapped in `EmbeddingDropout` of probability `embed_p`;
- the result of this embedding layer goes through an `RNNDropout` of probability `input_p`;
- each LSTM has `WeightDropout` applied with probability `weight_p`;
- between two of the inner LSTM, an `RNNDropout` is applied with probability `hidden_p`.
The module returns the output of the last LSTM. Since no dropout is applied to that final output, it is the tensor that should be fed to a decoder (in the case of a language model).
```
tst = AWD_LSTM(100, 20, 10, 2, hidden_p=0.2, embed_p=0.02, input_p=0.1, weight_p=0.2)
x = torch.randint(0, 100, (10,5))
r = tst(x)
test_eq(tst.bs, 10)
test_eq(len(tst.hidden), 2)
test_eq([h_.shape for h_ in tst.hidden[0]], [[1,10,10], [1,10,10]])
test_eq([h_.shape for h_ in tst.hidden[1]], [[1,10,20], [1,10,20]])
test_eq(r.shape, [10,5,20])
test_eq(r[:,-1], tst.hidden[-1][0][0]) #hidden state is the last timestep in raw outputs
tst.eval()
tst.reset()
tst(x);
tst(x);
#hide
#test bs change
x = torch.randint(0, 100, (6,5))
r = tst(x)
test_eq(tst.bs, 6)
# hide
# cuda
tst = AWD_LSTM(100, 20, 10, 2, bidir=True).to('cuda')
tst.reset()
x = torch.randint(0, 100, (10,5)).to('cuda')
r = tst(x)
x = torch.randint(0, 100, (6,5), device='cuda')
r = tst(x)
#export
def awd_lstm_lm_split(model):
"Split a RNN `model` in groups for differential learning rates."
groups = [nn.Sequential(rnn, dp) for rnn, dp in zip(model[0].rnns, model[0].hidden_dps)]
groups = L(groups + [nn.Sequential(model[0].encoder, model[0].encoder_dp, model[1])])
return groups.map(params)
#export
awd_lstm_lm_config = dict(emb_sz=400, n_hid=1152, n_layers=3, pad_token=1, bidir=False, output_p=0.1,
hidden_p=0.15, input_p=0.25, embed_p=0.02, weight_p=0.2, tie_weights=True, out_bias=True)
#export
def awd_lstm_clas_split(model):
"Split a RNN `model` in groups for differential learning rates."
groups = [nn.Sequential(model[0].module.encoder, model[0].module.encoder_dp)]
groups += [nn.Sequential(rnn, dp) for rnn, dp in zip(model[0].module.rnns, model[0].module.hidden_dps)]
groups = L(groups + [model[1]])
return groups.map(params)
#export
awd_lstm_clas_config = dict(emb_sz=400, n_hid=1152, n_layers=3, pad_token=1, bidir=False, output_p=0.4,
hidden_p=0.3, input_p=0.4, embed_p=0.05, weight_p=0.5)
```
## QRNN
```
#export
class AWD_QRNN(AWD_LSTM):
"Same as an AWD-LSTM, but using QRNNs instead of LSTMs"
def _one_rnn(self, n_in, n_out, bidir, weight_p, l):
from fastai.text.models.qrnn import QRNN
rnn = QRNN(n_in, n_out, 1, save_prev_x=(not bidir), zoneout=0, window=2 if l == 0 else 1, output_gate=True, bidirectional=bidir)
rnn.layers[0].linear = WeightDropout(rnn.layers[0].linear, weight_p, layer_names='weight')
return rnn
def _one_hidden(self, l):
"Return one hidden state"
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return one_param(self).new_zeros(self.n_dir, self.bs, nh)
def _change_one_hidden(self, l, bs):
if self.bs < bs:
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return torch.cat([self.hidden[l], self.hidden[l].new_zeros(self.n_dir, bs-self.bs, nh)], dim=1)
if self.bs > bs: return self.hidden[l][:, :bs]
return self.hidden[l]
# cuda
# cpp
model = AWD_QRNN(vocab_sz=10, emb_sz=20, n_hid=16, n_layers=2, bidir=False)
x = torch.randint(0, 10, (7,5))
y = model(x)
test_eq(y.shape, (7, 5, 20))
# hide
# cuda
# cpp
# test bidir=True
model = AWD_QRNN(vocab_sz=10, emb_sz=20, n_hid=16, n_layers=2, bidir=True)
x = torch.randint(0, 10, (7,5))
y = model(x)
test_eq(y.shape, (7, 5, 20))
#export
awd_qrnn_lm_config = dict(emb_sz=400, n_hid=1552, n_layers=4, pad_token=1, bidir=False, output_p=0.1,
hidden_p=0.15, input_p=0.25, embed_p=0.02, weight_p=0.2, tie_weights=True, out_bias=True)
#export
awd_qrnn_clas_config = dict(emb_sz=400, n_hid=1552, n_layers=4, pad_token=1, bidir=False, output_p=0.4,
hidden_p=0.3, input_p=0.4, embed_p=0.05, weight_p=0.5)
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
# required libraries
```
# warning settings
import warnings
warnings.filterwarnings("ignore")
# data processing
import pandas as pd
import numpy as np
# statistics
import scipy as sc
import hypothetical
import pingouin
import statsmodels as sm
# data visualization
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from IPython.display import HTML, display
# library settings
pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_rows', None)
pd.set_option('mode.chained_assignment', None)
```
# loading the data into the working environment
The data for the 2009-2011 period is loaded into the working environment as a variable named **veri**:
```
veri = pd.read_csv("dataset/online_retail_2.csv")
print(veri.shape)
veri.head()
veri.info()
```
# data preparation
**observations:**
we observed several irrational aspects in the variables of the data:
- **variables:**
- <font color="red"> problem </font> → <font color="green"> required action </font> <font color="blue">{ reason }</font>
- **StockCode:**
- <font color="red"> inconsistent data type </font> → <font color="green"> integer conversion </font> <font color="blue">{ the data was saved in an incorrect format }</font>
- **Description:**
- <font color="red"> missing information </font> → <font color="green"> drop the records </font> <font color="blue">{ the missing frequency is low. it does not affect our analysis. }</font>
- **InvoiceDate:**
- <font color="red"> inconsistent data type </font> → <font color="green"> date conversion </font> <font color="blue">{ the data was saved in an incorrect format }</font>
- **Customer ID:**
- <font color="red"> missing information </font> → <font color="green"> drop the records </font> <font color="blue">{ the missing frequency is low. it does not affect our analysis. }</font>
- <font color="red"> inconsistent data type </font> → <font color="green"> integer conversion </font> <font color="blue">{ the data was saved in an incorrect format }</font>
# identifying and resolving missing data
before solving the missing data problem:
```
kayıp_veri = pd.DataFrame(index = veri.columns.values)
kayıp_veri['NullFrequency'] = veri.isnull().sum().values
yüzde = veri.isnull().sum().values/veri.shape[0]
kayıp_veri['MissingPercent'] = np.round(yüzde, decimals = 4) * 100
kayıp_veri.transpose()
```
**observations:**
- **variables:**
- <font color="red"> problem </font> → <font color="green"> required action </font> <font color="blue">{ reason }</font>
- **Description:**
- <font color="red"> missing data (4382)</font> → <font color="green">drop the records</font> <font color="blue">{the ratio is very low and does not affect our analysis.}</font>
- **Customer ID:**
- <font color="red">missing data (243007)</font> → <font color="green">drop the records</font> <font color="blue">{a potential variable in the segmentation model, but we cannot use missing values in the model.}</font>
actions:
```
önceki_veri_boyutu = veri.shape
print('data shape [before]:', önceki_veri_boyutu)
veri.dropna(axis = 0, subset = ['Description', 'Customer ID'], inplace = True)
sonraki_veri_boyutu = veri.shape
print('data shape [after]:', sonraki_veri_boyutu)
atılan_sayı = önceki_veri_boyutu[0] - sonraki_veri_boyutu[0]
atılan_oran = np.round(atılan_sayı / önceki_veri_boyutu[0], decimals = 2) * 100
print('dropped percentage:', atılan_oran, '%')
```
let's verify what we did:
```
kayıp_veri = pd.DataFrame(index = veri.columns.values)
kayıp_veri['NullFrequency'] = veri.isnull().sum().values
yüzde = veri.isnull().sum().values/veri.shape[0]
kayıp_veri['MissingPercent'] = np.round(yüzde, decimals = 4) * 100
kayıp_veri.transpose()
```
thus we have **successfully eliminated** the missing values.
# identifying and resolving redundant data
checking for duplicate records:
```
print('does the data contain any duplicate records?:', veri.duplicated().any())
print('number of duplicate records:', veri.duplicated().sum())
```
**observations:**
- accordingly, the data **contains 26,479 duplicate** rows.
- since these records are of no use in our analyses, we **drop them.**
actions:
```
önceki_veri_boyutu = veri.shape
print('data shape [before]:', önceki_veri_boyutu)
veri.drop_duplicates(inplace = True)
sonraki_veri_boyutu = veri.shape
print('data shape [after]:', sonraki_veri_boyutu)
atılan_sayı = önceki_veri_boyutu[0] - sonraki_veri_boyutu[0]
atılan_oran = np.round(atılan_sayı / önceki_veri_boyutu[0], decimals = 2) * 100
print('dropped percentage:', atılan_oran, '%')
```
let's verify what we did:
```
print('does the data contain any duplicate records?:', veri.duplicated().any())
print('number of duplicate records:', veri.duplicated().sum())
```
thus we have also **successfully eliminated** the duplicate records.
# identifying and resolving inconsistent data types
```
veri_tipleri = pd.DataFrame(data = veri.dtypes, columns = ['Type'])
veri_tipleri.transpose()
```
**observations:**
- **inconsistent data:**
- <font color="red">current variable type</font> → <font color = "green">expected variable type</font>
- **Invoice:**
- <font color="red">object</font> → <font color = "green">integer</font>
- **InvoiceDate:**
- <font color="red">object</font> → <font color = "green">DateTime</font>
- **Customer ID:**
- <font color="red">float</font> → <font color = "green">integer</font>
actions:
```
#veri['Invoice'] = veri['Invoice'].astype(np.int64)
veri['InvoiceDate'] = pd.to_datetime(veri['InvoiceDate'])
veri['Customer ID'] = veri['Customer ID'].astype(np.int64)
```
* InvoiceNo: Invoice number. Nominal. A 6-digit number uniquely assigned to each transaction. If this code starts with the letter 'c', it indicates a cancellation.
```
veri.sort_values(by='Invoice', ascending=False).head()
print("Are there any non-cancelled transactions starting with 'C'?",((veri[veri['Invoice'].str.startswith("C",na=False)]['Quantity']) > 0).any())
print("Are there any cancelled transactions not starting with 'C'?", ((veri[veri['Invoice'].str.startswith("C",na=False) == False]['Quantity']) < 0).any())
veri = veri[veri['Invoice'].str.startswith("C",na=False) == False]
veri['Invoice'] = veri['Invoice'].astype(np.int64)
```
let's check what we did:
```
veri_tipleri = pd.DataFrame(data = veri.dtypes, columns = ['Type'])
veri_tipleri.transpose()
```
# creating new variables
the total price variable:
```
veri['TotalPrice'] = veri['Price']*veri['Quantity']
```
the region variable:
```
# country groups
avrupa_ülkeleri = ['Austria', 'Belgium', 'Cyprus', 'Czech Republic', 'Denmark',
'EIRE', 'European Community', 'Finland', 'France', 'Germany',
'Greece', 'Iceland','Italy', 'Lithuania', 'Malta', 'Netherlands',
'Norway', 'Poland', 'Portugal', 'Spain', 'Sweden', 'Switzerland',
'United Kingdom', 'Channel Islands']
amerika_ülkeleri = ['Canada', 'USA', 'Brazil', 'Bermuda']
asya_ülkeleri = ['Bahrain','Hong Kong', 'Japan', 'Saudi Arabia', 'Singapore', 'Thailand', 'United Arab Emirates']
# country group function
def ülke_grubu(row):
global avrupa_ülkeleri
global amerika_ülkeleri
global asya_ülkeleri
if row['Country'] in avrupa_ülkeleri:
return "Europe"
elif row['Country'] in amerika_ülkeleri:
return "America"
elif row['Country'] in asya_ülkeleri:
return "Asia"
else:
return "Other"
veri = veri.assign(CountryGroup=veri.apply(ülke_grubu, axis=1))
```
finally, let's look at the data:
```
print(veri.shape)
veri.head()
veri.info()
print(f"minimum purchase date: {veri['InvoiceDate'].min()}")
print(f"maximum purchase date: {veri['InvoiceDate'].max()}")
```
# In-Class Coding Lab: Web Services and APIs
### Overview
The web has long since evolved from user consumption to device consumption. In the early days of the web, when you wanted to check the weather, you opened up your browser and visited a website. Nowadays your smart watch or smartphone retrieves the weather for you and displays it on the device. Your device can't predict the weather; it's simply consuming a weather-based service.
The key to making device consumption work is APIs (Application Program Interfaces). Products we use every day like smartphones, Amazon's Alexa, and gaming consoles all rely on APIs. They seem "smart" and "powerful," but in actuality they're only interfacing with smart and powerful services in the cloud.
API consumption is the new reality of programming; it is why we cover it in this course. Once you understand how to consume APIs, you can write a program to do almost anything and harness the power of the internet to make your own programs look "smart" and "powerful."
This lab covers how to properly consume web service APIs with Python. Here's what we will cover:
1. Understanding requests and responses
1. Proper error handling
1. Parameter handling
1. Refactoring as a function
## Pre-Requisites: Let's install what we need for the remainder of the course:
NOTE: Run this cell. It will install several Python packages you will need. It might take 2-3 minutes to do the installs, so please be patient.
```
!conda install -y -q pandas matplotlib beautifulsoup4
!pip install requests html5lib lxml
!pip install plotly cufflinks folium
```
## Part 1: Understanding Requests and responses
In this part we learn about the Python requests module. http://docs.python-requests.org/en/master/user/quickstart/
This module makes it easy to write code to send HTTP requests over the internet and handle the responses. It will be the cornerstone of our API consumption in this course. While there are other modules which accomplish the same thing, `requests` is the most straightforward and easiest to use.
We'll begin by importing the modules we will need. We do this here so we won't need to include these lines in the other code we write in this lab.
```
# start by importing the modules we will need
import requests
import json
```
### The request
As you learned in class and your assigned readings, the HTTP protocol has **verbs** which constitute the type of request you will send to the remote resource, or **url**. Based on the url and request type, you will get a **response**.
The following line of code makes a **get** request (that's the HTTP verb) to Google's Geocoding API service. This service attempts to convert the address (in this case `Syracuse University`) into a set of global coordinates (latitude and longitude), so that the location can be plotted on a map.
```
url = 'http://maps.googleapis.com/maps/api/geocode/json?address=Syracuse+University'
response = requests.get(url)
```
### The response
The `get()` method returns a `Response` object variable. I called it `response` in this example but it could be called anything.
The HTTP response consists of a *status code* and *body*. The status code lets you know if the request worked, while the body of the response contains the actual data.
```
response.ok # did the request work?
response.text # what's in the body of the response, as a raw string
```
### Converting responses into Python object variables
In the case of **web site urls**, the response body is **HTML**, which should be rendered in a web browser. But we're dealing with web service APIs, so...
In the case of **web API urls**, the response body could be in a variety of formats, from **plain text** to **XML** or **JSON**. In this course we will focus only on the JSON format because, as we've seen, it translates easily into Python object variables.
Let's convert the response to a Python object variable. In this case it will be a Python dictionary:
```
geodata = response.json() # try to decode the response from JSON format
geodata # this is now a Python object variable
```
With our Python object, we can now walk it to retrieve the latitude and longitude:
```
coords = geodata['results'][0]['geometry']['location']
coords
```
In the code above we "walked" the Python dictionary to get to the location
- `geodata['results']` is a list
- `geodata['results'][0]` is the first item in that list, a dictionary
- `geodata['results'][0]['geometry']` is the value under the `geometry` key, another dictionary
- `geodata['results'][0]['geometry']['location']` is the value under the `location` key, the dictionary we want!
It should be noted that this process will vary for each API you call, so it's important to get accustomed to performing this task. You'll be doing it quite often.
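To make the walking pattern concrete without a network call, here is the same drill on a hypothetical response shaped like the one above (the field names mirror Google's, but the values are made up):

```python
# A made-up, minimal stand-in for the geodata dictionary above.
geodata = {
    'status': 'OK',
    'results': [
        {'formatted_address': 'Syracuse, NY, USA',
         'geometry': {'location': {'lat': 43.0392, 'lng': -76.1351}}}
    ]
}

# Each bracket is one step down into the nested structure:
# list -> first item -> 'geometry' dict -> 'location' dict.
coords = geodata['results'][0]['geometry']['location']
print(coords['lat'], coords['lng'])
```

Walking a real response works the same way; only the keys (which you discover by printing the decoded object) change from API to API.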
### Now You Try It!
Walk the `geodata` object variable and retrieve the value under the key `place_id` and the `formatted_address`.
```
# todo:
# retrieve the place_id put in a variable
# retrieve the formatted_address put it in a variable
# print both of them out
place_id = geodata['results'][0]['place_id']
formatted_address = geodata['results'][0]['formatted_address']
place_id, formatted_address
```
## Part 2: Parameter Handling
In the example above we hard-coded "Syracuse University" into the request:
```
url = 'http://maps.googleapis.com/maps/api/geocode/json?address=Syracuse+University'
```
A better way to write this code is to allow for the input of any location and supply that to the service. To make this work we need to send parameters into the request as a dictionary. This way we can geolocate any address!
You'll notice that on the url we are passing a **key-value pair**: the key is `address` and the value is `Syracuse+University`. Python dictionaries are also key-value pairs, so:
```
url = 'http://maps.googleapis.com/maps/api/geocode/json' # base URL without parameters after the "?"
options = { 'address' : 'Syracuse University'} # options['address'] == 'Syracuse University'
response = requests.get(url, params = options)
geodata = response.json()
coords = geodata['results'][0]['geometry']['location']
print("Address", options)
print("Coordinates", coords)
print("%s is located at (%f,%f)" %(options['address'], coords['lat'], coords['lng']))
```
### Looking up any address
RECALL: For `requests.get(url, params = options)` the part that says `params = options` is called a **named argument**, which is Python's way of specifying an optional function argument.
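Named (keyword) arguments are a general Python feature, not something special to `requests`. A minimal illustration with a made-up function:

```python
def greet(name, greeting="Hello", punctuation="!"):
    # greeting and punctuation are optional; callers may supply
    # them by name, in any order, or omit them entirely.
    return f"{greeting}, {name}{punctuation}"

print(greet("Ada"))                   # only the required argument
print(greet("Ada", punctuation="?"))  # skip 'greeting', name 'punctuation'
```

This is exactly what `params = options` does in `requests.get()`: it supplies the optional `params` argument by name while leaving the others at their defaults.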
With our parameter now outside the url, we can easily re-write this code to work for any location! Go ahead and execute the code and input `Queens, NY`. This will retrieve the coordinates `(40.728224,-73.794852)`
```
location = input("Enter a location: ")
url = 'http://maps.googleapis.com/maps/api/geocode/json'
options = { 'address' : location } # no longer 'Syracuse University' but whatever you type!
response = requests.get(url, params = options)
geodata = response.json()
coords = geodata['results'][0]['geometry']['location']
print("Address", options)
print("Coordinates", coords)
print("%s is located at (%f,%f)" %(location, coords['lat'], coords['lng']))
```
### So useful, it should be a function
One thing you'll come to realize quickly is that your API calls should be wrapped in functions. This promotes **readability** and **code re-use**. For example:
```
def get_coordinates_using_google(location):
options = { 'address' : location }
response = requests.get(url, params = options)
geodata = response.json()
coords = geodata['results'][0]['geometry']['location']
return coords
# main program here:
location = input("Enter a location: ")
coords = get_coordinates_using_google(location)
print("%s is located at (%f,%f)" %(location, coords['lat'], coords['lng']))
```
### Other request methods
Not every API we call uses the `get()` method. Some use `post()` because the amount of data you provide is too large to place on the url.
An example of this is the **Text-Processing.com** sentiment analysis service. http://text-processing.com/docs/sentiment.html This service will detect the sentiment or mood of text. You give the service some text, and it tells you whether that text is positive, negative or neutral.
```
# 'you suck' == 'negative'
url = 'http://text-processing.com/api/sentiment/'
options = { 'text' : 'you suck'}
response = requests.post(url, data = options)
sentiment = response.json()
sentiment
# 'I love cheese' == 'positive'
url = 'http://text-processing.com/api/sentiment/'
options = { 'text' : 'I love cheese'}
response = requests.post(url, data = options)
sentiment = response.json()
sentiment
```
In the examples provided we used the `post()` method instead of the `get()` method. The `post()` method has a named argument `data` which takes a dictionary of data. The key required by **text-processing.com** is `text`, which holds the text you would like to process for sentiment.
We use a post in the event the text we wish to process is very long. Case in point:
```
tweet = "Arnold Schwarzenegger isn't voluntarily leaving the Apprentice, he was fired by his bad (pathetic) ratings, not by me. Sad end to great show"
url = 'http://text-processing.com/api/sentiment/'
options = { 'text' : tweet }
response = requests.post(url, data = options)
sentiment = response.json()
sentiment
```
## Part 3: Proper Error Handling (In 3 Simple Rules)
When you write code that depends on other people's code from around the Internet, there's a lot that can go wrong. Therefore we prescribe the following advice:
```
Assume anything that CAN go wrong WILL go wrong
```
### Rule 1: Don't assume the internet 'always works'
The first rule of programming over a network is to NEVER assume the network is available. You need to assume the worst. No WiFi, user types in a bad url, the remote website is down, etc.
We handle this in the `requests` module by catching the `requests.exceptions.RequestException` Here's an example:
```
url = "http://this is not a website"
try:
response = requests.get(url) # throws an exception when it cannot connect
# internet is broken
except requests.exceptions.RequestException as e:
print("ERROR: Cannot connect to ", url)
print("DETAILS:", e)
```
### Rule 2: Don't assume the response you get back is valid
Assuming the internet is not broken (Rule 1), you should now check for HTTP response code 200, which means the url responded successfully. Other responses like 404 or 501 indicate an error occurred, and that means you should not keep processing the response.
Here's one way to do it:
```
url = 'http://www.syr.edu/mikeisawesum' # this should 404
try:
response = requests.get(url)
if response.ok: # same as response.status_code == 200
data = response.text
else: # Some other non 200 response code
print("There was an Error requesting:", url, " HTTP Response Code: ", response.status_code)
# internet is broken
except requests.exceptions.RequestException as e:
print("ERROR: Cannot connect to ", url)
print("DETAILS:", e)
```
### Rule 2a: Use exceptions instead of if else in this case
Personally, I don't like to use `if ... else` to handle an error. Instead, I prefer to instruct `requests` to throw a `requests.exceptions.HTTPError` exception whenever the response is not ok. This makes the code you write a little cleaner.
Errors are rare occurrences, and so I don't like error handling cluttering up my code.
```
url = 'http://www.syr.edu/mikeisawesum' # this should 404
try:
response = requests.get(url) # throws an exception when it cannot connect
response.raise_for_status() # throws an exception when not 'ok'
data = response.text
# response not ok
except requests.exceptions.HTTPError as e:
print("ERROR: Response from ", url, 'was not ok.')
print("DETAILS:", e)
# internet is broken
except requests.exceptions.RequestException as e:
print("ERROR: Cannot connect to ", url)
print("DETAILS:", e)
```
### Rule 3: Don't assume the data you get back is the data you expect.
And finally, do not assume the data arriving in the `response` is the data you expected. Specifically, when you try to decode the `JSON`, don't assume that will go smoothly. Catch the `json.decoder.JSONDecodeError`.
```
url = 'http://www.syr.edu' # this is HTML, not JSON
try:
response = requests.get(url) # throws an exception when it cannot connect
response.raise_for_status() # throws an exception when not 'ok'
data = response.json() # throws an exception when cannot decode json
# cannot decode json
except json.decoder.JSONDecodeError as e:
print("ERROR: Cannot decode the response into json")
print("DETAILS", e)
# response not ok
except requests.exceptions.HTTPError as e:
print("ERROR: Response from ", url, 'was not ok.')
print("DETAILS:", e)
# internet is broken
except requests.exceptions.RequestException as e:
print("ERROR: Cannot connect to ", url)
print("DETAILS:", e)
```
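The decode failure in Rule 3 can be reproduced without any network at all, using only the standard library's `json` module (which is roughly what `response.json()` does under the hood when it parses the body):

```python
import json

# An HTML body, as a server might return for a web page -- not JSON.
html_body = "<html><body>Not JSON at all</body></html>"

try:
    data = json.loads(html_body)  # roughly what response.json() attempts
except json.decoder.JSONDecodeError as e:
    data = None
    print("ERROR: Cannot decode the response into json")
    print("DETAILS", e)
```

Because the body starts with `<` rather than a JSON value, `json.loads()` raises `JSONDecodeError`, which is exactly the exception the handler above catches.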
### Now You Try It!
Using the last example above, write a program to input a location, call the `get_coordinates_using_google()` function, then print the coordinates. Make sure to handle all three types of exceptions!!!
```
url = 'http://maps.googleapis.com/maps/api/geocode/json'
def get_coordinates_using_google(location):
    options = { 'address' : location }
    response = requests.get(url, params = options)  # throws an exception when it cannot connect
    response.raise_for_status()                     # throws an exception when not 'ok'
    geodata = response.json()                       # throws an exception when cannot decode json
    coords = geodata['results'][0]['geometry']['location']
    return coords
location = input("Enter the location: ")
try:
    coords = get_coordinates_using_google(location)
    print("%s is located at (%f,%f)" % (location, coords['lat'], coords['lng']))
except IndexError:
print("Invalid location")
except json.decoder.JSONDecodeError as e:
print("ERROR: Cannot decode the response into json")
print("DETAILS", e)
except requests.exceptions.HTTPError as e:
print("ERROR: Response from ", url, 'was not ok.')
print("DETAILS:", e)
except requests.exceptions.RequestException as e:
print("ERROR: Cannot connect to ", url)
print("DETAILS:", e)
```
# Preface
The locations requiring configuration for your experiment are marked with comments in capital letters.
# Setup
**Installations**
```
!pip install apricot-select
!pip install sphinxcontrib-napoleon
!pip install sphinxcontrib-bibtex
!git clone https://github.com/decile-team/distil.git
!git clone https://github.com/circulosmeos/gdown.pl.git
!git clone https://github.com/owruby/shake-shake_pytorch.git
!mv distil asdf
!mv asdf/distil .
!mv shake-shake_pytorch/models .
```
**Experiment-Specific Imports**
```
from distil.utils.data_handler import DataHandler_CIFAR10, DataHandler_Points # IMPORT YOUR DATAHANDLER HERE
from models.shake_resnet import ShakeResNet
```
**Imports, Training Class Definition, Experiment Procedure Definition**
Nothing needs to be modified in this code block unless it specifically pertains to a change of experimental procedure.
```
import pandas as pd
import numpy as np
import copy
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
from torch.utils.data import Subset
import torch.nn.functional as F
from torch import nn
from torchvision import transforms
from torchvision import datasets
from PIL import Image
import torch
import torch.optim as optim
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn
from torch.autograd import Variable
import sys
sys.path.append('../')
import matplotlib.pyplot as plt
import time
import math
import random
import os
import pickle
from numpy.linalg import cond
from numpy.linalg import inv
from numpy.linalg import norm
from scipy import sparse as sp
from scipy.linalg import lstsq
from scipy.linalg import solve
from scipy.optimize import nnls
from distil.active_learning_strategies.badge import BADGE
from distil.active_learning_strategies.glister import GLISTER
from distil.active_learning_strategies.margin_sampling import MarginSampling
from distil.active_learning_strategies.entropy_sampling import EntropySampling
from distil.active_learning_strategies.random_sampling import RandomSampling
from distil.active_learning_strategies.gradmatch_active import GradMatchActive
from distil.active_learning_strategies.craig_active import CRAIGActive
from distil.active_learning_strategies.fass import FASS
from distil.active_learning_strategies.adversarial_bim import AdversarialBIM
from distil.active_learning_strategies.adversarial_deepfool import AdversarialDeepFool
from distil.active_learning_strategies.core_set import CoreSet
from distil.active_learning_strategies.least_confidence import LeastConfidence
from distil.active_learning_strategies.margin_sampling import MarginSampling
from distil.active_learning_strategies.bayesian_active_learning_disagreement_dropout import BALDDropout
from distil.utils.dataset import get_dataset
from google.colab import drive
import warnings
warnings.filterwarnings("ignore")
from models.shakeshake import ShakeShake
from models.shakeshake import Shortcut
class ShakeBlock(nn.Module):
def __init__(self, in_ch, out_ch, stride=1):
super(ShakeBlock, self).__init__()
self.equal_io = in_ch == out_ch
self.shortcut = None if self.equal_io else Shortcut(in_ch, out_ch, stride=stride)
self.branch1 = self._make_branch(in_ch, out_ch, stride)
self.branch2 = self._make_branch(in_ch, out_ch, stride)
def forward(self, x):
h1 = self.branch1(x)
h2 = self.branch2(x)
h = ShakeShake.apply(h1, h2, self.training)
h0 = x if self.equal_io else self.shortcut(x)
return h + h0
def _make_branch(self, in_ch, out_ch, stride=1):
return nn.Sequential(
nn.ReLU(inplace=False),
nn.Conv2d(in_ch, out_ch, 3, padding=1, stride=stride, bias=False),
nn.BatchNorm2d(out_ch),
nn.ReLU(inplace=False),
nn.Conv2d(out_ch, out_ch, 3, padding=1, stride=1, bias=False),
nn.BatchNorm2d(out_ch))
class ShakeResNet(nn.Module):
def __init__(self, depth, w_base, label):
super(ShakeResNet, self).__init__()
n_units = (depth - 2) / 6
in_chs = [16, w_base, w_base * 2, w_base * 4]
self.in_chs = in_chs
self.c_in = nn.Conv2d(3, in_chs[0], 3, padding=1)
self.layer1 = self._make_layer(n_units, in_chs[0], in_chs[1])
self.layer2 = self._make_layer(n_units, in_chs[1], in_chs[2], 2)
self.layer3 = self._make_layer(n_units, in_chs[2], in_chs[3], 2)
self.fc_out = nn.Linear(in_chs[3], label)
# Initialize paramters
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.Linear):
m.bias.data.zero_()
def forward(self, x, last=False):
h = self.c_in(x)
h = self.layer1(h)
h = self.layer2(h)
h = self.layer3(h)
h = F.relu(h)
h = F.avg_pool2d(h, 8)
h = h.view(-1, self.in_chs[3])
if last:
output = self.fc_out(h)
return output, h
else:
h = self.fc_out(h)
return h
def get_embedding_dim(self):
return self.fc_out.in_features
def _make_layer(self, n_units, in_ch, out_ch, stride=1):
layers = []
for i in range(int(n_units)):
layers.append(ShakeBlock(in_ch, out_ch, stride=stride))
in_ch, stride = out_ch, 1
return nn.Sequential(*layers)
def init_weights(m):
if type(m) == nn.Linear:
torch.nn.init.xavier_uniform_(m.weight)
m.bias.data.fill_(0.01)
#custom training
class data_train:
def __init__(self, X, Y, net, handler, args):
self.X = X
self.Y = Y
self.net = net
self.handler = handler
self.args = args
self.n_pool = len(Y)
if 'islogs' not in args:
self.args['islogs'] = False
if 'optimizer' not in args:
self.args['optimizer'] = 'sgd'
if 'isverbose' not in args:
self.args['isverbose'] = False
if 'isreset' not in args:
self.args['isreset'] = True
if 'max_accuracy' not in args:
self.args['max_accuracy'] = 0.95
if 'min_diff_acc' not in args: #Threshold to monitor for
self.args['min_diff_acc'] = 0.001
if 'window_size' not in args: #Window for monitoring accuracies
self.args['window_size'] = 10
if 'criterion' not in args:
self.args['criterion'] = nn.CrossEntropyLoss()
if 'device' not in args:
self.device = "cuda" if torch.cuda.is_available() else "cpu"
else:
self.device = args['device']
def update_index(self, idxs_lb):
self.idxs_lb = idxs_lb
def update_data(self, X, Y):
self.X = X
self.Y = Y
def get_acc_on_set(self, X_test, Y_test):
try:
self.clf
except:
self.clf = self.net
if X_test is None:
raise ValueError("Test data not present")
if Y_test is None:
raise ValueError("Test labels not present")
if X_test.shape[0] != Y_test.shape[0]:
raise ValueError(f"X_test has {X_test.shape[0]} values but {Y_test.shape[0]} labels")
if 'batch_size' in self.args:
batch_size = self.args['batch_size']
else:
batch_size = 1
loader_te = DataLoader(self.handler(X_test, Y_test, False, use_test_transform=True), shuffle=False, pin_memory=True, batch_size=batch_size)
self.clf.eval()
accFinal = 0.
with torch.no_grad():
self.clf = self.clf.to(device=self.device)
for batch_id, (x,y,idxs) in enumerate(loader_te):
x, y = x.to(device=self.device), y.to(device=self.device)
out = self.clf(x)
accFinal += torch.sum(1.0*(torch.max(out,1)[1] == y)).item() #.data.item()
return accFinal / len(loader_te.dataset.X)
def _train_weighted(self, epoch, loader_tr, optimizer, gradient_weights):
self.clf.train()
accFinal = 0.
criterion = self.args['criterion']
criterion.reduction = "none"
for batch_id, (x, y, idxs) in enumerate(loader_tr):
x, y = x.to(device=self.device), y.to(device=self.device)
gradient_weights = gradient_weights.to(device=self.device)
optimizer.zero_grad()
out = self.clf(x)
# Modify the loss function to apply weights before reducing to a mean
loss = criterion(out, y.long())
# Perform a dot product with the loss vector and the weight vector, then divide by batch size.
weighted_loss = torch.dot(loss, gradient_weights[idxs])
weighted_loss = torch.div(weighted_loss, len(idxs))
accFinal += torch.sum(torch.eq(torch.max(out,1)[1],y)).item() #.data.item()
# Backward now does so on the weighted loss, not the regular mean loss
weighted_loss.backward()
# clamp gradients, just in case
# for p in filter(lambda p: p.grad is not None, self.clf.parameters()): p.grad.data.clamp_(min=-.1, max=.1)
optimizer.step()
return accFinal / len(loader_tr.dataset.X), weighted_loss
def _train(self, epoch, loader_tr, optimizer):
self.clf.train()
accFinal = 0.
criterion = self.args['criterion']
for batch_id, (x, y, idxs) in enumerate(loader_tr):
x, y = x.to(device=self.device), y.to(device=self.device)
optimizer.zero_grad()
out = self.clf(x)
loss = criterion(out, y.long())
accFinal += torch.sum((torch.max(out,1)[1] == y).float()).item()
loss.backward()
# clamp gradients, just in case
# for p in filter(lambda p: p.grad is not None, self.clf.parameters()): p.grad.data.clamp_(min=-.1, max=.1)
optimizer.step()
return accFinal / len(loader_tr.dataset.X), loss
def check_saturation(self, acc_monitor):
saturate = True
for i in range(len(acc_monitor)):
for j in range(i+1, len(acc_monitor)):
if acc_monitor[j] - acc_monitor[i] >= self.args['min_diff_acc']:
saturate = False
break
return saturate
def train(self, gradient_weights=None):
print('Training..')
def weight_reset(m):
if hasattr(m, 'reset_parameters'):
m.reset_parameters()
train_logs = []
n_epoch = self.args['n_epoch']
if self.args['isreset']:
self.clf = self.net.apply(weight_reset).to(device=self.device)
else:
try:
self.clf
except:
self.clf = self.net.apply(weight_reset).to(device=self.device)
if self.args['optimizer'] == 'sgd':
optimizer = optim.SGD(self.clf.parameters(), lr = self.args['lr'], momentum=0.9, weight_decay=5e-4)
lr_sched = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=n_epoch)
elif self.args['optimizer'] == 'adam':
optimizer = optim.Adam(self.clf.parameters(), lr = self.args['lr'], weight_decay=0)
# ADD stochastic weight averaging
swa_sched = SWALR(optimizer, anneal_strategy="cos", swa_lr=self.args['lr'])
swa_model = AveragedModel(self.clf).to(device=self.device)
if 'batch_size' in self.args:
batch_size = self.args['batch_size']
else:
batch_size = 1
# Set shuffle to true to encourage stochastic behavior for SGD
loader_tr = DataLoader(self.handler(self.X, self.Y, False), batch_size=batch_size, shuffle=True, pin_memory=True)
epoch = 1
accCurrent = 0
is_saturated = False
acc_monitor = []
start_swa = False
while (accCurrent < self.args['max_accuracy']) and (epoch < n_epoch) and (not is_saturated):
if gradient_weights is None:
accCurrent, lossCurrent = self._train(epoch, loader_tr, optimizer)
else:
accCurrent, lossCurrent = self._train_weighted(epoch, loader_tr, optimizer, gradient_weights)
acc_monitor.append(accCurrent)
if accCurrent > 0.95:
start_swa = True
if start_swa:
swa_model.update_parameters(self.clf)
swa_sched.step()
elif self.args['optimizer'] == 'sgd':
lr_sched.step()
epoch += 1
if(self.args['isverbose']):
if epoch % 50 == 0:
print(str(epoch) + ' training accuracy: ' + str(accCurrent), flush=True)
#Stop training if not converging
if len(acc_monitor) >= self.args['window_size']:
is_saturated = self.check_saturation(acc_monitor)
del acc_monitor[0]
log_string = 'Epoch:' + str(epoch) + '- training accuracy:'+str(accCurrent)+'- training loss:'+str(lossCurrent)
train_logs.append(log_string)
if (epoch % 50 == 0) and (accCurrent < 0.2): # reset if not converging
self.clf = self.net.apply(weight_reset).to(device=self.device)
if self.args['optimizer'] == 'sgd':
optimizer = optim.SGD(self.clf.parameters(), lr = self.args['lr'], momentum=0.9, weight_decay=5e-4)
lr_sched = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=n_epoch)
else:
optimizer = optim.Adam(self.clf.parameters(), lr = self.args['lr'], weight_decay=0)
print('Epoch:', str(epoch), 'Training accuracy:', round(accCurrent, 3), flush=True)
# Update batch normalization and set averaged model to clf
update_bn(loader_tr, swa_model, device=self.device)
self.clf = swa_model.module
if self.args['islogs']:
return self.clf, train_logs
else:
return self.clf
class Checkpoint:
def __init__(self, acc_list=None, indices=None, state_dict=None, experiment_name=None, path=None):
# If a path is supplied, load a checkpoint from there.
if path is not None:
if experiment_name is not None:
self.load_checkpoint(path, experiment_name)
else:
raise ValueError("Checkpoint contains None value for experiment_name")
return
if acc_list is None:
raise ValueError("Checkpoint contains None value for acc_list")
if indices is None:
raise ValueError("Checkpoint contains None value for indices")
if state_dict is None:
raise ValueError("Checkpoint contains None value for state_dict")
if experiment_name is None:
raise ValueError("Checkpoint contains None value for experiment_name")
self.acc_list = acc_list
self.indices = indices
self.state_dict = state_dict
self.experiment_name = experiment_name
def __eq__(self, other):
# Check if the accuracy lists are equal
acc_lists_equal = self.acc_list == other.acc_list
# Check if the indices are equal
indices_equal = self.indices == other.indices
# Check if the experiment names are equal
experiment_names_equal = self.experiment_name == other.experiment_name
return acc_lists_equal and indices_equal and experiment_names_equal
def save_checkpoint(self, path):
# Get current time to use in file timestamp
timestamp = time.time_ns()
# Create the path supplied
os.makedirs(path, exist_ok=True)
# Name saved files using timestamp to add recency information
save_path = os.path.join(path, F"c{timestamp}1")
copy_save_path = os.path.join(path, F"c{timestamp}2")
# Write this checkpoint to the first save location
with open(save_path, 'wb') as save_file:
pickle.dump(self, save_file)
# Write this checkpoint to the second save location
with open(copy_save_path, 'wb') as copy_save_file:
pickle.dump(self, copy_save_file)
def load_checkpoint(self, path, experiment_name):
# Obtain a list of all files present at the path
timestamp_save_no = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
# If there are no such files, set values to None and return
if len(timestamp_save_no) == 0:
self.acc_list = None
self.indices = None
self.state_dict = None
return
# Sort the list of strings to get the most recent
timestamp_save_no.sort(reverse=True)
# Read in two files at a time, checking if they are equal to one another.
# If they are equal, then it means that the save operation finished correctly.
# If they are not, then it means that the save operation failed (could not be
# done atomically). Repeat this action until no possible pair can exist.
while len(timestamp_save_no) > 1:
# Pop a most recent checkpoint copy
first_file = timestamp_save_no.pop(0)
# Keep popping until two copies with equal timestamps are present
while True:
second_file = timestamp_save_no.pop(0)
# Timestamps match if the removal of the "1" or "2" results in equal numbers
if (second_file[:-1]) == (first_file[:-1]):
break
else:
first_file = second_file
# If there are no more checkpoints to examine, set to None and return
if len(timestamp_save_no) == 0:
self.acc_list = None
self.indices = None
self.state_dict = None
return
# Form the paths to the files
load_path = os.path.join(path, first_file)
copy_load_path = os.path.join(path, second_file)
# Load the two checkpoints
with open(load_path, 'rb') as load_file:
checkpoint = pickle.load(load_file)
with open(copy_load_path, 'rb') as copy_load_file:
checkpoint_copy = pickle.load(copy_load_file)
# Do not check this experiment if it is not the one we need to restore
if checkpoint.experiment_name != experiment_name:
continue
# Check if they are equal
if checkpoint == checkpoint_copy:
# This checkpoint will suffice. Populate this checkpoint's fields
# with the selected checkpoint's fields.
self.acc_list = checkpoint.acc_list
self.indices = checkpoint.indices
self.state_dict = checkpoint.state_dict
return
# Instantiate None values in acc_list, indices, and model
self.acc_list = None
self.indices = None
self.state_dict = None
def get_saved_values(self):
return (self.acc_list, self.indices, self.state_dict)
def delete_checkpoints(checkpoint_directory, experiment_name):
# Iteratively go through each checkpoint, deleting those whose experiment name matches.
timestamp_save_no = [f for f in os.listdir(checkpoint_directory) if os.path.isfile(os.path.join(checkpoint_directory, f))]
for file in timestamp_save_no:
delete_file = False
# Get file location
file_path = os.path.join(checkpoint_directory, file)
if not os.path.exists(file_path):
continue
# Unpickle the checkpoint and see if its experiment name matches
with open(file_path, "rb") as load_file:
checkpoint_copy = pickle.load(load_file)
if checkpoint_copy.experiment_name == experiment_name:
delete_file = True
# Delete this file only if the experiment name matched
if delete_file:
os.remove(file_path)
#Logs
def write_logs(logs, save_directory, rd, run):
file_path = save_directory + 'run_'+str(run)+'.txt'
with open(file_path, 'a') as f:
f.write('---------------------\n')
f.write('Round '+str(rd)+'\n')
f.write('---------------------\n')
for key, val in logs.items():
if key == 'Training':
f.write(str(key)+ '\n')
for epoch in val:
f.write(str(epoch)+'\n')
else:
f.write(str(key) + ' - '+ str(val) +'\n')
def train_one(X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, strategy, save_directory, run, checkpoint_directory, experiment_name):
# Define acc initially
acc = np.zeros(n_rounds+1)
initial_unlabeled_size = X_unlabeled.shape[0]
initial_round = 1
# Define an index map
index_map = np.array([x for x in range(initial_unlabeled_size)])
# Attempt to load a checkpoint. If one exists, then the experiment crashed.
training_checkpoint = Checkpoint(experiment_name=experiment_name, path=checkpoint_directory)
rec_acc, rec_indices, rec_state_dict = training_checkpoint.get_saved_values()
# Check if there are values to recover
if rec_acc is not None:
# Restore the accuracy list
for i in range(len(rec_acc)):
acc[i] = rec_acc[i]
# Restore the indices list and shift those unlabeled points to the labeled set.
index_map = np.delete(index_map, rec_indices)
# Record initial size of X_tr
initial_seed_size = X_tr.shape[0]
X_tr = np.concatenate((X_tr, X_unlabeled[rec_indices]), axis=0)
X_unlabeled = np.delete(X_unlabeled, rec_indices, axis = 0)
y_tr = np.concatenate((y_tr, y_unlabeled[rec_indices]), axis = 0)
y_unlabeled = np.delete(y_unlabeled, rec_indices, axis = 0)
# Restore the model
net.load_state_dict(rec_state_dict)
# Fix the initial round
initial_round = (X_tr.shape[0] - initial_seed_size) // budget + 1
# Ensure loaded model is moved to GPU
if torch.cuda.is_available():
net = net.cuda()
strategy.update_model(net)
strategy.update_data(X_tr, y_tr, X_unlabeled)
else:
if torch.cuda.is_available():
net = net.cuda()
acc[0] = dt.get_acc_on_set(X_test, y_test)
print('Initial Testing accuracy:', round(acc[0]*100, 2), flush=True)
logs = {}
logs['Training Points'] = X_tr.shape[0]
logs['Test Accuracy'] = str(round(acc[0]*100, 2))
write_logs(logs, save_directory, 0, run)
#Updating the trained model in strategy class
strategy.update_model(net)
##User Controlled Loop
for rd in range(initial_round, n_rounds+1):
print('-------------------------------------------------')
print('Round', rd)
print('-------------------------------------------------')
sel_time = time.time()
idx = strategy.select(budget)
sel_time = time.time() - sel_time
print("Selection Time:", sel_time)
#Saving state of model, since labeling new points might take time
# strategy.save_state()
#Adding new points to training set
X_tr = np.concatenate((X_tr, X_unlabeled[idx]), axis=0)
X_unlabeled = np.delete(X_unlabeled, idx, axis = 0)
#Human In Loop, Assuming user adds new labels here
y_tr = np.concatenate((y_tr, y_unlabeled[idx]), axis = 0)
y_unlabeled = np.delete(y_unlabeled, idx, axis = 0)
# Update the index map
index_map = np.delete(index_map, idx, axis = 0)
print('Number of training points -',X_tr.shape[0])
#Reload state and start training
# strategy.load_state()
strategy.update_data(X_tr, y_tr, X_unlabeled)
dt.update_data(X_tr, y_tr)
t1 = time.time()
clf, train_logs = dt.train(None)
t2 = time.time()
acc[rd] = dt.get_acc_on_set(X_test, y_test)
logs = {}
logs['Training Points'] = X_tr.shape[0]
logs['Test Accuracy'] = str(round(acc[rd]*100, 2))
logs['Selection Time'] = str(sel_time)
logs['Training Time'] = str(t2 - t1)
logs['Training'] = train_logs
write_logs(logs, save_directory, rd, run)
strategy.update_model(clf)
print('Testing accuracy:', round(acc[rd]*100, 2), flush=True)
# Create a checkpoint
used_indices = np.arange(initial_unlabeled_size)
used_indices = np.delete(used_indices, index_map).tolist()
round_checkpoint = Checkpoint(acc.tolist(), used_indices, clf.state_dict(), experiment_name=experiment_name)
round_checkpoint.save_checkpoint(checkpoint_directory)
print('Training Completed')
return acc
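# Hypothetical illustration (the numbers are made up, not from any experiment):
# the resume round computed in train_one comes from how many points beyond the
# initial seed were already labeled before the crash. With a 1000-point seed,
# a budget of 3000, and 7000 labeled points recovered from a checkpoint,
# training resumes at round 3.
_demo_seed_size, _demo_budget, _demo_labeled_size = 1000, 3000, 7000
assert (_demo_labeled_size - _demo_seed_size) // _demo_budget + 1 == 3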
# Define a function to perform experiments in bulk and return the mean accuracies
def BADGE_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = BADGE(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def random_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = RandomSampling(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def entropy_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = EntropySampling(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def GLISTER_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'lr': args['lr'], 'device':args['device']}
strategy = GLISTER(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args, valid=False, typeOf='rand', lam=0.1)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def FASS_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = FASS(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def adversarial_bim_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = AdversarialBIM(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def adversarial_deepfool_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = AdversarialDeepFool(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def coreset_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = CoreSet(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def least_confidence_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = LeastConfidence(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def margin_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = MarginSampling(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def bald_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = BALDDropout(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
```
# CIFAR10
**Parameter Definitions**
Parameters related to the specific experiment are placed here. You should examine each and modify them as needed.
```
data_set_name = 'CIFAR10'
download_path = '../downloaded_data/'
handler = DataHandler_CIFAR10 # PUT DATAHANDLER HERE
net = ShakeResNet(depth=14, w_base=32, label=10) # MODEL HERE
# MODIFY AS NECESSARY
logs_directory = '/content/gdrive/MyDrive/colab_storage/logs/'
checkpoint_directory = '/content/gdrive/MyDrive/colab_storage/check/'
initial_model = data_set_name
model_directory = "/content/gdrive/MyDrive/colab_storage/model/"
experiment_name = "SHAKE-SHAKE AND SWA"
initial_seed_size = 1000 # INIT SEED SIZE HERE
training_size_cap = 25000 # TRAIN SIZE CAP HERE
nclasses = 10 # NUM CLASSES HERE
budget = 3000 # BUDGET HERE
# CHANGE ARGS AS NECESSARY
args = {'n_epoch':300, 'lr':float(0.01), 'batch_size':20, 'max_accuracy':float(0.99), 'num_classes':nclasses, 'islogs':True, 'isreset':True, 'isverbose':True, 'device':'cuda'}
# Train on approximately the full dataset given the budget constraints
n_rounds = (training_size_cap - initial_seed_size) // budget
# SET N EXP TO RUN (>1 for repeat)
n_exp = 1
```
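The round count is plain integer arithmetic: each round labels `budget` new points, so integer division of the remaining points by the budget gives the number of full rounds. As a sanity check (using the hypothetical defaults shown in the cell above):

```python
# Sanity check of the round-count arithmetic; the values mirror the defaults
# above and should be adjusted along with the parameters.
initial_seed_size = 1000
training_size_cap = 25000
budget = 3000

# Integer division: each round labels `budget` points until the cap is reached.
n_rounds = (training_size_cap - initial_seed_size) // budget
print(n_rounds)  # 8
```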
**Initial Loading and Training**
You may choose to train a new initial model or to continue to load a specific model. If this notebook is being executed in Colab, you should consider whether or not you need the gdown line.
```
# Mount drive containing possible saved model and define file path.
colab_model_storage_mount = "/content/gdrive"
drive.mount(colab_model_storage_mount)
# Retrieve the model from Apurva's link and save it to the drive
os.makedirs(logs_directory, exist_ok = True)
os.makedirs(checkpoint_directory, exist_ok = True)
os.makedirs(model_directory, exist_ok = True)
model_directory = os.path.join(model_directory, data_set_name)
#!/content/gdown.pl/gdown.pl "clone link" "clone location" # MAY NOT NEED THIS LINE IF NOT CLONING MODEL FROM COLAB
X, y, X_test, y_test = get_dataset(data_set_name, download_path)
dim = np.shape(X)[1:]
X_tr = X[:initial_seed_size]
y_tr = y[:initial_seed_size].numpy()
X_unlabeled = X[initial_seed_size:]
y_unlabeled = y[initial_seed_size:].numpy()
y_test = y_test.numpy()
# COMMENT OUT ONE OR THE OTHER IF YOU WANT TO TRAIN A NEW INITIAL MODEL
load_model = False
#load_model = True
# Only train a new model if one does not exist.
if load_model:
net.load_state_dict(torch.load(model_directory))
dt = data_train(X_tr, y_tr, net, handler, args)
clf = net
else:
dt = data_train(X_tr, y_tr, net, handler, args)
clf, _ = dt.train(None)
torch.save(clf.state_dict(), model_directory)
print("Training for", n_rounds, "rounds with budget", budget, "on unlabeled set size", training_size_cap)
```
**Random Sampling**
```
strat_logs = logs_directory+F'{data_set_name}/random_sampling/'
os.makedirs(strat_logs, exist_ok = True)
mean_test_acc_random = random_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, F"{experiment_name}_random")
```
**Entropy (Uncertainty) Sampling**
```
strat_logs = logs_directory+F'{data_set_name}/entropy_sampling/'
os.makedirs(strat_logs, exist_ok = True)
mean_test_acc_entropy = entropy_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, F"{experiment_name}_entropy")
```
**BADGE**
```
strat_logs = logs_directory+F'{data_set_name}/badge/'
os.makedirs(strat_logs, exist_ok = True)
mean_test_acc_badge = BADGE_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, F"{experiment_name}_badge")
```
# Creating a Sentiment Analysis Web App
## Using PyTorch and SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited, typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
## General Outline
Recall the general outline for SageMaker projects using a notebook instance.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
For this project, you will be following the steps in the general outline with some modifications.
First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.
In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app.
```
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
```
## Step 1: Downloading the data
As in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/).
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
```
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
```
## Step 2: Preparing and Processing the data
Also, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
```
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
```
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
```
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
```
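`sklearn.utils.shuffle` permutes both arrays with the same permutation, so each review stays aligned with its label. The same idea can be sketched with just the standard library (an illustration with made-up data, not the notebook's code):

```python
import random

# Toy review/label pairs: 1 = positive, 0 = negative.
reviews = ["good movie", "bad movie", "great plot", "dull plot"]
labels = [1, 0, 1, 0]

# Shuffle index positions once, then apply the same order to both lists so
# each review keeps its label.
order = list(range(len(reviews)))
random.Random(0).shuffle(order)
reviews = [reviews[i] for i in order]
labels = [labels[i] for i in order]

# Every review still carries its original label after shuffling.
assert all((("good" in r or "great" in r) == bool(l)) for r, l in zip(reviews, labels))
```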
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
```
print(train_X[100])
print(train_y[100])
```
The first step in processing the reviews is to remove any HTML tags that appear. In addition, we wish to stem the words in our input so that, for example, *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
```
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
    nltk.download("stopwords", quiet=True)
    stemmer = PorterStemmer()
    text = BeautifulSoup(review, "html.parser").get_text()  # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())  # Lower case; keep only letters and digits
    words = text.split()  # Split string into words
    words = [w for w in words if w not in stopwords.words("english")]  # Remove stopwords
    words = [stemmer.stem(w) for w in words]  # Reduce each word to its stem
    return words
```
The `review_to_words` method defined above uses `BeautifulSoup` to remove any HTML tags that appear and uses the `nltk` package to remove stopwords and stem the words in the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
```
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
print(review_to_words(train_X[100]))
```
**Question:** Above we mentioned that the `review_to_words` method removes HTML formatting and stems the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as the same word. What else, if anything, does this method do to the input?
**Answer:**
As we can see in the code and from the result:
* Only letters and digits are kept; every other character is replaced with a space
* All words are converted to lower case
* English stopwords are removed
* Each word is reduced to its stem
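A tiny self-contained illustration of the cleaning steps (a made-up three-word stopword list stands in for nltk's, and stemming is omitted so the snippet runs without any downloads):

```python
import re

STOPWORDS = {'the', 'is', 'and'}  # stand-in for stopwords.words("english")

def clean(review):
    # Lowercase and replace everything except letters and digits with spaces
    text = re.sub(r"[^a-zA-Z0-9]", " ", review.lower())
    # Split into words and drop stopwords
    return [w for w in text.split() if w not in STOPWORDS]

print(clean("The movie is GREAT, and fun!"))
# → ['movie', 'great', 'fun']
```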
The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
```
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
                    cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
    """Convert each review to words; read from cache if available."""
    # If cache_file is not None, try to read from it first
    cache_data = None
    if cache_file is not None:
        try:
            with open(os.path.join(cache_dir, cache_file), "rb") as f:
                cache_data = pickle.load(f)
            print("Read preprocessed data from cache file:", cache_file)
        except:
            pass  # unable to read from cache, but that's okay
    # If cache is missing, then do the heavy lifting
    if cache_data is None:
        # Preprocess training and test data to obtain words for each review
        words_train = [review_to_words(review) for review in data_train]
        words_test = [review_to_words(review) for review in data_test]
        # Write to cache file for future runs
        if cache_file is not None:
            cache_data = dict(words_train=words_train, words_test=words_test,
                              labels_train=labels_train, labels_test=labels_test)
            with open(os.path.join(cache_dir, cache_file), "wb") as f:
                pickle.dump(cache_data, f)
            print("Wrote preprocessed data to cache file:", cache_file)
    else:
        # Unpack data loaded from cache file
        words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
                cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
    return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
```
## Transform the data
In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.
Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews.
### (TODO) Create a word dictionary
To begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.
> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
```
import numpy as np
def build_dict(data, vocab_size=5000):
    """Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
    # Determine how often each word appears in `data`. Note that `data` is a list of
    # sentences and that a sentence is a list of words.
    word_count = {}  # A dict storing the words that appear in the reviews along with how often they occur
    for sentence in data:
        for word in sentence:  # a collections.Counter would be tidier than a plain dict here
            if word in word_count:
                word_count[word] += 1
            else:
                word_count[word] = 1
    # Sort the words found in `data` so that sorted_words[0] is the most frequently
    # appearing word and sorted_words[-1] is the least frequently appearing word.
    sorted_words = sorted(word_count,          # sort the keys of word_count
                          key=word_count.get,  # by their counts
                          reverse=True)        # highest count first
    word_dict = {}  # This is what we are building, a dictionary that translates words into integers
    for idx, word in enumerate(sorted_words[:vocab_size - 2]):  # The -2 saves room for the
        word_dict[word] = idx + 2                               # 'no word' and 'infrequent' labels
    return word_dict
word_dict = build_dict(train_X)
```
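As the comment inside the loop suggests, `collections.Counter` is a tidier way to tally word frequencies. A sketch of an equivalent builder (the name `build_dict_counter` is made up for illustration):

```python
from collections import Counter

def build_dict_counter(data, vocab_size=5000):
    """Equivalent to build_dict above, using Counter instead of a manual tally."""
    word_count = Counter(word for sentence in data for word in sentence)
    # most_common already returns words sorted by descending frequency;
    # keep vocab_size - 2 of them, leaving 0 and 1 free for the special labels
    return {word: idx + 2
            for idx, (word, _) in enumerate(word_count.most_common(vocab_size - 2))}

sample = [['movi', 'great', 'movi'], ['film', 'movi', 'film']]
print(build_dict_counter(sample, vocab_size=5))
# → {'movi': 2, 'film': 3, 'great': 4}
```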
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set?
**Answer:**
The top five words are: *movi*, *film*, *one*, *like*, *time*.
These make perfect sense: they are exactly the (stemmed) words I would expect to dominate a collection of movie reviews.
```
# Determine the five most frequently appearing words in the training set.
# word_dict was built in order of decreasing frequency, so its first five keys
# are the most frequent words.
list(word_dict.keys())[:5]
```
### Save `word_dict`
Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.
```
data_dir = '../data/pytorch'  # The folder we will use for storing data
if not os.path.exists(data_dir):  # Make sure that the folder exists
    os.makedirs(data_dir)

with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
    pickle.dump(word_dict, f)
```
### Transform the reviews
Now that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
```
def convert_and_pad(word_dict, sentence, pad=500):
    NOWORD = 0  # We will use 0 to represent the 'no word' category
    INFREQ = 1  # and we use 1 to represent infrequent words, i.e., words not appearing in word_dict
    working_sentence = [NOWORD] * pad
    for word_index, word in enumerate(sentence[:pad]):
        if word in word_dict:
            working_sentence[word_index] = word_dict[word]
        else:
            working_sentence[word_index] = INFREQ
    return working_sentence, min(len(sentence), pad)

def convert_and_pad_data(word_dict, data, pad=500):
    result = []
    lengths = []
    for sentence in data:
        converted, leng = convert_and_pad(word_dict, sentence, pad)
        result.append(converted)
        lengths.append(leng)
    return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
```
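To see the padding and truncation behaviour concretely, here is a toy run (the function is repeated so the example is self-contained, and the three-word vocabulary is made up):

```python
def convert_and_pad(word_dict, sentence, pad=500):
    NOWORD = 0   # 'no word' padding category
    INFREQ = 1   # words not present in word_dict
    working_sentence = [NOWORD] * pad
    for word_index, word in enumerate(sentence[:pad]):
        working_sentence[word_index] = word_dict.get(word, INFREQ)
    return working_sentence, min(len(sentence), pad)

toy_dict = {'movi': 2, 'film': 3, 'great': 4}

# Short sentence: unknown words become 1, the rest of the row is 0-padding
print(convert_and_pad(toy_dict, ['great', 'zzz', 'movi'], pad=5))
# → ([4, 1, 2, 0, 0], 3)

# Long sentence: truncated to the pad length
print(convert_and_pad(toy_dict, ['movi'] * 7, pad=5))
# → ([2, 2, 2, 2, 2], 5)
```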
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?
```
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print(train_X[100])
print(len(train_X[100]))
```
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem?
**Answer:**
Since we pad both data sets to a fixed length of 500, most of each encoded review is empty padding (at least for the example shown), which might hurt training.
## Step 3: Upload the data to S3
As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.
### Save the processed training dataset locally
It is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.
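A quick sketch of what a single row looks like under this layout (toy values, pad length 5 instead of 500):

```python
import csv
import io

label, length = 1, 3      # positive review with 3 real words
review = [4, 1, 2, 0, 0]  # padded integer-encoded review (500 entries in the real data)

buffer = io.StringIO()
csv.writer(buffer).writerow([label, length] + review)
print(buffer.getvalue().strip())
# → 1,3,4,1,2,0,0
```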
```
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Uploading the training data
Next, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
```
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
```
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory.
## Step 4: Build and Train the PyTorch Model
In the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects
- Model Artifacts,
- Training Code, and
- Inference Code,
each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.
We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.
```
!pygmentize train/model.py
```
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.
First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try to train the model completely in the notebook as we do not have access to a GPU and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.
```
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
```
### (TODO) Writing the training method
Next we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.
```
def train(model, train_loader, epochs, optimizer, loss_fn, device):
    for epoch in range(1, epochs + 1):
        model.train()
        total_loss = 0
        for batch in train_loader:
            batch_X, batch_y = batch
            batch_X = batch_X.to(device)
            batch_y = batch_y.to(device)
            # Forward pass, compute the loss, then backpropagate and update the weights
            optimizer.zero_grad()
            output = model(batch_X)
            loss = loss_fn(output, batch_y)
            loss.backward()
            optimizer.step()
            total_loss += loss.data.item()
        print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
```
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
```
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)
```
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run.
### (TODO) Training the model
When a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.
**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.
The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.
```
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={
'epochs': 10,
'hidden_dim': 200,
})
estimator.fit({'training': input_data})
```
## Step 5: Testing the model
As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.
## Step 6: Deploy the model for testing
Now that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.
There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.
**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard ( ie, `if __name__ == '__main__':` )
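The layout, sketched with placeholder bodies: importing the module to get `model_fn` must not kick off training, so everything training-specific sits under the guard.

```python
# Sketch of the train.py structure (the bodies here are placeholders, not the real code)
def model_fn(model_dir):
    """Importable by the inference container without side effects."""
    return 'model loaded from {}'.format(model_dir)

if __name__ == '__main__':
    # Argument parsing and the training loop live here; this block does not
    # run when the module is merely imported for model_fn.
    print('training...')
```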
Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.
**NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running.
In other words **If you are no longer using a deployed endpoint, shut it down!**
**TODO:** Deploy the trained model.
```
# TODO: Deploy the trained model
predictor = estimator.deploy(initial_instance_count = 1,
instance_type = 'ml.m4.xlarge')
```
## Step 7 - Use the model for testing
Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
```
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)

# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
    split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
    predictions = np.array([])
    for array in split_array:
        predictions = np.append(predictions, predictor.predict(array))
    return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
```
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?
**Answer:**
With the XGBoost model we got a score of 0.8694, which is slightly better. XGBoost is a regularized gradient-boosting framework that performs especially well on this kind of classification task.
### (TODO) More testing
We now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
```
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
```
The question we now need to answer is, how do we send this review to our model?
Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews.
- Removed any html tags and stemmed the input
- Encoded the review as a sequence of integers using `word_dict`
In order to process the review we will need to repeat these two steps.
**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
```
# TODO: Convert test_review into a form usable by the model and save the results in test_data
#test_data = None
test_data = [np.array(convert_and_pad(word_dict, review_to_words(test_review))[0])]
```
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
```
predictor.predict(test_data)
```
Since the return value of our model is close to `1`, we can be fairly confident that the review we submitted is positive.
### Delete the endpoint
Of course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
```
estimator.delete_endpoint()
```
## Step 6 (again) - Deploy the model for the web app
Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.
As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.
We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.
When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.
- `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.
- `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.
- `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.
- `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.
For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.
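As a rough sketch (not the provided `serve/predict.py`, which may differ), text-in/text-out versions of these two functions could look like:

```python
def input_fn(serialized_input_data, content_type):
    """Deserialize the request body; we only accept plain-text reviews."""
    if content_type != 'text/plain':
        raise ValueError('Unsupported content type: ' + content_type)
    return serialized_input_data.decode('utf-8')

def output_fn(prediction_output, accept):
    """Serialize the single-number sentiment prediction for the caller."""
    return str(prediction_output)

print(input_fn(b'great movie', 'text/plain'))  # → great movie
print(output_fn(1, 'text/plain'))              # → 1
```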
### (TODO) Writing inference code
Before writing our custom inference code, we will begin by taking a look at the code which has been provided.
```
!pygmentize serve/predict.py
```
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.
**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file.
### Deploying the model
Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.
**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string, so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
```
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point='predict.py',
source_dir='serve',
predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
### Testing the model
Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews, sending them to the endpoint, and collecting the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long, so testing the entire data set would be prohibitive.
```
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
    results = []
    ground = []
    # We make sure to test both positive and negative reviews
    for sentiment in ['pos', 'neg']:
        path = os.path.join(data_dir, 'test', sentiment, '*.txt')
        files = glob.glob(path)
        files_read = 0
        print('Starting ', sentiment, ' files')
        # Iterate through the files and send them to the predictor
        for f in files:
            with open(f) as review:
                # First, we store the ground truth (was the review positive or negative)
                if sentiment == 'pos':
                    ground.append(1)
                else:
                    ground.append(0)
                # Read in the review and convert to 'utf-8' for transmission via HTTP
                review_input = review.read().encode('utf-8')
                # Send the review to the predictor and store the results
                results.append(float(predictor.predict(review_input)))
            # Sending reviews to our endpoint one at a time takes a while, so we
            # only send a small number of reviews
            files_read += 1
            if files_read == stop:
                break
    return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
```
As an additional test, we can try sending the `test_review` that we looked at earlier.
```
predictor.predict(test_review)
```
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back.
## Step 7 (again): Use the model for the web app
> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.
So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.
<img src="Web App Diagram.svg">
The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.
In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and receive data from a SageMaker endpoint.
Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.
### Setting up a Lambda function
The first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.
#### Part A: Create an IAM Role for the Lambda function
Since we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.
Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.
In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.
Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.
#### Part B: Create a Lambda function
Now it is time to actually create the Lambda function.
Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.
On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.
```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3
def lambda_handler(event, context):

    # The SageMaker runtime is what allows us to invoke the endpoint that we've created.
    runtime = boto3.Session().client('sagemaker-runtime')

    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
    response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**',  # The name of the endpoint we created
                                       ContentType = 'text/plain',               # The data format that is expected
                                       Body = event['body'])                     # The actual review

    # The response is an HTTP response whose body contains the result of our inference
    result = response['Body'].read().decode('utf-8')

    return {
        'statusCode' : 200,
        'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
        'body' : result
    }
```
Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
```
predictor.endpoint
```
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.
### Setting up API Gateway
Now that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.
Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.
On the next page, make sure that **New API** is selected and give the new API a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.
Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.
Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.
For the integration point, make sure that **Lambda Function** is selected and check the **Use Lambda Proxy integration** option. This option ensures that the data sent to the API is passed directly to the Lambda function without processing. It also means that the return value must be a proper response object, since it will not be processed by API Gateway either.
Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.
The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.
You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.
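Once the API is deployed, you can exercise it from Python as well as from the web app. The sketch below is illustrative only: the Invoke URL shown is a placeholder for the one you copied, and `build_request` is a helper name introduced here, not part of the project code.

```python
import urllib.request

def build_request(api_url, review):
    # API Gateway (with Lambda proxy integration) expects a plain-text POST
    # body containing the review itself.
    return urllib.request.Request(
        api_url,
        data=review.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )

# Placeholder URL: substitute the Invoke URL you copied from API Gateway.
req = build_request(
    "https://example.execute-api.us-east-1.amazonaws.com/prod",
    "This movie was wonderful!",
)
# urllib.request.urlopen(req).read().decode("utf-8") would then return the
# POSITIVE/NEGATIVE result produced by the Lambda function.
```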
## Step 4: Deploying our web app
Now that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static HTML file which can make use of the public API you created earlier.
In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the URL that you wrote down in the last step and then save the file.
Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.
If you'd like to go further, you can host this HTML file anywhere you'd like, for example using GitHub or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!
> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.
**TODO:** Make sure that you include the edited `index.html` file in your project submission.
Now that your web app is working, try playing around with it to see how well it works.
**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?
**Answer:**
WOW! The best word that describes this movie is "wow"! Not only to say that this is the best Action movie of all time, this is probably one of the greatest movies ever made . The people in my country watched this film when there where limited VHS cassettes at all. And again, my favorite Director did an timeless epic-masterpiece. Yes, an epic. Every scene in this movie is beyond the perfection. The timeless plot. Groundbreaking effects. Unforgettable "Hasta la vista, baby." .
Your review was POSITIVE!
Super Mario Bros. is a movie based off of the Super Mario Bros. video games. Or at least, that's what they SAY it is. In reality, Super Mario Bros. is a movie that pretends to be Mario. He's what I'm talking about: In the games, the main villain is Bowser, King of the Koopas who wishes to kidnap the Princess and rule the mushroom kingdom. In the movie, the main villain is Koopa, tyrant of a small city in the center of a world-wide desert who wishes to take over the world of man. In the games, a big bertha is a fish. In the movie, Big Bertha is the bouncer at the Boom Boom bar. In the games, Boom Boom is a person. The movie is just a crappy movie that takes Nintendo's names in an attempt to look better. And it fails at that. DO NOT see this movie.
Your review was NEGATIVE!
### Delete the endpoint
Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.
```
predictor.delete_endpoint()
```
# Solving Linear Equations
```
import numpy as np
import scipy.linalg as la
```
## Linear Equations
Consider a set of $m$ linear equations in $n$ unknowns:
\begin{align*}
a_{11} x_1 + &a_{12} x_2& +& ... + &a_{1n} x_n &=& b_1\\
\vdots && &&\vdots &= &\vdots\\
a_{m1} x_1 + &a_{m2} x_2& +& ... + &a_{mn} x_n &=&b_m
\end{align*}
We can let
\begin{align*}
A=\left[\begin{matrix}a_{11}&\cdots&a_{1n}\\
\vdots & &\vdots\\
a_{m1}&\cdots&a_{mn}\end{matrix}\right]
\end{align*}
\begin{align*}
x = \left[\begin{matrix}x_1\\
\vdots\\
x_n\end{matrix}\right] & \;\;\;\;\textrm{ and } &
b = \left[\begin{matrix}b_1\\
\vdots\\
b_m\end{matrix}\right]
\end{align*}
and re-write the system
$$ Ax = b$$
### Linear independence and existence of solutions
* If $A$ is an $m\times n$ matrix and $m>n$, there are more equations than unknowns: the system is *overdetermined* and, in general, *inconsistent*, so it cannot be solved exactly. This is the usual case in data analysis, and why least squares is so important. For example, we may be finding the parameters of a linear model, where there are $m$ data points and $n$ parameters.
* If $A$ is an $m\times n$ matrix and $m<n$, if all $m$ rows are linearly independent, then the system is *underdetermined* and there are *infinite* solutions.
* If $A$ is an $m\times n$ matrix and some of its rows are linearly dependent, then the system is *reducible*. We can get rid of some equations. In other words, there are equations in the system that do not give us any new information.
* If $A$ is a square matrix and its rows are linearly independent, the system has a unique solution. ($A$ is invertible.)
For a linear system, we can only get a unique solution, no solution, or infinite solutions.
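These cases can be checked numerically with the rank criterion: compare the rank of $A$ with the rank of the augmented matrix $[A|b]$. A small sketch using only `numpy` (the function name is ours):

```python
import numpy as np

def classify_system(A, b):
    # Rouche-Capelli: compare rank(A) with rank of the augmented matrix [A|b].
    aug = np.hstack([A, b.reshape(-1, 1)])
    rank_A = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(aug)
    if rank_A < rank_aug:
        return "none"        # inconsistent: b is outside the column space of A
    if rank_A == A.shape[1]:
        return "unique"      # consistent with full column rank
    return "infinite"        # consistent but rank-deficient

print(classify_system(np.array([[1., 2.], [3., 4.]]), np.array([3., 17.])))  # unique
print(classify_system(np.array([[1., 2.], [2., 4.]]), np.array([3., 6.])))   # infinite
print(classify_system(np.array([[1., 2.], [2., 4.]]), np.array([3., 7.])))   # none
```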
### Using `solve` to find unique solutions
\begin{align}
x + 2y &= 3 \\
3x + 4y &= 17
\end{align}
```
A = np.array([[1,2],[3,4]])
A
b = np.array([3,17]).reshape((-1,1))
b
x = la.solve(A, b)
x
np.allclose(A @ x, b)
```
#### Under the hood of `solve`
The `solve` function uses the `dgesv` fortran function in `lapack` (Linear Algebra Package) to do the actual work. In turn, `lapack` calls even lower level routines in `blas` (Basic Linear Algebra Subprograms). In particular, `solve` uses the LU matrix decomposition that we will show in detail shortly.
There is rarely any reason to use `blas` or `lapack` functions directly because the `linalg` package provides more convenient functions that also perform error checking, but you can use Python to experiment with `lapack` or `blas` before using them in a language like C or Fortran.
- [How to interpret lapack function names](http://www.netlib.org/lapack/lug/node24.html)
- [Summary of BLAS functions](http://cvxopt.org/userguide/blas.html)
- [Summary of Lapack functions](http://cvxopt.org/userguide/lapack.html)
```
import scipy.linalg.lapack as lapack
lu, piv, x, info = lapack.dgesv(A, b)
x
```
## Gaussian elimination
Let's review how Gaussian elimination (ge) works. We will deal with a $3\times 3$ system of equations for conciseness, but everything here generalizes to the $n\times n$ case. Consider the following equation:
$$\left(\begin{matrix}a_{11}&a_{12} & a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{matrix}\right)\left(\begin{matrix}x_1\\x_2\\x_3\end{matrix}\right) = \left(\begin{matrix}b_1\\b_2\\b_3\end{matrix}\right)$$
For simplicity, let us assume that the leftmost matrix $A$ is non-singular. To solve the system using ge, we start with the 'augmented matrix':
$$\left(\begin{array}{ccc|c}a_{11}&a_{12} & a_{13}& b_1 \\a_{21}&a_{22}&a_{23}&b_2\\a_{31}&a_{32}&a_{33}&b_3\end{array}\right)$$
We begin at the first entry, $a_{11}$. If $a_{11} \neq 0$, then we divide the first row by $a_{11}$ and then subtract the appropriate multiple of the first row from each of the other rows, zeroing out the first entry of all rows. (If $a_{11}$ is zero, we need to permute rows. We will not go into detail of that here.) The result is as follows:
$$\left(\begin{array}{ccc|c}
1 & \frac{a_{12}}{a_{11}} & \frac{a_{13}}{a_{11}} & \frac{b_1}{a_{11}} \\
0 & a_{22} - a_{21}\frac{a_{12}}{a_{11}} & a_{23} - a_{21}\frac{a_{13}}{a_{11}} & b_2 - a_{21}\frac{b_1}{a_{11}}\\
0&a_{32}-a_{31}\frac{a_{12}}{a_{11}} & a_{33} - a_{31}\frac{a_{13}}{a_{11}} &b_3- a_{31}\frac{b_1}{a_{11}}\end{array}\right)$$
We repeat the procedure for the second row: divide by its leading entry, then subtract the appropriate multiple of the resulting row from the third row, so that the second entry in row 3 is zero. Gaussian elimination stops at *row echelon* form (upper triangular, with ones on the diagonal), and then uses *back substitution* to obtain the final answer.
Note that in some cases, it is necessary to permute rows to obtain row echelon form (when the pivot would otherwise be zero). This is called *partial pivoting*.
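The procedure just described, partial pivoting included, can be sketched as a short teaching implementation (in practice you would call `scipy.linalg.solve` instead):

```python
import numpy as np

def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting, then back substitution.
    A = A.astype(float).copy()
    b = b.astype(float).ravel().copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest |entry| in column k to the pivot row.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

print(gauss_solve(np.array([[1, 2], [3, 4]]), np.array([3, 17])))  # [11. -4.]
```

For the $2\times 2$ system solved earlier with `la.solve` ($x + 2y = 3$, $3x + 4y = 17$), this gives $x = 11$, $y = -4$, matching the library result.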
### Example
We perform Gaussian elimination on the following augmented matrix
$$
\left(
\begin{array}{ccc|c}
1 & 3 & 4 & 1\\
2 & 1 & 3 & 2 \\
4 & 7 & 2 & 3
\end{array}
\right)
$$
We need to multiply row $1$ by $2$ and subtract from row $2$ to eliminate the first entry in row $2$, and then multiply row $1$ by $4$ and subtract from row $3$.
$$
\left(
\begin{array}{ccc|c}
1 & 3 & 4 & 1 \\
0 & -5 & -5 & 0\\
0 & -5 &-14 & -1
\end{array}
\right)
$$
And then we eliminate the second entry in the third row:
$$
\left(
\begin{matrix}
1 & 3 & 4 & 1\\
0 & -5 & -5 & 0 \\
0 & 0 & -9 & -1
\end{matrix}
\right)
$$
We can now do back-substitution to solve
$$
9x_3 = 1 \\
x_2 = -x_3 \\
x_1 = -3x_2 -4x_3 + 1
$$
to get
$$
x_1 = 8/9 \\
x_2 = -1/9 \\
x_3 = 1/9
$$
### Check
```
A = np.array([
[1,3,4],
[2,1,3],
[4,7,2]
])
b = np.array([1,2,3]).reshape((-1,1))
la.solve(A, b)
```
## Gauss-Jordan elimination
With Gauss-Jordan elimination, we make all the pivots equal to 1, and set all entries above and below a pivot to 0.
$$
\left(
\begin{matrix}
1 & 0 & 0 & 8/9 \\
0 & 1 & 0 & -1/9 \\
0 & 0 & 1 & 1/9
\end{matrix}
\right)
$$
Then we can just read off the values of $x_1$, $x_2$ and $x_3$ directly. This is known as the *reduced row echelon form*.
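A minimal Gauss-Jordan sketch (the function name is ours) that reduces an augmented matrix all the way to reduced row echelon form:

```python
import numpy as np

def rref(M):
    # Gauss-Jordan elimination with partial pivoting: pivots become 1,
    # entries above and below each pivot become 0.
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols - 1):               # last column is the right-hand side
        pivot = r + np.argmax(np.abs(M[r:, c]))
        if np.isclose(M[pivot, c], 0):
            continue                        # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]       # swap the pivot row into place
        M[r] /= M[r, c]                     # make the pivot equal to 1
        for i in range(rows):
            if i != r:                      # zero entries above and below
                M[i] -= M[i, c] * M[r]
        r += 1
        if r == rows:
            break
    return M

aug = np.array([[1., 3., 4., 1.], [2., 1., 3., 2.], [4., 7., 2., 3.]])
R = rref(aug)   # last column holds x = (8/9, -1/9, 1/9)
```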
## Gauss-Jordan elimination and the number of solutions
Consider three matrices with $m > n$, $m = n$ and $m < n$, and provide some intuition as to the existence of no, a unique, or infinitely many solutions.
## LU Decomposition
LU stands for 'Lower Upper', and so an LU decomposition of a matrix $A$ is a decomposition so that
$$A= LU$$
where $L$ is lower triangular and $U$ is upper triangular.
Now, LU decomposition is essentially Gaussian elimination, but we work only with the matrix $A$ (as opposed to the augmented matrix).
Gaussian elimination is all fine when we are solving a system one time, for one outcome $b$. Many applications involve solutions to multiple problems, where the left-hand-side of our matrix equation does not change, but there are many outcome vectors $b$. In this case, it is more efficient to *decompose* $A$.
First, we start just as in ge, but we 'keep track' of the various multiples required to eliminate entries. For example, consider the matrix
$$A = \left(\begin{matrix} 1 & 3 & 4 \\
2 & 1 & 3\\
4 & 7 & 2
\end{matrix}\right)$$
We need to multiply row $1$ by $2$ and subtract from row $2$ to eliminate the first entry in row $2$, and then multiply row $1$ by $4$ and subtract from row $3$. Instead of entering zeroes into the first entries of rows $2$ and $3$, we record the multiples required for their elimination, as so:
$$\left(\begin{matrix} 1 & 3 & 4 \\
(2)& -5 & -5\\
(4)&-5 &-14
\end{matrix}\right)$$
And then we eliminate the second entry in the third row:
$$\left(\begin{matrix} 1 & 3 & 4 \\
(2)& -5 & -5\\
(4)& (1)&-9
\end{matrix}\right)$$
And now we have the decomposition:
$$L= \left(\begin{matrix} 1 & 0 & 0 \\
2& 1 & 0\\
4& 1 &1
\end{matrix}\right) \,
U = \left(\begin{matrix} 1 & 3 & 4 \\
0& -5 & -5\\
0&0&-9
\end{matrix}\right)$$
### Elementary matrices
Why does the algorithm for LU work? Gaussian elimination consists of 3 elementary operations
- Op1: swapping two rows
- Op2: replace a row by the sum of that row and a multiple of another
- Op3: multiplying a row by a non-zero scalar
These can be recast as matrix operations - in particular, pre-multiplication with corresponding elementary matrices.
- Op1: swapping two rows uses a permutation matrix
$$
\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}\begin{bmatrix}
3 & 4 \\
1 & 2
\end{bmatrix} = \begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}
$$
- Op2: replace a row by the sum of that row and a multiple of another uses a lower triangular matrix
$$
\begin{bmatrix}
1 & 0 \\
-3 & 1
\end{bmatrix}\begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix} = \begin{bmatrix}
1 & 2 \\
0 & -2
\end{bmatrix}
$$
Note: The inverse operation just substitutes the negative of the multiple, and is also lower triangular.
- Op3: multiplying a row by a non-zero scalar uses a lower triangular matrix
$$
\begin{bmatrix}
1 & 0 \\
0 & -0.5
\end{bmatrix}\begin{bmatrix}
1 & 2 \\
0 & -2
\end{bmatrix} = \begin{bmatrix}
1 & 2 \\
0 & 1
\end{bmatrix}
$$
Note: The inverse operation just substitutes the inverse of the scalar, and is also lower triangular.
Multiplying a lower triangular matrix by another lower triangular matrix gives a lower triangular matrix. Hence, if we put the permutations aside (i.e., keep a separate permutation matrix), Gaussian elimination can be expressed as pre-multiplying the original matrix $A$ by a product of lower triangular matrices, which is just another lower triangular matrix, to give an upper triangular matrix $U$. The lower triangular matrix $L$ is then the product of the inverse operations.
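A quick numerical sanity check of these closure properties: products and inverses of lower triangular matrices are themselves lower triangular.

```python
import numpy as np

# Two random unit lower triangular matrices.
rng = np.random.default_rng(0)
L1 = np.tril(rng.normal(size=(4, 4)), k=-1) + np.eye(4)
L2 = np.tril(rng.normal(size=(4, 4)), k=-1) + np.eye(4)

product = L1 @ L2
inverse = np.linalg.inv(L1)

# Both stay lower triangular: their strict upper parts are zero.
print(np.allclose(product, np.tril(product)))   # True
print(np.allclose(inverse, np.tril(inverse)))   # True
```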
```
A = np.array([
[1,3,4],
[2,1,3],
[4,7,2]
])
A
```
#### Construct U
```
b1 = np.array([
[1, 0, 0],
[-2, 1, 0],
[0, 0, 1]
])
b2 = np.array([
[1, 0, 0],
[0, 1, 0],
[-4, 0, 1]
])
b3 = np.array([
[1, 0, 0],
[0, 1, 0],
[0, -1, 1]
])
b1 @ A
b2 @ b1 @ A
b3 @ b2 @ b1 @ A
U = b3 @ b2 @ b1 @ A
U
```
#### Construct L
```
ib1 = np.array([
[1, 0, 0],
[2, 1, 0],
[0, 0, 1]
])
ib2 = np.array([
[1, 0, 0],
[0, 1, 0],
[4, 0, 1]
])
ib3 = np.array([
[1, 0, 0],
[0, 1, 0],
[0, 1, 1]
])
L = ib1 @ ib2 @ ib3
L
```
#### A is factorized into LU
```
L @ U
A
```
We can now use the LU decomposition to solve for *any* $b$ without having to perform Gaussian elimination again.
- First solve $Ly = b$
- Then solve $Ux = y$
Since $L$ and $U$ are triangular, they can be cheaply solved by substitution.
```
b
y = la.solve_triangular(L, b, lower=True)
y
x = la.solve_triangular(U, y)
x
```
### LU Decomposition in practice
```
P, L, U = la.lu(A)
L
U
```
P is a permutation matrix.
```
P
```
In practice, we can store both L and U in a single matrix LU.
```
LU, P = la.lu_factor(A)
LU
la.lu_solve((LU, P), b)
```
## Cholesky Decomposition
Recall that a square matrix $A$ is positive definite if
$$u^TA u > 0$$
for any non-zero n-dimensional vector $u$,
and a symmetric, positive-definite matrix $A$ is a positive-definite matrix such that
$$A = A^T$$
For a positive definite square matrix, all the pivots (diagonal elements of $U$) are positive. If the matrix is also symmetric, then we must have an $LDU$ decomposition of the form $LDL^T$, where all the diagonal elements of $D$ are positive. Given this, $D^{1/2}$ is well-defined, and we have
$$
A = LDL^T = LD^{1/2}D^{1/2}L^T = LD^{1/2}(LD^{1/2})^T = CC^{T}
$$
where $C$ is lower-triangular with positive diagonal elements and $C^T$ is its transpose. This decomposition is known as the Cholesky decomposition, and $C$ may be interpreted as the 'square root' of the matrix $A$.
### Algorithm
Let $A$ be an $n\times n$ matrix. We find the matrix $L$ using the following iterative procedure:
$$A = \left(\begin{matrix}a_{11}&A_{12}^T\\A_{12}&A_{22}\end{matrix}\right) =
\left(\begin{matrix}\ell_{11}&0\\
L_{12}&L_{22}\end{matrix}\right)
\left(\begin{matrix}\ell_{11}&L_{12}^T\\0&L_{22}^T\end{matrix}\right)
$$
1.) Let $\ell_{11} = \sqrt{a_{11}}$
2.) $L_{12} = \frac{1}{\ell_{11}}A_{12}$
3.) Solve $A_{22} - L_{12}L_{12}^T = L_{22}L_{22}^T$ for $L_{22}$
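The three steps translate directly into a short recursive implementation (a sketch only; `scipy.linalg.cholesky` is the practical choice):

```python
import numpy as np

def cholesky_recursive(A):
    # Returns the lower triangular factor C with A = C @ C.T,
    # assuming A is symmetric positive definite.
    A = np.atleast_2d(np.asarray(A, dtype=float))
    n = A.shape[0]
    C = np.zeros_like(A)
    l11 = np.sqrt(A[0, 0])                  # step 1: l11 = sqrt(a11)
    C[0, 0] = l11
    if n == 1:
        return C
    L12 = A[1:, 0] / l11                    # step 2: L12 = A12 / l11
    C[1:, 0] = L12
    # step 3: recursively factor the Schur complement A22 - L12 L12^T
    C[1:, 1:] = cholesky_recursive(A[1:, 1:] - np.outer(L12, L12))
    return C

A = np.array([[1., 3., 5.], [3., 13., 23.], [5., 23., 42.]])
C = cholesky_recursive(A)
# C matches the worked example: [[1,0,0],[3,2,0],[5,4,1]]
```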
### Example
$$A = \left(\begin{matrix}1&3&5\\3&13&23\\5&23&42\end{matrix}\right)$$
$$\ell_{11} = \sqrt{a_{11}} = 1$$
$$L_{12} = \frac{1}{\ell_{11}} A_{12} = A_{12}$$
$\begin{eqnarray*}
A_{22} - L_{12}L_{12}^T &=& \left(\begin{matrix}13&23\\23&42\end{matrix}\right) - \left(\begin{matrix}9&15\\15&25\end{matrix}\right)\\
&=& \left(\begin{matrix}4&8\\8&17\end{matrix}\right)
\end{eqnarray*}$
This is also symmetric and positive definite, and can be solved by another iteration
$\begin{eqnarray*}
&=& \left(\begin{matrix}2&0\\4&\ell_{33}\end{matrix}\right) \left(\begin{matrix}2&4\\0&\ell_{33}\end{matrix}\right)\\
&=& \left(\begin{matrix}4&8\\8&16+\ell_{33}^2\end{matrix}\right)
\end{eqnarray*}$
And so we conclude that $\ell_{33}=1$.
This yields the decomposition:
$$\left(\begin{matrix}1&3&5\\3&13&23\\5&23&42\end{matrix}\right) =
\left(\begin{matrix}1&0&0\\3&2&0\\5&4&1\end{matrix}\right)\left(\begin{matrix}1&3&5\\0&2&4\\0&0&1\end{matrix}\right)$$
```
A = np.array([
[1,3,5],
[3,13,23],
[5,23,42]
])
C = la.cholesky(A)
C
C1 = la.cho_factor(A)
la.cho_solve(C1, b)
```
# Combining features and adsorption energies into one dataframe
---
### Import Modules
```
import os
print(os.getcwd())
import sys
import time; ti = time.time()
import pickle
import copy
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
pd.set_option("display.max_columns", None)
import plotly.graph_objs as go
# #########################################################
from methods import (
get_df_dft,
get_df_job_ids,
get_df_slab,
get_df_jobs,
get_df_jobs_data,
get_df_ads,
get_df_features,
get_df_octa_vol_init,
)
from methods import isnotebook
isnotebook_i = isnotebook()
if isnotebook_i:
from tqdm.notebook import tqdm
verbose = True
else:
from tqdm import tqdm
verbose = False
```
### Script Inputs
```
target_cols = ["g_o", "g_oh", "e_o", "e_oh", ]
```
### Read Data
```
df_ads = get_df_ads()
df_ads = df_ads.set_index(["compenv", "slab_id", "active_site", ], drop=False)
df_features = get_df_features()
df_features.index = df_features.index.droplevel(level=5)
df_slab = get_df_slab()
df_jobs = get_df_jobs()
df_jobs_data = get_df_jobs_data()
df_jobs_data["rerun_from_oh"] = df_jobs_data["rerun_from_oh"].fillna(value=False)
df_dft = get_df_dft()
df_job_ids = get_df_job_ids()
df_job_ids = df_job_ids.set_index("job_id")
df_job_ids = df_job_ids[~df_job_ids.index.duplicated(keep='first')]
df_octa_vol_init = get_df_octa_vol_init()
feature_cols = df_features["features"].columns.tolist()
```
### Collecting other relevant data columns from various data objects
```
# #########################################################
data_dict_list = []
# #########################################################
for index_i, row_i in df_ads.iterrows():
    # #####################################################
    data_dict_i = dict()
    # #####################################################
    index_dict_i = dict(zip(
        list(df_ads.index.names), index_i, ))
    # #####################################################
    slab_id_i = row_i.slab_id
    job_id_o = row_i.job_id_o
    # #####################################################

    # #####################################################
    row_ids_i = df_job_ids.loc[job_id_o]
    # #####################################################
    bulk_id_i = row_ids_i.bulk_id
    # #####################################################

    # #####################################################
    row_dft_i = df_dft.loc[bulk_id_i]
    # #####################################################
    stoich_i = row_dft_i.stoich
    # #####################################################

    # #####################################################
    row_slab_i = df_slab.loc[slab_id_i]
    # #####################################################
    phase_i = row_slab_i.phase
    # #####################################################

    # #####################################################
    data_dict_i["phase"] = phase_i
    data_dict_i["stoich"] = stoich_i
    # #####################################################
    data_dict_i.update(index_dict_i)
    # #####################################################
    data_dict_list.append(data_dict_i)
    # #####################################################
# #########################################################
df_extra_data = pd.DataFrame(data_dict_list)
df_extra_data = df_extra_data.set_index(
["compenv", "slab_id", "active_site", ], drop=True)
new_columns = []
for col_i in df_extra_data.columns:
    new_columns.append(
        ("data", col_i, "")
    )
idx = pd.MultiIndex.from_tuples(new_columns)
df_extra_data.columns = idx
# #########################################################
```
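The relabeling pattern used throughout this notebook, lifting flat columns into a `("data", col, "")` hierarchy with `pd.MultiIndex.from_tuples`, can be seen in miniature below. The column names and values are toy stand-ins, not the real data:

```python
import pandas as pd

# Toy frame with hypothetical flat columns, lifted into a ("data", col, "")
# three-level hierarchy, mirroring the relabeling of df_extra_data above.
df = pd.DataFrame({"phase": ["R", "AB"], "stoich": ["AB2", "AB3"]})
df.columns = pd.MultiIndex.from_tuples([("data", c, "") for c in df.columns])

print(df.columns.tolist())                 # [('data', 'phase', ''), ('data', 'stoich', '')]
print(df[("data", "phase", "")].tolist())  # ['R', 'AB']
```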
### Collating features data by looping over `df_ads`
```
dos_bader_feature_cols = [
"Ir*O_bader",
"Ir_bader",
"O_bader",
"p_band_center",
]
# #########################################################
o_rows_list = []
o_index_list = []
# #########################################################
oh_rows_list = []
oh_index_list = []
# #########################################################
failed_indices_oh = []
for index_i, row_i in df_ads.iterrows():
# #####################################################
index_dict_i = dict(zip(list(df_ads.index.names), index_i))
# #####################################################
job_id_o_i = row_i.job_id_o
job_id_oh_i = row_i.job_id_oh
job_id_bare_i = row_i.job_id_bare
# #####################################################
# #####################################################
ads_i = "o"
idx = pd.IndexSlice
df_feat_i = df_features.loc[idx[
index_dict_i["compenv"],
index_dict_i["slab_id"],
ads_i,
index_dict_i["active_site"],
:], :]
row_feat_i = df_feat_i[df_feat_i.data.job_id_max == job_id_o_i]
mess_i = "There should only be one row after the previous filtering"
assert row_feat_i.shape[0] == 1, mess_i
row_feat_i = row_feat_i.iloc[0]
tmp = list(row_feat_i["features"][dos_bader_feature_cols].to_dict().values())
num_nan = len([i for i in tmp if np.isnan(i)])
if num_nan > 0:
tmp_dict = dict()
df_tmp = df_feat_i["features"][dos_bader_feature_cols]
for i_cnt, (name_i, row_i) in enumerate(df_tmp.iterrows()):
# print(name_i)
row_values = list(row_i.to_dict().values())
num_nan = len([i for i in row_values if np.isnan(i)])
tmp_dict[i_cnt] = num_nan
max_key = None
for key, val in tmp_dict.items():
if val == np.min(list(tmp_dict.values())):
max_key = key
# print("Replaced row_feat_i with the row that has the dos/bader info")
row_feat_i = df_feat_i.iloc[max_key]
# elif num_nan == 0:
# tmp = 42
# #####################################################
o_rows_list.append(row_feat_i)
o_index_list.append(row_feat_i.name)
# #####################################################
ads_i = "oh"
idx = pd.IndexSlice
df_feat_i = df_features.loc[idx[
index_dict_i["compenv"],
index_dict_i["slab_id"],
ads_i,
index_dict_i["active_site"],
:], :]
if df_feat_i.shape[0] > 0:
row_feat_i = df_feat_i[df_feat_i.data.job_id_max == job_id_oh_i]
if row_feat_i.shape[0] > 0:
mess_i = "There should only be one row after the previous filtering"
assert row_feat_i.shape[0] == 1, mess_i
row_feat_i = row_feat_i.iloc[0]
# #############################################
oh_rows_list.append(row_feat_i)
oh_index_list.append(row_feat_i.name)
else:
# failed_indices_oh.append(index_i)
failed_indices_oh.append(job_id_oh_i)
# #########################################################
idx = pd.MultiIndex.from_tuples(o_index_list, names=df_features.index.names)
df_o = pd.DataFrame(o_rows_list, idx)
df_o.index = df_o.index.droplevel(level=[2, 4, ])
# #########################################################
idx = pd.MultiIndex.from_tuples(oh_index_list, names=df_features.index.names)
df_oh = pd.DataFrame(oh_rows_list, idx)
df_oh.index = df_oh.index.droplevel(level=[2, 4, ])
# #########################################################
```
### Checking failed_indices_oh against systems that couldn't be processed
```
from methods import get_df_atoms_sorted_ind

df_atoms_sorted_ind = get_df_atoms_sorted_ind()

df_atoms_sorted_ind_i = df_atoms_sorted_ind[
    df_atoms_sorted_ind.job_id.isin(failed_indices_oh)
]

df_tmp_8 = df_atoms_sorted_ind_i[df_atoms_sorted_ind_i.failed_to_sort == False]
if df_tmp_8.shape[0] > 0:
    print("Check out df_tmp_8, there were some *OH rows that weren't processed but maybe should be")
```
### Processing and combining feature data columns
```
from local_methods import combine_dfs_with_same_cols
df_dict_i = {
"oh": df_oh[["data"]],
"o": df_o[["data"]],
}
df_data_comb = combine_dfs_with_same_cols(
df_dict=df_dict_i,
verbose=False,
)
# Adding another empty level to column index
new_cols = []
for col_i in df_data_comb.columns:
# new_col_i = ("", col_i[0], col_i[1])
new_col_i = (col_i[0], col_i[1], "", )
new_cols.append(new_col_i)
idx = pd.MultiIndex.from_tuples(new_cols)
df_data_comb.columns = idx
```
### Creating `df_features_comb` and adding another column level for ads
```
# #########################################################
ads_i = "o"
df_features_o = df_o[["features"]]
columns_i = df_features_o.columns
new_columns_i = []
for col_i in columns_i:
new_col_i = (col_i[0], ads_i, col_i[1])
new_columns_i.append(new_col_i)
idx = pd.MultiIndex.from_tuples(new_columns_i)
df_features_o.columns = idx
# #########################################################
ads_i = "oh"
df_features_oh = df_oh[["features"]]
columns_i = df_features_oh.columns
new_columns_i = []
for col_i in columns_i:
new_col_i = (col_i[0], ads_i, col_i[1])
new_columns_i.append(new_col_i)
idx = pd.MultiIndex.from_tuples(new_columns_i)
df_features_oh.columns = idx
# #########################################################
df_features_comb = pd.concat([
df_features_o,
df_features_oh,
], axis=1)
eff_ox_state_list = []
for name_i, row_i in df_features_comb.iterrows():
eff_ox_state_o_i = row_i[("features", "o", "effective_ox_state", )]
eff_ox_state_oh_i = row_i[("features", "oh", "effective_ox_state", )]
eff_ox_state_i = eff_ox_state_oh_i
if eff_ox_state_oh_i != eff_ox_state_oh_i:
# print(name_i)
# print(20 * "-")
# print(eff_ox_state_oh_i)
# print(eff_ox_state_o_i)
# print("")
if np.isnan(eff_ox_state_oh_i):
# print(eff_ox_state_o_i)
if not np.isnan(eff_ox_state_o_i):
eff_ox_state_i = eff_ox_state_o_i
elif np.isnan(eff_ox_state_o_i):
# print(eff_ox_state_o_i)
if not np.isnan(eff_ox_state_oh_i):
eff_ox_state_i = eff_ox_state_oh_i
# if np.isnan(eff_ox_state_i):
# print(eff_ox_state_i)
eff_ox_state_list.append(
np.round(eff_ox_state_i, 6),
# eff_ox_state_i
)
df_features_comb[("features", "effective_ox_state", "")] = eff_ox_state_list
df_features_comb = df_features_comb.drop(columns=[
("features", "o", "effective_ox_state", ),
("features", "oh", "effective_ox_state", ),
]
)
non_ads_features = [
# "effective_ox_state",
"dH_bulk",
"volume_pa",
"bulk_oxid_state",
]
cols_to_drop = []
new_cols = []
# for col_i in df_features_comb["features"].columns:
for col_i in df_features_comb.columns:
# print(col_i)
if col_i[0] == "features":
tmp = 42
# print(col_i)
if col_i[2] in non_ads_features:
print(col_i)
if col_i[1] == "oh":
cols_to_drop.append(col_i)
new_cols.append(col_i)
elif col_i[1] == "o":
col_new_i = (col_i[0], col_i[2], "", )
new_cols.append(col_new_i)
else:
new_cols.append(col_i)
else:
new_cols.append(col_i)
# non_ads_features
idx = pd.MultiIndex.from_tuples(new_cols)
df_features_comb.columns = idx
df_features_comb = df_features_comb.drop(columns=cols_to_drop)
oh_features = []
o_features = []
other_features = []
for col_i in df_features_comb.columns:
if col_i[1] == "oh":
oh_features.append(col_i)
elif col_i[1] == "o":
o_features.append(col_i)
else:
other_features.append(col_i)
df_features_comb = df_features_comb[
oh_features + o_features + other_features
]
# Adding more levels to df_ads to combine
new_cols = []
for col_i in df_ads.columns:
# new_col_i = ("", "", col_i)
new_col_i = (col_i, "", "", )
new_cols.append(new_col_i)
idx = pd.MultiIndex.from_tuples(new_cols)
df_ads.columns = idx
```
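The NaN-fallback logic above (prefer the *OH value and fall back to the *O value when it is NaN) can also be expressed compactly with pandas' `combine_first`. The sketch below uses hypothetical column names, not the real feature columns. Note that the `x != x` comparison in the loop is the standard NaN test, since NaN compares unequal to itself.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for the two effective oxidation state columns.
df_tmp = pd.DataFrame({
    "eff_ox_oh": [6.0, np.nan, np.nan],
    "eff_ox_o":  [5.8, 6.2, np.nan],
})

# Prefer the *OH value; where it is NaN, fall back to the *O value.
df_tmp["eff_ox"] = df_tmp["eff_ox_oh"].combine_first(df_tmp["eff_ox_o"])
print(df_tmp["eff_ox"].tolist())   # [6.0, 6.2, nan]
```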
### Combining all dataframes
```
df_all_comb = pd.concat([
df_features_comb,
df_data_comb,
df_ads,
df_extra_data,
], axis=1)
```
### Removing the p-band center feature for *OH (there are none)
```
df_all_comb = df_all_comb.drop(columns=[
('features', 'oh', 'p_band_center'),
])
df_all_comb = df_all_comb.drop(columns=[
('features', 'oh', 'Ir_bader'),
])
df_all_comb = df_all_comb.drop(columns=[
('features', 'oh', 'O_bader'),
])
df_all_comb = df_all_comb.drop(columns=[
('features', 'oh', 'Ir*O_bader'),
])
```
### Create `name_str` column
```
def method(row_i):
    # #########################################################
    name_i = row_i.name
    # #########################################################
    compenv_i = name_i[0]
    slab_id_i = name_i[1]
    active_site_i = name_i[2]
    # #########################################################

    name_i = compenv_i + "__" + slab_id_i + "__" + str(int(active_site_i)).zfill(3)
    return(name_i)
df_all_comb["data", "name_str", ""] = df_all_comb.apply(
method,
axis=1)
df_ads_columns = [i[0] for i in df_ads.columns.tolist()]
for i in target_cols:
df_ads_columns.remove(i)
data_columns_all = [i[0] for i in df_all_comb["data"].columns]
df_ads_columns_to_add = []
df_ads_columns_to_drop = []
for col_i in df_ads_columns:
if col_i not in data_columns_all:
df_ads_columns_to_add.append(col_i)
else:
df_ads_columns_to_drop.append(col_i)
# #########################################################
for col_i in df_ads_columns_to_drop:
df_all_comb.drop(columns=(col_i, "", ""), inplace=True)
# #########################################################
new_columns = []
for col_i in df_all_comb.columns:
if col_i[0] in df_ads_columns_to_add:
new_columns.append(
("data", col_i[0], "", )
)
elif col_i[0] in target_cols:
new_columns.append(
("targets", col_i[0], "", )
)
else:
new_columns.append(col_i)
idx = pd.MultiIndex.from_tuples(new_columns)
df_all_comb.columns = idx
```
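The `name_str` scheme simply joins the index levels with double underscores and zero-pads the active site. A standalone illustration with hypothetical index values:

```python
# Hypothetical (compenv, slab_id, active_site) index values
compenv, slab_id, active_site = "sherlock", "abc123", 7.0
name_str = compenv + "__" + slab_id + "__" + str(int(active_site)).zfill(3)
```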
### Adding magmom comparison data
```
def process_df_magmoms_comp_i(df_magmoms_comp_i):
"""
"""
def method(row_i):
new_column_values_dict = dict(
job_id_0=None,
job_id_1=None,
job_id_2=None,
)
job_ids_tri = row_i.job_ids_tri
ids_sorted = list(np.sort(list(job_ids_tri)))
job_id_0 = ids_sorted[0]
job_id_1 = ids_sorted[1]
job_id_2 = ids_sorted[2]
new_column_values_dict["job_id_0"] = job_id_0
new_column_values_dict["job_id_1"] = job_id_1
new_column_values_dict["job_id_2"] = job_id_2
for key, value in new_column_values_dict.items():
row_i[key] = value
return(row_i)
df_magmoms_comp_i = df_magmoms_comp_i.apply(method, axis=1)
df_magmoms_comp_i = df_magmoms_comp_i.set_index(["job_id_0", "job_id_1", "job_id_2", ])
return(df_magmoms_comp_i)
from methods import get_df_magmoms, read_magmom_comp_data
df_magmoms = get_df_magmoms()
data_dict_list = []
for name_i, row_i in df_all_comb.iterrows():
# #####################################################
data_dict_i = dict()
# #####################################################
index_dict_i = dict(zip(df_all_comb.index.names, name_i))
# #####################################################
    magmom_data_i = read_magmom_comp_data(name=name_i)
    # Initialize so these are defined even when magmom data is missing
    sum_norm_abs_magmom_diff_i = None
    norm_sum_norm_abs_magmom_diff_i = None
    if magmom_data_i is not None:
df_magmoms_comp_i = magmom_data_i["df_magmoms_comp"]
df_magmoms_comp_i = process_df_magmoms_comp_i(df_magmoms_comp_i)
# tmp = df_magmoms_comp_i.sum_norm_abs_magmom_diff.min()
# tmp_list.append(tmp)
job_ids = []
for ads_j in ["o", "oh", "bare", ]:
job_id_j = row_i["data"]["job_id_" + ads_j][""]
if job_id_j is not None:
job_ids.append(job_id_j)
sum_norm_abs_magmom_diff_i = None
if len(job_ids) == 3:
job_ids = list(np.sort(job_ids))
job_id_0 = job_ids[0]
job_id_1 = job_ids[1]
job_id_2 = job_ids[2]
row_mags_i = df_magmoms_comp_i.loc[
(job_id_0, job_id_1, job_id_2, )
]
sum_norm_abs_magmom_diff_i = row_mags_i.sum_norm_abs_magmom_diff
norm_sum_norm_abs_magmom_diff_i = sum_norm_abs_magmom_diff_i / 3
# #################################################
data_dict_i.update(index_dict_i)
# #################################################
data_dict_i["sum_norm_abs_magmom_diff"] = sum_norm_abs_magmom_diff_i
data_dict_i["norm_sum_norm_abs_magmom_diff"] = norm_sum_norm_abs_magmom_diff_i
# #################################################
data_dict_list.append(data_dict_i)
# #################################################
# #########################################################
df_tmp = pd.DataFrame(data_dict_list)
df_tmp = df_tmp.set_index(["compenv", "slab_id", "active_site", ])
# #########################################################
new_cols = []
for col_i in df_tmp.columns:
new_col_i = ("data", col_i, "")
new_cols.append(new_col_i)
idx = pd.MultiIndex.from_tuples(new_cols)
df_tmp.columns = idx
df_all_comb = pd.concat([df_all_comb, df_tmp], axis=1)
```
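The magmom lookup relies on sorting the three job ids so that the same triple always maps to the same MultiIndex key, regardless of the order the ids come in. A minimal sketch with made-up ids and a made-up value:

```python
import numpy as np
import pandas as pd

# Frame indexed by sorted job-id triples (ids and value are hypothetical)
idx = pd.MultiIndex.from_tuples(
    [("a1", "b2", "c3")], names=["job_id_0", "job_id_1", "job_id_2"])
df_mags = pd.DataFrame({"sum_norm_abs_magmom_diff": [0.12]}, index=idx)

# Sorting makes the lookup order-independent
job_ids = list(np.sort(["c3", "a1", "b2"]))
row = df_mags.loc[tuple(job_ids)]
```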
### Add OER overpotential data
```
# #########################################################
import pickle; import os
directory = os.path.join(
os.environ["PROJ_irox_oer"],
"workflow/oer_analysis",
"out_data")
path_i = os.path.join(
directory,
"df_overpot.pickle")
with open(path_i, "rb") as fle:
df_overpot = pickle.load(fle)
# #########################################################
df_overpot = df_overpot.drop(columns="name")
new_cols = []
for col_i in df_overpot.columns:
new_col_i = ("data", col_i, "", )
new_cols.append(new_col_i)
df_overpot.columns = pd.MultiIndex.from_tuples(new_cols)
df_all_comb = pd.concat([
df_all_comb,
df_overpot,
], axis=1)
```
### Adding surface energy data
```
from methods import get_df_SE
df_SE = get_df_SE()
new_cols = []
for col_i in df_SE.columns:
new_col_i = ("data", col_i, "", )
new_cols.append(new_col_i)
df_SE.columns = pd.MultiIndex.from_tuples(new_cols)
cols_to_remove = []
for col_i in df_SE.columns.tolist():
if col_i in df_all_comb.columns.tolist():
cols_to_remove.append(col_i)
df_all_comb = pd.concat([
df_all_comb,
# df_SE,
df_SE.drop(columns=cols_to_remove),
], axis=1)
```
### Adding plot format properties
```
from proj_data import stoich_color_dict
# #########################################################
data_dict_list = []
# #########################################################
# for index_i, row_i in df_features_targets.iterrows():
for index_i, row_i in df_all_comb.iterrows():
# #####################################################
data_dict_i = dict()
# #####################################################
index_dict_i = dict(zip(list(df_all_comb.index.names), index_i))
# #####################################################
row_data_i = row_i["data"]
# #####################################################
stoich_i = row_data_i["stoich"][""]
norm_sum_norm_abs_magmom_diff_i = \
row_data_i["norm_sum_norm_abs_magmom_diff"][""]
# #####################################################
if stoich_i == "AB2":
color__stoich_i = stoich_color_dict["AB2"]
elif stoich_i == "AB3":
color__stoich_i = stoich_color_dict["AB3"]
else:
color__stoich_i = stoich_color_dict["None"]
# #####################################################
data_dict_i[("format", "color", "stoich")] = color__stoich_i
data_dict_i[("format", "color", "norm_sum_norm_abs_magmom_diff")] = \
norm_sum_norm_abs_magmom_diff_i
# #####################################################
data_dict_i.update(index_dict_i)
# #####################################################
data_dict_list.append(data_dict_i)
# #####################################################
# #########################################################
df_format = pd.DataFrame(data_dict_list)
df_format = df_format.set_index(["compenv", "slab_id", "active_site", ])
df_format.columns = pd.MultiIndex.from_tuples(df_format.columns)
# #########################################################
df_all_comb = pd.concat(
[
df_all_comb,
df_format,
],
axis=1,
)
```
### Mixing Bader charges with bond lengths
```
# df_all_comb["features"][""]
# df_all_comb[("features", "o", "Ir*O_bader", )]
df_all_comb[("features", "o", "Ir*O_bader/ir_o_mean", )] = \
df_all_comb[("features", "o", "Ir*O_bader", )] / df_all_comb[("features", "o", "ir_o_mean", )]
```
### Calculating ΔG_OmOH target column
```
# Computing ΔG_O-OH
g_o = df_all_comb[("targets", "g_o", "")]
g_oh = df_all_comb[("targets", "g_oh", "")]
df_all_comb[("targets", "g_o_m_oh", "")] = g_o - g_oh
# Computing ΔE_O-OH
e_o = df_all_comb[("targets", "e_o", "")]
e_oh = df_all_comb[("targets", "e_oh", "")]
df_all_comb[("targets", "e_o_m_oh", "")] = e_o - e_oh
```
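The $\Delta G_{O-OH}$ descriptor is just a column difference. A toy version with hypothetical adsorption free energies (in eV):

```python
import pandas as pd

# Hypothetical adsorption free energies (eV)
df = pd.DataFrame({"g_o": [2.8, 3.1], "g_oh": [0.9, 1.2]})
df["g_o_m_oh"] = df["g_o"] - df["g_oh"]
```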
### Adding in pre-DFT features
```
# #########################################################
data_dict_list = []
# #########################################################
for name_i, row_i in df_all_comb.iterrows():
# #####################################################
compenv_i = name_i[0]
slab_id_i = name_i[1]
active_site_i = name_i[2]
# #####################################################
job_id_o_i = row_i[("data", "job_id_o", "")]
name_octa_i = (compenv_i, slab_id_i,
"o", active_site_i, 1, )
row_octa_i = df_octa_vol_init.loc[
name_octa_i
]
row_octa_dict_i = row_octa_i["features"].to_dict()
# #####################################################
data_dict_i = {}
# #####################################################
data_dict_i["compenv"] = compenv_i
data_dict_i["slab_id"] = slab_id_i
data_dict_i["active_site"] = active_site_i
# #####################################################
data_dict_i.update(row_octa_dict_i)
# #####################################################
data_dict_list.append(data_dict_i)
# #####################################################
df_feat_pre = pd.DataFrame(data_dict_list)
df_feat_pre = df_feat_pre.set_index(["compenv", "slab_id", "active_site", ])
new_cols = []
for col_i in df_feat_pre.columns:
new_col_i = ("features_pre_dft", col_i + "__pre", "")
new_cols.append(new_col_i)
idx = pd.MultiIndex.from_tuples(new_cols)
df_feat_pre.columns = idx
df_all_comb = pd.concat([
df_all_comb,
df_feat_pre,
], axis=1)
```
### Reindexing the MultiIndex to order the columns
```
df_all_comb = df_all_comb.reindex(columns=[
'targets',
'data',
'format',
'features',
'features_pre_dft',
'features_stan',
], level=0)
```
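`reindex(columns=..., level=0)` reorders the top-level column groups without touching the lower levels. A small sketch:

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples(
    [("features", "x", ""), ("targets", "y", "")])
df = pd.DataFrame([[1, 2]], columns=cols)

# Reorder the top-level groups; sub-columns within each group are kept
df = df.reindex(columns=["targets", "features"], level=0)
```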
### Removing rows that aren't supposed to be processed (bad slabs)
```
from methods import get_df_slabs_to_run
df_slabs_to_run = get_df_slabs_to_run()
df_slabs_to_not_run = df_slabs_to_run[df_slabs_to_run.status == "bad"]
slab_ids_to_not_include = df_slabs_to_not_run.slab_id.tolist()
df_index = df_all_comb.index.to_frame()
df_all_comb = df_all_comb.loc[
~df_index.slab_id.isin(slab_ids_to_not_include)
]
```
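`index.to_frame()` turns the MultiIndex levels into ordinary columns so they can be filtered with `isin`. A toy sketch with hypothetical slab ids:

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("slac", "a", 1), ("slac", "b", 2), ("nersc", "a", 3)],
    names=["compenv", "slab_id", "active_site"])
df = pd.DataFrame({"x": [1, 2, 3]}, index=idx)

# Drop all rows whose slab_id is flagged bad
bad_slab_ids = ["b"]
df_index = df.index.to_frame()
df_kept = df.loc[~df_index.slab_id.isin(bad_slab_ids)]
```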
### Removing phase 1 systems (NERSC-job filtering deprecated)
```
# print("Getting rid of NERSC jobs and phase 1 systems")
# Getting rid of NERSC jobs
# indices_to_keep = []
# for i in df_all_comb.index:
# if i[0] != "nersc":
# indices_to_keep.append(i)
# df_all_comb = df_all_comb.loc[
# indices_to_keep
# ]
df_all_comb = df_all_comb[df_all_comb["data"]["phase"] > 1]
```
### Printing how many `NaN` rows there are for each feature
```
for col_i in df_all_comb.features.columns:
if verbose:
df_tmp_i = df_all_comb[df_all_comb["features"][col_i].isna()]
print(col_i, ":", df_tmp_i.shape[0])
```
### Write data to pickle
```
df_features_targets = df_all_comb
# Pickling data ###########################################
directory = os.path.join(
os.environ["PROJ_irox_oer"],
"workflow/feature_engineering",
"out_data")
file_name_i = "df_features_targets.pickle"
path_i = os.path.join(directory, file_name_i)
os.makedirs(directory, exist_ok=True)
with open(path_i, "wb") as fle:
pickle.dump(df_features_targets, fle)
# #########################################################
from methods import get_df_features_targets
df_features_targets_tmp = get_df_features_targets()
df_features_targets_tmp.head()
# #########################################################
print(20 * "# # ")
print("All done!")
print("Run time:", np.round((time.time() - ti) / 60, 3), "min")
print("combine_features_targets.ipynb")
print(20 * "# # ")
# #########################################################
```
```
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import sys
from scipy import interpolate
import math
from ipywidgets import *
def firstQuad(data):
colnames = data.columns.values
colcount = len(colnames)
rowcount = len(data[colnames[0]])
# do a little clean up
# remove negative values of plate current
    for i in range(1, colcount):
        for j in range(rowcount):
            if data[colnames[i]][j] < 0.00:
                # .loc avoids pandas chained-assignment issues
                data.loc[j, colnames[i]] = 0.0
return data
# used engauge to extract plot data from datasheet
triodefn = "6au6 plate characteristic - triode.csv"
platefn = "6au6 plate characteristic - pentode.csv"
screenfn = "6au6 screen characteristic - pentode.csv"
triodeData = firstQuad(pd.read_csv(triodefn))
plateData = firstQuad(pd.read_csv(platefn))
screenData = firstQuad(pd.read_csv(screenfn))
VaMAX = 300.0
IaMAX = 0.02
def powerDissipationData(powerDiss):
data = []
for v in range(5,int(VaMAX+5),5):
data.append((v,powerDiss/v))
return pd.DataFrame(data = data, columns = ['PlateVoltage','Current'])
def loadLineData(Va,Rp):
data = []
b = Va/Rp
m = -1/Rp
for v in range(0,int(Va+5),5):
data.append((v,m*v+b))
return pd.DataFrame(data = data, columns = ['PlateVoltage','Current'])
powerDissData = powerDissipationData(3.5)
loadLineData = loadLineData(250.0,18000)
def placeLabel(plt,text,x,y,angle):
    null = plt.annotate(text,
rotation=angle,
xy=(x,y),
xycoords='data',
xytext=(0,0),
textcoords='offset points',
bbox=dict(boxstyle="round", fc="1.0"),
size=12,
# arrowprops=dict(arrowstyle="->",connectionstyle="angle,angleA=0,angleB=70,rad=10")
)
def plotData(datas):
fig = plt.figure(figsize=(15, 20))
null = plt.grid(linestyle='--', linewidth=0.5)
null = plt.xticks(np.arange(0,VaMAX+20,20))
null = plt.yticks(np.arange(0,IaMAX+0.002,0.002))
null = plt.xlim([0,VaMAX])
null = plt.ylim([0,IaMAX])
for data,color in datas:
colnames = data.columns.values
colcount = len(colnames)
rowcount = len(data[colnames[0]])
for x in range(1,colcount):
null = plt.plot(data['PlateVoltage'],
data[colnames[x]],label=colnames[x],
color=color)
return plt
def plotLabels(plt,datas):
for data in datas:
colnames = data.columns.values
colcount = len(colnames)
rowcount = len(data[colnames[0]])
for x in range(1,colcount):
for r in range(0,rowcount):
if data['PlateVoltage'][r] > VaMAX or data[colnames[x]][r] > IaMAX:
break
placeLabel(plt,colnames[x],data['PlateVoltage'][r-15],data[colnames[x]][r-5],0)
plotData([(triodeData,'blue'),(plateData,'black'),(powerDissData,'red'),(loadLineData,'green')])
plotLabels(plt,[triodeData,plateData])
```
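The green load line constructed in `loadLineData` follows from Ohm's law across the plate load, $I = (V_a - V)/R_p$: it runs from $(0, V_a/R_p)$ to $(V_a, 0)$. A standalone check with the same operating point used above (250 V, 18 kΩ):

```python
# Load-line current at plate voltage v, for supply Va and plate load Rp
def load_line_current(v, Va=250.0, Rp=18000.0):
    return (Va - v) / Rp
```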
<a href="https://colab.research.google.com/github/KwonDoRyoung/AdvancedBasicEducationProgram/blob/main/Challenge01.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
import os
import csv
root_path = "/content/drive/MyDrive/aihub30754" # top-level folder that contains the data
metadata_train_path = os.path.join(root_path, "train/train.csv") # path to the metadata file
print(metadata_train_path)
dataset = [] # records extracted from the metadata
category = set() # set used to collect the category labels
# Read the csv file (reader) and iterate over the rows one by one (the for loop)
with open(metadata_train_path, "r") as f:
    reader = csv.reader(f)
    for idx, row in enumerate(reader):
        if idx == 0:
            # the first row is the header describing the columns
            continue
        # row[0]: image file name
        # row[1]: image class
        dataset.append(row)
        category.add(row[-1]) # category = label = class
print(len(dataset), len(category))
print(category)
print(dataset)
import os
import csv
from PIL import Image
from torch.utils.data import Dataset
class SeaGarbage(Dataset):
def __init__(self, data_path, phase="train"):
self.phase = phase
self.data_path = data_path
self.dataset, self.category = self._read_metadata(data_path, phase)
self.classes = list(self.category.keys())
def __len__(self):
return len(self.dataset)
def _read_metadata(self, data_path, phase):
dataset = []
category = set()
metadata_path = os.path.join(data_path, f"{phase}.csv")
with open(metadata_path, "r") as f:
reader = csv.reader(f)
for idx, row in enumerate(reader):
if idx == 0:
                    # the first row is the header describing the columns
                    continue
                # row[0]: image file name
                # row[1]: image class
dataset.append(row)
category.add(row[-1]) # category=label=class
category_dict = {}
for _c, _n in enumerate(list(category)):
category_dict[_n] = _c
return dataset, category_dict
def __getitem__(self, idx):
image_path = os.path.join(self.data_path, "images", self.dataset[idx][0])
image = Image.open(image_path)
label = self.category[self.dataset[idx][-1]]
return image, label
root_path = "/content/drive/MyDrive/aihub30754"
train_path = os.path.join(root_path, "train")
train_dataset1 = SeaGarbage(train_path, phase="train")
print(f"# of dataset: {len(train_dataset1)}")
print(f"class: {train_dataset1.classes}")
img, label = train_dataset1[0]
import matplotlib.pyplot as plt
plt.imshow(img)
plt.title(train_dataset1.classes[label])
# from genericpath import exists
# import os
# import csv
# import shutil
# root_path = "/content/drive/MyDrive/aihub30754" # top-level folder that contains the data
# metadata_train_path = os.path.join(root_path, "train/train.csv") # path to the metadata file
# print(metadata_train_path)
# os.makedirs(os.path.join(root_path, "train/image_with_class"), exist_ok=True)
# # Read the csv file (reader) and iterate over the rows one by one (the for loop)
# with open(metadata_train_path, "r") as f:
#     reader = csv.reader(f)
#     for idx, row in enumerate(reader):
#         if idx == 0:
#             # the first row is the header describing the columns
# continue
# if not os.path.exists(os.path.join(root_path, f"train/image_with_class/{row[-1]}")):
# os.makedirs(os.path.join(root_path, f"train/image_with_class/{row[-1]}"), exist_ok=True)
# shutil.copyfile(os.path.join(root_path, f"train/images/{row[0]}"), os.path.join(root_path, f"train/image_with_class/{row[-1]}/{row[0]}"))
import matplotlib.pyplot as plt
from torchvision.datasets import ImageFolder
root_path = "/content/drive/MyDrive/aihub30754"
train_path = os.path.join(root_path, "train/image_with_class")
train_dataset2 = ImageFolder(train_path)
print(f"# of dataset: {len(train_dataset2)}")
print(f"class: {train_dataset2.classes}")
img, label = train_dataset2[0]
plt.imshow(img)
plt.title(train_dataset2.classes[label])
```
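A `torch.utils.data.Dataset` subclass only needs `__len__` and `__getitem__`; the mapping from class names to integer labels used above can be sketched torch-free (the records below are made up):

```python
class ToyDataset:
    """Minimal Dataset-style class: just __len__ and __getitem__."""

    def __init__(self, records):
        self.records = records  # list of (filename, class-name) pairs
        self.classes = sorted({label for _, label in records})
        self._to_idx = {c: i for i, c in enumerate(self.classes)}

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        name, label = self.records[idx]
        return name, self._to_idx[label]

ds = ToyDataset([("a.jpg", "plastic"), ("b.jpg", "metal")])
```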
```
import os
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['mathtext.fontset'] = 'stix'
```
Load 200ns Aib9 trajectory
```
infile = '../../DATA/Train/AIB9/sum_phi_200ns.npy'
input_x = np.load(infile)
bins=np.arange(-15., 17, 1)
num_bins=len(bins)
idx_200ns=np.digitize(input_x, bins)
di=1
N_mean=np.sum(np.abs(idx_200ns[:-di]-idx_200ns[di:])==1)
N_mean/=len(idx_200ns)
N0=len(np.where(idx_200ns<=15)[0])
N1=len(np.where(idx_200ns>=16)[0])
kappa_in = N0/N1
print('kappa:', kappa_in)
print('Nearest neighbor:', N_mean)
```
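Both diagnostics are cheap to compute: $\kappa$ counts the frames on either side of a threshold bin, and the nearest-neighbor measure counts single-bin hops between consecutive frames. A toy illustration (the trajectory and threshold below are made up):

```python
import numpy as np

x = np.array([-0.5, 0.7, 1.2, 1.9, 1.8, 3.4])  # hypothetical trajectory
bins = np.arange(-1.0, 4.0, 1.0)
idx = np.digitize(x, bins)  # bin index of each frame

# <N>: fraction of consecutive frames whose bin index differs by exactly 1
n_mean = np.sum(np.abs(idx[:-1] - idx[1:]) == 1) / len(idx)

# kappa: population ratio of the two halves (threshold chosen for this toy data)
kappa = np.sum(idx <= 3) / np.sum(idx >= 4)
```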
Check 100ns Aib9 trajectory
```
idx_100ns = idx_200ns[:2000000]
di=1
N_mean=np.sum(np.abs(idx_100ns[:-di]-idx_100ns[di:])==1)
N_mean/=len(idx_100ns)
N0=len(np.where(idx_100ns<=15)[0])
N1=len(np.where(idx_100ns>=16)[0])
kappa_in = N0/N1
print('kappa:', kappa_in)
print('Nearest neighbor:', N_mean)
```
# Calculate Nearest neighbor $\langle N\rangle$ sampled from the first training
In the first training, we let 800 independent LSTMs predict 800 trajectories of 100 ns each. Since we are using the LSTM as a generative model, we could also train just one LSTM and use it to generate 800 predictions, starting from either the same initial condition or different initial conditions.
Data location: `./Output/`
```
N_mean_list=[]
output_dir='./Output'
for i in range(800):
pred_dir=os.path.join(output_dir, '{}/prediction.npy'.format(i))
prediction=np.load(pred_dir)
di=1
N_mean=np.sum(np.abs(prediction[:-di]-prediction[di:])==1)
N_mean/=len(prediction)
N_mean_list.append(N_mean)
N_mean_arr=np.array(N_mean_list)
```
Plot distribution
```
hist = np.histogram( N_mean_arr, bins=50 )
prob = hist[0].T
mids = 0.5*(hist[1][1:]+hist[1][:-1])
fig, ax = plt.subplots(figsize=(5,4))
ax.set_title('Distribution', size=20)
ax.plot(mids, prob)
ax.tick_params(axis='both', which='both', direction='in', labelsize=14)
ax.set_xlabel('$\langle N\\rangle$', size=16)
ax.set_ylabel('Counts', size=16)
plt.show()
```
# Determine $\Delta\lambda$
Following the reference, we want to solve the following equation for $\Delta\lambda$
\begin{align}
\bar{s}^{(j)}_2&=\sum_{\Gamma}P^{(2)}_{\Gamma}s^{(j)}_{\Gamma} \nonumber \\
&=\frac{\sum_{k\in\Omega} s^{(j)}_k e^{-\Delta\lambda_j s^{(j)}_k} }{\sum_{k\in\Omega} e^{-\Delta\lambda_j s^{(j)}_k}} \\
&=f(\Delta\lambda)
\label{eq:lambda_solver}
\end{align}
To determine the $\Delta\lambda$ value, we can evaluate the right-hand side, plot it versus $\Delta\lambda$, and find the $\Delta\lambda=\Delta\lambda_{\ast}$ which gives
\begin{align}
\bar{s}^{(j)}_2=f(\Delta\lambda_{\ast})=s^{\rm target}
\end{align}
We would also like to predict the variance of $\bar{s}^{(j)}_2$, which can be obtained through the derivative of $f$:
\begin{align}
f'(\lambda)=-\sigma^2_{\bar{s}^{(j)}_2}
\end{align}
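The identity $f'(\lambda)=-\sigma^2$ follows directly from the quotient rule; writing $\langle\cdot\rangle_\lambda$ for the reweighted average and dropping the feature index $j$ for brevity:
\begin{align}
f(\lambda)&=\frac{\sum_{k} s_k e^{-\lambda s_k}}{\sum_{k} e^{-\lambda s_k}}=\langle s\rangle_\lambda \nonumber \\
f'(\lambda)&=-\frac{\sum_{k} s_k^2 e^{-\lambda s_k}}{\sum_{k} e^{-\lambda s_k}}+\left(\frac{\sum_{k} s_k e^{-\lambda s_k}}{\sum_{k} e^{-\lambda s_k}}\right)^2
=-\left(\langle s^2\rangle_\lambda-\langle s\rangle_\lambda^2\right)=-\sigma^2_\lambda
\end{align}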
### $s = \langle N\rangle$
```
def f(lm):
return np.sum(N_mean_arr*np.exp(-lm*N_mean_arr))/np.sum(np.exp(-lm*N_mean_arr))
def df(lm):
return f(lm)**2-np.sum(N_mean_arr*N_mean_arr*np.exp(-lm*N_mean_arr))/np.sum(np.exp(-lm*N_mean_arr))
lm_arr = np.linspace(0,1000)
f_arr = [f(lm_i) for lm_i in lm_arr]
fig, ax=plt.subplots(figsize=(5,3))
ax.plot(lm_arr, f_arr, label='$f$')
ax.tick_params(axis='both', which='both', direction='in', labelsize=14)
ax.set_xlabel('$\lambda$', size=16)
ax.set_ylabel('$f(\lambda)$', size=16)
ax.legend(fontsize=16)
plt.show()
lm=62
print( 'f({:.1f}) = {:.6f}'.format(lm, f(lm)) )
print( 'Standard error stderr[f(0)]={:.6f}'.format(np.std(N_mean_arr)/np.sqrt(len(N_mean_arr))) )
print( 'df({:.1f}) = {:.6f}'.format(lm, df(lm)) )
print( 'Expected standard error for new N_mean = {:.6f}'.format( np.sqrt(-df(lm))/np.sqrt(10) ) )
```
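Rather than reading $\Delta\lambda_{\ast}$ off the plot, one can also solve $f(\Delta\lambda_{\ast})=s^{\rm target}$ numerically: since $f'(\lambda)\le 0$, $f$ is monotonically decreasing and plain bisection suffices. A sketch on synthetic data (the distribution and target value below are made up, not the actual `N_mean_arr`):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.normal(0.45, 0.05, size=800)  # synthetic stand-in for N_mean_arr

def f(lm):
    # shift by s.min() for numerical stability; the constant cancels in the ratio
    w = np.exp(-lm * (s - s.min()))
    return np.sum(s * w) / np.sum(w)

s_target = 0.40
lo, hi = 0.0, 1000.0
for _ in range(100):  # bisection, using that f is monotonically decreasing
    mid = 0.5 * (lo + hi)
    if f(mid) > s_target:
        lo = mid
    else:
        hi = mid
lm_ast = 0.5 * (lo + hi)
```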
Let's check whether selecting 10 predictions to build the subset is enough.
```
lm_ast=62
p=np.exp(-lm_ast*(N_mean_arr))
p/=np.sum(p)
subset_mean_arr = []
subset_stdv_arr = []
for i in range(200):
idx = np.random.choice(len(N_mean_arr), 10, p=p)
selected = N_mean_arr[idx]
mean=np.mean(selected)
stdv=np.std(selected)/np.sqrt(len(selected))
subset_mean_arr.append(mean)
subset_stdv_arr.append(stdv)
fig, ax = plt.subplots(figsize=(12,5), nrows=1, ncols=2)
ax[0].plot(subset_mean_arr)
ax[0].plot(np.arange(len(subset_mean_arr)), [0.38]*len(subset_mean_arr))
ax[1].plot(subset_stdv_arr)
ax[1].plot(np.arange(len(subset_stdv_arr)), [0.004]*len(subset_stdv_arr))
ax[0].tick_params(axis='both', which='both', direction='in', labelsize=16)
ax[0].set_xlabel('indices', size=16)
ax[0].set_ylabel('$\langle N\\rangle$', size=16)
ax[0].set_ylim(0.3,0.5)
ax[1].tick_params(axis='both', which='both', direction='in', labelsize=16)
ax[1].set_xlabel('indices', size=16)
ax[1].set_ylabel('$\sigma_{N}$', size=16)
ax[1].set_ylim(0.0,0.01)
plt.show()
```
So we will constrain our $\langle N\rangle$ to 0.38 with a standard error of 0.0041. Although we have shown above that a subset size of 10 is sufficient, there can still be variance in the subset mean. Therefore, we resample until the subset reaches a reasonable value of $\langle N\rangle$.
```
lm_ast=62
p=np.exp(-lm_ast*(N_mean_arr))
p/=np.sum(p)
mean=np.inf
stdv=np.inf
while abs(mean-0.380)>0.001 or abs(stdv-0.0041)>0.0001:
idx = np.random.choice(len(N_mean_arr), 10, p=p)
selected = N_mean_arr[idx]
mean=np.mean(selected)
stdv=np.std(selected)/np.sqrt(len(selected))
print( 'mean of selected sample = {:.3f}'.format(np.mean(selected)) )
print( 'Standard error stderr[selected sample] = {:.3f}'.format(np.std(selected)/np.sqrt(len(selected))) )
# print the <N> values of the selected subset
for ni in N_mean_arr[idx]:
    print('{:.3f}'.format(ni))
```
# Concatenate subset as a new training set
Concatenate the subset to a single trajectory, this concatenated trajectory is then used later to re-train a new LSTM.
```
conc=[]
output_dir='./Output'
for i in idx:
pred_dir=os.path.join(output_dir, '{}/prediction.npy'.format(i))
prediction=np.load(pred_dir)
conc.extend(prediction)
conc = np.array(conc)
```
We can also check what the $\langle N\rangle$ as well as $\kappa$ value of concatenated trajectory is.
```
N0=len(np.where(conc<=15)[0])
N1=len(np.where(conc>=16)[0])
kappa_conc = N0/N1
di=1
N_mean_conc=np.sum(np.abs(conc[:-di]-conc[di:])==1)
N_mean_conc/=len(conc)
print('kappa:', kappa_conc)
print('Nearest neighbor:', N_mean_conc)
```
# Continuous Control
---
Congratulations on completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!**
### 1. Start the Environment
We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
from unityagents import UnityEnvironment
import numpy as np
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Crawler.app"`
- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`
- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`
- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`
- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`
- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`
- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`
For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Crawler.app")
```
```
env = UnityEnvironment(file_name='../../crawler/Crawler.app')
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Once this cell is executed, you will watch the agent's performance, if it selects an action at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment.
Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
```
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
```
When finished, you can close the environment.
```
env.close()
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
# Workshop 4: cartopy and best practices
# Part II: Best Practices
Here I dump my entire accumulated wisdom upon you, not so much hoping that you know it all by the end, but that you know of the concepts and know what to search for. I realize that many lessons will be learned the hard way.
## 1. Technical tips
### jupyter lab tips
|||
|--- |--- |
| `Option/Alt` + drag | multi-line selection |
| `Ctrl/Cmd` + `X`/`C`/`V` | cut/copy/paste lines|
| `Cmd` + `?`| comment out line |
### Linux and SSH
The science faculty has its own cluster: `gemini.science.uu.nl`. There you can perform heavier, longer computations.
You can tunnel into the cluster by typing
```
ssh (your_solis_id)@gemini.science.uu.nl
```
you are then prompted for your password.
Note that an `ssh` session is not a very stable connection: if your connection is severed, all active commands are interrupted (the dreaded `broken pipe` error). One way around this is to use the `screen` utility.
Many advanced text editors let you work remotely. For example, Visual Studio Code has a `Remote-SSH` extension that lets you work on the remote machine as if it were local.
Some useful commands when navigating the terminal and submitting jobs on gemini.
| command | effect |
| --- | --- |
| `pwd` | print working directory, where are you in the file system|
| `cd` | change directory, without: to home directory; `..` for level up|
| `ls` | list content of folder |
| `cp`/`mv` | copy/move file |
| `rm` | remove file (CAUTION: this is permanent) |
| `grep (word) (file)` | search for `word` in `file` |
| `touch (file)` | create empty `file` |
| `top` | monitor processes and system resources |
| `Up(arrow)` | previous command
| `Ctrl + C` | cancel |
| `Ctrl + R` | search command history |
| `qsub (your_job.sh)` | submits `your_job.sh` bash script to queue* |
| `qstat` | check on your jobs in the batch queue |
| `qdel (job-id)` | deletes job with id `job-id` in queue |
\* There are 48 job slots (12 nodes with 4 cores) with 4Gb of memory per core
### Project organization
Once your projects contain more than a few results (e.g. for SOAC and your thesis), it is worthwhile organizing. A common structure has proven useful for most cases:
```
| project_name
README (markdown or simple text file describing your project and its structure)
| data (when working with big data this may be external to the project folder)
| raw_data (never touch these files)
| processed (derived files)
| doc (includes `requirements.txt` with python environment description)
| src (all your [well-documented] code: .py, .ipynb, ... files)
| results (all figures, maybe in subfolders)
```
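The skeleton above can be bootstrapped in a couple of lines (the project name below is a placeholder):

```python
import os

# Create the suggested folder layout for a hypothetical project
subfolders = ["data/raw_data", "data/processed", "doc", "src", "results"]
for sub in subfolders:
    os.makedirs(os.path.join("project_name", sub), exist_ok=True)

# empty README, to be filled in with a project description
open(os.path.join("project_name", "README"), "a").close()
```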
A well organized project helps your current and future self as well as anyone else looking at the results. It is thus an important step towards reproducibility.
### colormaps
Be conscious about the colors and colormaps you use! Colors can hide or emphasize data, which can be used to improve your presentation. Read this [short blog post](https://jakevdp.github.io/blog/2014/10/16/how-bad-is-your-colormap/) and learn why the `jet` colormap is bad.
A main consideration for the accessibility of your results is color blindness, which affects quite a few people. See [ColorBrewer](https://colorbrewer2.org) for colors that work well together.
The `cmocean` package adds some [beautiful, well-designed colormaps for oceanography](https://matplotlib.org/cmocean/) to the standard matplotlib colormaps.
## 2. Fundamental programming guidelines
### Understanding Python error messages
Understanding Python errors can be daunting at first, especially if the messages are very long. But don't despair: with some practice you become better at interpreting them and will find them helpful in pinning down the problem. The general idea is that Python shows you the call stack, from the line you executed all the way down to the line that raised the error. Often, the most important part of the error message is located at the end.
### DRY: Don't repeat yourself
It is almost always a sign of bad programming if you have to repeat a line several times. It clutters the code and makes the code harder to maintain.
### simplify code
Instead of writing one huge function, __break your functions down into logical component functions__. This will save you many headaches when hunting for bugs.
### coding style
Python is very forgiving towards your code writing style. Just because your code runs without errors does not mean it is well written, though.
How to write good, readable python code is laid out in the __[PEP8 Style Guide for Python Code](https://pep8.org/)__. Read it and try to adhere to it.
### reuse code
Once you have iterated to stable code (and you want to share it across jupyter notebooks), you should put it in a separate `.py` file. You can __import functions from `.py` files__ simply as `from file import function`.
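A runnable miniature of this pattern (`geometry.py` and its function are made up for the demonstration):

```python
import sys
from pathlib import Path

# Write a tiny hypothetical module next to the notebook...
Path("geometry.py").write_text(
    "import math\n"
    "def circumference(radius):\n"
    "    return 2 * math.pi * radius\n"
)

# ...and import from it as from any installed package
sys.path.insert(0, ".")  # make sure the current folder is importable
from geometry import circumference
```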
### Defensive programming
Defensive programming is a programming philosophy that tries to guard against errors and minimize time spent on solving bugs. The fundamental idea is that of __unit testing__: you break the code into small steps (functions) and then test whether they give the expected (known) results for simple test cases, i.e. you write each function together with known inputs and outputs. Unit testing can be automated; this is known as "continuous integration (CI)" (integrated in GitHub, for example).
This approach works well for traditional software development with fixed goals, but it is not always suited to scientific programming as the goals shift with new knowledge.
However, the principle of defensive programming is still very valuable. A __simple, lightweight version of this defensive philosophy__ is to use the `assert` statement liberally (this is not full unit testing). An `assert` checks whether a statement is true and, if not, raises an error with a custom message.
```
import numpy as np

def calc_circumference(radius):
    """Simple example function to calculate a circle's circumference."""
    return 2 * np.pi * radius

def calc_circumference2(radius):
    """Same calculation, but guarded with an assert."""
    assert type(radius) in [float, int], 'radius must be a number'
    return 2 * np.pi * radius

# This works as expected:
calc_circumference(1)

# Uncomment: this raises a TypeError, and Python explains why
# with its own (rather generic) error message:
# calc_circumference('hello')

# Uncomment: this raises an AssertionError, and our message
# tells us exactly why:
# calc_circumference2('hello')
```
### Back-up
__Always back up your code and data!__ There is nothing more frustrating than having to rewrite code after you dropped your laptop or something crashed. Cloud services like _[SURFdrive](https://surfdrive.surf.nl)_ or Dropbox/OneDrive make this very easy. The advantage is that the backup is __automated__ and does not rely on you remembering to make it.
### Version control
Do you know this?

There is a better way: version control.
Version control systems start with a base version of the document and then record changes you make each step of the way. You can think of it as a recording of your progress: you can rewind to start at the base document and play back each change you made, eventually arriving at your more recent version.

Once you think of changes as separate from the document itself, you can then think about “playing back” different sets of changes on the base document, ultimately resulting in different versions of that document. For example, two users can make independent sets of changes on the same document - these changes can be organized into separate “branches”, or groupings of work that can be shared.

Unless there are conflicts, two sets of changes can even be incorporated into the same base document; this is called "merging".

__Key points:__
- Version control is like an unlimited ‘undo’.
- Version control also allows many people to work in parallel.
- Version control works well for human-readable files (e.g. `.py`, `.txt`, `.tex`), but not for binary files (e.g. `.docx`, `.png`, ...) because it compares files line by line.
`git` is one implementation of a distributed version control system. GitHub is a company that lets you host repositories (version-controlled folders) online. Everyone can create a free GitHub account, and as a student you can create a free Pro account.
The use of `git` and GitHub requires its own tutorial, as there is a small up-front learning cost before you benefit from it. Much of the information in this section was taken from the [Software Carpentry tutorial](https://osulp.github.io/git-beginner/), which I recommend.
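As a hedged sketch of what a first `git` session might look like (the repository and file names are made up; assumes `git` is installed):

```shell
# Create a repository, record one change, and inspect the history.
git init my-project
echo "print('hello')" > my-project/analysis.py
git -C my-project add analysis.py
# The -c flags supply an identity inline in case none is configured yet.
git -C my-project -c user.name="Student" -c user.email="student@example.com" \
    commit -m "Add first analysis script"
git -C my-project log --oneline
```

Each `commit` is one recorded step you can later rewind to or branch from.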
## 3. Open science, open access, reproducibility
### Open Science
From the [Open Science Wikipedia article](https://en.wikipedia.org/wiki/Open_science):
> Open science is the movement to make scientific research (including publications, data, physical samples, and software) and its dissemination accessible to all levels of an inquiring society, amateur or professional.
The fundamental idea is that others should be able to see all the steps by which you arrived at your conclusions. In our context this specifically means making your (documented!) code available.
At Utrecht University, there is a society dedicated to Open Science: [Open Science Community Utrecht](https://openscience-utrecht.com/).
### Open access
The traditional publishing business model has been to charge readers for access to articles. This is less than ideal, as the public pays for the research yet the results sit behind a paywall. This hinders knowledge transfer, and there is a growing movement to provide open (i.e. free) access to scientific knowledge.
To publish open access usually costs money, as the publishers cannot earn money by selling the articles/journals. __The Dutch universities have agreements with all major publishers to cover open access fees.__ Use this!
### Licenses
If you want to reuse any online content you must check for the license. If there is no license, you are legally not allowed to use it. This is why it's important that you include a license with your code if you want others to reuse it.
### Reproducible code
When you publish your results (as a thesis or paper) you must __ensure that your code can reproduce all the results__. For Jupyter notebooks, check that they run completely from start to finish __without error__. The code must be documented. Ultimately, a clean version of __your code (and, if possible, the raw data) should be uploaded to a permanent repository__ such as [UU's own Yoda system](https://www.uu.nl/en/research/yoda), [Zenodo](https://zenodo.org/), or [figshare](https://figshare.com/). It can then receive a __digital object identifier (DOI)__ and should be cited in your paper and __can be cited by others__.
### Virtual environments
Virtual environments are custom Python environments with specific packages installed. So far you have likely used the `root` environment of your conda installation, which is fine for your course work. With conda you can easily create new environments in the GUI or on the command line:
```
conda create -n my_new_env python
```
where `my_new_env` is the name of the environment. On the command line you would activate this environment with `conda activate my_new_env`. You will then see that name in parentheses in front of your prompt.
In general, it is advised to create a new environment for every major project (like a thesis or a particular paper). This ensures that you know which packages and versions you used for your calculations. At the end of your project you can then __export a list of all the packages used__ and save it with the rest of the code. This is what __ensures reproducibility__.
You can view a list of your environments directly in the Anaconda GUI or by typing `conda info --envs`.
# Step 7: Fit a nonrigid transformation
```
import os
import numpy as np
from functools import partial
from skimage.external import tifffile
from phathom.registration import registration as reg
from phathom import plotting
from phathom import io
from phathom.utils import pickle_save, pickle_load, read_voxel_size
working_dir = '/media/jswaney/Drive/Justin/marmoset/'
# working_dir = '/media/jswaney/Drive/Justin/coregistration/whole_brain_tde'
voxel_size = read_voxel_size(os.path.join(working_dir, 'voxel_size.csv'))
# Load the affine transformation
affine_path = 'affine_transformation.pkl'
affine_transformation = pickle_load(os.path.join(working_dir,
affine_path))
# Load the matching coordinates
fixed_keypts_path = 'fixed_keypts.npy'
affine_keypts_path = 'affine_keypts.npy'
moving_keypts_path = 'moving_keypts.npy'
fixed_keypts = np.load(os.path.join(working_dir, fixed_keypts_path))
affine_keypts = np.load(os.path.join(working_dir, affine_keypts_path))
moving_keypts = np.load(os.path.join(working_dir, moving_keypts_path))
print(fixed_keypts.shape)
nb_samples = 8921
idx = np.random.choice(np.arange(fixed_keypts.shape[0]),
nb_samples,
replace=False)
fixed_sample = fixed_keypts[idx]
affine_sample = affine_keypts[idx]
moving_sample = moving_keypts[idx]
# Fit the thin-plate spline
smooth = 10
rbf_z, rbf_y, rbf_x = reg.fit_rbf(affine_sample,
moving_sample,
smooth)
# Make the 3D TPS transformation
tps_transform = partial(reg.rbf_transform,
rbf_z=rbf_z,
rbf_y=rbf_y,
rbf_x=rbf_x)
# Apply the thin-plate spline warp on the affine keypts
nonrigid_keypts = tps_transform(affine_keypts)
# Save the TPS-warped keypts for reference
nonrigid_keypts_path = 'nonrigid_keypts.npy'
np.save(os.path.join(working_dir, nonrigid_keypts_path), nonrigid_keypts)
# Load nonrigid keypts if already warped
nonrigid_keypts_path = 'nonrigid_keypts.npy'
nonrigid_keypts = np.load(os.path.join(working_dir, nonrigid_keypts_path))
# Convert to um
fixed_keypts_um = fixed_keypts * np.asarray(voxel_size)
nonrigid_keypts_um = nonrigid_keypts * np.asarray(voxel_size)
moving_keypts_um = moving_keypts * np.asarray(voxel_size)
# Show residuals after thin-plate spline
nonrigid_residuals = reg.match_distance(nonrigid_keypts_um,
moving_keypts_um)
plotting.plot_hist(nonrigid_residuals, 128)
print('Nonrigid ave. distance [um]:', nonrigid_residuals.mean())
# Aside: save residuals
import pandas as pd
df = pd.DataFrame({'tps': nonrigid_residuals})
df.to_excel(os.path.join(working_dir, 'tps_residuals.xlsx'))
# Make a nonrigid mapping from fixed to moving coordinates
nonrigid_transform = partial(reg.nonrigid_transform,
affine_transform=affine_transformation,
rbf_z=rbf_z,
rbf_y=rbf_y,
rbf_x=rbf_x)
# Save the nonrigid transformation
nonrigid_path = 'nonrigid_transformation.pkl'
pickle_save(os.path.join(working_dir, nonrigid_path), nonrigid_transform)
# Open images
fixed_zarr_path = 'round1/syto16.zarr/1_1_1'
moving_zarr_path = 'round2/syto16.zarr/1_1_1'
fixed_img = io.zarr.open(os.path.join(working_dir,
fixed_zarr_path),
mode='r')
moving_img = io.zarr.open(os.path.join(working_dir,
moving_zarr_path),
mode='r')
# Warp a regular grid with exact TPS
nb_pts = 100
z = np.linspace(0, fixed_img.shape[0], nb_pts)
y = np.linspace(0, fixed_img.shape[1], nb_pts)
x = np.linspace(0, fixed_img.shape[2], nb_pts)
values = reg.warp_regular_grid(nb_pts, z, y, x, nonrigid_transform)  # slow; run once, then reload below
# Save the grid values
np.save(os.path.join(working_dir, 'grid_values.npy'), values)
# Load grid values if already computed
values = np.load(os.path.join(working_dir, 'grid_values.npy'))
# Fit a RegularGridInterpolator
grid_interp = reg.fit_grid_interpolator(z, y, x, values)
# Fit a MapCoordinatesInterpolator
map_interp = reg.fit_map_interpolator(values, fixed_img.shape, order=1)
# Wrap the z,y,x interpolators together
grid_interpolator = partial(reg.interpolator, interp=grid_interp)
map_interpolator = partial(reg.interpolator, interp=map_interp)
# Show residuals after interpolator
interp_keypts = map_interpolator(pts=fixed_keypts)
interp_keypts_um = interp_keypts * np.asarray(voxel_size)
interp_residuals = reg.match_distance(interp_keypts_um,
moving_keypts_um)
plotting.plot_hist(interp_residuals, 128)
print('Interpolator ave. distance [um]:', interp_residuals.mean())
# Register a single slice to preview the result
zslice = 200
batch_size = 10000
padding = 4
reg_slice = reg.register_slice(moving_img,
zslice=zslice,
output_shape=fixed_img.shape[1:],
transformation=affine_transformation,
batch_size=batch_size,
padding=padding)
fixed_slice = fixed_img[zslice]
moving_slice = moving_img[zslice]
%matplotlib notebook
# Show an overlay of the registered and fixed slices
fixed_clim = [0, 3000]
reg_clim = [0, 3000]
figsize = (6, 6)
plotting.plot_overlay(fixed_slice,
reg_slice,
fixed_clim,
reg_clim,
figsize)
# Save the registered and fixed slice
fixed_slice_path = 'fixed_slice.tif'
moving_slice_path = 'moving_slice.tif'
reg_slice_path = 'registered_slice.tif'
tifffile.imsave(os.path.join(working_dir, fixed_slice_path),
fixed_slice)
tifffile.imsave(os.path.join(working_dir, moving_slice_path),
moving_slice)
tifffile.imsave(os.path.join(working_dir, reg_slice_path),
reg_slice)
# Save the grid_interpolator
interpolator_path = 'grid_interpolator.pkl'
pickle_save(os.path.join(working_dir, interpolator_path),
grid_interpolator)
# Save the map_interpolator
interpolator_path = 'map_interpolator.pkl'
pickle_save(os.path.join(working_dir, interpolator_path),
map_interpolator)
```
# DoWhy-The Causal Story Behind Hotel Booking Cancellations

We consider the problem of estimating the impact that assigning a customer a room different from the one they reserved has on booking cancellation.
The gold standard for finding this out would be an experiment such as a *Randomized Controlled Trial*, wherein each customer is randomly assigned to one of two categories, i.e. each customer is either assigned a different room or the same room they had booked.
But what if we cannot intervene, or it is too costly to perform such an experiment (e.g. the hotel would start losing its reputation if people learned that it randomly assigns guests to different rooms)? Can we somehow answer our query using only observational data, i.e. data that has been collected in the past?
```
#!pip install dowhy
import dowhy
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import logging
logging.getLogger("dowhy").setLevel(logging.INFO)
dataset = pd.read_csv('https://raw.githubusercontent.com/Sid-darthvader/DoWhy-The-Causal-Story-Behind-Hotel-Booking-Cancellations/master/hotel_bookings.csv')
dataset.head()
dataset.columns
```
## Data Description
For a quick glance at the features and their descriptions, the reader is referred here:
https://github.com/rfordatascience/tidytuesday/blob/master/data/2020/2020-02-11/readme.md
## Feature Engineering
Let's create some new, meaningful features to reduce the dimensionality of the dataset. The following features are created:
- **Total Stay** = stays_in_weekend_nights + stays_in_week_nights
- **Guests** = adults + children + babies
- **Different_room_assigned** = 1 if reserved_room_type & assigned_room_type are different, 0 otherwise.
```
# Total stay in nights
dataset['total_stay'] = dataset['stays_in_week_nights']+dataset['stays_in_weekend_nights']
# Total number of guests
dataset['guests'] = dataset['adults']+dataset['children'] +dataset['babies']
# Creating the different_room_assigned feature
dataset['different_room_assigned']=0
slice_indices =dataset['reserved_room_type']!=dataset['assigned_room_type']
dataset.loc[slice_indices,'different_room_assigned']=1
# Deleting older features
dataset = dataset.drop(['stays_in_week_nights','stays_in_weekend_nights','adults','children','babies'
,'reserved_room_type','assigned_room_type'],axis=1)
dataset.columns
dataset.isnull().sum() # Country,Agent,Company contain 488,16340,112593 missing entries
dataset = dataset.drop(['agent','company'],axis=1)
# Replacing missing countries with the most frequently occurring country
dataset['country']= dataset['country'].fillna(dataset['country'].mode()[0])
dataset = dataset.drop(['reservation_status','reservation_status_date','arrival_date_day_of_month'],axis=1)
dataset = dataset.drop(['arrival_date_year'],axis=1)
# Replacing 1 by True and 0 by False for the experiment and outcome variables
dataset['different_room_assigned']= dataset['different_room_assigned'].replace(1,True)
dataset['different_room_assigned']= dataset['different_room_assigned'].replace(0,False)
dataset['is_canceled']= dataset['is_canceled'].replace(1,True)
dataset['is_canceled']= dataset['is_canceled'].replace(0,False)
dataset.head()
dataset_copy = dataset
```
## Calculating Expected Counts
Since the number of cancellations and the number of times a different room was assigned are heavily imbalanced, we first choose 1000 observations at random and count in how many cases the variables *'is_canceled'* and *'different_room_assigned'* take the same value. This whole process is repeated 10000 times, and the expected count turns out to be about 51.8%, which is close to 50% (the probability of these two variables taking the same value at random).
So statistically speaking, we have no definite conclusion at this stage. Thus, assigning a room different from the one reserved may or may not lead the customer to cancel the booking.
```
counts_sum = 0
for i in range(10000):
    rdf = dataset.sample(1000)
    counts_i = rdf[rdf["is_canceled"] == rdf["different_room_assigned"]].shape[0]
    counts_sum += counts_i
counts_sum / 10000
```
We now consider the scenario when there were no booking changes and recalculate the expected count.
```
# Expected Count when there are no booking changes = 49.2%
counts_sum = 0
for i in range(10000):
    rdf = dataset[dataset["booking_changes"] == 0].sample(1000)
    counts_i = rdf[rdf["is_canceled"] == rdf["different_room_assigned"]].shape[0]
    counts_sum += counts_i
counts_sum / 10000
```
In the second case, we consider the scenario where there were booking changes (>0) and recalculate the expected count.
```
# Expected Count when there are booking changes = 66.4%
counts_sum = 0
for i in range(10000):
    rdf = dataset[dataset["booking_changes"] > 0].sample(1000)
    counts_i = rdf[rdf["is_canceled"] == rdf["different_room_assigned"]].shape[0]
    counts_sum += counts_i
counts_sum / 10000
```
There is definitely some change happening when the number of booking changes is non-zero. This hints that *Booking Changes* may be a confounding variable.
But is *Booking Changes* the only confounding variable? What if there were some unobserved confounders, regarding which we have no information (no feature) in our dataset? Would we still be able to make the same claims as before?
<font size="6">Enter *DoWhy*</font>
## Step-1. Create a Causal Graph
Represent your prior knowledge about the predictive modelling problem as a causal graph using assumptions. Don't worry, you need not specify the full graph at this stage; even a partial graph is enough, and the rest can be figured out by *DoWhy* ;-)
Here is a list of assumptions that have been translated into a causal diagram:
- *Market Segment* has 2 levels, “TA” refers to the “Travel Agents” and “TO” means “Tour Operators” so it should affect the Lead Time (which is simply the number of days between booking and arrival).
- *Country* would also play a role in deciding whether a person books early or not (hence more *Lead Time*) and what type of *Meal* a person would prefer.
- *Lead Time* would definitely affect the number of *Days in Waitlist* (there is less chance of finding a reservation if you book late). Additionally, a higher *Lead Time* can also lead to *Cancellations*.
- The number of *Days in Waitlist*, the *Total Stay* in nights and the number of *Guests* might affect whether the booking is cancelled or retained.
- *Previous Booking Retentions* would affect whether a customer is a *Repeated Guest* or not. Additionally, both of these variables would affect whether the booking gets *cancelled* or not (e.g. a customer who has retained their past 5 bookings has a higher chance of retaining this one as well; similarly, a person who has cancelled before has a higher chance of cancelling again).
- *Booking Changes* would affect whether the customer is assigned a *different room* or not which might also lead to *cancellation*.
- Finally, the number of *Booking Changes* being the only confounder affecting *Treatment* and *Outcome* is highly unlikely, and it's possible that there are some *Unobserved Confounders* about which no information is captured in our data.
```
import pygraphviz
causal_graph = """digraph {
different_room_assigned[label="Different Room Assigned"];
is_canceled[label="Booking Cancelled"];
booking_changes[label="Booking Changes"];
previous_bookings_not_canceled[label="Previous Booking Retentions"];
days_in_waiting_list[label="Days in Waitlist"];
lead_time[label="Lead Time"];
market_segment[label="Market Segment"];
country[label="Country"];
U[label="Unobserved Confounders"];
is_repeated_guest;
total_stay;
guests;
meal;
market_segment -> lead_time;
lead_time->is_canceled; country -> lead_time;
different_room_assigned -> is_canceled;
U -> different_room_assigned; U -> lead_time; U -> is_canceled;
country->meal;
lead_time -> days_in_waiting_list;
days_in_waiting_list ->is_canceled;
previous_bookings_not_canceled -> is_canceled;
previous_bookings_not_canceled -> is_repeated_guest;
is_repeated_guest -> is_canceled;
total_stay -> is_canceled;
guests -> is_canceled;
booking_changes -> different_room_assigned; booking_changes -> is_canceled;
}"""
```
Here the *Treatment* is assigning the same type of room reserved by the customer during Booking. *Outcome* would be whether the booking was cancelled or not.
*Common Causes* represent the variables that, according to us, have a causal effect on both *Outcome* and *Treatment*.
As per our causal assumptions, the 2 variables satisfying this criteria are *Booking Changes* and the *Unobserved Confounders*.
So if we are not specifying the graph explicitly (Not Recommended!), one can also provide these as parameters in the function mentioned below.
```
model= dowhy.CausalModel(
data = dataset,
graph=causal_graph.replace("\n", " "),
treatment='different_room_assigned',
outcome='is_canceled')
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
```
## Step-2. Identify the Causal Effect
We say that Treatment causes Outcome if changing Treatment leads to a change in Outcome keeping everything else constant.
Thus in this step, by using properties of the causal graph, we identify the causal effect to be estimated.
```
#Identify the causal effect
identified_estimand = model.identify_effect()
print(identified_estimand)
```
## Step-3. Estimate the identified estimand
```
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_stratification",target_units="ate")
# ATE = Average Treatment Effect
# ATT = Average Treatment Effect on Treated (i.e. those who were assigned a different room)
# ATC = Average Treatment Effect on Control (i.e. those who were not assigned a different room)
print(estimate)
```
## Step-4. Refute results
Note that the causal part does not come from data. It comes from your *assumptions* that lead to *identification*. Data is simply used for statistical *estimation*. Thus it becomes critical to verify whether our assumptions were even correct in the first step or not!
What happens when another common cause exists?
What happens when the treatment itself was placebo?
### Method-1
**Random Common Cause:-** *Adds randomly drawn covariates to the data and re-runs the analysis to see whether the causal estimate changes. If our assumptions were correct, the causal estimate shouldn't change by much.*
```
refute1_results=model.refute_estimate(identified_estimand, estimate,
method_name="random_common_cause")
print(refute1_results)
```
### Method-2
**Placebo Treatment Refuter:-** *Randomly assigns any covariate as the treatment and re-runs the analysis. If our assumptions were correct, this new estimate should go to 0.*
```
refute2_results=model.refute_estimate(identified_estimand, estimate,
method_name="placebo_treatment_refuter")
print(refute2_results)
```
### Method-3
**Data Subset Refuter:-** *Creates subsets of the data (similar to cross-validation) and checks whether the causal estimates vary across subsets. If our assumptions were correct, there shouldn't be much variation.*
```
refute3_results=model.refute_estimate(identified_estimand, estimate,
method_name="data_subset_refuter")
print(refute3_results)
```
This tells us that if the probability of cancellation was $0<x<1$, then changing the room causes the probability to go to $x+0.36$.
So the effect of our *treatment* is **36 percentage points**.
## Comparing Results with XGBoost Feature Importance
We now know the effect of assigning a different room is 36 percentage points, so it might be a good exercise to compare results with feature importance obtained using a model offering high predictive accuracy on this dataset.
We choose XGBoost as the model as it offers a fairly high predictive accuracy on this dataset. The *plot_importance* function of XGBoost is used to calculate feature importance. Contrary to our analysis using *DoWhy*, it is observed that *Different_room_assigned* attains a relatively low importance score.
```
# plot feature importance using built-in function
from xgboost import XGBClassifier
from xgboost import plot_importance
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report
from matplotlib import pyplot
# split data into X and y
X = dataset_copy
y = dataset_copy['is_canceled']
X = X.drop(['is_canceled'],axis=1)
# One-Hot Encode the dataset
X = pd.get_dummies(X)
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=26)
# fit model no training data
model = XGBClassifier()
model.fit(X_train, y_train)
# make predictions for test data and evaluate
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
print(classification_report(y_test, predictions))
```
The feature importance plotted below uses **weight** to rank features, where **weight** is the number of times a feature appears in a tree.
```
# plot feature importance
plot_importance(model,max_num_features=20)
pyplot.show()
# Execute this code to hide all warnings
from IPython.display import HTML
HTML('''<script>
code_show_err=false;
function code_toggle_err() {
if (code_show_err){
$('div.output_stderr').hide();
} else {
$('div.output_stderr').show();
}
code_show_err = !code_show_err
}
$( document ).ready(code_toggle_err);
</script>
To toggle on/off output_stderr, click <a href="javascript:code_toggle_err()">here</a>.''')
```
# Carvana Image Masking Challenge
https://www.kaggle.com/c/carvana-image-masking-challenge
```
IMG_ROWS = 480
IMG_COLS = 320
TEST_IMG_ROWS = 1918
TEST_IMG_COLS = 1280
```
## Loading the source images
```
import cv2
import numpy as np
from scipy import ndimage
from glob import glob
SAMPLE = 5000
train_img_paths = sorted(glob('./data/train/*.jpg'))[:SAMPLE]
train_mask_paths = sorted(glob('./data/train_masks/*.gif'))[:SAMPLE]
train_imgs = np.array([cv2.resize(ndimage.imread(path), (IMG_ROWS, IMG_COLS))
for path in train_img_paths])
train_masks = np.array([cv2.resize(ndimage.imread(path, mode = 'L'), (IMG_ROWS, IMG_COLS))
for path in train_mask_paths])
train_masks = train_masks.astype(np.float32)
train_masks[train_masks<=127] = 0.
train_masks[train_masks>127] = 1.
train_masks = np.reshape(train_masks, (*train_masks.shape, 1))
%matplotlib inline
from matplotlib import pyplot as plt
fig = plt.figure(0, figsize=(20, 20))
fig.add_subplot(1, 2, 1)
plt.imshow(train_imgs[0])
fig.add_subplot(1, 2, 2)
plt.imshow(np.squeeze(train_masks[0]), cmap='gray')
```
## Initializing the U-Net architecture
```
from keras.layers import Input
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Conv2DTranspose
from keras.layers import BatchNormalization
from keras.layers import concatenate
from keras.models import Model
inputs = Input((IMG_COLS, IMG_ROWS, 3))
bnorm1 = BatchNormalization()(inputs)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(bnorm1)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2)
conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool3)
conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool4)
conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5)
up6 = concatenate([Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(conv5), conv4], axis=3)
conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(up6)
conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv6)
up7 = concatenate([Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(conv6), conv3], axis=3)
conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(up7)
conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv7)
up8 = concatenate([Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(conv7), conv2], axis=3)
conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(up8)
conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv8)
up9 = concatenate([Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(conv8), conv1], axis=3)
conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(up9)
conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv9)
conv10 = Conv2D(1, (1, 1), activation='sigmoid')(conv9)
model = Model(inputs=[inputs], outputs=[conv10])
model.summary()
```
## Defining the loss function
```
from keras import backend as K
from keras.losses import binary_crossentropy
SMOOTH = 1.

def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + SMOOTH) / (K.sum(y_true_f) + K.sum(y_pred_f) + SMOOTH)

def bce_dice_loss(y_true, y_pred):
    return 0.5 * binary_crossentropy(y_true, y_pred) - dice_coef(y_true, y_pred)
```
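In math form, the smoothed Dice coefficient computed by `dice_coef` for a flattened ground-truth mask $y$ and prediction $\hat{y}$ is

$$
\mathrm{Dice}(y, \hat{y}) = \frac{2 \sum_i y_i \hat{y}_i + s}{\sum_i y_i + \sum_i \hat{y}_i + s},
$$

where $s = 1$ is the smoothing constant that keeps the ratio well-defined for empty masks. The training loss combines it with binary cross-entropy as $L = 0.5\,\mathrm{BCE}(y, \hat{y}) - \mathrm{Dice}(y, \hat{y})$, so minimizing $L$ maximizes the overlap between the predicted and true masks.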
## Running the training process
```
from keras.optimizers import Adam
model.compile(Adam(lr=1e-4),
bce_dice_loss,
metrics=[binary_crossentropy, dice_coef])
model.fit(train_imgs[50:], train_masks[50:],
batch_size=12, epochs=10,
validation_data=(train_imgs[:50], train_masks[:50]))
```
## Model prediction
```
test_paths = sorted(glob('./data/test/*.jpg'))
def test_img_generator(test_paths):
    while True:
        for path in test_paths:
            yield np.array([cv2.resize(ndimage.imread(path), (IMG_ROWS, IMG_COLS))])
pred = model.predict_generator(test_img_generator(test_paths[:10]), len(test_paths[:10]))
```
## Visualizing the result
```
fig = plt.figure(0, figsize=(20, 10))
k = 5
fig.add_subplot(2, 2, 1)
plt.imshow(ndimage.imread(test_paths[k]))
fig.add_subplot(2, 2, 2)
plt.imshow(np.squeeze(cv2.resize(pred[k], (TEST_IMG_ROWS, TEST_IMG_COLS))), cmap='gray')
fig.add_subplot(2, 2, 3)
plt.imshow(ndimage.imread(test_paths[k+1]))
fig.add_subplot(2, 2, 4)
plt.imshow(np.squeeze(cv2.resize(pred[k+1], (TEST_IMG_ROWS, TEST_IMG_COLS))), cmap='gray')
```
## Preparing the submission data
```
def rle_encode(mask):
    pixels = mask.flatten()
    pixels[0] = 0
    pixels[-1] = 0
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 2
    runs[1::2] = runs[1::2] - runs[:-1:2]
    return runs

with open('submit.txt', 'w') as dst:
    dst.write('img,rle_mask\n')
    for path in test_paths:
        img = np.array([cv2.resize(ndimage.imread(path), (IMG_ROWS, IMG_COLS))])
        pred_mask = model.predict(img)[0]
        bin_mask = 255. * cv2.resize(pred_mask, (TEST_IMG_ROWS, TEST_IMG_COLS))
        bin_mask[bin_mask <= 127] = 0
        bin_mask[bin_mask > 127] = 1
        rle = rle_encode(bin_mask.astype(np.uint8))
        rle = ' '.join(str(x) for x in rle)
        dst.write('%s,%s\n' % (path.split('/')[-1], rle))
# 20 epochs
# loss: -0.9891 - binary_crossentropy: 0.0077 - dice_coef: 0.9930
# val_loss: -0.9889 - val_binary_crossentropy: 0.0085 - val_dice_coef: 0.9932
# kaggle: 0.9926
```
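The run-length encoder above can be sanity-checked on a tiny hand-made mask (re-defined here so the snippet is self-contained); the output alternates 1-indexed start positions and run lengths, per the competition format:

```python
import numpy as np

def rle_encode(mask):
    # Same scheme as above: clear the edge pixels so every run has a
    # well-defined start and end, then emit (start, length) pairs.
    pixels = mask.flatten()
    pixels[0] = 0
    pixels[-1] = 0
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 2
    runs[1::2] = runs[1::2] - runs[:-1:2]
    return runs

# Flattened, this mask is [0, 1, 1, 0, 0, 1, 1, 0]:
# a run of two 1s starting at pixel 2, and another starting at pixel 6.
mask = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0]], dtype=np.uint8)
print(rle_encode(mask))  # [2 2 6 2]
```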
# Classification
```
import os
import itertools

import numpy as np
import matplotlib
import matplotlib.pyplot as plt

from sklearn.datasets import make_moons, make_circles, make_classification

# make this notebook's output stable across runs
np.random.seed(42)

# To plot pretty figures
%matplotlib inline
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
def figsize(x, y):
    # Override the default figure size (matplotlib's default is [8.0, 6.0])
    fig_size = plt.rcParams["figure.figsize"]
    fig_size[0] = x
    fig_size[1] = y
    plt.rcParams["figure.figsize"] = fig_size
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2, random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
datasets = [make_moons(noise=0.3, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
(X, y)]
def plot_classification(name, clf, X, y, cmap):
score = clf.score(X, y)
h = 0.2
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Plot the decision boundary. For that, we will assign a color to each
    # point in the mesh [x_min, x_max] x [y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=cmap, alpha=.8)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Greys)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title(name + " - Score %.2f" % score)
def plot_multi_class(name, clf, X, y, cmap=plt.cm.PRGn):
score = clf.score(X, y)
h = 0.2
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Plot the decision boundary. For that, we will assign a color to each
    # point in the mesh [x_min, x_max] x [y_min, y_max].
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=cmap, alpha=.8)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Greys)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title(name + " - Score %.2f" % score)
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
```
Consider the following randomly generated data:
```
figsize(14, 5)
for i, (X, y) in enumerate(datasets):
plt.subplot(1,3,i+1)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Greys)
```
We would like to build a **classifier** that can properly **separate** the two classes and correctly classify new inputs.
## Solution using Support Vector Machines (SVM)
```
from sklearn.svm import SVC
svc = SVC(kernel='linear')
X, y = datasets[0]
svc.fit(X, y)
figsize(8,8)
plot_classification('SVC linear', svc, X, y, plt.cm.PRGn)
figsize(15, 5)
for dataset_idx, (X, y) in enumerate(datasets):
plt.subplot(1, 3, dataset_idx+1)
svc.fit(X, y)
plot_classification('SVC linear', svc, X, y, plt.cm.PRGn)
svc = SVC(kernel='poly', degree=3)
for dataset_idx, (X, y) in enumerate(datasets):
plt.subplot(1, 3, dataset_idx+1)
svc.fit(X, y)
plot_classification('SVC Polynomial', svc, X, y, plt.cm.PRGn)
svc = SVC(kernel='rbf')
for dataset_idx, (X, y) in enumerate(datasets):
plt.subplot(1, 3, dataset_idx+1)
svc.fit(X, y)
plot_classification('SVC RBF', svc, X, y, plt.cm.PRGn)
```
# Exercise: *Iris*
- 50 samples of each of 3 different iris species (150 samples in total)
- Measurements: sepal length, sepal width, petal length, petal width

## Machine learning on the iris dataset
Framed as a **supervised learning** problem: predict the species of an iris from its measurements.
- A famous machine learning dataset because the prediction task is **easy**
- Learn more about the iris dataset: [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml/datasets/Iris)
- Each row is an **observation** (also known as: example, sample)
- Each column is a **feature** (also known as: predictor, attribute, independent variable)
- Each value we are predicting is the response (also known as: target, outcome, label, dependent variable)
- Classification is supervised learning in which the response is categorical
- Regression is supervised learning in which the response is ordered and continuous
```
from IPython.display import IFrame
IFrame('http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', width=300, height=200)
from sklearn.datasets import load_iris
iris = load_iris()
print(type(iris))
print(iris.DESCR)
print(iris.data)
print(type(iris.data))
print(type(iris.target))
print(iris.data.shape)
print(iris.target.shape)
X = iris.data # Features
y = iris.target # Labels
figsize(8,8)
plt.scatter(X[:,0], X[:,1], c=y)
```
## Exercise:
Build a classifier that can separate the 3 plant classes.
```
figsize(6,6)
plt.scatter(X[:,0], X[:,1], c=y)
figsize(6,6)
plt.scatter(X[:,2], X[:,3], c=y)
figsize(6,6)
plt.scatter(X[:,0], X[:,2], c=y)
np.random.seed(42)
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
train_features, test_features, train_labels, test_labels = train_test_split(X, y, test_size=0.2)
target_names = iris.target_names
print(iris.target_names)
print(test_features[:2])
print(train_features[:2])
print(test_labels[:10])
print(train_labels[:10])
classifier_svm = SVC()
classifier_svm.fit(train_features, train_labels)
svm_labels = classifier_svm.predict(test_features)
```
Report the validation metrics for this classifier: confusion matrix, precision/recall, F1, and ROC.
```
classifier_svm.score(test_features, test_labels)
confusion_mat = confusion_matrix(test_labels, svm_labels)
confusion_mat
figsize(4, 4)
plot_confusion_matrix(confusion_mat, target_names)
print(classification_report(test_labels, svm_labels, target_names=target_names))
```
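The per-class numbers printed by `classification_report` can be reproduced directly from the confusion matrix. A small NumPy sketch, using a hypothetical 3-class matrix (the counts are illustrative, not from the iris run above):

```python
import numpy as np

# Hypothetical 3-class confusion matrix (rows = true labels, cols = predictions)
cm = np.array([[10, 0, 0],
               [ 0, 8, 2],
               [ 0, 1, 9]])
precision = np.diag(cm) / cm.sum(axis=0)   # correct / everything predicted as that class
recall    = np.diag(cm) / cm.sum(axis=1)   # correct / everything truly of that class
f1 = 2 * precision * recall / (precision + recall)
print(recall)  # → [1.  0.8 0.9]
```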
# GLM: Robust Linear Regression
Author: [Thomas Wiecki](https://twitter.com/twiecki)
This tutorial first appeared as a post in a small series on Bayesian GLMs on my blog:
1. [The Inference Button: Bayesian GLMs made easy with PyMC3](http://twiecki.github.com/blog/2013/08/12/bayesian-glms-1/)
2. [This world is far from Normal(ly distributed): Robust Regression in PyMC3](http://twiecki.github.io/blog/2013/08/27/bayesian-glms-2/)
3. [The Best Of Both Worlds: Hierarchical Linear Regression in PyMC3](http://twiecki.github.io/blog/2014/03/17/bayesian-glms-3/)
In this blog post I will write about:
- How a few outliers can largely affect the fit of linear regression models.
- How replacing the normal likelihood with Student T distribution produces robust regression.
- How this can easily be done with `PyMC3` and its new `glm` module by passing a `family` object.
This is the second part of a series on Bayesian GLMs (click [here for part I about linear regression](http://twiecki.github.io/blog/2013/08/12/bayesian-glms-1/)). In this prior post I described how minimizing the squared distance of the regression line is the same as maximizing the likelihood of a Normal distribution with the mean coming from the regression line. This latter probabilistic expression allows us to easily formulate a Bayesian linear regression model.
This worked splendidly on simulated data. The problem with simulated data though is that it's, well, simulated. In the real world things tend to get more messy and assumptions like normality are easily violated by a few outliers.
Let's see what happens if we add some outliers to our simulated data from the last post.
Again, import our modules.
```
%matplotlib inline
import pymc3 as pm
import matplotlib.pyplot as plt
import numpy as np
import theano
```
Create some toy data but also add some outliers.
```
size = 100
true_intercept = 1
true_slope = 2
x = np.linspace(0, 1, size)
# y = a + b*x
true_regression_line = true_intercept + true_slope * x
# add noise
y = true_regression_line + np.random.normal(scale=.5, size=size)
# Add outliers
x_out = np.append(x, [.1, .15, .2])
y_out = np.append(y, [8, 6, 9])
data = dict(x=x_out, y=y_out)
```
Plot the data together with the true regression line (the three points in the upper left corner are the outliers we added).
```
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, xlabel='x', ylabel='y', title='Generated data and underlying model')
ax.plot(x_out, y_out, 'x', label='sampled data')
ax.plot(x, true_regression_line, label='true regression line', lw=2.)
plt.legend(loc=0);
```
## Robust Regression
Let's see what happens if we estimate our Bayesian linear regression model using the `glm()` function as before. This function takes a [`Patsy`](http://patsy.readthedocs.org/en/latest/quickstart.html) string to describe the linear model and adds a Normal likelihood by default.
```
with pm.Model() as model:
pm.glm.GLM.from_formula('y ~ x', data)
trace = pm.sample(2000, cores=2)
```
To evaluate the fit, I am plotting the posterior predictive regression lines by taking regression parameters from the posterior distribution and plotting a regression line for each (this is all done inside of `plot_posterior_predictive()`).
```
plt.figure(figsize=(7, 5))
plt.plot(x_out, y_out, 'x', label='data')
pm.plot_posterior_predictive_glm(trace, samples=100,
label='posterior predictive regression lines')
plt.plot(x, true_regression_line,
label='true regression line', lw=3., c='y')
plt.legend(loc=0);
```
As you can see, the fit is quite skewed and we have a fair amount of uncertainty in our estimate as indicated by the wide range of different posterior predictive regression lines. Why is this? The reason is that the normal distribution does not have a lot of mass in the tails and consequently, an outlier will affect the fit strongly.
A Frequentist would estimate a [Robust Regression](http://en.wikipedia.org/wiki/Robust_regression) and use a non-quadratic distance measure to evaluate the fit.
But what's a Bayesian to do? Since the problem is the light tails of the Normal distribution we can instead assume that our data is not normally distributed but instead distributed according to the [Student T distribution](http://en.wikipedia.org/wiki/Student%27s_t-distribution) which has heavier tails as shown next (I read about this trick in ["The Kruschke"](http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/), aka the puppy-book; but I think [Gelman](http://www.stat.columbia.edu/~gelman/book/) was the first to formulate this).
Let's look at those two distributions to get a feel for them.
```
normal_dist = pm.Normal.dist(mu=0, sd=1)
t_dist = pm.StudentT.dist(mu=0, lam=1, nu=1)
x_eval = np.linspace(-8, 8, 300)
plt.plot(x_eval, theano.tensor.exp(normal_dist.logp(x_eval)).eval(), label='Normal', lw=2.)
plt.plot(x_eval, theano.tensor.exp(t_dist.logp(x_eval)).eval(), label='Student T', lw=2.)
plt.xlabel('x')
plt.ylabel('Probability density')
plt.legend();
```
As you can see, values far from the mean (0 in this case) are much more probable under the `T` distribution than under the Normal distribution.
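A quick numeric check of this claim, using SciPy's distribution objects (assuming `scipy` is available alongside the imports above):

```python
from scipy import stats

# Density at x = 4 (four standard deviations out) under each distribution
p_normal = stats.norm.pdf(4)        # standard Normal, ≈ 1.3e-4
p_student = stats.t.pdf(4, df=1)    # Student T with nu = 1, as above, ≈ 1.9e-2
print(p_student / p_normal)         # the T density is roughly 140x larger in the tail
```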
To define the usage of a T distribution in `PyMC3` we can pass a family object -- `T` -- that specifies that our data is Student T-distributed (see `glm.families` for more choices). Note that this is the same syntax as `R` and `statsmodels` use.
```
with pm.Model() as model_robust:
family = pm.glm.families.StudentT()
pm.glm.GLM.from_formula('y ~ x', data, family=family)
trace_robust = pm.sample(2000, cores=2)
plt.figure(figsize=(7, 5))
plt.plot(x_out, y_out, 'x')
pm.plot_posterior_predictive_glm(trace_robust,
label='posterior predictive regression lines')
plt.plot(x, true_regression_line,
label='true regression line', lw=3., c='y')
plt.legend();
```
There, much better! The outliers barely influence our estimate at all, because the Student T likelihood treats extreme values as far more probable than a Normal likelihood would.
## Summary
- `PyMC3`'s `glm()` function allows you to pass in a `family` object that contains information about the likelihood.
- By changing the likelihood from a Normal distribution to a Student T distribution -- which has more mass in the tails -- we can perform *Robust Regression*.
The next post will be about logistic regression in PyMC3 and what the posterior and oatmeal have in common.
*Extensions*:
- The Student-T distribution has, besides the mean and variance, a third parameter called *degrees of freedom* that describes how much mass should be put into the tails. Here it is set to 1 which gives maximum mass to the tails (setting this to infinity results in a Normal distribution!). One could easily place a prior on this rather than fixing it which I leave as an exercise for the reader ;).
- T distributions can be used as priors as well. I will show this in a future post on hierarchical GLMs.
- How do we test if our data is normal or violates that assumption in an important way? Check out this [great blog post](http://allendowney.blogspot.com/2013/08/are-my-data-normal.html) by Allen Downey.
```
import copy
import time
import logging
import argparse
import sys, os
import numpy as np
from math import pi
import matplotlib.pyplot as plt
from os.path import dirname, abspath, join
sys.path.append("../")
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from skimage import measure
from Grids import createGrid
from ValueFuncs import *
from Visualization import *
from DynamicalSystems import *
from InitialConditions import shapeCylinder
from Utilities import *
from ExplicitIntegration import *
from SpatialDerivative import *
from Visualization.value_viz import ValueVisualizer
args = Bundle({'pause_time': 1, 'visualize': True, 'elevation': 5, 'azimuth': 5})
obj = Bundle({})
# define the target set
def get_target(g):
cylinder = shapeCylinder(g.grid, g.axis_align, g.center, g.radius);
return cylinder
def get_hamiltonian_func(t, data, deriv, finite_diff_data):
global obj
ham_value = deriv[0] * obj.p1_term + \
deriv[1] * obj.p2_term - \
obj.omega_e_bound*np.abs(deriv[0]*obj.grid.xs[1] - \
deriv[1] * obj.grid.xs[0] - deriv[2]) + \
obj.omega_p_bound * np.abs(deriv[2])
return ham_value, finite_diff_data
def get_partial_func(t, data, derivMin, derivMax, \
schemeData, dim):
"""
Calculate the extrema of the absolute value of the partials of the
analytic Hamiltonian with respect to the costate (gradient).
"""
global obj
# print('dim: ', dim)
assert dim>=0 and dim <3, "grid dimension has to be between 0 and 2 inclusive."
return obj.alpha[dim]
## Grid
grid_min = expand(np.array((-.75, -1.25, -pi)), ax = 1)
grid_max = expand(np.array((3.25, 1.25, pi)), ax = 1)
pdDims = 2 # 3rd dimension is periodic
resolution = 100
N = np.array(([[
resolution,
np.ceil(resolution*(grid_max[1, 0] - grid_min[1, 0])/ \
(grid_max[0, 0] - grid_min[0, 0])),
resolution-1
]])).T.astype(int)
grid_max[2, 0]*= (1-2/N[2])
obj.grid = createGrid(grid_min, grid_max, N, pdDims)
# global params
obj.axis_align, obj.center, obj.radius = 2, np.zeros((3, 1)), 0.5
data0 = get_target(obj)
data = copy.copy(data0)
obj.v_e = +1
obj.v_p = +1
obj.omega_e_bound = +1
obj.omega_p_bound = +1
t_range = [0, 2.5]
obj.p1_term = obj.v_e - obj.v_p * np.cos(obj.grid.xs[2])
obj.p2_term = -obj.v_p * np.sin(obj.grid.xs[2])
obj.alpha = [ np.abs(obj.p1_term) + np.abs(obj.omega_e_bound * obj.grid.xs[1]), \
np.abs(obj.p2_term) + np.abs(obj.omega_e_bound * obj.grid.xs[0]), \
obj.omega_e_bound + obj.omega_p_bound ]
small = 100*eps
level = 0
finite_diff_data = Bundle({'grid': obj.grid, 'hamFunc': get_hamiltonian_func,
'partialFunc': get_partial_func,
'dissFunc': artificialDissipationGLF,
'derivFunc': upwindFirstENO2,
})
options = Bundle(dict(factorCFL=0.95, stats='on', singleStep='off'))
integratorOptions = odeCFLset(options)
"""
---------------------------------------------------------------------------
Restrict the Hamiltonian so that reachable set only grows.
The Lax-Friedrichs approximation scheme MUST already be completely set up.
"""
innerData = copy.copy(finite_diff_data)
del finite_diff_data
# Wrap the true Hamiltonian inside the term approximation restriction routine.
schemeFunc = termRestrictUpdate
finite_diff_data = Bundle(dict(innerFunc = termLaxFriedrichs,
innerData = innerData,
positive = 0
))
# Period at which intermediate plots should be produced.
plot_steps = 11
t_plot = (t_range[1] - t_range[0]) / (plot_steps - 1)
# Loop through t_range (subject to a little roundoff).
t_now = t_range[0]
start_time = time.time()  # numpy has no time(); use the standard library timer
# Visualization parameters
spacing = tuple(obj.grid.dx.flatten().tolist())
init_mesh = implicit_mesh(data, level=0, spacing=spacing, edge_color='b', face_color='b')
params = Bundle(
{"grid": obj.grid,
'disp': True,
'labelsize': 16,
'labels': "Initial 0-LevelSet",
'linewidth': 2,
'data': data,
'elevation': args.elevation,
'azimuth': args.azimuth,
'mesh': init_mesh,
'init_conditions': False,
'pause_time': args.pause_time,
'level': 0, # which level set to visualize
'winsize': (12,7),
'fontdict': Bundle({'fontsize':12, 'fontweight':'bold'}),
"savedict": Bundle({"save": False,
"savename": "rcbrt",
"savepath": "../jpeg_dumps/rcbrt"})
})
if args.visualize:
rcbrt_viz = RCBRTVisualizer(params=params)
while(t_range[1] - t_now > small * t_range[1]):
time_step = f"{t_now}/{t_range[-1]}"
# Reshape data array into column vector for ode solver call.
y0 = data.flatten()
# How far to step?
t_span = np.hstack([ t_now, min(t_range[1], t_now + t_plot) ])
# Take a timestep.
t, y, _ = odeCFL2(termRestrictUpdate, t_span, y0, integratorOptions, finite_diff_data)
t_now = t
info(f't: {t:.3f}/{t_range[-1]} TargSet Min: {min(y):.3f}, TargSet Max: {max(y):.3f} TargSet Norm: {np.linalg.norm(y)}')
# Get back the correctly shaped data array
data = np.reshape(y, obj.grid.shape)
mesh=implicit_mesh(data, level=0, spacing=spacing, edge_color='None',
face_color='red')
if args.visualize:
rcbrt_viz.update_tube(data, mesh, time_step)
end_time = time.time()
info(f'Total execution time {end_time - start_time} seconds.')
plt.close('all')
```
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Solution Notebook
## Problem: Implement an algorithm to have a robot move from the upper left corner to the bottom right corner of a grid.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
## Constraints
* Are there restrictions to how the robot moves?
* The robot can only move right and down
* Are some cells invalid (off limits)?
* Yes
* Can we assume the starting and ending cells are valid cells?
* Yes
* Is this a rectangular grid? i.e. the grid is not jagged?
* Yes
* Will there always be a valid way for the robot to get to the bottom right?
* No, return None
* Can we assume the inputs are valid?
* No
* Can we assume this fits memory?
* Yes
## Test Cases
<pre>
o = valid cell
x = invalid cell
0 1 2 3
0 o o o o
1 o x o o
2 o o x o
3 x o o o
4 o o x o
5 o o o x
6 o x o x
7 o x o o
</pre>
* General case
```
expected = [(0, 0), (1, 0), (2, 0),
(2, 1), (3, 1), (4, 1),
(5, 1), (5, 2), (6, 2),
(7, 2), (7, 3)]
```
* No valid path, say row 7, col 2 is invalid
* None input
* Empty matrix
## Algorithm
To get to row r and column c [r, c], we will need to have gone:
* Right from [r, c-1] if this is a valid cell - [Path 1]
* Down from [r-1, c] if this is a valid cell - [Path 2]
If we look at [Path 1], to get to [r, c-1], we will need to have gone:
* Right from [r, c-2] if this is a valid cell
* Down from [r-1, c-1] if this is a valid cell
Continue this process until we reach the start cell or until we find that there is no path.
Base case:
* If the input row or col are < 0, or if [row, col] is not a valid cell
* Return False
Recursive case:
We'll memoize the solution to improve performance.
* Use the memo to see if we've already processed the current cell
* If any of the following is True, append the current cell to the path and set our result to True:
* We are at the start cell
* We get a True result from a recursive call on:
* [row, col-1]
* [row-1, col]
* Update the memo
* Return the result
Complexity:
* Time: O(row * col)
* Space: O(row * col) for the recursion depth
## Code
```
class Grid(object):
def find_path(self, matrix):
if matrix is None or not matrix:
return None
cache = {}
path = []
if self._find_path(matrix, len(matrix) - 1,
len(matrix[0]) - 1, cache, path):
return path
else:
return None
def _find_path(self, matrix, row, col, cache, path):
if row < 0 or col < 0 or not matrix[row][col]:
return False
cell = (row, col)
if cell in cache:
return cache[cell]
cache[cell] = (row == 0 and col == 0 or
self._find_path(matrix, row, col - 1, cache, path) or
self._find_path(matrix, row - 1, col, cache, path))
if cache[cell]:
path.append(cell)
return cache[cell]
```
## Unit Test
```
%%writefile test_grid_path.py
from nose.tools import assert_equal
class TestGridPath(object):
def test_grid_path(self):
grid = Grid()
assert_equal(grid.find_path(None), None)
assert_equal(grid.find_path([[]]), None)
max_rows = 8
max_cols = 4
matrix = [[1] * max_cols for _ in range(max_rows)]
matrix[1][1] = 0
matrix[2][2] = 0
matrix[3][0] = 0
matrix[4][2] = 0
matrix[5][3] = 0
matrix[6][1] = 0
matrix[6][3] = 0
matrix[7][1] = 0
result = grid.find_path(matrix)
expected = [(0, 0), (1, 0), (2, 0),
(2, 1), (3, 1), (4, 1),
(5, 1), (5, 2), (6, 2),
(7, 2), (7, 3)]
assert_equal(result, expected)
matrix[7][2] = 0
result = grid.find_path(matrix)
assert_equal(result, None)
print('Success: test_grid_path')
def main():
test = TestGridPath()
test.test_grid_path()
if __name__ == '__main__':
main()
%run -i test_grid_path.py
```
Convolutional Dictionary Learning
=================================
This example demonstrates the use of [cbpdndl.ConvBPDNDictLearn](http://sporco.rtfd.org/en/latest/modules/sporco.dictlrn.cbpdndl.html#sporco.dictlrn.cbpdndl.ConvBPDNDictLearn) for learning a convolutional dictionary from a set of colour training images [[51]](http://sporco.rtfd.org/en/latest/zreferences.html#id54), using PGM solvers for both sparse coding [[13]](http://sporco.rtfd.org/en/latest/zreferences.html#id13) [[53]](http://sporco.rtfd.org/en/latest/zreferences.html#id56) and dictionary update steps [[26]](http://sporco.rtfd.org/en/latest/zreferences.html#id25).
```
from __future__ import print_function
from builtins import input
import pyfftw # See https://github.com/pyFFTW/pyFFTW/issues/40
import numpy as np
from sporco.dictlrn import cbpdndl
from sporco import util
from sporco import signal
from sporco import plot
plot.config_notebook_plotting()
from sporco.pgm.backtrack import BacktrackStandard
```
Load training images.
```
exim = util.ExampleImages(scaled=True, zoom=0.5)
img1 = exim.image('barbara.png', idxexp=np.s_[10:522, 100:612])
img2 = exim.image('kodim23.png', idxexp=np.s_[:, 60:572])
img3 = exim.image('monarch.png', idxexp=np.s_[:, 160:672])
S = np.stack((img1, img2, img3), axis=3)
```
Highpass filter training images.
```
npd = 16
fltlmbd = 5
sl, sh = signal.tikhonov_filter(S, fltlmbd, npd)
```
Construct initial dictionary.
```
np.random.seed(12345)
D0 = np.random.randn(16, 16, 3, 96)
```
Set regularization parameter and options for dictionary learning solver. Note the multi-scale dictionary filter sizes. Also note the possibility of changing parameters in the backtracking algorithm.
```
lmbda = 0.2
L_sc = 36.0
L_du = 50.0
dsz = ((8, 8, 3, 32), (12, 12, 3, 32), (16, 16, 3, 32))
opt = cbpdndl.ConvBPDNDictLearn.Options({
'Verbose': True, 'MaxMainIter': 200, 'DictSize': dsz,
'CBPDN': {'Backtrack': BacktrackStandard(gamma_u=1.1), 'L': L_sc},
'CCMOD': {'Backtrack': BacktrackStandard(), 'L': L_du}},
xmethod='pgm', dmethod='pgm')
```
Create solver object and solve.
```
d = cbpdndl.ConvBPDNDictLearn(D0, sh, lmbda, opt, xmethod='pgm',
dmethod='pgm')
D1 = d.solve()
print("ConvBPDNDictLearn solve time: %.2fs" % d.timer.elapsed('solve'))
```
Display initial and final dictionaries.
```
D1 = D1.squeeze()
fig = plot.figure(figsize=(14, 7))
plot.subplot(1, 2, 1)
plot.imview(util.tiledict(D0), title='D0', fig=fig)
plot.subplot(1, 2, 2)
plot.imview(util.tiledict(D1, dsz), title='D1', fig=fig)
fig.show()
```
Get iterations statistics from solver object and plot functional value, residuals, and automatically adjusted gradient step parameters against the iteration number.
```
its = d.getitstat()
fig = plot.figure(figsize=(20, 5))
plot.subplot(1, 3, 1)
plot.plot(its.ObjFun, xlbl='Iterations', ylbl='Functional', fig=fig)
plot.subplot(1, 3, 2)
plot.plot(np.vstack((its.X_Rsdl, its.D_Rsdl)).T,
ptyp='semilogy', xlbl='Iterations', ylbl='Residual',
lgnd=['X', 'D'], fig=fig)
plot.subplot(1, 3, 3)
plot.plot(np.vstack((its.X_L, its.D_L)).T, xlbl='Iterations',
ylbl='Inverse of Gradient Step Parameter', ptyp='semilogy',
lgnd=['$L_X$', '$L_D$'], fig=fig)
fig.show()
```
```
"""
Snowflake + DataRobot Prediction API example code.
1. Data extracted via Snowflake python connector
2. Python scoring http request sent
3. Data written back to Snowflake via connector as raw json and flattened in Snowflake
4. Data flattened in python
5. Batch Scoring Script scoring
*******
NOTE:
Write-back is shown only as an example - the process used here may be OK on some databases,
but Snowflake should ingest data back via stage objects.
*******
v1.0 Mike Taveirne (doyouevendata) 1/17/2020
"""
import snowflake.connector
import datetime
import sys
from pandas.io.json import json_normalize
import pandas as pd
import requests
import my_creds
# datarobot parameters
API_KEY = my_creds.API_KEY
USERNAME = my_creds.USERNAME
DEPLOYMENT_ID = my_creds.DEPLOYMENT_ID
DATAROBOT_KEY = my_creds.DATAROBOT_KEY
# replace with the load balancer for your prediction instance(s)
DR_PREDICTION_HOST = my_creds.DR_PREDICTION_HOST
DR_MODELING_HEADERS = {'Content-Type': 'application/json', 'Authorization': 'token %s' % API_KEY}
headers = {'Content-Type': 'text/plain; charset=UTF-8', 'datarobot-key': DATAROBOT_KEY}
url = '{dr_prediction_host}/predApi/v1.0/deployments/{deployment_id}/'\
'predictions'.format(dr_prediction_host=DR_PREDICTION_HOST, deployment_id=DEPLOYMENT_ID)
response = requests.get('https://app.datarobot.com/api/v2/modelDeployments/'+DEPLOYMENT_ID+'/features/',
headers=DR_MODELING_HEADERS)
json_normalize(data=response.json()['data'])[['name', 'featureType', 'importance']]
# snowflake parameters
SNOW_ACCOUNT = my_creds.SNOW_ACCOUNT
SNOW_USER = my_creds.SNOW_USER
SNOW_PASS = my_creds.SNOW_PASS
SNOW_DB = 'TITANIC'
SNOW_SCHEMA = 'PUBLIC'
# create a connection
ctx = snowflake.connector.connect(
user=SNOW_USER,
password=SNOW_PASS,
account=SNOW_ACCOUNT,
database=SNOW_DB,
schema=SNOW_SCHEMA,
protocol='https'
)
# create a cursor
cur = ctx.cursor()
# execute sql
sql = "select passengerid, pclass, name, sex, age, sibsp, parch, fare, cabin, embarked " \
+ " from titanic.public.passengers"
cur.execute(sql)
# fetch results into dataframe
df = cur.fetch_pandas_all()
df.head()
predictions_response = requests.post(
url,
auth=(USERNAME, API_KEY),
data=df.to_csv(),
headers=headers,
# business key passed through
params={'passthroughColumns' : 'PASSENGERID'}
)
if predictions_response.status_code != 200:
print("error {status_code}: {content}".format(status_code=predictions_response.status_code, content=predictions_response.content))
sys.exit(-1)
# first 3 records json structure
predictions_response.json()['data'][0:3]
df_response = pd.DataFrame.from_dict(predictions_response.json())
df_response.head()
```
# Load raw json and flatten in Snowflake
```
ctx.cursor().execute('create or replace table passenger_scored_json(json_rec variant)')
df5 = df_response.head()
# this is not the proper way to insert data into snowflake, but is used for quick demo convenience.
# snowflake ingest should be done via snowflake stage objects.
for ind, row in df5.iterrows():
escaped = str(row['data']).replace("'", "''")
ctx.cursor().execute("insert into passenger_scored_json select parse_json('{rec}')".format(rec=escaped))
print(row['data'])
ctx.cursor().execute('create or replace table passenger_scored_flattened as \
select json_rec:passthroughValues.PASSENGERID::int as passengerid \
, json_rec:prediction::int as prediction \
, json_rec:predictionThreshold::numeric(10,9) as prediction_threshold \
, f.value:label as prediction_label \
, f.value:value as prediction_score \
from titanic.public.passenger_scored_json, table(flatten(json_rec:predictionValues)) f \
where f.value:label = 1')
sql = "select * from passenger_scored_flattened"
cur.execute(sql)
# fetch results into dataframe
df_new = cur.fetch_pandas_all()
df_new.head()
```
# Flatten in python instead
```
df_results = json_normalize(data=predictions_response.json()['data'], record_path='predictionValues',
meta = [['passthroughValues', 'PASSENGERID'], 'prediction', 'predictionThreshold'])
df_results = df_results[df_results['label'] == 1]
df_results.rename(columns={"passthroughValues.PASSENGERID": "PASSENGERID"}, inplace=True)
df_results.head()
```
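The `record_path`/`meta` mechanics of `json_normalize` can be seen on a minimal hand-built payload mirroring the shape of the prediction response (the values here are illustrative, not real predictions):

```python
import pandas as pd

# Minimal payload shaped like one record of the prediction API response
data = [{'passthroughValues': {'PASSENGERID': 1},
         'prediction': 0,
         'predictionValues': [{'label': 1, 'value': 0.2},
                              {'label': 0, 'value': 0.8}]}]
# One output row per entry of predictionValues, with the meta fields repeated
df = pd.json_normalize(data, record_path='predictionValues',
                       meta=[['passthroughValues', 'PASSENGERID'], 'prediction'])
print(df.columns.tolist())
# → ['label', 'value', 'passthroughValues.PASSENGERID', 'prediction']
```

Note that `pd.json_normalize` is the modern spelling (pandas >= 1.0) of the `pandas.io.json.json_normalize` import used above.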
# Client Side Batch Scoring Utility approach
https://github.com/datarobot/batch-scoring
The utility splits the input data into batched payload requests and scores them in parallel until the input file is fully processed (defaults: automatically sampled request size, 4 concurrent request threads).
```
import os
df.to_csv('input.csv', index=False)
os.system('rm output.csv')
batch_script_string = 'batch_scoring_deployment_aware \
--host="{host}" \
--user="{user}" \
--api_token="{api_token}" \
--out="output.csv" \
--datarobot_key="{datarobot_key}" \
--keep_cols="PASSENGERID" \
--max_prediction_explanations=3 \
{deployment_id} \
input.csv'.format(host=DR_PREDICTION_HOST, user=USERNAME, api_token=API_KEY, datarobot_key=DATAROBOT_KEY, deployment_id=DEPLOYMENT_ID)
os.system(batch_script_string)
df_output = pd.read_csv('output.csv')
df_output.head()
```
```
!unzip ./archive\ \(67\).zip
import pandas as pd
data = pd.read_csv('./names-by-nationality.csv')
data.head()
len(data)
data.isna().sum()
import json
def object_to_int(data,coloum):
info_dict = {}
all_info = []
index = -1
for info in data[coloum]:
if info not in info_dict:
index = index + 1
info_dict[info] = index
for info in data[coloum]:
all_info.append(info_dict[info])
with open(f'{coloum}.json','w') as json_file:
json.dump(info_dict,json_file)
return all_info,info_dict
nationality_info = object_to_int(data,'nationality')
data['nationality'] = nationality_info[0]
data.head()
!ls
sex_info = object_to_int(data,'sex')
data['sex'] = sex_info[0]
data.head()
data['sex'].value_counts()
data.head()
import matplotlib.pyplot as plt
pd.crosstab(data['sex'],data['nationality']).plot()
data.head()
from sklearn.tree import DecisionTreeClassifier,ExtraTreeClassifier
from sklearn.model_selection import *
from sklearn.metrics import *
from sklearn.ensemble import RandomForestClassifier,AdaBoostClassifier,BaggingClassifier,ExtraTreesClassifier,GradientBoostingClassifier,StackingClassifier,VotingClassifier
from sklearn.linear_model import RidgeClassifier,SGDClassifier,Ridge,RidgeClassifierCV,RidgeCV
from sklearn.naive_bayes import GaussianNB
import pickle
import os
def fit_and_calcuate_metrixs(models:dict,X_train,X_test,y_train,y_test):
    # Fit each model, score the held-out set, and collect accuracy/F1/precision/recall
    models_info = {}
for name,model in models.items():
print(name)
model = model.fit(X_train,y_train)
y_preds = model.predict(X_test)
info = {
'Accuracy':model.score(X_test,y_test),
'F1 Score':f1_score(y_test,y_preds,average='macro'),
'Precision':precision_score(y_test,y_preds,average='macro'),
'Recall':recall_score(y_test,y_preds,average='macro')
}
plot_confusion_matrix(model,X_test,y_test)
# plot_roc_curve(model,X_test,y_test)
models_info[name] = info
print('\n\n')
return models_info
def calcuate_metrixs(models:dict,X_train,X_test,y_train,y_test):
models_info = {}
for name,model in models.items():
print(name)
y_preds = model.predict(X_test)
info = {
'Accuracy':model.score(X_test,y_test),
'F1 Score':f1_score(y_test,y_preds,average='macro'),
'Precision':precision_score(y_test,y_preds,average='macro'),
'Recall':recall_score(y_test,y_preds,average='macro')
}
plot_confusion_matrix(model,X_test,y_test)
plot_roc_curve(model,X_test,y_test)
models_info[name] = info
print('\n\n')
return models_info
data
X = data['name']
y = data['sex']
from sklearn.feature_extraction.text import CountVectorizer,TfidfVectorizer
cv = CountVectorizer()
cv.fit(X)
X = cv.transform(X)
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25)
models = {
'GradientBoostingClassifier':GradientBoostingClassifier(),
'ExtraTreesClassifier':ExtraTreesClassifier(),
'BaggingClassifier':BaggingClassifier(),
'AdaBoostClassifier':AdaBoostClassifier(),
'RandomForestClassifier':RandomForestClassifier(),
'SGDClassifier':SGDClassifier(),
'RidgeClassifier':RidgeClassifier(),
'ExtraTreeClassifier':ExtraTreeClassifier(),
'DecisionTreeClassifier':DecisionTreeClassifier(),
}
models_clf_results = fit_and_calcuate_metrixs(models,X_train,X_test,y_train,y_test)
models_clf_results = pd.DataFrame(models_clf_results.values(),models_clf_results.keys())
models_clf_results
models_clf_results.plot.bar(figsize=(20,10))
```
| github_jupyter |
TSG094 - Grafana logs
=====================
Steps
-----
### Parameters
```
import re
tail_lines = 2000
pod = None # All
container = "grafana"
log_files = [ "/var/log/supervisor/log/grafana*.log" ]
expressions_to_analyze = []
```
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
try:
from kubernetes import client, config
from kubernetes.stream import stream
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
        except:
            from IPython.display import Markdown
            display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
            raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
except ImportError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Get tail for log
```
# Display the last 'tail_lines' of files in 'log_files' list
pods = api.list_namespaced_pod(namespace)
entries_for_analysis = []
for p in pods.items:
if pod is None or p.metadata.name == pod:
for c in p.spec.containers:
if container is None or c.name == container:
for log_file in log_files:
print (f"- LOGS: '{log_file}' for CONTAINER: '{c.name}' in POD: '{p.metadata.name}'")
try:
output = stream(api.connect_get_namespaced_pod_exec, p.metadata.name, namespace, command=['/bin/sh', '-c', f'tail -n {tail_lines} {log_file}'], container=c.name, stderr=True, stdout=True)
except Exception:
print (f"FAILED to get LOGS for CONTAINER: {c.name} in POD: {p.metadata.name}")
else:
for line in output.split('\n'):
for expression in expressions_to_analyze:
if expression.match(line):
entries_for_analysis.append(line)
print(line)
print("")
print(f"{len(entries_for_analysis)} log entries found for further analysis.")
```
### Analyze log entries and suggest relevant Troubleshooting Guides
```
# Analyze log entries and suggest further relevant troubleshooting guides
from IPython.display import Markdown
import os
import json
import requests
import ipykernel
import datetime
from urllib.parse import urljoin
from notebook import notebookapp
def get_notebook_name():
"""Return the full path of the jupyter notebook. Some runtimes (e.g. ADS)
have the kernel_id in the filename of the connection file. If so, the
notebook name at runtime can be determined using `list_running_servers`.
Other runtimes (e.g. azdata) do not have the kernel_id in the filename of
the connection file, therefore we are unable to establish the filename
"""
connection_file = os.path.basename(ipykernel.get_connection_file())
# If the runtime has the kernel_id in the connection filename, use it to
# get the real notebook name at runtime, otherwise, use the notebook
# filename from build time.
try:
kernel_id = connection_file.split('-', 1)[1].split('.')[0]
except:
pass
else:
for servers in list(notebookapp.list_running_servers()):
try:
response = requests.get(urljoin(servers['url'], 'api/sessions'), params={'token': servers.get('token', '')}, timeout=.01)
except:
pass
else:
for nn in json.loads(response.text):
if nn['kernel']['id'] == kernel_id:
return nn['path']
def load_json(filename):
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def get_notebook_rules():
"""Load the notebook rules from the metadata of this notebook (in the .ipynb file)"""
file_name = get_notebook_name()
    if file_name is None:
        return None
else:
j = load_json(file_name)
if "azdata" not in j["metadata"] or \
"expert" not in j["metadata"]["azdata"] or \
"log_analyzer_rules" not in j["metadata"]["azdata"]["expert"]:
return []
else:
return j["metadata"]["azdata"]["expert"]["log_analyzer_rules"]
rules = get_notebook_rules()
if rules is None:
print("")
print(f"Log Analysis only available when run in Azure Data Studio. Not available when run in azdata.")
else:
hints = 0
if len(rules) > 0:
for entry in entries_for_analysis:
for rule in rules:
if entry.find(rule[0]) != -1:
print (entry)
display(Markdown(f'HINT: Use [{rule[2]}]({rule[3]}) to resolve this issue.'))
hints = hints + 1
print("")
print(f"{len(entries_for_analysis)} log entries analyzed (using {len(rules)} rules). {hints} further troubleshooting hints made inline.")
print('Notebook execution complete.')
```
| github_jupyter |
# Strings
A string is a sequence of characters.
Computers do not deal with characters; they deal with numbers (binary). Even though you may see characters on your screen, internally they are stored and manipulated as combinations of 0's and 1's.
This conversion of a character to a number is called encoding, and the reverse process is decoding. ASCII and Unicode are two of the most popular encodings in use.
In Python, a string is a sequence of Unicode characters.
For more details about Unicode, see:
https://docs.python.org/3.3/howto/unicode.html
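The encode/decode round trip can be seen directly in Python: `str.encode` turns characters into bytes, and `bytes.decode` reverses it.

```python
text = "héllo"                     # a string of Unicode characters
encoded = text.encode("utf-8")     # encoding: characters -> bytes
print(encoded)                     # b'h\xc3\xa9llo'
decoded = encoded.decode("utf-8")  # decoding: bytes -> characters
print(decoded == text)             # True
```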
# How to create a string?
Strings can be created by enclosing characters inside a single quote or double quotes.
Even triple quotes can be used in Python but generally used to represent multiline strings and docstrings.
```
myString = 'Hello'
print(myString)
myString = "Hello"
print(myString)
myString = '''Hello'''
print(myString)
```
# How to access characters in a string?
We can access individual characters using indexing and a range of characters using slicing.
Index starts from 0.
Trying to access a character outside the index range will raise an IndexError.
The index must be an integer; using a float or any other type will result in a TypeError.
Python allows negative indexing for its sequences.
```
myString = "Hello"
#print first Character
print(myString[0])
#print last character using negative indexing
print(myString[-1])
#slicing 2nd to 5th character
print(myString[2:5])
```
If we try to access an index out of range or use a decimal number, we will get errors.
```
print(myString[15])
print(myString[1.5])
```
# How to change or delete a string ?
Strings are immutable. This means that elements of a string cannot be changed once the string has been assigned.
We can simply reassign different strings to the same name.
```
myString = "Hello"
myString[4] = 's' # strings are immutable
```
We cannot delete or remove characters from a string. But deleting the string entirely is possible using the keyword del.
```
del myString # delete complete string
print(myString)
```
# String Operations
# Concatenation
Joining of two or more strings into a single one is called concatenation.
The + operator does this in Python. Simply writing two string literals together also concatenates them.
The * operator can be used to repeat the string for a given number of times.
```
s1 = "Hello "
s2 = "Satish"
#concatenation of 2 strings
print(s1 + s2)
#repeat string n times
print(s1 * 3)
```
# Iterating Through String
```
count = 0
for l in "Hello World":
if l == 'o':
count += 1
print(count, ' letters found')
```
# String Membership Test
```
print('l' in 'Hello World') #in operator to test membership
print('or' in 'Hello World')
```
# String Methods
Some of the commonly used methods are lower(), upper(), join(), split(), find(), replace() etc
```
"Hello".lower()
"Hello".upper()
"This will split all words in a list".split()
' '.join(['This', 'will', 'split', 'all', 'words', 'in', 'a', 'list'])
"Good Morning".find("Mo")
s1 = "Bad morning"
s2 = s1.replace("Bad", "Good")
print(s1)
print(s2)
```
# Python Program to Check Whether a String is a Palindrome
```
myStr = "Madam"
#convert entire string to either lower or upper
myStr = myStr.lower()
#reverse string
revStr = reversed(myStr)
#check if the string is equal to its reverse
if list(myStr) == list(revStr):
print("Given String is palindrome")
else:
print("Given String is not palindrome")
```
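A common shorthand for the same check uses slicing with a step of -1, which reverses the string in one expression:

```python
myStr = "Madam".lower()
print(myStr == myStr[::-1])  # True
```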
# Python Program to Sort Words in Alphabetic Order
```
myStr = "python Program to Sort words in Alphabetic Order"
#breakdown the string into list of words
words = myStr.split()
#sort the list
words.sort()
#print the sorted words
for word in words:
print(word)
```
| github_jupyter |
# Pommerman Demo.
This notebook demonstrates how to train Pommerman agents. Please let us know at support@pommerman.com if you run into any issues.
```
import os
import sys
import numpy as np
from pommerman.agents import SimpleAgent, RandomAgent, PlayerAgent, BaseAgent
from pommerman.configs import ffa_v0_env
from pommerman.envs.v0 import Pomme
from pommerman.characters import Bomber
from pommerman import utility
```
# Random agents
The following code instantiates the environment with four random agents who take actions until the game is finished. (This will be a quick game.)
```
# Instantiate the environment
config = ffa_v0_env()
env = Pomme(**config["env_kwargs"])
# Add four random agents
agents = {}
for agent_id in range(4):
agents[agent_id] = RandomAgent(config["agent"](agent_id, config["game_type"]))
env.set_agents(list(agents.values()))
env.set_init_game_state(None)
# Seed and reset the environment
env.seed(0)
obs = env.reset()
# Run the random agents until we're done
done = False
while not done:
env.render()
actions = env.act(obs)
obs, reward, done, info = env.step(actions)
env.render(close=True)
env.close()
print(info)
```
# Human Agents
The following code runs the environment with 3 random agents and one agent with human input (use the arrow keys on your keyboard). This can also be called on the command line with:
`python run_battle.py --agents=player::arrows,random::null,random::null,random::null --config=PommeFFACompetition-v0`
You can also run this with SimpleAgents by executing:
`python run_battle.py --agents=player::arrows,test::agents.SimpleAgent,test::agents.SimpleAgent,test::agents.SimpleAgent --config=PommeFFACompetition-v0`
```
# Instantiate the environment
config = ffa_v0_env()
env = Pomme(**config["env_kwargs"])
# Add 3 random agents
agents = {}
for agent_id in range(3):
agents[agent_id] = RandomAgent(config["agent"](agent_id, config["game_type"]))
# Add human agent
agents[3] = PlayerAgent(config["agent"](agent_id, config["game_type"]), "arrows")
env.set_agents(list(agents.values()))
env.set_init_game_state(None)
# Seed and reset the environment
env.seed(0)
obs = env.reset()
# Run the agents until we're done
done = False
while not done:
env.render()
actions = env.act(obs)
obs, reward, done, info = env.step(actions)
env.render(close=True)
env.close()
# Print the result
print(info)
```
# Training an Agent
The following code uses Tensorforce to train a PPO agent. This is in the train_with_tensorforce.py module as well.
```
# Make sure you have tensorforce installed: pip install tensorforce
from tensorforce.agents import PPOAgent
from tensorforce.execution import Runner
from tensorforce.contrib.openai_gym import OpenAIGym
def make_np_float(feature):
return np.array(feature).astype(np.float32)
def featurize(obs):
board = obs["board"].reshape(-1).astype(np.float32)
bomb_blast_strength = obs["bomb_blast_strength"].reshape(-1).astype(np.float32)
bomb_life = obs["bomb_life"].reshape(-1).astype(np.float32)
position = make_np_float(obs["position"])
ammo = make_np_float([obs["ammo"]])
blast_strength = make_np_float([obs["blast_strength"]])
can_kick = make_np_float([obs["can_kick"]])
teammate = obs["teammate"]
if teammate is not None:
teammate = teammate.value
else:
teammate = -1
teammate = make_np_float([teammate])
enemies = obs["enemies"]
enemies = [e.value for e in enemies]
if len(enemies) < 3:
enemies = enemies + [-1]*(3 - len(enemies))
enemies = make_np_float(enemies)
return np.concatenate((board, bomb_blast_strength, bomb_life, position, ammo, blast_strength, can_kick, teammate, enemies))
class TensorforceAgent(BaseAgent):
def act(self, obs, action_space):
pass
# Instantiate the environment
config = ffa_v0_env()
env = Pomme(**config["env_kwargs"])
env.seed(0)
# Create a Proximal Policy Optimization agent
agent = PPOAgent(
states=dict(type='float', shape=env.observation_space.shape),
actions=dict(type='int', num_actions=env.action_space.n),
network=[
dict(type='dense', size=64),
dict(type='dense', size=64)
],
batching_capacity=1000,
step_optimizer=dict(
type='adam',
learning_rate=1e-4
)
)
# Add 3 random agents
agents = []
for agent_id in range(3):
agents.append(SimpleAgent(config["agent"](agent_id, config["game_type"])))
# Add TensorforceAgent
agent_id += 1
agents.append(TensorforceAgent(config["agent"](agent_id, config["game_type"])))
env.set_agents(agents)
env.set_training_agent(agents[-1].agent_id)
env.set_init_game_state(None)
class WrappedEnv(OpenAIGym):
def __init__(self, gym, visualize=False):
self.gym = gym
self.visualize = visualize
def execute(self, actions):
if self.visualize:
self.gym.render()
obs = self.gym.get_observations()
all_actions = self.gym.act(obs)
all_actions.insert(self.gym.training_agent, actions)
state, reward, terminal, _ = self.gym.step(all_actions)
agent_state = featurize(state[self.gym.training_agent])
agent_reward = reward[self.gym.training_agent]
return agent_state, terminal, agent_reward
def reset(self):
obs = self.gym.reset()
agent_obs = featurize(obs[3])
return agent_obs
# Instantiate and run the environment for 5 episodes.
wrapped_env = WrappedEnv(env, True)
runner = Runner(agent=agent, environment=wrapped_env)
runner.run(episodes=5, max_episode_timesteps=2000)
print("Stats: ", runner.episode_rewards, runner.episode_timesteps, runner.episode_times)
try:
runner.close()
except AttributeError as e:
pass
```
| github_jupyter |
## DoWhy example on the IHDP (Infant Health and Development Program) dataset
```
# importing required libraries
import os, sys
sys.path.append(os.path.abspath("../../"))
import dowhy
from dowhy.do_why import CausalModel
import pandas as pd
import numpy as np
```
#### Loading Data
```
data= pd.read_csv("https://raw.githubusercontent.com/AMLab-Amsterdam/CEVAE/master/datasets/IHDP/csv/ihdp_npci_1.csv", header = None)
col = ["treatment", "y_factual", "y_cfactual", "mu0", "mu1"]
for i in range(1,26):
col.append("x"+str(i))
data.columns = col
data.head()
```
#### 1.Model
```
# Create a causal model from the data and given common causes.
# Build the list of common causes directly; joining "x1+...+x25+" and splitting
# on '+' would leave a trailing empty string in the list.
common_causes = ["x" + str(i) for i in range(1, 26)]
model = CausalModel(
        data = data,
        treatment='treatment',
        outcome='y_factual',
        common_causes=common_causes
        )
```
#### 2.Identify
```
#Identify the causal effect
identified_estimand = model.identify_effect()
```
#### 3. Estimate (using different methods)
#### 3.1 Using Linear Regression
```
# Estimate the causal effect and compare it with Average Treatment Effect
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.linear_regression", test_significance=True
)
print(estimate)
print("Causal Estimate is " + str(estimate.value))
data_1 = data[data["treatment"]==1]
data_0 = data[data["treatment"]==0]
print("ATE", np.mean(data_1["y_factual"])- np.mean(data_0["y_factual"]))
```
#### 3.2 Using Propensity Score Matching
```
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_matching"
)
print("Causal Estimate is " + str(estimate.value))
print("ATE", np.mean(data_1["y_factual"])- np.mean(data_0["y_factual"]))
```
#### 3.3 Using Propensity Score Stratification
```
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_stratification"
)
print("Causal Estimate is " + str(estimate.value))
print("ATE", np.mean(data_1["y_factual"])- np.mean(data_0["y_factual"]))
```
#### 3.4 Using Propensity Score Weighting (IPTW)
```
identified_estimand = model.identify_effect()
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_weighting"
)
print("Causal Estimate is " + str(estimate.value))
print("ATE", np.mean(data_1["y_factual"])- np.mean(data_0["y_factual"]))
```
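For intuition, propensity score weighting (IPTW) fits a model of treatment given the confounders and reweights each unit by the inverse probability of the treatment it actually received. Below is a minimal sketch on synthetic data, using scikit-learn directly rather than DoWhy internals; the data-generating process and its true effect of 2.0 are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))                         # confounders
t = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)  # treatment depends on X
y = 2.0 * t + X[:, 0] + rng.normal(size=2000)          # true effect = 2.0

# Propensity scores: P(treatment = 1 | X)
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
# Inverse-probability weights for treated and control units
w = t / ps + (1 - t) / (1 - ps)
# Weighted difference in mean outcomes estimates the ATE
ate = np.average(y[t == 1], weights=w[t == 1]) - np.average(y[t == 0], weights=w[t == 0])
print(round(ate, 2))  # close to the true effect of 2.0
```

Note that the unweighted difference of means would be biased upward here, because units with large `X[:, 0]` are both more likely to be treated and have higher outcomes; the weighting removes that imbalance.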
#### 4. Refute
##### Refute the obtained estimate using multiple robustness checks.
##### 4.1 Adding a random common cause
```
refute_results=model.refute_estimate(identified_estimand, estimate,
method_name="random_common_cause")
print(refute_results)
```
##### 4.2 Using a placebo treatment
```
res_placebo=model.refute_estimate(identified_estimand, estimate,
method_name="placebo_treatment_refuter", placebo_type="permute")
print(res_placebo)
```
#### 4.3 Data Subset Refuter
```
res_subset=model.refute_estimate(identified_estimand, estimate,
method_name="data_subset_refuter", subset_fraction=0.9)
print(res_subset)
```
| github_jupyter |
# Notebook for testing performance of Visual Recognition Custom Classifiers
[Watson Developer Cloud](https://www.ibm.com/watsondevelopercloud) is a platform of cognitive services that leverage machine learning techniques to help partners and clients solve a variety of business problems. Furthermore, several of the WDC services fall under the **supervised learning** suite of machine learning algorithms, that is, algorithms that learn by example. This begs the questions: "How many examples should we provide?" and "When is my solution ready for prime time?"
It is critical to understand that training a machine learning solution is an iterative process where it is important to continually improve the solution by providing new examples and measuring the performance of the trained solution. In this notebook, we show how you can compute important Machine Learning metrics (accuracy, precision, recall, confusion matrix) to judge the performance of your solution. For more details on these various metrics, please consult the **[Is Your Chatbot Ready for Prime-Time?](https://developer.ibm.com/dwblog/2016/chatbot-cognitive-performance-metrics-accuracy-precision-recall-confusion-matrix/)** blog.
The notebook assumes you have already created a Watson [Visual Recognition](https://www.ibm.com/watson/developercloud/visual-recognition.html) instance and trained [custom classifiers](https://www.ibm.com/watson/developercloud/doc/visual-recognition/tutorial-custom-classifier.html).
To leverage this notebook, you need to provide the following information:
* Credentials for your Visual Recognition instance (apikey)
* id for your trained classifier (this is returned when you train your Visual Recognition custom classifier)
* csv file with your test images (paths to images on your local disk) and corresponding class labels
* results csv file to write the results to (true vs. predicted class labels)
* csv file to write confusion matrix to
Note that the input test csv file should have a header with the fields **image** and **class**.
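A minimal test file in that layout (the image paths below are placeholders, not real files) can be written with the standard csv module:

```python
import csv

# Placeholder rows: one labeled image per line, header "image,class"
rows = [
    {"image": "/path/to/beach1.jpg", "class": "beach"},
    {"image": "/path/to/city1.jpg", "class": "city"},
]
with open("test_images.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image", "class"])
    writer.writeheader()
    writer.writerows(rows)
```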
```
# Only run this the first time if pandas_ml is not installed on your machine
!pip install pandas_ml
# latest version of watson_developer_cloud (1.0.0) as of November 20, 2017
!pip install -I watson_developer_cloud==1.0.0
# previous version of watson_developer_cloud
#Import utilities
import json
import csv
import sys
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
import pandas_ml
from pandas_ml import ConfusionMatrix
from watson_developer_cloud import VisualRecognitionV3
```
Provide the path to the parms file which includes credentials to access your VR service as well as the input
test csv file and the output csv files to write the output results to.
```
# Sample parms file data
#{
# "url": "https://gateway-a.watsonplatform.net/visual-recognition/api",
# "apikey":"YOUR_VISUAL_RECOGNITION_APIKEY",
# "vr_id":"YOUR_VISUAL_RECOGNITION_CUSTOM_CLASSIFIER_ID",
# "test_csv_file": "COMPLETE_PATH_TO_YOUR_TEST_CSV_FILE",
# "results_csv_file": "COMPLETE PATH TO RESULTS FILE (any file you can write to)",
# "confmatrix_csv_file": "COMPLETE PATH TO CONFUSION MATRIX FILE (any file you can write to)"
#}
# Provide complete path to the file which includes all required parms
# A sample parms file is included (example_VR_parms.json)
vrParmsFile = 'COMPLETE PATH TO YOUR PARMS FILE'
parms = ''
with open(vrParmsFile) as parmFile:
parms = json.load(parmFile)
url=parms['url']
apikey=parms['apikey']
vr_id=parms['vr_id']
test_csv_file=parms['test_csv_file']
results_csv_file=parms['results_csv_file']
confmatrix_csv_file=parms['confmatrix_csv_file']
json.dumps(parms)
# Create an object for your Visual Recognition instance
visual_recognition = VisualRecognitionV3('2016-05-20', api_key=apikey)
```
Define useful methods to classify using custom VR classifier.
```
# Given an image and a pointer to VR instance and classifierID, get back VR response
def getVRresponse(vr_instance,classifierID,image_path):
with open(image_path, 'rb') as image_file:
parameters = json.dumps({'threshold':0.01, 'classifier_ids': [classifierID]})
#parameters = json.dumps({'threshold':0.01, 'classifier_ids': ['travel_1977348895','travel_2076475268','default']})
image_results = vr_instance.classify(images_file=image_file,
parameters = parameters)
# For our purposes, we assume each call is to classify one image
# Although the Visual Recognition classify endpoint accepts as input
# a .zip file, we need each image to be labeled with the correct class
classList = []
for classifier in image_results['images'][0]['classifiers']:
if classifier['classifier_id'] == vr_id:
classList = classifier['classes']
break
# Sort the returned classes by score
#print("classList: ", classList)
sorted_classList = sorted(classList, key=lambda k: k.get('score', 0), reverse=True)
#print("sortedList: ", sorted_classList)
return sorted_classList
# Process multiple images (provided via csv file) in batch. Effectively, read the csv file and, for each
# image, get the VR response. Aggregate and return results.
def batchVR(vr_instance,classifierID,csvfile):
test_classes=[]
vr_predict_classes=[]
vr_predict_confidence=[]
images=[]
i=0
with open(csvfile, 'r') as csvfile:
csvReader=csv.DictReader(csvfile)
for row in csvReader:
test_classes.append(row['class'])
vr_response = getVRresponse(vr_instance,classifierID,row['image'])
vr_predict_classes.append(vr_response[0]['class'])
vr_predict_confidence.append(vr_response[0]['score'])
images.append(row['image'])
i = i+1
if(i%250 == 0):
print("")
print("Processed ", i, " records")
if(i%10 == 0):
sys.stdout.write('.')
print("")
print("Finished processing ", i, " records")
return test_classes, vr_predict_classes, vr_predict_confidence, images
# Plot confusion matrix as an image
def plot_conf_matrix(conf_matrix):
plt.figure()
plt.imshow(conf_matrix)
plt.show()
# Print confusion matrix to a csv file
def confmatrix2csv(conf_matrix,labels,csvfile):
with open(csvfile, 'w') as csvfile:
csvWriter = csv.writer(csvfile)
row=list(labels)
row.insert(0,"")
csvWriter.writerow(row)
for i in range(conf_matrix.shape[0]):
row=list(conf_matrix[i])
row.insert(0,labels[i])
csvWriter.writerow(row)
# List of all custom classifiers in your visual recognition service
#print(json.dumps(visual_recognition.list_classifiers(), indent=2))
# This is an optional step to quickly test response from Visual Recognition for a given image
##testImage='COMPLETE PATH TO YOUR TEST IMAGE'
##classifierList = "'" + vr_id + "'" + "," + "'" + "default" + "'"
##results = getVRresponse(visual_recognition,vr_id,testImage)
##print(json.dumps(results, indent=2))
```
Call Visual Recognition on the specified csv file and collect results.
```
test_classes,vr_predict_classes,vr_predict_conf,images=batchVR(visual_recognition,vr_id,test_csv_file)
# print results to csv file including the image path, the correct label,
# the predicted label and the confidence reported by Visual Recognition.
csvfileOut=results_csv_file
with open(csvfileOut, 'w') as csvOut:
outrow=['image','true class','VR Predicted class','Confidence']
csvWriter = csv.writer(csvOut,dialect='excel')
csvWriter.writerow(outrow)
for i in range(len(images)):
outrow=[images[i],test_classes[i],vr_predict_classes[i],str(vr_predict_conf[i])]
csvWriter.writerow(outrow)
# Compute confusion matrix
labels=list(set(test_classes))
vr_confusion_matrix = confusion_matrix(test_classes, vr_predict_classes, labels)
vrConfMatrix = ConfusionMatrix(test_classes, vr_predict_classes)
# Print out confusion matrix with labels to csv file
confmatrix2csv(vr_confusion_matrix,labels,confmatrix_csv_file)
%matplotlib inline
vrConfMatrix.plot()
# Compute accuracy of classification
acc=accuracy_score(test_classes, vr_predict_classes)
print('Classification Accuracy: ', acc)
# print precision, recall and f1-scores for the different classes
print(classification_report(test_classes, vr_predict_classes, labels=labels))
#Optional if you would like each of these metrics separately
#[precision,recall,fscore,support]=precision_recall_fscore_support(test_classes, vr_predict_classes, labels=labels)
#print("precision: ", precision)
#print("recall: ", recall)
#print("f1 score: ", fscore)
#print("support: ", support)
```
| github_jupyter |
### Show distribution of heights and Z time from the most recent ephys round of animals
```
import math
import statistics
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import matplotlib.lines as mlines
import matplotlib.patches as mpatches
from numpy import median
from scipy.stats import ranksums
import scipy.stats
import os
import fnmatch
figures = 'C:/Users/Fabian/Desktop/Analysis/Round3_FS03_FS06/Figures/'
def average_Z_time_period(session_list):
    # Collect the rear count for every session; the median is computed for reference
    total = []
    for session in session_list:
        total.append(Z_periods(session))
    print(total)
    average = statistics.median(total)  # requires `import statistics`
    #plt.bar(range(len(total)),total)
    return total
def Z_periods(positions):
    # Count rising edges: each transition from below 0.62 m to above it is one rear
    high = 0
    last = .60
    z = positions[2]
    for height in z:
        if height > .62 and last < .62:
            high += 1
        last = height
    return high
def Z_time(positions):
    # Percentage of samples in the session spent above 0.62 m
    high = 0
    z = positions[2]
    for height in z:
        if height > .62:
            high += 1
    percent = high / z.size * 100
    return percent
# Base project path on the storage server: '//10.153.170.3/storage2/fabian/data/project/' + rat_ID
Animals=['//10.153.170.3/storage2/fabian/data/raw/FS03/Event_files_FS03/','//10.153.170.3/storage2/fabian/data/raw/FS04/Event_files_FS04/']
Names = ['FS03','FS04']
def All_new_animals(Animal_names):
fig, ax = plt.subplots(dpi= 100)
n=2
for animal_ID in Animal_names:
n+=1
height_periods = []
result=pd.DataFrame()
for dirpath, dirnames, files in os.walk(animal_ID, topdown=True):
for metadata in files:
if fnmatch.fnmatch(metadata, 'position_*'):
k=(animal_ID+'/'+metadata)
day = pd.read_csv(k,sep=" ", header=None,engine='python')
height_periods.append(Z_periods(day))
X_markers = range(len(height_periods))
if n==3:
ax.bar(X_markers,height_periods, label= '%s' %Names[0],alpha=.5)
else:
ax.bar(X_markers,height_periods, label= '%s' %Names[1],alpha=.5)
#ax.bar(range(len(list_of_days)),average_Z_time_period(list_of_days),label= 'old FS1',alpha=.2)
#ax.bar(range(len(list_of_days2)),average_Z_time_period(list_of_days2),label= 'old FS2',alpha=.2)
ax.set_ylabel('Rears')
ax.set_xlabel('Session')
ax.set_title('Rear periods during electrophysiology only')
ax.legend()
#ax.set_xticklabels(X_markers)
plt.savefig(figures+'Number_of_rears_3rd_batch.png', dpi = 300)
All_new_animals(Animals)
Animals=['//10.153.170.3/storage2/fabian/data/raw/FS03/Event_files_FS03/','//10.153.170.3/storage2/fabian/data/raw/FS04/Event_files_FS04/']
Names = ['FS03','FS04']
def All_new_animals(Animal_names):
fig, ax = plt.subplots(dpi= 100)
n=2
for animal_ID in Animal_names:
n+=1
height_periods = []
result=pd.DataFrame()
for dirpath, dirnames, files in os.walk(animal_ID, topdown=True):
for metadata in files:
if fnmatch.fnmatch(metadata, 'position_*'):
k=(animal_ID+'/'+metadata)
day = pd.read_csv(k,sep=" ", header=None,engine='python')
height_periods.append(Z_periods(day))
X_markers = range(len(height_periods))
if n==3:
ax.bar(X_markers,height_periods, label= '%s' %Names[0],alpha=.5)
else:
ax.bar(X_markers,height_periods, label= '%s' %Names[1],alpha=.5)
#ax.bar(range(len(list_of_days)),average_Z_time_period(list_of_days),label= 'old FS1',alpha=.2)
#ax.bar(range(len(list_of_days2)),average_Z_time_period(list_of_days2),label= 'old FS2',alpha=.2)
ax.set_ylabel('Rears')
ax.set_xlabel('Session')
ax.set_title('Rear periods during electrophysiology only')
ax.legend()
#ax.set_xticklabels(X_markers)
plt.savefig(figures+'Number_of_rears_in_highzone_ephys_only.png', dpi = 300)
All_new_animals(Animals)
Animals=['//10.153.170.3/storage2/fabian/data/raw/FS03/Event_files_FS03/','//10.153.170.3/storage2/fabian/data/raw/FS04/Event_files_FS04/']
Names = ['FS03','FS04']
def All_new_animals_height(Animal_names):
plt.figure(figsize=(20,20))
plt.ylim(.4, .8)
plt.title('Distribution of Heights per sessions - electrophysiology FS03 and FS04')
plt.ylabel("Animal height")
n=2
for animal_ID in Animal_names:
n+=1
height_time = []
result=pd.DataFrame()
for dirpath, dirnames, files in os.walk(animal_ID, topdown=True):
for metadata in files:
if fnmatch.fnmatch(metadata, 'position_*'):
k=(animal_ID+'/'+metadata)
day = pd.read_csv(k,sep=" ", header=None,engine='python')
height_time.append(Z_periods(day))
sns.distplot(day[2],vertical=True)
plt.savefig(figures+'FS3_6_Distribution of Heights per sessions_.png', dpi = 200)
All_new_animals_height(Animals)
Animals=['//10.153.170.3/storage2/fabian/data/raw/FS03/Event_files_FS03/','//10.153.170.3/storage2/fabian/data/raw/FS04/Event_files_FS04/']
Names = ['FS03','FS04']
def All_new_animals_height(Animal_names):
fig, ax = plt.subplots(dpi= 300)
ax.set_ylabel('percent of time in session rearing - electrophysiology only')
ax.set_xlabel('Session')
ax.set_title('Percent time above .62m')
n=2
for animal_ID in Animal_names:
n+=1
height_time = []
result=pd.DataFrame()
for dirpath, dirnames, files in os.walk(animal_ID, topdown=True):
for metadata in files:
if fnmatch.fnmatch(metadata, 'position_*'):
k=(animal_ID+'/'+metadata)
day = pd.read_csv(k,sep=" ", header=None,engine='python')
height_time.append(Z_time(day))
if n==3:
ax.plot(height_time, label= '%s' %Names[0])
else:
ax.plot(height_time, label= '%s' %Names[1])
ax.legend()
#ax.set_xticklabels(range(len(height_time)))
plt.savefig(figures+'Percent_time_in_highzone - electrophysiology only.png', dpi = 200)
All_new_animals_height(Animals)
```
```
from pathflowai.utils import load_sql_df
import torch
import pickle
import os, sys, glob
import argparse
import copy
import numpy as np
import pandas as pd
import umap, numba
import fire
from collections import Counter
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import f1_score
from torch import nn
import torch.nn.functional as F
from torch_cluster import knn_graph
from torch_geometric.data import Data, InMemoryDataset, DataLoader
from torch_geometric.nn import GCNConv, GATConv, DeepGraphInfomax, SAGEConv, DenseGraphConv, GINEConv, APPNP
from torch_geometric.utils import train_test_split_edges, to_dense_batch, to_dense_adj, dense_to_sparse, dropout_adj
from torch_geometric.utils.convert import to_networkx
from dgm.dgm import DGM
from dgm.plotting import *
from dgm.utils import *
from dgm.models import GraphClassifier, DGILearner
os.environ['CUDA_VISIBLE_DEVICES']="0"
EPS = 1e-15
class GCNNet(torch.nn.Module):
def __init__(self, inp_dim, out_dim, hidden_topology=[32,64,128,128], p=0.5, p2=0.1, drop_each=True):
super(GCNNet, self).__init__()
self.out_dim=out_dim
self.convs = nn.ModuleList([GATConv(inp_dim, hidden_topology[0])]+[GATConv(hidden_topology[i],hidden_topology[i+1]) for i in range(len(hidden_topology[:-1]))])
self.drop_edge = lambda edge_index: dropout_adj(edge_index,p=p2)[0]
self.dropout = nn.Dropout(p)
self.fc = nn.Linear(hidden_topology[-1], out_dim)
self.drop_each=drop_each
def forward(self, x, edge_index, edge_attr=None):
for conv in self.convs:
if self.drop_each and self.training: edge_index=self.drop_edge(edge_index)
x = F.relu(conv(x, edge_index, edge_attr))
if self.training:
x = self.dropout(x)
x = self.fc(x)
return x
class GCNFeatures(torch.nn.Module):
def __init__(self, gcn, bayes=False, p=0.05, p2=0.1):
super(GCNFeatures, self).__init__()
self.gcn=gcn
self.drop_each=bayes
self.gcn.drop_edge = lambda edge_index: dropout_adj(edge_index,p=p2)[0]
self.gcn.dropout = nn.Dropout(p)
def forward(self, x, edge_index, edge_attr=None):
for i,conv in enumerate(self.gcn.convs):
if self.drop_each: edge_index=self.gcn.drop_edge(edge_index)
x = conv(x, edge_index, edge_attr)
if i+1<len(self.gcn.convs):
x=F.relu(x)
if self.drop_each:
x = self.gcn.dropout(x)
y = F.softmax(self.gcn.fc(F.relu(x)), dim=1)
return x,y
def extract_features(cv_split=2,
graph_data='datasets/graph_dataset_no_pretrain.pkl',
cv_splits='cv_splits/cv_splits.pkl',
models_dir="models_no_pretrain/",
out_dir='predictions_no_pretrain',
hidden_topology=[32,64,128,128],
p=0.5,
p2=0.3,
n_posterior=50
):
# prep data
datasets=pickle.load(open(graph_data,'rb'))
cv_splits=pickle.load(open(cv_splits,'rb'))[cv_split]
train_dataset=[datasets['graph_dataset'][i] for i in cv_splits['train_idx']]
val_dataset=[datasets['graph_dataset'][i] for i in np.hstack((cv_splits['val_idx'],cv_splits['test_idx']))]#consider adding val_idx to help optimize
# load model
model=GCNNet(datasets['graph_dataset'][0].x.shape[1],datasets['df']['annotation'].nunique(),hidden_topology=hidden_topology,p=p,p2=p2)
model=model.cuda()
# load previous save
model.load_state_dict(torch.load(os.path.join(models_dir,f"{cv_split}.model.pth")))
# dataloaders
dataloaders={}
dataloaders['train']=DataLoader(train_dataset,shuffle=True)
dataloaders['val']=DataLoader(val_dataset,shuffle=False)
dataloaders['warmup']=DataLoader(train_dataset,shuffle=False)
train_loader=dataloaders['warmup']
# uncertainty test
model.eval()
feature_extractor=GCNFeatures(model,bayes=True,p=p,p2=p2).cuda()
graphs=[]
for i,data in enumerate(dataloaders['val']):
with torch.no_grad():
graph = to_networkx(data).to_undirected()
model.train(False)
x=data.x.cuda()
xy=data.pos.numpy()
edge_index=data.edge_index.cuda()
y=data.y.numpy()
preds=torch.stack([feature_extractor(x,edge_index)[1] for j in range(n_posterior)]).cpu().numpy()
graphs.append(dict(y=y,G=graph,xy=xy,y_pred_posterior=preds.mean(0),y_std=preds.std(0)))
del x,edge_index
model.eval()
feature_extractor=GCNFeatures(model,bayes=False).cuda()
for i,data in enumerate(dataloaders['val']):
with torch.no_grad():
graph = to_networkx(data).to_undirected()
model.train(False)
x=data.x.cuda()
xy=data.pos.numpy()
edge_index=data.edge_index.cuda()
y=data.y.numpy()
preds=feature_extractor(x,edge_index)
z,y_pred=preds[0].detach().cpu().numpy(),preds[1].detach().cpu().numpy()
graphs[i].update(dict(z=z,y_pred=y_pred))
del x,edge_index
torch.save(graphs,os.path.join(out_dir,f"{cv_split}.predictions.pth"))
class Commands(object):
def __init__(self):
pass
def extract_features(self,cv_split=2,
graph_data='datasets/graph_dataset_no_pretrain.pkl',
cv_splits='cv_splits/cv_splits.pkl',
models_dir="models_no_pretrain/",
out_dir='predictions_no_pretrain',
hidden_topology=[32,64,128,128],
p=0.5,
p2=0.3,
n_posterior=50
):
extract_features(cv_split,
graph_data,
cv_splits,
models_dir,
out_dir,
hidden_topology,
p,
p2,
n_posterior)
your_args=dict(cv_split=2,
graph_data='datasets/graph_dataset.pkl',
cv_splits='cv_splits/cv_splits.pkl',
models_dir="models/",
out_dir='predictions',
hidden_topology=[32,64,128,128],
p=0.5,
p2=0.3,
n_posterior=50)
Commands().extract_features(**your_args)
```
```
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all' # default is 'last_expr'
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('/data/home/marmot/camtrap/PyCharm/CameraTraps-benchmark')
sys.path.append('/data/home/marmot/camtrap/PyCharm/CameraTraps-benchmark/detection/detector_eval')
import json
import os
from collections import defaultdict
import numpy as np
from detection.detector_eval import detector_eval
from data_management import cct_json_utils
from visualization.visualization_utils import plot_precision_recall_curve
detection_results_dir = '/home/marmot/mnt/ai4edevshare/api_outputs/Benchmark/190904'
results_path = [os.path.join(detection_results_dir, f) for f in os.listdir(detection_results_dir)]
results_path
num_gt_classes = 2
```
- Ground truth: `[x, y, width, height]`
- API output: `[x_min, y_min, width_of_box, height_of_box]`
- TF format: `[y_min, x_min, y_max, x_max]`
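Converting between these layouts is a simple index shuffle. A small sketch below; the helper name `api_to_tf` is ours for illustration and is not part of the evaluation code:

```python
def api_to_tf(box):
    """Convert API output [x_min, y_min, width, height] to TF [y_min, x_min, y_max, x_max]."""
    x_min, y_min, w, h = box
    return [y_min, x_min, y_min + h, x_min + w]

# Ground truth uses the same [x, y, width, height] layout, so it converts identically.
print(api_to_tf([10, 20, 30, 40]))  # → [20, 10, 60, 40]
```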
```
def show_metrics(per_cat_metrics):
for cat, metrics in per_cat_metrics.items():
print('\n' + str(cat))
print('Number of gt:', metrics['num_gt'])
print('Average precision', metrics['average_precision'])
mAP_from_cats = detector_eval.find_mAP(per_cat_metrics)
print('\nmAP as the average of AP across the {} categories is {:.4f}'.format(num_gt_classes, mAP_from_cats))
precision_at_08_recall = detector_eval.find_precision_at_recall(per_cat_metrics[1]['precision'], per_cat_metrics[1]['recall'],
per_cat_metrics[1]['scores'],
recall_level=0.8)
print('Precision at 0.8 recall:', precision_at_08_recall)
plot_precision_recall_curve(per_cat_metrics[1]['precision'], per_cat_metrics[1]['recall'], title='bbox level')
empty_threshold = 0.5
```
# Train on SS S1, val on SS S1
## bbox
```
file_prefix = 'SER/'
results_path = '/home/marmot/mnt/ai4edevshare/api_outputs/Benchmark/190904/948_detections_train_ss1_val_ss1_20190908092426.json'
gt_db_path = '/beaver_disk/camtrap/ss_season1/benchmark/SnapshotSerengetiBboxesS01_20190903_val.json'
detection_res = make_detection_res(results_path, file_prefix=file_prefix)
gt_indexed = get_gt_db(gt_db_path)
per_image_gts, per_image_detections = detector_eval.get_per_image_gts_and_detections(gt_indexed, detection_res)
per_cat_metrics = detector_eval.compute_precision_recall_bbox(per_image_detections, per_image_gts, num_gt_classes,
matching_iou_threshold=0.5)
show_metrics(per_cat_metrics)
```
## empty vs non-empty
```
accuracy = detector_eval.compute_emptiness_accuracy(gt_indexed, detection_res, threshold=empty_threshold)
print('Accuracy:', accuracy)
```
# Train on SS S1, val on CCT-20
## bbox
```
file_prefix = 'cct_images/'
results_path = '/home/marmot/mnt/ai4edevshare/api_outputs/Benchmark/190904/6704_detections_train_ss1_val_cct20_20190908092921.json'
gt_db_path = '/beaver_disk/camtrap/caltech/benchmark/cct-20/caltech-20_bboxes_20190904_val.json'
detection_res = make_detection_res(results_path, file_prefix=file_prefix)
gt_indexed = get_gt_db(gt_db_path)
per_image_gts, per_image_detections = detector_eval.get_per_image_gts_and_detections(gt_indexed, detection_res)
per_cat_metrics = detector_eval.compute_precision_recall_bbox(per_image_detections, per_image_gts, num_gt_classes,
matching_iou_threshold=0.5)
show_metrics(per_cat_metrics)
```
## empty vs non-empty
```
accuracy = detector_eval.compute_emptiness_accuracy(gt_indexed, detection_res, threshold=empty_threshold)
print('Accuracy:', accuracy)
```
# Train on CCT-20, val on CCT-20
## bbox
```
file_prefix = 'cct_images/'
results_path = '/home/marmot/mnt/ai4edevshare/api_outputs/Benchmark/190904/8506_detections_train_cct_20_val_cct20_20190908093026.json'
gt_db_path = '/beaver_disk/camtrap/caltech/benchmark/cct-20/caltech-20_bboxes_20190904_val.json'
detection_res = make_detection_res(results_path, file_prefix=file_prefix)
gt_indexed = get_gt_db(gt_db_path)
per_image_gts, per_image_detections = detector_eval.get_per_image_gts_and_detections(gt_indexed, detection_res)
per_cat_metrics = detector_eval.compute_precision_recall_bbox(per_image_detections, per_image_gts, num_gt_classes,
matching_iou_threshold=0.5)
show_metrics(per_cat_metrics)
```
## empty vs non-empty
```
accuracy = detector_eval.compute_emptiness_accuracy(gt_indexed, detection_res, threshold=empty_threshold)
print('Accuracy:', accuracy)
```
# Train on CCT-20, val on SS S1
## bbox
```
file_prefix = 'SER/'
results_path = '/home/marmot/mnt/ai4edevshare/api_outputs/Benchmark/190904/5618_detections_train_cct_20_val_ss1_20190908093215.json'
gt_db_path = '/beaver_disk/camtrap/ss_season1/benchmark/SnapshotSerengetiBboxesS01_20190903_val.json'
detection_res = make_detection_res(results_path, file_prefix=file_prefix)
gt_indexed = get_gt_db(gt_db_path)
per_image_gts, per_image_detections = detector_eval.get_per_image_gts_and_detections(gt_indexed, detection_res)
per_cat_metrics = detector_eval.compute_precision_recall_bbox(per_image_detections, per_image_gts, num_gt_classes,
matching_iou_threshold=0.5)
show_metrics(per_cat_metrics)
```
## empty vs non-empty
```
accuracy = detector_eval.compute_emptiness_accuracy(gt_indexed, detection_res, threshold=empty_threshold)
print('Accuracy:', accuracy)
```
<a href="https://colab.research.google.com/github/kareem1925/Ismailia-school-of-AI/blob/master/quantum_mnist_classification/Classifying_mnist_data_using_quantum_features.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
We will first install the Qulacs plugin with GPU support for PennyLane and then restart the runtime.
```
import os
!pip install git+https://github.com/kareem1925/pennylane-qulacs@GPU_support
os.kill(os.getpid(), 9)
```
Run the following command to make sure everything is working perfectly
```
import qulacs
qulacs.QuantumStateGpu
from pennylane import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder,OneHotEncoder,normalize,LabelBinarizer
from sklearn.utils import compute_sample_weight
import pennylane as qml
from sklearn.datasets import load_digits
import warnings
from sklearn.metrics import balanced_accuracy_score as acc
from pennylane.optimize import AdamOptimizer,AdagradOptimizer
np.seterr(all="ignore")
warnings.filterwarnings('ignore')
```
**Defining the log loss function along with softmax and accuracy**
```
# this function is taken from scikit-learn's metrics code and has been
# modified to work with autograd's numpy
def log_loss(y_true, y_pred, eps=1e-15, normalize=True, sample_weight=None,
labels=None):
lb = LabelBinarizer()
if labels is not None:
lb.fit(labels)
else:
lb.fit(y_true)
if len(lb.classes_) == 1:
if labels is None:
raise ValueError('y_true contains only one label ({0}). Please '
'provide the true labels explicitly through the '
'labels argument.'.format(lb.classes_[0]))
else:
raise ValueError('The labels array needs to contain at least two '
'labels for log_loss, '
'got {0}.'.format(lb.classes_))
transformed_labels = lb.transform(y_true)
if transformed_labels.shape[1] == 1:
transformed_labels = np.append(1 - transformed_labels,
transformed_labels, axis=1)
# Clipping
y_pred = np.clip(y_pred, eps, 1 - eps)
# If y_pred is of single dimension, assume y_true to be binary
# and then check.
if y_pred.ndim == 1:
y_pred = y_pred[:, np.newaxis]
if y_pred.shape[1] == 1:
y_pred = np.append(1 - y_pred, y_pred, axis=1)
# Check if dimensions are consistent.
# transformed_labels = check_array(transformed_labels)
if len(lb.classes_) != y_pred.shape[1]:
if labels is None:
raise ValueError("y_true and y_pred contain different number of "
"classes {0}, {1}. Please provide the true "
"labels explicitly through the labels argument. "
"Classes found in "
"y_true: {2}".format(transformed_labels.shape[1],
y_pred.shape[1],
lb.classes_))
else:
raise ValueError('The number of classes in labels is different '
'from that in y_pred. Classes found in '
'labels: {0}'.format(lb.classes_))
# Renormalize
y_pred /= y_pred.sum(axis=1)[:, np.newaxis]
loss = -(transformed_labels * np.log(y_pred)).sum(axis=1)
# print(loss)
return loss
def accuracy_score(y_true, y_pred):
"""
This function computed the weighted aaverage accuarcy
"""
weights = compute_sample_weight('balanced',y_true)
return acc(y_true,y_pred,sample_weight=weights)
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum(axis=0)
```
**Data loading and splitting**
```
X,y = load_digits(n_class=3,return_X_y=True)
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.1,random_state=5,stratify=y)
la = OneHotEncoder(sparse=False).fit(y.reshape(-1,1))
y_train = la.transform(y_train.reshape(-1,1))
y_test = la.transform(y_test.reshape(-1,1))
y_test[:2]
```
**Defining the quantum circuit**
```
# initialize the device
dev = qml.device("qulacs.simulator", wires=7,shots=1000,analytic = True)
@qml.qnode(dev)
def qclassifier(weights, X=None):
# pennylane normalizes the input for us by setting normalize to True so no need for any preprocessing
qml.templates.AmplitudeEmbedding(X,wires=list(range(7)),pad=0.0,normalize=True)
### the following commented loop mimics the StronglyEntanglingLayers call below, except for the CRX gate, whose weights you would need to define yourself
### because the init template in pennylane only initializes the rotation parameters
# for i in range(weights.shape[0]):
# for j in range(weights.shape[1]):
# qml.Rot(*weights[i][j],wires=j)
# for x in range(cnots.shape[1]):
# qml.CRX(*cnots[i][x],wires=[x,(x+1)%6])
qml.templates.StronglyEntanglingLayers(weights,wires=list(range(7)))
return [qml.expval(qml.PauliZ(i)) for i in range(7)]
```
### **The Cost Function**
This function contains the main logic of the full network. It takes a batched input along with the weights and first passes each sample through the quantum classifier.
Then it adds the bias to the output of the quantum circuit. After that, we apply the classical operations (ReLU and softmax) as shown in the for loop below.
```
def cost(params, x, y):
# Compute prediction for each input in data batch
loss = []
for i in range(len(x)):
out = qclassifier(params[0],X=x[i])+params[1] # quantum output
out = np.maximum(0,np.dot(params[2],out)+params[3]) # ReLU on the first layer
loss.append(softmax(np.dot(params[4],out)+params[5])) # softmax on the second layer
loss = log_loss(y,np.array(loss),labels=y_train) # compute loss
weights = compute_sample_weight('balanced',y)
# weighted average to compensate for imbalanced batches
s = 0
for x, y in zip(loss, weights):
s += x * y
return s/sum(weights)
# a helper function to predict the label of the image
def predict(params,x,y):
prob = []
for i in range(len(x)):
out = qclassifier(params[0],X=x[i])+params[1]
out = np.maximum(0,np.dot(params[2],out)+params[3])
out = softmax(np.dot(params[4],out)+params[5])
prob.append(np.argmax(out))
return prob
```
### **Weights initialization**
```
np.random.seed(88)
# quantum parameters
n_qubits= 7
Q_n_layer = 8
Qweights = qml.init.strong_ent_layers_uniform(n_layers = Q_n_layer,n_wires = n_qubits,low=0,high=1,seed=0)
Qbias = np.random.uniform(low=-.1,high=.1,size=(n_qubits))*0
# first layer parameters
hidden_units = 12
linear2_layer = np.random.randn(hidden_units,n_qubits)*0.01
bias2_layer = np.random.randn(hidden_units)*0
classes = 3
# second layer parameters
linear3_layer = np.random.randn(classes,hidden_units)*0.01
bias3_layer = np.random.randn(classes)*0
params = [Qweights,Qbias,linear2_layer,bias2_layer,linear3_layer,bias3_layer]
params
```
**Load the saved weights**
You can download the weights from this [link](https://github.com/kareem1925/Ismailia-school-of-AI/raw/master/quantum_mnist_classification/final-grads.npy). Or, you can check the [repo](https://github.com/kareem1925/Ismailia-school-of-AI/tree/master/quantum_mnist_classification) itself.
```
final_weights = np.load("/content/final-grads.npy",allow_pickle=True)
```
Convert the one hot encoding back to its original labels
```
labels = la.inverse_transform(y_test)
predictions=predict(final_weights,X_test,y_test)
print(accuracy_score(labels,predictions))
from sklearn.metrics import classification_report
print(classification_report(labels,predictions))
```
### **Training procedure**
You can run this cell and experiment with the training.
```
from sklearn.utils import shuffle
learning_rate = 0.12
epochs = 1200
batch_size = 32
opt = AdamOptimizer(learning_rate) # classical adam optimizer
opt.reset()
loss = np.inf #random large number
grads = []
for it in range(epochs):
# data shuffling
X_train_1,y_train_1 = shuffle(X_train,y_train)
X_test_1,y_test_1 = shuffle(X_test,y_test)
# batching the data, i.e. every epoch processes the batch_size samples only
Xbatch = X_train_1[:batch_size]
ybatch = y_train_1[:batch_size]
params = opt.step(lambda v: cost(v, Xbatch, ybatch), params) # updating weights
grads.append(params)
if it % 1 == 0:
test_loss = cost(params, X_test_1[:50], y_test_1[:50])
if test_loss < loss:
loss = test_loss
print('heey new loss')
print("Iter: {:5d} | test loss: {:0.7f} ".format(it + 1, test_loss))
```
# Module Efficiency History and Projections
```
import numpy as np
import pandas as pd
import os,sys
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 22})
plt.rcParams['figure.figsize'] = (12, 8)
```
This journal covers the development of a historical baseline and baseline future projection of average module efficiency for each installation year.
```
cwd = os.getcwd() #grabs current working directory
skipcols = ['Source']
mod_eff_raw = pd.read_csv(cwd+"/../../../PV_ICE/baselines/SupportingMaterial/module_eff.csv",
index_col='Year', usecols=lambda x: x not in skipcols)
mod_eff_raw['mod_eff'] = pd.to_numeric(mod_eff_raw['mod_eff'])
print(mod_eff_raw['mod_eff'][2019])
plt.plot(mod_eff_raw, marker='o')
```
There appears to be an "outlier" in 2003, which comes from a different source. It does, however, fit within the range of module efficiency specified by the prior data point (2001: avg = 13.6, min = 12, max = 16.1). For the purposes of interpolation, we will drop this single data point.
```
mod_eff_raw['mod_eff'][2003]=np.nan
plt.plot(mod_eff_raw, marker='o')
```
Now interpolate for the missing years. We will break this into two parts: a linear interpolation for the historical data, and a power-law projection out to 2050.
```
mod_eff_early = mod_eff_raw.loc[(mod_eff_raw.index<=2019)]
mod_eff_history = mod_eff_early.interpolate(method='linear',axis=0)
#print(mod_eff_history)
plt.plot(mod_eff_history)
# Import curve fitting package from scipy
from scipy.optimize import curve_fit
# Function to calculate the power-law with constants a and b
def power_law(x, a, b):
return a*np.power(x, b)
#generate a dataset for the area in between
mod_eff_late = mod_eff_raw.loc[(mod_eff_raw.index>=2020)]
y_dummy = power_law(mod_eff_late.index-2019, mod_eff_late['mod_eff'][2020], 0.073)
#played around with the exponent until y_dummy[31] closely matched the projected 25.06% value. CITE
print(y_dummy[30])
plt.plot(y_dummy)
#create a dataframe of the projection
mod_eff_late['mod_eff'] = y_dummy
#print(mod_eff_late)
plt.plot(mod_eff_late)
```
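As an aside, scipy's `curve_fit` (imported above but not actually called) could recover the power-law parameters directly instead of hand-tuning the exponent. A minimal sketch on synthetic data; the values here are illustrative, not the notebook's real efficiency series:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * np.power(x, b)

# Synthetic data standing in for the efficiency projection (illustrative values only)
x = np.arange(1, 32)                 # years since 2019
y = 20.0 * np.power(x, 0.073)        # generated from known parameters

popt, _ = curve_fit(power_law, x, y, p0=[20.0, 0.05])
a_fit, b_fit = popt
print(round(a_fit, 3), round(b_fit, 4))  # recovers approximately 20.0 and 0.073
```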
Now smash the two dataframes back together for our average module efficiency baseline.
```
mod_eff = pd.concat([mod_eff_history, mod_eff_late])
mod_eff.to_csv(cwd+'/../../../PV_ICE/baselines/SupportingMaterial/output_avg_module_eff_final.csv', index=True)
plt.plot(mod_eff)
plt.title('Average Module Efficiency (%)')
plt.ylabel('Efficiency (%)')
#graph for paper
plt.rcParams.update({'font.size': 22})
plt.rcParams['figure.figsize'] = (12, 8)
plt.axvspan(2020, 2050.5, facecolor='gray', alpha=0.1)
plt.plot(mod_eff_raw, marker='o', label='Raw Data')
plt.plot(mod_eff, '--k', label='PV ICE Baseline')
plt.title('Average Module Efficiency [%]')
plt.ylabel('Efficiency [%]')
plt.legend()
plt.xlim([1974, 2050.5])
```
```
#Creating a collection called movies_bulk and rewriting previous updates to the collection
import pymongo
from pymongo import MongoClient, UpdateOne
from datetime import datetime
import pprint
import re
from IPython.display import clear_output
# Replace XXXX with your connection URI from the Atlas UI
client = MongoClient('mongodb+srv://dbAdmin:pa55word@mflix.phy3v.mongodb.net/mflix_db?retryWrites=true&w=majority')
pipeline = [
{
'$out':"movies_bulk"
}
]
clear_output()
pprint.pprint(list(client.mflix_db.movies_initial2.aggregate(pipeline)))
#count = client.mflix_db.movie
'''Updating documents by removing null values using the batch method,
which is good for updating large quantities of documents
and keeping fields with data'''
import pymongo
from pymongo import MongoClient, UpdateOne
from datetime import datetime
import pprint
import re
from IPython.display import clear_output
# Replace XXXX with your connection URI from the Atlas UI
client = MongoClient('mongodb+srv://dbAdmin:pa55word@mflix.phy3v.mongodb.net/mflix_db?retryWrites=true&w=majority')
#regular expression
#compile the expression to display the runtime pattern in minutes
runtime_pat = re.compile(r'([0-9]+) min')
batch_size = 1000
updates = []
count = 0
for movie in client.mflix_db.movies_bulk.find({}):
fields_to_set = {}
fields_to_unset = {}
for k,v in movie.copy().items():
if v == "" or v == [""]:
del movie[k]
fields_to_unset[k] = ""
if 'director' in movie:
fields_to_unset['director'] = ""
fields_to_set['directors'] = movie['director'].split(", ")
if 'cast' in movie:
fields_to_set['cast'] = movie['cast'].split(", ")
if 'writer' in movie:
fields_to_unset['writer'] = ""
fields_to_set['writers'] = movie['writer'].split(", ")
if 'genre' in movie:
fields_to_unset['genre'] = ""
fields_to_set['genres'] = movie['genre'].split(", ")
if 'language' in movie:
fields_to_unset['language'] = ""
fields_to_set['languages'] = movie['language'].split(", ")
if 'country' in movie:
fields_to_unset['country'] = ""
fields_to_set['countries'] = movie['country'].split(", ")
if 'fullplot' in movie:
fields_to_unset['fullplot'] = ""
fields_to_set['fullPlot'] = movie['fullplot']
if 'rating' in movie:
fields_to_unset['rating'] = ""
fields_to_set['rated'] = movie['rating']
imdb = {}
if 'imdbID' in movie:
fields_to_unset['imdbID'] = ""
imdb['id'] = movie['imdbID']
if 'imdbRating' in movie:
fields_to_unset['imdbRating'] = ""
imdb['rating'] = movie['imdbRating']
if 'imdbVotes' in movie:
fields_to_unset['imdbVotes'] = ""
imdb['votes'] = movie['imdbVotes']
if imdb:
fields_to_set['imdb'] = imdb
if 'released' in movie:
fields_to_set['released'] = datetime.strptime(movie['released'],
"%Y-%m-%d")
if 'lastUpdated' in movie:
fields_to_set['lastUpdated'] = datetime.strptime(movie['lastUpdated'][0:19],
"%Y-%m-%d %H:%M:%S")
if 'runtime' in movie:
m = runtime_pat.match(movie['runtime'])
if m:
fields_to_set['runtime'] = int(m.group(1))
update_doc = {}
if fields_to_set:
update_doc['$set'] = fields_to_set
if fields_to_unset:
update_doc['$unset'] = fields_to_unset
#append updated documents in updates list
#UpdateOne - operation class to create an UpdateOne object
updates.append(UpdateOne({'_id': movie['_id']}, update_doc))
#use counter to count number of documents to be updated
count += 1
if count == batch_size: #if the number of documents is equal to the batch size proceed with condition
#use the bulk_write method to write the updated documents in memory and display in the console
client.mflix_db.movies_bulk.bulk_write(updates)
updates = [] #reset updates list to reprocess the n number of batches.
count = 0 #restart counter to process number of documents based on the set batch number
#if updates list is true proceed with printing the updated documents in the console in a batch form in an efficient manner
if updates:
client.mflix_db.movies_bulk.bulk_write(updates)
```
Mining data from 8ch.net with Python
====================
[](https://anaconda.org/bc-privsec-devel/chanscrape)
**This is notebook #2 of 2 in this series.**
The source code, as well as instructions for installing and running this notebook on your own, are available in its [official GitHub repository](https://github.com/BC-PRIVSEC/ChanScrape). The authors of and contributors to this software have decided to **dedicate this work to the Public Domain**, waiving any copyright _to our own detriment and that of our successors_, and for the benefit of the general public. See the [legal details in the official repository](https://github.com/BC-PRIVSEC/ChanScrape/LICENCIA-ES).
We want to demonstrate in this way that **we have no interest whatsoever in profiting** from the victims; on the contrary, the goal is to provide them with computing tools and knowledge of best practices to **defend their security, integrity, and identity online.**
#### Use this notebook and the tool it represents for whatever legal purposes suit you.
------------
### Gathering as much data as possible
We have already seen that the `py8chan` library and the site's own API make a powerful combination. In this installment, we will make some simple modifications to the program we wrote in the previous notebook to accomplish the following:
- Collect email addresses from all posts
- Additionally, collect all the statistics we can from the API
- Organize and store this information efficiently
- Turn this operation into an automated, routine task so that we can mine the information constantly without being penalized by the site
- Experiment with the data to see what conclusions we can draw about the monitored community and its habits, and, if possible, profile individual users
We will define a list of the sites that we know operate under the same scheme; they could practically be considered a single community, for several reasons that are beyond the technical scope of this notebook. We will then define the following data structure as the master list of boards to monitor:
```
import py8chan # we start the program with the header, importing the libraries we will use
import json
import re
from datetime import datetime
from bs4 import BeautifulSoup
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
```
Let's define the additional helper functions up front; they are the same ones used in notebook 1.
```
def alt_unique(input_list):
''' Efficient implementation of unique elements using an
intermediate conversion to set(), which does not allow duplicates.
This way we avoid walking through every element of the list in
a conventional loop '''
output = set(input_list)
return list(output)
```
We will define our master data inventory as a native Python list, to take advantage of the features the language provides for objects of this type.
In particular, it is useful that lists are iterable, which will save us many lines of code. Each element of this list will be the name of a board of interest on the site.
```
Inventario_Maestro =['ensenada', 'tijuana', 'mexicali',
'mexicaligirls', 'mexicalilocal',
'mexicali2', 'drive646']
timestamp = datetime.isoformat( datetime.now()) # exact time as a stamp to identify this run
patron_email = re.compile(r'[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}')
emails = []
mensajes = 0
total_conversaciones=0
total_mensajes = 0
```
### Connecting to the API and instantiating data objects
We will now iterate over the list:
```
for nombre_de_tablero in Inventario_Maestro:
tablero = py8chan.Board(nombre_de_tablero)
conversaciones = tablero.get_all_threads()
total_conversaciones += len(conversaciones)
print('Processing board', nombre_de_tablero, '...')
print('There are', len(conversaciones), 'threads in', nombre_de_tablero)
mensajes_del_tablero = 0
for conversacion in conversaciones:
mensajes += len(conversacion.all_posts)
mensajes_del_tablero += len(conversacion.all_posts)
for msg in conversacion.all_posts:
em = patron_email.findall(msg.comment)
if em:
for m in em:
emails.append(m)
print('There are', mensajes_del_tablero, 'messages in total for board', nombre_de_tablero)
print(' ')
unique_emails = alt_unique(emails)
unique_emails.sort()
print('Processed', len(Inventario_Maestro), 'boards')
print('We captured', total_conversaciones, 'threads')
print('They contain', mensajes, 'messages in total')
print('Extracted', len(unique_emails), 'unique emails from', len(emails), 'instances')
print('that is, every', round(mensajes/len(emails),2), 'messages, someone asks for a mailing and provides their address')
%%bash
mkdir -vp data/solo-email-todos-los-boards
filename = "data/solo-email-todos-los-boards/" + timestamp.split(".")[0] + ".json"
with open(filename,'w+') as storage:
storage.write(json.dumps(unique_emails))
print("Si no hay ningun error,los corrreos se han grabado en el archivo ", filename )
%%bash
find . -name \*.json |
while read f
do
echo -n records in `basename $f` :
cat $f | jq '.' | wc -l
done
```
<img src="https://github.com/pmservice/ai-openscale-tutorials/raw/master/notebooks/images/banner.png" align="left" alt="banner">
# Working with Watson Machine Learning
This notebook will train, create, and deploy a credit risk model. It will then configure OpenScale to monitor data drift and accuracy by injecting sample payloads for viewing in the OpenScale Insights dashboard.
### Contents
- [1. Setup](#setup)
- [2. Model building and deployment](#model)
- [3. OpenScale configuration](#openscale)
- [4. Generate drift model](#driftmodel)
- [5. Submit payload](#payload)
- [6. Enable drift monitoring](#monitor)
- [7. Run drift monitor](#driftrun)
# 1.0 Setup <a name="setup"></a>
## 1.1 Package installation
```
import warnings
warnings.filterwarnings('ignore')
!rm -rf /home/spark/shared/user-libs/python3.6*
!pip install --upgrade opt-einsum==2.3.2 --no-cache | tail -n 1
!pip install --upgrade typing-extensions==3.6.2.1 --no-cache | tail -n 1
!pip install --upgrade jupyter==1 --no-cache | tail -n 1
!pip install --upgrade tensorboard==1.15.0 | tail -n 1
!pip install --upgrade ibm-ai-openscale==2.2.1 --no-cache | tail -n 1
!pip install --upgrade JPype1-py3 | tail -n 1
!pip install --upgrade watson-machine-learning-client-V4==1.0.93 | tail -n 1
!pip install --upgrade numpy==1.18.3 --no-cache | tail -n 1
!pip install --upgrade SciPy==1.4.1 --no-cache | tail -n 1
!pip install --upgrade pyspark==2.3 | tail -n 1
!pip install --upgrade scikit-learn==0.20.3 | tail -n 1
!pip install --upgrade pandas==0.24.2 | tail -n 1
!pip install --upgrade "ibm-wos-utils>=1.2.1" | tail -n 1
```
### Action: restart the kernel!
## 1.2 Configure credentials
- WOS_CREDENTIALS (ICP)
- WML_CREDENTIALS (ICP)
- DATABASE_CREDENTIALS (DB2 on ICP)
- SCHEMA_NAME
The url for `WOS_CREDENTIALS` is the url of the CP4D cluster, e.g. `https://zen-cpd-zen.apps.com`.
```
WOS_CREDENTIALS = {
"url": "********",
"username": "********",
"password": "********"
}
WML_CREDENTIALS = WOS_CREDENTIALS.copy()
WML_CREDENTIALS['instance_id']='openshift'
WML_CREDENTIALS['version']='3.0.0'
```
Provide `DATABASE_CREDENTIALS`. Watson OpenScale uses a database to store payload logs and calculated metrics. If an OpenScale datamart already exists in Db2, it will be used and no data will be overwritten. The details in the cell below have been removed because they contain a password.
```
DATABASE_CREDENTIALS = {
}
```
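For reference, a Db2 credentials dictionary typically carries fields like the following. Every value below is a placeholder, and the exact keys expected may vary with your Db2 deployment, so treat this as a sketch of the shape rather than a definitive template:

```
DATABASE_CREDENTIALS = {
    "hostname": "db2-host.example.com",  # placeholder host
    "port": 50000,                       # common default Db2 port
    "db": "SAMPLE",                      # placeholder database name
    "username": "********",
    "password": "********"
}
```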
Provide `SCHEMA_NAME`. The details in the cell below have been removed.
```
SCHEMA_NAME = ''
```
Provide a custom name to be concatenated to model name, deployment name and open scale monitor. Sample value for CUSTOM_NAME could be ```CUSTOM_NAME = 'SAMAYA_OPENSCALE_3.0'```
```
CUSTOM_NAME = 'SAMAYA-DRIFT'
```
# 2.0 Model building and deployment <a name="model"></a>
In this section you will train a Spark MLlib model and then deploy it as a web service using the Watson Machine Learning service.
## 2.1 Load the training data
```
import pandas as pd
!rm -rf german_credit_data_biased_training.csv
!wget https://raw.githubusercontent.com/IBM/cpd-intelligent-loan-agent-assets/master/data/german_credit_data_biased_training.csv -O german_credit_data_biased_training.csv
!ls -lh german_credit_data_biased_training.csv
data_df = pd.read_csv('german_credit_data_biased_training.csv', sep=",", header=0)
data_df.head()
from pyspark.sql import SparkSession
import json
spark = SparkSession.builder.getOrCreate()
df_data = spark.read.csv(path="german_credit_data_biased_training.csv", sep=",", header=True, inferSchema=True)
df_data.head()
```
## 2.2 Explore data
```
df_data.printSchema()
print("Number of records: " + str(df_data.count()))
```
## 2.3 Create a model
Choose a unique name (e.g., your name or initials) and a date or date-time for `MODEL_NAME` and `DEPLOYMENT_NAME`
```
MODEL_NAME = CUSTOM_NAME + "_MODEL"
DEPLOYMENT_NAME = CUSTOM_NAME + "_DEPLOYMENT"
spark_df = df_data
(train_data, test_data) = spark_df.randomSplit([0.8, 0.2], 24)
print("Number of records for training: " + str(train_data.count()))
print("Number of records for evaluation: " + str(test_data.count()))
spark_df.printSchema()
```
The code below creates a Random Forest Classifier with Spark, setting up string indexers for the categorical features and the label column. Finally, this notebook creates a pipeline including the indexers and the model, and does an initial Area Under ROC evaluation of the model.
```
from pyspark.ml.feature import OneHotEncoder, StringIndexer, IndexToString, VectorAssembler
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml import Pipeline, Model
si_CheckingStatus = StringIndexer(inputCol = 'CheckingStatus', outputCol = 'CheckingStatus_IX')
si_CreditHistory = StringIndexer(inputCol = 'CreditHistory', outputCol = 'CreditHistory_IX')
si_LoanPurpose = StringIndexer(inputCol = 'LoanPurpose', outputCol = 'LoanPurpose_IX')
si_ExistingSavings = StringIndexer(inputCol = 'ExistingSavings', outputCol = 'ExistingSavings_IX')
si_EmploymentDuration = StringIndexer(inputCol = 'EmploymentDuration', outputCol = 'EmploymentDuration_IX')
si_Sex = StringIndexer(inputCol = 'Sex', outputCol = 'Sex_IX')
si_OthersOnLoan = StringIndexer(inputCol = 'OthersOnLoan', outputCol = 'OthersOnLoan_IX')
si_OwnsProperty = StringIndexer(inputCol = 'OwnsProperty', outputCol = 'OwnsProperty_IX')
si_InstallmentPlans = StringIndexer(inputCol = 'InstallmentPlans', outputCol = 'InstallmentPlans_IX')
si_Housing = StringIndexer(inputCol = 'Housing', outputCol = 'Housing_IX')
si_Job = StringIndexer(inputCol = 'Job', outputCol = 'Job_IX')
si_Telephone = StringIndexer(inputCol = 'Telephone', outputCol = 'Telephone_IX')
si_ForeignWorker = StringIndexer(inputCol = 'ForeignWorker', outputCol = 'ForeignWorker_IX')
si_Label = StringIndexer(inputCol="Risk", outputCol="label").fit(spark_df)
label_converter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=si_Label.labels)
va_features = VectorAssembler(inputCols=["CheckingStatus_IX", "CreditHistory_IX", "LoanPurpose_IX", "ExistingSavings_IX", "EmploymentDuration_IX", "Sex_IX", \
                                         "OthersOnLoan_IX", "OwnsProperty_IX", "InstallmentPlans_IX", "Housing_IX", "Job_IX", "Telephone_IX", "ForeignWorker_IX", \
                                         "LoanDuration", "LoanAmount", "InstallmentPercent", "CurrentResidenceDuration", "Age", "ExistingCreditsCount", \
                                         "Dependents"], outputCol="features")
from pyspark.ml.classification import RandomForestClassifier
classifier = RandomForestClassifier(featuresCol="features")
pipeline = Pipeline(stages=[si_CheckingStatus, si_CreditHistory, si_EmploymentDuration, si_ExistingSavings, si_ForeignWorker, si_Housing, si_InstallmentPlans, si_Job, si_LoanPurpose, si_OthersOnLoan, \
                            si_OwnsProperty, si_Sex, si_Telephone, si_Label, va_features, classifier, label_converter])
model = pipeline.fit(train_data)
predictions = model.transform(test_data)
evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction", metricName='areaUnderROC')
area_under_curve = evaluatorDT.evaluate(predictions)
evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction", metricName='areaUnderPR')
area_under_PR = evaluatorDT.evaluate(predictions)
#default evaluation is areaUnderROC
print("areaUnderROC = %g" % area_under_curve, "areaUnderPR = %g" % area_under_PR)
```
### 2.4 Evaluate more metrics by exporting them into pandas and NumPy
```
from sklearn.metrics import classification_report
y_pred = predictions.toPandas()['prediction']
y_pred = ['Risk' if pred == 1.0 else 'No Risk' for pred in y_pred]
y_test = test_data.toPandas()['Risk']
# classification_report sorts class labels alphabetically ('No Risk' < 'Risk'), so target_names must follow that order
print(classification_report(y_test, y_pred, target_names=['No Risk', 'Risk']))
```
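A confusion matrix complements the classification report by showing exactly where the two classes get confused. A pure-Python sketch of the counting (shown on toy labels rather than the real `y_test`/`y_pred`, and using `collections.Counter` instead of the sklearn `confusion_matrix` call):

```
from collections import Counter

def confusion_counts(y_true, y_pred):
    # Count (true label, predicted label) pairs
    return Counter(zip(y_true, y_pred))

# Toy data in place of the real y_test / y_pred from the cell above
counts = confusion_counts(['Risk', 'No Risk', 'Risk', 'No Risk'],
                          ['Risk', 'No Risk', 'No Risk', 'No Risk'])
print(counts[('Risk', 'Risk')], counts[('Risk', 'No Risk')])  # true positives, false negatives for 'Risk'
```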
## 2.5 Publish the model
In this section, the notebook uses Watson Machine Learning to save the model (including the pipeline) to the WML instance. Previous versions of the model are removed so that the notebook can be run again, resetting all data for another demo.
```
from watson_machine_learning_client import WatsonMachineLearningAPIClient
import json
wml_client = WatsonMachineLearningAPIClient(WML_CREDENTIALS)
```
### 2.5.1 Set default space
Deployment spaces are a newer feature in CP4D: to deploy a model, you must first create a deployment space and deploy the model there. You can list all the spaces using the `.list()` function, or create new spaces from the CP4D menu in the top-left corner --> Analyze --> Analytics deployments --> New Deployment Space. Once you know which space you want to deploy in, simply pass the GUID of that space as the argument to the `.set.default_space()` function below.
```
wml_client.spaces.list()
```
We'll use the `GUID` of your deployment space as the argument to the `set.default_space()` method below:
```
wml_client.set.default_space('346b75fd-018d-4465-8cb8-0985406cfdee')
```
Alternatively, set `space_name` below and use the following cell to create a space with that name.
```
# space_name = "my_space_name"
# spaces = wml_client.spaces.get_details()['resources']
# space_id = None
# for space in spaces:
#     if space['entity']['name'] == space_name:
#         space_id = space["metadata"]["guid"]
# if space_id is None:
#     space_id = wml_client.spaces.store(
#         meta_props={wml_client.spaces.ConfigurationMetaNames.NAME: space_name})["metadata"]["guid"]
# wml_client.set.default_space(space_id)
```
### 2.5.2 Remove existing model and deployment
```
deployment_details = wml_client.deployments.get_details()
for deployment in deployment_details['resources']:
    deployment_id = deployment['metadata']['guid']
    model_id = deployment['entity']['asset']['href'].split('/')[3].split('?')[0]
    if deployment['entity']['name'] == DEPLOYMENT_NAME:
        print('Deleting deployment id', deployment_id)
        wml_client.deployments.delete(deployment_id)
        print('Deleting model id', model_id)
        wml_client.repository.delete(model_id)
wml_client.repository.list_models()
```
### 2.5.3 Set `training_data_reference`
```
training_data_reference = {
    "name": "Credit Risk feedback",
    "connection": DATABASE_CREDENTIALS,
    "source": {
        "tablename": "CREDIT_RISK_TRAINING",
        "schema_name": "TRAININGDATA",
        "type": "db2"
    }
}
```
### 2.5.4 Store the model in Watson Machine Learning on CP4D
```
wml_models = wml_client.repository.get_model_details()
model_uid = None
for model_in in wml_models['resources']:
    if MODEL_NAME == model_in['entity']['name']:
        model_uid = model_in['metadata']['guid']
        break
if model_uid is None:
    print("Storing model ...")
    metadata = {
        wml_client.repository.ModelMetaNames.NAME: MODEL_NAME,
        wml_client.repository.ModelMetaNames.TYPE: 'mllib_2.3',
        wml_client.repository.ModelMetaNames.RUNTIME_UID: 'spark-mllib_2.3',
    }
    published_model_details = wml_client.repository.store_model(model, metadata, training_data=df_data, pipeline=pipeline)
    model_uid = wml_client.repository.get_model_uid(published_model_details)
    print("Done")
model_uid
```
## 2.6 Deploy the model
The next section of the notebook deploys the model as a RESTful web service in Watson Machine Learning. The deployed model will have a scoring URL you can use to send data to the model for predictions.
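Scoring requests sent to that URL follow the fields/values payload shape used throughout this notebook. As an illustration, a small helper that assembles such a payload from row dictionaries (the helper and the feature names below are examples, not part of the WML client):

```
def build_scoring_payload(rows):
    # rows: list of dicts mapping feature name -> value; all rows share the same keys
    fields = list(rows[0].keys())
    values = [[row[f] for f in fields] for row in rows]
    return {"fields": fields, "values": values}

payload_example = build_scoring_payload([{"LoanDuration": 13, "LoanAmount": 1343}])
print(payload_example)  # {'fields': ['LoanDuration', 'LoanAmount'], 'values': [[13, 1343]]}
```

The resulting dict can then be wrapped in `{wml_client.deployments.ScoringMetaNames.INPUT_DATA: [payload_example]}` exactly as done in section 5 below.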
```
wml_deployments = wml_client.deployments.get_details()
deployment_uid = None
for deployment in wml_deployments['resources']:
    if DEPLOYMENT_NAME == deployment['entity']['name']:
        deployment_uid = deployment['metadata']['guid']
        break
if deployment_uid is None:
    print("Deploying model...")
    meta_props = {
        wml_client.deployments.ConfigurationMetaNames.NAME: DEPLOYMENT_NAME,
        wml_client.deployments.ConfigurationMetaNames.ONLINE: {}
    }
    deployment = wml_client.deployments.create(artifact_uid=model_uid, meta_props=meta_props)
    deployment_uid = wml_client.deployments.get_uid(deployment)
print("Model id: {}".format(model_uid))
print("Deployment id: {}".format(deployment_uid))
```
# 3.0 Configure OpenScale <a name="openscale"></a>
The notebook will now import the necessary libraries and set up a Python OpenScale client.
```
from ibm_ai_openscale import APIClient4ICP
from ibm_ai_openscale.engines import *
from ibm_ai_openscale.utils import *
from ibm_ai_openscale.supporting_classes import PayloadRecord, Feature
from ibm_ai_openscale.supporting_classes.enums import *
ai_client = APIClient4ICP(WOS_CREDENTIALS)
ai_client.version
```
## 3.1 Create datamart
### 3.1.1 Set up datamart
Watson OpenScale uses a database to store payload logs and calculated metrics. If an OpenScale datamart exists in Db2, the existing datamart will be used and no data will be overwritten.
Prior instances of the Credit model will be removed from OpenScale monitoring.
```
try:
    data_mart_details = ai_client.data_mart.get_details()
    print('Using existing external datamart')
except:
    print('Setting up external datamart')
    ai_client.data_mart.setup(db_credentials=DATABASE_CREDENTIALS, schema=SCHEMA_NAME)
    data_mart_details = ai_client.data_mart.get_details()
```
## 3.2 Bind machine learning engines
Watson OpenScale needs to be bound to the Watson Machine Learning instance to capture payload data into and out of the model. If this binding already exists, this code will output a warning message and use the existing binding.
```
binding_uid = ai_client.data_mart.bindings.add('WML instance', WatsonMachineLearningInstance4ICP(wml_credentials=WML_CREDENTIALS))
if binding_uid is None:
    binding_uid = ai_client.data_mart.bindings.get_details()['service_bindings'][0]['metadata']['guid']
bindings_details = ai_client.data_mart.bindings.get_details()
binding_uid
ai_client.data_mart.bindings.list()
```
## 3.3 Subscriptions
```
ai_client.data_mart.bindings.list_assets()
ai_client.data_mart.bindings.get_details(binding_uid)
```
### 3.3.1 Remove existing credit risk subscriptions
This code removes previous subscriptions to the Credit model to refresh the monitors with the new model and new data.
```
subscriptions_uids = ai_client.data_mart.subscriptions.get_uids()
for subscription in subscriptions_uids:
    sub_name = ai_client.data_mart.subscriptions.get_details(subscription)['entity']['asset']['name']
    if sub_name == MODEL_NAME:
        ai_client.data_mart.subscriptions.delete(subscription)
        print('Deleted existing subscription for', MODEL_NAME)
```
This code creates the model subscription in OpenScale using the Python client API. Note that we need to provide the model unique identifier, and some information about the model itself.
```
subscription = ai_client.data_mart.subscriptions.add(WatsonMachineLearningAsset(
    model_uid,
    problem_type=ProblemType.BINARY_CLASSIFICATION,
    input_data_type=InputDataType.STRUCTURED,
    label_column='Risk',
    prediction_column='predictedLabel',
    probability_column='probability',
    feature_columns=["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"],
    categorical_columns=["CheckingStatus","CreditHistory","LoanPurpose","ExistingSavings","EmploymentDuration","Sex","OthersOnLoan","OwnsProperty","InstallmentPlans","Housing","Job","Telephone","ForeignWorker"]
))
if subscription is None:
    print('Subscription already exists; get the existing one')
    subscriptions_uids = ai_client.data_mart.subscriptions.get_uids()
    for sub in subscriptions_uids:
        if ai_client.data_mart.subscriptions.get_details(sub)['entity']['asset']['name'] == MODEL_NAME:
            subscription = ai_client.data_mart.subscriptions.get(sub)
```
Get subscription list
```
subscriptions_uids = ai_client.data_mart.subscriptions.get_uids()
ai_client.data_mart.subscriptions.list()
subscription_details = subscription.get_details()
```
# 4.0 Generate drift model <a name="driftmodel"></a>
Drift detection requires a trained drift detection model to be uploaded manually for WML. You can train, create, and download a drift detection model using the code below. The complete code can be found [here](https://github.com/IBM-Watson/aios-data-distribution/blob/master/training_statistics_notebook.ipynb) (see the section on drift detection model generation).
```
training_data_info = {
    "class_label": 'Risk',
    "feature_columns": ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"],
    "categorical_columns": ["CheckingStatus","CreditHistory","LoanPurpose","ExistingSavings","EmploymentDuration","Sex","OthersOnLoan","OwnsProperty","InstallmentPlans","Housing","Job","Telephone","ForeignWorker"]
}
# Set model_type. Acceptable values are: ["binary", "multiclass", "regression"]
model_type = "binary"
#model_type = "multiclass"
#model_type = "regression"
def score(training_data_frame):
    # To be filled by the user
    wml_credentials = WML_CREDENTIALS
    # The label column and the prediction column must have the same data type,
    # and both must contain the same set of unique class labels.
    prediction_column_name = "predictedLabel"
    probability_column_name = "probability"
    feature_columns = list(training_data_frame.columns)
    training_data_rows = training_data_frame[feature_columns].values.tolist()
    payload_scoring = {
        wml_client.deployments.ScoringMetaNames.INPUT_DATA: [{
            "fields": feature_columns,
            "values": [x for x in training_data_rows]
        }]
    }
    score = wml_client.deployments.score(deployment_uid, payload_scoring)
    score_predictions = score.get('predictions')[0]
    response_fields = list(score_predictions.get('fields'))
    # list.index() raises ValueError rather than returning -1, so check membership first
    if probability_column_name not in response_fields or prediction_column_name not in response_fields:
        raise Exception("Missing prediction/probability column in the scoring response")
    prob_col_index = response_fields.index(probability_column_name)
    predict_col_index = response_fields.index(prediction_column_name)
    import numpy as np
    probability_array = np.array([value[prob_col_index] for value in score_predictions.get('values')])
    prediction_vector = np.array([value[predict_col_index] for value in score_predictions.get('values')])
    return probability_array, prediction_vector
# Generate drift detection model
from ibm_wos_utils.drift.drift_trainer import DriftTrainer
drift_detection_input = {
    "feature_columns": training_data_info.get('feature_columns'),
    "categorical_columns": training_data_info.get('categorical_columns'),
    "label_column": training_data_info.get('class_label'),
    "problem_type": model_type
}
drift_trainer = DriftTrainer(data_df, drift_detection_input)
if model_type != "regression":
    # Note: batch_size can be customized by the user as per the training data size
    drift_trainer.generate_drift_detection_model(score, batch_size=data_df.shape[0])
# Note: two-column constraints are not computed beyond two_column_learner_limit (default 200)
# The user can adjust the value depending on the requirement
drift_trainer.learn_constraints(two_column_learner_limit=200)
drift_trainer.create_archive()
#Generate a download link for drift detection model
from IPython.display import HTML
import base64
import io
def create_download_link_for_ddm(title="Download Drift detection model", filename="drift_detection_model.tar.gz"):
    # Retains stats information
    with open(filename, 'rb') as file:
        ddm = file.read()
    b64 = base64.b64encode(ddm)
    payload = b64.decode()
    html = '<a download="{filename}" href="data:text/json;base64,{payload}" target="_blank">{title}</a>'
    html = html.format(payload=payload, title=title, filename=filename)
    return HTML(html)
create_download_link_for_ddm()
#!rm -rf drift_detection_model.tar.gz
#!wget -O drift_detection_model.tar.gz https://github.com/IBM/cpd-intelligent-loan-agent-assets/blob/master/models/drift_detection_model.tar.gz?raw=true
```
# 5.0 Submit payload <a name="payload"></a>
### Score the model so we can configure monitors
Now that the WML service has been bound and the subscription has been created, we need to send a request to the model before we configure OpenScale. This allows OpenScale to create a payload log in the datamart with the correct schema, so it can capture data coming into and out of the model.
```
fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"]
values = [
["no_checking",13,"credits_paid_to_date","car_new",1343,"100_to_500","1_to_4",2,"female","none",3,"savings_insurance",46,"none","own",2,"skilled",1,"none","yes"],
["no_checking",24,"prior_payments_delayed","furniture",4567,"500_to_1000","1_to_4",4,"male","none",4,"savings_insurance",36,"none","free",2,"management_self-employed",1,"none","yes"],
["0_to_200",26,"all_credits_paid_back","car_new",863,"less_100","less_1",2,"female","co-applicant",2,"real_estate",38,"none","own",1,"skilled",1,"none","yes"],
["0_to_200",14,"no_credits","car_new",2368,"less_100","1_to_4",3,"female","none",3,"real_estate",29,"none","own",1,"skilled",1,"none","yes"],
["0_to_200",4,"no_credits","car_new",250,"less_100","unemployed",2,"female","none",3,"real_estate",23,"none","rent",1,"management_self-employed",1,"none","yes"],
["no_checking",17,"credits_paid_to_date","car_new",832,"100_to_500","1_to_4",2,"male","none",2,"real_estate",42,"none","own",1,"skilled",1,"none","yes"],
["no_checking",33,"outstanding_credit","appliances",5696,"unknown","greater_7",4,"male","co-applicant",4,"unknown",54,"none","free",2,"skilled",1,"yes","yes"],
["0_to_200",13,"prior_payments_delayed","retraining",1375,"100_to_500","4_to_7",3,"male","none",3,"real_estate",37,"none","own",2,"management_self-employed",1,"none","yes"]
]
payload_scoring = {"fields": fields,"values": values}
payload = {
wml_client.deployments.ScoringMetaNames.INPUT_DATA: [payload_scoring]
}
scoring_response = wml_client.deployments.score(deployment_uid, payload)
print('Single record scoring result:', '\n fields:', scoring_response['predictions'][0]['fields'], '\n values: ', scoring_response['predictions'][0]['values'][0])
```
# 6. Enable drift monitoring <a name="monitor"></a>
```
subscription.drift_monitoring.enable(threshold=0.05, min_records=10, model_path="drift_detection_model.tar.gz")
```
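Here `threshold=0.05` asks OpenScale to raise a drift alert once the estimated accuracy drop exceeds 5%, and `min_records=10` delays evaluation until at least 10 payload records have been logged. The alerting rule can be illustrated with a simplified toy check (a sketch of the idea, not what OpenScale computes internally):

```
def drift_alert(baseline_acc, estimated_acc, n_records, threshold=0.05, min_records=10):
    # Alert only when enough records have accumulated and the accuracy drop exceeds the threshold
    if n_records < min_records:
        return False
    return (baseline_acc - estimated_acc) > threshold

print(drift_alert(0.80, 0.72, 50))  # True: an 8-point drop with enough records
print(drift_alert(0.80, 0.72, 5))   # False: too few records logged so far
```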
# 7. Run Drift monitor on demand <a name="driftrun"></a>
```
!rm -f german_credit_feed.json
!wget https://raw.githubusercontent.com/IBM/cpd-intelligent-loan-agent-assets/master/data/german_credit_feed.json
import random
with open('german_credit_feed.json', 'r') as scoring_file:
    scoring_data = json.load(scoring_file)
fields = scoring_data['fields']
values = []
for _ in range(10):
    current = random.choice(scoring_data['values'])
    # set age of all rows to 100 to increase drift values on dashboard
    current[12] = 100
    values.append(current)
payload_scoring = {"fields": fields, "values": values}
payload = {
wml_client.deployments.ScoringMetaNames.INPUT_DATA: [payload_scoring]
}
scoring_response = wml_client.deployments.score(deployment_uid, payload)
drift_run_details = subscription.drift_monitoring.run(background_mode=False)
subscription.drift_monitoring.get_table_content()
```
## Congratulations!
You have finished running all the cells within the notebook for IBM Watson OpenScale. You can now view the OpenScale dashboard by going to the CP4D `Home` page, and clicking `Services`. Choose the `OpenScale` tile and click the menu to `Open`. Click on the tile for the model you've created to see fairness, accuracy, and performance monitors. Click on the timeseries graph to get detailed information on transactions during a specific time window.
OpenScale shows model performance over time. You have two options to keep data flowing to your OpenScale graphs:
* Download, configure and schedule the [model feed notebook](https://raw.githubusercontent.com/emartensibm/german-credit/master/german_credit_scoring_feed.ipynb). This notebook can be set up with your WML credentials, and scheduled to provide a consistent flow of scoring requests to your model, which will appear in your OpenScale monitors.
* Re-run this notebook. Running this notebook from the beginning will delete and re-create the model and deployment, and re-create the historical data. Please note that the payload and measurement logs for the previous deployment will continue to be stored in your datamart, and can be deleted if necessary.
```
import tensorflow as tf
print(tf.__version__)
!pip install keras-tuner
import kerastuner
from kerastuner.tuners import RandomSearch, Hyperband, BayesianOptimization
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Dropout
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
(trainX, trainY), (testX, testY) = tf.keras.datasets.cifar100.load_data()
# one hot encode target values
trainY = to_categorical(trainY)
testY = to_categorical(testY)
# convert from integers to floats
#train_norm = trainX.astype('float32')
#test_norm = testX.astype('float32')
# normalize to range 0-1
trainX = trainX / 255.0
testX = testX / 255.0
# define cnn model
def build_model(hp):
    model = Sequential()
    hp_filters = hp.Int('filters', min_value=32, max_value=64, step=32)
    model.add(Conv2D(filters=hp_filters, kernel_size=(3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(32, 32, 3)))
    model.add(BatchNormalization())
    model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D((2, 2)))
    model.add(Dropout(0.2))
    model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
    model.add(BatchNormalization())
    model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D((2, 2)))
    model.add(Dropout(0.3))
    model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
    model.add(BatchNormalization())
    model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D((2, 2)))
    model.add(Dropout(0.4))
    model.add(Flatten())
    model.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(Dense(100, activation='softmax'))
    # compile model
    # opt = SGD(lr=0.001, momentum=0.9)
    hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3])
    opt = Adam(learning_rate=hp_learning_rate)
    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model
tuner = BayesianOptimization(
    build_model,
    objective='val_accuracy',
    max_trials=4)
tuner.search(trainX, trainY,
             epochs=5,
             validation_data=(testX, testY))
tuner.results_summary()
# Get the optimal hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials = 1)[0]
print(best_hps.values)
model = tuner.hypermodel.build(best_hps)
history = model.fit(trainX, trainY,
                    epochs=5,
                    validation_data=(testX, testY))
# evaluate model
_, acc = model.evaluate(testX, testY, verbose=0)
print('> %.3f' % (acc * 100.0))
# Plot training & validation accuracy values
epoch_range = range(1, len(history.history['accuracy']) + 1)  # one entry per epoch of the final training run
plt.plot(epoch_range, history.history['accuracy'])
plt.plot(epoch_range, history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(epoch_range, history.history['loss'])
plt.plot(epoch_range, history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
```
# Generates images from text prompts with a CLIP conditioned Decision Transformer.
By Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings).
```
# @title Licensed under the MIT License
# Copyright (c) 2021 Katherine Crowson
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
!nvidia-smi
!git clone https://github.com/openai/CLIP
!git clone https://github.com/CompVis/taming-transformers
!pip install ftfy regex tqdm omegaconf pytorch-lightning einops transformers
!pip install -e ./taming-transformers
!curl -OL --http1.1 'https://the-eye.eu/public/AI/models/cond_transformer_2/transformer_cond_2_00003_090000_modelonly.pth'
!curl -L 'https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1' > vqgan_imagenet_f16_16384.yaml
!curl -L 'https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fckpts%2Flast.ckpt&dl=1' > vqgan_imagenet_f16_16384.ckpt
import argparse
from pathlib import Path
import sys
from IPython import display
from omegaconf import OmegaConf
import torch
from torch import nn
from torch.nn import functional as F
from torchvision import transforms
from torchvision.transforms import functional as TF
from transformers import top_k_top_p_filtering
from tqdm.notebook import trange
sys.path.append('./taming-transformers')
from CLIP import clip
from taming.models import vqgan
class CausalTransformerEncoder(nn.TransformerEncoder):
    def forward(self, src, mask=None, src_key_padding_mask=None, cache=None):
        output = src
        if self.training:
            if cache is not None:
                raise ValueError("cache parameter should be None in training mode")
            for mod in self.layers:
                output = mod(output, src_mask=mask, src_key_padding_mask=src_key_padding_mask)
            if self.norm is not None:
                output = self.norm(output)
            return output
        new_token_cache = []
        compute_len = src.shape[0]
        if cache is not None:
            compute_len -= cache.shape[1]
        for i, mod in enumerate(self.layers):
            output = mod(output, compute_len=compute_len)
            new_token_cache.append(output)
            if cache is not None:
                output = torch.cat([cache[i], output], dim=0)
        if cache is not None:
            new_cache = torch.cat([cache, torch.stack(new_token_cache, dim=0)], dim=1)
        else:
            new_cache = torch.stack(new_token_cache, dim=0)
        return output, new_cache

class CausalTransformerEncoderLayer(nn.TransformerEncoderLayer):
    def forward(self, src, src_mask=None, src_key_padding_mask=None, compute_len=None):
        if self.training:
            return super().forward(src, src_mask, src_key_padding_mask)
        if compute_len is None:
            src_last_tok = src
        else:
            src_last_tok = src[-compute_len:, :, :]
        attn_mask = src_mask if compute_len > 1 else None
        tmp_src = self.self_attn(src_last_tok, src, src, attn_mask=attn_mask,
                                 key_padding_mask=src_key_padding_mask)[0]
        src_last_tok = src_last_tok + self.dropout1(tmp_src)
        src_last_tok = self.norm1(src_last_tok)
        tmp_src = self.linear2(self.dropout(self.activation(self.linear1(src_last_tok))))
        src_last_tok = src_last_tok + self.dropout2(tmp_src)
        src_last_tok = self.norm2(src_last_tok)
        return src_last_tok

class CLIPToImageTransformer(nn.Module):
    def __init__(self, clip_dim, seq_len, n_toks):
        super().__init__()
        self.clip_dim = clip_dim
        d_model = 1024
        self.clip_in_proj = nn.Linear(clip_dim, d_model, bias=False)
        self.clip_score_in_proj = nn.Linear(1, d_model, bias=False)
        self.in_embed = nn.Embedding(n_toks, d_model)
        self.out_proj = nn.Linear(d_model, n_toks)
        layer = CausalTransformerEncoderLayer(d_model, d_model // 64, d_model * 4,
                                              dropout=0, activation='gelu')
        self.encoder = CausalTransformerEncoder(layer, 24)
        self.pos_emb = nn.Parameter(torch.zeros([seq_len + 1, d_model]))
        self.register_buffer('mask', self._generate_causal_mask(seq_len + 1), persistent=False)

    @staticmethod
    def _generate_causal_mask(size):
        mask = (torch.triu(torch.ones([size, size])) == 1).transpose(0, 1)
        mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0))
        mask[0, 1] = 0
        return mask

    def forward(self, clip_embed, clip_score, input=None, cache=None):
        if input is None:
            input = torch.zeros([len(clip_embed), 0], dtype=torch.long, device=clip_embed.device)
        clip_embed_proj = self.clip_in_proj(F.normalize(clip_embed, dim=1) * self.clip_dim**0.5)
        clip_score_proj = self.clip_score_in_proj(clip_score)
        embed = torch.cat([clip_embed_proj.unsqueeze(0),
                           clip_score_proj.unsqueeze(0),
                           self.in_embed(input.T)])
        embed_plus_pos = embed + self.pos_emb[:len(embed)].unsqueeze(1)
        mask = self.mask[:len(embed), :len(embed)]
        out, cache = self.encoder(embed_plus_pos, mask, cache=cache)
        return self.out_proj(out[1:]).transpose(0, 1), cache
```
## Settings for this run:
```
args = argparse.Namespace(
prompt='Alien Friend by Odilon Redon',
batch_size=16,
clip_score=0.475,
half=True,
k=8,
n=128,
output='out',
seed=0,
temperature=1.,
top_k=0,
top_p=0.95,
)
```
### Actually do the run...
```
device = torch.device('cuda:0')
dtype = torch.half if args.half else torch.float
perceptor = clip.load('ViT-B/32', jit=False)[0].to(device).eval().requires_grad_(False)
normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711])
vqgan_config = OmegaConf.load('vqgan_imagenet_f16_16384.yaml')
vqgan_model = vqgan.VQModel(**vqgan_config.model.params).to(device)
vqgan_model.eval().requires_grad_(False)
vqgan_model.init_from_ckpt('vqgan_imagenet_f16_16384.ckpt')
del vqgan_model.loss
clip_dim = perceptor.visual.output_dim
clip_input_res = perceptor.visual.input_resolution
e_dim = vqgan_model.quantize.e_dim
f = 2**(vqgan_model.decoder.num_resolutions - 1)
n_toks = vqgan_model.quantize.n_e
size_x, size_y = 384, 384
toks_x, toks_y = size_x // f, size_y // f
model = CLIPToImageTransformer(clip_dim, toks_y * toks_x, n_toks)
ckpt = torch.load('transformer_cond_2_00003_090000_modelonly.pth', map_location='cpu')
model.load_state_dict(ckpt['model'])
del ckpt
model = model.to(device, dtype).eval().requires_grad_(False)
if args.seed is not None:
torch.manual_seed(args.seed)
text_embed = perceptor.encode_text(clip.tokenize(args.prompt).to(device)).to(dtype)
text_embed = text_embed.repeat([args.n, 1])
clip_score = torch.ones([text_embed.shape[0], 1], device=device, dtype=dtype) * args.clip_score
@torch.no_grad()
def sample(clip_embed, clip_score, temperature=1., top_k=0, top_p=1.):
tokens = torch.zeros([len(clip_embed), 0], dtype=torch.long, device=device)
cache = None
for i in trange(toks_y * toks_x, leave=False):
logits, cache = model(clip_embed, clip_score, tokens, cache=cache)
logits = logits[:, -1] / temperature
logits = top_k_top_p_filtering(logits, top_k, top_p)
next_token = logits.softmax(1).multinomial(1)
tokens = torch.cat([tokens, next_token], dim=1)
return tokens
def decode(tokens):
z = vqgan_model.quantize.embedding(tokens).view([-1, toks_y, toks_x, e_dim]).movedim(3, 1)
return vqgan_model.decode(z).add(1).div(2).clamp(0, 1)
try:
out_lst, sim_lst = [], []
for i in trange(0, len(text_embed), args.batch_size):
tokens = sample(text_embed[i:i+args.batch_size], clip_score[i:i+args.batch_size],
temperature=args.temperature, top_k=args.top_k, top_p=args.top_p)
out = decode(tokens)
out_lst.append(out)
out_for_clip = F.interpolate(out, (clip_input_res, clip_input_res),
mode='bilinear', align_corners=False)
image_embed = perceptor.encode_image(normalize(out_for_clip)).to(dtype)
sim = torch.cosine_similarity(text_embed[i:i+args.batch_size], image_embed)
sim_lst.append(sim)
out = torch.cat(out_lst)
sim = torch.cat(sim_lst)
best_values, best_indices = sim.topk(min(args.k, args.n))
for i, index in enumerate(best_indices):
filename = args.output + f'_{i:03}.png'
TF.to_pil_image(out[index]).save(filename)
print(f'Actual CLIP score for output {i}: {best_values[i].item():g}')
display.display(display.Image(filename))
except KeyboardInterrupt:
pass
```
| github_jupyter |
## Reinterpreting by patching an existing HistFactory pdf spec
An important pattern in high-energy physics is the reinterpretation of analyses with respect to new signal models.
The main idea is that a given phase-space selection (an "analysis") designed for some original BSM physics signal may not only be efficient for that signal (indeed, it was likely *optimized* for that signal) but also reasonably efficient for other signals (albeit not optimally). Thus, upon generating the new signal, one can pass the new signal sample through the analysis pipeline to obtain a new estimate of its distribution within the channels defined by the analysis.
The final step is then to construct a new statistical model by swapping out the old signal for the new one, and to evaluate new limits based on this modified model.
In `pyhf` this final step is very easy to perform, as demonstrated in this notebook.
First, some basic imports and plotting code we will use later.
```
import jsonpatch
import pyhf
from pyhf.contrib.viz import brazil
%pylab inline
def invert_interval(test_mus, cls_obs, cls_exp, test_size=0.05):
point05cross = {"exp": [], "obs": None}
for cls_exp_sigma in cls_exp:
yvals = cls_exp_sigma
point05cross["exp"].append(
np.interp(test_size, list(reversed(yvals)), list(reversed(test_mus)))
)
yvals = cls_obs
point05cross["obs"] = np.interp(
test_size, list(reversed(yvals)), list(reversed(test_mus))
)
return point05cross
```
### The original statistical model
```
data = [51, 62.0]
original = pyhf.simplemodels.uncorrelated_background(
signal=[5.0, 6.0], bkg=[50.0, 65.0], bkg_uncertainty=[5.0, 3.0]
)
test_mus = np.linspace(0, 5)
results = [
pyhf.infer.hypotest(
mu,
data + original.config.auxdata,
original,
original.config.suggested_init(),
original.config.suggested_bounds(),
return_expected_set=True,
)
for mu in test_mus
]
fig, ax = plt.subplots(1, 1)
ax.set_title("Hypothesis Tests")
ax.set_ylabel("CLs")
ax.set_xlabel("µ")
artists = brazil.plot_results(test_mus, results, test_size=0.05, ax=ax)
```
### Patching the likelihood to replace the BSM components
A nice thing about being able to specify the entire statistical model using the ubiquitous JSON format is that we can leverage a wide ecosystem of tools to manipulate JSON documents.
In particular, we can use the [JSON-Patch](https://tools.ietf.org/html/rfc6902) format (a proposed IETF standard) to replace the signal component of the statistical model with a new signal.
This new signal distribution could for example be the result of a third-party analysis implementation such as Rivet.
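As a standalone illustration of what a JSON-Patch `replace` operation does under the hood, here is a stdlib-only sketch that walks a JSON Pointer path and swaps the value at its end (the toy spec and values below are made up for the example; the notebook itself uses the `jsonpatch` library):

```python
import copy

def apply_replace(doc, path, value):
    """Apply a single RFC 6902 'replace' op, addressed by a JSON Pointer path.

    Returns a patched copy; the original document is left untouched.
    """
    new_doc = copy.deepcopy(doc)
    parts = path.lstrip("/").split("/")
    target = new_doc
    for part in parts[:-1]:
        # List indices appear as decimal strings in JSON Pointers.
        target = target[int(part)] if isinstance(target, list) else target[part]
    last = parts[-1]
    if isinstance(target, list):
        target[int(last)] = value
    else:
        target[last] = value
    return new_doc

# Toy spec with the same nesting shape as a HistFactory channel/sample layout.
spec = {"channels": [{"samples": [{"name": "signal", "data": [5.0, 6.0]}]}]}
patched = apply_replace(spec, "/channels/0/samples/0/data", [20.0, 10.0])
```

The `jsonpatch` library used below does the same pointer walk, plus validation and support for the other RFC 6902 operations (`add`, `remove`, `test`, ...).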
```
new_signal = [20.0, 10.0]
patch = jsonpatch.JsonPatch(
[
{"op": "replace", "path": "/channels/0/samples/0/data", "value": new_signal},
]
)
recast = pyhf.Model(patch.apply(original.spec))
recast.spec
```
### The Recasted Result
```
test_mus = np.linspace(0, 5)
results = [
pyhf.infer.hypotest(
mu,
data + recast.config.auxdata,
recast,
recast.config.suggested_init(),
recast.config.suggested_bounds(),
return_expected_set=True,
)
for mu in test_mus
]
fig, ax = plt.subplots(1, 1)
ax.set_title("Hypothesis Tests")
ax.set_ylabel("CLs")
ax.set_xlabel("µ")
artists = brazil.plot_results(test_mus, results, test_size=0.05, ax=ax)
```
| github_jupyter |
# Run Ad-Hoc Model Bias Analysis
## Run Bias Analysis In The Notebook using `smclarify`
https://github.com/aws/amazon-sagemaker-clarify
```
!pip install -q smclarify==0.1
from smclarify.bias.report import *
from smclarify.util.dataset import Datasets, german_lending_readable_values
from typing import Dict
from collections import defaultdict
import pandas as pd
df_pre_training = pd.read_csv('./data-clarify/amazon_reviews_us_giftcards_software_videogames_balanced.csv')
df_pre_training.shape
df_pre_training.head()
```
# Pre-Training Bias Analysis
```
facet_column = FacetColumn('product_category')
label_column = LabelColumn('star_rating', df_pre_training['star_rating'], [5, 4])
group_variable = df_pre_training['product_category']
pre_training_report = bias_report(
df_pre_training,
facet_column,
label_column,
stage_type=StageType.PRE_TRAINING,
group_variable=group_variable
)
pre_training_report
```
# Post-Training Bias Analysis
## _TODO: Implement Batch Prediction_
```
data = {
'star_rating': [1, 2, 3, 4, 5],
'review_body': ['Worst ever', 'Expected more', 'Its ok', 'I like it', 'I love it'],
'product_category': ['Gift Card', 'Gift Card', 'Gift Card', 'Digital_Software', 'Digital_Software'],
'star_rating_predicted': [1, 2, 3, 4, 5]
}
df = pd.DataFrame(data, columns = ['star_rating','review_body', 'product_category','star_rating_predicted'])
print (df)
```
# Convert data columns into `categorical` data type required for Clarify
```
df['star_rating'] = df['star_rating'].astype('category')
df['star_rating_predicted'] = df['star_rating_predicted'].astype('category')
df['product_category'] = df['product_category'].astype('category')
```
# Configure Clarify
```
facet_column = FacetColumn(
name='product_category',
sensitive_values=['Gift Card']
)
label_column = LabelColumn(
name='star_rating',
data=df['star_rating'],
positive_label_values=[5,4])
predicted_label_column = LabelColumn(
name='star_rating_predicted',
data=df['star_rating_predicted'],
positive_label_values=[5,4])
group_variable = df['product_category']
post_training_report = bias_report(
df,
facet_column=facet_column,
label_column=label_column,
stage_type=StageType.POST_TRAINING,
predicted_label_column=predicted_label_column,
metrics=['DPPL', 'DI', 'DCA', 'DCR', 'RD', 'DAR', 'DRR', 'AD', 'CDDPL', 'TE'],
group_variable=group_variable
)
```
# Show Post-Training Bias Report
```
from pprint import pprint
pprint(post_training_report)
```
# Release Resources
```
%%html
<p><b>Shutting down your kernel for this notebook to release resources.</b></p>
<button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button>
<script>
try {
els = document.getElementsByClassName("sm-command-button");
els[0].click();
}
catch(err) {
// NoOp
}
</script>
%%javascript
try {
Jupyter.notebook.save_checkpoint();
Jupyter.notebook.session.delete();
}
catch(err) {
// NoOp
}
```
| github_jupyter |
```
import re
import os
import random
import itertools
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
from urllib.parse import urlparse
from sklearn import metrics
from tensorflow import keras
from sklearn.ensemble import RandomForestClassifier
from tensorflow.keras import backend as K
from sklearn.tree import DecisionTreeClassifier
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn.utils import shuffle
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing import sequence
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report,confusion_matrix
from tensorflow.keras.layers import LSTM, Dense, Dropout, Embedding, Conv2D, MaxPooling2D, Flatten
from tensorflow.keras.callbacks import EarlyStopping,ModelCheckpoint,ReduceLROnPlateau
from pyspark.ml.feature import Tokenizer, RegexTokenizer
from pyspark.ml.classification import LinearSVC
from pyspark.sql.functions import col, udf
from pyspark.sql.types import IntegerType
from pyspark.ml.feature import NGram,HashingTF, IDF
from sklearn.feature_extraction.text import TfidfVectorizer
from pyspark.ml.feature import StandardScaler
from pyspark.sql.functions import lit
from pyspark.mllib.feature import StandardScaler, StandardScalerModel
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.util import MLUtils
from pyspark.ml.classification import LogisticRegression, OneVsRest
from pyspark.ml import Pipeline
from pyspark.sql import Row
from sklearn.feature_extraction.text import TfidfVectorizer
from pyspark.ml.feature import RegexTokenizer, StopWordsRemover, CountVectorizer
from sklearn.linear_model import LogisticRegression
SIZE = 100
BATCH_SIZE = 16
EPOCHS = 100
SEED = 0
os.environ['PYTHONHASHSEED']=str(SEED)
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)
df=pd.read_csv("/Users/abhinavshinow/Documents/GitHub/Mal_URL/Data/mal_2.csv")
df2=pd.read_csv("/Users/abhinavshinow/Documents/GitHub/Mal_URL/Data/mal_3.csv")
df.drop(df.columns[df.columns.str.contains('unnamed',case = False)],axis = 1, inplace = True)
df.drop('label',axis = 1, inplace = True)
df=df.rename(columns={'result': 'type'})
df2=df2.rename(columns={'label': 'type'})
df2['type']=df2['type'].replace({'bad':1,'good':0})
df.head()
df2.head()
def getTokens(input):
tokensBySlash = str(input.encode('utf-8')).split('/')
allTokens=[]
for i in tokensBySlash:
tokens = str(i).split('-')
tokensByDot = []
for j in range(0,len(tokens)):
tempTokens = str(tokens[j]).split('.')
            tokensByDot = tokensByDot + tempTokens
allTokens = allTokens + tokens + tokensByDot
allTokens = list(set(allTokens))
if 'com' in allTokens:
allTokens.remove('com')
return allTokens
#Model--1
data1 = np.array(df)
y1=[d[1] for d in data1]
url1=[d[0] for d in data1]
vectorised_url1=TfidfVectorizer()
x1=vectorised_url1.fit_transform(url1)
x_train1, x_test1, y_train1, y_test1 = train_test_split(x1, y1, test_size=0.2, shuffle=True, stratify=y1)
#Model--2
data2 = np.array(df2)
y2=[d[1] for d in data2]
url2=[d[0] for d in data2]
vectorised_url2=TfidfVectorizer()
x2=vectorised_url2.fit_transform(url2)
x_train2, x_test2, y_train2, y_test2 = train_test_split(x2, y2, test_size=0.2, shuffle=True, stratify=y2)
#Logistic Regression
model_lg1 = LogisticRegression(solver='lbfgs', max_iter=10000)
model_lg2 = LogisticRegression(solver='lbfgs', max_iter=10000)
#XGBoost
model_xg1 = xgb.XGBClassifier(n_jobs = 8)
model_xg2 = xgb.XGBClassifier(n_jobs = 8)
#Decision Tree
model_dc1 = DecisionTreeClassifier()
model_dc2 = DecisionTreeClassifier()
model_lg1.fit(x_train1,y_train1)
score_lg1 = model_lg1.score(x_test1,y_test1)
print(score_lg1)
model_lg2.fit(x_train2,y_train2)
score_lg2 = model_lg2.score(x_test2,y_test2)
print(score_lg2)
model_xg1.fit(x_train1,y_train1)
score_xg1 = model_xg1.score(x_test1,y_test1)
print(score_xg1)
model_xg2.fit(x_train2,y_train2)
score_xg2 = model_xg2.score(x_test2,y_test2)
print(score_xg2)
model_dc1.fit(x_train1,y_train1)
score_dc1 = model_dc1.score(x_test1,y_test1)
print(score_dc1)
model_dc2.fit(x_train2,y_train2)
score_dc2 = model_dc2.score(x_test2,y_test2)
print(score_dc2)
pred_lg1 = model_lg1.predict(x_test1)
pred_lg2 = model_lg2.predict(x_test2)
pred_xg1 = model_xg1.predict(x_test1)
pred_xg2 = model_xg2.predict(x_test2)
pred_dc1 = model_dc1.predict(x_test1)
pred_dc2 = model_dc2.predict(x_test2)
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
print(classification_report(y_test1,pred_lg1))
cm = metrics.confusion_matrix(y_test1, pred_lg1, labels=[0,1])
plot_confusion_matrix(cm,classes=['benign','malicious'])
print(classification_report(y_test2,pred_lg2))
cm = metrics.confusion_matrix(y_test2, pred_lg2, labels=[0,1])
plot_confusion_matrix(cm,classes=['benign','malicious'])
print(classification_report(y_test1,pred_xg1))
cm = metrics.confusion_matrix(y_test1, pred_xg1, labels=[0,1])
plot_confusion_matrix(cm,classes=['benign','malicious'])
print(classification_report(y_test2,pred_xg2))
cm = metrics.confusion_matrix(y_test2, pred_xg2, labels=[0,1])
plot_confusion_matrix(cm,classes=['benign','malicious'])
print(classification_report(y_test1,pred_dc1))
cm = metrics.confusion_matrix(y_test1, pred_dc1, labels=[0,1])
plot_confusion_matrix(cm,classes=['benign','malicious'])
print(classification_report(y_test2,pred_dc2))
cm = metrics.confusion_matrix(y_test2, pred_dc2, labels=[0,1])
plot_confusion_matrix(cm,classes=['benign','malicious'])
url_test = ['http://www.824555.com/app/member/SportOption.php?uid=guest&langx=gb','http://202.77.121.186/kliping/','crackspider.us/toolbar/install.php?pack=exe','www.google.com']
url_test1 = []
url_test2 = []
for url in url_test:
if(urlparse(url).scheme != ''):
url_test1.append(url)
else:
url_test2.append(url)
x_pred1=vectorised_url1.transform(url_test1)
x_pred2=vectorised_url2.transform(url_test2)
pred1=model_xg1.predict(x_pred1)
pred2=model_dc2.predict(x_pred2)
pred1
pred2
```
| github_jupyter |
```
import requests as req
import sentinelhub as sh
import matplotlib.pyplot as plt
import numpy as np
import instance_id as inid
import mimetypes
import sentinelhub.constants
from spectral import *
%matplotlib notebook
#%matplotlib inline
from PIL import Image
im = Image.open('rukban.tif')
import numpy as np
imarray = np.array(im)
fig = plt.subplots(nrows=1, ncols=1, figsize=(30, 30))
plt.imshow(imarray)
imarray.shape
fig = plt.subplots(nrows=1, ncols=1, figsize=(5, 5))
plt.imshow(imarray[800:850,800:850])
fig = plt.subplots(nrows=1, ncols=1, figsize=(30, 30))
plt.imshow(m, cmap='RdYlGn')
from spectral import *
(m, c) = kmeans(imarray, 5, 30)
plt.figure()
#plt.hold(True)
for i in range(c.shape[0]):
plt.plot(c[i])
plt.show()
imarray.shape
m.shape
c.shape
m.shape
m[939]
np.where(m==1)
ax.imshow(imarray)
plt.imshow(imarray)
imarray
import cv2
#!/usr/bin/python
# Standard imports
import cv2
import numpy as np
# Read image
#im = cv2.imread("blob.jpg", cv2.IMREAD_GRAYSCALE)
im = m
# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 10
params.maxThreshold = 200
# Filter by Area.
params.filterByArea = True
params.minArea = 1500
# Filter by Circularity
params.filterByCircularity = True
params.minCircularity = 0.1
# Filter by Convexity
params.filterByConvexity = True
params.minConvexity = 0.87
# Filter by Inertia
params.filterByInertia = True
params.minInertiaRatio = 0.01
# Create a detector with the parameters
ver = (cv2.__version__).split('.')
if int(ver[0]) < 3 :
detector = cv2.SimpleBlobDetector(params)
else :
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures
# the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show blobs
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
sentinelhub.constants.AwsConstants.S2_L1C_BANDS
base_url = 'http://services.sentinel-hub.com/ogc/wms/'
instance_id = inid.instance_id()
test = req.get('http://services.sentinel-hub.com/ogc/wms/420323a2-e3b4-45f3-bd98-e7d353665736?REQUEST=GetMap&BBOX=3238005,5039853,3244050,5045897&LAYERS=TRUE_COLOR&MAXCC=20&WIDTH=320&HEIGHT=320&FORMAT=image/jpeg&TIME=2016-01-29/2016-02-29')
test.status_code
def plot_image(image, factor=1):
"""
Utility function for plotting RGB images.
"""
fig = plt.subplots(nrows=1, ncols=1, figsize=(30, 30))
if np.issubdtype(image.dtype, np.floating):
plt.imshow(np.minimum(image * factor, 1))
else:
plt.imshow(image)
coords_wgs84 = [38.6, 33.3, 38.65, 33.35]
betsiboka_bbox = sh.BBox(bbox=coords_wgs84, crs=sh.CRS.WGS84)
wms_true_color_request = sh.WmsRequest(layer='FALSE_COLOR_URBAN',
bbox=betsiboka_bbox,
time=('2008-1-15','2018-1-15'),
width=1024,
image_format=sh.constants.MimeType.TIFF_d32f,
maxcc=0.0,
instance_id=instance_id)
wms_true_color_img = wms_true_color_request.get_data()
dates = wms_true_color_request.get_dates()
wms_true_color_img[0].dtype
wms_true_color_img[0]
dates[0]
len(wms_true_color_img)
plot_image(wms_true_color_img[10])
plt.imshow(wms_true_color_img[-1])
plot_image(wms_true_color_img[-1])
dates[-1]
wms_true_color_img[-1].shape
len(wms_true_color_img)
wms_true_color_img
len(dates)
len(wms_true_color_img)
```
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from future import standard_library
standard_library.install_aliases()
from builtins import range
from builtins import object
import os
import pickle as pickle
from IPython.display import clear_output
VAL_RATIO = 0.1
TEST_RATIO = 0.2
MNIST = fetch_openml("mnist_784", as_frame=False)  # as_frame=False keeps "data" a NumPy array so the .reshape calls below work
def load_mnist():
val_len = int(len(MNIST["data"]) * VAL_RATIO)
test_len = int(len(MNIST["data"]) * TEST_RATIO)
train_len = int(len(MNIST["data"]) - val_len - test_len)
X_train= MNIST["data"][:train_len].reshape((-1,1,28,28)) / 255.0
X_val = MNIST["data"][train_len : train_len + val_len].reshape((-1,1,28,28)) / 255.0
X_test = MNIST["data"][-test_len:].reshape((-1,1,28,28)) / 255.0
y_train = MNIST["target"][:train_len]
y_val = MNIST["target"][train_len : train_len + val_len]
y_test = MNIST["target"][-test_len:]
return X_train, X_val, X_test, y_train, y_val, y_test
X_train, X_val, X_test, y_train, y_val, y_test = load_mnist()
```
Basic layers
```
#fully connected
def affine_forward(x, w, b):
#print("affine forward")
#print("affine X shape : {}, w shape {}, bias shape {}".format(x.shape, w.shape, b.shape))
out = None
x_row = x.reshape(x.shape[0], -1)
out = np.matmul(x_row,w) + b
cache = (x, w, b)
return out, cache
def affine_backward(dout, cache):
#print("affine_backward")
#print("dout : {}".format(dout.shape))
x, w, b = cache
dx, dw, db = None, None, None
dx = np.matmul(dout, w.T)
dx = dx.reshape(x.shape)
x = x.reshape(x.shape[0], -1)
dw = np.matmul(x.T, dout)
db = np.sum(dout, axis = 0)
return dx, dw, db
#relu
def relu_forward(x):
# print("relu forward")
out = None
out = np.maximum(x,0)
cache = x
return out, cache
def relu_backward(dout, cache):
#print("relu backward")
dx, x = None, cache
#print("dout : {}, x : {}".format(dout.shape, x.shape))
dx = (x > 0) * dout
return dx
#dropout
def dropout_forward(x, dropout_param):
p, mode = dropout_param["p"], dropout_param["mode"]
if "seed" in dropout_param:
np.random.seed(dropout_param["seed"])
mask = None
out = None
    if mode == "train":
        mask = np.random.binomial(np.ones_like(x), p) / p
        out = x * mask  # inverted dropout: scale at train time so the test-time forward pass is the identity
elif mode == "test":
out = x
cache = (dropout_param, mask)
out = out.astype(x.dtype, copy=False)
return out, cache
def dropout_backward(dout, cache):
dropout_param, mask = cache
mode = dropout_param["mode"]
dx = None
if mode == "train":
dx = dout * mask
elif mode == "test":
dx = dout
return dx
#convolution
def conv_forward_naive(x, w, b, conv_param):
#print("conv forward")
out = None
N,C,H,W = x.shape
F,C,HH,WW = w.shape
padding, stride = conv_param["pad"], conv_param["stride"]
assert (H + 2*padding - HH) % stride == 0, "shape error"
assert (W + 2*padding - WW) % stride == 0, "shape error"
out_N = N
out_C = F
out_H = int((H + 2*padding - HH) / stride + 1)
out_W = int((W + 2*padding - WW) / stride + 1)
out = np.zeros((out_N, out_C, out_H, out_W))
padded_x = np.pad(x, ((0,0), (0,0), (padding,padding), (padding,padding)))
for n in range(out_N):
for c in range(out_C):
for h in range(out_H):
for w_ in range(out_W):
out[n,c,h,w_] = np.sum(padded_x[n,:, h*stride: h*stride + HH, w_*stride : w_*stride + WW] * w[c]) + b[c]
cache = (x, w, b, conv_param)
#print("conv : x {} w {} b {}".format(x.shape, w.shape, b.shape))
return out, cache
def conv_backward_naive(dout, cache):
#print("conv backward")
dx, dw, db = None, None, None
x, w, b, conv_param = cache
padding = conv_param["pad"]
stride = conv_param["stride"]
N,C,H,W = x.shape
F,C,HH,WW = w.shape
assert (H + 2*padding - HH) % stride == 0, "shape error"
assert (W + 2*padding - WW) % stride == 0, "shape error"
dx = np.zeros_like(x)
dw = np.zeros_like(w)
db = np.zeros_like(b)
out_N = N
out_C = F
out_H = int((H + 2*padding - HH) / stride + 1)
out_W = int((W + 2*padding - WW) / stride + 1)
out = np.zeros((out_N, out_C, out_H, out_W))
padded_x = np.pad(x, ((0,0), (0,0), (padding, padding), (padding, padding)))
padded_dx = np.pad(dx, ((0,0), (0,0), (padding, padding), (padding, padding)))
#print("padded dx : {} dout : {} w : {}".format(padded_dx.shape, dout.shape, w.shape))
#backpropagation
for n in range(out_N):
for c in range(out_C):
for h in range(out_H):
for w_ in range(out_W):
window_x = padded_x[n, :, h*stride:h*stride + HH, w_*stride:w_*stride + WW]
                    # bias acts as an add gate
db[c] += dout[n,c,h,w_]
                    # weight acts as a mul gate
dw[c] += window_x * dout[n,c,h,w_]
                    # x acts as a mul gate
padded_dx[n, :, h*stride:h*stride + HH, w_*stride:w_*stride + WW] += w[c]* dout[n,c,h,w_]
    # crop away the padding added earlier
dx = padded_dx[:, :, padding:padding + H, padding : padding + W]
return dx, dw, db
#max pooling
def max_pool_forward_naive(x, pool_param):
#print("pooling forward")
out = None
N, C, H, W = x.shape
pooling_h = pool_param["pool_height"]
pooling_w = pool_param["pool_width"]
stride = pool_param["stride"]
out_n = N
out_c = C
out_h = int((H - pooling_h) / stride + 1)
out_w = int((W - pooling_w) / stride + 1)
out = np.zeros((out_n,out_c, out_h, out_w))
for n in range(out_n):
for c in range(out_c):
for h in range(out_h):
for w in range(out_w):
window_x = x[n,c,h*stride : h*stride + pooling_h, w*stride : w*stride + pooling_w]
out[n,c,h,w] = np.max(window_x)
cache = (x, pool_param)
return out, cache
def max_pool_backward_naive(dout, cache):
#print("pooling backward")
dx = None
x, pool_param = cache
N, C, H, W = x.shape
pooling_h = pool_param["pool_height"]
pooling_w = pool_param["pool_width"]
stride = pool_param["stride"]
out_n = N
out_c = C
out_h = int((H - pooling_h) / stride + 1)
out_w = int((W - pooling_w) / stride + 1)
#print("dout : {}".format(dout.shape))
dx = np.zeros_like(x)
for n in range(out_n):
for c in range(out_c):
for h in range(out_h):
for w in range(out_w):
window_x = x[n,c,h*stride : h*stride + pooling_h, w*stride : w*stride + pooling_w]
mask = (window_x == np.max(window_x))
dx[n,c,h*stride : h*stride + pooling_h,w*stride : w*stride + pooling_w] = mask * dout[n,c,h,w]
return dx
#softmax
def softmax(x,y):
#print("softmax")
logits = x - np.max(x, axis= 1, keepdims = True)
Z = np.sum(np.exp(logits), axis =1 ,keepdims = True)
log_probs = logits - np.log(Z)
probs = np.exp(log_probs)
N = x.shape[0]
y = y.astype(np.int8)
loss = -np.sum(log_probs[np.arange(N), y]) / N
dx = probs.copy()
dx[np.arange(N) , y] -=1
dx /= N
return loss, dx
#For predict
def predict_softmax(x):
#print("predict softmax")
logits = x - np.max(x, axis= 1, keepdims = True)
Z = np.sum(np.exp(logits), axis =1 ,keepdims = True)
log_probs = logits - np.log(Z)
probs = np.exp(log_probs)
return probs
def sgd(w, dw, config=None):
if config is None:
config = {}
config.setdefault("learning_rate", 1)
w -= config["learning_rate"] * dw
return w, config
```
Model layer sequences. Each layer type takes a parameter dict with the following keys:
- `conv_param`: `"pad"`, `"stride"`, `"num_filters"`, `"channel"`, `"filter_size"`
- `pool_param`: `"pool_height"`, `"pool_width"`, `"stride"`
- `dense_param`: `"num_filters"`, `"H"`, `"W"`, `"hidden_dim"`
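The `H`/`W` entries in `dense_param` have to match the spatial size coming out of the conv/pool stack, using the same output-size formulas the layer implementations assert on. A quick sketch of that bookkeeping (the parameter values here are illustrative, not taken from the notebook):

```python
def conv_out_size(size, filter_size, pad, stride):
    # Standard convolution output-size formula; must divide evenly.
    assert (size + 2 * pad - filter_size) % stride == 0, "shape error"
    return (size + 2 * pad - filter_size) // stride + 1

def pool_out_size(size, pool, stride):
    return (size - pool) // stride + 1

# Illustrative params for 28x28 MNIST inputs:
conv_param = {"pad": 1, "stride": 1, "num_filters": 8, "channel": 1, "filter_size": 3}
pool_param = {"pool_height": 2, "pool_width": 2, "stride": 2}

h = conv_out_size(28, conv_param["filter_size"], conv_param["pad"], conv_param["stride"])  # 3x3 conv, pad 1: 28 -> 28
h = pool_out_size(h, pool_param["pool_height"], pool_param["stride"])                      # 2x2 pool, stride 2: 28 -> 14

# dense_param["H"]/["W"] should equal the final spatial size:
dense_param = {"num_filters": conv_param["num_filters"], "H": h, "W": h, "hidden_dim": 10}
```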
```
import sys
class Sequential():
def __init__(self, weight_scale = 1e-03, reg = 0.0, dtype = np.float32):
self.pipeline = {}
self.weight_bias = []
self.layers_name = set(["Relu", "Conv", "Dense", "Pooling"])
self.weight_scale = weight_scale
self.reg = reg
self.dtype = dtype
self.num_conv = 1
self.num_bias = 1
self.num_relu = 1
self.num_pool = 1
self.num_dense = 1
def add(self, layer, params = None):
assert layer in self.layers_name, "You have to choose layer in {}".format(self.layers_name)
if layer == "Conv":
assert params is not None, "Enter params"
self.pipeline["Conv_{}".format(self.num_conv)] = [self.weight_scale * np.random.randn(params["num_filters"], params["channel"], params["filter_size"],
params["filter_size"]),np.zeros(params["num_filters"]), params]
self.weight_bias.append("Conv_{}".format(self.num_conv))
self.weight_bias.append("Bias_{}".format(self.num_bias))
self.num_conv += 1
self.num_bias += 1
elif layer == "Relu":
self.pipeline["Relu_{}".format(self.num_relu)] = (None, None)
self.num_relu +=1
elif layer == "Dense":
assert params is not None, "Enter params"
self.pipeline["Dense_{}".format(self.num_dense)] = [self.weight_scale * np.random.randn(params["num_filters"] * params["H"] * params["W"] , params["hidden_dim"]),
np.zeros(params["hidden_dim"]), params]
self.weight_bias.append("Dense_{}".format(self.num_dense))
self.weight_bias.append("Bias_{}".format(self.num_bias))
self.num_dense += 1
self.num_bias += 1
elif layer == "Pooling":
assert params is not None, "Enter params"
self.pipeline["Pooling_{}".format(self.num_pool)] = (None, params)
self.num_pool += 1
def type_fix(self):
for layer, weight in self.pipeline.items():
if layer[:4] == "Pool" or layer[:4] == "Relu":
continue
self.pipeline[layer] = weight.astype(self.dtype)
def loss(self, x, y= None):
cache = None
#For backpropagation.
cache_list = []
weight_list = []
bias_list = []
for layer in self.pipeline.keys():
layer_name = layer[:4]
if layer_name == "Pool":
pool_param = self.pipeline[layer][1]
x, cache = max_pool_forward_naive(x, pool_param)
cache_list.append(cache)
elif layer_name == "Relu":
x, cache = relu_forward(x)
cache_list.append(cache)
elif layer_name == "Conv":
filter_size = self.pipeline[layer][0].shape[2]
conv_weight = self.pipeline[layer][0]
bias_weight= self.pipeline[layer][1]
weight_list.append(conv_weight)
bias_list.append(bias_weight)
conv_param = self.pipeline[layer][2]
x, cache = conv_forward_naive(x, conv_weight, bias_weight, conv_param)
cache_list.append(cache)
elif layer_name == "Dens":
dense_weight = self.pipeline[layer][0]
bias_weight = self.pipeline[layer][1]
x, cache = affine_forward(x, dense_weight, bias_weight)
weight_list.append(dense_weight)
bias_list.append(bias_weight)
cache_list.append(cache)
else:
assert 1 != 1, "Improper layer is in pipeline!!"
#backpropagation, x: score(probability)
b_weight_bias_list = []
weight_idx = 1
b_cache_idx = 1
b_layer_idx = 1
data_loss, dx = softmax(x, y)
while b_cache_idx <= len(cache_list):
layer_name = list(self.pipeline.keys())[-b_layer_idx][:4]
if layer_name == "Conv":
dx, dW, db = conv_backward_naive(dx, cache_list[-b_cache_idx])
b_cache_idx += 1
dW += self.reg * weight_list[-weight_idx]
weight_idx += 1
b_weight_bias_list.append(db)
b_weight_bias_list.append(dW)
elif layer_name == "Pool":
dx = max_pool_backward_naive(dx, cache_list[-b_cache_idx])
b_cache_idx += 1
elif layer_name == "Relu":
dx = relu_backward(dx, cache_list[-b_cache_idx])
b_cache_idx += 1
elif layer_name == "Dens":
dx, dW, db = affine_backward(dx, cache_list[-b_cache_idx])
b_cache_idx += 1
dW += self.reg * weight_list[-weight_idx]
weight_idx += 1
b_weight_bias_list.append(db)
b_weight_bias_list.append(dW)
b_layer_idx += 1
reg_loss = 0.5 * self.reg * sum(np.sum(W * W) for W in weight_list)
loss = data_loss + reg_loss
grads = {key: value for key,value in zip(self.weight_bias,b_weight_bias_list[::-1])}
return loss, grads
def compile_(self, optimizer, num_epochs = 10, **kwargs):
self.batch_size = kwargs.pop("batch_size", 256)
self.loss_history = []
self.update_rule = optimizer
self.optimizer_configs = {"learning_rate" : 1e-02}
self.num_epochs = num_epochs
self.checkpoint_name = kwargs.pop("checkpoint_name" , None)
self.verbose = kwargs.pop("verbose", True)
assert len(kwargs) <=0, "Unrecognized arguments {}".format(", ".join("{}".format(k) for k in list(kwargs.keys())))
for p in self.pipeline:
d = {k:v for k,v in self.optimizer_configs.items()}
self.optimizer_configs[p] = d
def fit(self, X_data, y_data,batch_size = 256,num_epochs = 10):
num_train = X_data.shape[0]
num_batches = num_train // batch_size
for epoch in range(num_epochs):
print("epoch : {}".format(epoch))
start = 0
end = batch_size
for i in range(num_batches):
X_batch = X_data[start:end]
y_batch = y_data[start:end]
num_bias = 1
loss, grads = self.loss(X_batch, y_batch)
#weight update
for layer,weights in self.pipeline.items():
if layer[:4] == "Pool" or layer[:4] == "Relu":
continue
dw = grads[layer]
config = self.optimizer_configs[layer]
#print("{} before update : {}".format(layer, weights[0]))
next_w , next_config = self.update_rule(weights[0], dw,config)
self.pipeline[layer][0] = next_w
#print("after update : {}".format(self.pipeline[layer][0]))
self.optimizer_configs[layer] = next_config
bias_layer = "Bias_{}".format(num_bias)
#print("{} before update : {}".format(bias_layer, weights[1]))
dw = grads[bias_layer]
bias_config = self.optimizer_configs.get(bias_layer, config)
next_w, next_config = self.update_rule(weights[1], dw, bias_config)
self.pipeline[layer][1] = next_w
#print("after update : {}".format(self.pipeline[layer][1]))
self.optimizer_configs[bias_layer] = next_config
num_bias +=1
start += batch_size
end += batch_size
def predict(self,X_data, y_data,batch_size = 256):
N = X_data.shape[0]
num_batches = N//batch_size
start = 0
end = batch_size
y_pred = []
for i in range(num_batches):
print(i)
x = X_data[start:end]
start += batch_size
end += batch_size
cache = None
for layer in self.pipeline.keys():
layer_name = layer[:4]
if layer_name == "Pool":
pool_param = self.pipeline[layer][1]
x, cache = max_pool_forward_naive(x, pool_param)
elif layer_name == "Relu":
x, cache = relu_forward(x)
elif layer_name == "Conv":
filter_size = self.pipeline[layer][0].shape[2]
conv_weight = self.pipeline[layer][0]
bias_weight= self.pipeline[layer][1]
conv_param = self.pipeline[layer][2]
x, cache = conv_forward_naive(x, conv_weight, bias_weight, conv_param)
elif layer_name == "Dens":
dense_weight = self.pipeline[layer][0]
bias_weight = self.pipeline[layer][1]
x, cache = affine_forward(x, dense_weight, bias_weight)
else:
raise ValueError("Unrecognized layer '{}' in pipeline".format(layer))
return predict_softmax(x)
model = Sequential()
model.add("Conv", params = {"pad" : 1, "stride" : 1,"channel" : 1, "num_filters" : 16, "filter_size" : 3})
model.add("Relu")
model.add("Pooling", params = {"pool_height" : 2, "pool_width" : 2, "stride": 2})
model.add("Conv", params = {"pad" : 1, "stride" : 1, "channel" : 16, "num_filters" : 32, "filter_size" : 3})
model.add("Relu")
model.add("Pooling", params = {"pool_height" : 2, "pool_width" : 2, "stride": 2})
model.add("Conv", params = {"pad" : 1, "stride" : 1, "channel" : 32, "num_filters" : 64, "filter_size" : 3})
model.add("Relu")
model.add("Dense", params = {"num_filters" : 64, "H": 7, "W": 7, "hidden_dim" : 10})
```
# Before training
```
model.predict(X_train[0:1],y_train[0:1], batch_size = 1)
model.compile_(optimizer = sgd)
model.fit(X_train, y_train)
```
# After training
```
model.predict(X_train[0:1], y_train[0:1],batch_size = 1)
```
<img src="Images/slide_1_clustering.png" width="700" height="700">
<img src="Images/slide_2_clustering.png" width="700" height="700">
## Text Vectorization
Question: What is text vectorization?
Answer: The process to transform text data to numerical vectors
## Options for Text Vectorization
- Count the occurrences of each word in each sentence (Bag-of-Words, BoW)
- Assign weights to each word in the sentences (e.g., TF-IDF)
- Map each word to a number (a dictionary with words as keys and numbers as values) and represent each sentence as the sequence of those numbers
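As a minimal sketch of the third option — mapping each word to an integer id and representing a sentence as the sequence of ids — in plain Python (the name `word_index_encode` is illustrative, not a library API):

```python
def word_index_encode(sentences):
    """Map each unique word to an integer id and represent each
    sentence as the sequence of those ids (a minimal sketch)."""
    vocab = {}
    encoded = []
    for sentence in sentences:
        sequence = []
        for word in sentence.lower().split():
            word = word.strip('.,?!')
            # assign the next free id the first time a word is seen
            sequence.append(vocab.setdefault(word, len(vocab)))
        encoded.append(sequence)
    return vocab, encoded

vocab, encoded = word_index_encode(['this is a test', 'is this a test'])
print(vocab)    # {'this': 0, 'is': 1, 'a': 2, 'test': 3}
print(encoded)  # [[0, 1, 2, 3], [1, 0, 2, 3]]
```

This integer-sequence representation is what sequence models typically consume, in contrast to the order-free count matrix that BoW produces.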
## Bag-of-Word Matrix
- BoW is a matrix whose rows are sentences and whose columns are the unique words of the whole document collection (corpus)
- We can write our own function that returns the BoW matrix
- Below, we will see how we can build BoW by calling sklearn methods
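Before switching to sklearn, here is what a hand-rolled BoW function might look like — a minimal sketch with an illustrative name (`bag_of_words`), not a library function:

```python
def bag_of_words(sentences):
    """Return (vocabulary, count matrix): one row per sentence,
    one column per unique word in the corpus (a minimal sketch)."""
    def tokenize(sentence):
        return [w.strip('.,?!').lower() for w in sentence.split()]
    vocab = sorted({w for s in sentences for w in tokenize(s)})
    matrix = [[tokenize(s).count(w) for w in vocab] for s in sentences]
    return vocab, matrix

vocab, bow = bag_of_words(['This is the first sentence.',
                           'Is this the first sentence?'])
print(vocab)  # ['first', 'is', 'sentence', 'the', 'this']
print(bow)    # [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]
```

`CountVectorizer` below does the same thing (plus smarter tokenization) and returns a sparse matrix instead of nested lists.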
```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
documents = ['This is the first sentence.',
'This one is the second sentence.',
'And this is the third one.',
'Is this the first sentence?']
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(documents)
# X.toarray() is the BoW matrix
print(X.toarray())
```
## How to get unique words?
```
# Get the unique words
print(vectorizer.get_feature_names())
```
## Clustering
- Clustering is an unsupervised learning method
- This is very often used **because we usually don’t have labeled data**
- K-Means clustering is one of the most popular clustering algorithms
- The goal of any clustering algorithm is to find groups (clusters) in the given data
## Examples of Clustering
- Clustering a movie dataset -> we expect movies whose genres are similar to be clustered in the same group
- News article clustering -> we want news related to science to be in one group and news related to sports to be in another group
## Demo of K-means
```
from figures import plot_kmeans_interactive
plot_kmeans_interactive()
```
## K-means algorithm:
Assume the inputs are $s_1$, $s_2$, ..., $s_n$. Choose $K$ arbitrarily.
Step 1 - Pick $K$ random points as cluster centers (called centroids)
Step 2 - Assign each $s_i$ to nearest cluster by calculating its distance to each centroid
Step 3 - Find new cluster center by taking the average of the assigned points
Step 4 - Repeat Step 2 and 3 until none of the cluster assignments change
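The four steps above can be sketched directly in NumPy; this is a toy implementation for intuition, not a replacement for `sklearn.cluster.KMeans`:

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Toy K-means: X is an (n_samples, n_features) array."""
    rng = np.random.default_rng(seed)
    # Step 1: pick k distinct data points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Step 2: assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: move each centroid to the mean of its assigned points
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 4: stop once the assignments (and hence centroids) stop changing
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels, centroids = kmeans(X, 2)
print(labels)  # the two left points share one label, the two right points the other
```

Real implementations add smarter initialization (k-means++) and handle empty clusters, which this sketch does not.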
## Let's generate a sample dataset
```
from sklearn.datasets import make_blobs  # the samples_generator module was removed in newer scikit-learn
import matplotlib.pyplot as plt
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1])
```
### How to choose correct number of cluster (K)?
Choose arbitrary K
1- Compute all of the distances of red points to red centroid
2- Do step (1) for other colors (purple, blue, ...)
3- Add them up
```
import numpy as np
from scipy.spatial import distance
distortions = []
K = range(1, 10)
for k in K:
km = KMeans(n_clusters=k)
km.fit(X)
distortions.append(sum(np.min(distance.cdist(X, km.cluster_centers_, 'euclidean'), axis=1)) / X.shape[0])
# Plot the elbow
plt.plot(K, distortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('The Elbow Method showing the optimal k')
plt.show()
```
## Another implementation for obtaining the appropriate number of cluster
```
sum_of_squared_distances = []
K = range(1,15)
for k in K:
km = KMeans(n_clusters=k)
km.fit(X)
sum_of_squared_distances.append(km.inertia_)
# Plot the elbow
plt.plot(K, sum_of_squared_distances, 'bx-')
plt.xlabel('k')
plt.ylabel('Sum of squared distances')
plt.title('The Elbow Method showing the optimal k')
plt.show()
```
## Combine Text Vectorization and Clustering the Texts
- Given a set of documents, we want to cluster the sentences
- To do this, we need two steps:
- Vectorize the sentences (texts)
- Apply K-means to cluster the vectorized sentences (texts)
```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
documents = ["This little kitty came to play when I was eating at a restaurant.",
"Merley has the best squooshy kitten belly.",
"Google Translate app is incredible.",
"If you open 100 tab in google you get a smiley face.",
"Best cat photo I've ever taken.",
"Climbing ninja cat.",
"Impressed with google map feedback.",
"Key promoter extension for Google Chrome."]
vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(documents)
print(vectorizer.get_feature_names())
print(X.shape)
true_k = 2
model = KMeans(n_clusters=true_k, init='k-means++')
model.fit(X)
# print('M:')
# print(model.cluster_centers_.argsort())
# print(model.cluster_centers_.argsort()[:, ::-1])
# print("Top terms per cluster:")
# order_centroids = model.cluster_centers_.argsort()[:, ::-1]
# terms = vectorizer.get_feature_names()
# for i in range(true_k):
# print("Cluster %d:" % i),
# for ind in order_centroids[i, :10]:
# print(' %s' % terms[ind]),
# print("\n")
# print("Prediction")
Y = vectorizer.transform(["chrome browser to open."])
print('Y:')
print(Y.toarray())
prediction = model.predict(Y)
print(prediction)
Y = vectorizer.transform(["My cat is hungry."])
prediction = model.predict(Y)
print(prediction)
```
## Other clustering methods and comparison:
http://scikit-learn.org/stable/modules/clustering.html
## Resources:
- https://www.youtube.com/watch?v=FrmrHyOSyhE
- https://jakevdp.github.io/PythonDataScienceHandbook/05.11-k-means.html
## Summary
- In order to work with text, we should transform it into a vector of numbers
- We learned three methods for text vectorization
- Clustering, as an unsupervised learning algorithm, finds groups based on geometric proximity
# Derive Extended Interval Algebra
<b>NOTE:</b> From a derivation point-of-view, what distinguishes this algebra from Allen's algebra is the definition of <b>less than</b> used to define intervals. In particular, this derivation uses '=|<' rather than '<', which allows intervals to be degenerate (i.e., equal to a point). See the section below titled <i>"Derive the Extended Interval Algebra as a Dictionary"</i>.
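As a minimal plain-Python illustration of the distinction (independent of the qualreas package; the function names here are hypothetical):

```python
def is_proper_interval(start, end):
    # Allen's algebra uses '<': the start must be strictly before the end
    return start < end

def is_extended_interval(start, end):
    # the extended algebra uses '=|<': start may equal end,
    # so degenerate intervals (points) are allowed
    return start <= end

print(is_proper_interval(2, 2))    # False: a point is not a proper interval
print(is_extended_interval(2, 2))  # True: a point is a (degenerate) interval
```

Everything else in the derivation machinery stays the same; only this one relation changes, which is why the algebra can be re-derived from the point algebra below.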
## References
1. ["Maintaining Knowledge about Temporal Intervals" by J.F. Allen](https://cse.unl.edu/~choueiry/Documents/Allen-CACM1983.pdf) - Allen's original paper
1. [Allen's Interval Algebra](https://www.ics.uci.edu/~alspaugh/cls/shr/allen.html) or [here](https://thomasalspaugh.org/pub/fnd/allen.html) - summarizes Allen's algebra of proper time intervals
1. ["Intervals, Points, and Branching Time" by A.J. Reich](https://www.researchgate.net/publication/220810644_Intervals_Points_and_Branching_Time) - basis for the extensions here to Allen's algebra
1. [W3C Time Ontology in OWL](https://www.w3.org/TR/owl-time/) - temporal vocabulary used here is based on the W3C vocabulary of time
1. [bitsets Python package](https://bitsets.readthedocs.io/en/stable/) - used to implement Algebra relation sets and operations
1. [NetworkX Python package](http://networkx.github.io/) - used to represent directed graph of constraints
1. [Python format string syntax](https://docs.python.org/3/library/string.html#format-string-syntax) - used in Algebra summary method
1. [Spatial Ontology](https://www.w3.org/2017/sdwig/bp/) - I'm still looking for a standard spatial vocabulary; maybe start here
1. [Qualitative Spatial Relations (QSR) Library](https://qsrlib.readthedocs.io/en/latest/index.html) - an alternative library to the one defined here
## Dependencies
```
import os
import qualreas as qr
import numpy as np
import sys
sys.setrecursionlimit(10000)
path = os.path.join(os.getenv('PYPROJ'), 'qualreas')
```
## Deriving Extended Interval Algebra from Extended Point Algebra
### Extended Point Algebra
```
pt_alg = qr.Algebra(os.path.join(path, "Algebras/Linear_Point_Algebra.json"))
pt_alg.summary()
qr.print_point_algebra_composition_table(pt_alg)
```
### Derive the Extended Interval Algebra as a Dictionary
The definition of <b>less than</b>, below, either restricts intervals to be proper ('<') or allows intervals to be degenerate ('=|<') (i.e., integrates points and intervals).
```
less_than_rel = '=|<'
#less_than_rel = '<'
ext_alg_name="Derived_Extended_Interval_Algebra"
ext_alg_desc="Extended linear interval algebra derived from point relations"
verbose = True
%time test_ext_alg_dict = qr.derive_algebra(pt_alg, less_than_rel, name=ext_alg_name, description=ext_alg_desc, verbose=verbose)
test_ext_alg_dict
```
### Save Algebra Dictionary to JSON File
```
test_ext_json_path = os.path.join(path, "Algebras/test_derived_extended_interval_algebra.json")
test_ext_json_path
qr.algebra_to_json_file(test_ext_alg_dict, test_ext_json_path)
```
### Instantiate an Algebra Object from JSON File
```
test_ext_alg = qr.Algebra(test_ext_json_path)
test_ext_alg
test_ext_alg.summary()
test_ext_alg.check_composition_identity()
test_ext_alg.is_associative()
```
### Load Original Extended Interval Algebra
```
ext_alg = qr.Algebra(os.path.join(path, "Algebras/Extended_Linear_Interval_Algebra.json"))
ext_alg
ext_alg.summary()
```
### Compare Derived Extended Interval Algebra with Original
```
print(f"Same as original algebra? {ext_alg.equivalent_algebra(test_ext_alg)}")
```
# Chapter 1: Warm-up
## 00. Reversed string
***
Obtain the string formed by arranging the characters of the string "stressed" in reverse order (from the end to the beginning).
```
str = 'stressed'
ans = str[::-1]
print(ans)
```
## 01. 「パタトクカシーー」
***
Extract the 1st, 3rd, 5th, and 7th characters of the string 「パタトクカシーー」 and obtain the concatenated string.
```
str = 'パタトクカシーー'
ans = str[::2]
print(ans)
```
## 02. 「パトカー」+「タクシー」=「パタトクカシーー」
***
Obtain the string 「パタトクカシーー」 by alternately concatenating the characters of 「パトカー」 and 「タクシー」 from the beginning.
```
str1 = 'パトカー'
str2 = 'タクシー'
ans = ''.join([i + j for i, j in zip(str1, str2)])
print(ans)
```
## 03. Pi
***
Split the sentence "Now I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics." into words, and create a list of the number of (alphabetic) characters in each word, in order of appearance.
```
import re
str = 'Now I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics.'
str = re.sub('[,\.]', '', str) # remove commas and periods
splits = str.split() # split on whitespace to build a list of words
ans = [len(i) for i in splits]
print(ans)
```
## 04. Element symbols
***
Split the sentence "Hi He Lied Because Boron Could Not Oxidize Fluorine. New Nations Might Also Sign Peace Security Clause. Arthur King Can." into words. Take the first character of the 1st, 5th, 6th, 7th, 8th, 9th, 15th, 16th, and 19th words, and the first two characters of every other word, and create an associative array (dictionary or map) from the extracted strings to the positions of their words (counting from the beginning of the sentence).
```
str = 'Hi He Lied Because Boron Could Not Oxidize Fluorine. New Nations Might Also Sign Peace Security Clause. Arthur King Can.'
splits = str.split()
one_ch = [1, 5, 6, 7, 8, 9, 15, 16, 19] # positions of the words from which to take one character
ans = {}
for i, word in enumerate(splits):
if i + 1 in one_ch:
ans[word[:1]] = i + 1 # take one character if the position is in the list
else:
ans[word[:2]] = i + 1 # otherwise take two characters
print(ans)
```
## 05. n-gram
***
Write a function that creates n-grams from a given sequence (a string, a list, etc.). Using this function, obtain the word bi-grams and the character bi-grams of the sentence "I am an NLPer".
```
def ngram(n, lst):
return list(zip(*[lst[i:] for i in range(n)]))
str = 'I am an NLPer'
words_bi_gram = ngram(2, str.split())
chars_bi_gram = ngram(2, str)
print('word bi-grams:', words_bi_gram)
print('character bi-grams:', chars_bi_gram)
str = 'I am an NLPer'
[str[i:] for i in range(2)]
```
## 06. Sets
***
Obtain the sets of character bi-grams contained in "paraparaparadise" and "paragraph" as X and Y, respectively, and find the union, intersection, and difference of X and Y. In addition, check whether the bi-gram 'se' is contained in X and in Y.
```
str1 = 'paraparaparadise'
str2 = 'paragraph'
X = set(ngram(2, str1))
Y = set(ngram(2, str2))
union = X | Y
intersection = X & Y
difference = X - Y
print('X:', X)
print('Y:', Y)
print('Union:', union)
print('Intersection:', intersection)
print('Difference:', difference)
print("Does X contain 'se':", {('s', 'e')} <= X)
print("Does Y contain 'se':", {('s', 'e')} <= Y)
```
## 07. Sentence generation from a template
***
Implement a function that takes arguments x, y, z and returns the string 「x時のyはz」 ("the y at x o'clock is z"). Then check the result with x=12, y="気温" (temperature), z=22.4.
```
def generate_sentence(x, y, z):
return f'{x}時の{y}は{z}'
print(generate_sentence(12, '気温', 22.4))
```
## 08. Cipher text
***
Implement a function cipher that converts each character of a given string according to the following specification:
Replace each lowercase letter with the character whose code is (219 - its character code)
Output every other character unchanged
Use this function to encrypt and then decrypt an English message.
```
def cipher(str):
rep = [chr(219 - ord(x)) if x.islower() else x for x in str]
return ''.join(rep)
message = 'the quick brown fox jumps over the lazy dog'
message = cipher(message)
print('Encrypted:', message)
message = cipher(message)
print('Decrypted:', message)
```
## 09. Typoglycemia
***
Write a program that, for a sequence of words separated by spaces, keeps the first and last character of each word and randomly shuffles the order of the remaining characters. However, words of length 4 or less are left as they are. Give it an appropriate English sentence (for example, "I couldn't believe that I could actually understand what I was reading : the phenomenal power of the human mind .") and check the result.
```
import random
def shuffle(words):
result = []
for word in words.split():
if len(word) > 4: # shuffle only when the word is longer than 4 characters
word = word[:1] + ''.join(random.sample(word[1:-1], len(word) - 2)) + word[-1:]
result.append(word)
return ' '.join(result)
words = "I couldn't believe that I could actually understand what I was reading : the phenomenal power of the human mind ."
ans = shuffle(words)
print(ans)
```
## Facial Filters
Using your trained facial keypoint detector, you can now do things like add filters to a person's face, automatically. In this optional notebook, you can play around with adding sunglasses to detected faces in an image by using the keypoints detected around a person's eyes. Check out the `images/` directory to see what other .png's have been provided for you to try, too!
<img src="images/face_filter_ex.png" width=60% height=60%/>
Let's start this process by looking at a sunglasses .png that we'll be working with!
```
# import necessary resources
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
import cv2
# load in sunglasses image with cv2 and IMREAD_UNCHANGED
sunglasses = cv2.imread('images/sunglasses.png', cv2.IMREAD_UNCHANGED)
# plot our image
plt.imshow(sunglasses)
# print out its dimensions
print('Image shape: ', sunglasses.shape)
```
## The 4th dimension
You'll note that this image actually has *4 color channels*, not just 3 as your average RGB image does. This is due to the flag we set, `cv2.IMREAD_UNCHANGED`, which tells OpenCV to read the image in as-is, including its alpha channel.
#### Alpha channel
It has the usual red, blue, and green channels that any color image has, and the 4th channel represents the **transparency level of each pixel** in the image; this is often called the **alpha** channel. Here's how the transparency channel works: the lower the value, the more transparent, or see-through, the pixel becomes. The lower bound (completely transparent) is zero, so any pixels set to 0 will not be seen; these look like white background pixels in the image above, but they are actually totally transparent.
This transparency channel allows us to place this rectangular image of sunglasses on an image of a face and still see the face area that is technically covered by the transparent background of the sunglasses image!
Let's check out the alpha channel of our sunglasses image in the next Python cell. Because many of the pixels in the background of the image have an alpha value of 0, we'll need to explicitly print out non-zero values if we want to see them.
```
# print out the sunglasses transparency (alpha) channel
alpha_channel = sunglasses[:,:,3]
print ('The alpha channel looks like this (black pixels = transparent): ')
plt.imshow(alpha_channel, cmap='gray')
# just to double check that there are indeed non-zero values
# let's find and print out every value greater than zero
values = np.where(alpha_channel != 0)
print ('The non-zero values of the alpha channel are: ')
print (values)
```
#### Overlaying images
This means that when we place this sunglasses image on top of another image, we can use the transparency channel as a filter:
* If the pixels are non-transparent (alpha_channel > 0), overlay them on the new image
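That filtering step can be sketched as a small helper; `overlay_rgba` is a hypothetical name, and the notebook cells below perform the same operation inline:

```python
import numpy as np

def overlay_rgba(background, foreground_rgba, x, y):
    """Paste an RGBA image onto an RGB background at top-left (x, y),
    copying only the pixels whose alpha channel is non-zero."""
    h, w = foreground_rgba.shape[:2]
    roi = background[y:y + h, x:x + w]            # region of interest (a view)
    mask = foreground_rgba[:, :, 3] > 0           # non-transparent pixels only
    roi[mask] = foreground_rgba[:, :, :3][mask]   # overwrite RGB where opaque
    return background

# tiny demo: one opaque red pixel lands on an all-black background
bg = np.zeros((4, 4, 3), dtype=np.uint8)
fg = np.zeros((2, 2, 4), dtype=np.uint8)
fg[0, 0] = [255, 0, 0, 255]
out = overlay_rgba(bg, fg, 1, 1)
print(out[1, 1])  # only the opaque pixel was copied into the background
```

Because `roi` is a view into `background`, writing through the boolean mask modifies the background in place — the same trick the sunglasses cell uses with `np.argwhere`.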
#### Keypoint locations
In doing this, it's helpful to understand which keypoint belongs to the eyes, mouth, etc., so in the image below we also print the index of each facial keypoint directly on the image so you can tell which keypoints are for the eyes, eyebrows, etc.,
<img src="images/landmarks_numbered.jpg" width=50% height=50%/>
It may be useful to use keypoints that correspond to the edges of the face to define the width of the sunglasses, and the locations of the eyes to define the placement.
Next, we'll load in an example image. Below, you've been given an image and set of keypoints from the provided training set of data, but you can use your own CNN model to generate keypoints for *any* image of a face (as in Notebook 3) and go through the same overlay process!
```
# load in training data
key_pts_frame = pd.read_csv('data/training_frames_keypoints.csv')
# print out some stats about the data
print('Number of images: ', key_pts_frame.shape[0])
# helper function to display keypoints
def show_keypoints(image, key_pts):
"""Show image with keypoints"""
plt.imshow(image)
plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='.', c='m')
# a selected image
n = 120
image_name = key_pts_frame.iloc[n, 0]
image = mpimg.imread(os.path.join('data/training/', image_name))
key_pts = key_pts_frame.iloc[n, 1:].to_numpy() # .as_matrix() was removed in newer pandas versions
key_pts = key_pts.astype('float').reshape(-1, 2)
print('Image name: ', image_name)
plt.figure(figsize=(5, 5))
show_keypoints(image, key_pts)
plt.show()
```
Next, you'll see an example of placing sunglasses on the person in the loaded image.
Note that the keypoints are numbered off-by-one in the numbered image above, and so `key_pts[0,:]` corresponds to the first point (1) in the labelled image.
```
# Display sunglasses on top of the image in the appropriate place
# copy of the face image for overlay
image_copy = np.copy(image)
# top-left location for sunglasses to go
# 17 = edge of left eyebrow
x = int(key_pts[17, 0])
y = int(key_pts[17, 1])
# height and width of sunglasses
# h = length of nose
h = int(abs(key_pts[27,1] - key_pts[34,1]))
# w = left to right eyebrow edges
w = int(abs(key_pts[17,0] - key_pts[26,0]))
# read in sunglasses
sunglasses = cv2.imread('images/sunglasses.png', cv2.IMREAD_UNCHANGED)
# resize sunglasses
new_sunglasses = cv2.resize(sunglasses, (w, h), interpolation = cv2.INTER_CUBIC)
# get region of interest on the face to change
roi_color = image_copy[y:y+h,x:x+w]
# find all non-transparent pts
ind = np.argwhere(new_sunglasses[:,:,3] > 0)
# for each non-transparent point, replace the original image pixel with that of the new_sunglasses
for i in range(3):
roi_color[ind[:,0],ind[:,1],i] = new_sunglasses[ind[:,0],ind[:,1],i]
# set the area of the image to the changed region with sunglasses
image_copy[y:y+h,x:x+w] = roi_color
# display the result!
plt.imshow(image_copy)
```
#### Further steps
Look in the `images/` directory to see other available .png's for overlay! Also, you may notice that the overlay of the sunglasses is not entirely perfect; you're encouraged to play around with the scale of the width and height of the glasses and investigate how to perform [image rotation](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html) in OpenCV so as to match an overlay with any facial pose.
```
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
import os
import glob
import sys
sys.path.insert(0, '../scripts/')
from football_field import create_football_field
from plots import plot_play
import math
%matplotlib inline
pd.options.display.max_columns = 100
%load_ext autoreload
%autoreload 2
df = pd.read_parquet('../working/max_risk_partners.parquet')
vr = pd.read_csv('../working/video_review-detailed.csv')
pi = pd.read_parquet('../working/play_information_detailed.parquet')
pprd = pd.read_parquet('../working/punt_play_role_data_pivoted.parquet')
injury_risk_factor = pd.merge(vr,
df,
left_on=['Season_Year','GameKey','PlayID','GSISID','Primary_Partner_GSISID'],
right_on=['season_year','gamekey','playid','gsisid','gsisid_partner'],
how='left')
risk_metrics_by_rolepair = df.groupby(['role','role_partner'])['risk_factor'].agg([np.mean, np.std, 'count']).reset_index()
risk = pd.merge(df, risk_metrics_by_rolepair, suffixes=('','_risk_factor_pair'))
high_high_risk = risk.loc[(risk['count'] > 100) &
((risk['mean'] + risk['std']) < risk['risk_factor']) &
(risk['risk_factor'] > 1000)].copy()
high_high_risk.loc[~high_high_risk[['season_year', 'gamekey', 'playid']].duplicated()].shape
high_high_risk['count'] = 1
play_risk_values = high_high_risk.groupby(['season_year', 'gamekey', 'playid']).agg('count')[['count']].reset_index().sort_values('count', ascending=False)
play_with_risk = pd.merge(pi, play_risk_values, left_on=['season_year','gamekey','playid'],
right_on=['season_year','gamekey','playid'], how='left').fillna(0).sort_values('count', ascending=False)
play_with_risk = play_with_risk.rename(columns={'count':'risk_factor'})
play_with_risk[['season_year','gamekey','playid']].dtypes
play_with_risk = pd.merge(play_with_risk, pprd, left_on=['season_year','gamekey','playid'],
right_on=['Season_Year', 'GameKey', 'PlayID'])
play_with_risk[play_with_risk['out of bounds']].shape
sns.scatterplot(x='return_yards', y='risk_factor', data=play_with_risk, hue='out of bounds')
play_with_risk.sort_values('risk_factor', ascending=False).to_csv('../working/play_with_risk_positions.csv')
play_with_risk['punt_yards_str'] \
= play_with_risk['playdescription'].str.extract('(punts .* yards to)', expand=True).fillna(False)
play_with_risk['punt_yards_str'].head()
play_with_risk['punt_yards_str_short'] = play_with_risk['punt_yards_str'] \
.str.replace('punts ','') \
.str.replace('yards','') \
.str.replace('to','') \
.str.replace('No yards', '0')
play_with_risk.loc[play_with_risk['punt_yards_str'] == False, 'punt_yards_str_short'] = 'No yards'
# manual corrections for plays whose descriptions the pattern above does not parse
play_with_risk.loc[5149, 'punt_yards_str_short'] = '64'
play_with_risk.loc[10, 'punt_yards_str_short'] = '40'
play_with_risk.loc[1103, 'punt_yards_str_short'] = '47'
play_with_risk.loc[2918, 'punt_yards_str_short'] = '64'
play_with_risk.loc[5627, 'punt_yards_str_short'] = '62'
play_with_risk.loc[83, 'punt_yards_str_short'] = '63'
play_with_risk.loc[6435, 'punt_yards_str_short'] = '56'
play_with_risk.loc[3650, 'punt_yards_str_short'] = '50'
play_with_risk.loc[4671, 'punt_yards_str_short'] = '44'
play_with_risk.loc[665, 'punt_yards_str_short'] = '57'
play_with_risk.loc[5223, 'punt_yards_str_short'] = '35'
play_with_risk.loc[459, 'punt_yards_str_short'] = '46'
play_with_risk.loc[2480, 'punt_yards_str_short'] = '36'
play_with_risk.loc[4542, 'punt_yards_str_short'] = '22'
play_with_risk.loc[1057, 'punt_yards_str_short'] = '58'
play_with_risk['punt yards'] = play_with_risk['punt_yards_str_short'].replace('No yards', 0).astype('int')
sns.scatterplot(x='punt yards', y='risk_factor', data=play_with_risk, hue='PENALTY')
vr['injury_play'] = True
play_with_risk = pd.merge(play_with_risk, vr, how='left').fillna(False)
fig, ax = plt.subplots(figsize=(15, 5))
sns.scatterplot(x='punt yards',
y='risk_factor',
data=play_with_risk.sort_values('injury_play'),
hue='injury_play', #linewidth=0,
alpha=0.5,
style='PENALTY', ax=ax)
```
```
%%html
<link href="http://mathbook.pugetsound.edu/beta/mathbook-content.css" rel="stylesheet" type="text/css" />
<link href="https://aimath.org/mathbook/mathbook-add-on.css" rel="stylesheet" type="text/css" />
<style>.subtitle {font-size:medium; display:block}</style>
<link href="https://fonts.googleapis.com/css?family=Open+Sans:400,400italic,600,600italic" rel="stylesheet" type="text/css" />
<link href="https://fonts.googleapis.com/css?family=Inconsolata:400,700&subset=latin,latin-ext" rel="stylesheet" type="text/css" /><!-- Hide this cell. -->
<script>
var cell = $(".container .cell").eq(0), ia = cell.find(".input_area")
if (cell.find(".toggle-button").length == 0) {
ia.after(
$('<button class="toggle-button">Toggle hidden code</button>').click(
function (){ ia.toggle() }
)
)
ia.hide()
}
</script>
```
**Important:** to view this notebook properly you will need to execute the cell above, which assumes you have an Internet connection. It should already be selected, or place your cursor anywhere above to select. Then press the "Run" button in the menu bar above (the right-pointing arrowhead), or press Shift-Enter on your keyboard.
$\newcommand{\identity}{\mathrm{id}}
\newcommand{\notdivide}{\nmid}
\newcommand{\notsubset}{\not\subset}
\newcommand{\lcm}{\operatorname{lcm}}
\newcommand{\gf}{\operatorname{GF}}
\newcommand{\inn}{\operatorname{Inn}}
\newcommand{\aut}{\operatorname{Aut}}
\newcommand{\Hom}{\operatorname{Hom}}
\newcommand{\cis}{\operatorname{cis}}
\newcommand{\chr}{\operatorname{char}}
\newcommand{\Null}{\operatorname{Null}}
\newcommand{\lt}{<}
\newcommand{\gt}{>}
\newcommand{\amp}{&}
$
<div class="mathbook-content"><h2 class="heading hide-type" alt="Section 14.1 Groups Acting on Sets"><span class="type">Section</span><span class="codenumber">14.1</span><span class="title">Groups Acting on Sets</span></h2><a href="section-groups-acting-on-sets.ipynb" class="permalink">¶</a></div>
<div class="mathbook-content"><p id="p-2029">Let $X$ be a set and $G$ be a group. A <dfn class="terminology">(left) action</dfn> of $G$ on $X$ is a map $G \times X \rightarrow X$ given by $(g,x) \mapsto gx\text{,}$ where </p><ol class="decimal"><li id="li-465"><p id="p-2030">$ex = x$ for all $x \in X\text{;}$</p></li><li id="li-466"><p id="p-2031">$(g_1 g_2)x = g_1(g_2 x)$ for all $x \in X$ and all $g_1, g_2 \in G\text{.}$</p></li></ol><p> Under these considerations $X$ is called a <dfn class="terminology">$G$-set</dfn>. Notice that we are not requiring $X$ to be related to $G$ in any way. It is true that every group $G$ acts on every set $X$ by the trivial action $(g,x) \mapsto x\text{;}$ however, group actions are more interesting if the set $X$ is somehow related to the group $G\text{.}$</p></div>
<div class="mathbook-content"><article class="example-like" id="example-actions-gl2"><h6 class="heading"><span class="type">Example</span><span class="codenumber">14.1</span></h6><p id="p-2032">Let $G = GL_2( {\mathbb R} )$ and $X = {\mathbb R}^2\text{.}$ Then $G$ acts on $X$ by left multiplication. If $v \in {\mathbb R}^2$ and $I$ is the identity matrix, then $Iv = v\text{.}$ If $A$ and $B$ are $2 \times 2$ invertible matrices, then $(AB)v = A(Bv)$ since matrix multiplication is associative.</p></article></div>
<div class="mathbook-content"><article class="example-like" id="example-actions-d4"><h6 class="heading"><span class="type">Example</span><span class="codenumber">14.2</span></h6><p id="p-2033">Let $G = D_4$ be the symmetry group of a square. If $X = \{ 1, 2, 3, 4 \}$ is the set of vertices of the square, then we can consider $D_4$ to consist of the following permutations:</p><div class="displaymath">
\begin{equation*}
\{ (1), (13), (24), (1432), (1234), (12)(34), (14)(23), (13)(24) \}.
\end{equation*}
</div><p>The elements of $D_4$ act on $X$ as functions. The permutation $(13)(24)$ acts on vertex 1 by sending it to vertex 3, on vertex 2 by sending it to vertex 4, and so on. It is easy to see that the axioms of a group action are satisfied.</p></article></div>
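As a quick sanity check of the second group-action axiom on this example, one can represent each permutation as a vertex-to-vertex dictionary (a sketch; the dictionary encoding is just an illustration):

```python
# Each element of D4 acts on the vertex set {1, 2, 3, 4};
# represent two of them as vertex -> vertex dictionaries.
r = {1: 4, 2: 1, 3: 2, 4: 3}  # the rotation (1432)
s = {1: 3, 2: 4, 3: 1, 4: 2}  # the 180-degree rotation (13)(24)

def compose(g, h):
    """(g h)(x) = g(h(x)): apply h first, then g."""
    return {x: g[h[x]] for x in h}

gh = compose(r, s)
for x in (1, 2, 3, 4):
    # second group-action axiom: (g1 g2) x = g1 (g2 x)
    assert gh[x] == r[s[x]]
print(gh)  # {1: 2, 2: 3, 3: 4, 4: 1}, i.e. the rotation (1234)
```

The axiom holds by construction here because function composition is associative, which is exactly why a subgroup of $S_X$ always acts on $X$.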
<div class="mathbook-content"><p id="p-2034">In general, if $X$ is any set and $G$ is a subgroup of $S_X\text{,}$ the group of all permutations acting on $X\text{,}$ then $X$ is a $G$-set under the group action</p><div class="displaymath">
\begin{equation*}
(\sigma, x) \mapsto \sigma(x)
\end{equation*}
</div><p>for $\sigma \in G$ and $x \in X\text{.}$</p></div>
<div class="mathbook-content"><article class="example-like" id="example-actions-left-mult"><h6 class="heading"><span class="type">Example</span><span class="codenumber">14.3</span></h6><p id="p-2035">If we let $X = G\text{,}$ then every group $G$ acts on itself by the left regular representation; that is, $(g,x) \mapsto \lambda_g(x) = gx\text{,}$ where $\lambda_g$ is left multiplication:</p><div class="displaymath">
\begin{gather*}
e \cdot x = \lambda_e x = ex = x\\
(gh) \cdot x = \lambda_{gh}x = \lambda_g \lambda_h x = \lambda_g(hx) = g \cdot ( h \cdot x).
\end{gather*}
</div><p>If $H$ is a subgroup of $G\text{,}$ then $G$ is an $H$-set under left multiplication by elements of $H\text{.}$</p></article></div>
<div class="mathbook-content"><article class="example-like" id="example-actions-conjugation"><h6 class="heading"><span class="type">Example</span><span class="codenumber">14.4</span></h6><p id="p-2036">Let $G$ be a group and suppose that $X=G\text{.}$ If $H$ is a subgroup of $G\text{,}$ then $G$ is an $H$-set under <dfn class="terminology">conjugation</dfn>; that is, we can define an action of $H$ on $G\text{,}$</p><div class="displaymath">
\begin{equation*}
H \times G \rightarrow G,
\end{equation*}
</div><p>via</p><div class="displaymath">
\begin{equation*}
(h,g) \mapsto hgh^{-1}
\end{equation*}
</div><p>for $h \in H$ and $g \in G\text{.}$ Clearly, the first axiom for a group action holds. Observing that</p><div class="displaymath">
\begin{align*}
(h_1 h_2, g) & = h_1 h_2 g (h_1 h_2 )^{-1}\\
& = h_1( h_2 g h_2^{-1}) h_1^{-1}\\
& = (h_1, (h_2, g) ),
\end{align*}
</div><p>we see that the second condition is also satisfied.</p></article></div>
<div class="mathbook-content"><article class="example-like" id="example-actions-left-cosets"><h6 class="heading"><span class="type">Example</span><span class="codenumber">14.5</span></h6><p id="p-2037">Let $H$ be a subgroup of $G$ and ${\mathcal L}_H$ the set of left cosets of $H\text{.}$ The set ${\mathcal L}_H$ is a $G$-set under the action</p><div class="displaymath">
\begin{equation*}
(g, xH) \mapsto gxH.
\end{equation*}
</div><p>Again, it is easy to see that the first axiom is true. Since $(g g')xH = g( g'x H)\text{,}$ the second axiom is also true.</p></article></div>
<div class="mathbook-content"><p id="p-2038">If $G$ acts on a set $X$ and $x, y \in X\text{,}$ then $x$ is said to be <dfn class="terminology">$G$-equivalent</dfn> to $y$ if there exists a $g \in G$ such that $gx =y\text{.}$ We write $x \sim_G y$ or $x \sim y$ if two elements are $G$-equivalent.</p></div>
<div class="mathbook-content"><article class="theorem-like" id="proposition-28"><h6 class="heading"><span class="type">Proposition</span><span class="codenumber">14.6</span></h6><p id="p-2039">Let $X$ be a $G$-set. Then $G$-equivalence is an equivalence relation on $X\text{.}$</p></article><article class="proof" id="proof-86"><h6 class="heading"><span class="type">Proof</span></h6><p id="p-2040">The relation $\sim$ is reflexive since $ex = x\text{.}$ Suppose that $x \sim y$ for $x, y \in X\text{.}$ Then there exists a $g$ such that $gx = y\text{.}$ In this case $g^{-1}y=x\text{;}$ hence, $y \sim x\text{.}$ To show that the relation is transitive, suppose that $x \sim y$ and $y \sim z\text{.}$ Then there must exist group elements $g$ and $h$ such that $gx = y$ and $hy= z\text{.}$ So $z = hy = (hg)x\text{,}$ and $x$ is equivalent to $z\text{.}$</p></article></div>
<div class="mathbook-content"><p id="p-2041">If $X$ is a $G$-set, then each partition of $X$ associated with $G$-equivalence is called an <dfn class="terminology">orbit</dfn> of $X$ under $G\text{.}$ We will denote the orbit that contains an element $x$ of $X$ by ${\mathcal O}_x\text{.}$ </p></div>
<div class="mathbook-content"><article class="example-like" id="example-actions-permute"><h6 class="heading"><span class="type">Example</span><span class="codenumber">14.7</span></h6><p id="p-2042">Let $G$ be the permutation group defined by</p><div class="displaymath">
\begin{equation*}
G =\{(1), (1 2 3), (1 3 2), (4 5), (1 2 3)(4 5), (1 3 2)(4 5) \}
\end{equation*}
</div><p>and $X = \{ 1, 2, 3, 4, 5\}\text{.}$ Then $X$ is a $G$-set. The orbits are ${\mathcal O}_1 = {\mathcal O}_2 = {\mathcal O}_3 =\{1, 2, 3\}$ and ${\mathcal O}_4 = {\mathcal O}_5 = \{4, 5\}\text{.}$</p></article></div>
<div class="mathbook-content"><p id="p-2043">Now suppose that $G$ is a group acting on a set $X$ and let $g$ be an element of $G\text{.}$ The <dfn class="terminology">fixed point set</dfn> of $g$ in $X\text{,}$ denoted by $X_g\text{,}$ is the set of all $x \in X$ such that $gx = x\text{.}$ We can also study the group elements $g$ that fix a given $x \in X\text{.}$ This set is more than a subset of $G\text{,}$ it is a subgroup. This subgroup is called the <dfn class="terminology">stabilizer subgroup</dfn> or <dfn class="terminology">isotropy subgroup</dfn> of $x\text{.}$ We will denote the stabilizer subgroup of $x$ by $G_x\text{.}$ </p></div>
<div class="mathbook-content"><article class="remark-like" id="remark-6"><h6 class="heading"><span class="type">Remark</span><span class="codenumber">14.8</span></h6><p id="p-2044">It is important to remember that $X_g \subset X$ and $G_x \subset G\text{.}$</p></article></div>
<div class="mathbook-content"><article class="example-like" id="example-actions-stabilizer"><h6 class="heading"><span class="type">Example</span><span class="codenumber">14.9</span></h6><p id="p-2045">Let $X = \{1, 2, 3, 4, 5, 6\}$ and suppose that $G$ is the permutation group given by the permutations</p><div class="displaymath">
\begin{equation*}
\{ (1), (1 2)(3 4 5 6), (3 5)(4 6), (1 2)( 3 6 5 4) \}.
\end{equation*}
</div><p>Then the fixed point sets of $X$ under the action of $G$ are</p><div class="displaymath">
\begin{gather*}
X_{(1)} = X,\\
X_{(3 5)(4 6)} = \{1,2\},\\
X_{(1 2)(3 4 5 6)} = X_{(1 2)(3 6 5 4)} = \emptyset,
\end{gather*}
</div><p>and the stabilizer subgroups are</p><div class="displaymath">
\begin{gather*}
G_1 = G_2 = \{(1), (3 5)(4 6) \},\\
G_3 = G_4 = G_5 = G_6 = \{(1)\}.
\end{gather*}
</div><p>It is easily seen that $G_x$ is a subgroup of $G$ for each $x \in X\text{.}$</p></article></div>
<div class="mathbook-content"><article class="theorem-like" id="proposition-29"><h6 class="heading"><span class="type">Proposition</span><span class="codenumber">14.10</span></h6><p id="p-2046">Let $G$ be a group acting on a set $X$ and $x \in X\text{.}$ The stabilizer group of $x\text{,}$ $G_x\text{,}$ is a subgroup of $G\text{.}$</p></article><article class="proof" id="proof-87"><h6 class="heading"><span class="type">Proof</span></h6><p id="p-2047">Clearly, $e \in G_x$ since the identity fixes every element in the set $X\text{.}$ Let $g, h \in G_x\text{.}$ Then $gx = x$ and $hx = x\text{.}$ So $(gh)x = g(hx) = gx = x\text{;}$ hence, the product of two elements in $G_x$ is also in $G_x\text{.}$ Finally, if $g \in G_x\text{,}$ then $x = ex = (g^{-1}g)x = (g^{-1})gx = g^{-1} x\text{.}$ So $g^{-1}$ is in $G_x\text{.}$</p></article></div>
<div class="mathbook-content"><p id="p-2048">We will denote the number of elements in the fixed point set of an element $g \in G$ by $|X_g|$ and denote the number of elements in the orbit of $x \in X$ by $|{\mathcal O}_x|\text{.}$ The next theorem demonstrates the relationship between orbits of an element $x \in X$ and the left cosets of $G_x$ in $G\text{.}$</p></div>
<div class="mathbook-content"><article class="theorem-like" id="theorem-orbit"><h6 class="heading"><span class="type">Theorem</span><span class="codenumber">14.11</span></h6><p id="p-2049">Let $G$ be a finite group and $X$ a finite $G$-set. If $x \in X\text{,}$ then $|{\mathcal O}_x| = [G:G_x]\text{.}$</p></article><article class="proof" id="proof-88"><h6 class="heading"><span class="type">Proof</span></h6><p id="p-2050">We know that $|G|/|G_x|$ is the number of left cosets of $G_x$ in $G$ by Lagrange's Theorem (Theorem <a href="section-lagranges-theorem.ipynb#theorem-lagrange" class="xref" alt="Theorem 6.10 Lagrange" title="Theorem 6.10 Lagrange">6.10</a>). We will define a bijective map $\phi$ between the orbit ${\mathcal O}_x$ of $X$ and the set of left cosets ${\mathcal L}_{G_x}$ of $G_x$ in $G\text{.}$ Let $y \in {\mathcal O}_x\text{.}$ Then there exists a $g$ in $G$ such that $g x = y\text{.}$ Define $\phi$ by $\phi( y ) = g G_x\text{.}$ To show that $\phi$ is one-to-one, assume that $\phi(y_1) = \phi(y_2)\text{.}$ Then</p><div class="displaymath">
\begin{equation*}
\phi(y_1) = g_1 G_x = g_2 G_x = \phi(y_2),
\end{equation*}
</div><p>where $g_1 x = y_1$ and $g_2 x = y_2\text{.}$ Since $g_1 G_x = g_2 G_x\text{,}$ there exists a $g \in G_x$ such that $g_2 = g_1 g\text{,}$</p><div class="displaymath">
\begin{equation*}
y_2 = g_2 x = g_1 g x = g_1 x = y_1;
\end{equation*}
</div><p>consequently, the map $\phi$ is one-to-one. Finally, we must show that the map $\phi$ is onto. Let $g G_x$ be a left coset. If $g x = y\text{,}$ then $\phi(y) = g G_x\text{.}$</p></article></div>
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
%matplotlib inline
torch.manual_seed(777) # reproducibility
# Hyper parameters
num_epochs = 30
num_classes = 10
batch_size = 100
learning_rate = 0.001
# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# transform images to tensors of normalized range [-1, 1]
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
shuffle=True, num_workers=2)
test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size,
shuffle=False, num_workers=2)
# Define the convolutional neural network for CIFAR-10
class Net(nn.Module):
    def __init__(self, num_classes):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
model = Net(num_classes).to(device)
print(model)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    running_loss = 0.0
    for i, (images, labels) in enumerate(train_loader):
        # Move tensors to the configured device
        images = images.to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize: zero the parameter gradients,
        # then backward + optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if (i+1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch+1, num_epochs, i+1, total_step, running_loss / 100))
            running_loss = 0.0
# Test the model
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))
```
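As an aside on the architecture above: the `16 * 5 * 5` input size of `fc1` follows from the usual convolution/pooling size arithmetic. The helper below is a standalone sketch (the function names are ours, not PyTorch's) tracing a 32×32 CIFAR-10 image through the two conv/pool stages:

```python
# Each 5x5 convolution (stride 1, no padding) shrinks the spatial size,
# and each 2x2 max-pool (stride 2) halves it.

def conv_out(size, kernel, stride=1, padding=0):
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    return (size - kernel) // stride + 1

s = 32                        # CIFAR-10 spatial size
s = pool_out(conv_out(s, 5))  # conv1 (5x5) then pool: 32 -> 28 -> 14
s = pool_out(conv_out(s, 5))  # conv2 (5x5) then pool: 14 -> 10 -> 5
print(s)                      # 5, so the flattened size is 16 * 5 * 5
```

With 16 output channels from `conv2`, the flattened feature vector therefore has 16 × 5 × 5 = 400 entries.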
```
import numpy as np
import matplotlib.pyplot as plt
%run plot.py
```
### Function for the random step
$DX$ is the standard deviation, $bias$ is the constant average of the step
```
# random seed for reproducibility
np.random.seed(12345)
# function for the random step, using lambda construction
# int() for cleaner look and for mimicing a detector with finite resolution
jump = lambda drift, stdev: int(np.random.normal(drift,stdev))
for i in range(10):
    print(jump(5,50))
```
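For intuition, the random step can be replicated with only the standard library (`random.gauss` in place of `np.random.normal`; the snippet is illustrative, not part of the simulation): the sample mean of many steps should land near the drift.

```python
import random

random.seed(12345)
# same construction as above: truncate to int to mimic finite resolution
jump = lambda drift, stdev: int(random.gauss(drift, stdev))

steps = [jump(5, 50) for _ in range(20000)]
mean = sum(steps) / len(steps)
print(round(mean, 1))  # close to the drift of 5
```

With 20,000 samples the standard error of the mean is about $50/\sqrt{20000} \approx 0.35$, so the estimate sits well within a couple of units of the drift.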
### Function for the added pattern
A half-sine bump added to part of a time series, spanning $z$ bins with amplitude $a$
```
def pattern(i,z,a):
    return int(a*np.sin((np.pi*i)/z))
# random seed for reproducibility
np.random.seed(12345)
# pattern parameters: Z=nr of steps, A=amplitude
Z=12
A=500
# number of data samples
N=10000
# size of each sample of the timeseries
L=60
# step parameters: introduce small positive bias
DX = 50
bias = 5
y = [0] * N
x = [[0] * L for i in range(N)]
for i in range(N):
    if i > 0:
        x[i][0] = x[i-1][-1] + jump(bias, DX)
    for j in range(1, L):
        x[i][j] = x[i][j-1] + jump(bias, DX)
    y[i] = i % 3
    ##y[i] = random.randint(0,2)
    if y[i] > 0:
        j0 = np.random.randint(0, L-1-Z)
        sign = 3 - 2*y[i]
        for j in range(Z):
            x[i][j0+j] += sign*pattern(j, Z, A)
for i in range(min(3, N)):
    print(x[i], y[i])
Show_data(x,L,"original data")
```
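A quick standalone check of the injected pattern (re-declared here with `math.sin` so it runs without NumPy): over $Z$ bins it is a non-negative half-sine bump that peaks at the amplitude $A$.

```python
import math

def pattern(i, z, a):
    # same half-sine bump as above, truncated to int
    return int(a * math.sin((math.pi * i) / z))

Z, A = 12, 500
bump = [pattern(j, Z, A) for j in range(Z)]
print(bump[0], max(bump))  # starts at 0, peaks at A in the middle bin
```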
### Save data on file
```
# command in linux
!mkdir -p DATA
str0 = f'ts_L{L}_Z{Z}_A{A}_DX{DX}_bias{bias}_N{N}.dat'
print(str0)
fname='DATA/x_'+str0
np.savetxt(fname,x,fmt="%d")
fname='DATA/y_'+str0
np.savetxt(fname,y,fmt="%d")
```
# Recurrent Neural Networks with ``gluon``
With gluon, we can now train recurrent neural networks (RNNs), such as the long short-term memory (LSTM) and the gated recurrent unit (GRU), much more concisely. To demonstrate the end-to-end RNN training and prediction pipeline, we take a classic problem in language modeling as a case study. Specifically, we will show how to predict the distribution of the next word given a sequence of previous words.
## Import packages
To begin with, we need to make the following necessary imports.
```
import math
import os
import time
import numpy as np
import mxnet as mx
from mxnet import gluon, autograd
from mxnet.gluon import nn, rnn
```
## Define classes for indexing words of the input document
In a language modeling problem, we define the following classes to facilitate the routine procedures for loading document data. In the following, the ``Dictionary`` class is for word indexing: words in the documents can be converted from the string format to the integer format.
In this example, we use consecutive integers to index words of the input document.
```
class Dictionary(object):
    def __init__(self):
        self.word2idx = {}
        self.idx2word = []

    def add_word(self, word):
        if word not in self.word2idx:
            self.idx2word.append(word)
            self.word2idx[word] = len(self.idx2word) - 1
        return self.word2idx[word]

    def __len__(self):
        return len(self.idx2word)
```
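A minimal usage sketch (the class is re-declared so the snippet is self-contained): each new word receives the next consecutive integer id, and a repeated word keeps its original id.

```python
class Dictionary(object):
    def __init__(self):
        self.word2idx = {}
        self.idx2word = []

    def add_word(self, word):
        if word not in self.word2idx:
            self.idx2word.append(word)
            self.word2idx[word] = len(self.idx2word) - 1
        return self.word2idx[word]

    def __len__(self):
        return len(self.idx2word)

d = Dictionary()
ids = [d.add_word(w) for w in "the cat sat on the mat".split()]
print(ids, len(d))  # the repeated "the" reuses id 0
```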
The ``Dictionary`` class is used by the ``Corpus`` class to index the words of the input document.
```
class Corpus(object):
    def __init__(self, path):
        self.dictionary = Dictionary()
        self.train = self.tokenize(path + 'train.txt')
        self.valid = self.tokenize(path + 'valid.txt')
        self.test = self.tokenize(path + 'test.txt')

    def tokenize(self, path):
        """Tokenizes a text file."""
        assert os.path.exists(path)
        # Add words to the dictionary
        with open(path, 'r') as f:
            tokens = 0
            for line in f:
                words = line.split() + ['<eos>']
                tokens += len(words)
                for word in words:
                    self.dictionary.add_word(word)
        # Tokenize file content
        with open(path, 'r') as f:
            ids = np.zeros((tokens,), dtype='int32')
            token = 0
            for line in f:
                words = line.split() + ['<eos>']
                for word in words:
                    ids[token] = self.dictionary.word2idx[word]
                    token += 1
        return mx.nd.array(ids, dtype='int32')
```
## Provide an exposition of different RNN models with ``gluon``
Based on the ``gluon.Block`` class, we can make different RNN models available with the following single ``RNNModel`` class.
Users can select their preferred RNN model or compare different RNN models by configuring the argument of the constructor of ``RNNModel``. We will show an example following the definition of the ``RNNModel`` class.
```
class RNNModel(gluon.Block):
    """A model with an encoder, recurrent layer, and a decoder."""

    def __init__(self, mode, vocab_size, num_embed, num_hidden,
                 num_layers, dropout=0.5, tie_weights=False, **kwargs):
        super(RNNModel, self).__init__(**kwargs)
        with self.name_scope():
            self.drop = nn.Dropout(dropout)
            self.encoder = nn.Embedding(vocab_size, num_embed,
                                        weight_initializer=mx.init.Uniform(0.1))
            if mode == 'rnn_relu':
                self.rnn = rnn.RNN(num_hidden, num_layers, activation='relu',
                                   dropout=dropout, input_size=num_embed)
            elif mode == 'rnn_tanh':
                self.rnn = rnn.RNN(num_hidden, num_layers, dropout=dropout,
                                   input_size=num_embed)
            elif mode == 'lstm':
                self.rnn = rnn.LSTM(num_hidden, num_layers, dropout=dropout,
                                    input_size=num_embed)
            elif mode == 'gru':
                self.rnn = rnn.GRU(num_hidden, num_layers, dropout=dropout,
                                   input_size=num_embed)
            else:
                raise ValueError("Invalid mode %s. Options are rnn_relu, "
                                 "rnn_tanh, lstm, and gru" % mode)
            if tie_weights:
                self.decoder = nn.Dense(vocab_size, in_units=num_hidden,
                                        params=self.encoder.params)
            else:
                self.decoder = nn.Dense(vocab_size, in_units=num_hidden)
            self.num_hidden = num_hidden

    def forward(self, inputs, hidden):
        emb = self.drop(self.encoder(inputs))
        output, hidden = self.rnn(emb, hidden)
        output = self.drop(output)
        decoded = self.decoder(output.reshape((-1, self.num_hidden)))
        return decoded, hidden

    def begin_state(self, *args, **kwargs):
        return self.rnn.begin_state(*args, **kwargs)
```
## Select an RNN model and configure parameters
For demonstration purposes, we provide an arbitrary selection of the parameter values. In practice, some parameters should be fine-tuned on the validation data set.
For instance, to obtain better performance, as reflected in a lower loss or perplexity, one can set ``args_epochs`` to a larger value.
In this demonstration, LSTM is the chosen type of RNN. For other RNN options, one can replace the ``'lstm'`` string with ``'rnn_relu'``, ``'rnn_tanh'``, or ``'gru'``, all handled by the ``RNNModel`` class defined above.
```
args_data = '../data/nlp/ptb.'
args_model = 'lstm'
args_emsize = 100
args_nhid = 100
args_nlayers = 2
args_lr = 1.0
args_clip = 0.2
args_epochs = 1
args_batch_size = 32
args_bptt = 5
args_dropout = 0.2
args_tied = True
args_cuda = 'store_true'
args_log_interval = 500
args_save = 'model.param'
```
## Load data as batches
We load the document data by leveraging the aforementioned ``Corpus`` class.
To speed up the subsequent data flow in the RNN model, we pre-process the loaded data as batches. This procedure is defined in the following ``batchify`` function.
```
context = mx.gpu() # this notebook takes too long on cpu
corpus = Corpus(args_data)
def batchify(data, batch_size):
    """Reshape data into (num_example, batch_size)"""
    nbatch = data.shape[0] // batch_size
    data = data[:nbatch * batch_size]
    data = data.reshape((batch_size, nbatch)).T
    return data
train_data = batchify(corpus.train, args_batch_size).as_in_context(context)
val_data = batchify(corpus.valid, args_batch_size).as_in_context(context)
test_data = batchify(corpus.test, args_batch_size).as_in_context(context)
```
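To see exactly what ``batchify`` produces, here is a standalone NumPy miniature (synthetic token ids, not the PTB data): after trimming the stream to a multiple of ``batch_size``, column $j$ holds the $j$-th contiguous slice of the corpus.

```python
import numpy as np

def batchify(data, batch_size):
    """Same reshape logic as above, on a plain NumPy array."""
    nbatch = data.shape[0] // batch_size
    data = data[:nbatch * batch_size]
    return data.reshape((batch_size, nbatch)).T

tokens = np.arange(13)           # pretend token ids 0..12; 13th is trimmed
batched = batchify(tokens, 3)    # 3 columns of 4 consecutive tokens each
print(batched)
```

Each row is then one time step across the batch, which is the layout the recurrent layer consumes.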
## Build the model
We go on to build the model, initialize model parameters, and configure the optimization algorithms for training the RNN model.
```
ntokens = len(corpus.dictionary)
model = RNNModel(args_model, ntokens, args_emsize, args_nhid,
args_nlayers, args_dropout, args_tied)
model.collect_params().initialize(mx.init.Xavier(), ctx=context)
trainer = gluon.Trainer(model.collect_params(), 'sgd',
{'learning_rate': args_lr, 'momentum': 0, 'wd': 0})
loss = gluon.loss.SoftmaxCrossEntropyLoss()
```
## Train the model and evaluate on validation and testing data sets
Now we can define functions for training and evaluating the model. The following are two helper functions that will be used during model training and evaluation.
```
def get_batch(source, i):
    seq_len = min(args_bptt, source.shape[0] - 1 - i)
    data = source[i : i + seq_len]
    target = source[i + 1 : i + 1 + seq_len]
    return data, target.reshape((-1,))

def detach(hidden):
    if isinstance(hidden, (tuple, list)):
        hidden = [i.detach() for i in hidden]
    else:
        hidden = hidden.detach()
    return hidden
```
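The input/target alignment performed by ``get_batch`` can be illustrated on a toy NumPy column (``args_bptt`` is replaced here by a local ``bptt`` so the snippet is self-contained): the target is simply the input shifted forward by one time step.

```python
import numpy as np

bptt = 5

def get_batch(source, i):
    # same slicing logic as above
    seq_len = min(bptt, source.shape[0] - 1 - i)
    data = source[i : i + seq_len]
    target = source[i + 1 : i + 1 + seq_len]
    return data, target.reshape((-1,))

source = np.arange(8).reshape((8, 1))  # one "sequence" of 8 token ids
data, target = get_batch(source, 0)
print(data.ravel(), target)            # target is data shifted by one
```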
The following is the function for model evaluation. It returns the loss of the model prediction. We will discuss the details of the loss measure shortly.
```
def eval(data_source):
    total_L = 0.0
    ntotal = 0
    hidden = model.begin_state(func=mx.nd.zeros, batch_size=args_batch_size, ctx=context)
    for i in range(0, data_source.shape[0] - 1, args_bptt):
        data, target = get_batch(data_source, i)
        output, hidden = model(data, hidden)
        L = loss(output, target)
        total_L += mx.nd.sum(L).asscalar()
        ntotal += L.size
    return total_L / ntotal
```
Now we are ready to define the function for training the model. We can monitor the model performance on the training, validation, and testing data sets over iterations.
```
def train():
    global args_lr  # reassigned below when the validation loss stops improving
    best_val = float("Inf")
    for epoch in range(args_epochs):
        total_L = 0.0
        start_time = time.time()
        hidden = model.begin_state(func=mx.nd.zeros, batch_size=args_batch_size, ctx=context)
        for ibatch, i in enumerate(range(0, train_data.shape[0] - 1, args_bptt)):
            data, target = get_batch(train_data, i)
            hidden = detach(hidden)
            with autograd.record():
                output, hidden = model(data, hidden)
                L = loss(output, target)
                L.backward()

            grads = [i.grad(context) for i in model.collect_params().values()]
            # Here gradient is for the whole batch.
            # So we multiply max_norm by batch_size and bptt size to balance it.
            gluon.utils.clip_global_norm(grads, args_clip * args_bptt * args_batch_size)

            trainer.step(args_batch_size)
            total_L += mx.nd.sum(L).asscalar()

            if ibatch % args_log_interval == 0 and ibatch > 0:
                cur_L = total_L / args_bptt / args_batch_size / args_log_interval
                print('[Epoch %d Batch %d] loss %.2f, perplexity %.2f' % (
                    epoch + 1, ibatch, cur_L, math.exp(cur_L)))
                total_L = 0.0

        val_L = eval(val_data)
        print('[Epoch %d] time cost %.2fs, validation loss %.2f, validation perplexity %.2f' % (
            epoch + 1, time.time() - start_time, val_L, math.exp(val_L)))

        if val_L < best_val:
            best_val = val_L
            test_L = eval(test_data)
            model.save_parameters(args_save)
            print('test loss %.2f, test perplexity %.2f' % (test_L, math.exp(test_L)))
        else:
            args_lr = args_lr * 0.25
            trainer._init_optimizer('sgd',
                                    {'learning_rate': args_lr,
                                     'momentum': 0,
                                     'wd': 0})
            model.load_parameters(args_save, context)
```
Recall that training the RNN model is based on maximum likelihood estimation over the observations. For evaluation purposes, we have used the following two measures:
* Loss: the loss function is defined as the average negative log likelihood of the target words (ground truth) under prediction: $$\text{loss} = -\frac{1}{N} \sum_{i = 1}^N \text{log} \ p_{\text{target}_i}, $$ where $N$ is the number of predictions and $p_{\text{target}_i}$ the predicted likelihood of the $i$-th target word.
* Perplexity: the average per-word perplexity is $\text{exp}(\text{loss})$.
To orient the reader using concrete examples, let us illustrate the idea of the perplexity measure as follows.
* Consider the perfect scenario where the model always predicts the likelihood of the target word as 1. In this case, for every $i$ we have $p_{\text{target}_i} = 1$. As a result, the perplexity of the perfect model is 1.
* Consider a baseline scenario where the model always predicts the likelihood of the target word randomly at uniform among the given word set $W$. In this case, for every $i$ we have $p_{\text{target}_i} = 1 / |W|$. As a result, the perplexity of a uniformly random prediction model is always $|W|$.
* Consider the worst-case scenario where the model always predicts the likelihood of the target word as 0. In this case, for every $i$ we have $p_{\text{target}_i} = 0$. As a result, the perplexity of the worst model is positive infinity.
Therefore, a model with a lower perplexity that is closer to 1 is generally more effective. Any effective model has to achieve a perplexity lower than the cardinality of the target set.
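These three scenarios can be reproduced numerically. The sketch below (with a made-up vocabulary size of 10) computes the average negative log likelihood of the target words and exponentiates it:

```python
import math

def perplexity(target_probs):
    """exp of the average negative log likelihood of the targets."""
    loss = -sum(math.log(p) for p in target_probs) / len(target_probs)
    return math.exp(loss)

W = 10  # assumed vocabulary size, for illustration only
perfect = perplexity([1.0] * 100)      # always certain -> perplexity 1
uniform = perplexity([1.0 / W] * 100)  # uniform guessing -> perplexity |W|
print(perfect, uniform)
```

(The worst case, probability 0, is omitted because $\log 0$ diverges, matching the "positive infinity" limit above.)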
Now we are ready to train the model and evaluate the model performance on validation and testing data sets.
```
train()
model.load_parameters(args_save, context)
test_L = eval(test_data)
print('Best test loss %.2f, test perplexity %.2f'%(test_L, math.exp(test_L)))
```
## Next
[Introduction to optimization](../chapter06_optimization/optimization-intro.ipynb)
For whinges or inquiries, [open an issue on GitHub.](https://github.com/zackchase/mxnet-the-straight-dope)
## This notebook constructs the GRAND dam network using the Free-Flowing Rivers dataset (Grill et al., 2019)
```
import os
import numpy as np
import pandas as pd
import geopandas as gpd
import rasterio
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import seaborn as sns
from scipy import stats
import networkx as nx
import time
```
### Load Free-Flowing Rivers (Grill et al., 2019)
```
# *The original gdb data is too large to deploy
# gdf = gpd.read_file('/Users/dlee/data/ffr_network/dlee/subset.dbf')
# df = gdf[gdf.columns[:-1]]
# df.to_hdf('/Users/dlee/data/ffr_network/dlee/ffr_network.hdf', 'df', complib='blosc:zstd', complevel=9)
ffr = pd.read_hdf('/Users/dlee/data/ffr_network/dlee/ffr_network.hdf',key='df')
# Dam shapefile and Degree of Regulation (DOR)
df_dam = gpd.read_file('./data/granddams_eval.shp')
df_dor = pd.read_hdf('./data/df_dor.hdf')
ffr.head()
```
### Generate a network of GRAND dams showing downstream and outlet reservoirs
```
fn_network = './data/grand_network_ffr.hdf'
if not os.path.exists(fn_network):
    t0 = time.time()
    network = df_dor.reset_index().drop(['DOR','DOF','CAP_MCM'], axis=1)
    network['prev'] = network['NOID'].values
    network['next'] = np.NaN
    network['to'] = np.NaN
    network['length'] = 0

    # Initialize with the negative of each dam's own reach length
    con, ind1, ind2 = np.intersect1d(network['prev'], ffr['NOID'], return_indices=True)
    temp_length = ffr.loc[ind2, 'LENGTH_KM'].values
    for i, loc in enumerate(con):
        network.loc[network['prev'] == loc, 'length'] = -temp_length[i]

    result = network.copy().set_index('GRAND_ID')
    # network = network[np.isin(network['GRAND_ID'], [24,25,27])]
    print(network.shape[0])

    # Main loop of the network-finding algorithm
    while network.shape[0] > 0:
        # Find the downstream reach index and reach length
        con, ind1, ind2 = np.intersect1d(network['prev'], ffr['NOID'], return_indices=True)
        temp_down = ffr.loc[ind2, 'NDOID'].values
        temp_length = ffr.loc[ind2, 'LENGTH_KM'].values
        for i, loc in enumerate(con):
            network.loc[network['prev'] == loc, 'next'] = temp_down[i]
            network.loc[network['prev'] == loc, 'length'] += temp_length[i]

        # Check whether the downstream reach holds a dam
        con = np.intersect1d(result['NOID'], network['next'])
        for loc in con:
            network.loc[network['next'] == loc, 'to'] = result[result['NOID'] == loc].index[0]

        # Check whether the downstream reach is the outlet
        network.loc[network['next'] == 0, 'to'] = -network.loc[network['next'] == 0, 'prev'].values  # outlet (-gridid)

        # Store the connected dams
        if network['to'].notna().sum() > 0:
            right = network.loc[network['to'].notna(), ['GRAND_ID','prev','next','to','length']].set_index('GRAND_ID')
            result.update(right)

        # Exclude connected dams from the network DataFrame
        network = network[network['to'].isna()].reset_index(drop=True)

        # Move one hop downstream
        network['prev'] = network['next'].copy().values
        network['next'] = np.NaN

    print('Took %.2fs' % (time.time() - t0))

    # Save the result
    result.to_hdf(fn_network, key='df')
    print('%s is saved.' % fn_network)
else:
    result = pd.read_hdf(fn_network)
    print('%s is loaded.' % fn_network)
result = result.reset_index()
result.head()
```
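The downstream search above can be summarized on a made-up miniature network (all ids and lengths below are invented for illustration): follow ``NOID`` → ``NDOID`` hops, accumulating reach length, until another dam or the outlet (``NDOID == 0``) is reached.

```python
# Toy analogue of the dam-to-dam walk: four reaches in a chain, a dam on
# reach 3, and reach 4 draining to the outlet (id 0).
length_km = {1: 2.0, 2: 3.5, 3: 1.5, 4: 4.0}  # per-reach lengths
downstream = {1: 2, 2: 3, 3: 4, 4: 0}         # NOID -> NDOID (0 = outlet)
dam_at = {3}                                  # reaches holding a dam

def walk_down(noid):
    """Return (downstream dam reach or 0 for the outlet, distance in km)."""
    dist = length_km[noid]
    nxt = downstream[noid]
    while nxt != 0 and nxt not in dam_at:
        dist += length_km[nxt]
        nxt = downstream[nxt]
    return nxt, dist

print(walk_down(1))  # dam on reach 1 connects to the dam on reach 3
print(walk_down(3))  # dam on reach 3 drains to the outlet
```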
### Degree of Regulation (DOR)
```
fn_newdor = './data/new_dor.hdf'
if not os.path.exists(fn_newdor):
    spec = df_dor.reset_index().drop(['DOF'], axis=1).rename(columns={'DOR':'DOR1'})
    spec['DOR2'] = np.NaN
    spec = spec[['GRAND_ID', 'NOID', 'DOR1', 'DOR2', 'CAP_MCM']]
    for _, row in spec[['GRAND_ID', 'NOID']].iterrows():
        gid = row['GRAND_ID']
        nuoid = ffr.loc[ffr['NOID']==row['NOID'], 'NUOID'].values
        if nuoid:
            # There are upstream reaches
            nuoid = [int(i) for i in nuoid[0].split('_')]
            temp = ffr.loc[ffr['NOID'].isin(np.array(nuoid)), ['DIS_AV_CMS','DOR']]
            av_dor = np.sum(temp['DIS_AV_CMS'] * temp['DOR']) / temp['DIS_AV_CMS'].sum()
        else:
            # No upstream reaches (headwater dam)
            av_dor = 0
        spec.loc[spec['GRAND_ID']==gid, 'DOR2'] = av_dor

    # Save the new DOR
    spec.to_hdf(fn_newdor, key='df')
    print('%s is saved.' % fn_newdor)
else:
    # Load the new DOR
    spec = pd.read_hdf(fn_newdor)
    print('%s is loaded.' % fn_newdor)
spec.head()
```
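The discharge-weighted upstream DOR computed above reduces to a weighted average. A tiny standalone example with invented numbers:

```python
# Each upstream reach's DOR is weighted by its mean annual discharge
# (DIS_AV_CMS); the values below are made up for illustration.
dis = [100.0, 50.0, 10.0]  # discharges of three upstream reaches
dor = [20.0, 40.0, 0.0]    # their DOR values

av_dor = sum(d * r for d, r in zip(dis, dor)) / sum(dis)
print(av_dor)  # (100*20 + 50*40 + 10*0) / 160 = 25.0
```

Large-discharge tributaries thus dominate the aggregated DOR, exactly as in the ``DOR2`` computation above.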
### Load GRAND dam specification variables
```
# Load Jia's regression variables
df_var = pd.read_hdf('./data/regression_variables.hdf').reset_index()
# Merge DataFrames
data = result.merge(spec[['GRAND_ID', 'DOR1', 'DOR2', 'CAP_MCM']], how='inner', on='GRAND_ID')
data = data.merge(df_var[['GRAND_ID','ECAP']], how='inner', on='GRAND_ID')
# Save the result for Jia
temp = data.copy()
temp= temp[['GRAND_ID','to','length','DOR2','CAP_MCM','ECAP']]
temp = temp.rename(columns={'to':'DOWN_GRAND_ID',
'length':'LENGTH_TO_DOWN_GRAND_ID_KM',
'DOR2':'DOR'})
if False:
    temp.to_csv('./data/new_dor_200514.csv')
    print("%s is saved." % './data/new_dor_200514.csv')
data.head()
```
### Plotting the number of GRAND dams according to DOR value
```
# Scatterplot of DOR and CAP_MCM
ax = sns.jointplot(x="DOR2", y="ECAP", data=data,height=5, ratio=3)
ax.set_axis_labels(xlabel='DOR', ylabel='Capacity (MCM)')
plt.show()
# Number of dams according to the DOR threshold
ECAP_TOTAL = 1292000 # 1,292 GW (IHA, 2018)
thresholds = [0, 30, 50, 70, 90, 100]
for thsd in thresholds:
    safe = data['DOR2'] <= thsd
    print('DOR <= %d: %4d Dams (%d%%) with %d%% of global ECAP' %
          (thsd, safe.sum(), safe.sum()/1593*100, data[safe].ECAP.sum()/ECAP_TOTAL*100))
# Scatterplot of DOR and CAP_MCM
sns.set(font_scale=1.3)
sns.set_style("whitegrid")
fig, ax = plt.subplots(1,1,figsize=(7,5))
ax = sns.distplot(data['DOR2'], color='b', bins=20,
hist_kws={'cumulative': True, 'alpha':0.95}, kde=False)
ax.set_ylabel('Selected number of dams')
ax.set_xlabel('DOR threshold')
ax.set_xlim([0, 100])
ax.set_ylim([0, 1600])
plt.tight_layout()
plt.show()
# Save a figure
if True:
    fn_save = './figures/ndam_dor.png'
    fig.savefig(fn_save, bbox_inches='tight')
    print('%s is saved.' % fn_save)
```
### Check grid adjustment of 1593 (735) dams
```
adj = pd.read_excel('./data/grand1593_newInflowsR2_dlee.xlsx',sheet_name='R')
adj = adj.loc[adj.GRID_ADJ == 'Yes','GRAND_ID'].reset_index(drop=True)
num_adj_of_735 = np.isin(data[data.DOR2 == 0].GRAND_ID, adj).sum()
num_adj_of_735
```
### Initialize mapping parameters
```
import matplotlib as mpl
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from pyproj import Proj, transform # In case of re-projection
from tools import cbarpam, GDFPlotOrder
# Load 1593 GranD dam shapefile
gdfDam = gpd.read_file('./data/granddams_eval.shp')
gdfDam = gdfDam.drop(gdfDam.columns[1:-1], axis=1)
# Load world base map (exclude Antarctica)
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
world = world[(world.name!="Antarctica")]
# Reprojection to Robinson projection (ESRI:54030)
lims = [-135, 167, -55, 80]
if True:
    inProj = Proj(init="epsg:4326")
    outProj = Proj(init="esri:54030")
    world = world.to_crs({'init':'esri:54030'})
    gdfDam = gdfDam.to_crs({'init':'esri:54030'})
    xmin, _ = transform(inProj, outProj, lims[0], 0)
    xmax, _ = transform(inProj, outProj, lims[1], 0)
    _, ymin = transform(inProj, outProj, 0, lims[2])
    _, ymax = transform(inProj, outProj, 0, lims[3])
    lims = [xmin, xmax, ymin, ymax]
```
### Mapping GRAND dams according to DOR
This is for checking their geographical distribution.
```
damMap = gdfDam.merge(data, on='GRAND_ID')
# Colormap
bounds = list(np.arange(0.1,1.0,0.1)*100)
boundaries = [-10]+bounds+[100]
cmap, norm, vmin, vmax, ticks, boundaries = cbarpam(bounds, 'rainbow', labloc='in',
boundaries=boundaries, extension='both')
# Plotting
fig, axes = plt.subplots(nrows=3,ncols=2,figsize=(12,9), facecolor='w')
figLabel = ['']
fignumb = ['(a)', '(b)', '(c)', '(d)','(e)','(f)']
thresholds = [100,90,70,50,30,0]
for (i, thsd) in enumerate(thresholds):
ax = axes.flatten('C')[i]
ax.set_axis_off()
ax.set_aspect('equal')
ax.axis(lims)
world.plot(ax=ax, color='white', edgecolor='gray')
tempMap = damMap[damMap['DOR2']<=thsd]
GDFPlotOrder(tempMap, boundaries, ax, 'DOR2',
cmap, norm, vmin, vmax, order='seq')
ax.annotate(fignumb[i], xy=(0.015, 0.98), xycoords='axes fraction',
horizontalalignment='center', verticalalignment='center',
fontname='arial',fontsize=18, backgroundcolor="w")
anot = 'DOR ≤ {} ({:,} dams)'.format(thsd, tempMap.shape[0])
if i in [5]: anot = 'DOR = {} ({:,} dams)'.format(thsd, tempMap.shape[0])
ax.annotate(anot, xy=(0.55, 0.05), xycoords='axes fraction',
horizontalalignment='center', verticalalignment='center',
fontname='arial',fontsize=14, backgroundcolor="w")
plt.tight_layout(w_pad=-5)
# Colorbar
cax = inset_axes(ax,
width="46%",
height="3%",
loc='lower left',
bbox_to_anchor=(-0.25, -0.2, 1.3, 1.4),
bbox_transform=ax.transAxes,
borderpad=0)
cbar = mpl.colorbar.ColorbarBase(cax, cmap=cmap, norm=norm,
boundaries=boundaries,
extend='both',
extendfrac=0.08,
ticks = bounds,
spacing='uniform',
orientation='horizontal',
drawedges=True)
cbar.ax.set_xticklabels(['%d'%lab for lab in bounds],
fontname='arial', fontsize=14)
cbar.ax.tick_params(length=0)
cbar.outline.set_edgecolor('black')
cbar.set_label('DOR', labelpad=-43,
fontname='arial', fontsize=14,
horizontalalignment='center')
plt.show()
# Save a figure
if False:
fn_save = './figures/dam_dor.png'
fig.savefig(fn_save, bbox_inches='tight')
print('%s is saved.' % fn_save)
```
### GRAND dam network map
```
temp = data.copy()
temp = temp.sort_values('to', ascending=False)
temp = temp.iloc[280:300]
temp['to'] = temp['to'].astype(int)
# Build your graph
G=nx.from_pandas_edgelist(temp, source='GRAND_ID', target='to')
# Plot it
plt.figure(figsize=(5, 5))
nx.draw(G, with_labels=True, font_size=11, node_color='None', edge_color='blue')
# plt.tight_layout()
plt.show()
# https://networkx.github.io/documentation/stable/tutorial.html
# https://www.datacamp.com/community/tutorials/networkx-python-graph-tutorial
# https://towardsdatascience.com/catching-that-flight-visualizing-social-network-with-networkx-and-basemap-ce4a0d2eaea6
```
# Module 6
## Video 29: Working with Aggregated Cargo Movements Data
**Python for the Energy Industry**
In this lesson, we will be working with the data from the previous lesson. We will practice visualising this data.
[Cargo Movements documentation](https://vortechsa.github.io/python-sdk/endpoints/cargo_movements/)
To start we follow the steps to get our Cargo Movements DataFrame:
```
# initial imports
import pandas as pd
import numpy as np
from datetime import datetime
from dateutil.relativedelta import relativedelta
import vortexasdk as v
# datetimes to access last 7 weeks of data
now = datetime.utcnow()
seven_weeks_ago = now - relativedelta(weeks=7)
# Find US ID
us = [g.id for g in v.Geographies().search('united states').to_list() if 'country' in g.layer]
assert len(us) == 1
# Find crude ID
crude = [p.id for p in v.Products().search('crude').to_list() if p.name=='Crude']
assert len(crude) == 1
# Columns to pull out, and shortened names
required_columns = ["vessels.0.name","vessels.0.vessel_class","product.group.label","product.category.label","quantity",
"status","events.cargo_port_load_event.0.location.port.label","events.cargo_port_load_event.0.end_timestamp",
"events.cargo_port_unload_event.0.location.port.label","events.cargo_port_unload_event.0.location.country.label",
"events.cargo_port_unload_event.0.end_timestamp"]
new_labels = ["vessel_name","vessel_class","product_group","product_category","quantity","status",
"loading_port","loading_finish","unloading_port","unloading_country","unloading_finish"]
relabel = dict(zip(required_columns,new_labels))
cms = v.CargoMovements().search(
filter_activity = 'loading_end',
filter_origins = us,
exclude_destinations = us,
filter_products = crude,
filter_time_min = seven_weeks_ago,
filter_time_max = now,
cm_unit = 'b'
).to_df(columns=required_columns).rename(relabel,axis=1)
cms['loading_week'] = cms['loading_finish'].dt.isocalendar().week
```
Let's start by making a bar chart of weekly exports:
```
weekly_quantity = cms.groupby('loading_week').sum()
ax = weekly_quantity.plot.bar(y='quantity',legend=False,figsize=(8,6))
ax.set_xlabel('Week')
ax.set_ylabel('US exports (bbl)')
```
By assigning the plot to the variable `ax`, we can make further tweaks, like setting the x- and y-axis labels.
What if we wanted to represent the breakdown by product category? These can be plotted with bars side-by-side:
```
quantity_by_category = cms.groupby(by = ['loading_week','product_category']).sum().reset_index()
quantity_by_category = quantity_by_category.pivot(index = 'loading_week',columns = 'product_category',values = 'quantity')
quantity_by_category = quantity_by_category.fillna(0)
ax = quantity_by_category.plot.bar(figsize=(8,6))
ax.set_xlabel('Week')
ax.set_ylabel('US exports (bbl)')
```
As there are many products with zero exports, this leaves a lot of holes in the plot. A better way to represent this is 'stacked':
```
ax = quantity_by_category.plot.bar(stacked=True,figsize=(8,6))
ax.set_xlabel('Week')
ax.set_ylabel('US exports (bbl)')
```
What about visualising the share of exports to destination countries? A pie chart would be suitable for this.
```
quantity_by_destination = cms.groupby('unloading_country').sum()[['quantity']]
quantity_by_destination.sort_values(by='quantity',ascending = False, inplace=True)
top_destination_countries = quantity_by_destination.head(10)
rest = pd.DataFrame(index = ['Other'], columns = ['quantity'])
rest.loc['Other'] = quantity_by_destination[10:].sum().values
top_destination_countries = pd.concat([top_destination_countries, rest]).astype(int)
top_destination_countries['%'] = round(top_destination_countries['quantity']*100 / top_destination_countries['quantity'].sum(),2)
top_destination_countries.plot.pie(y='%',figsize=(6,6),legend=False,autopct='%.0f')
```
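The top-ten-plus-"Other" pattern above can also be sketched with the standard library alone; the destination names and quantities below are made up for illustration:

```python
# Top-N + "Other" aggregation without pandas. All quantities are invented.
from collections import Counter

totals = Counter({
    "Canada": 900, "China": 800, "India": 700, "Korea": 600,
    "UK": 500, "Netherlands": 400, "Italy": 300, "Spain": 200,
    "France": 150, "Brazil": 100, "Chile": 50, "Peru": 25,
})

top_n = 10
ranked = totals.most_common()                       # sorted by quantity, descending
top = dict(ranked[:top_n])                          # the ten largest destinations
top["Other"] = sum(q for _, q in ranked[top_n:])    # lump the remainder together

grand_total = sum(top.values())
shares = {k: round(100 * v / grand_total, 2) for k, v in top.items()}
```

The resulting `shares` dictionary holds the percentages that the pie chart above displays per country.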
Another type of plot we can make is the histogram, which shows the distribution of values in a column. Here's the distribution of quantity:
```
cms.plot.hist(y='quantity',bins=30)
```
Do we think this distribution is different for different products? We can test by pivoting.
```
cms_product = cms.pivot(columns = 'product_category',values = 'quantity')[['Heavy-Sour','Light-Sweet','Medium-Sour']]
cms_product.plot.hist(bins=30)
```
### Exercise
Instead of US crude exports, pick a different dataset to examine. Say, Saudi Arabian exports, or Chinese imports. Follow the steps of the last 2 lessons to aggregate and visualise different aspects of this data.
```
import csv
import datetime
import h5py
import itertools
import keras
import numpy as np
import os
import pandas as pd
import pescador
import random
import sys
import tensorflow as tf
import time
sys.path.append("../src")
import localmodule
# Define constants.
dataset_name = localmodule.get_dataset_name()
folds = localmodule.fold_units()
models_dir = localmodule.get_models_dir()
n_input_hops = 104
n_filters = [24, 48, 48]
kernel_size = [5, 5]
pool_size = [2, 4]
n_hidden_units = 64
steps_per_epoch = 1
epochs = 128
validation_steps = 1
batch_size = 32
# Read command-line arguments.
args = ["all", "unit01", "trial-9"]
aug_kind_str = args[0]
unit_str = args[1]
trial_str = args[2]
# Retrieve fold such that unit_str is in the test set.
fold = [f for f in folds if unit_str in f[0]][0]
test_units = fold[0]
training_units = fold[1]
validation_units = fold[2]
# Print header.
start_time = int(time.time())
print(str(datetime.datetime.now()) + " Start.")
print("Training Salamon's ICASSP 2017 convnet on " + dataset_name + ". ")
print("Training set: " + ", ".join(training_units) + ".")
print("Validation set: " + ", ".join(validation_units) + ".")
print("Test set: " + ", ".join(test_units) + ".")
print("")
print('h5py version: {:s}'.format(h5py.__version__))
print('keras version: {:s}'.format(keras.__version__))
print('numpy version: {:s}'.format(np.__version__))
print('pandas version: {:s}'.format(pd.__version__))
print('pescador version: {:s}'.format(pescador.__version__))
print('tensorflow version: {:s}'.format(tf.__version__))
print("")
# Define and compile Keras model.
# NB: the original implementation of Justin Salamon in ICASSP 2017 relies on
# glorot_uniform initialization for all layers, and the optimizer is a
# stochastic gradient descent (SGD) with a fixed learning rate of 0.1.
# Instead, we use a he_uniform initialization for the layers followed
# by rectified linear units (see He ICCV 2015), and replace the SGD by
# the Adam adaptive stochastic optimizer (see Kingma ICLR 2014).
# Input
inputs = keras.layers.Input(shape=(128, n_input_hops, 1))
# Layer 1
bn = keras.layers.normalization.BatchNormalization()(inputs)
conv1 = keras.layers.Convolution2D(n_filters[0], kernel_size,
padding="same", kernel_initializer="he_normal")(bn)
pool1 = keras.layers.MaxPooling2D(pool_size=pool_size)(conv1)
# Layer 2
conv2 = keras.layers.Convolution2D(n_filters[1], kernel_size,
padding="same", kernel_initializer="he_normal", activation="relu")(pool1)
pool2 = keras.layers.MaxPooling2D(pool_size=pool_size)(conv2)
# Layer 3
conv3 = keras.layers.Convolution2D(n_filters[2], kernel_size,
padding="same", kernel_initializer="he_normal", activation="relu")(pool2)
# Layer 4
flatten = keras.layers.Flatten()(conv3)
dense1 = keras.layers.Dense(n_hidden_units,
kernel_initializer="he_normal", activation="relu",
kernel_regularizer=keras.regularizers.l2(0.001))(flatten)
# Layer 5
# We put a single output instead of 43 in the original paper, because this
# is binary classification instead of multilabel classification.
# Furthermore, this layer contains 43 times less connections than in the
# original paper, so we divide the l2 weight penalization by 50, which is
# of the same order of magnitude as 43.
# 0.001 / 50 = 0.00002
dense2 = keras.layers.Dense(1,
kernel_initializer="normal", activation="sigmoid",
kernel_regularizer=keras.regularizers.l2(0.00002))(dense1)
# Compile model, print model summary.
model = keras.models.Model(inputs=inputs, outputs=dense2)
model.compile(loss="binary_crossentropy",
optimizer="adam", metrics=["accuracy"])
model.summary()
# Build Pescador streamers corresponding to log-mel-spectrograms in augmented
# training set.
training_streamer = localmodule.multiplex_tfr(
aug_kind_str, training_units, n_input_hops, batch_size)
# none
history = model.fit_generator(
training_streamer,
steps_per_epoch = 1024,
epochs = 1,
verbose = True)
history = model.fit_generator(
training_streamer,
steps_per_epoch = 32,
epochs = 32,
verbose = True)
Xy = next(training_streamer)
X, y = Xy[0], Xy[1]
np.set_printoptions(suppress=True, precision=10)
print(np.stack((model.predict(X), y)).T)
print(model.evaluate(X, y))
%matplotlib inline
from matplotlib import pyplot as plt
X = next(training_streamer)
plt.hist(np.mean(X[0], axis=(1, 2, 3)))
%matplotlib inline
from matplotlib import pyplot as plt
X = next(noise_streamer)
plt.hist(np.mean(X[0] - 18, axis=(1, 2, 3)))
# Parse augmentation kind string (aug_kind_str).
fold_units = localmodule.get_units()
n_hops = 192
augs = ["original"]
lms_paths = []
# Generate a Pescador streamer for every HDF5 container, that is,
# every unit-augmentation-instance triplet.
aug_dict = localmodule.get_augmentations()
data_dir = localmodule.get_data_dir()
dataset_name = localmodule.get_dataset_name()
logmelspec_name = "_".join([dataset_name, "logmelspec"])
logmelspec_dir = os.path.join(data_dir, logmelspec_name)
streams = []
for aug_str in augs:
aug_dir = os.path.join(logmelspec_dir, aug_str)
if aug_str == "original":
instances = [aug_str]
else:
n_instances = aug_dict[aug_str]
instances = ["-".join([aug_str, str(instance_id)])
for instance_id in range(n_instances)]
for instanced_aug_str in instances:
for unit_str in fold_units:
lms_name = "_".join([dataset_name, instanced_aug_str, unit_str])
lms_path = os.path.join(aug_dir, lms_name + ".hdf5")
lms_paths.append(lms_path)
stream = pescador.Streamer(localmodule.yield_tfr, lms_path, n_hops)
streams.append(stream)
import tqdm
lms_means = []
for lms_path in tqdm.tqdm(lms_paths):
f = h5py.File(lms_path, "r")
for key in list(f["logmelspec"].keys()):
lms_mean = np.mean(f["logmelspec"][key])
lms_means.append(lms_mean)
from matplotlib import pyplot as plt
%matplotlib inline
plt.hist(noisy_means)
from matplotlib import pyplot as plt
%matplotlib inline
plt.hist(original_means)
np.mean(noisy_means) - np.mean(original_means)
```
# Characterization of Discrete Systems in the Spectral Domain
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelor's module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Magnitude and Phase
The discrete-time Fourier domain transfer function $H(e^{j \Omega})$ characterizes the transmission properties of a linear time-invariant (LTI) system with respect to a [harmonic exponential signal](../discrete_signals/standard_signals.ipynb#Complex-Exponential-Signal) $e^{j \Omega k}$ with normalized frequency $\Omega$. In order to investigate the characteristics of an LTI system, often the magnitude $| H(e^{j \Omega}) |$ and phase $\varphi_H(e^{j \Omega})$ of the transfer function are regarded separately. Decomposing the output signal $Y(e^{j \Omega}) = X(e^{j \Omega}) \cdot H(e^{j \Omega})$ into its magnitude $| Y(e^{j \Omega}) |$ and phase $\varphi_Y(e^{j \Omega})$ yields
\begin{align}
| Y(e^{j \Omega}) | &= | X(e^{j \Omega}) | \cdot | H(e^{j \Omega}) | \\
\varphi_Y(e^{j \Omega}) &= \varphi_X(e^{j \Omega}) + \varphi_H(e^{j \Omega})
\end{align}
where $X(e^{j \Omega})$ denotes the input signal, and $| X(e^{j \Omega}) |$ and $\varphi_X(e^{j \Omega})$ its magnitude and phase, respectively. It can be concluded that the magnitude $| H(e^{j \Omega}) |$ quantifies the frequency-dependent attenuation of the magnitude $| X(e^{j \Omega}) |$ of the input signal by the system, while $\varphi_H(e^{j \Omega})$ quantifies the introduced phase shift.
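This multiply-magnitudes/add-phases rule can be checked numerically at a single frequency sample; the complex values chosen for $X(e^{j \Omega})$ and $H(e^{j \Omega})$ below are arbitrary:

```python
# Verify |Y| = |X|*|H| and arg(Y) = arg(X) + arg(H) for one frequency sample.
import cmath

X = 2.0 * cmath.exp(1j * 0.3)   # |X| = 2.0, arg X = 0.3 rad
H = 0.5 * cmath.exp(1j * 1.1)   # |H| = 0.5, arg H = 1.1 rad
Y = X * H

assert abs(abs(Y) - abs(X) * abs(H)) < 1e-12      # magnitudes multiply
assert abs(cmath.phase(Y) - (0.3 + 1.1)) < 1e-12  # phases add (no wrapping here)
```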
A rational transfer function $H(z)$ which is composed from polynomials in $z^{-1}$ can be expressed [in terms of its poles and zeros](../z_transform/definition.ipynb#Representation). Applying this representation to the transfer function $H(e^{j \Omega})$ in the discrete-time Fourier domain yields
\begin{equation}
H(e^{j \Omega}) = K \cdot \frac{\prod_{\mu=0}^{Q} (e^{j \Omega} - z_{0 \mu})}{\prod_{\nu=0}^{P} (e^{j \Omega} - z_{\infty \nu})}
\end{equation}
where $z_{0 \mu}$ and $z_{\infty \nu}$ denote the $\mu$-th zero and $\nu$-th pole of $H(z)$, and $Q$ and $P$ the total number of zeros and poles, respectively. Often the logarithmic magnitude of the transfer function $20 \log_{10} | H(e^{j \Omega}) |$ in [decibels](https://en.wikipedia.org/wiki/Decibel) (dB) is considered. The representation of amplitudes in dB is beneficial due to its clear coverage of wide amplitude ranges and convenient calculus for scaled amplitudes. Using the representation of the transfer function with respect to its poles and zeros yields for the logarithm of the magnitude and the phase
\begin{align}
\log_{10} | H(e^{j \Omega}) | &= \sum_{\mu=0}^{Q} \log_{10} |e^{j \Omega} - z_{0 \mu}| - \sum_{\nu=0}^{P} \log_{10} |e^{j \Omega} - z_{\infty \nu}| + \log_{10} |K| \\
\varphi_H(e^{j \Omega}) &= \sum_{\mu=0}^{Q} \arg (e^{j \Omega} - z_{0 \mu}) - \sum_{\nu=0}^{P} \arg (e^{j \Omega} - z_{\infty \nu})
\end{align}
where $\arg(\cdot)$ denotes the [argument](https://en.wikipedia.org/wiki/Argument_%28complex_analysis%29) (phase) of a complex function. It can be concluded that the individual contributions of the poles and zeros to the logarithm of the magnitude and to the phase can be superimposed. This fact may be exploited to estimate the frequency and phase response of discrete-time systems by considering the influence of individual poles and zeros. This is discussed in the following for a system composed of one pole and one zero ($P=Q=0$).

The magnitude of the transfer function depends on the ratio of Euclidean distances from the pole and zero to the position $e^{j \Omega}$ on the unit circle $|z| = 1$. Small distances, e.g. a pole or zero close to the unit circle, lead to large negative values of the individual summands in the logarithmic magnitude response, e.g. $\log_{10} |e^{j \Omega} - z_{0 \mu}|$. This holds especially for normalized frequencies $\Omega$ in the proximity of the pole/zero. It can be concluded that
* a pole close to the unit circle results in a resonance (a local maximum), and
* a zero close to the unit circle results in a notch/anti-resonance (a local minimum)
in the magnitude response. The normalized frequency $\Omega$ of this maximum/minimum is given by the argument of the pole $\arg \{ z_{\infty \mu} \}$ and zero $\arg\{ z_{0 \mu} \}$, respectively.
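A small numerical sketch confirms this; the pole radius and angle are taken from the example below, while the frequency grid and tolerance are arbitrary:

```python
# |1 / (e^{jw} - z_inf)| over a grid of normalized frequencies w in [0, pi];
# the single-pole contribution should peak at the pole's angle.
import cmath, math

r, omega_inf = 0.95, math.pi / 4
z_inf = r * cmath.exp(1j * omega_inf)

omegas = [k * math.pi / 1000 for k in range(1001)]
mags = [1 / abs(cmath.exp(1j * w) - z_inf) for w in omegas]

w_peak = omegas[mags.index(max(mags))]   # frequency of the resonance
```

Moving `r` closer to 1 makes the peak sharper; replacing the pole contribution by a zero term $|e^{j \Omega} - z_0|$ turns the maximum into a minimum at the same angle.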
**Example - Second order system**
The magnitude response of a real-valued second-order LTI system with the specific transfer function
\begin{equation}
H(e^{j \Omega}) = \frac{(e^{j \Omega} - z_0)(e^{j \Omega} - z_0^*)}{(e^{j \Omega} - z_\infty)(e^{j \Omega} - z_\infty^*)}
\end{equation}
is considered. It is convenient to represent the pole/zero by its magnitude and phase in order to illustrate its location relative to the unit circle
\begin{align}
z_0 &= r_0 e^{j \Omega_0} &\text{with } \quad &r_0 = 0.8 & \Omega_0 = \frac{\pi}{2} \\
z_\infty &= r_\infty e^{j \Omega_\infty} & &r_\infty = 0.95 & \Omega_\infty = \frac{\pi}{4}
\end{align}
where for instance $r_0 = |z_0|$ and $\Omega_0 = \arg \{ z_0 \}$. First the contribution of the pair of complex conjugate zeros $z_0$ and $z_0^*$ to the logarithmic magnitude of the transfer function $H(e^{j \Omega})$ is computed and plotted over the normalized frequency $\Omega$.
```
import sympy as sym
%matplotlib inline
sym.init_printing()
def db(x):
'compute dB value'
return 20 * sym.log(sym.Abs(x), 10)
W = sym.symbols('Omega', real=True)
z_0 = 0.8 * sym.exp(sym.I * sym.pi/2)
H1 = (sym.exp(sym.I * W) - z_0)*(sym.exp(sym.I * W) - sym.conjugate(z_0))
Hlog1 = db(H1)
sym.plot(Hlog1, (W, 0, sym.pi), xlabel='$\Omega$',
ylabel='$|H_0(e^{j \Omega})|$ in dB')
```
Now the contribution of the pair of complex conjugate poles $z_\infty$ and $z_\infty^*$ is computed and plotted
```
z_inf = 0.95 * sym.exp(sym.I * sym.pi/4)
H2 = 1/((sym.exp(sym.I * W) - z_inf)*(sym.exp(sym.I * W) - sym.conjugate(z_inf)))
Hlog2 = db(H2)
sym.plot(Hlog2, (W, 0, sym.pi), xlabel='$\Omega$',
ylabel='$|H_\infty(e^{j \Omega})|$ in dB')
```
The logarithmic magnitude response of the system is given by superposition of the individual contributions from the zeros and the poles
```
Hlog = Hlog1 + Hlog2
sym.plot(Hlog, (W, 0, sym.pi), xlabel='$\Omega$',
ylabel='$|H(e^{j \Omega})|$ in dB')
```
**Exercise**
* Why is the system real valued?
* Examine the magnitude response of the system. How is it related to the magnitude responses of the individual zeros/poles?
* Move the poles and/or zeros closer to the unit circle by changing the values $r_\infty$ and/or $r_0$. What influence does this have on the magnitude response?
* Change the phase of the pole and/or zero by changing the values $\Omega_\infty$ and/or $\Omega_0$. What influence does this have on the magnitude response?
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Continuous- and Discrete-Time Signals and Systems - Theory and Computational Examples*.
```
import cv2
import numpy as np
import string
import random
import glob
import imgaug.augmenters as iaa
import matplotlib.pyplot as plt
%matplotlib inline
bg_pattern = glob.glob(r"pattern/*.*")
random.shuffle(bg_pattern)
```
# Augmentation settings for Anchor, Positive, Negative
```
sometimes = lambda aug: iaa.Sometimes(0.3, aug)
sometimes2 = lambda aug: iaa.Sometimes(0.1, aug)
seq_a = iaa.Sequential([
iaa.GaussianBlur(sigma=(0.5, 1.5)),
])
seq_p_n = iaa.Sequential([
iaa.Crop(px=(2, 20), keep_size=True),
#iaa.Fliplr(0.5),
iaa.GaussianBlur(sigma=(0, 1.5)),
sometimes(iaa.Affine(
scale={"x": (0.95, 1.1), "y": (0.95, 1.05)},
translate_percent={"x": (-0.05, 0.05), "y": (-0.05, 0.05)},
rotate=(-2, 2),
cval=(0, 255)
)),
sometimes(iaa.FastSnowyLandscape(lightness_threshold=(10, 30))),
sometimes(iaa.Snowflakes(density=(0.002, 0.004))),
sometimes(iaa.AddToHueAndSaturation((-5, 5)))
])
def random_generator(size=6, chars=string.ascii_uppercase + string.digits + " "):
"""
generate a random text
@size the number of generated characters
@chars the allowed list of characters to generate from
output a generated string
"""
return ''.join(random.choice(chars) for x in range(size))
def get_enlarged_p_n(img_size, input_rect, ratio = 0.2):
"""
enlarge the input_rect by ratio and shouldn't exceed the img_size limits
@img_size (width, height)
@input_rect [x0, y0, x1, y1]
output the enlarged new_rect by ratio
"""
new_rect = input_rect[:]
w_ratio = int((new_rect[2] - new_rect[0]) * ratio)
h_ratio = int((new_rect[3] - new_rect[1]) * ratio)
if new_rect[0] - w_ratio < 0:
new_rect[0] = 0
else:
new_rect[0] = new_rect[0] - w_ratio
if new_rect[2] + w_ratio > img_size[0]:
new_rect[2] = img_size[0]
else:
new_rect[2] = new_rect[2] + w_ratio
if new_rect[1] - h_ratio < 0:
new_rect[1] = 0
else:
new_rect[1] = new_rect[1] - h_ratio
if new_rect[3] + h_ratio > img_size[1]:
new_rect[3] = img_size[1]
else:
new_rect[3] = new_rect[3] + h_ratio
return new_rect
def compute_new_size_by_min_dim(input_size, min_threshold_dim = 80):
'''
upscale the minimum dimension of input_size to min_threshold_dim if it is smaller, and scale the other
dimension by the same ratio
return the new size
'''
new_size = input_size[:]
min_size = min(input_size[0], input_size[1])
if min_size == new_size[0]: # width
new_size[0] = min_threshold_dim
new_size[1] = int((min_threshold_dim/input_size[0]) * input_size[1])
else:
new_size[0] = int((min_threshold_dim/input_size[1]) * input_size[0])
new_size[1] = min_threshold_dim
return new_size #width, height
def get_random_bg():
"""
Generate a random solid background / or get a random image from patterns folder
"""
img = []
if random.random() > 0.6:
img_path = random.choice(bg_pattern)
img = cv2.imread(img_path, 1)
img = cv2.resize(img, (1000, 400))
else:
img = np.zeros((400, 1000, 3), np.uint8)
img[:] = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
return img.copy()
def generate_batch_of_triplets(batch_size=8):
"""
generate a batch of triplets where the anchor and positive share the same text, while the negative has a different text.
@batch_size the number of triplets to generate
output triplets [a][p][n], each with shape (batch_size, height, width, channels)
"""
# choose length for the batch
length = random.randint(2, 20)
random_b_str = random_generator(length)
font_b = random.randint(0, 7)
fontScale_b = random.uniform(0.9, 2.0)
thickness_b = random.randint(1, 4)
textsize_batch_area = cv2.getTextSize(random_b_str, font_b, fontScale_b, thickness_b)[0]
batch_img_size = (textsize_batch_area[0]+20, textsize_batch_area[1]+20) # add padding 10 from all sides - so (20 to width, 20 to height)
b_width, b_heigth = batch_img_size[0], batch_img_size[1]
b_width, b_heigth = compute_new_size_by_min_dim([b_width, b_heigth])
triplets=[np.zeros((batch_size, b_heigth, b_width, 3), dtype=np.uint8) for i in range(3)]
triplets_txt=[[],[],[]]
for b in range(batch_size):
# choose background
anchor_bg = get_random_bg()
w, h = anchor_bg.shape[1], anchor_bg.shape[0]
pos_bg = anchor_bg.copy()
if random.random() > 0.5:
pos_bg = get_random_bg()
pos_bg = cv2.resize(pos_bg, (w, h))
neg_bg = anchor_bg.copy()
if random.random() > 0.8:
neg_bg = get_random_bg()
neg_bg = cv2.resize(neg_bg, (w, h), interpolation = cv2.INTER_AREA)
# write anchor sentence / different sentence on same background / similar one
random_anchor_str = random_generator(length)
length_n = length
if random.random() > 0.9:
length_n = random.randint(2,30)
random_neg_str = random_generator(length_n)
# put the text on background
font_a = random.randint(0,5)
font_n = random.randint(0,5)
org = (40, int(h/2))
fontScale = random.uniform(0.9,2.0)
color = (0, 0, 0)
if random.random() > 0.8:
color = (255, 255, 255)
elif random.random() > 0.5:
color = (random.randint(0,255),random.randint(0,255),random.randint(0,255))
thickness = random.randint(1,4)
textsize_p = cv2.getTextSize(random_anchor_str, font_a, fontScale, thickness)[0]
image_anchor = cv2.putText(anchor_bg, random_anchor_str, org, font_a,
fontScale, color, thickness, cv2.LINE_AA)
image_pos = cv2.putText(pos_bg, random_anchor_str, org, font_a,
fontScale, color, thickness, cv2.LINE_AA)
textsize_n = cv2.getTextSize(random_neg_str, font_n, fontScale, thickness)[0]
image_neg = cv2.putText(neg_bg, random_neg_str, org, font_n,
fontScale, color, thickness, cv2.LINE_AA)
# crop anchor, pos, neg
cur_rect = [org[0]-10, org[1]-textsize_p[1]-10, org[0]+textsize_p[0]+20, org[1]+20] # x0, y0, x1, y1
text_area_a = image_anchor[cur_rect[1]:cur_rect[3], cur_rect[0]:cur_rect[2],:]
rect_p_20 = get_enlarged_p_n((image_anchor.shape[1],image_anchor.shape[0]), cur_rect)
text_area_p = image_pos[rect_p_20[1]:rect_p_20[3], rect_p_20[0]:rect_p_20[2],:]
rect_n_20 = get_enlarged_p_n((image_neg.shape[1],image_neg.shape[0]), cur_rect)
text_area_n = image_neg[rect_n_20[1]:rect_n_20[3], rect_n_20[0]:rect_n_20[2],:]
# apply augmentation to the anchor, positive, and negative crops
image_aug_a = seq_a(images=np.expand_dims(text_area_a,axis=0))[0]
image_aug_a = cv2.resize(image_aug_a, (b_width, b_heigth))
image_aug_a = cv2.normalize(image_aug_a, None, 0, 255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
image_aug_p = seq_p_n(images=np.expand_dims(text_area_p,axis=0))[0]
image_aug_p = cv2.resize(image_aug_p, (b_width, b_heigth))
image_aug_p = cv2.normalize(image_aug_p, None, 0, 255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
image_aug_n = seq_p_n(images=np.expand_dims(text_area_n,axis=0))[0]
image_aug_n = cv2.resize(image_aug_n, (b_width, b_heigth))
image_aug_n = cv2.normalize(image_aug_n, None, 0, 255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
triplets[0][b,:,:,:] = image_aug_a
triplets[1][b,:,:,:] = image_aug_p
triplets[2][b,:,:,:] = image_aug_n
triplets_txt[0].append(random_anchor_str)
triplets_txt[1].append(random_anchor_str)
triplets_txt[2].append(random_neg_str)
return triplets, triplets_txt
```
# Save to disk
```
import os
count = 10
sa_counter = 0
syn_save_path = "save/"
if not os.path.exists(syn_save_path):
os.makedirs(syn_save_path)
for yt in range(count):
print(yt)
triplets, gt = generate_batch_of_triplets(batch_size=6)
for item0_idx in range(len(triplets[0])):
curr_img = triplets[0][item0_idx]
curr_img = cv2.normalize(curr_img, None, 0, 255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
curr_img = cv2.cvtColor(curr_img, cv2.COLOR_RGB2BGR)
cv2.imwrite(syn_save_path+str(sa_counter)+"_"+"0"+"_"+gt[0][item0_idx]+".jpg", curr_img)
curr_img = triplets[1][item0_idx]
curr_img = cv2.normalize(curr_img, None, 0, 255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
curr_img = cv2.cvtColor(curr_img, cv2.COLOR_RGB2BGR)
cv2.imwrite(syn_save_path+str(sa_counter)+"_"+"1"+"_"+gt[1][item0_idx]+".jpg", curr_img)
curr_img = triplets[2][item0_idx]
curr_img = cv2.normalize(curr_img, None, 0, 255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
curr_img = cv2.cvtColor(curr_img, cv2.COLOR_RGB2BGR)
cv2.imwrite(syn_save_path+str(sa_counter)+"_"+"2"+"_"+gt[2][item0_idx]+".jpg", curr_img)
sa_counter+=1
del triplets, gt
```
# Draw Triplets
```
def drawTriplets(tripletbatch, nbmax=None):
"""
display (anchor, positive, negative) images for each triplet in the batch
"""
labels = ["Anchor", "Positive", "Negative"]
if (nbmax==None):
nbrows = tripletbatch[0].shape[0]
else:
nbrows = min(nbmax,tripletbatch[0].shape[0])
for row in range(nbrows):
fig=plt.figure(figsize=(16,2))
for i in range(3):
subplot = fig.add_subplot(1,3,i+1)
plt.imshow(tripletbatch[i][row,:,:])
subplot.title.set_text(labels[i])
triplets, txtgt = generate_batch_of_triplets(6)
print(triplets[0].shape)
drawTriplets(triplets)
```
# Multiple Regression Analysis: Further Issues
## Effects of Data Scaling on OLS Statistics
By analysing an example, we find that when the data for the **dependent variable** are scaled by a factor of $k$,
- the OLS coefficient estimates are scaled to $\DeclareMathOperator*{\argmin}{argmin}
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator*{\plim}{plim}
\newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}}
\newcommand{\ffrac}{\displaystyle \frac}
\newcommand{\asim}{\overset{\text{a}}{\sim}}
\newcommand{\space}{\text{ }}
\newcommand{\bspace}{\;\;\;\;}
\newcommand{\QQQ}{\boxed{?\:}}
\newcommand{\void}{\left.\right.}
\newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}
\newcommand{\d}[1]{\displaystyle{#1}}
\newcommand{\CB}[1]{\left\{ #1 \right\}}
\newcommand{\SB}[1]{\left[ #1 \right]}
\newcommand{\P}[1]{\left( #1 \right)}
\newcommand{\abs}[1]{\left| #1 \right|}
\newcommand{\norm}[1]{\left\| #1 \right\|}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\Exp}{\mathrm{E}}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\EE}{\mathbb{E}}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\ZZ}{\mathbb{Z}}
\newcommand{\QQ}{\mathbb{Q}}
\newcommand{\AcA}{\mathscr{A}}
\newcommand{\FcF}{\mathscr{F}}
\newcommand{\Var}[2][\,\!]{\mathrm{Var}_{#1}\left[#2\right]}
\newcommand{\Avar}[2][\,\!]{\mathrm{Avar}_{#1}\left[#2\right]}
\newcommand{\Cov}[2][\,\!]{\mathrm{Cov}_{#1}\left(#2\right)}
\newcommand{\Corr}[2][\,\!]{\mathrm{Corr}_{#1}\left(#2\right)}
\newcommand{\I}[1]{\mathrm{I}\left( #1 \right)}
\newcommand{\N}[1]{\mathcal{N} \left( #1 \right)}
\newcommand{\ow}{\text{otherwise}}k$ times, intercept included
- statistical significance won't change
- the standard error are scaled to $k$ times
- $t$ statistics won't change
- the endpoints for the CI are scaled to $k$ times
- $R$-squared won't change
- sum of squared residuals, $\text{SSR}$ are scaled to $k^2$ times
- standard error of the regression, $\hat\sigma = \sqrt{\ffrac{\text{SSR}} {n-k-1}}$, are scaled to $k$ times
Now suppose we scale one of the **independent variables**, say $x_1$, by a factor of $k$:
- only $\hat\beta_1$ is scaled, by a factor of $1/k$
- its standard error is scaled by $1/k$ as well, so the $t$ statistic is unchanged
- all other statistics stay the same
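Both sets of rules can be verified numerically in the simple-regression case; the data below are made up, and the closed-form OLS formulas are used directly instead of a regression package:

```python
# Closed-form simple OLS; check how estimates react to rescaling y or x by k.
from statistics import mean

def ols(x, y):
    """Return (beta0_hat, beta1_hat, SSR) for y = b0 + b1*x + u."""
    xb, yb = mean(x), mean(y)
    b1 = (sum((a - xb) * (b - yb) for a, b in zip(x, y))
          / sum((a - xb) ** 2 for a in x))
    b0 = yb - b1 * xb
    ssr = sum((b - b0 - b1 * a) ** 2 for a, b in zip(x, y))
    return b0, b1, ssr

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
k = 10.0

b0, b1, ssr = ols(x, y)
b0y, b1y, ssry = ols(x, [k * v for v in y])    # rescale the dependent variable
b0x, b1x, ssrx = ols([k * v for v in x], y)    # rescale the regressor

assert abs(b0y - k * b0) < 1e-9 and abs(b1y - k * b1) < 1e-9  # coefficients x k
assert abs(ssry - k ** 2 * ssr) < 1e-6                        # SSR x k^2
assert abs(b1x - b1 / k) < 1e-12 and abs(b0x - b0) < 1e-9     # slope / k only
```

Note that rescaling the regressor leaves the fit itself, and hence the SSR and $R$-squared, unchanged.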
### Beta Coefficients
Standard deviation and mean are key features of the data, so now we introduce regression using $z$-scores, where the variable is ***standardized*** in the sample by *subtracting off its mean* and *dividing by its standard deviation*.
$$\begin{cases}
y_i = \hat\beta_0 + \hat\beta_1 x_{i1} + \hat\beta_2 x_{i2} + \cdots + \hat\beta_k x_{ik} + \hat u_i \\
\bar y = \hat\beta_0 + \hat\beta_1 \bar x_1 + \hat\beta_2 \bar x_{2} + \cdots + \hat\beta_k \bar x_k + 0
\end{cases} \\[1.5em]
\Rightarrow\begin{align}
\ffrac{y_i-\bar y} {\hat\sigma_y} &= \P{\ffrac{\hat\sigma_1} {\hat\sigma_y}} \hat\beta_1 \P{\ffrac{x_{i1} - \bar x_1} {\hat\sigma_1}} + \P{\ffrac{\hat\sigma_2} {\hat\sigma_y}} \hat\beta_2 \P{\ffrac{x_{i2} - \bar x_2} {\hat\sigma_2}} + \cdots + \P{\ffrac{\hat\sigma_k} {\hat\sigma_y}} \hat\beta_k \P{\ffrac{x_{ik} - \bar x_k} {\hat\sigma_k}} + \ffrac{\hat u_i} {\hat\sigma_y} \\
z_y &\equiv \hat b_1 z_1 + \hat b_2 z_2 + \cdots + \hat b_k z_k + \text{error}
\end{align}
$$
where $z_y$ denotes the $z$-score of $y$, $z_i$ for $i=1,2,\dots,k$ is the $z$-score of $x_i$. And the new coefficients are $\hat b_j = \ffrac{\hat\sigma_j} {\hat\sigma_y} \hat\beta_j$, called the ***standardized coefficients*** or ***beta coefficients***.
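The relation $\hat b_j = \P{\hat\sigma_j/\hat\sigma_y}\hat\beta_j$ can be verified on a small simulated sample (hypothetical data; any OLS routine would do):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(2, 3, n)
x2 = rng.normal(-1, 0.5, n)
y = 4 + 1.5 * x1 - 2.0 * x2 + rng.normal(size=n)

# ordinary OLS with an intercept
X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# regression on z-scores: standardize every variable, the intercept vanishes
def z(v):
    return (v - v.mean()) / v.std(ddof=1)

Z = np.column_stack([z(x1), z(x2)])
b = np.linalg.lstsq(Z, z(y), rcond=None)[0]

# beta coefficients: b_j = (sigma_j / sigma_y) * beta_j
expected = beta[1:] * np.array([x1.std(ddof=1), x2.std(ddof=1)]) / y.std(ddof=1)
assert np.allclose(b, expected)
```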
## More on Functional Form
### More on Using Logarithmic Functional Forms
$Review$
>For model $\log\P{y} = \beta_0 + \beta_1\log\P{x_1} + \beta_2 x_2 + u$, $\beta_1 = \ffrac{\partial \log\P{y}} {\partial \log\P{x_1}}$, the elasticity, presenting the percentage change of $y$ if $x_1$ increases by $1\%$.
Benefits: slope coefficients read directly as elasticities or semi-elasticities, they are invariant to rescaling, and taking logs often narrows the range of a variable, mitigating skew and heteroskedasticity. Drawbacks: logs cannot be used when a variable takes zero or negative values, and predicting $y$ itself requires the extra steps discussed later in this chapter.
***
About rescaling: for variables appearing in logarithmic form, their slope coefficients are invariant to rescalings.
$$ \log\P{y_i} = \beta_0 + \beta_1 x_i + u_i \Rightarrow \log\P{c_1 y_i} = \P{\log\P{c_1} +\beta_0} + \beta_1 x_i + u_i \\
y_i = \beta_0 + \beta_1 \log\P{x_i} + u_i \Rightarrow y_i = \P{\beta_0 - \beta_1\log\P{c_1}} + \beta_1 \log\P{c_1 x_i} + u_i$$
### Models with Quadratics
A simple example: the estimated equation $\hat y = \hat\beta_0 +\hat\beta_1 x + \hat\beta_2 x^2$, with the approximation:
$$\Delta \hat y \approx \P{\hat\beta_1 + 2\hat\beta_2 x}\Delta x\Rightarrow \ffrac{\Delta \hat y} {\Delta x} = \hat\beta_1 + 2\hat\beta_2 x$$
When $\hat\beta_1\cdot\hat\beta_2 < 0$, the marginal effect changes sign, and the turning point is at
$$x^* = \abs{\ffrac{\hat\beta_1} {2\hat\beta_2}}$$
$Remark$
>Another possible model: $\log\P{y} = \beta_0 + \beta_1\log\P{x} +\beta_2\P{\log\P{x}}^2$. Its elasticity is $\beta_1$ when $\beta_2 = 0$; otherwise
>
>$$\ffrac{\partial\log\P{y}} {\partial\log\P{x}} = \beta_1 + 2\beta_2 \log\P{x}$$
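With hypothetical fitted coefficients, the turning point and the sign change of the marginal effect look like this:

```python
# hypothetical fitted coefficients with beta1 * beta2 < 0
b1, b2 = 0.30, -0.005

def marginal(x):
    """Approximate marginal effect d y-hat / d x of the quadratic model."""
    return b1 + 2 * b2 * x

x_star = abs(b1 / (2 * b2))                 # turning point
assert x_star == 30.0
assert abs(marginal(x_star)) < 1e-12        # zero slope at the turning point
assert marginal(29) > 0 > marginal(31)      # effect flips from positive to negative
```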
### Models with Interaction Terms
Sometimes we have the model like:
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 \cdot x_2 + u$$
then the partial effect of $x_2$ on $y$ is $\ffrac{\Delta y} {\Delta x_2} = \beta_2 + \beta_3 x_1$, meaning there is an ***interaction effect*** between $x_1$ and $x_2$. Reported alone, $\beta_2$ is the effect of $x_2$ at $x_1 = 0$, which is often not an interesting value, so the raw parameterization complicates interpretation.
For a better interpretation, ***reparameterize*** the model from the original one to
$$y = \alpha_0 + \delta_1 x_1 + \delta_2 x_2 + \beta_3 \P{x_1 - \mu_1}\P{x_2 - \mu_2} +u$$
- Easy interpretation of all parameters
- Standard errors for partial effects at the mean values available
- If necessary, interaction may be centered at other interesting values
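A quick numerical check of the reparameterization (simulated data, not from the notes): both parameterizations give the same fit, and $\delta_2 = \beta_2 + \beta_3\mu_1$ is the partial effect of $x_2$ at the mean of $x_1$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x1 = rng.normal(5, 2, n)
x2 = rng.normal(10, 3, n)
y = 1 + 0.5 * x1 + 0.8 * x2 + 0.2 * x1 * x2 + rng.normal(size=n)

def fit(cols):
    X = np.column_stack([np.ones(n)] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b = fit([x1, x2, x1 * x2])                       # original parameterization
mu1, mu2 = x1.mean(), x2.mean()
d = fit([x1, x2, (x1 - mu1) * (x2 - mu2)])       # centered interaction

# delta_2 is the partial effect of x_2 evaluated at the mean of x_1
assert np.allclose(d[2], b[2] + b[3] * mu1)
assert np.allclose(d[3], b[3])                   # beta_3 itself is unchanged
```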
## More on Goodness-of-Fit and Selection of Regressors
$R^2$ is simply an estimate of how much of the variation in $y$ is explained by $x_1, x_2, \dots,x_k$ in the population. Two points that are easy to forget:
- A high $R^2$ does not imply a causal interpretation
- A low $R^2$ does not preclude precise estimation of partial effects
### Adjusted $R$-Squared
$Review$
The ordinary $R$-squared: $R^2 = 1-\ffrac{\text{SSR}} {\text{SST}}$, an estimate of $1-\ffrac{\sigma_{u}^2} {\sigma_{y}^2}$, the ***population $R$-squared***. Here's the adjusted one:
$$\bar R^2 = 1-\ffrac{\ffrac{\text{SSR}} {n-k-1}} {\ffrac{\text{SST}} {n-1}} = 1-\ffrac{\hat \sigma^2} {\ffrac{ \text{SST}} {n-1}}$$
- it imposes a penalty for adding additional independent variables to a model
- Unlike $R^2$, which never decreases when a regressor is added, $\bar R^2$ increases if and only if the $t$ statistic on the new variable is greater than one in absolute value
- it can be negative
$$\bar R^2 = 1 - \P{1-R^2} \ffrac{n-1} {n-k-1}$$
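The last identity makes computing $\bar R^2$ from $R^2$ a one-liner (hypothetical numbers for illustration):

```python
def adjusted_r2(r2, n, k):
    """Adjusted R-squared for n observations and k regressors (plus an intercept)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

assert round(adjusted_r2(0.30, 100, 5), 4) == 0.2628   # penalized below R^2
assert adjusted_r2(0.05, 25, 10) < 0                   # can indeed be negative
```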
### Using Adjusted $R$-Squared to Choose between Nonnested Models
Two models are ***nonnested*** if neither is a special case of the other.
If we're comparing two nonnested models to decide which one is better,
- if they have different numbers of parameters, $R^2$ cannot settle the question; we should take the degrees of freedom into account and use $\bar R^2$
- neither measure is comparable when the two models define the dependent variable differently
### Controlling for Too Many Factors in Regression Analysis
Certain variables should not be held fixed; if they are, we are ***over controlling***. Which controls belong in the regression depends on the question we are asking.
### Adding Regressors to Reduce the Error Variance
With more regressors:
1. more multicollinearity problems
2. the error variance is reduced
3. but it's hard to find variables that are uncorrelated with the other regressors
## Prediction and Residual Analysis
### Confidence Intervals for Predictions
The estimated model: $\hat y = \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2 + \cdots + \hat\beta_k x_k$. The parameter we want to estimate is:
$$\theta_0 = \Exp\SB{y\mid x_1 = c_1, x_2 = c_2,\dots,x_k = c_k} = \beta_0 + \beta_1 c_1 + \cdots + \beta_k c_k$$
Its estimator is $\hat\theta_0 = \hat\beta_0 + \hat\beta_1 c_1 + \cdots + \hat\beta_k c_k$
The confidence interval is $\hat\theta_0 \pm t_{0.025}\cdot \text{se}\P{\hat\theta_0}$. How do we find $\text{se}\P{\hat\theta_0}$? We can write
$$\begin{align}
y &= \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k + u \\
&= \P{\theta_0 -\beta_1 c_1 - \cdots - \beta_k c_k} + \beta_1 x_1 + \cdots + \beta_k x_k + u\\
&= \theta_0 + \beta_1\P{x_1 - c_1} + \beta_2 \P{x_2 - c_2} + \cdots + \beta_k \P{x_k - c_k} + u
\end{align}$$
Then regress $y$ on $x_1-c_1, \dots, x_k-c_k$; the intercept of this regression is $\hat\theta_0$, and its standard error is $\text{se}\P{\hat\theta_0}$. Next we take the *variance of the unobserved error* into consideration.
Let $x_1^0,x_2^0,\dots,x_k^0$ be the new values of the independent variables and $u^0$ the unobserved error. Then we have
$$y^0 = \beta_0 + \beta_1 x_1^0 + \cdots + \beta_k x_k^0 +u^0$$
As before, our best prediction of $y^0$ is the expected value of $y^0$ given the explanatory variables. So there's actually a ***prediction error***:
$$\hat e^0 = y^0 - \hat y^0 = \P{\beta_0 + \beta_1 x_1^0 + \cdots + \beta_k x_k^0 } + u^0 - \hat y^0$$
and by the **zero mean** of the unobserved error and the unbiasedness of the parameters we have
$$\Exp\SB{\hat e^0} = \P{\beta_0 + \beta_1 x_1^0 + \cdots + \beta_k x_k^0 } + 0 - \P{\beta_0 + \beta_1 x_1^0 + \cdots + \beta_k x_k^0 } = 0$$
However, its variance is not $0$. The ***variance of the prediction error*** (conditional on all in-sample values of the independent variables) is
$$\Var{\hat e^0} = \Var{\hat y^0} + \Var{u^0} = \Var{\hat y^0} + \sigma^2$$
$$\text{se}\P{\hat e^0} = \CB{\SB{\text{se}\P{\hat y^0}}^2+\hat\sigma^2}^{0.5}$$
Using the same reasoning as for the $t$ statistics of the $\hat\beta_j$, $\ffrac{\hat e^0} {\text{se}\P{\hat e^0}}$ has a $t$ distribution with $df = n-\P{k+1}$. Therefore,
$$P\CB{-t_{0.025} \leq \ffrac{\hat e^0} {\text{se}\P{\hat e^0}} \leq t_{0.025}} = 0.95$$
and the ***prediction interval*** is $\hat y^0 \pm t_{0.025} \cdot \text{se} \P{\hat e^0}$, which is *wider* than the confidence interval for $\hat y^0$ because of the extra $\hat\sigma^2$ term in $\text{se}\P{\hat e^0}$.
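A minimal sketch of the interval arithmetic, with hypothetical numbers for $\hat y^0$, $\text{se}\P{\hat y^0}$, $\hat\sigma$, and the $t$ critical value:

```python
import math

def prediction_interval(y_hat, se_y_hat, sigma_hat, t_crit):
    """95% prediction interval for one new observation y0."""
    se_e = math.sqrt(se_y_hat**2 + sigma_hat**2)   # se(e0): extra sigma^2 term
    return y_hat - t_crit * se_e, y_hat + t_crit * se_e

lo, hi = prediction_interval(y_hat=10.0, se_y_hat=0.3, sigma_hat=2.0, t_crit=1.98)
ci_half_width = 1.98 * 0.3                         # CI for E[y0 | x] ignores sigma^2
assert (hi - lo) / 2 > ci_half_width               # prediction interval is wider
```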
### Residual Analysis
### Predicting $y$ When $\log\P{y}$ Is the Dependent Variable
The model is $\log\P{y} = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k +u$, so the predicted $\log\P{y}$ is $\widehat{\log\P{y}} = \hat\beta_0 +\hat\beta_1 x_1 + \hat\beta_2 x_2 + \cdots + \hat\beta_k x_k$. However, we CANNOT predict $y$ as $\hat y =\exp\P{\widehat{\log\P{y}}}$. Under assumptions $\text{MLR}.1$ through $\text{MLR}.6$, we have
$$\Exp\SB{y \mid \mathbf{x}} = \exp\P{\ffrac{\sigma^2} {2}} \cdot\exp\P{\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k}$$
>$Remark$
>
>Here the term $\exp\P{\ffrac{\sigma^2} {2}}$ comes from the moment generating function of a normal random variable
Then the actual prediction should be $\hat y = \exp\P{\ffrac{\hat\sigma^2} {2}} \cdot\exp\P{\widehat{\log\P{y}}}$. This prediction is not unbiased, but it is consistent.
This method depends on the normality of the error term, $u$. Here's an alternative way to avoid that:
If we just assume that $u$ is independent of the explanatory variables, then we have
$$\Exp\SB{y\mid \mathbf{x}} = \alpha_0 \exp\P{\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k},\bspace \alpha_0 = \Exp\SB{\exp\P{u}}>1$$
Given an estimate $\hat\alpha_0$, we can predict $\hat y = \hat\alpha_0\exp\P{\widehat{\log\P{y}}}$. So how do we estimate $\alpha_0$ without the normality assumption?
### Smearing Estimate
The first method is based on $\alpha_0 = \Exp\SB{\exp\P{u}}$. We replace the unobserved errors $u_i$ with the corresponding OLS residuals: $\hat u_i = \log\P{y_i} - \hat\beta_0 - \hat\beta_1x_{i1} - \hat\beta_2 x_{i2} - \cdots - \hat\beta_k x_{ik}$, so that
$$\hat\alpha_0 = \ffrac{1} {n} \sum_{i=1}^{n} \exp\P{\hat u_i}$$
A **method-of-moments estimator**! It is consistent but not unbiased, because we have replaced $u_i$ with $\hat u_i$ inside a nonlinear function. This version of $\hat\alpha_0$ is called the ***smearing estimate***.
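A simulated check of the smearing estimate (synthetic data, not from the notes): with $u \sim N(0, 0.5^2)$ the estimate should land near $\exp\P{\sigma^2/2} \approx 1.13$, and Jensen's inequality keeps it above $1$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)
u = rng.normal(0, 0.5, n)                 # sigma = 0.5
logy = 1.0 + 0.7 * x + u

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, logy, rcond=None)[0]
u_hat = logy - X @ beta                   # OLS residuals

alpha0_hat = np.exp(u_hat).mean()         # smearing estimate
assert alpha0_hat > 1                     # Jensen: mean(exp(u)) >= exp(mean(u)) = 1
assert 1.0 < alpha0_hat < 1.3             # near exp(sigma^2 / 2) ~ 1.13
```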
### Regression Estimate
A different estimate of $\alpha_0$ is based on a *simple regression through the origin*. We first define:
$$m_i = \exp\P{\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_k x_{ik}}$$
so that $\Exp\SB{y_i\mid m_i} = \alpha_0 m_i$ and if we could really observe what $m_i$ is, we could obtain an *unbiased* estimator of $\alpha_0$ from the regression $y_i$ on $m_i$ without an intercept.
That is not possible, so we replace the $\beta_j$ with their OLS estimates and obtain $\hat m_i = \exp\P{\widehat{\log\P{y_i}}}$, where $\widehat{\log\P{y_i}}$ are the fitted values from the regression of $\log\P{y_i}$ on $x_{i1}, x_{i2}, \dots, x_{ik}$ (a separate regression, with an intercept, just to obtain the fitted values). The OLS slope estimate from the simple regression of $y_i$ on $\hat m_i$ (without an intercept) is then:
$$\check{\alpha}_0 = \ffrac{\d{\sum_{i=1}^{n} \hat m_i y_i}} {\d{\sum_{i=1}^{n} \hat m_i^2}}$$
And this is the ***regression estimate*** of $\alpha_0$, still, consistent but not unbiased.
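The regression estimate on the same kind of simulated data (again synthetic, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
x = rng.normal(size=n)
logy = 0.5 + 0.3 * x + rng.normal(0, 0.4, n)
y = np.exp(logy)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, logy, rcond=None)[0]
m_hat = np.exp(X @ beta)                  # exp of the fitted values

# slope from regressing y on m_hat through the origin
alpha0_check = (m_hat @ y) / (m_hat @ m_hat)
y_pred = alpha0_check * m_hat             # final predictions for y

assert 1.0 < alpha0_check < 1.2           # near exp(0.4**2 / 2) ~ 1.08
```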
$Remark$
> The **smearing estimate** guarantees an estimate greater than $1$, while the **regression estimate** can't. The **regression estimate** may fall well below $1$ when the assumption of independence between $u$ and the $x_j$ is violated.
$Steps$
1. Obtain the fitted values, $\widehat{\log\P{y_i}}$, and residuals, $\hat u_i$, from the regression $\log\P{y}$ on $x_{1}, x_{2}, \dots, x_{k}$, with intercept.
2. Calculate $\hat\alpha_0$ or $\check\alpha_0$
3. For given values of $x_{1}, x_{2}, \dots, x_{k}$, obtain $\widehat{\log\P{y}}$ from the estimated regression
4. Obtain the prediction $\hat y = \hat\alpha_0\exp\P{\widehat{\log\P{y}}}$ (or with $\check\alpha_0$ in place of $\hat\alpha_0$)
***
Further, what is the goodness-of-fit of such a model? One measure that is easy to implement, and has the same value whether we estimate $\alpha_0$ as $\exp\P{\ffrac{\hat\sigma^2} {2}}$, as $\hat\alpha_0 = \ffrac{1} {n} \sum_{i=1}^{n} \exp\P{\hat u_i}$, or as $\check{\alpha}_0 = \ffrac{\d{\sum\nolimits_{i=1}^{n} \hat m_i y_i}} {\d{\sum\nolimits_{i=1}^{n} \hat m_i^2}}$: the $R$-squared, defined as
$$R^2 = 1 - \ffrac{\text{SSR}} {\text{SST}}$$
or, equivalently, the square of the correlation between $y_i$ and $\hat y_i$, $\rho^2\P{y_i,\hat y_i}$. So when the dependent variable is $\log\P{y}$ we would define
$$R^2 = \rho^2\P{y_i,\hat y_i} = \rho^2\P{y_i,\hat \alpha_0 \hat m_i} = \rho^2\P{y_i,\hat m_i}$$
***
```
# The URL of the MISP instance to connect to
misp_url = 'http://127.0.0.1:8080'
# Can be found in the MISP web interface under
# http://+MISP_URL+/users/view/me -> Authkey
misp_key = 'LBelWqKY9SQyG0huZzAMqiEBl6FODxpgRRXMsZFu'
# Should PyMISP verify the MISP certificate
misp_verifycert = False
```
# Getting the API key (automatically generated on the training VM)
```
from pathlib import Path
api_file = Path('apikey')
if api_file.exists():
misp_url = 'http://127.0.0.1'
misp_verifycert = False
with open(api_file) as f:
misp_key = f.read().strip()
print(misp_key)
```
# Initialize PyMISP - NG
```
from pymisp import ExpandedPyMISP
misp = ExpandedPyMISP(misp_url, misp_key, misp_verifycert, debug=False)
```
# Index Search (fast, only returns events metadata)
## Search unpublished events
**WARNING**: By default, the search query will only return the events listed on the index page
```
r = misp.search_index(published=False)
print(r)
```
## Get the meta data of events
```
r = misp.search_index(eventid=[17217, 1717, 1721, 17218])
```
## Search Tag & mix with other parameters
```
r = misp.search_index(tags=['tlp:white'], pythonify=True)
for e in r:
print(e)
r = misp.search_index(tag='TODO:VT-ENRICHMENT', published=False)
r = misp.search_index(tag=['!TODO:VT-ENRICHMENT', 'tlp:white'], published=False) # ! means "not this tag"
```
## Full text search on event info field
```
r = misp.search_index(eventinfo='circl')
```
## Search by org
```
r = misp.search_index(org='CIRCL')
```
## Search updated events
```
r = misp.search_index(timestamp='1h')
```
# Search full events (Slower, returns full events)
## Getting timestamps
```
from datetime import datetime, date, timedelta
from dateutil.parser import parse
int(datetime.now().timestamp())
d = parse('2018-03-24')
int(d.timestamp())
today = int(datetime.today().timestamp())
yesterday = int((datetime.today() - timedelta(days=1)).timestamp())
print(today, yesterday)
complex_query = misp.build_complex_query(or_parameters=['uibo.lembit@mail.ee', '103.195.185.222'])
r = misp.search(value=complex_query, pythonify=True)
print(r)
r = misp.search(category='Payload delivery')
r = misp.search(value='uibo.lembit@mail.ee', metadata=True, pythonify=True) # no attributes
r = misp.search(timestamp=['2h', '1h'])
r = misp.search(value='8.8.8.8', enforceWarninglist=True)
r = misp.search(value='8.8.8.8', deleted=True)
r = misp.search(value='8.8.8.8', publish_timestamp=1521846000) # everything published since that timestamp
r = misp.search(value='8.8.8.8', last='1d') # everything published in the last <interval>
r = misp.search(value='8.8.8.8', timestamp=[yesterday, today]) # everything updated since that timestamp
r = misp.search(value='8.8.8.8', withAttachments=True) # Return attachments
```
# Search for attributes
```
r = misp.search(controller='attributes', value='8.8.8.9')
r = misp.search(controller='attributes', value='wrapper.no', event_timestamp='5d') # only consider events updated since this timestamp
r
```
## Because reason
```
tag_to_remove = 'foo'
events = misp.search(tags=tag_to_remove, pythonify=True)
for event in events:
for tag in event.tags:
if tag.name == tag_to_remove:
print(f'Got {tag_to_remove} in {event.info}')
misp.untag(event.uuid, tag_to_remove)
break
for attribute in event.attributes:
for tag in attribute.tags:
if tag.name == tag_to_remove:
print(f'Got {tag_to_remove} in {attribute.value}')
misp.untag(attribute.uuid, tag_to_remove)
break
logs = misp.search_logs(model='Tag', title='tlp:white')
print(logs)
logs = misp.search_logs(model='Event', pythonify=True)
#print(logs)
for l in logs:
print(l.title)
log = misp.search_logs(model='Tag', title=tag_to_remove)[0]
roles = misp.get_roles_list()
for r in roles:
if r['Role']['name'] == 'User':
new_role = r['Role']['id']
break
user = misp.get_user(log['Log']['user_id'])
user['User']['role_id'] = new_role
misp.edit_user(user['User']['id'], **user['User'])
```
```
name = '2016-06-10-arcgis-intro'
title = 'Introduction to ArcGIS and its Python interface'
tags = 'gis, maps, basics'
author = 'Melanie Froude'
from nb_tools import connect_notebook_to_post
from IPython.core.display import HTML
html = connect_notebook_to_post(name, title, tags, author)
```
Today Melanie led the meeting with a session on the ArcGIS software and how we can use Python to automate geospatial data processing. The slides are available below.
We started with a brief introduction to the types of data and analysis you can do in ArcGIS. Then Melanie demonstrated how to produce a 3D terrain model using the ArcScene toolbox.
### Presentation
```
# embed pdf into an automatically resized window (requires imagemagick)
w_h_str = !identify -format "%w %h" ../pdfs/arcgis-intro.pdf[0]
HTML('<iframe src=../pdfs/arcgis-intro.pdf width={0[0]} height={0[1]}></iframe>'.format([int(i)*0.8 for i in w_h_str[0].split()]))
```
We all agreed that ArcGIS has a lot to offer to geoscientists. But what makes this software even more appealing is that you can work in a command-line interface using Python (ArcPy module).
So we looked at how to run processes using the Python window command-by-command and how you might integrate ArcGIS processes within a longer script. This was exemplified by Melanie's script that she used to analyse vegetation regrowth after a volcanic eruption.
The script takes two vegetation photos in GeoTIFF format retrieved by Landsat as input and calculates the Normalised Difference Vegetation Index (NDVI) for each of them. We can then compare the output to see how vegetation has changed over the time period.
<div class="alert alert-warning" style="font-size: 100%">
<ul>
<li>Standard ArcGIS uses Python 2.7 (Python 3 is available in ArcGIS Pro)</li>
<li>The commands below require ArcGIS installed, and hence are not in executable cells in this notebook.</li>
</ul>
</div>
### ArcPy script example: NDVI of the two geotiff images
Import modules
```
import arcpy, string, arcpy.sa
from arcpy import env
```
Check out extension and set overwrite outputs
```
arcpy.CheckOutExtension("spatial")
arcpy.env.overwriteOutput = True
```
Stop outputs being added to the map
```
arcpy.env.addOutputsToMap = "FALSE"
```
Set workspace and declare variables
```
env.workspace = ("/path/to/demo/demo1")
print(arcpy.env.workspace)
```
Load the data
```
rasterb3 = arcpy.Raster("p046r28_5t900922_nn3.tif")
rasterb4 = arcpy.Raster("p046r28_5t900922_nn4.tif")
```
Describe variables
```
desc = arcpy.Describe(rasterb4)
print(desc.dataType)
print(desc.meanCellHeight)
```
Calculate the NDVI
```
Num = arcpy.sa.Float(rasterb4-rasterb3)
Denom = arcpy.sa.Float(rasterb4 + rasterb3)
NDVI1990 = arcpy.sa.Divide(Num, Denom)
```
Save the result as another .tif image
```
NDVI1990.save("/path/to/demo/demo1/NDVI1990.tif")
```
Do the same calculation for the images from a later year
```
rasterb3a = arcpy.Raster("L71046028_02820050721_B30.TIF")
rasterb4a = arcpy.Raster("L71046028_02820050721_B40.TIF")
Num = arcpy.sa.Float(rasterb4a-rasterb3a)
Denom = arcpy.sa.Float(rasterb4a + rasterb3a)
NDVI2005 = arcpy.sa.Divide(Num, Denom)
```
And after saving the second result, calculate the NDVI difference
```
NDVI2005.save("/path/to/demo/demo1/NDVI2005.tif")
NDVIdiff = NDVI2005 - NDVI1990
NDVIdiff.save("/path/to/demo/demo1/NDVIdiff.tif")
```
The result is shown on slide 5.
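Since ArcPy needs an ArcGIS install, here is a library-free NumPy sketch of the same NDVI arithmetic, using tiny synthetic arrays standing in for the red and near-infrared bands:

```python
import numpy as np

# tiny synthetic stand-ins for Landsat band 3 (red) and band 4 (near-infrared)
red = np.array([[0.2, 0.3], [0.4, 0.1]])
nir = np.array([[0.6, 0.5], [0.5, 0.6]])

# NDVI = (NIR - Red) / (NIR + Red); values lie in [-1, 1]
ndvi = (nir - red) / (nir + red)

assert ndvi.min() >= -1 and ndvi.max() <= 1
assert np.isclose(ndvi[0, 0], 0.5)        # (0.6 - 0.2) / (0.6 + 0.2)
```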
```
HTML(html)
```