# MicaSense RedEdge Image Processing Tutorial 1
## Overview
This tutorial assumes you have gone through the basic setup [here](./MicaSense%20Image%20Processing%20Setup.html) and your system is set up and ready to go.
In this tutorial, we will walk through how to convert RedEdge data from raw images to radiance and then to reflectance. We will cover the tools required to do this, and walk through some of the basic image processing and radiometric conversions.
### Opening an image with pyplot
RedEdge 16-bit images can be read directly into numpy arrays using matplotlib's `imread` function, and then displayed inline using matplotlib's `imshow` function.
```
import cv2
import matplotlib.pyplot as plt
import numpy as np
import os,glob
import math
%matplotlib inline
imagePath = os.path.join('.','data','0000SET','000')
imageName = os.path.join(imagePath,'IMG_0000_4.tif')
# Read raw image DN values
# reads 16 bit tif - this will likely not work for 12 bit images
imageRaw=plt.imread(imageName)
# Display the image
fig, ax = plt.subplots(figsize=(8,6))
ax.imshow(imageRaw, cmap='gray')
plt.show()
```
### MicaSense Utilities Module
For many of the steps in the tutorial, we will use code from the MicaSense utilities module. The code is in the micasense directory and can be imported via normal python import commands using the syntax `import micasense` or `import micasense.submodule as short_name` for use in this and other scripts. While we will not cover all of the utility functions in this tutorial, they are available for reference and some will be used and discussed in future tutorials.
### Adding a colorbar
We will start by using a plotting function in `micasense.plotutils` that adds a colorbar to the display, so that we can more easily see changes in the values in the images and also see the range of the image values after various conversions. This function also colorizes the grayscale images, so that changes can more easily be seen. Depending on your viewing style, you may prefer a different color map; you can select one here, or browse the colormaps on the [matplotlib site](https://matplotlib.org/users/colormaps.html).
```
import micasense.plotutils as plotutils
# Optional: pick a color map that fits your viewing style
# one of 'gray', 'viridis', 'plasma', 'inferno', 'magma', 'nipy_spectral'
plotutils.colormap('viridis');
fig = plotutils.plotwithcolorbar(imageRaw, title='Raw image values with colorbar')
```
### Reading RedEdge Metadata
In order to perform various processing on the images, we need to read the metadata of each image. For this we use ExifTool. We can read standard image capture metadata such as location, UTC time, imager exposure and gain, but also RedEdge specific metadata which can make processing workflows easier.
For example, each image contains a unique capture identifier. Capture identifiers are shared between all 5 images captured by RedEdge at the same moment, and can be used to unambiguously group images in post processing, regardless of how the images are named or stored on disk. Each image also contains a flight identifier which is the same for all images taken during a single power cycle of the camera. This can be used in post-processing workflows to group images and in many cases, more easily identify when the vehicle took off and landed.
```
import micasense.metadata as metadata
exiftoolPath = None
if os.name == 'nt':
    exiftoolPath = os.environ.get('exiftoolpath')
# get image metadata
meta = metadata.Metadata(imageName, exiftoolPath=exiftoolPath)
cameraMake = meta.get_item('EXIF:Make')
cameraModel = meta.get_item('EXIF:Model')
firmwareVersion = meta.get_item('EXIF:Software')
bandName = meta.get_item('XMP:BandName')
print('{0} {1} firmware version: {2}'.format(cameraMake,
                                             cameraModel,
                                             firmwareVersion))
print('Exposure Time: {0} seconds'.format(meta.get_item('EXIF:ExposureTime')))
print('Imager Gain: {0}'.format(meta.get_item('EXIF:ISOSpeed')/100.0))
print('Size: {0}x{1} pixels'.format(meta.get_item('EXIF:ImageWidth'),meta.get_item('EXIF:ImageHeight')))
print('Band Name: {0}'.format(bandName))
print('Center Wavelength: {0} nm'.format(meta.get_item('XMP:CentralWavelength')))
print('Bandwidth: {0} nm'.format(meta.get_item('XMP:WavelengthFWHM')))
print('Capture ID: {0}'.format(meta.get_item('XMP:CaptureId')))
print('Flight ID: {0}'.format(meta.get_item('XMP:FlightId')))
print('Focal Length: {0}'.format(meta.get_item('XMP:FocalLength')))
```
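As a quick illustration of the grouping described above, the capture identifier can key a dictionary of image lists. This sketch uses made-up file names and IDs (in practice you would read `XMP:CaptureId` from each file with the metadata helper):

```python
from collections import defaultdict

# Hypothetical (file name, capture ID) pairs as a metadata scan might return them
images = [
    ('IMG_0000_1.tif', 'capture-A'),
    ('IMG_0000_2.tif', 'capture-A'),
    ('IMG_0001_1.tif', 'capture-B'),
]

captures = defaultdict(list)
for name, capture_id in images:
    captures[capture_id].append(name)

# captures now maps each capture ID to the images taken at that moment
```

The same pattern works for the flight identifier if you want to split a day of imagery into individual flights.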
### Converting raw images to Radiance
Ultimately most RedEdge users want to calibrate raw images from the camera into reflectance maps. This can be done using off-the-shelf software from third parties, but you are here because there is no fun in that! Along with this tutorial we have included some helper utilities that will handle much of this conversion for you, but here we will walk through a few of those functions to discuss what is happening inside.
Any RedEdge workflow must include these common steps:
1. Un-bias images by accounting for the dark pixel offset
1. Compensate for imager-level effects
1. Compensate for optical chain effects
1. Normalize images by exposure and gain settings
1. Convert to a common unit system (radiance)
All of these are handled by the `micasense.utils.raw_image_to_radiance(metadata, raw_image)` function. Let's take a look at that function in more detail.
First, we get the darkPixel values. These values come from optically-covered pixels on the imager which are exposed at the same time as the image pixels. They measure the small amount of random charge generation in each pixel, independent of incoming light, which is common to all semiconductor imaging devices.
```python
blackLevel = np.array(meta.get_item('EXIF:BlackLevel'))
darkLevel = blackLevel.mean()
```
Now, we get the imager-specific calibrations.
```python
a1, a2, a3 = meta.get_item('XMP:RadiometricCalibration')
```
We get the parameters of the optical chain (vignette) effects and create a vignette map. This map will be multiplied by the black-level corrected image values to reverse the darkening seen at the image corners. See the `vignette_map` function for the details of the vignette parameters and their use.
```python
V, x, y = vignette_map(meta, xDim, yDim)
```
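To make the idea concrete, here is a minimal sketch of one way such a map could be built, assuming a simple radial polynomial falloff model; the real `vignette_map` reads its centre and coefficients from the image metadata and may differ in detail:

```python
import numpy as np

def vignette_map_sketch(cx, cy, coeffs, x_dim, y_dim):
    # Radial distance of every pixel from the (assumed) vignette centre
    x, y = np.meshgrid(np.arange(x_dim), np.arange(y_dim))
    r = np.hypot(x - cx, y - cy)
    # Modelled brightness falloff as a polynomial in r (np.polyval: highest power first)
    falloff = np.polyval(coeffs, r)
    # Multiplying the image by V reverses the corner darkening
    V = 1.0 / falloff
    return V, x, y

# Toy coefficients: falloff = 1e-4*r + 1.0, so V = 1 at the centre and < 1 at the corners
V, x, y = vignette_map_sketch(cx=640, cy=480, coeffs=[1e-4, 1.0], x_dim=1280, y_dim=960)
```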
Now we can calculate the imager-specific radiometric correction function, which helps account for the radiometric inaccuracies of the CMOS imager pixels.
```python
# row gradient correction
R = 1.0 / (1.0 + a2 * y / exposureTime - a3 * y)
```
Finally, we apply these corrections to the raw image to produce a corrected image.
```python
# subtract the dark level and adjust for vignette and row gradient
L = V * R * (imageRaw - darkLevel)
```
Next, we get the exposure and gain settings (gain is represented in the photographic parameter ISO, with a base ISO of 100, so we divide the result to get a numeric gain).
```python
exposureTime = float(meta.get_item('EXIF:ExposureTime'))
gain = float(meta.get_item('EXIF:ISOSpeed'))/100.0
```
Now that we have a corrected image, we can apply a conversion from calibrated digital number values to radiance units (W/m^2/nm/sr). Note that in this conversion we need to normalize by the image bit depth (2^16 for 16-bit images, 2^12 for 12-bit images), because the calibration coefficients are scaled to work with normalized input values.
```python
# apply the radiometric calibration -
# scale by the gain-exposure product and multiply with the radiometric calibration coefficient
bitsPerPixel = meta.get_item('EXIF:BitsPerSample')
dnMax = float(2**bitsPerPixel)
radianceImage = L.astype(float)/(gain * exposureTime)*a1/dnMax
```
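Putting the pieces together, the whole chain can be condensed into one small function. This is a sketch of the steps discussed above, not the library implementation, and it assumes the dark level, vignette map `V`, row-gradient map `R`, and calibration coefficient `a1` have already been obtained from the metadata:

```python
import numpy as np

def raw_to_radiance_sketch(raw, dark_level, V, R, a1, exposure_time, gain,
                           bits_per_pixel=16):
    # Un-bias the image, then reverse vignette and row-gradient effects
    L = V * R * (raw.astype(float) - dark_level)
    # Normalise by the gain-exposure product and the bit depth, then scale with a1
    dn_max = float(2 ** bits_per_pixel)
    return L / (gain * exposure_time) * a1 / dn_max

# Toy inputs: a flat 4x4 image needing no vignette or row-gradient correction
raw = np.full((4, 4), 5000.0)
ones = np.ones((4, 4))
rad = raw_to_radiance_sketch(raw, dark_level=4800.0, V=ones, R=ones,
                             a1=0.8, exposure_time=0.01, gain=1.0)
```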
For convenience, we have written the `raw_image_to_radiance` function to return the intermediate compensation images as well, so we can visualize them for the tutorial. These intermediate results are not required in most implementations and can be omitted if performance is a concern.
```
import micasense.utils as msutils
radianceImage, L, V, R = msutils.raw_image_to_radiance(meta, imageRaw)
plotutils.plotwithcolorbar(V,'Vignette Factor');
plotutils.plotwithcolorbar(R,'Row Gradient Factor');
plotutils.plotwithcolorbar(V*R,'Combined Corrections');
plotutils.plotwithcolorbar(L,'Vignette and row gradient corrected raw values');
plotutils.plotwithcolorbar(radianceImage,'All factors applied and scaled to radiance');
```
### Convert radiance to reflectance
Now that we have a flat and calibrated radiance image, we can convert into reflectance. To do this, we will use the radiance values of the panel image of known reflectance to determine a scale factor between radiance and reflectance.
In this case, we have our MicaSense calibrated reflectance panel and its known reflectance (about 61% in the band of interest here, NIR). We will extract the area of the image containing the Lambertian panel, determine its radiance-to-reflectance scale factor, and then scale the whole image by that factor to get a reflectance image.
```
markedImg = radianceImage.copy()
ulx = 660 # upper left column (x coordinate) of panel area
uly = 490 # upper left row (y coordinate) of panel area
lrx = 840 # lower right column (x coordinate) of panel area
lry = 670 # lower right row (y coordinate) of panel area
cv2.rectangle(markedImg,(ulx,uly),(lrx,lry),(0,255,0),3)
# Our panel calibration by band (from MicaSense for our specific panel)
panelCalibration = {
    "Blue": 0.67,
    "Green": 0.69,
    "Red": 0.68,
    "Red edge": 0.67,
    "NIR": 0.61
}
# Select panel region from radiance image
panelRegion = radianceImage[uly:lry, ulx:lrx]
plotutils.plotwithcolorbar(markedImg, 'Panel region in radiance image')
meanRadiance = panelRegion.mean()
print('Mean Radiance in panel region: {:1.3f} W/m^2/nm/sr'.format(meanRadiance))
panelReflectance = panelCalibration[bandName]
radianceToReflectance = panelReflectance / meanRadiance
print('Radiance to reflectance conversion factor: {:1.3f}'.format(radianceToReflectance))
reflectanceImage = radianceImage * radianceToReflectance
plotutils.plotwithcolorbar(reflectanceImage, 'Converted Reflectance Image');
```
In some cases we might notice that some parts of a converted reflectance image show reflectances above 1.0, or 100%, and wonder how this is possible. In fact, reflectances higher than 100% are normal in specific cases of [specular reflections](https://en.wikipedia.org/wiki/Specular_reflection). The panel is made of a special material that reflects incident light equally well in all directions; however, some of the objects in a scene (especially man-made ones) instead reflect most incident light in one direction, more like a mirror. Examples are the reflection of the sun off the smooth surface of a car or off a body of water.
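If you want to flag such pixels for later inspection, a boolean mask over the reflectance image is enough (toy values below):

```python
import numpy as np

refl = np.array([[0.30, 0.45],
                 [1.20, 0.80]])  # toy reflectance values; 1.20 is a specular outlier

specular = refl > 1.0  # True where reflectance exceeds 100%
```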
Now we will extract the same region and verify that the reflectance in that region is what we expect. In the process, we will blur and visualize the extracted area to look for any trends. The area should have a very consistent reflectance. If a gradient or a high standard deviation (>3% absolute reflectance) is noticed across the panel area, it is possible that the panel was captured under inconsistent lighting conditions (e.g. next to a wall or vehicle), or that it was captured too close to the edge of the image, where the optical calibration is least accurate.
```
panelRegionRaw = imageRaw[uly:lry, ulx:lrx]
panelRegionRefl = reflectanceImage[uly:lry, ulx:lrx]
panelRegionReflBlur = cv2.GaussianBlur(panelRegionRefl,(55,55),5)
plotutils.plotwithcolorbar(panelRegionReflBlur, 'Smoothed panel region in reflectance image')
print('Min Reflectance in panel region: {:1.2f}'.format(panelRegionRefl.min()))
print('Max Reflectance in panel region: {:1.2f}'.format(panelRegionRefl.max()))
print('Mean Reflectance in panel region: {:1.2f}'.format(panelRegionRefl.mean()))
print('Standard deviation in region: {:1.4f}'.format(panelRegionRefl.std()))
```
In this case the panel is less uniform than we would like, but the full color scale spans only 4% of absolute reflectance, and the standard deviation is well below our 3% absolute-reflectance threshold. This panel has seen over two years of hard field use, so it may be time for it to retire.
Reasons for a high standard deviation across a panel can include panel contamination or inconsistent lighting across the panel due to environmental conditions. Based on the context of the image, it is also clear that the user is taking the panel image facing the sun, which can cast reflected light from the operator's clothing on the panel and contaminate results. For this reason it is always best to capture panel images in an open area and with the operator's back to the sun.
### Undistorting images
Finally, we need to remove lens distortion effects from images for some processing workflows, such as band-to-band image alignment. Generally for photogrammetry processes on raw (or radiance/reflectance) images, this step is not required, as the photogrammetry process will optimize a lens distortion model as part of its bulk bundle adjustment. RedEdge has very low distortion lenses, so the changes to images in this step tend to be very small and noticeable only in pixels on the border of the image.
```
# correct for lens distortions to make straight lines straight
undistortedReflectance = msutils.correct_lens_distortion(meta, reflectanceImage)
plotutils.plotwithcolorbar(undistortedReflectance, 'Undistorted reflectance image');
```
# In Practice
Now that we can convert from raw RedEdge images to reflectance, we will use these methods to convert an image taken during the same campaign to a reflectance image, and look at a few interesting areas of the image to validate our conversion.
```
flightImageName = os.path.join(imagePath,'IMG_0001_4.tif')
flightImageRaw=plt.imread(flightImageName)
plotutils.plotwithcolorbar(flightImageRaw, 'Raw Image')
flightRadianceImage, _, _, _ = msutils.raw_image_to_radiance(meta, flightImageRaw)
flightReflectanceImage = flightRadianceImage * radianceToReflectance
flightUndistortedReflectance = msutils.correct_lens_distortion(meta, flightReflectanceImage)
plotutils.plotwithcolorbar(flightUndistortedReflectance, 'Reflectance converted and undistorted image');
```
# Conclusion
In this tutorial we have found that we can read MicaSense RedEdge images and their metadata, and use python and OpenCV to convert those images to radiance and then to reflectance using the standard scientific field method of imaging a lambertian reflector. We have corrected for both the electro-optical effects of the sensor and optical chain, as well as the incident light at the time of capture.
In future tutorials, we will introduce the Downwelling Light Sensor (DLS) information into the calibration process in order to account for changing irradiance over time (e.g. such as clouds). However, since the panel method is straightforward and repeatable under constant illumination conditions, and is the standard scientific calibration method of surface reflectance, this process is useful and sufficient for many calibration needs.
Looking for more? Try the second tutorial [here](./MicaSense%20Image%20Processing%20Tutorial%202.html).
---
Copyright (c) 2017-2019 MicaSense, Inc. For licensing information see the [project git repository](https://github.com/micasense/imageprocessing)
<a href="https://www.bigdatauniversity.com"><img src="https://ibm.box.com/shared/static/cw2c7r3o20w9zn8gkecaeyjhgw3xdgbj.png" width="400" align="center"></a>
<h1 align=center><font size = 5>COLLABORATIVE FILTERING</font></h1>
Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous and can commonly be seen in online stores, movie databases and job finders. In this notebook, we will explore recommendation systems based on Collaborative Filtering and implement a simple version of one using Python and the Pandas library.
### Table of contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
- <p><a href="#ref1">Acquiring the Data</a></p>
- <p><a href="#ref2">Preprocessing</a></p>
- <p><a href="#ref3">Collaborative Filtering</a></p>
<p></p>
</div>
<br>
<hr>
<a id="ref1"></a>
# Acquiring the Data
To acquire and extract the data, simply run the bash commands below.
The dataset comes from [GroupLens](http://grouplens.org/datasets/movielens/). Let's download it: we will use `!wget` to fetch it from IBM Object Storage.
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
```
!wget -O moviedataset.zip https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
print('unzipping ...')
!unzip -o -j moviedataset.zip
```
Now you're ready to start working with the data!
<hr>
<a id="ref2"></a>
# Preprocessing
First, let's get all of the imports out of the way:
```
#Dataframe manipulation library
import pandas as pd
#Math functions, we'll only need the sqrt function so let's import only that
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Now let's read each file into their Dataframes:
```
#Storing the movie information into a pandas dataframe
movies_df = pd.read_csv('movies.csv')
#Storing the user information into a pandas dataframe
ratings_df = pd.read_csv('ratings.csv')
```
Let's also take a peek at how each of them are organized:
```
#Head is a function that gets the first N rows of a dataframe. N's default is 5.
movies_df.head()
```
So each movie has a unique ID, a title with its release year (which may contain unicode characters), and several different genres in the same field. Let's separate the year from the title column by using the handy [extract](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html#pandas.Series.str.extract) function that Pandas has.
Let's remove the year from the __title__ column and store it in a new __year__ column.
```
#Using regular expressions to find a year stored between parentheses
#We specify the parentheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract(r'(\(\d\d\d\d\))', expand=False)
#Removing the parentheses
movies_df['year'] = movies_df.year.str.extract(r'(\d\d\d\d)', expand=False)
#Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace(r'(\(\d\d\d\d\))', '', regex=True)
#Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
```
Let's look at the result!
```
movies_df.head()
```
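The regular expression used above can also be checked in isolation with Python's `re` module on a single title string:

```python
import re

title = 'Toy Story (1995)'

# Capture a four-digit year inside parentheses
match = re.search(r'\((\d{4})\)', title)
year = match.group(1) if match else None

# Strip the parenthesised year (and surrounding whitespace) from the title
clean_title = re.sub(r'\s*\(\d{4}\)\s*', '', title).strip()
```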
With that, let's also drop the genres column since we won't need it for this particular recommendation system.
```
#Dropping the genres column
movies_df = movies_df.drop('genres', axis=1)
```
Here's the final movies dataframe:
```
movies_df.head()
```
<br>
Next, let's look at the ratings dataframe.
```
ratings_df.head()
```
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
```
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', axis=1)
```
Here's what the final ratings dataframe looks like:
```
ratings_df.head()
```
<hr>
<a id="ref3"></a>
# Collaborative Filtering
Now, time to start our work on recommendation systems.
The first technique we're going to take a look at is called __Collaborative Filtering__, which is also known as __User-User Filtering__. As hinted by its alternate name, this technique uses other users to recommend items to the input user. It attempts to find users that have similar preferences and opinions as the input and then recommends items that they have liked to the input. There are several methods of finding similar users (Even some making use of Machine Learning), and the one we will be using here is going to be based on the __Pearson Correlation Function__.
<img src="https://ibm.box.com/shared/static/1ql8cbwhtkmbr6nge5e706ikzm5mua5w.png" width=800px>
The process for creating a User Based recommendation system is as follows:
- Select a user with the movies the user has watched
- Based on their ratings of movies, find the top X neighbours
- Get the watched movie record of the user for each neighbour.
- Calculate a similarity score using some formula
- Recommend the items with the highest score
Let's begin by creating an input user to recommend movies to:
Notice: To add more movies, simply increase the amount of elements in the userInput. Feel free to add more in! Just be sure to write it in with capital letters and if a movie starts with a "The", like "The Matrix" then write it in like this: 'Matrix, The' .
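As an aside, a tiny helper (hypothetical, not part of the notebook's tooling) can convert everyday titles to that catalog form:

```python
def to_catalog_title(title):
    # Move a leading article to the end, e.g. 'The Matrix' -> 'Matrix, The'
    for article in ('The ', 'A ', 'An '):
        if title.startswith(article):
            return title[len(article):] + ', ' + article.strip()
    return title
```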
```
userInput = [
    {'title':'Breakfast Club, The', 'rating':5},
    {'title':'Toy Story', 'rating':3.5},
    {'title':'Jumanji', 'rating':2},
    {'title':"Pulp Fiction", 'rating':5},
    {'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
```
#### Add movieId to input user
With the input complete, let's extract the input movies' IDs from the movies dataframe and add them into it.
We can achieve this by first filtering out the rows that contain the input movies' title and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
```
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop('year', axis=1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe or it might be spelled differently; please check capitalisation.
inputMovies
```
#### The users who have seen the same movies
Now, with the movie IDs in our input, we can get the subset of users that have watched and reviewed the movies in our input.
```
#Filtering out users that have watched movies that the input has watched and storing it
userSubset = ratings_df[ratings_df['movieId'].isin(inputMovies['movieId'].tolist())]
userSubset.head()
```
We now group up the rows by user ID.
```
#Groupby creates several sub dataframes where they all have the same value in the column specified as the parameter
userSubsetGroup = userSubset.groupby(['userId'])
```
Let's look at one of the users, e.g. the one with userId 1130:
```
userSubsetGroup.get_group(1130)
```
Let's also sort these groups so the users that share the most movies in common with the input have higher priority. This provides a richer recommendation since we won't go through every single user.
```
#Sorting it so users with movie most in common with the input will have priority
userSubsetGroup = sorted(userSubsetGroup, key=lambda x: len(x[1]), reverse=True)
```
Now let's look at the first few users:
```
userSubsetGroup[0:3]
```
#### Similarity of users to input user
Next, we are going to compare a subset of users (not literally all of them) to our input user and find the ones that are most similar.
We're going to find out how similar each user is to the input through the __Pearson Correlation Coefficient__. It is used to measure the strength of a linear association between two variables. The formula for finding this coefficient between sets X and Y with N values can be seen below.
Why Pearson Correlation?
Pearson correlation is invariant to positive linear transformations, i.e. multiplying all elements by a positive constant or adding any constant to all elements. For example, if you have two vectors X and Y, then pearson(X, Y) == pearson(X, 2 * Y + 3). This is a pretty important property in recommendation systems: two users might rate a series of items on totally different absolute scales, yet still be similar users (i.e. with similar ideas), and this measure treats them as such.

The values given by the formula vary from r = -1 to r = 1, where 1 means a perfect positive correlation between the two entities and -1 a perfect negative correlation.
In our case, a 1 means that the two users have similar tastes while a -1 means the opposite.
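Before running the full loop below, the coefficient itself can be sketched as a standalone function using the same sum-of-squares form the notebook uses (`Sxy / sqrt(Sxx * Syy)`):

```python
from math import sqrt

def pearson(x, y):
    # Sum-of-squares form of the Pearson correlation coefficient
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x) - sx * sx / n
    syy = sum(v * v for v in y) - sy * sy / n
    sxy = sum(a * b for a, b in zip(x, y)) - sx * sy / n
    if sxx == 0 or syy == 0:
        return 0.0  # a flat rating vector carries no correlation information
    return sxy / sqrt(sxx * syy)
```

Note that `pearson([1, 2, 3], [2, 4, 6])` is 1.0 and, as the invariance property above promises, scaling and shifting one vector (e.g. to `[7, 11, 15]`) leaves the result unchanged.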
We will select a subset of users to iterate through. This limit is imposed because we don't want to waste too much time going through every single user.
```
userSubsetGroup = userSubsetGroup[0:100]
```
Now, we calculate the Pearson Correlation between input user and subset group, and store it in a dictionary, where the key is the user Id and the value is the coefficient
```
#Store the Pearson Correlation in a dictionary, where the key is the user Id and the value is the coefficient
pearsonCorrelationDict = {}
#For every user group in our subset
for name, group in userSubsetGroup:
    #Let's start by sorting the input and current user group so the values aren't mixed up later on
    group = group.sort_values(by='movieId')
    inputMovies = inputMovies.sort_values(by='movieId')
    #Get the N for the formula
    nRatings = len(group)
    #Get the review scores for the movies that they both have in common
    temp_df = inputMovies[inputMovies['movieId'].isin(group['movieId'].tolist())]
    #And then store them in a temporary buffer variable in a list format to facilitate future calculations
    tempRatingList = temp_df['rating'].tolist()
    #Let's also put the current user group reviews in a list format
    tempGroupList = group['rating'].tolist()
    #Now let's calculate the pearson correlation between two users, so called, x and y
    Sxx = sum([i**2 for i in tempRatingList]) - pow(sum(tempRatingList), 2)/float(nRatings)
    Syy = sum([i**2 for i in tempGroupList]) - pow(sum(tempGroupList), 2)/float(nRatings)
    Sxy = sum(i*j for i, j in zip(tempRatingList, tempGroupList)) - sum(tempRatingList)*sum(tempGroupList)/float(nRatings)
    #If the denominator is different than zero, then divide, else, 0 correlation.
    if Sxx != 0 and Syy != 0:
        pearsonCorrelationDict[name] = Sxy/sqrt(Sxx*Syy)
    else:
        pearsonCorrelationDict[name] = 0
pearsonCorrelationDict.items()
pearsonDF = pd.DataFrame.from_dict(pearsonCorrelationDict, orient='index')
pearsonDF.columns = ['similarityIndex']
pearsonDF['userId'] = pearsonDF.index
pearsonDF.index = range(len(pearsonDF))
pearsonDF.head()
```
#### The top x similar users to input user
Now let's get the top 50 users that are most similar to the input.
```
topUsers=pearsonDF.sort_values(by='similarityIndex', ascending=False)[0:50]
topUsers.head()
```
Now, let's start recommending movies to the input user.
#### Rating of selected users to all movies
We're going to do this by taking the weighted average of the ratings of the movies using the Pearson Correlation as the weight. But to do this, we first need to get the movies watched by the users in our __pearsonDF__ from the ratings dataframe and then store their correlation in a new column called __similarityIndex__. This is achieved below by merging these two tables.
```
topUsersRating=topUsers.merge(ratings_df, left_on='userId', right_on='userId', how='inner')
topUsersRating.head()
```
Now all we need to do is multiply each movie rating by its weight (the similarity index), sum the weighted ratings per movie, and divide by the sum of the weights.
We can easily do this by multiplying two columns, grouping the dataframe by movieId, and then dividing two columns.
In effect, this gives a weighted vote by all similar users on each candidate movie for the input user:
```
#Multiplies the similarity by the user's ratings
topUsersRating['weightedRating'] = topUsersRating['similarityIndex']*topUsersRating['rating']
topUsersRating.head()
#Applies a sum to the topUsers after grouping it up by movieId
tempTopUsersRating = topUsersRating.groupby('movieId').sum()[['similarityIndex','weightedRating']]
tempTopUsersRating.columns = ['sum_similarityIndex','sum_weightedRating']
tempTopUsersRating.head()
#Creates an empty dataframe
recommendation_df = pd.DataFrame()
#Now we take the weighted average
recommendation_df['weighted average recommendation score'] = tempTopUsersRating['sum_weightedRating']/tempTopUsersRating['sum_similarityIndex']
recommendation_df['movieId'] = tempTopUsersRating.index
recommendation_df.head()
```
Now let's sort it and see the top 20 movies that the algorithm recommended!
```
recommendation_df = recommendation_df.sort_values(by='weighted average recommendation score', ascending=False)
recommendation_df.head(10)
movies_df.loc[movies_df['movieId'].isin(recommendation_df.head(10)['movieId'].tolist())]
```
### Advantages and Disadvantages of Collaborative Filtering
##### Advantages
* Takes other users' ratings into consideration
* Doesn't need to study or extract information from the recommended item
* Adapts to the user's interests which might change over time
##### Disadvantages
* The approximation function can be slow
* There might be a low amount of users to approximate from
* Privacy issues when trying to learn the user's preferences
## Want to learn more?
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: [SPSS Modeler](http://cocl.us/ML0101EN-SPSSModeler).
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at [Watson Studio](https://cocl.us/ML0101EN_DSX)
### Thanks for completing this lesson!
Notebook created by: <a href = "https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>, Gabriel Garcez Barros Sousa
<hr>
Copyright © 2018 [Cognitive Class](https://cocl.us/DX0108EN_CC). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
# Ground Observatory Data - FTP
> Authors: Luca Mariani, Clemens Kloss
>
> Abstract: Demonstrates ground observatory data by direct access to the BGS FTP server (AUX_OBS dataset). Note that in the future there will be a VirES-based access method (work in progress).
<a id="top"/>
## Contents
- [Settings and functions](#settings)
- [Hourly mean values](#obs)
- [Read data from ASCII files](#obs-read-ascii)
- [Read data from multiple files](#obs-multifiles)
- [Examples](#obs-examples)
- [Minute and second mean values](#obsms)
- [Read data from CDF files](#obsms-read-cdf)
- [Read data from multiple files](#obsms-multifiles)
```
%load_ext watermark
%watermark -i -v -p viresclient,pandas,xarray,matplotlib
# Python standard library
import os
import re
from contextlib import closing
from datetime import datetime
from ftplib import FTP
from pathlib import Path
from tempfile import TemporaryFile
from zipfile import ZipFile
# Extra libraries
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import cdflib
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from tqdm import tqdm
from viresclient import SwarmRequest
```
<a id="settings" />
## Settings and functions
[[TOP]](#top)
First we define a number of functions to enable convenient searching, downloading and reading from the FTP server.
```
# FTP server
HOST = 'ftp.nerc-murchison.ac.uk'
# Local directories (update paths according to your environment)
OBS_HOUR_LOCAL_DIR = Path('~/data/AUX_OBS/hour').expanduser()
OBS_MINUTE_LOCAL_DIR = Path('~/data/AUX_OBS/minute').expanduser()
OBS_SECOND_LOCAL_DIR = Path('~/data/AUX_OBS/second').expanduser()
# Create directories to use
os.makedirs(OBS_HOUR_LOCAL_DIR, exist_ok=True)
os.makedirs(OBS_MINUTE_LOCAL_DIR, exist_ok=True)
os.makedirs(OBS_SECOND_LOCAL_DIR, exist_ok=True)
def search(obstype, start_date=None, end_date=None):
"""Search OBS data file on the FTP server.
Parameters
----------
obstype : str
OBS file type: `hour`, `minute`, `second`.
start_date : str or numpy.datetime64
lower bound of the time interval (default: no time interval).
end_date : str or numpy.datetime64
upper bound of the time interval (default: no time interval).
Returns
-------
list(str)
OBS data files.
Raises
------
ValueError
if `obstype` is not valid.
ftplib.all_errors
in case of FTP errors.
"""
OBS_HOUR_DIR = '/geomag/Swarm/AUX_OBS/hour'
OBS_MINUTE_DIR = '/geomag/Swarm/AUX_OBS/minute'
OBS_SECOND_DIR = '/geomag/Swarm/AUX_OBS/second'
PATTERN = re.compile(
r'SW_OPER_AUX_OBS[_MS]2__(?P<start>\d{8}T\d{6})_'
r'(?P<stop>\d{8}T\d{6})_\d{4}\.ZIP$'
)
MINDATE = np.datetime64('0000', 's')
MAXDATE = np.datetime64('9999', 's')
def _callback(line, result, start_date, end_date):
if line[0] == '-':
match = PATTERN.match(line[56:])
if match:
start, stop = match.groupdict().values()
start = np.datetime64(datetime.strptime(start, '%Y%m%dT%H%M%S'))
stop = np.datetime64(datetime.strptime(stop, '%Y%m%dT%H%M%S'))
if end_date >= start and start_date <= stop:
result.append(line[56:])
start_date = MINDATE if start_date is None else np.datetime64(start_date)
end_date = MAXDATE if end_date is None else np.datetime64(end_date)
paths = {
'hour': OBS_HOUR_DIR,
'minute': OBS_MINUTE_DIR,
'second': OBS_SECOND_DIR
}
if obstype not in paths:
raise ValueError(
f'obstype must be hour, minute or second, not {obstype}'
)
result = []
with FTP(HOST) as ftp:
ftp.login()
ftp.dir(paths[obstype], lambda line: _callback(line, result, start_date, end_date))
return [f'{paths[obstype]}/{name}' for name in sorted(result)]
def local_search(obstype, start_date=None, end_date=None):
"""Search OBS data file on local filesystem.
Parameters
----------
obstype : str
OBS file type: `hour`, `minute`, `second`.
start_date : str or numpy.datetime64
lower bound of the time interval (default: no time interval).
end_date : str or numpy.datetime64
upper bound of the time interval (default: no time interval).
Returns
-------
list(pathlib.Path)
OBS data files.
Raises
------
ValueError
if `obstype` is not valid.
"""
PATTERN = re.compile(
r'SW_OPER_AUX_OBS[_MS]2__(?P<start>\d{8}T\d{6})_'
r'(?P<stop>\d{8}T\d{6})_\d{4}\.\w{3}$'
)
MINDATE = np.datetime64('0000', 's')
MAXDATE = np.datetime64('9999', 's')
start_date = MINDATE if start_date is None else np.datetime64(start_date)
end_date = MAXDATE if end_date is None else np.datetime64(end_date)
paths = {
'hour': OBS_HOUR_LOCAL_DIR,
'minute': OBS_MINUTE_LOCAL_DIR,
'second': OBS_SECOND_LOCAL_DIR
}
if obstype not in paths:
raise ValueError(
f'obstype must be hour, minute or second, not {obstype}'
)
result = []
for file in (elm for elm in paths[obstype].iterdir() if elm.is_file()):
match = PATTERN.match(file.name)
if match:
start, stop = match.groupdict().values()
start = np.datetime64(datetime.strptime(start, '%Y%m%dT%H%M%S'))
stop = np.datetime64(datetime.strptime(stop, '%Y%m%dT%H%M%S'))
if end_date >= start and start_date <= stop:
result.append(file)
return sorted(result)
def download(files, outdir='', show_progress=True):
"""Download files from the FTP server.
Parameters
----------
files : collections.abc.Iterable(str)
path(s) of the file(s) to be downloaded.
outdir : str or os.PathLike
output directory (default: current directory).
show_progress : bool
whether to display a per-file progress bar (default: True).
Returns
-------
list(pathlib.Path)
list of downloaded files.
Raises
------
ftplib.all_errors
in case of FTP errors.
"""
def _callback(data, fh, pbar):
pbar.update(len(data))
fh.write(data)
outdir = Path(outdir)
downloaded = []
with FTP(HOST) as ftp:
ftp.login()
for file in files:
file = str(file)
basename = file.split('/')[-1]
with TemporaryFile(dir=outdir) as tmp:
with tqdm(total=ftp.size(file), unit='B',
unit_scale=True, desc=basename,
disable=not show_progress) as pbar:
ftp.retrbinary(f'RETR {file}', callback=lambda x: _callback(x, tmp, pbar))
with ZipFile(tmp) as zf:
hdr = Path(basename).with_suffix('.HDR').name
datafile = [elm for elm in zf.namelist() if elm != hdr][0]
outfile = zf.extract(datafile, outdir)
downloaded.append(Path(outfile))
return downloaded
def ascii_to_pandas(file):
"""Convert an OBS ASCII file to a pandas DataFrame.
Parameters
----------
file : str or os.PathLike
OBS ASCII file.
Returns
-------
pandas.DataFrame
data contained in the OBS ASCII file.
"""
df = pd.read_csv(
file,
comment='#',
delim_whitespace=True,
names = ['IAGA_code', 'Latitude', 'Longitude', 'Radius',
'yyyy', 'mm', 'dd', 'UT', 'B_N', 'B_E', 'B_C'],
parse_dates={'Timestamp': [4, 5, 6]},
infer_datetime_format=True
)
df['Timestamp'] = df['Timestamp'] + pd.to_timedelta(df['UT'], 'h')
df.drop(columns='UT', inplace=True)
df.set_index('Timestamp', inplace=True)
return df
def cdf_to_pandas(file):
"""Convert an OBS CDF file to a pandas DataFrame.
Parameters
----------
file : str or os.PathLike
OBS CDF file.
Returns
-------
pandas.DataFrame
data contained in the OBS CDF file.
"""
with closing(cdflib.cdfread.CDF(file)) as data:
ts = pd.DatetimeIndex(
cdflib.cdfepoch.encode(data.varget('Timestamp'), iso_8601=True),
name='Timestamp'
)
df = pd.DataFrame(
{
'IAGA_code': data.varget('IAGA_code')[:,0,0],
'Latitude': data.varget('Latitude'),
'Longitude': data.varget('Longitude'),
'Radius': data.varget('Radius'),
'B_N': data.varget('B_NEC')[:,0],
'B_E': data.varget('B_NEC')[:,1],
'B_C': data.varget('B_NEC')[:,2]
},
index=ts
)
return df
def download_obslist(outdir=''):
"""Search observatory list file on the FTP server.
Parameters
----------
outdir : str or os.PathLike
output directory (default: current directory).
Returns
-------
str
Observatory list file.
Raises
------
ftplib.all_errors
in case of FTP errors.
"""
OBS_HOUR_DIR = '/geomag/Swarm/AUX_OBS/hour'
def _callback(line, result):
if line[0] == '-':
match = re.match(r'obslist.+_gd\.all$', line[56:])
if match:
result.append(line[56:])
outdir = Path(outdir)
files = []
with FTP(HOST) as ftp:
ftp.login()
ftp.dir(OBS_HOUR_DIR, lambda line: _callback(line, files))
remote_obslist_file = f'{OBS_HOUR_DIR}/{files[0]}'
local_obslist_file = outdir / files[0]
with local_obslist_file.open('w') as fh:
ftp.retrlines(f'RETR {remote_obslist_file}', lambda line: print(line, file=fh))
return local_obslist_file
def read_obslist(file):
"""Convert observatory list ASCII file to a pandas DataFrame.
Parameters
----------
file : str or os.PathLike
observatory list ASCII file.
Returns
-------
pandas.DataFrame
data contained in the observatory list ASCII file.
"""
df = pd.read_csv(
file,
delim_whitespace=True,
names = ['IAGA_code', 'Latitude', 'Longitude', 'Altitude'],
)
return df
```
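The core of the `search()` matching logic above (the product filename pattern plus the interval-overlap test) can be exercised without any FTP connection. A minimal, self-contained sketch using the same regex and overlap rule; `file_overlaps` is a hypothetical helper, not part of the module:

```
import re
import numpy as np
from datetime import datetime

# Same pattern as in search(): matches e.g.
# SW_OPER_AUX_OBS_2__20180101T000000_20181231T235959_0101.ZIP
PATTERN = re.compile(
    r'SW_OPER_AUX_OBS[_MS]2__(?P<start>\d{8}T\d{6})_'
    r'(?P<stop>\d{8}T\d{6})_\d{4}\.ZIP$'
)

def file_overlaps(name, start_date, end_date):
    """True if the file's [start, stop] interval overlaps [start_date, end_date]."""
    match = PATTERN.match(name)
    if not match:
        return False
    start = np.datetime64(datetime.strptime(match['start'], '%Y%m%dT%H%M%S'))
    stop = np.datetime64(datetime.strptime(match['stop'], '%Y%m%dT%H%M%S'))
    return bool(end_date >= start and start_date <= stop)

name = 'SW_OPER_AUX_OBS_2__20180101T000000_20181231T235959_0101.ZIP'
print(file_overlaps(name, np.datetime64('2018-06-01'), np.datetime64('2018-06-30')))  # True
print(file_overlaps(name, np.datetime64('2020-01-01'), np.datetime64('2020-12-31')))  # False
```

Note that a file is kept when the intervals merely overlap, not only when it is fully contained in the requested range.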
<a id="obs" />
## Hourly mean values
[[TOP]](#top)
Hourly means hosted at:
- ftp://ftp.nerc-murchison.ac.uk/geomag/Swarm/AUX_OBS/hour/
Processing methodology:
- Macmillan, S., Olsen, N. Observatory data and the Swarm mission. Earth Planet Sp 65, 15 (2013). https://doi.org/10.5047/eps.2013.07.011
<a id="obs-read-ascii" />
### Read data from ASCII files
[[TOP]](#top)
Use the `search()` function (see [Settings and functions](#settings)) to search OBS hourly data from 2018-01-01T00:00:00 to 2019-12-31T23:59:59 on the FTP server:
```
result = search('hour', '2018-01-01', '2019-12-31T23:59:59')
result
```
Use the `download()` function (see [Settings and functions](#settings)) to download data:
```
downloaded = download(result, outdir=OBS_HOUR_LOCAL_DIR)
downloaded
```
Select one of the AUX_OBS_2_ files (e.g. the first one):
```
file1 = downloaded[0]
file1
```
Read ASCII file and convert data to a `pandas.DataFrame`:
```
df1 = pd.read_csv(
file1,
comment='#',
delim_whitespace=True,
names = ['IAGA_code', 'Latitude', 'Longitude',
'Radius', 'yyyy', 'mm', 'dd', 'UT', 'B_N', 'B_E', 'B_C'],
parse_dates={'Timestamp': [4, 5, 6]},
infer_datetime_format=True
)
df1['Timestamp'] = df1['Timestamp'] + pd.to_timedelta(df1['UT'], 'h')
df1.drop(columns='UT', inplace=True)
df1.set_index('Timestamp', inplace=True)
df1
```
For more information on `pandas.DataFrame` see: https://pandas.pydata.org/docs/reference/frame.html.
The same result can be obtained with the `ascii_to_pandas()` function (see [Settings and functions](#settings)).
```
new = ascii_to_pandas(file1)
new
```
Compare the two data frames:
```
pd.testing.assert_frame_equal(df1, new)
```
Example: get minimum and maximum dates:
```
df1.index.min(), df1.index.max()
```
Example: get list of observatories (IAGA codes) stored in the files:
```
df1['IAGA_code'].unique()
```
<a id="obs-multifiles" />
### Read data from multiple files
[[TOP]](#top)
Pandas dataframes can be concatenated to represent data obtained from more than one file. E.g. read data from the next AUX_OBS_2_ file:
```
file2 = downloaded[1]
df2 = ascii_to_pandas(file2)
df2
```
The two dataframes can be concatenated using the `pandas.concat()` function (for more information see: https://pandas.pydata.org/docs/reference/api/pandas.concat.html#pandas.concat):
```
concatenated = pd.concat([df1, df2])
concatenated.sort_values(by=['IAGA_code', 'Timestamp'], inplace=True)
concatenated.index.min(), concatenated.index.max()
concatenated
```
<a id="obs-examples"/>
### Examples
[[TOP]](#top)
Plot hourly mean values on a map:
```
df = ascii_to_pandas(file1)
# Add F column
df['F'] = np.linalg.norm(df[['B_N', 'B_E', 'B_C']], axis=1)
# Select date
date = '2018-01-01T01:30:00'
fig = plt.figure(figsize=(16, 10))
# Draw map
ax = plt.subplot2grid((1, 1), (0, 0), projection=ccrs.PlateCarree())
ax.coastlines()
ax.add_feature(cfeature.OCEAN, facecolor='lightgrey')
ax.gridlines()
# Plot observatory measurements at date
cm = ax.scatter(
df[date]['Longitude'], df[date]['Latitude'], c=df[date]['F'],
marker='D', transform=ccrs.PlateCarree(),
label=f'OBS F hourly mean value at {date}'
)
# Add IAGA codes
for row in df[date].itertuples():
ax.annotate(
row.IAGA_code, (row.Longitude, row.Latitude),
xycoords=ccrs.PlateCarree()._as_mpl_transform(ax)
)
# Set title and legend
plt.title('Magnetic field intensities')
plt.legend()
# Add colorbar
cax = fig.add_axes([0.92, 0.2, 0.02, 0.6])
plt.colorbar(cm, cax=cax, label='F [nT]')
plt.show()
```
Read list of all observatories (use the `download_obslist()` and `read_obslist()` functions defined in [Settings and functions](#settings)):
```
obslist = download_obslist(outdir=OBS_HOUR_LOCAL_DIR)
obslist
obs = read_obslist(obslist)
obs
```
Add the missing observatories, i.e. those not included in the observatory hourly mean values, to the plot:
```
df = ascii_to_pandas(file1)
# Add F column
df['F'] = np.linalg.norm(df[['B_N', 'B_E', 'B_C']], axis=1)
# Select date
date = '2018-01-01T01:30:00'
fig = plt.figure(figsize=(16, 10))
# Draw map
ax = plt.subplot2grid((1, 1), (0, 0), projection=ccrs.PlateCarree())
ax.coastlines()
ax.add_feature(cfeature.OCEAN, facecolor='lightgrey')
ax.gridlines()
# Plot observatory measurements at date
cm = ax.scatter(
df[date]['Longitude'], df[date]['Latitude'], c=df[date]['F'],
marker='D', transform=ccrs.PlateCarree(),
label=f'OBS F hourly mean value at {date}'
)
# Add IAGA codes
for row in df[date].itertuples():
ax.annotate(
row.IAGA_code, (row.Longitude, row.Latitude),
xycoords=ccrs.PlateCarree()._as_mpl_transform(ax)
)
# Add missing observatories from obslist (position only)
missing = obs[~obs['IAGA_code'].isin(df[date]['IAGA_code'].unique())]
cm2 = ax.scatter(missing['Longitude'], missing['Latitude'], c='black', marker='D', alpha=0.1)
# Set title and legend
plt.title('Magnetic field intensities')
plt.legend()
# Add colorbar
cax = fig.add_axes([0.92, 0.2, 0.02, 0.6])
plt.colorbar(cm, cax=cax, label='F [nT]')
plt.show()
```
Add Swarm F measurements between 01:00:00 and 02:00:00 of the same day:
```
# using viresclient
request = SwarmRequest()
request.set_collection('SW_OPER_MAGA_LR_1B')
request.set_products(measurements='F')
start_date = '2018-01-01T01:00:00'
end_date = '2018-01-01T02:00:00'
data = request.get_between(start_date, end_date)
df = ascii_to_pandas(file1)
# Add F column
df['F'] = np.linalg.norm(df[['B_N', 'B_E', 'B_C']], axis=1)
# Select date
date = '2018-01-01T01:30:00'
fig = plt.figure(figsize=(16, 10))
# Draw map
ax = plt.subplot2grid((1, 1), (0, 0), projection=ccrs.PlateCarree())
ax.coastlines()
ax.add_feature(cfeature.OCEAN, facecolor='lightgrey')
ax.gridlines()
# Plot observatory measurements at date
cm = ax.scatter(
df[date]['Longitude'], df[date]['Latitude'], c=df[date]['F'],
marker='D', transform=ccrs.PlateCarree(),
label=f'OBS F hourly mean value at {date}'
)
# Add IAGA codes
for row in df[date].itertuples():
ax.annotate(
row.IAGA_code, (row.Longitude, row.Latitude),
xycoords=ccrs.PlateCarree()._as_mpl_transform(ax)
)
# Add missing observatories from obslist (position only)
missing = obs[~obs['IAGA_code'].isin(df[date]['IAGA_code'].unique())]
ax.scatter(missing['Longitude'], missing['Latitude'], c='black', marker='D', alpha=0.1)
# Add Swarm A data
swarm = data.as_dataframe()
ax.scatter(
swarm['Longitude'], swarm['Latitude'], c=swarm['F'],
transform=ccrs.PlateCarree(),
label=f'Swarm A - F measurements between {start_date} and {end_date}'
)
# Set title and legend
plt.title('Magnetic field intensities')
plt.legend()
# Add colorbar
cax = fig.add_axes([0.92, 0.2, 0.02, 0.6])
plt.colorbar(cm, cax=cax, label='F [nT]')
plt.show()
```
<a id="obsms" />
## Minute and second mean values
[[TOP]](#top)
Files containing observatory minute and second mean values are in CDF format. They can be downloaded from:
- ftp://ftp.nerc-murchison.ac.uk/geomag/Swarm/AUX_OBS/minute/
- ftp://ftp.nerc-murchison.ac.uk/geomag/Swarm/AUX_OBS/second/
<a id="obsms-read-cdf" />
### Read data from CDF files
[[TOP]](#top)
Use the `search()` function (see [Settings and functions](#settings)) to search OBS minute/second data from 2019-12-01T00:00:00 to 2019-12-31T23:59:59 on the FTP server:
```
minute = search('minute', '2019-12-01', '2019-12-31T23:59:59')
minute
second = search('second', '2019-12-01', '2019-12-31T23:59:59')
second
```
Use the `download()` function (see [Settings and functions](#settings)) to download data:
```
dl_minute = download(minute, outdir=OBS_MINUTE_LOCAL_DIR)
dl_second = download(second, outdir=OBS_SECOND_LOCAL_DIR)
```
Select one of the AUX_OBSM2_ files (e.g. the first one):
```
file1 = dl_minute[0]
file1
```
Read the CDF file using `cdflib` (for more information on `cdflib`, see: https://github.com/MAVENSDC/cdflib):
```
data = cdflib.CDF(file1)
```
Get info about the file as a Python dictionary:
```
data.cdf_info()
```
You can see that measurements are stored as *zVariables*:
```
data.cdf_info()['zVariables']
```
Data can be retrieved via the `.varget()` method, e.g:
```
data.varget('B_NEC')
```
Data is returned as a `numpy.ndarray` object (for more information on `numpy.ndarray`, see: https://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html).
Variable attributes can be retrieved using the `.varattsget()` method, e.g.:
```
data.varattsget('B_NEC')
```
Attributes are returned as a Python dictionary.
Let's retrieve the timestamps:
```
data.varget('Timestamp')
```
`Timestamp` type is:
```
data.varget('Timestamp').dtype
```
Timestamps are represented as NumPy `float64` values. Why? Get info about the `Timestamp` variable using the `.varinq()` method:
```
data.varinq('Timestamp')
```
The returned dictionary shows that the data type is *CDF_EPOCH*, i.e. a floating-point value representing the number of milliseconds since 01-Jan-0000 00:00:00.000. It can be converted to a more readable format (a list of strings) using the `cdflib.cdfepoch.encode()` function:
```
ts = cdflib.cdfepoch.encode(data.varget('Timestamp'), iso_8601=True)
ts[:5]
```
Or to a numpy array of `numpy.datetime64` values:
```
ts = np.array(cdflib.cdfepoch.encode(data.varget('Timestamp'), iso_8601=True), dtype='datetime64')
ts[:5]
```
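For reference, since *CDF_EPOCH* is just milliseconds since 0000-01-01 in the proleptic Gregorian calendar, a value can also be converted by hand by shifting it to the Unix epoch (1970-01-01 lies 62,167,219,200 seconds after year 0). This is only a sketch of the underlying arithmetic; for real files stick with `cdflib.cdfepoch.encode()`:

```
import numpy as np

# milliseconds from 0000-01-01T00:00 to 1970-01-01T00:00 (proleptic Gregorian)
CDF_EPOCH_OF_UNIX_EPOCH_MS = 62_167_219_200_000

def cdf_epoch_to_datetime64(epoch_ms):
    """Convert CDF_EPOCH float values (ms since year 0) to numpy.datetime64[ms]."""
    ms_since_unix = np.asarray(epoch_ms, dtype='int64') - CDF_EPOCH_OF_UNIX_EPOCH_MS
    return ms_since_unix.astype('timedelta64[ms]') + np.datetime64('1970-01-01T00:00:00', 'ms')

print(cdf_epoch_to_datetime64(62_167_219_200_000.0))
```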
You may be interested also in the CDF global attributes:
```
data.globalattsget()
```
Close the file when you have finished:
```
data.close()
```
AUX_OBSS2_ data contains the same variables:
```
with closing(cdflib.cdfread.CDF(dl_second[0])) as data:
zvariables = data.cdf_info()['zVariables']
zvariables
```
Data can be represented as a `pandas.DataFrame` object:
```
with closing(cdflib.cdfread.CDF(file1)) as data:
ts = pd.DatetimeIndex(
cdflib.cdfepoch.encode(data.varget('Timestamp'), iso_8601=True),
name='Timestamp'
)
df1 = pd.DataFrame(
{
'IAGA_code': data.varget('IAGA_code')[:,0,0],
'Latitude': data.varget('Latitude'),
'Longitude': data.varget('Longitude'),
'Radius': data.varget('Radius'),
'B_N': data.varget('B_NEC')[:,0],
'B_E': data.varget('B_NEC')[:,1],
'B_C': data.varget('B_NEC')[:,2]
},
index=ts
)
df1
```
For more information on `pandas.DataFrame` see: https://pandas.pydata.org/docs/reference/frame.html.
The same result can be obtained with the `cdf_to_pandas()` function (see [Settings and functions](#settings)).
```
new = cdf_to_pandas(file1)
new
```
Compare the two data frames:
```
pd.testing.assert_frame_equal(df1, new)
```
Example: get minimum and maximum dates:
```
df1.index.min(), df1.index.max()
```
Example: get list of observatories (IAGA codes) stored in the files:
```
df1['IAGA_code'].unique()
```
Example: get list of observatories (IAGA codes) included in the following ranges of coordinates:
- $30 \leq Latitude \leq 70$
- $-10 \leq Longitude \leq 40$
```
df1[(df1['Latitude'] >= 30) & (df1['Latitude'] <= 70) & (df1['Longitude'] >= -10) & (df1['Longitude'] <= 40)]['IAGA_code'].unique()
```
You can do the same using the `.query()` method (see: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.query.html#pandas.DataFrame.query):
```
df1.query('(30 <= Latitude <= 70) and (-10 <= Longitude <= 40)')['IAGA_code'].unique()
```
<a id="obsms-multifiles" />
### Read data from multiple files
[[TOP]](#top)
Pandas dataframes can be concatenated to represent data obtained from more than one file. E.g. read data from the next AUX_OBSM2_ file:
```
file2 = dl_minute[1]
df2 = cdf_to_pandas(file2)
df2
```
The two dataframes can be concatenated using the `pandas.concat()` function (for more information see: https://pandas.pydata.org/docs/reference/api/pandas.concat.html#pandas.concat):
```
concatenated = pd.concat([df1, df2])
concatenated.sort_values(by=['IAGA_code', 'Timestamp'], inplace=True)
concatenated.index.min(), concatenated.index.max()
concatenated
```
With AUX_OBSS2_ data:
```
files = dl_second[:2]
files
concatenated = pd.concat([cdf_to_pandas(file) for file in files])
concatenated.sort_values(by=['IAGA_code', 'Timestamp'], inplace=True)
concatenated.index.min(), concatenated.index.max()
concatenated
```
```
# Questions to be answered:
# - Is there a relationship between how much the world is affected by climate
#   change and the number of mentions in countries' speeches?
# To do:
# - Merge the temperature dataset with the speech data
# - Pre-process/transform the temperature dataset into a DataFrame
# - Map with color for temperature changes
# - Regression for temperature vs. climate change mentions
```
## Import the databases
```
import os
import numpy as np
import pandas as pd
sessions = np.arange(25, 76)
data=[]
for session in sessions:
directory = "./TXT/Session "+str(session)+" - "+str(1945+session)
for filename in os.listdir(directory):
f = open(os.path.join(directory, filename),encoding="utf8")
if filename[0]==".": #ignore hidden files
continue
splt = filename.split("_")
data.append([session, 1945+session, splt[0], f.read()])
df_speech = pd.DataFrame(data, columns=['Session','Year','ISO-alpha3 Code','Speech'])
df_speech.tail()
import pandas as pd
import matplotlib.pyplot as plt
temp = pd.read_csv('./data/greenpeace.csv')
#set index to years and change column names for easy use
temp = temp.set_index('Year')
temp = temp.rename(columns={'Entity': 'Country', 'Surface temperature anomaly': 'Anomaly'})
#only get years 1970 and above, main data set is 1970 and above
temp = temp[temp.index >1969]
# temperature anomalies can't simply be averaged across countries (areas differ, etc.), so use a separate dataset for global temperature
world = pd.read_csv('./data/globaltemp.csv')
#create extra column that gives only the year of the anomaly per month
world['Year'] = pd.DatetimeIndex(world['Day']).year
#change column names & remove 'Code'
world = world.rename(columns = {'Entity': 'Country', 'temperature_anomaly': 'Anomaly'})[['Country', 'Anomaly', 'Year']]
# date is per month, change it to anomaly per year, set index to year
world = world[world['Country'] == 'World']
world = world.groupby('Year').mean()
#only get the data from 1970 and up, because of our main data set being 1970 and higher
world = world[world.index > 1969]
plt.plot(world)
```
## Fit the model
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
df_copy = pd.read_pickle("climate_mentions.pkl")
df_copy['Climate Mentions Count - Absolute'] = df_copy['Climate Mentions Count'] * df_copy['Speech'].str.len()
x = df_copy.groupby("Year").mean()["Climate Mentions Count - Absolute"]
y = world[world.Anomaly.index <= 2020]
plt.plot(x, y, 'o', color='red');
groupedcount = df_copy.groupby("Year").mean()
counttemp = groupedcount.join(world)[["Climate Mentions Count - Absolute", "Anomaly"]]
# add columns for cumulative count
counttemp["Countcum"] = counttemp["Climate Mentions Count - Absolute"].cumsum()
counttemp["Anomalycum"] = counttemp["Anomaly"].cumsum()
import operator
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import PolynomialFeatures
# load the dataset
data = counttemp
data = data.values
# choose the input and output variables
x, y = data[:, 1], data[:, 2]
# transforming the data to include another axis
x = x[:, np.newaxis]
y = y[:, np.newaxis]
# construct first degree polynomial
polynomial_features= PolynomialFeatures(degree=1)
x_poly = polynomial_features.fit_transform(x)
# construct model
model = LinearRegression()
model.fit(x_poly, y)
y_poly_pred = model.predict(x_poly)
# Calculate RMSE and R2
rmse = np.sqrt(mean_squared_error(y,y_poly_pred))
r2 = r2_score(y,y_poly_pred)
print("RMSE: %.5f" % (rmse))
print("R2: %.5f" % (r2))
# plot
plt.scatter(x, y, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(x,y_poly_pred), key=sort_axis)
x, y_poly_pred = zip(*sorted_zip)
plt.plot(x, y_poly_pred, color='m')
plt.show()
```
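For reference, the two goodness-of-fit metrics printed above can also be written out by hand; these small functions follow the standard definitions and should agree with scikit-learn's `mean_squared_error` and `r2_score`:

```
import numpy as np

def rmse(y, y_pred):
    """Root mean squared error: sqrt(mean((y - y_pred)^2))."""
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y - y_pred) ** 2)))

def r2(y, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

A perfect fit gives RMSE 0 and R2 1; R2 can go negative when the model fits worse than a horizontal line at the mean.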
## Remove outliers
```
import seaborn as sns
#find outliers
sns.boxplot(counttemp['Climate Mentions Count - Absolute'])
counttemp['Climate Mentions Count - Absolute'].idxmax()
#manually remove outlier
counttemp = counttemp[counttemp.index != 2019]
sns.boxplot(counttemp['Climate Mentions Count - Absolute'])
```
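Instead of locating and dropping the 2019 row by hand, the same point could be flagged automatically with the usual interquartile-range rule, which is the criterion boxplot whiskers are based on. A sketch; the commented usage line assumes the `counttemp` frame and column name used above:

```
import numpy as np

def iqr_outlier_mask(values, k=1.5):
    """True where a value lies outside [Q1 - k*IQR, Q3 + k*IQR]."""
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    return (v < q1 - k * iqr) | (v > q3 + k * iqr)

# e.g. keep only the non-outlying years:
# counttemp = counttemp[~iqr_outlier_mask(counttemp['Climate Mentions Count - Absolute'])]
```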
## Check the models again with removed outliers
```
# load the dataset
data = counttemp
data = data.values
# choose the input and output variables
x, y = data[:, 1], data[:, 2]
# transforming the data to include another axis
x = x[:, np.newaxis]
y = y[:, np.newaxis]
# construct first degree polynomial
polynomial_features= PolynomialFeatures(degree=1)
x_poly = polynomial_features.fit_transform(x)
# construct model
model = LinearRegression()
model.fit(x_poly, y)
y_poly_pred = model.predict(x_poly)
# Calculate RMSE and R2
rmse1 = np.sqrt(mean_squared_error(y,y_poly_pred))
r2_1 = r2_score(y,y_poly_pred)
print("RMSE: %.5f" % (rmse1))
print("R2: %.5f" % (r2_1))
# plot
plt.scatter(x, y, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(x,y_poly_pred), key=sort_axis)
x1, y_poly_pred1 = zip(*sorted_zip)
plt.plot(x1, y_poly_pred1, color='m')
plt.show()
```
## Check for a higher degree polynomial fit
```
# load the dataset
data = counttemp
data = data.values
# choose the input and output variables
x, y = data[:, 1], data[:, 2]
# transforming the data to include another axis
x = x[:, np.newaxis]
y = y[:, np.newaxis]
# construct third degree polynomial
polynomial_features= PolynomialFeatures(degree=3)
x_poly = polynomial_features.fit_transform(x)
# construct model
model = LinearRegression()
model.fit(x_poly, y)
y_poly_pred = model.predict(x_poly)
# Calculate RMSE and R2
rmse2 = np.sqrt(mean_squared_error(y,y_poly_pred))
r2_2 = r2_score(y,y_poly_pred)
print("RMSE: %.5f" % (rmse2))
print("R2: %.5f" % (r2_2))
# plot
plt.scatter(x, y, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(x,y_poly_pred), key=sort_axis)
x2, y_poly_pred2 = zip(*sorted_zip)
plt.plot(x2, y_poly_pred2, color='m')
plt.show()
# load the dataset
data = counttemp
data = data.values
# choose the input and output variables
x, y = data[:, 1], data[:, 2]
# transforming the data to include another axis
x = x[:, np.newaxis]
y = y[:, np.newaxis]
# construct fifth degree polynomial
polynomial_features= PolynomialFeatures(degree=5)
x_poly = polynomial_features.fit_transform(x)
# construct model
model = LinearRegression()
model.fit(x_poly, y)
y_poly_pred = model.predict(x_poly)
# Calculate RMSE and R2
rmse3 = np.sqrt(mean_squared_error(y,y_poly_pred))
r2_3 = r2_score(y,y_poly_pred)
print("RMSE: %.5f" % (rmse3))
print("R2: %.5f" % (r2_3))
# plot
plt.scatter(x, y, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(x,y_poly_pred), key=sort_axis)
x3, y_poly_pred3 = zip(*sorted_zip)
plt.plot(x3, y_poly_pred3, color='m')
plt.show()
from matplotlib.pyplot import figure
figure(figsize=(11, 9), dpi=80)
print("The first degree polynomial has a RMSE of %.5f" % (rmse1))
print("The third degree polynomial has a RMSE of %.5f" % (rmse2))
print("The fifth degree polynomial has a RMSE of %.5f" % (rmse3))
print("The first degree polynomial has a R2 of %.5f" % (r2_1))
print("The third degree polynomial has a R2 of %.5f" % (r2_2))
print("The fifth degree polynomial has a R2 of %.5f" % (r2_3))
# plot
plt.scatter(x, y, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(x,y_poly_pred), key=sort_axis)
plt.plot(x1, y_poly_pred1, color='c', label="first degree")
plt.plot(x2, y_poly_pred2, color='y', label="third degree")
plt.plot(x3, y_poly_pred3, color='r', label="fifth degree")
plt.legend()
plt.show()
```
#### Notebook style configuration (optional)
```
from IPython.display import display, HTML
style = open("./style.css").read()
display(HTML("<style>%s</style>" % style))
```
# Dataviz catalogue
In this lesson, we'll review a few of the many different types of plots matplotlib offers and manipulate them.
<img src="../images/plot-basic.png" width="50%" align="right" /> <img src="../images/plot-advanced.png" width="50%" />
These images come from the [cheatsheets](https://github.com/matplotlib/cheatsheets).
### Initialization
Before we start, let's set some default settings such that we do not have to write them each time we start a new figure.
```
import numpy as np
import matplotlib.pyplot as plt
p = plt.rcParams
p["figure.dpi"] = 300
```
## 1. Line plot
We have already used the line [plot](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html) command in the first lesson, but we used it to render a... line plot (how surprising!). This command is quite powerful, however, and can be used to render many other types of plots, such as a scatter plot.
```
X = np.random.normal(0.0, .5, 10000)
Y = np.random.normal(0.0, .5, len(X))
fig = plt.figure(figsize=(10,10))
ax = plt.subplot()
ax.plot(X, Y, linestyle="", color="C1", alpha=0.1,
marker="o", markersize=5, markeredgewidth=0)
plt.show();
```
In the figure above, we took advantage of the `alpha` parameter that sets the transparency level of markers. Consequently, areas with a higher number of points will be more opaque, suggesting density to the reader.
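As a rough rule of thumb (assuming standard "over" alpha compositing), the opacity built up by n markers stacked on the same spot is 1 - (1 - alpha)^n, which is why dense regions darken quickly even with alpha=0.1:

```
def stacked_alpha(alpha, n):
    """Combined opacity of n overlapping markers, each drawn with the given alpha."""
    return 1.0 - (1.0 - alpha) ** n

for n in (1, 5, 10, 25):
    print(n, round(stacked_alpha(0.1, n), 3))
```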
```
X = [ 0,0,0,0, None, 1,1,1,1,1, None, 2,2,2, None, 3,3,3,3,3,3]
Y = [ 1,2,3,4, None, 1,2,3,4,5, None, 1,2,3, None, 1,2,3,4,5,6]
fig = plt.figure(figsize=(10,4))
ax = plt.subplot()
ax.plot(X, Y, "-o", linewidth=5,
markersize=12, markeredgecolor="white", markeredgewidth=2)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_xlim(-.5,3.5); ax.set_xticks([0,1,2,3])
ax.set_ylim(0,7); ax.set_yticks([])
plt.show();
```
In the figure above, the important point to notice is the use of the `None` keyword in X and Y. This tells matplotlib that we have several series, so it won't draw a line between the end point of one series and the start point of the next.
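Writing the `None` separators by hand gets tedious for many series; a small helper (hypothetical, not a matplotlib function) can splice them in:

```
def join_with_breaks(series):
    """Flatten [(xs, ys), ...] into two lists with None inserted between series."""
    X, Y = [], []
    for xs, ys in series:
        X.extend(xs); X.append(None)
        Y.extend(ys); Y.append(None)
    return X, Y

X, Y = join_with_breaks([([0, 0, 0], [1, 2, 3]), ([1, 1], [1, 2])])
# X == [0, 0, 0, None, 1, 1, None]
```

A single `ax.plot(X, Y, "-o")` call then behaves exactly like the hand-written lists above.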
We can also combine several line plots to create a specific effect. For example, here is a progress bar made of several plots with various parameters. Here we take advantage of very thick lines and specify the line capstyle.
```
fig = plt.figure(figsize=(10,2))
ax = plt.subplot(frameon=False)
ax.plot([1,9], [0,0], linewidth=20, color="black", solid_capstyle="round")
ax.plot([1,9], [0,0], linewidth=18, color="white", solid_capstyle="round")
ax.plot([1,5], [0,0], linewidth=12, color="C1", solid_capstyle="round")
ax.plot([5,6], [0,0], linewidth=12, color="C1", solid_capstyle="butt")
ax.plot([6,6], [-0.5,0.5], "--", linewidth=1, color="black")
ax.set_xlim(0,10); ax.set_xticks([])
ax.set_ylim(-1.5,1.5); ax.set_yticks([])
plt.show();
```
## 2. Scatter plot
We have just seen that the `plot` command can be used to draw a scatter plot, and yet there exists a dedicated [scatter](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html) function. You may ask yourself what the difference between these two functions is. In fact, the regular plot command can render a scatter plot as long as all points share the same properties, i.e. the same color and size. If for some reason we need different colors or sizes per point, then we need to use the scatter command.
```
T = np.random.uniform(0, 2*np.pi, 500)
R = np.random.uniform(0.1, 0.5, len(T))
X, Y = R*np.cos(T), R*np.sin(T) # Position
S = np.random.uniform(50, 350, len(X)) # Size
V = np.arctan2(X,Y) # Value
fig = plt.figure(figsize=(10,10))
ax = plt.subplot(1, 1, 1)
ax.scatter(X, Y, S, V, cmap="twilight", edgecolor="white")
plt.show();
```
In the example above, each marker possesses its own size and color (using a colormap). We could even specify individual marker types.
Scatter can thus be used to produce heat maps very easily, as shown below.
```
np.random.seed(1);
X, Y = np.arange(24), np.arange(12)
X, Y = np.meshgrid(X,Y)
V = np.random.uniform(50, 250, X.shape)
fig = plt.figure(figsize=(10,5))
ax = plt.subplot()
ax.scatter(X, Y, V, V, marker='s', cmap="Blues")
plt.show();
```
We can also add "special effect" like we did previously with the line plot by plotting the scatter plot several times to highlight contour.
```
np.random.seed(1)
X = np.random.uniform(0,10,250)
Y = X*np.abs(np.random.normal(0,1,len(X)))**2
fig = plt.figure(figsize=(10,4));
ax = plt.subplot();
ax.scatter(X, Y, 50, linewidth=5, color="black", clip_on=False);
ax.scatter(X, Y, 50, linewidth=3, color="white", clip_on=False);
ax.scatter(X, Y, 50, linewidth=0, color="black", alpha=0.25, clip_on=False);
ax.spines['right'].set_visible(False);
ax.spines['left'].set_visible(False);
ax.spines['top'].set_visible(False);
ax.set_xlim(0,10); ax.set_xticks([0,10])
ax.set_yticks([])
plt.show();
```
Note that for this last example, we need to tell matplotlib not to clip the markers that fall outside the axis limits (x=0 and x=10) using the `clip_on=False` argument.
## 3. Image plot
We saw in the previous section how to do a heat map using a scatter plot. If we had used a fixed marker size, we could have used it to display an image, but this would be very inefficient. Instead, we can use the dedicated [imshow](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html) ("image show") function.
Let's first generate some data.
```
def f(n=100):
    X, Y = np.meshgrid(np.linspace(-3, 3, n),
                       np.linspace(-3, 3, n))
    return (1-X/2+X**5+Y**3)*np.exp(-X**2-Y**2)
Z5 = f(n=5)
Z10 = f(n=10)
Z25 = f(n=25)
Z50 = f(n=50)
Z100 = f(n=100)
```
We are now ready to display them.
```
fig = plt.figure(figsize=(10,6))
ax = plt.subplot(2,3,1); ax.imshow(Z5)
ax = plt.subplot(2,3,2); ax.imshow(Z10)
ax = plt.subplot(2,3,3); ax.imshow(Z25)
plt.show();
```
To be able to display these images, matplotlib made several implicit choices. First, you may have noticed that the limits on the axes are different in each of the three plots and do not relate to the [-3,+3] domain we used to define the image. The reason is that matplotlib has no idea where your images come from and cannot guess these limits. To solve this problem, we thus need to specify the extent of the image.
```
fig = plt.figure(figsize=(10,6))
extent = [-3,+3,-3,+3]
ax = plt.subplot(2,3,1); ax.imshow(Z5, extent=extent)
ax = plt.subplot(2,3,2); ax.imshow(Z10, extent=extent)
ax = plt.subplot(2,3,3); ax.imshow(Z25, extent=extent)
plt.show();
```
The second choice matplotlib made concerns colors. Our arrays Z5, Z10 and Z25 are really two-dimensional scalar arrays, and the question is thus: how do we map a scalar to a color? To do that, matplotlib uses what is called a colormap, which maps a normalized value to a given color. The default colormap is called "viridis", but there are [plenty of others](https://matplotlib.org/stable/tutorials/colors/colormaps.html). Let's try "RdYlBu".
```
fig = plt.figure(figsize=(10,6))
extent = [-3,+3,-3,+3]
cmap = "RdYlBu"
ax = plt.subplot(2,3,1); ax.imshow(Z5, extent=extent, cmap=cmap)
ax = plt.subplot(2,3,2); ax.imshow(Z10, extent=extent, cmap=cmap)
ax = plt.subplot(2,3,3); ax.imshow(Z25, extent=extent, cmap=cmap)
plt.show();
```
One important implicit choice when displaying images is the interpolation method between the pixels composing the output. The default is the nearest-neighbor filter, which results in pixelated images. This is a sane default for scientific visualization. However, in some specific cases you might want a smoother result, which you can get by providing the name of a method among [those available](https://matplotlib.org/stable/gallery/images_contours_and_fields/interpolation_methods.html?highlight=interpolation). Let's see the effect of the bicubic method.
```
fig = plt.figure(figsize=(10,6))
extent = [-3,+3,-3,+3]
cmap = "RdYlBu"
interpolation = "bicubic"
ax = plt.subplot(2,3,1);
ax.imshow(Z5, extent=extent, cmap=cmap, interpolation=interpolation)
ax = plt.subplot(2,3,2)
ax.imshow(Z10, extent=extent, cmap=cmap, interpolation=interpolation)
ax = plt.subplot(2,3,3)
ax.imshow(Z25, extent=extent, cmap=cmap, interpolation=interpolation)
plt.show();
```
Since we are using a colormap, we need to show how scalar values are mapped to colors, and for this we need to add a colorbar. Since there are three images, we should in principle use three colorbars because the mappings could potentially be different. This is not the case here because we are using the same data in each image, but let's pretend it's not the case. To ensure the mapping is the same in all three images, we'll explicitly set the minimum and maximum values.
```
fig = plt.figure(figsize=(10,6))
extent = [-3,+3,-3,+3]
cmap = "RdYlBu"
interpolation = "bicubic"
vmin, vmax = -1, 1
ax = plt.subplot(2,3,1);
ax.imshow(Z5, extent=extent, interpolation=interpolation,
cmap=cmap, vmin=vmin, vmax=vmax)
ax = plt.subplot(2,3,2);
ax.imshow(Z10, extent=extent, interpolation=interpolation,
cmap=cmap, vmin=vmin, vmax=vmax)
ax = plt.subplot(2,3,3);
ax.imshow(Z25, extent=extent, interpolation=interpolation,
cmap=cmap, vmin=vmin, vmax=vmax)
plt.show();
```
Let's now display a colorbar on the right. To do that, we'll use a gridspec and specify width ratios as we did in the introduction.
```
from matplotlib.gridspec import GridSpec
fig = plt.figure(figsize=(10,6));
G = GridSpec(1, 4, width_ratios=(20, 20, 20, 1))
extent = [-3,+3,-3,+3]
cmap = "RdYlBu"
interpolation = "bicubic"
vmin, vmax = -1, 1
ax = plt.subplot(G[0], aspect=1);
I = ax.imshow(Z5, extent=extent, interpolation=interpolation,
cmap=cmap, vmin=vmin, vmax=vmax)
ax = plt.subplot(G[1], aspect=1)
I = ax.imshow(Z10, extent=extent, interpolation=interpolation,
cmap=cmap, vmin=vmin, vmax=vmax)
ax = plt.subplot(G[2], aspect=1)
I = ax.imshow(Z25, extent=extent, interpolation=interpolation,
cmap=cmap, vmin=vmin, vmax=vmax)
plt.colorbar(I, cax=plt.subplot(G[3], aspect=20))
plt.show();
```
To finish our plot, let's add some contour levels using the [contour](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.contour.html) function. Since we have several arrays showing the same data with different resolution (Z5, Z10, Z25, Z50 and Z100), we can use the highest resolution to compute the contours. This will result in much smoother curves. Note that we need to vertically re-orient the contour (using the `origin` argument) such that it matches the orientation of the image.
```
from matplotlib.gridspec import GridSpec
fig = plt.figure(figsize=(10,6));
G = GridSpec(1, 4, width_ratios=(20, 20, 20, 1))
extent = [-3,+3,-3,+3]
cmap = "RdYlBu"
interpolation = "bicubic"
vmin, vmax = -1, 1
ax = plt.subplot(G[0], aspect=1);
I = ax.imshow(Z5, extent=extent, interpolation=interpolation,
cmap=cmap, vmin=vmin, vmax=vmax)
C = ax.contour(Z100, levels=10, extent=extent, origin="upper",
colors="black", linewidths=1)
ax = plt.subplot(G[1], aspect=1);
I = ax.imshow(Z10, extent=extent, interpolation=interpolation,
cmap=cmap, vmin=vmin, vmax=vmax)
C = ax.contour(Z100, levels=10, extent=extent, origin="upper",
colors="black", linewidths=1)
ax = plt.subplot(G[2], aspect=1);
I = ax.imshow(Z25, extent=extent, interpolation=interpolation,
cmap=cmap, vmin=vmin, vmax=vmax)
C = ax.contour(Z100, levels=10, extent=extent, origin="upper",
colors="black", linewidths=1)
plt.colorbar(I, cax=plt.subplot(G[3], aspect=20))
plt.show();
```
There are many other things that can be done with imshow & contour, and we'll see some of them later in the advanced matplotlib series.
## 4. Bar plot
We'll finish this lesson with [bar](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.bar.html) plots, which are omnipresent in the scientific literature to represent quantities or histograms.
```
np.random.seed(1)
X = np.arange(0,10)
Y = np.random.uniform(0.5, 1.0, len(X))
fig = plt.figure(figsize=(10,4))
ax = plt.subplot()
ax.bar(X, Y)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_xticks(X)
ax.set_yticks([])
plt.show();
```
When your data represents a mean, it is quite common to represent the standard deviation using an error bar.
```
np.random.seed(1)
X = np.arange(0,10)
Y = np.random.uniform(0.5, 1.0, (len(X),10))
fig = plt.figure(figsize=(10,4))
ax = plt.subplot();
ax.bar(X, Y.mean(axis=1), color="C0", yerr=Y.std(axis=1),
error_kw=dict(ecolor="C0", linewidth=3, capsize=5, capthick=3))
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_xticks(X)
ax.set_yticks([])
plt.show();
```
It is also common to have several series that need to be grouped. To do that, we can use several bar plots and play with the X coordinate.
```
np.random.seed(1)
X = np.arange(0,5)
Y1 = np.random.uniform(0.5, 1.0, len(X))
Y2 = np.random.uniform(0.5, 1.0, len(X))
Y3 = np.random.uniform(0.5, 1.0, len(X))
Y4 = np.random.uniform(0.5, 1.0, len(X))
fig = plt.figure(figsize=(10,4));
ax = plt.subplot();
ax.bar(X*5, Y1, color="C0", alpha=1.00)
ax.bar(X*5+1, Y2, color="C0", alpha=0.75)
ax.bar(X*5+2, Y3, color="C0", alpha=0.50)
ax.bar(X*5+3, Y4, color="C0", alpha=0.25)
ax.set_xticks(X*5+1.5)
ax.set_xticklabels(["2015","2016","2017","2018","2019"])
ax.set_yticks([])
plt.show();
```
Similarly, we can further modify the plot with additional series below using a negative height.
```
np.random.seed(1)
X = np.arange(0,5)
Y1 = np.random.uniform(0.25, 1.0, len(X))
Y2 = np.random.uniform(0.25, 1.0, len(X))
Y3 = np.random.uniform(0.25, 1.0, len(X))
Y4 = np.random.uniform(0.25, 1.0, len(X))
Y5 = np.random.uniform(0.25, 1.0, len(X))
Y6 = np.random.uniform(0.25, 1.0, len(X))
Y7 = np.random.uniform(0.25, 1.0, len(X))
Y8 = np.random.uniform(0.25, 1.0, len(X))
fig = plt.figure(figsize=(10,4));
ax = plt.subplot();
ax.bar(X*5, Y1, color="C0", alpha=1.00)
ax.bar(X*5+1, Y2, color="C0", alpha=0.75)
ax.bar(X*5+2, Y3, color="C0", alpha=0.50)
ax.bar(X*5+3, Y4, color="C0", alpha=0.25)
ax.bar(X*5, -Y5, color="C1", alpha=1.00)
ax.bar(X*5+1, -Y6, color="C1", alpha=0.75)
ax.bar(X*5+2, -Y7, color="C1", alpha=0.50)
ax.bar(X*5+3, -Y8, color="C1", alpha=0.25)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['bottom'].set_position(("data",0))
ax.set_xticks([])
ax.set_yticks([])
plt.show();
```
Bar plots can also be oriented horizontally using the `barh` function.
```
np.random.seed(1)
X = np.arange(0,25)
Y1 = np.random.uniform(75, 100, len(X))*(25-X)
Y2 = np.random.uniform(75, 100, len(X))*(25-X)
fig = plt.figure(figsize=(10,5))
ax = plt.subplot(1,1,1);
ax.barh(X, +Y1, color="C1")
ax.barh(X, -Y2, color="C3")
ax.spines['right'].set_visible(False);
ax.spines['top'].set_visible(False);
ax.spines['left'].set_position(("data",0))
ax.set_ylim(-0.5, len(X))
ax.set_xticks([-2000,-1000,0, 1000,2000]);
ax.set_xticklabels(["2000","1000","0", "1000","2000"])
ax.set_yticks([])
plt.show();
```
We have only scratched the surface of matplotlib; there exist many other types of plots that might be useful depending on your scientific domain. To learn about them, the best approach is to have a look at the [cheatsheets](https://github.com/rougier/matplotlib-cheatsheet) and the [gallery](https://matplotlib.org/stable/gallery/index.html).
## 5. Exercises
### 5.1 Regular hexagonal scatter
Since there exists a hexagonal marker (`h`), it is almost straightforward to create a regular hexagonal scatter plot. Try to reproduce the figure below by first placing the markers with the right size, and then try to color them.
<img src="../images/02-exercise-1.png" width="100%" />
### 5.2 Scatter-bar
We can mix scatter and bar plot to better represent data dispersion around the mean. Try to reproduce the figure below with the exact same appearance.
<img src="../images/02-exercise-2.png" width="100%" />
### 5.3 Mona Lisa variations
Using the [imread](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imread.html) function and the provided Mona Lisa [image](../images/MonaLisa.jpg), try to reproduce the figure below. Be careful with the image pixel format (RGBA). If you want to use a colormap, you need to extract a single channel.
<img src="../images/02-exercise-3.png" width="100%" />
----
**Copyright (c) 2021 Nicolas P. Rougier**
This work is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
<br/>
Code is licensed under a [2-Clauses BSD license](https://opensource.org/licenses/BSD-2-Clause)
```
import pandas as pd
import numpy as np
from IPython.display import display, HTML
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [14, 6]
import matplotlib.ticker as ticker
import seaborn as sns
import glob
import re
import matplotlib.ticker as mtick
# load data
df = pd.read_csv("./yahoo_change_data_3mon.csv")
df["Change%"] = df['Change%'].str.replace('%', '').astype('float')
display(df.head())
plt.style.use('seaborn-darkgrid')
fig, ax = plt.subplots()
ax.plot(df["Date"], df["Close"])
plt.xticks(rotation=30)
ax.set_xticks(ax.get_xticks()[::4])
plt.title("S&P 500 Change in Price over time", loc='left', fontsize=12, fontweight=0, color='black')
plt.xlabel("Date", fontsize=12)
plt.ylabel("Value of S&P 500", fontsize=12)
plt.show()
plt.style.use('seaborn-darkgrid')
# Graph 1
fig, ax1 = plt.subplots(figsize=(14, 6))
ax1.plot(df["Date"], df["Close"])
# Graph 2
ax2 = ax1.twinx()
ax2.grid(False)
ax2.bar(df["Date"], df["Volume"], alpha=0.2)
ax2.get_xaxis().set_visible(False)
# Plot settings
plt.setp( ax1.xaxis.get_majorticklabels(), rotation=30 )
ax2.set_xticks(ax2.get_xticks()[::4])
plt.title("S&P 500 Change in Price over time", loc='left', fontsize=12, fontweight=0, color='black')
ax1.set_xlabel("Date", fontsize=12)
ax1.set_ylabel("Value of S&P 500", fontsize=12)
ax2.set_ylabel("Daily Volume", fontsize=12)
ax2.yaxis.set_major_formatter(ticker.FuncFormatter(lambda x, p: str(x)[:1] + " Billion"))
plt.show()
from statsmodels import robust
df_t = df
def mad(row, df_t, key):
const = 0.6745
data_point = row[key]
median = df_t[key].median()
med_abs_dev = robust.scale.mad(df_t[key])
return abs(const * (data_point-median))/med_abs_dev
df_t["z_score_change"] = df_t.apply(lambda row: mad(row, df_t, "Change%"), axis=1)
df_t["z_score_volume"] = df_t.apply(lambda row: mad(row, df_t, "Volume"), axis=1)
df_t.tail(30).plot(x="Date", y=["z_score_change", "z_score_volume"], kind="bar",
legend=True, title="Modified Z-Score over the last 30 days", figsize=(16, 6));
plt.style.use('seaborn-darkgrid')
# Graph 1
fig = plt.figure()
ax1 = fig.add_axes([0.1, 0.1, 0.65, 0.8])
ax2 = fig.add_axes([0.78, 0.1, 0.12, 0.8])
ax1.bar(df["Date"], df["Change%"], alpha=0.8)
# Graph 2
ax2.boxplot(df_t["Change%"], patch_artist=True)
# Plot settings
plt.setp( ax1.xaxis.get_majorticklabels(), rotation=30 )
ax1.set_xticks(ax1.get_xticks()[::4])
ax1.set_title("S&P 500 Daily Change % over time", loc='left', fontsize=12, fontweight=0, color='black')
ax2.set_title("Outlier Boxplot", loc='left', fontsize=12, fontweight=0, color='black')
ax1.set_xlabel("Date", fontsize=12)
ax1.set_ylabel("Percent Change", fontsize=12)
ax1.yaxis.set_major_formatter(ticker.FuncFormatter(lambda x, p: str(x) + " %"))
ax2.get_xaxis().set_visible(False)
ax2.yaxis.tick_right()
ax2.set_ylabel("Percent Change", fontsize=12)
ax2.yaxis.set_label_position("right")
plt.show()
# load data
df_2 = pd.read_csv("./list_drawdown.csv")
display(df_2)
def process_df(dataframe):
# Add new columns for day and % change
dataframe["day"] = dataframe.index
    dataframe["change"] = 1 + (dataframe.Open - dataframe.Open.shift(1))/dataframe.Open
dataframe["value"] = 10000 * dataframe.change.cumprod()
# Drop extra columns
dataframe = dataframe.drop(['Date', 'Open', 'High', 'Low', 'Close', 'Adj Close', 'Volume'], axis=1)
    # Drop first row (its change is undefined, since there is no previous day)
dataframe.drop(dataframe.head(1).index,inplace=True)
return dataframe
list_of_files = glob.glob("raw_data/*")
plt.style.use('seaborn-darkgrid')
# create a color palette
palette = plt.get_cmap('tab10')
fig, ax = plt.subplots(figsize=(15, 7))
num=0
for file in list_of_files:
num+=1
file_label = re.search("\\d+", file).group(0)
#print(file, file_label)
df_temp = pd.read_csv(file)
df = process_df(df_temp)
if(file == "raw_data\\2020_1.csv"):
ax.plot(df['day'], df['value'], marker='', color="red", linewidth=1.5, alpha=1, label=file_label)
else:
ax.plot(df['day'], df['value'], marker='', color=palette(num), linewidth=1, alpha=0.6, label=file_label)
# Add legend
plt.legend(loc=1, ncol=2)
plt.title("Comparing the COVID-19 drawdown to others from history", loc='left', fontsize=12, fontweight=0, color='black')
plt.xlabel("Days into the drawdown", fontsize=12)
plt.ylabel("Value of a $10,000 investment", fontsize=12)
fmt = '${x:,.0f}'
tick = mtick.StrMethodFormatter(fmt)
ax.yaxis.set_major_formatter(tick)
plt.show()
import warnings
import matplotlib.cbook
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
plt.style.use('seaborn-darkgrid')
# create a color palette
palette = plt.get_cmap('tab10')
fig, ax = plt.subplots(figsize=(15, 8))
# custom ranges
custom_xlim = (0, 300)
custom_ylim = (5000, 10000)
num = 0
for i in range(1, 10):
num+=1
file = list_of_files[i-1]
ax = plt.subplot(3, 3, i)
file_label = re.search("\\d+", file).group(0)
df_temp = pd.read_csv(file)
df = process_df(df_temp)
if(file == "raw_data\\2020_1.csv"):
ax.plot(df['day'], df['value'], marker='', color="red", linewidth=1.5, alpha=1, label=file_label)
else:
ax.plot(df['day'], df['value'], marker='', color=palette(num), linewidth=1, alpha=0.6, label=file_label)
    # No tick labels everywhere, only on the outer subplots
    if num in range(7):
        plt.tick_params(labelbottom=False)
    if num not in [1,4,7]:
        plt.tick_params(labelleft=False)
plt.legend(loc=1, ncol=1, handlelength=0)
fmt = '${x:,.0f}'
tick = mtick.StrMethodFormatter(fmt)
ax.yaxis.set_major_formatter(tick)
plt.setp(ax, xlim=custom_xlim, ylim=custom_ylim)
# Axis title
fig.text(0.5, 0.02, 'Days into the bear market', ha='center', va='center')
fig.text(0.06, 0.5, 'Value of a $10,000 investment', ha='center', va='center', rotation='vertical')
plt.show()
```
MNIST GAN
---------
In this example, we will train a Generative Adversarial Network (GAN) on the MNIST dataset. This is a large collection of 28x28 pixel images of handwritten digits. We will try to train a network to produce new images of handwritten digits.
To begin, let's import all the libraries we'll need and load the dataset (which comes bundled with Tensorflow).
```
import deepchem as dc
import tensorflow as tf
from deepchem.models.tensorgraph import layers
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plot
import matplotlib.gridspec as gridspec
%matplotlib inline
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
images = mnist.train.images.reshape((-1, 28, 28, 1))
dataset = dc.data.NumpyDataset(images)
```
Let's view some of the images to get an idea of what they look like.
```
def plot_digits(im):
plot.figure(figsize=(3, 3))
grid = gridspec.GridSpec(4, 4, wspace=0.05, hspace=0.05)
for i, g in enumerate(grid):
ax = plot.subplot(g)
ax.set_xticks([])
ax.set_yticks([])
ax.imshow(im[i,:,:,0], cmap='gray')
plot_digits(images)
```
Now we can create our GAN. It consists of two parts:
1. The generator takes random noise as its input and produces output that will hopefully resemble the training data.
2. The discriminator takes a set of samples as input (possibly training data, possibly created by the generator), and tries to determine which are which. Its output is interpreted as the probability that each sample is from the training set.
```
class DigitGAN(dc.models.GAN):
def get_noise_input_shape(self):
return (None, 10)
def get_data_input_shapes(self):
return [(None, 28, 28, 1)]
def create_generator(self, noise_input, conditional_inputs):
dense1 = layers.Dense(50, in_layers=noise_input, activation_fn=tf.nn.relu, normalizer_fn=tf.layers.batch_normalization)
dense2 = layers.Dense(28*28, in_layers=dense1, activation_fn=tf.sigmoid, normalizer_fn=tf.layers.batch_normalization)
reshaped = layers.Reshape((None, 28, 28, 1), in_layers=dense2)
return [reshaped]
def create_discriminator(self, data_inputs, conditional_inputs):
conv = layers.Conv2D(num_outputs=32, kernel_size=5, stride=2, activation_fn=dc.models.tensorgraph.model_ops.lrelu(0.2), normalizer_fn=tf.layers.batch_normalization, in_layers=data_inputs)
dense = layers.Dense(1, in_layers=layers.Flatten(conv), activation_fn=tf.sigmoid)
return dense
gan = DigitGAN(learning_rate=0.0005)
```
Now to train it. The generator and discriminator are both trained together. The generator tries to get better at fooling the discriminator, while the discriminator tries to get better at distinguishing real data from generated data (which in turn gives the generator a better training signal to learn from).
```
def iterbatches(epochs):
for i in range(epochs):
for batch in dataset.iterbatches(batch_size=gan.batch_size):
yield {gan.data_inputs[0]: batch[0]}
gan.fit_gan(iterbatches(50), generator_steps=1.5, checkpoint_interval=5000)
```
Let's generate some data and see how the results look.
```
plot_digits(gan.predict_gan_generator(batch_size=16))
```
Not too bad. Most of the generated images look convincingly like handwritten digits, but only a few digits are represented. This is a common problem in GANs called "mode collapse". The generator learns to produce output that closely resembles training data, but the range of its output only covers a subset of the training distribution. Finding ways to prevent mode collapse is an active area of research for GANs.
<img src="fig/scikit-hep-logo.svg" style="height: 200px; margin-left: auto; margin-bottom: -75px">
# Scikit-HEP tutorial for the STAR collaboration
This notebook shows you how to do physics analysis in Python using Scikit-HEP tools: Uproot, Awkward Array, Vector, hist, etc., and it uses a STAR PicoDST file as an example. I presented this tutorial on Zoom on September 13, 2021 (see [STAR collaboration website](https://drupal.star.bnl.gov/STAR/meetings/star-collaboration-meeting-september-2021/juniors-day), if you have access). You can also find it on GitHub at [jpivarski-talks/2021-09-13-star-uproot-awkward-tutorial](https://github.com/jpivarski-talks/2021-09-13-star-uproot-awkward-tutorial).
You can [run this notebook on Binder](https://mybinder.org/v2/gh/jpivarski-talks/2021-09-13-star-uproot-awkward-tutorial/HEAD?urlpath=lab/tree/tutorial.ipynb), which loads all of the package dependencies on Binder's servers; you don't have to install anything on your computer. But if you would like to run it on your computer, see the [requirements.txt](https://github.com/jpivarski-talks/2021-09-13-star-uproot-awkward-tutorial/blob/main/requirements.txt) file. This specifies exact versions of dependencies that are known to work for this notebook, though if you plan to use these packages later on, you'll want the latest versions of each.
The first 5 sections are introductory, and the last contains exercises. In the live tutorial, we spent one hour on the introductory material and one hour in small groups, working on the exercises.
## 1. Python: interactively building up an analysis
<img src="fig/python-logo.svg" style="height: 150px">
### Introduction: why Python?
**It's where the party's at.**
I mean it: the best argument for Python is that there's a huge community of _people_ who can help you and _people_ developing infrastructure.
You probably learned Python in a university course, and you'll probably use it a lot in your career.
<br><br><br>
**Python is especially popular for data analysis:**
<img src="fig/analytics-by-language.svg" width="800px">
See also Jake VanderPlas's [_Unexpected Effectiveness of Python in Science_](https://speakerdeck.com/jakevdp/the-unexpected-effectiveness-of-python-in-science) talk (2017).
<br><br><br>
**Python is one of a large class of languages in which "debugging mode is always on."**
* Except for bugs in external libraries (and very rare bugs in the language itself), you can't cause a segmentation fault or memory leak.
* Errors produce a stack trace _with line numbers_.
* You can print any object.
* Interactive debuggers are superfluous; the language itself is interactive (raise an exception at a break point that exports variables).
* Although long-running programs are slow, it _starts up_ quickly, minimizing the debug cycle.
C++ is not one of these languages.
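As a toy illustration of the "stack trace with line numbers" point (the functions here are hypothetical, not part of the tutorial), an exception's traceback names every frame on the way to the bug:

```python
import traceback

def innermost(x):
    return x / 0   # deliberate bug

def outer(x):
    return innermost(x)

try:
    outer(5)
except ZeroDivisionError:
    trace = traceback.format_exc()

# the trace names each function (outer, innermost) with its line number
print(trace)
```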
<br><br><br>
**"Always debugging" has a cost: performance.**
Most of Python's design choices prevent fast computation:
* runtime type checking
* garbage collection
* boxing numbers as objects
* no value types or move semantics at all (all Python references are "pointer chasing")
* virtual machine indirection
* Global Interpreter Lock (GIL) prevents threads from running in parallel
_(Maybe not a necessary cost: [Julia](https://julialang.org/) might be exempt, but Julia is not popular—yet.)_
<br><br><br>
**And yet, scientists with big datasets have made Python their home.**
This was only possible because an ecosystem grew around _arrays_ and _array-oriented programming._
<img src="fig/harris-array-programming-nature.png" width="800px">
From Harris, C.R., Millman, K.J., van der Walt, S.J. _et al._ Array programming with NumPy. _Nature_ **585,** 357–362 (2020).
<br>
https://doi.org/10.1038/s41586-020-2649-2
<br><br><br>
**Array-oriented programming is also the paradigm of GPU programming.**
The slowness of Python might have pushed us toward it, but dividing your work into
* complex bookkeeping that doesn't need to be fast and
* a mathematical formula to compute on billions of data points
is also the right way to massively parallelize.
<br><br><br>
### What is Scikit-HEP?
[Scikit-HEP](https://scikit-hep.org/) is a collection of Python packages for array-oriented data analysis in particle physics.
(I know, "High Energy" does not describe all of Particle Physics, but Scikit-PP was a worse name.)
The goal of this umbrella organization is to make sure that these packages
* work well together
* work well with the Python ecosystem (NumPy, Pandas, Matplotlib, ...) and the traditional HEP ecosystem (ROOT, formats, generators, ...)
* are packaged well and designed well, to minimize physicist frustration!
I'm the author of Uproot and Awkward Array, so I'll have the most to say about these.
But the context is that they work with a suite of other libraries.
<br><br><br>
### Python language features that we'll be using
We don't have time for an intro course, but many exist.
<br><br><br>
The basic control structures:
```
x = 5
if x < 5:
print("small")
else:
print("big")
for i in range(x):
print(i)
```
are less relevant for this tutorial because these are what we avoid when working with large arrays.
<br><br><br>
We will be focusing on operations like
```python
import compiled_library
compiled_library.do_computationally_expensive_thing(big_array)
```
because array-oriented Python is about separating small scale, complex bookkeeping (with `if` and `for`) from large-scale processing in compiled libraries.
<br><br><br>
The trick is for the Python-side code to be expressive enough and the compiled code to be general enough that you don't need a new `compiled_library` for every little thing.
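As a sketch of that division of labor (using a made-up array, not the tutorial's data): the plain-Python side does the small-scale bookkeeping, and a single NumPy call does the large-scale arithmetic in compiled code.

```python
import numpy as np

big_array = np.arange(1_000_000, dtype=np.float64)

# small-scale bookkeeping in plain Python: decide which window to analyze
start, stop = 10, 500_000

# large-scale processing in compiled code: one call handles ~half a million values
result = np.sqrt(big_array[start:stop]).sum()
print(result)
```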
<br><br><br>
Much of this expressivity grew up around Python's syntax for slicing lists:
```
some_list = [0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9]
```
Single-element access is like C (counting starts at zero):
```
some_list[0]
some_list[4]
```
Except when the index is negative; then it counts from the end of the list.
```
some_list[-1]
some_list[-2]
```
Running off the end of a list is an exception, not a segmentation fault or data corruption.
```
some_list[10]
some_list[-11]
```
With a colon (`:`), you can also select ranges. The _start_ of each range is included; the _stop_ is excluded.
```
some_list[2:7]
some_list[-6:]
```
Out-of-bounds ranges don't cause exceptions; they get clipped.
```
some_list[8:100]
some_list[-1000:-500]
```
Slices have a third argument for how many elements to skip between each step.
```
some_list[1:10:2]
some_list[::2]
```
This _step_ is not as useful as the _start_ and _stop_, but you can always reverse a list:
```
some_list[::-1]
```
The _start_, _stop_, and _step_ are exactly the arguments of `range`:
```
for i in range(1, 9, 3):
print(i)
some_list[1:9:3]
```
<br><br><br>
In Python, "dicts" (mappings, like `std::map`) are just as important as lists (like `std::vector`).
They use the square-bracket syntax in a different way:
```
some_dict = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
some_dict["two"]
some_dict["not in dict"]
```
<br><br><br>
Apart from function calls, arithmetic, variables and assignment—which are the same in all mainstream languages—slicing is the only language feature we need for NumPy.
<br><br><br>
## 2. NumPy: thinking one array at a time
<img src="fig/numpy-logo.svg" style="height: 150px">
### Introduction
NumPy is a Python library consisting of one major data type, `np.ndarray`, and a suite of functions to manipulate objects of that type.
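As a minimal look at that one data type (a made-up array for illustration): every `np.ndarray` carries a fixed element type (`dtype`) and a `shape`, which is what lets NumPy's compiled routines run without per-element type checks.

```python
import numpy as np

a = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(a.dtype)   # float64
print(a.shape)   # (2, 3)
print(a.ndim)    # 2
```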
<br><br><br>
<img src="fig/Numpy_Python_Cheat_Sheet.svg" width="100%">
<br><br><br>
This is "array-oriented." In each step, you decide what to do to _all values in a large dataset_.
There is a tradition of languages built around this paradigm. Most of them have been for interactive data processing. NumPy is unusual in that it's a library.
<img src="fig/apl-timeline.svg" width="800px">
<br><br><br>
### Slicing in NumPy
NumPy has the same slicing syntax for arrays as Python has for lists.
```
import numpy as np
some_array = np.array([0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9])
some_array[3]
some_array[-2]
some_array[-3::-1]
```
But NumPy slices can operate on multiple dimensions.
```
array3d = np.arange(2*3*5).reshape(2, 3, 5)
array3d
array3d[1:, 1:, 1:]
```
Python lists _do not_ do this.
```
list3d = array3d.tolist()
list3d
list3d[1:, 1:, 1:]
```
NumPy slices can have mixed types:
```
array3d[:, -1, -1]
```
<img src="fig/numpy-slicing.png" width="400px">
Most importantly, NumPy slices can also be arrays.
```
some_array = np.array([ 0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9])
boolean_slice = np.array([True, True, True, True, True, False, True, False, True, False])
some_array[boolean_slice]
integer_slice = np.array([4, 2, 2, 0, 9, 8, 3])
some_array[integer_slice]
some_array[np.random.permutation(10)]
```
#### **Application:** A boolean-array slice is what we like to call a cut!
```
primary_vertexes = np.random.normal(0, 1, (1000000, 2))
primary_vertexes
len(primary_vertexes)
trigger_decision = np.random.randint(0, 2, 1000000, dtype=np.bool_)
trigger_decision
primary_vertexes[trigger_decision]
len(primary_vertexes[trigger_decision])
```
#### **Observation:** An integer-array slice is more general than a boolean-array slice.
```
indexes_that_pass_trigger = np.nonzero(trigger_decision)[0]
indexes_that_pass_trigger
primary_vertexes[indexes_that_pass_trigger]
len(primary_vertexes[indexes_that_pass_trigger])
```
In fact, integer-array slicing is [as general as function composition](https://github.com/scikit-hep/awkward-1.0/blob/0.2.6/docs/theory/arrays-are-functions.pdf).
Often, a hard problem can be solved by (1) constructing the appropriate integer array and (2) using it as a slice.
Many problems can be solved at compile-time speed (through NumPy) without compiling a new extension module.
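For instance, sorting one array by the values of another is a classic case of the two-step recipe (this sketch uses made-up values, not the tutorial's data): step (1) build the integer array with `np.argsort`, step (2) use it as a slice on as many parallel arrays as you like.

```python
import numpy as np

pt  = np.array([3.3, 1.1, 9.9, 2.2])
eta = np.array([0.5, -1.0, 2.0, 0.0])

order = np.argsort(pt)[::-1]   # step 1: integer array, highest pt first
pt_sorted  = pt[order]         # step 2: use it as a slice
eta_sorted = eta[order]        # the same reordering applied to a second array
```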
#### **Application:** An integer-array slice maps objects in one collection to objects in another collection.
```
tracks = np.array([(0, 0.0, -1), (1, 1.1, 3), (2, 2.2, 0), (3, 3.3, -1), (4, 4.4, 2)], [("id", int), ("pt", float), ("shower_id", int)])
tracks["id"], tracks["pt"], tracks["shower_id"]
showers = np.array([(0, 2.0, 2), (1, 9.9, -1), (2, 4.0, 4), (3, 1.0, 1)], [("id", int), ("E", float), ("track_id", int)])
showers["id"], showers["E"], showers["track_id"]
showers[2]
showers[2]["track_id"]
tracks[showers[2]["track_id"]]
tracks[showers["track_id"]]
showers[tracks["shower_id"]]
```
### Elementwise operations
All "scalars → scalar" math functions, such as `+`, `sqrt`, `sinh`, can also take arrays as input and return an array as output.
The mathematical operation is performed elementwise.
```
some_array = np.arange(10)
some_array
some_array + 100
some_array + np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
np.sqrt(some_array)
```
This is the part of array-oriented programming in Python that most resembles GPU programming.
One step ("kernel") of a GPU calculation is usually a simple function applied to every element of an array on a GPU.
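In that spirit, a scalar formula written once becomes a "kernel" applied to a whole array in a single call (a toy polynomial here, for illustration):

```python
import numpy as np

def kernel(x):
    # written as if for one value, but works on whole arrays elementwise
    return 3*x**2 + 2*x + 1

out = kernel(np.arange(5))
print(out)   # [ 1  6 17 34 57]
```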
#### **Observation:** Elementwise operations can be used to construct boolean arrays for cuts.
```
evens_and_odds = np.arange(10)
evens_and_odds
is_even = (evens_and_odds % 2 == 0)
is_even
evens_and_odds[is_even]
evens_and_odds[~is_even]
```
This works because the comparison operators, `==`, `!=`, `<`, `<=`, `>`, `>=` are also elementwise.
```
np.arange(9) < 5
```
Cuts can be combined with _bitwise_ operators: `&` (and), `|` (or), `~` (not).
Python's normal set of _logical_ operators—`and`, `or`, `not`—do not apply elementwise. (They just raise an error if you try to use them with arrays.)
The worst part of this is that the bitwise operators have stronger precedence than comparisons:
```python
cut = pt > 5 & abs(eta) < 3 # WRONG!
cut = (pt > 5) & (abs(eta) < 3) # right
```
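A minimal sketch of why the unparenthesized line fails (the array names here are stand-ins, not real branches): `&` binds tighter than `>`, so Python parses it as `pt > (5 & abs(eta)) < 3`, which cannot be evaluated elementwise.

```python
import numpy as np

pt = np.array([3.0, 7.0, 9.0])
eta = np.array([0.5, -4.0, 1.0])

# parenthesized: two elementwise comparisons, then elementwise AND
cut = (pt > 5) & (abs(eta) < 3)

# unparenthesized: parsed as `pt > (5 & abs(eta)) < 3`, which raises for arrays
failed = False
try:
    bad = pt > 5 & abs(eta) < 3
except (TypeError, ValueError):
    failed = True
```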
#### **Application:** Applying a mathematical formula to every event of a large dataset.
```
zmumu = np.load("data/Zmumu.npz") # NumPy's I/O format; only using it here because I haven't introduced Uproot yet
pt1 = zmumu["pt1"]
eta1 = zmumu["eta1"]
phi1 = zmumu["phi1"]
pt2 = zmumu["pt2"]
eta2 = zmumu["eta2"]
phi2 = zmumu["phi2"]
pt1
eta1
phi1
```
This formula computes the invariant mass of particles 1 and 2 from $p_T$, $\eta$, and $\phi$, treating both particles as massless: $m = \sqrt{2\,p_{T,1}\,p_{T,2}\,(\cosh(\eta_1 - \eta_2) - \cos(\phi_1 - \phi_2))}$.
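For reference, this is the standard massless approximation; it follows from expanding $m^2 = (p_1 + p_2)^2$ with $E = p_T\cosh\eta$ and $p_z = p_T\sinh\eta$ (a derivation sketch, not spelled out in the original):

```latex
m^2 = (p_1 + p_2)^2 = 2\,(E_1 E_2 - \vec{p}_1 \cdot \vec{p}_2)
      \qquad \text{(massless: } p_1^2 = p_2^2 = 0\text{)}

E_1 E_2 - \vec{p}_1 \cdot \vec{p}_2
      = p_{T,1}\,p_{T,2}\left[\cosh(\eta_1 - \eta_2) - \cos(\phi_1 - \phi_2)\right]
```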
Let's apply it to the first item of each of the arrays.
```
np.sqrt(2*pt1[0]*pt2[0]*(np.cosh(eta1[0] - eta2[0]) - np.cos(phi1[0] - phi2[0])))
```
Now let's apply it to every item of the arrays.
```
np.sqrt(2*pt1*pt2*(np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))
```
#### **Application:** Building up an analysis from quick plots.
Elementwise array formulas provide a quick way to make a plot.
```
import matplotlib.pyplot as plt
plt.hist(np.sqrt(2*pt1*pt2*(np.cosh(eta1 - eta2) - np.cos(phi1 - phi2))), bins=120, range=(0, 120));
```
In ROOT, there's a temptation to do a whole analysis in `TTree::Draw` expressions because the feedback is immediate, like array formulas.
However, that wouldn't scale to large datasets. (`TTree::Draw` would repeatedly read from disk.)
Array formulas, however, can be both quick plots _and_ scale to large datasets: **put the array formula into a loop over batches**.
```
def get_batch(i):
    "Fetches a batch from a large dataset (1000 events in this example)."
    zmumu = np.load("data/Zmumu.npz")
    pt1 = zmumu["pt1"][i*1000 : (i+1)*1000]
    eta1 = zmumu["eta1"][i*1000 : (i+1)*1000]
    phi1 = zmumu["phi1"][i*1000 : (i+1)*1000]
    pt2 = zmumu["pt2"][i*1000 : (i+1)*1000]
    eta2 = zmumu["eta2"][i*1000 : (i+1)*1000]
    phi2 = zmumu["phi2"][i*1000 : (i+1)*1000]
    return pt1, eta1, phi1, pt2, eta2, phi2
```
The array formula from the quick plot can be pasted directly into this loop over batches.
```
# accumulated histogram
counts, edges = None, None
for i in range(3):
    pt1, eta1, phi1, pt2, eta2, phi2 = get_batch(i)
    # exactly the same array formula
    invariant_mass = np.sqrt(2*pt1*pt2*(np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))
    batch_counts, batch_edges = np.histogram(invariant_mass, bins=120, range=(0, 120))
    if counts is None:
        counts, edges = batch_counts, batch_edges
    else:
        counts += batch_counts   # not the first time: add these counts to the previous counts
counts, edges
import mplhep
mplhep.histplot(counts, edges);
```
## 3. Uproot: array-oriented ROOT I/O
<img src="fig/uproot-logo.svg" style="height: 150px">
### Introduction
**Uproot is a reimplementation of ROOT file I/O in Python.**
See [uproot.readthedocs.io](https://uproot.readthedocs.io/) for tutorials and reference documentation.
Uproot can read most data types, but is particularly good for simple data (such as the contents of a PicoDST file). Uproot can also write simple data types (not covered in this tutorial).
<img src="fig/abstraction-layers.svg" width="800px">
<br><br><br>
**Data in ROOT files are stored in arrays (batches called "TBaskets"). The format is ideally suited for array-centric programming.**
Navigating the file in Python is slower than it would be in C++ (the "complex bookkeeping"), but the time to extract and decompress a large array is independent of C++ vs Python.
<br>
<img src="fig/terminology.svg" width="650px">
<br><br><br>
### Navigating a file: PicoDST
For this tutorial, we will be using a 1 GB PicoDST file.
Local and remote files can be opened with [uproot.open](https://uproot.readthedocs.io/en/latest/uproot.reading.open.html).
```
import uproot
picodst_file = uproot.open("https://pivarski-princeton.s3.amazonaws.com/pythia_ppZee_run17emb.picoDst.root")
picodst_file
```
A [ReadOnlyDirectory](https://uproot.readthedocs.io/en/latest/uproot.reading.ReadOnlyDirectory.html) is a mapping, like a Python dict.
The `keys()` method and square-bracket syntax work on a directory as they would on a dict.
```
picodst_file.keys()
picodst = picodst_file["PicoDst"]
picodst
```
In Uproot, a [TTree](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html) is also a mapping, but this time the keys are [TBranch](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.TBranch.html)es.
```
picodst.keys()
picodst["Event"]
picodst["Event"].keys()
picodst["Event/Event.mEventId"]
```
These objects have a lot of methods and properties, some of which are very low-level (e.g. get the [number of uncompressed bytes for a TBasket](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.TBranch.html#basket-uncompressed-bytes)).
[TBranch.typename](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.TBranch.html#typename), which returns the C++ type as a string, is a useful one.
```
picodst["Event/Event.mEventId"].typename
picodst["Event/Event.mTriggerIds"].typename
picodst["Event.mNHitsHFT[4]"].typename
```
The [TTree.keys](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#keys) method has `filter_name` and `filter_typename` to search for TBranches.
```
picodst.keys(filter_name="*Primary*")
for key in picodst.keys(filter_name="*Event*", filter_typename="*float*"):
    print(key.ljust(40), picodst[key].typename)
```
A convenient way to see all the branches and types at once is [TTree.show](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#show).
```
picodst.show()
```
### Extracting arrays
[TBranch.array](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.TBranch.html#array) reads an entire TBranch (all TBaskets) from the file.
```
picodst["Event/Event.mEventId"].array()
```
For large and/or remote files, you can limit how much it reads with `entry_start` and `entry_stop`.
That can save time when you're exploring, or it can be used in a parallel job (e.g. "this task is responsible for `entry_start=10000, entry_stop=20000`").
```
picodst["Event/Event.mEventId"].array(entry_stop=10)
```
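As an illustration of the parallel-job idea, here is a minimal sketch (the helper name is made up, not part of Uproot) that splits a number of entries into contiguous `(entry_start, entry_stop)` ranges:

```python
def entry_ranges(num_entries, num_tasks):
    """Split [0, num_entries) into num_tasks contiguous (start, stop) ranges."""
    step = -(-num_entries // num_tasks)  # ceiling division
    return [(i * step, min((i + 1) * step, num_entries))
            for i in range(num_tasks)]

# e.g. 25 entries shared among 4 tasks; each pair would become
# entry_start/entry_stop in a call to TBranch.array
ranges = entry_ranges(25, 4)
```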
[TTree.arrays](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#arrays) reads multiple TBranches from the file.
It is important to filter the TBranches somehow, with `filter_name`/`filter_typename` or with a set of `expressions`, to prevent it from trying to read the whole file!
First, test the filter on [TTree.keys](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#keys) to be sure it's selecting the TBranches you want.
```
picodst.keys(filter_name="*mPrimaryVertex[XYZ]")
```
Then, apply the same filters to [TTree.arrays](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#arrays) to read everything in one network request.
```
primary_vertex = picodst.arrays(filter_name="*mPrimaryVertex[XYZ]")
primary_vertex
```
The data have been downloaded; extracting their parts is fast.
```
primary_vertex["Event.mPrimaryVertexX"]
primary_vertex["Event.mPrimaryVertexY"]
primary_vertex["Event.mPrimaryVertexZ"]
```
### Iterating over data in batches
The last section of the NumPy tutorial advocated an "iterate over batches" approach.
[TTree.iterate](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#iterate) does that with an interface that's similar to [TTree.arrays](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#arrays), and [uproot.iterate](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.iterate.html) does that with a collection of files.
```
picodst.keys(filter_name="*mGMomentum[XYZ]")
```
Quick check to see how much this is going to download (234 MB).
```
picodst["Track/Track.mGMomentumX"].compressed_bytes * 3 / 1000**2
picodst.num_entries
for arrays in picodst.iterate(filter_name="*mGMomentum[XYZ]", step_size=100):
    # put your analysis in here
    print(len(arrays), arrays)
```
## 4. Awkward Array: complex data in arrays
<img src="fig/awkward-logo.svg" style="height: 150px">
### Introduction
You might have noticed that the arrays Uproot returned (by default) were not NumPy arrays.
That's because our data frequently has variable-length structures, and NumPy deals strictly with rectilinear arrays.
**Awkward Array is an extension of NumPy to generic data types, including nested lists.**
See [awkward-array.org](https://awkward-array.org/) for tutorials and reference documentation.
<img src="fig/pivarski-one-slide-summary.svg" width="1000px">
### Slicing in Awkward Array
Like NumPy, the Awkward Array library has one major type, `ak.Array`, and a suite of functions that operate on arrays.
Like NumPy, `ak.Arrays` can be sliced in multiple dimensions, including boolean-array and integer-array slicing.
```
import awkward as ak
some_array = ak.Array([0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9])
some_array
some_array[2]
some_array[-3::-1]
some_array = ak.Array([ 0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9])
boolean_slice = ak.Array([True, True, True, True, True, False, True, False, True, False])
some_array[boolean_slice]
integer_slice = ak.Array([4, 2, 2, 0, 9, 8, 3])
some_array[integer_slice]
```
Unlike NumPy, lists within a multidimensional array need not have the same length.
```
another_array = ak.Array([[0.0, 1.1, 2.2], [], [3.3, 4.4], [5.5], [6.6, 7.7, 8.8, 9.9]])
another_array
```
But it can still be sliced.
```
another_array[:, 2:]
another_array[::2, 0]
```
Since inner lists can have different lengths and some might even be empty, it's much easier to make slices that can't be satisfied. For instance,
```
another_array[:, 0]
```
Some of Awkward Array's functions, such as [ak.num](https://awkward-array.readthedocs.io/en/latest/_auto/ak.num.html), can help to construct valid slices.
```
ak.num(another_array)
ak.num(another_array) > 0
another_array[ak.num(another_array) > 0]
another_array[ak.num(another_array) > 0, 0]
```
### Slices beyond NumPy
Since Awkward Arrays can have shapes that aren't possible in NumPy, there will be situations where we'll want to make slices that have no NumPy equivalent.
Like, what if we want to select all the even numbers from:
```
even_and_odd = ak.Array([[0, 1, 2], [], [3, 4], [5], [6, 7, 8, 9]])
even_and_odd
```
We can use an elementwise comparison to make a boolean array.
```
is_even = (even_and_odd % 2 == 0)
is_even
```
This `is_even` contains variable-length lists. You can see that from its type and by turning it back into Python lists:
```
is_even.type
is_even.tolist()
```
But the lengths of the lists in this array _are the same_ as the lengths of the lists in `even_and_odd` because the elementwise operation does not change list lengths.
We'd like to use this as a cut:
```
even_and_odd[is_even]
```
As long as the slicing array "fits into" the array being sliced, it can be used as a slice.
This lets us use Awkward Arrays in the same situations as NumPy arrays, even when the data can't be arranged into a rectilinear tensor.
### Record arrays
Another data type that Awkward Array supports is a "record." This is an object with named, typed fields, like a `class` or `struct` in C++.
When converting to or from Python objects, Awkward records correspond to Python dicts. But these are not general mappings like dicts: every instance of a record in the array must have the same fields.
```
array_of_records = ak.Array([{"x": 1, "y": 1.1}, {"x": 2, "y": 2.2}, {"x": 3, "y": 3.3}])
array_of_records
array_of_records.type
array_of_records.tolist()
```
Records can be _in_ lists and/or their fields can _contain_ lists.
```
array_of_lists_of_records = ak.Array([
[{"x": 1, "y": 1.1}, {"x": 2, "y": 2.2}, {"x": 3, "y": 3.3}],
[],
[{"x": 4, "y": 4.4}, {"x": 5, "y": 5.5}],
])
array_of_lists_of_records
array_of_lists_of_records.type
array_of_records_with_list = ak.Array([
{"x": 1, "y": [1]}, {"x": 2, "y": [1, 2]}, {"x": 3, "y": [1, 2, 3]}
])
array_of_records_with_list
array_of_records_with_list.type
```
### Fluidity of records
Whereas a class instance is a "solid" object in Python or C++, whose existence takes up memory and whose construction and destruction take time, records in Awkward Array are very fluid.
Records can be "projected" to separate arrays and separate arrays can be "zipped" into records for very little computational cost.
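A NumPy analogue of cheap projection (a side note, not Awkward itself): selecting a field of a structured array returns a view that shares memory with the original, so no data is copied.

```python
import numpy as np

records = np.array([(1, 1.1), (2, 2.2), (3, 3.3)], dtype=[("x", int), ("y", float)])

# "projecting" out a field makes a view, not a copy
xs = records["x"]
shares = np.shares_memory(xs, records)
```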
```
array_of_records
array_of_records["x"] # or array_of_records.x
array_of_records["y"] # or array_of_records.y
array_of_lists_of_records.x
array_of_lists_of_records.y
array_of_records_with_list.x
array_of_records_with_list.y
```
Going in the opposite direction, we can [ak.zip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.zip.html) these together:
```
new_records = ak.zip({"a": array_of_records.y, "b": array_of_records_with_list.y})
new_records
new_records.type
new_records.tolist()
```
The only constraint is that any arrays containing variable-length lists must have the same list lengths as the other arrays.
If they don't, they can't be "zipped" in all dimensions.
However, you can limit how deeply it _attempts_ to zip them with `depth_limit`.
```
everything = ak.zip(
{
"a": array_of_records.x,
"b": array_of_records.y,
"c": array_of_lists_of_records.x,
"d": array_of_lists_of_records.y,
"e": array_of_records_with_list.x,
"f": array_of_records_with_list.y,
}, depth_limit=1
)
everything
everything.type
everything.tolist()
```
### Combinatorics
With this much structure—variable-length lists and records—we can compute particle combinatorics in an array-oriented way.
Suppose we have two arrays of lists with _different_ lengths in each list.
```
numbers = ak.Array([[0, 1, 2], [], [3, 4], [5]])
letters = ak.Array([["a", "b"], ["c"], ["d"], ["e", "f"]])
ak.num(numbers), ak.num(letters)
```
Now suppose that we're interested in all _pairs_ of letters and numbers, with one letter and one number in each pair.
However, we want to do this per list: all pairs in `numbers[0]` and `letters[0]`, followed by all pairs in `numbers[1]` and `letters[1]`, etc.
If the nested lists represent particles in events, what this means is that we do not want to mix data from one event with data from another event: the usual case in particle physics.
Awkward Array has a function that does that: [ak.cartesian](https://awkward-array.readthedocs.io/en/latest/_auto/ak.cartesian.html).
<img src="fig/cartoon-cartesian.svg" width="300px">
```
pairs = ak.cartesian((numbers, letters))
pairs
pairs.type
pairs.tolist()
```
Very often, we want to create these (per-event) Cartesian products so that we can use both halves in a formula.
The left-sides and right-sides of these 2-tuples are the sort of thing that could have been constructed with [ak.zip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.zip.html), so they can be deconstructed with [ak.unzip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.unzip.html).
```
lefts, rights = ak.unzip(pairs)
lefts, rights
```
Note that `lefts` and `rights` do not have the list lengths of either `numbers` or `letters`, but they have the same lengths as each other because they came from the same array of 2-tuples.
```
ak.num(lefts), ak.num(rights)
```
The Cartesian product is equivalent to this C++ `for` loop:
```c++
for (int i = 0; i < numbers.size(); i++) {
    for (int j = 0; j < letters.size(); j++) {
        // compute formula with numbers[i] and letters[j]
    }
}
```
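In pure Python, the same per-event Cartesian product can be sketched with `itertools.product` applied list by list (for comparison only; `ak.cartesian` does this in one array-oriented step):

```python
import itertools

numbers = [[0, 1, 2], [], [3, 4], [5]]
letters = [["a", "b"], ["c"], ["d"], ["e", "f"]]

# all (number, letter) pairs, formed separately within each "event"
pairs = [list(itertools.product(ns, ls)) for ns, ls in zip(numbers, letters)]
```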
[ak.cartesian](https://awkward-array.readthedocs.io/en/latest/_auto/ak.cartesian.html) is often used to search for particles that decay into two different types of daughters, such as $\Lambda \to p \, \pi^-$ where the protons $p$ are all in one array and the pions $\pi^-$ are in another array, having been selected by particle ID.
It can also be used for track-shower matching, in which tracks and showers are the two collections.
Sometimes, however, you have a single collection and want to find all pairs within it without repetition, equivalent to this C++ `for` loop:
```c++
for (int i = 0; i < numbers.size(); i++) {
    for (int j = i + 1; j < numbers.size(); j++) {
        // compute formula with numbers[i] and numbers[j]
    }
}
```
For that, there's [ak.combinations](https://awkward-array.readthedocs.io/en/latest/_auto/ak.combinations.html).
<img src="fig/cartoon-combinations.svg" width="300px">
```
numbers
pairs = ak.combinations(numbers, 2)
pairs
pairs.type
pairs.tolist()
lefts, rights = ak.unzip(pairs)
lefts, rights
```
## 5. Vector, hist, Particle...
<img src="fig/vector-logo.svg" style="height: 60px; margin-bottom: 40px"> <img src="fig/hist-logo.svg" style="height: 200px"> <img src="fig/particle-logo.svg" style="height: 120px; margin-bottom: 30px">
There are a lot more libraries, but these are the ones used in the exercises.
### Vector
Vector performs 2D, 3D, and 4D (Lorentz) vector calculations on a variety of backends.
One of these is Python itself: `vector.obj` makes a single vector as a Python object.
```
import vector
v = vector.obj(px=1.1, py=2.2, pz=3.3, E=4.4)
v
v.pt
v.boostZ(0.999)
v.to_rhophieta()
```
But we're primarily interested in _arrays of vectors_ (for array-oriented programming).
Vector can make relativity-aware subclasses of NumPy arrays. Any [structured array](https://numpy.org/doc/stable/user/basics.rec.html) with field names that can be recognized as coordinates can be cast as an array of vectors.
```
vec_numpy = (
np.random.normal(10, 1, (1000000, 4)) # a bunch of random numbers
.view([("px", float), ("py", float), ("pz", float), ("M", float)]) # label the fields with coordinates
.view(vector.MomentumNumpy4D) # cast as an array of vectors
)
vec_numpy
```
Now all of the geometry and Lorentz-vector methods apply elementwise to the whole array.
```
# compute deltaR between each vector and the one after it
vec_numpy[1:].deltaR(vec_numpy[:-1])
```
Vector can also make relativity-aware Awkward Arrays.
Any records with coordinate-named fields and `"Momentum4D"` as the record name will have Lorentz-vector methods, provided that `vector.register_awkward()` has been called.
```
vector.register_awkward()
# get momenta of tracks from the PicoDST file
px, py, pz = picodst.arrays(filter_name="*mGMomentum[XYZ]", entry_stop=100, how=tuple)
vec_awkward = ak.zip({"px": px, "py": py, "pz": pz, "M": 0}, with_name="Momentum4D")
vec_awkward
```
Now we can use Awkward Array's irregular slicing with Lorentz-vector methods.
```
# compute deltaR between each vector and the one after it WITHIN each event
vec_awkward[:, 1:].deltaR(vec_awkward[:, :-1])
```
### Hist
The hist library fills N-dimensional histograms with [advanced slicing](https://uhi.readthedocs.io/en/latest/indexing.html#examples) and [plotting](https://hist.readthedocs.io/en/latest/user-guide/notebooks/Plots.html).
```
import hist
```
Instead of predefined 1D, 2D, and 3D histogram classes, hist builds a histogram from a series of [axes of various types](https://hist.readthedocs.io/en/latest/user-guide/axes.html).
```
vertexhist = hist.Hist(
hist.axis.Regular(300, -1, 1, label="x"),
hist.axis.Regular(300, -1, 1, label="y"),
hist.axis.Regular(20, -200, 200, label="z"),
hist.axis.Variable([-1030000, -500000, 500000] + list(range(1000000, 1030000, 2000)), label="ranking"),
)
```
The `fill` method accepts arrays. ([ak.flatten](https://awkward-array.readthedocs.io/en/latest/_auto/ak.flatten.html) reduces the nested lists into one-dimensional arrays.)
```
# get primary vertex positions from the PicoDST file
vertex_data = picodst.arrays(filter_name=["*mPrimaryVertex[XYZ]", "*mRanking"])
vertexhist.fill(
ak.flatten(vertex_data["Event.mPrimaryVertexX"]),
ak.flatten(vertex_data["Event.mPrimaryVertexY"]),
ak.flatten(vertex_data["Event.mPrimaryVertexZ"]),
ak.flatten(vertex_data["Event.mRanking"]),
)
```
Now the binned data can be sliced in some dimensions and summed over in others.
```
vertexhist[:, :, sum, sum].plot2d_full();
```
`bh.loc` converts a coordinate value into the corresponding bin index. You can use that to zoom in.
```
import boost_histogram as bh
vertexhist[bh.loc(-0.25):bh.loc(0.25), bh.loc(-0.25):bh.loc(0.25), sum, sum].plot2d_full();
vertexhist[sum, sum, :, sum].plot();
```
The `mRanking` quantity is irregularly binned because it has a lot of detail above 1000000, but featureless peaks at 0 and -1000000.
```
vertexhist[sum, sum, sum, :].plot();
vertexhist[sum, sum, sum, bh.loc(1000000):].plot();
```
With a large enough (or cleverly binned) histogram, you can do an exploratory analysis in aggregated data.
Here, we investigate the shape of the $x$-$y$ primary vertex projection for `mRanking` above and below 1000000.
The feasibility of this analysis depends on available memory and the number of bins, but _not_ the number of events.
```
vertexhist[bh.loc(-0.25):bh.loc(0.25), bh.loc(-0.25):bh.loc(0.25), sum, bh.loc(1000000)::sum].plot2d_full();
vertexhist[bh.loc(-0.25):bh.loc(0.25), bh.loc(-0.25):bh.loc(0.25), sum, :bh.loc(1000000):sum].plot2d_full();
```
### Particle
Particle is like a searchable Particle Data Group (PDG) booklet in Python.
```
import particle
[particle.Particle.from_string("p~")]
particle.Particle.from_string("p~")
from hepunits import GeV
z_boson = particle.Particle.from_string("Z0")
z_boson.mass / GeV, z_boson.width / GeV
[particle.Particle.from_pdgid(111)]
particle.Particle.findall(lambda p: p.pdgid.is_meson and p.pdgid.has_strange and p.pdgid.has_charm)
from hepunits import mm
particle.Particle.findall(lambda p: p.pdgid.is_meson and p.ctau > 1 * mm)
```
## Exercises: translating a C++ analysis into array-oriented Python
In this section, we will plot the mass of e⁺e⁻ pairs from the PicoDST file, using a C++ framework ([star-picodst-reference](star-picodst-reference)) as a guide.
The solutions are hidden. Try to solve each exercise on your own (by filling in the "`???`") before comparing with the solutions we've provided.
### C++ version of the analysis
The analysis we want to reproduce is the following:
```c++
// histogram to fill
TH1F *hM = new TH1F("hM", "e+e- invariant mass (GeV/c)", 120, 0, 120);
// get a reader and initialize it
const Char_t *inFile = "pythia_ppZee_run17emb.picoDst.root";
StPicoDstReader* picoReader = new StPicoDstReader(inFile);
picoReader->Init();
Long64_t events2read = picoReader->chain()->GetEntries();
// loop over events
for (Long64_t iEvent = 0; iEvent < events2read; iEvent++) {
    Bool_t readEvent = picoReader->readPicoEvent(iEvent);
    StPicoDst *dst = picoReader->picoDst();
    Int_t nTracks = dst->numberOfTracks();
    // for collecting good tracks
    std::vector<StPicoTrack*> goodTracks;
    // loop over tracks
    for (Int_t iTrk = 0; iTrk < nTracks; iTrk++) {
        StPicoTrack *picoTrack = dst->track(iTrk);
        // track quality cuts (note the floating-point division)
        if (!picoTrack->isPrimary()) continue;
        if ((Double_t)picoTrack->nHitsFit() / picoTrack->nHitsMax() < 0.2) continue;
        // track -> associated electromagnetic calorimeter energy
        if (picoTrack->isBemcTrack()) {
            StPicoBEmcPidTraits *trait = dst->bemcPidTraits(
                picoTrack->bemcPidTraitsIndex()
            );
            // matched energy cut
            double pOverE = picoTrack->pMom().Mag() / trait->btowE();
            if (pOverE < 0.1) continue;
            // this is a good track
            goodTracks.push_back(picoTrack);
        }
    }
    // loop over good pairs with opposite-sign charge and fill the invariant mass plot
    for (UInt_t i = 0; i < goodTracks.size(); i++) {
        for (UInt_t j = i + 1; j < goodTracks.size(); j++) {
            // make Lorentz vectors with electron mass
            TLorentzVector one, two;
            one.SetVectM(goodTracks[i]->pMom(), 0.0005109989461);
            two.SetVectM(goodTracks[j]->pMom(), 0.0005109989461);
            // opposite-sign charge cut
            if (goodTracks[i]->charge() != goodTracks[j]->charge()) {
                // fill the histogram
                hM->Fill((one + two).M());
            }
        }
    }
}
```
### Reading the data
As before, we start by reading the file.
```
import uproot
import awkward as ak
import numpy as np
picodst = uproot.open("https://pivarski-princeton.s3.amazonaws.com/pythia_ppZee_run17emb.picoDst.root:PicoDst")
picodst
```
By examining the C++ code, we see that we need to compute
```c++
StPicoTrack::isPrimary
StPicoTrack::nHitsFit
StPicoTrack::nHitsMax
StPicoTrack::isBemcTrack
StPicoTrack::bemcPidTraitsIndex
StPicoTrack::pMom
StPicoBEmcPidTraits::btowE
```
From [star-picodst-reference/StPicoTrack.h](star-picodst-reference/StPicoTrack.h) and [star-picodst-reference/StPicoBEmcPidTraits.h](star-picodst-reference/StPicoBEmcPidTraits.h), we learn that these are derived from the following TBranches:
* `mPMomentumX`
* `mPMomentumY`
* `mPMomentumZ`
* `mNHitsFit`
* `mNHitsMax`
* `mBEmcPidTraitsIndex`
* `mBtowE`
#### **Exercise 1:** Extract these TBranches as arrays, using the same names as variable names.
**Hint:** To avoid long download times while you experiment, set `entry_stop=10` in your calls to [TTree.arrays](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TTree.TTree.html#arrays) or [TBranch.array](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.TBranch.html#array). Just be sure to remove it to get all entries in the end.
From the location of Binder's servers on the Internet, it takes about 1 minute to read. If it's taking 2 or more minutes, you're probably downloading more than you intended.
```
mPMomentumX = ???
mPMomentumY = ???
mPMomentumZ = ???
mNHitsFit = ???
mNHitsMax = ???
mBEmcPidTraitsIndex = ???
mBtowE = ???
```
The types of these arrays should be
```
(
mPMomentumX.type,
mPMomentumY.type,
mPMomentumZ.type,
mNHitsFit.type,
mNHitsMax.type,
mBEmcPidTraitsIndex.type,
mBtowE.type,
)
```
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
There are several ways to get these data; here are two.
**(1)** You could navigate to each TBranch and ask for its `array`.
```python
mPMomentumX = picodst["Track.mPMomentumX"].array()
mPMomentumY = picodst["Track.mPMomentumY"].array()
mPMomentumZ = picodst["Track.mPMomentumZ"].array()
mNHitsFit = picodst["Track.mNHitsFit"].array()
mNHitsMax = picodst["Track.mNHitsMax"].array()
mBEmcPidTraitsIndex = picodst["Track.mBEmcPidTraitsIndex"].array()
mBtowE = picodst["EmcPidTraits.mBtowE"].array() / 1000
```
<br>
**(2)** You could ask the TTree for its `arrays`, with a filter to keep from reading everything over the network, then extract each field of the resulting record array. (There's a slight performance advantage to this method, since it only has to make 1 request across the network, rather than 7, _if_ you filter the TBranches to read. If you don't, _it will read all branches_, which will take much longer.)
```python
single_array = picodst.arrays(filter_name=[
"Track.mPMomentum[XYZ]",
"Track.mNHits*",
"Track.mBEmcPidTraitsIndex",
"EmcPidTraits.mBtowE",
])
mPMomentumX = single_array["Track.mPMomentumX"]
mPMomentumY = single_array["Track.mPMomentumY"]
mPMomentumZ = single_array["Track.mPMomentumZ"]
mNHitsFit = single_array["Track.mNHitsFit"]
mNHitsMax = single_array["Track.mNHitsMax"]
mBEmcPidTraitsIndex = single_array["Track.mBEmcPidTraitsIndex"]
mBtowE = single_array["EmcPidTraits.mBtowE"] / 1000
```
<br>
Either way, be sure to divide the `mBtowE` branch by 1000, matching the units used in the C++ code.
</details>
### Making momentum objects with charges
The C++ code uses ROOT [TVector3](https://root.cern.ch/doc/master/classTVector3.html) and [TLorentzVector](https://root.cern.ch/doc/master/classTLorentzVector.html) objects for vector calculations. We'll use the (array-oriented) Vector library.
The definitions of `pMom` and `charge` in [star-picodst-reference/StPicoTrack.h](star-picodst-reference/StPicoTrack.h) are
```c++
TVector3 pMom() const { return TVector3(mPMomentumX, mPMomentumY, mPMomentumZ); }
Short_t charge() const { return (mNHitsFit > 0) ? 1 : -1; }
```
(Yes, the `charge` bit is hidden inside the `mNHitsFit` integer.)
```
import vector
vector.register_awkward()
import particle, hepunits
electron_mass = particle.Particle.find("e-").mass / hepunits.GeV
electron_mass
```
#### **Exercise 2a:** First, make an array of `charge` as positive and negative `1` integers. You may use [ak.where](https://awkward-array.readthedocs.io/en/latest/_auto/ak.where.html) or clever arithmetic.
```
charge = ???
```
The type should be:
```
charge.type
```
And the first and last values should be:
```
charge
```
#### **Exercise 2b:** Next, use [ak.zip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.zip.html) to combine `mPMomentumX`, `mPMomentumY`, `mPMomentumZ`, `electron_mass`, and `charge` into a single array of type
```python
8004 * var * {"px": float32, "py": float32, "pz": float32, "M": float64, "charge": int64}
```
It is very important that the type is lists of records (`var * {"px": float32, ...}`), not records of lists (`{"px": var * float32, ...}`).
```
record_array = ???
```
The second record in the first event should be:
```
record_array[0, 1].tolist()
```
#### **Exercise 2c:** Finally, search Awkward Array's [reference documentation](https://awkward-array.readthedocs.io/) for a way to add the `"Momentum4D"` name to these records to turn them into Lorentz vectors.
```
pMom = ???
```
The type of `pMom` should be:
```
pMom.type
```
Lorentz vector operations don't require the `"charge"`, but it will be convenient to keep that in the same package. The Vector library will ignore it.
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
The charge can be computed using:
```python
charge = ak.where(mNHitsFit > 0, 1, -1)
```
<br>
or "clever arithmetic" (booleans in a numerical expression become `False → 0`, `True → 1`):
```python
charge = (mNHitsFit > 0) * 2 - 1
```
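As a quick sanity check of the "clever arithmetic" with stand-in values (not the real data):

```python
import numpy as np

# stand-in values for mNHitsFit (not the real branch)
mNHitsFit = np.array([12, -7, 3, -1])

# True -> 1, False -> 0, so (x > 0)*2 - 1 maps positive -> +1, negative -> -1
charge = (mNHitsFit > 0) * 2 - 1
```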
<br>
Making the record array is a direct application of [ak.zip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.zip.html):
```python
record_array = ak.zip(
{"px": mPMomentumX, "py": mPMomentumY, "pz": mPMomentumZ, "M": electron_mass, "charge": charge}
)
```
<br>
Combining the variable-length lists of `mPMomentumX`, `mPMomentumY`, `mPMomentumZ`, and `charge` is just what [ak.zip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.zip.html) does (and if those lists had different lengths, it would raise an error). Using `depth_limit=1` or the [ak.Array](https://awkward-array.readthedocs.io/en/latest/_auto/ak.Array.html) constructor, by contrast, would produce the wrong type: records of lists, rather than lists of records.
Also, the constant `electron_mass` does not need special handling. Constants and lower-dimension arrays are [broadcasted](https://awkward-array.readthedocs.io/en/latest/_auto/ak.broadcast_arrays.html) to the same shape as larger-dimension arrays when used in the same function. (This is similar to, but an extension of, NumPy's [concept of broadcasting](https://numpy.org/doc/stable/user/basics.broadcasting.html).)
Finally, to add the `"Momentum4D"` name to all the records, you could use [ak.zip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.zip.html) again, as it has a `with_name` argument:
```python
pMom = ak.zip(
{"px": mPMomentumX, "py": mPMomentumY, "pz": mPMomentumZ, "M": electron_mass, "charge": charge},
with_name="Momentum4D",
)
```
<br>
Or pass the already-built `record_array` into [ak.with_name](https://awkward-array.readthedocs.io/en/latest/_auto/ak.with_name.html):
```python
pMom = ak.with_name(record_array, "Momentum4D")
```
<br>
Or pass the already-built `record_array` into the [ak.Array](https://awkward-array.readthedocs.io/en/latest/_auto/ak.Array.html) constructor with a `with_name` argument:
```python
pMom = ak.Array(record_array, with_name="Momentum4D")
```
</details>
### Computing track cuts
In the C++, the following cuts are applied to the tracks:
```c++
if (!picoTrack->isPrimary()) continue;
if ((Double_t)picoTrack->nHitsFit() / picoTrack->nHitsMax() < 0.2) continue;
if (picoTrack->isBemcTrack()) {
    // ...
}
```
Some of the cuts in C++ are applied by jumping to the next loop iteration with `continue` (a dangerous practice, in my opinion) while another is in a nested `if` statement. Note that the `continue` conditions describe the _opposite_ of a good track.
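When translating `continue` conditions into a boolean mask, De Morgan's laws are the key step; a quick check with stand-in arrays (not the real cuts):

```python
import numpy as np

skip_a = np.array([True, True, False, False])   # e.g. "skip: not primary"
skip_b = np.array([True, False, True, False])   # e.g. "skip: ratio too small"

# "skip if a; skip if b" inverts to "keep if (not a) and (not b)",
# which by De Morgan's law equals "not (a or b)"
keep = ~skip_a & ~skip_b
```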
The quantities used in the cuts are defined in [star-picodst-reference/StPicoTrack.h](star-picodst-reference/StPicoTrack.h):
```c++
Bool_t isPrimary() const { return ( pMom().Mag()>0 ); }
TVector3 pMom() const { return TVector3(mPMomentumX, mPMomentumY, mPMomentumZ); }
Int_t nHitsFit() const { return (mNHitsFit > 0) ? (Int_t)mNHitsFit : (Int_t)(-1 * mNHitsFit); }
Int_t nHitsMax() const { return (Int_t)mNHitsMax; }
Bool_t isBemcTrack() const { return (mBEmcPidTraitsIndex<0) ? false : true; }
```
#### **Exercise 3:** Convert these cuts into a [boolean array slice](https://awkward-array.readthedocs.io/en/latest/_auto/ak.Array.html#filtering).
```
isPrimary = ???
nHitsFit = ???
nHitsMax = ???
isBemcTrack = ???
track_quality_cuts = ???
```
The type of `track_quality_cuts` should be:
```
track_quality_cuts.type
```
And the number of passing tracks in the first and last events should be:
```
np.count_nonzero(track_quality_cuts, axis=1)
```
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
There are several equivalent ways to compute `isPrimary`:
```python
isPrimary = pMom.mag > 0
```
<br>
and
```python
isPrimary = (abs(mPMomentumX) > 0) & (abs(mPMomentumY) > 0) & (abs(mPMomentumZ) > 0)
```
<br>
and
```python
isPrimary = mPMomentumX**2 + mPMomentumY**2 + mPMomentumZ**2 > 0 # or with np.sqrt
```
<br>
The most straightforward way to compute `nHitsFit` is:
```python
nHitsFit = abs(mNHitsFit)
```
<br>
but you could use [ak.where](https://awkward-array.readthedocs.io/en/latest/_auto/ak.where.html)/[np.where](https://numpy.org/doc/stable/reference/generated/numpy.where.html) to make it look more like the C++:
```python
nHitsFit = np.where(mNHitsFit > 0, mNHitsFit, -1 * mNHitsFit)
```
<br>
`nHitsMax` is exactly equal to `mNHitsMax`, and `isBemcTrack` is:
```python
isBemcTrack = mBEmcPidTraitsIndex >= 0 # be sure to get the inequality right
```
<br>
or
```python
isBemcTrack = np.where(mBEmcPidTraitsIndex < 0, False, True)
```
<br>
to make it look more like the C++.
Finally, `track_quality_cuts` is a logical-AND of three selections:
```python
track_quality_cuts = isPrimary & (nHitsFit / nHitsMax >= 0.2) & isBemcTrack
```
<br>
Be sure to get the inequality right: `continue` _throws away_ bad tracks, but we want an expression that will _keep_ good tracks.
</details>
### Matching tracks to electromagnetic showers
The final track quality cut requires us to match the track with its corresponding shower. Tracks and showers have different multiplicities.
```
ak.num(mPMomentumX), ak.num(mBtowE)
```
The PicoDst file provides us with an index for each track that is the position of the corresponding shower in the showers array. It is `-1` when there is no corresponding shower.
```
mBEmcPidTraitsIndex
```
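One caution about that `-1` sentinel: in NumPy-style integer indexing, which Awkward follows, `-1` means "the last element", so using the raw index array before removing the sentinels would silently pick a wrong shower. A small NumPy sketch with hypothetical values:

```python
import numpy as np

showers = np.array([10.0, 20.0, 30.0])   # hypothetical shower energies
index = np.array([2, -1, 0])             # -1 means "no matching shower"

# Naive indexing treats -1 as "the last element" (a silent bug):
print(showers[index])                    # [30. 30. 10.], the middle value is wrong

# Filter out the sentinel first, as the exercise requires:
good = index >= 0
print(showers[index[good]])              # [30. 10.]
```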
#### **Exercise 4:** Filter `mBEmcPidTraitsIndex` with `track_quality_cuts` and make an array of shower energy `mBtowE` for each quality track.
```
quality_mBtowE = ???
```
The type of `quality_mBtowE` should be:
```
quality_mBtowE.type
```
Its first and last values should be:
```
quality_mBtowE
```
And it should have as many values in each event as there are "`true`" booleans in `track_quality_cuts`:
```
ak.num(quality_mBtowE)
np.all(ak.num(quality_mBtowE) == np.count_nonzero(track_quality_cuts, axis=1))
```
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
The answer could be written in one line:
```python
quality_mBtowE = mBtowE[mBEmcPidTraitsIndex[track_quality_cuts]]
```
<br>
The first part, `mBEmcPidTraitsIndex[track_quality_cuts]`, applies the track quality cuts to the `mBEmcPidTraitsIndex` so that there are no more `-1` values in it. The remaining array of lists of integers is exactly what is required to pick energy values from `mBtowE` in lists of the right lengths and orders.
Naturally, you could write it in two lines (or as many as you find easy to read).
If you know about [ak.mask](https://awkward-array.readthedocs.io/en/latest/_auto/ak.mask.html), you might have tried masking `mBEmcPidTraitsIndex` instead of filtering it:
```python
quality_mBEmcPidTraitsIndex = mBEmcPidTraitsIndex.mask[track_quality_cuts]
quality_mBtowE = mBtowE[quality_mBEmcPidTraitsIndex]
```
<br>
Instead of changing the lengths of the lists by dropping bad tracks, this would replace them with missing value placeholders ("`None`"). _This is not wrong,_ and it's a good alternative to the overall problem because it simplifies the process of filtering filtered data. (The placeholders keep the arrays the same lengths, so cuts can be applied in any order.)
However, it changes how the next step would have to be handled, and you'd eventually have to use [ak.is_none](https://awkward-array.readthedocs.io/en/latest/_auto/ak.is_none.html) to remove the missing values. For the sake of this walkthrough, to keep everyone on the same page, let's not do that.
</details>
### Applying the energy cut and making track-pairs
The last quality cut requires total track momentum divided by shower energy to be at least 0.1.
We can get the total track momentum from `pMom.mag` (3D magnitude of 3D or 4D vectors), but apply the quality cuts to it so that it has the same length as `quality_mBtowE` (which already has quality cuts applied).
```
pMom.mag
quality_total_momentum = pMom[track_quality_cuts].mag
quality_total_momentum
quality_pOverE = quality_total_momentum / quality_mBtowE
quality_pOverE
```
(You may see a warning when calculating the above; some values of the denominator are zero. It's possible to selectively suppress such messages with NumPy's [np.errstate](https://numpy.org/doc/stable/reference/generated/numpy.errstate.html).)
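A minimal NumPy sketch of that suppression, with hypothetical values that include a zero denominator:

```python
import numpy as np

momentum = np.array([1.0, 2.0, 3.0])
energy = np.array([2.0, 0.0, 6.0])   # hypothetical values; one zero denominator

# Inside this context manager, the divide-by-zero RuntimeWarning is silenced
with np.errstate(divide="ignore", invalid="ignore"):
    p_over_e = momentum / energy

print(p_over_e)  # [0.5 inf 0.5]
```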
```
quality_pOverE_cut = (quality_pOverE >= 0.1)
np.count_nonzero(quality_pOverE_cut, axis=1)
```
An array with all cuts applied, the equivalent of `goodTracks` in the C++, is:
```
goodTracks = pMom[track_quality_cuts][quality_pOverE_cut]
goodTracks
```
(As mentioned in one of the solutions, above, [ak.mask](https://awkward-array.readthedocs.io/en/latest/_auto/ak.mask.html) would allow `track_quality_cuts` and `quality_pOverE_cut` to be applied in either order, at the expense of having to remove the placeholder "`None`" values with [ak.is_none](https://awkward-array.readthedocs.io/en/latest/_auto/ak.is_none.html). Extra credit if you can rework all of the above to use this technique.)
#### **Exercise 5a:** Use [ak.combinations](https://awkward-array.readthedocs.io/en/latest/_auto/ak.combinations.html) to make all pairs of good tracks, per event, the equivalent of this code:
```c++
for (UInt_t i = 0; i < goodTracks.size(); i++) {
for (UInt_t j = i + 1; j < goodTracks.size(); j++) {
// make Lorentz vectors with electron mass
TLorentzVector one(goodTracks[i].pMom(), 0.0005109989461);
TLorentzVector two(goodTracks[j].pMom(), 0.0005109989461);
```
```
pairs = ???
```
The type of `pairs` should be lists of 2-tuples of Momentum4D:
```
pairs.type
```
And the number of such pairs in the first and last events should be:
```
ak.num(pairs)
```
Note that this is not the same as the number of good tracks:
```
ak.num(goodTracks)
```
In particular, 3 good tracks → 3 pairs, 2 good tracks → 1 pair, and 4 good tracks → 6 pairs ($n$ choose $2$ = $n(n - 1)/2$ for $n$ good tracks).
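The pair count is easy to verify with Python's own `itertools.combinations`, which does per list what `ak.combinations` does for every event at once:

```python
from itertools import combinations

# n choose 2 = n*(n-1)/2 distinct unordered pairs
for n in (2, 3, 4):
    pairs = list(combinations(range(n), 2))
    print(n, len(pairs), n * (n - 1) // 2)
# 2 -> 1 pair, 3 -> 3 pairs, 4 -> 6 pairs
```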
#### **Exercise 5b:** Search Awkward Array's [reference documentation](https://awkward-array.readthedocs.io/) for a way to get an array named `one` with the first of each pair and an array named `two` with the second of each pair, as two arrays with equal-length lists.
```
one, two = ???
```
The types of `one` and `two` should be:
```
one.type, two.type
```
And the lengths of their lists should be the same as `ak.num(goodTracks)` (above).
```
ak.num(one), ak.num(two)
```
**Hint:** Remember how we _combined_ arrays of lists of the same lengths into `record_array`? This is the opposite of that.
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
The first step is a direct application of the [ak.combinations](https://awkward-array.readthedocs.io/en/latest/_auto/ak.combinations.html) function:
```python
pairs = ak.combinations(goodTracks, 2)
```
<br>
The default `axis` is `axis=1`, which means to find all combinations in each entry. (Not all combinations of entries, which would be `axis=0`!)
The second step could be a direct application of [ak.unzip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.unzip.html):
```python
one, two = ak.unzip(pairs)
```
<br>
But tuples, like the 2-tuples in these `pairs`, are just records with unnamed fields. We can extract record fields with string-valued slices, and tuple fields are named by position, so the slices that extract the first and second tuple fields are `"0"` (a string!) and `"1"` (a string!).
```python
one, two = pairs["0"], pairs["1"]
```
<br>
That's prone to misunderstanding: the numbers really must be inside strings. Perhaps a safer way to do it is:
```python
one, two = pairs.slot0, pairs.slot1
```
<br>
which works up to `slot9`. Whereas these methods extract one tuple-field at a time, [ak.unzip](https://awkward-array.readthedocs.io/en/latest/_auto/ak.unzip.html) extracts all fields (of any tuple _or_ record).
</details>
### Selecting opposite-sign charges among those pairs
The opposite-sign charge cut is not a track quality cut, since it depends on a relationship between two tracks.
Now we have arrays `one` and `two` representing the left and right halves of those pairs, and we can define and apply the cut.
#### **Exercise 6a:** Make an array of booleans that are `true` for opposite-sign charges and `false` for same-sign charges.
```
opposite_charge_cut = ???
```
The type should be:
```
opposite_charge_cut.type
```
And the number of `true` values in the first and last events should be:
```
ak.count_nonzero(opposite_charge_cut, axis=1)
```
#### **Exercise 6b:** Apply that cut to `one` and `two`.
```
quality_one = ???
quality_two = ???
```
The types should remain:
```
quality_one.type, quality_two.type
```
And the lengths of the lists should become:
```
ak.num(quality_one), ak.num(quality_two)
```
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
I've seen three different ways people calculate opposite-sign charges. I think this is the simplest:
```python
opposite_charge_cut = one.charge != two.charge
```
<br>
This one is also intuitive, since the Z boson that decays to two electrons has net zero charge:
```python
opposite_charge_cut = one.charge + two.charge == 0
```
<br>
This one is odd, but I see it quite a lot:
```python
opposite_charge_cut = one.charge * two.charge == -1
```
<br>
As for applying the cut, the pattern should be getting familiar:
```python
quality_one = one[opposite_charge_cut]
quality_two = two[opposite_charge_cut]
```
</details>
### Computing invariant mass of the track pairs
Up to this point, the only Lorentz vector method that we used was `mag`. Now we want to add the left and right halves of each pair and compute their invariant mass.
#### **Exercise 7:** Check the [Vector documentation](https://vector.readthedocs.io/en/latest/usage/intro.html) and figure out how to do that.
```
invariant_mass = ???
```
The type should be:
```
invariant_mass.type
```
The first and last values should be:
```
invariant_mass
```
And the lengths of each list in the first and last events should be:
```
ak.num(invariant_mass)
```
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
It could look (almost) exactly like the C++:
```python
invariant_mass = (quality_one + quality_two).M
```
<br>
But I prefer:
```python
invariant_mass = (quality_one + quality_two).mass
```
<br>
The Vector library has only one way to "spell" this quantity for purely geometric vectors, "`tau`" (for proper time), but when vectors are labeled as "Momentum", they get synonyms: "`mass`", "`M`", "`m`".
It's worth noting that `(quality_one + quality_two)` is a new array of vectors, and therefore the fields Vector doesn't recognize are lost. The type of `(quality_one + quality_two)` is:
```
8004 * var * Momentum4D["x": float32, "y": float32, "z": float32, "tau": float64]
```
<br>
with no `"charge"`. Vector does not add the charges because adding would not be the correct thing to do with any unrecognized field. (You might have named it `"q"` or `"Q"`.) Of course, you can add it yourself:
```python
quality_one.charge + quality_two.charge
```
<br>
and insert it into a new object. This works, for instance:
```python
z_bosons = (quality_one + quality_two)
z_bosons["charge"] = quality_one.charge + quality_two.charge
```
<br>
and `z_bosons` has type
```
8004 * var * Momentum4D["x": float32, "y": float32, "z": float32, "tau": float64, "charge": int64]
```
</details>
### Plotting the invariant mass
Constructing a histogram, which was the first step in C++:
```c++
TH1F *hM = new TH1F("hM", "e+e- invariant mass (GeV/c)", 120, 0, 120);
```
is the last step here.
```
import hist
```
#### **Exercise 8:** Check the [hist documentation](https://hist.readthedocs.io/en/latest/user-guide/quickstart.html) and define a one-dimensional histogram with 120 regularly-spaced bins from 0 to 120 GeV. Then fill it with the `invariant_mass` data.
**Hint:** The `invariant_mass` array contains _lists_ of numbers, but histograms present a distribution of _numbers_. Search Awkward Array's [reference documentation](https://awkward-array.readthedocs.io/) for a way to flatten these lists into a one-dimensional array, and experiment with that step _before_ attempting to fill the histogram. (The error messages will be easier to understand.)
```
flat_invariant_mass = ???
hM = ???
hM.fill(???)
```
The flattened invariant mass should look like this:
```
flat_invariant_mass
```
Note that the type does not have any "`var`" in it.
Whenever a `hist.Hist` is the return value of an expression in Jupyter (such as after the `fill`), you'll see a mini-plot to aid in interactive analysis. But professional-quality plots are made through Matplotlib:
```
hM.plot();
```
By importing Matplotlib, we can configure the plot, mix it with other plots, tweak how it looks, etc.
```
import matplotlib.pyplot as plt
hM.plot()
plt.yscale("log")
```
**Physics note:** The broad peak at 85 GeV _is_ the Z boson (in this Monte Carlo sample). It's offset from the 91 GeV Z mass and has a width of 14 GeV due to tracking resolution for these high-momentum tracks (roughly 40 GeV per track).
<details style="border: dashed 1px black; padding: 10px">
<summary><b>Solutions</b> (no peeking!)</summary>
<br>
The `invariant_mass` array can be flattened with [ak.flatten](https://awkward-array.readthedocs.io/en/latest/_auto/ak.flatten.html) or [ak.ravel](https://awkward-array.readthedocs.io/en/latest/_auto/ak.ravel.html)/[np.ravel](https://numpy.org/doc/stable/reference/generated/numpy.ravel.html). The [ak.flatten](https://awkward-array.readthedocs.io/en/latest/_auto/ak.flatten.html) function only flattens one dimension (by default, `axis=1`), which is all we need in this case. "Ravel" is NumPy's spelling for "flatten all dimensions."
```python
flat_invariant_mass = ak.flatten(invariant_mass)
```
<br>
or
```python
flat_invariant_mass = np.ravel(invariant_mass)
```
<br>
The reason you have to do this manually is because it's an information-losing operation: [there are many ways](https://awkward-array.org/how-to-restructure-flatten.html) to get a dimensionless set of values from nested data, and in some circumstances, you might have wanted one of the other ones. For instance, maybe you want to ensure that you only plot one Z candidate per event, and you have some criteria for selecting the "best" one. This is where you would put that alternative.
As for constructing the histogram and filling it:
```python
hM = hist.Hist(hist.axis.Regular(120, 0, 120, label="e+e- invariant mass (GeV/c)"))
hM.fill(flat_invariant_mass)
```
<br>
Be sure to use hist's array-oriented `fill` method. Iterating over the values in the array (or even the lists within an array of lists) would be hundreds of times slower than filling it in one call.
Calling `fill` multiple times to accumulate batches, however, is fine: the important thing is to give it a large array with each call, so that most of its time can be spent in its compiled histogram-fill loop, not in Python loops.</details>
### Retrospective
(Spoilers; keep hidden until you're done.)
This might have seemed like a lot of steps to produce a simple invariant mass plot, but the intention of the exercises above was to walk you through it slowly.
A speedrun would look more like this:
```
import awkward as ak
import numpy as np
import matplotlib.pyplot as plt
import uproot
import particle
import hepunits
import hist
import vector
vector.register_awkward()
picodst = uproot.open("https://pivarski-princeton.s3.amazonaws.com/pythia_ppZee_run17emb.picoDst.root:PicoDst")
# make an array of track momentum vectors
pMom = ak.zip(dict(zip(["px", "py", "pz"], picodst.arrays(filter_name="Track.mPMomentum[XYZ]", how=tuple))), with_name="Momentum4D")
pMom["M"] = particle.Particle.find("e-").mass / hepunits.GeV
# get all the other arrays we need
mNHitsFit, mNHitsMax, mBEmcPidTraitsIndex, mBtowE = \
picodst.arrays(filter_name=["Track.mNHitsFit", "Track.mNHitsMax", "Track.mBEmcPidTraitsIndex", "EmcPidTraits.mBtowE"], how=tuple)
mBtowE = mBtowE / 1000
# add charge to the momentum vector
pMom["charge"] = (mNHitsFit > 0) * 2 - 1
# compute track quality cuts
isPrimary = pMom.mag > 0
isBemcTrack = mBEmcPidTraitsIndex >= 0
track_quality_cuts = isPrimary & (abs(mNHitsFit) / mNHitsMax >= 0.2) & isBemcTrack
# find shower energies for quality tracks
quality_mBtowE = mBtowE[mBEmcPidTraitsIndex[track_quality_cuts]]
# compute the momentum-over-energy cut (some denominators are zero)
with np.errstate(divide="ignore"):
quality_pOverE_cut = (pMom[track_quality_cuts].mag / quality_mBtowE >= 0.1)
# apply all track quality cuts, including momentum-over-energy
goodTracks = pMom[track_quality_cuts][quality_pOverE_cut]
# form pairs of good tracks and apply an opposite-sign charge constraint
pairs = ak.combinations(goodTracks, 2)
one, two = ak.unzip(pairs)
quality_one, quality_two = ak.unzip(pairs[one.charge != two.charge])
# make the plot
hM = hist.Hist(hist.axis.Regular(120, 0, 120, label="e+e- invariant mass (GeV/c)"))
hM.fill(ak.flatten((quality_one + quality_two).mass))
hM.plot()
plt.yscale("log")
```
#### **Final words about array-oriented data analysis**
The key thing about this interface is the _order_ in which you do things.
1. Scan through the TBranch names to see what you can play with.
2. Get some promising-looking arrays. If the dataset is big or remote, use `entry_stop=small_number` to fetch only as much as you need to investigate.
3. Compute _one quantity_ on the _entire array(s)_. Then look at a few of its values or plot it.
4. Decide whether that was what you wanted to compute or cut. If not, go back to 2.
5. When you've built up a final result (on a small dataset), clean up the notebook or copy it to a non-notebook script.
6. Put the computation code in an [uproot.iterate](https://uproot.readthedocs.io/en/latest/uproot.behaviors.TBranch.iterate.html) loop or a parallel process that writes and adds up histograms.
7. Parallelize, collect the histograms, beautify them, publish.
Most importantly, each computation _step_ applies to _entire_ (possibly small) datasets, so you can look at/plot what you've computed before you decide what to compute next.
Imperative code forces you to put all steps into a loop; you have to run the whole loop to see any of the results. You can still do iterative data analysis, but the turn-around time to identify and fix mistakes is longer.
#### **I'm not just the president; I'm also a client**
I experienced this first-hand (again) while preparing this tutorial. There were some things I didn't understand about STAR's detectors and I didn't believe the final result (I thought the Z peak was fake, sculpted by cuts), so I furiously plotted everything versus everything in this TTree until I came to understand that the Z peak was correct after all. (Dmitry Kalinkin helped: thanks!)
Offline, I have a ridiculously messy `Untitled.ipynb` with all those plots, mostly in the form
```python
plt.hist(ak.flatten(some_quantity), bins=100); # maybe add range=(low, high)
plt.yscale("log")
```
for brevity. Only when _I_ felt _I_ understood what was going on could I clean up all of that mess into something coherent, which is the exercises above.
# HackerMath for ML
# Introduction
Intro to Stats & Maths for Machine Learning
<br>
*Amit Kapoor*
@amitkaps
<br>
*Bargava Subramanian*
@bargava
---
> What I cannot create, I do not understand
-- Richard Feynman
---
# Philosophy of HackerMath
> Hacker literally means developing mastery over something.
-- Paul Graham
<br>
Here we will aim to learn Math essential for Data Science in this hacker way.
---
# **Three Key Questions**
- Why do you need to understand the math?
- What math knowledge do you need?
- Why approach it the hacker's way?
---
# Approach
- Understand the Math.
- Code it to learn it.
- Play with code.
---
# Module 1: Linear Algebra
## Supervised ML - Regression, Classification
- Solve $Ax = b$ for $ n \times n$
- Solve $Ax = b$ for $ n \times p + 1$
- Linear Regression
- Ridge Regularization (L2)
- Bootstrapping
- Logistic Regression (Classification)
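In the spirit of "code it to learn it", here is a minimal NumPy sketch (with made-up numbers) of the first item, solving a square $Ax = b$ system exactly:

```python
import numpy as np

# Square n x n system: exact solution via np.linalg.solve
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)
print(x)                      # [2. 3.]
assert np.allclose(A @ x, b)  # verify the solution satisfies Ax = b
```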
---
# Module 2: Statistics
## Hypothesis Testing: A/B Testing
- Basic Statistics
- Distributions
- Shuffling
- Bootstrapping & Simulation
- A/B Testing
---
# Module 3: Linear Algebra contd.
## Unsupervised ML: Dimensionality Reduction
- Solve $Ax = \lambda x$ for $ n \times n$
- Eigenvectors & Eigenvalues
- Principal Component Analysis
- Cluster Analysis (K-Means)
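The core computation of this module, $Ax = \lambda x$, can likewise be played with in a few lines of NumPy (hypothetical matrix):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column v of `eigenvectors` satisfies A @ v = lambda * v
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
print(eigenvalues)  # [2. 3.]
```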
---
# Schedule
- 0900 - 1000: Breakfast
- 1000 - 1130: Session 1
- 1130 - 1145: Tea Break
- 1145 - 1315: Session 2
- 1315 - 1400: Lunch
- 1400 - 1530: Session 3
- 1530 - 1545: Tea Break
- 1545 - 1700: Session 4
> It’s tough to make predictions, especially about the future.
-- Yogi Berra
## What is Machine Learning (ML)?
> [Machine learning is the] field of study that gives computers the ability to learn without being explicitly programmed.
-- *Arthur Samuel*
> Machine learning is the study of computer algorithms that improve automatically through experience
-- *Tom Mitchell*
## ML Problems
- “Is this cancer?”
- “What is the market value of this house?”
- “Which of these people are friends?”
- “Will this person like this movie?”
- “Who is this?”
- “What did you say?”
- “How do you fly this thing?”
## ML in use Everyday
- Search
- Photo Tagging
- Spam Filtering
- Recommendation
- ...
## Broad ML Application
- Database Mining e.g. Clickstream data, Business data
- Automating e.g. Handwriting, Natural Language Processing, Computer Vision
- Self Customising Program e.g. Recommendations
---
## ML Thought Process

## Learning Paradigm
- *Supervised* Learning
- *Unsupervised* Learning
- *Reinforcement* Learning
- *Online* Learning
## Supervised Learning
- Regression
- Classification

## Unsupervised Learning
- Clustering
- Dimensionality Reduction

## ML Pipeline
- *Frame*: Problem definition
- *Acquire*: Data ingestion
- *Refine*: Data wrangling
- *Transform*: Feature creation
- *Explore*: Feature selection
- *Model*: Model creation & assessment
- *Insight*: Communication
## Linear Regression

---
### Linear Relationship
$$ y_i = \alpha + \beta_1 x_{1i} + \beta_2 x_{2i} + \dots $$
### Objective Function
$$ \epsilon = \sum_{i=1}^n (y_i - \hat{y_i} ) ^ 2 $$
*Interactive Example: [http://setosa.io/ev/](http://setosa.io/ev/ordinary-least-squares-regression/)*
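A minimal NumPy sketch of fitting these coefficients by least squares, using made-up noise-free data so the recovered parameters are exact:

```python
import numpy as np

# Hypothetical data generated from y = 1 + 2x (no noise)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x

# Design matrix with an intercept column; lstsq minimizes the sum of squared residuals
X = np.column_stack([np.ones_like(x), x])
coef, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # [1. 2.] -> alpha = 1, beta_1 = 2
```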
## Logit Function
$$ \sigma (t)={\frac {e^{t}}{e^{t}+1}}={\frac {1}{1+e^{-t}}}$$

## Logistic Regression

## Logistic Relationship
Find the $ \beta $ parameters that best fit:
$ y=1 $ if $\beta _{0}+\beta _{1}x+\epsilon > 0$
$ y=0$, otherwise
Follows:
$$ P(x)={\frac {1}{1+e^{-(\beta _{0}+\beta _{1}x)}}} $$
---
## Fitting a Model

## Bias-Variance Tradeoff

## Train and Test Datasets
Split the Data - 80% / 20%

## Train and Test Datasets
Measure the error on Test data

## Model Complexity

## Cross Validation

## Regularization
Attempts to impose Occam's razor on the solution

## Model Evaluation
Mean Squared Error
$$ MSE = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y_i} ) ^ 2 $$
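A NumPy sketch of the MSE formula, with hypothetical values:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.0, 5.0, 9.0])   # hypothetical predictions

# Mean of the squared residuals
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # (1 + 0 + 4) / 3 = 1.666...
```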
## Model Evaluation
Confusion Matrix

## Model Evaluation
**Classification Metrics**

Recall (TPR) = TP / (TP + FN)
<br>
Precision = TP / (TP + FP)
<br>
Specificity (TNR) = TN / (TN + FP)
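These three formulas can be checked directly with hypothetical confusion-matrix counts:

```python
# Hypothetical confusion-matrix counts
TP, FP, FN, TN = 40, 10, 5, 45

recall = TP / (TP + FN)        # true positive rate
precision = TP / (TP + FP)
specificity = TN / (TN + FP)   # true negative rate

print(recall, precision, specificity)  # 0.888..., 0.8, 0.818...
```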
## Model Evaluation
**Receiver Operating Characteristic Curve**
Plot of TPR vs FPR at different discrimination threshold

---
## Decision Tree
Example: Survival on the Titanic

## Decision Tree
- Easy to interpret
- Little data preparation
- Scales well with data
- White-box model
- Instability – changing variables, altering sequence
- Overfitting
## Bagging
- Also called bootstrap aggregation, reduces variance
- Trains decision trees on bootstrap samples and averages their predictions
## Random Forest
- Combines bagging idea and random selection of features.
- Constructed similarly to decision trees, but at each split a random subset of features is used.

## Challenges
> If you torture the data enough, it will confess.
-- Ronald Coase
- Data Snooping
- Selection Bias
- Survivor Bias
- Omitted Variable Bias
- Black-box model Vs White-Box model
- Adherence to regulations
<img src="http://akhavanpour.ir/notebook/images/srttu.gif" alt="SRTTU" style="width: 150px;"/>
[](https://notebooks.azure.com/import/gh/Alireza-Akhavan/class.vision)
## Let's take a closer look at color spaces
You may have remembered we talked about images being stored in RGB (Red Green Blue) color Spaces. Let's take a look at that in OpenCV.
### First thing to remember about OpenCV's RGB is that it's BGR (I know, this is annoying)
Let's look at the image shape again; the `3` in the shape tuple is the number of color channels.
```
import cv2
import numpy as np
image = cv2.imread('./images/input.jpg')
```
### Let's look at the individual color levels for the first pixel (0,0)
```
# BGR values for the pixel at row 10, column 50
B, G, R = image[10, 50]
print (B, G, R)
print (image.shape)
```
Let's see what happens when we convert it to grayscale
```
gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
print (gray_img.shape)
print (gray_img[0, 0])
```
It's now only 2 dimensions. Each pixel coordinate has only one value (previously 3) with a range of 0 to 255
```
gray_img[0, 0]
```
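Under the hood, the conversion is a weighted sum of the three channels. A NumPy sketch, assuming the BT.601 luma weights that OpenCV documents for `COLOR_BGR2GRAY`:

```python
import numpy as np

# Hypothetical 1x2 BGR image (remember OpenCV's channel order is B, G, R)
bgr = np.array([[[255, 0, 0],      # pure blue
                 [0, 0, 255]]],    # pure red
               dtype=np.uint8)

B = bgr[:, :, 0].astype(np.float64)
G = bgr[:, :, 1].astype(np.float64)
R = bgr[:, :, 2].astype(np.float64)

# BT.601 luma weights: green contributes most, blue least
gray = np.clip(np.round(0.299 * R + 0.587 * G + 0.114 * B), 0, 255).astype(np.uint8)
print(gray)  # pure blue -> 29, pure red -> 76
```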
### Another useful color space is HSV
<ul>
<li>Hue – Color Value (0 – 179)</li>
<li>Saturation – Vibrancy of color (0-255)</li>
<li>Value – Brightness or intensity (0-255)</li>
</ul>
<img src="./lecture_images/HSV_color_solid_cylinder.png" />
In fact, HSV is very useful for color filtering.
```
#H: 0 - 180, S: 0 - 255, V: 0 - 255
image = cv2.imread('./images/input.jpg')
hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
cv2.imshow('HSV image', hsv_image)
cv2.imshow('Hue channel', hsv_image[:, :, 0])
cv2.imshow('Saturation channel', hsv_image[:, :, 1])
cv2.imshow('Value channel', hsv_image[:, :, 2])
cv2.waitKey()
cv2.destroyAllWindows()
```
### Let's now explore looking at individual channels in an RGB image
```
image = cv2.imread('./images/input.jpg')
# OpenCV's 'split' function splits the image into its color channels
B, G, R = cv2.split(image)
print (B.shape)
cv2.imshow("Red", R)
cv2.imshow("Green", G)
cv2.imshow("Blue", B)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Let's re-make the original image,
merged = cv2.merge([B, G, R])
cv2.imshow("Merged", merged)
# Let's amplify the blue color
merged = cv2.merge([B+100, G, R])
cv2.imshow("Merged with Blue Amplified", merged)
cv2.waitKey(0)
cv2.destroyAllWindows()
import cv2
import numpy as np
B, G, R = cv2.split(image)
# Let's create a matrix of zeros
# with dimensions of the image h x w
height, width, channels = image.shape
zeros = np.zeros([height,width], dtype = "uint8") # zeros = np.zeros(image.shape[:2], dtype = "uint8")
cv2.imshow("Red", cv2.merge([zeros, zeros, R]))
cv2.imshow("Green", cv2.merge([zeros, G, zeros]))
cv2.imshow("Blue", cv2.merge([B, zeros, zeros]))
cv2.waitKey(0)
cv2.destroyAllWindows()
image.shape[:2]
```
<h1 dir="ltr" style="text-align:left;">Notes on the numeric range of the color channels</h1>
<p dir="ltr" style="text-align:left;">
First, keep in mind that by default OpenCV stores color channels as uint8, and to run the functions below we need to change that type.
<br>
To amplify the image we need to add a value to every element of the array, and wherever the result exceeds 255 we substitute 255. We can do this in one of two ways:
<br>
Iterating over the array:
</p>
```
# h, w = B.shape; cast to int to avoid uint8 wrap-around before taking the minimum
for i in range(0, h):
    for j in range(0, w):
        B[i, j] = min(int(B[i, j]) + ampNumber, 255)
```
<p dir="ltr" style="text-align:left;">
Because most images are large, iterating like this is slow.
<br>
Using NumPy library functions instead:
</p>
```
B = np.clip(B.astype(np.int32) + ampNumber, None, 255)
B = B.astype(np.uint8)
```
<p dir="ltr" style="text-align:left;">
Here we change the dtype of matrix B first, then convert it back after clipping so that OpenCV does not raise an error. You can find other approaches in this <a href="https://stackoverflow.com/questions/48833374/get-minimum-of-each-matrix-element-with-a-const-number/48833464#48833464">link</a>.
</p>
#### You can view a list of color converisons here, but keep in mind you won't ever use or need many of these
http://docs.opencv.org/trunk/d7/d1b/group__imgproc__misc.html#ga4e0972be5de079fed4e3a10e24ef5ef0
<div class="alert alert-block alert-info">
<div style="direction:rtl;text-align:right;font-family:B Lotus, B Nazanin, Tahoma"> دانشگاه تربیت دبیر شهید رجایی<br>مباحث ویژه - آشنایی با بینایی کامپیوتر<br>علیرضا اخوان پور<br>96-97<br>
</div>
<a href="https://www.srttu.edu/">SRTTU.edu</a> - <a href="http://class.vision">Class.Vision</a> - <a href="http://AkhavanPour.ir">AkhavanPour.ir</a>
</div>
# Hello, TensorFlow
## A beginner-level, getting started, basic introduction to TensorFlow
TensorFlow is a general-purpose system for graph-based computation. A typical use is machine learning. In this notebook, we'll introduce the basic concepts of TensorFlow using some simple examples.
TensorFlow gets its name from [tensors](https://en.wikipedia.org/wiki/Tensor), which are arrays of arbitrary dimensionality. A vector is a 1-d array and is known as a 1st-order tensor. A matrix is a 2-d array and a 2nd-order tensor. The "flow" part of the name refers to computation flowing through a graph. Training and inference in a neural network, for example, involves the propagation of matrix computations through many nodes in a computational graph.
When you think of doing things in TensorFlow, you might want to think of creating tensors (like matrices), adding operations (that output other tensors), and then executing the computation (running the computational graph). In particular, it's important to realize that when you add an operation on tensors, it doesn't execute immediately. Rather, TensorFlow waits for you to define all the operations you want to perform. Then, TensorFlow optimizes the computation graph, deciding how to execute the computation, before generating the data. Because of this, a tensor in TensorFlow isn't so much holding the data as a placeholder for holding the data, waiting for the data to arrive when a computation is executed.
## Adding two vectors in TensorFlow
Let's start with something that should be simple. Let's add two length four vectors (two 1st-order tensors):
$\begin{bmatrix} 1. & 1. & 1. & 1.\end{bmatrix} + \begin{bmatrix} 2. & 2. & 2. & 2.\end{bmatrix} = \begin{bmatrix} 3. & 3. & 3. & 3.\end{bmatrix}$
```
import tensorflow as tf
with tf.Session():
    input1 = tf.constant([1.0, 1.0, 1.0, 1.0])
    input2 = tf.constant([2.0, 2.0, 2.0, 2.0])
    output = tf.add(input1, input2)
    result = output.eval()
    print(result)
```
What we're doing is creating two vectors, [1.0, 1.0, 1.0, 1.0] and [2.0, 2.0, 2.0, 2.0], and then adding them. Here's equivalent code in raw Python and using numpy:
```
print([x + y for x, y in zip([1.0] * 4, [2.0] * 4)])
import numpy as np
x, y = np.full(4, 1.0), np.full(4, 2.0)
print("{} + {} = {}".format(x, y, x + y))
```
## Details of adding two vectors in TensorFlow
The example above of adding two vectors involves a lot more than it seems, so let's look at it in more depth.
>`import tensorflow as tf`
This import brings TensorFlow's public API into our IPython runtime environment.
>`with tf.Session():`
When you run an operation in TensorFlow, you need to do it in the context of a `Session`. A session holds the computation graph, which contains the tensors and the operations. When you create tensors and operations, they are not executed immediately, but wait for other operations and tensors to be added to the graph, only executing when finally requested to produce the results of the session. Deferring the execution like this provides additional opportunities for parallelism and optimization, as TensorFlow can decide how to combine operations and where to run them after TensorFlow knows about all the operations.
>>`input1 = tf.constant([1.0, 1.0, 1.0, 1.0])`
>>`input2 = tf.constant([2.0, 2.0, 2.0, 2.0])`
The next two lines create tensors using a convenience function called `constant`, which is similar to numpy's `array` and numpy's `full`. If you look at the code for `constant`, you can see the details of what it is doing to create the tensor. In summary, it creates a tensor of the necessary shape and applies the constant operator to it to fill it with the provided values. The values to `constant` can be Python or numpy arrays. `constant` can take an optional shape parameter, which works similarly to numpy's `fill` if provided, and an optional name parameter, which can be used to put a more human-readable label on the operation in the TensorFlow operation graph.
>>`output = tf.add(input1, input2)`
You might think `add` just adds the two vectors now, but it doesn't quite do that. What it does is put the `add` operation into the computational graph. The results of the addition aren't available yet. They've been put in the computation graph, but the computation graph hasn't been executed yet.
>>`result = output.eval()`
>>`print(result)`
`eval()` is also slightly more complicated than it looks. Yes, it does get the value of the vector (tensor) that results from the addition. It returns this as a numpy array, which can then be printed. But, it's important to realize it also runs the computation graph at this point, because we demanded the output from the operation node of the graph; to produce that, it had to run the computation graph. So, this is the point where the addition is actually performed, not when `add` was called, as `add` just put the addition operation into the TensorFlow computation graph.
## Multiple operations
To use TensorFlow, you add operations on tensors that produce tensors to the computation graph, then execute that graph to run all those operations and calculate the values of all the tensors in the graph.
Here's a simple example with two operations:
```
import tensorflow as tf
with tf.Session():
    input1 = tf.constant(1.0, shape=[4])
    input2 = tf.constant(2.0, shape=[4])
    input3 = tf.constant(3.0, shape=[4])
    output = tf.add(tf.add(input1, input2), input3)
    result = output.eval()
    print(result)
```
This version uses `constant` in a way similar to numpy's `fill`, specifying the optional shape and having the values copied out across it.
The `add` operator supports operator overloading, so you could try writing it inline as `input1 + input2` instead as well as experimenting with other operators.
```
with tf.Session():
    input1 = tf.constant(1.0, shape=[4])
    input2 = tf.constant(2.0, shape=[4])
    output = input1 + input2
    print(output.eval())
```
## Adding two matrices
Next, let's do something very similar, adding two matrices:
$\begin{bmatrix}
1. & 1. & 1. \\
1. & 1. & 1. \\
\end{bmatrix} +
\begin{bmatrix}
1. & 2. & 3. \\
4. & 5. & 6. \\
\end{bmatrix} =
\begin{bmatrix}
2. & 3. & 4. \\
5. & 6. & 7. \\
\end{bmatrix}$
```
import tensorflow as tf
import numpy as np
with tf.Session():
    input1 = tf.constant(1.0, shape=[2, 3])
    input2 = tf.constant(np.reshape(np.arange(1.0, 7.0, dtype=np.float32), (2, 3)))
    output = tf.add(input1, input2)
    print(output.eval())
```
Recall that you can pass numpy or Python arrays into `constant`.
In this example, the matrix with values from 1 to 6 is created in numpy and passed into `constant`, but TensorFlow also has `range`, `reshape`, and `to_float` operators. Doing this entirely within TensorFlow could be more efficient if this was a very large matrix.
Try experimenting with this code a bit -- maybe modify some of the values, use the numpy version, add another operation, or build the matrix using TensorFlow's `range` function.
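For comparison, here is a numpy-only version of the same matrix addition (an illustrative sketch, independent of TensorFlow):

```python
import numpy as np

# numpy-only equivalent of the TensorFlow matrix addition above
a = np.full((2, 3), 1.0)               # like tf.constant(1.0, shape=[2, 3])
b = np.arange(1.0, 7.0).reshape(2, 3)  # values 1..6 in a 2 x 3 matrix
print(a + b)                           # [[2. 3. 4.] [5. 6. 7.]]
```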
## Multiplying matrices
Let's move on to matrix multiplication. This time, let's use a bit vector and some random values, which is a good step toward some of what we'll need to do for regression and neural networks.
```
#@test {"output": "ignore"}
import tensorflow as tf
import numpy as np
with tf.Session():
    input_features = tf.constant(np.reshape([1, 0, 0, 1], (1, 4)).astype(np.float32))
    weights = tf.constant(np.random.randn(4, 2).astype(np.float32))
    output = tf.matmul(input_features, weights)
    print("Input:")
    print(input_features.eval())
    print("Weights:")
    print(weights.eval())
    print("Output:")
    print(output.eval())
```
Above, we're taking a 1 x 4 vector [1 0 0 1] and multiplying it by a 4 by 2 matrix full of random values from a normal distribution (mean 0, stdev 1). The output is a 1 x 2 matrix.
You might try modifying this example. Running the cell multiple times will generate new random weights and a new output. Or, change the input, e.g., to \[0 0 0 1\], and run the cell again. Or, try initializing the weights using a TensorFlow op, e.g., `random_normal`, instead of using numpy to generate the random weights.
What we have here is the basics of a simple neural network already. If we read in input features along with some expected output, and adjust the weights based on the error between the expected and actual output each time, that's a neural network being trained.
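That training idea can be caricatured in a few lines of numpy (a sketch, not the TensorFlow version): compute an output from the weights, measure the error against the expected output, and nudge the weights to shrink it.

```python
import numpy as np

# Minimal caricature of training: adjust weights to reduce the error
# between the computed and expected output of x @ w.
rng = np.random.RandomState(0)
x = np.array([[1.0, 0.0, 0.0, 1.0]])   # 1 x 4 input features
target = np.array([[1.0, -1.0]])       # expected 1 x 2 output
w = rng.randn(4, 2)                    # 4 x 2 weights
for _ in range(100):
    error = target - x @ w             # how far off we are
    w += 0.1 * x.T @ error             # move weights to shrink the error
print(np.round(x @ w, 3))              # approaches [[ 1. -1.]]
```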
## Use of variables
Let's look at adding two small matrices in a loop, not by creating new tensors every time, but by updating the existing values and then re-running the computation graph on the new data. This happens a lot with machine learning models, where we change some parameters each time such as gradient descent on some weights and then perform the same computations over and over again.
```
#@test {"output": "ignore"}
import tensorflow as tf
import numpy as np
with tf.Session() as sess:
    # Set up two variables, total and weights, that we'll change repeatedly.
    total = tf.Variable(tf.zeros([1, 2]))
    weights = tf.Variable(tf.random_uniform([1, 2]))

    # Initialize the variables we defined above.
    tf.initialize_all_variables().run()

    # This only adds the operators to the graph right now. The assignment
    # and addition operations are not performed yet.
    update_weights = tf.assign(weights, tf.random_uniform([1, 2], -1.0, 1.0))
    update_total = tf.assign(total, tf.add(total, weights))

    for _ in range(5):
        # Actually run the operation graph, so randomly generate weights and then
        # add them into the total. Order does matter here. We need to update
        # the weights before updating the total.
        sess.run(update_weights)
        sess.run(update_total)
        print(weights.eval(), total.eval())
```
This is more complicated. At a high level, we create two variables and add operations over them, then, in a loop, repeatedly execute those operations. Let's walk through it step by step.
Starting off, the code creates two variables, `total` and `weights`. `total` is initialized to \[0, 0\] and `weights` is initialized to random values between -1 and 1.
Next, two assignment operators are added to the graph, one that updates weights with random values from [-1, 1], the other that updates the total with the new weights. Again, the operators are not executed here. In fact, this isn't even inside the loop. We won't execute these operations until the `sess.run` calls inside the loop.
Finally, in the for loop, we run each of the operators. In each iteration of the loop, this executes the operators we added earlier, first putting random values into the weights, then updating the totals with the new weights. This code uses `run` on the session; it could also have called `eval` on the operators (e.g. `update_weights.eval()`).
It can be a little hard to wrap your head around exactly what computation is done when. The important thing to remember is that computation is only performed on demand.
Variables can be useful in cases where you have a large amount of computation and data that you want to use over and over again with just a minor change to the input each time. That happens quite a bit with neural networks, for example, where you just want to update the weights each time you go through the batches of input data, then run the same operations over again.
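For comparison, here is what the variable example computes, written in plain numpy (a sketch of the resulting values, not TensorFlow code): draw new random weights each iteration and accumulate them into a running total.

```python
import numpy as np

# Plain-numpy analogue of the variable example: new random weights each
# iteration, accumulated into a running total.
rng = np.random.RandomState(42)
total = np.zeros((1, 2))
for _ in range(5):
    weights = rng.uniform(-1.0, 1.0, size=(1, 2))
    total += weights
    print(weights, total)
```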
## What's next?
This has been a gentle introduction to TensorFlow, focused on what TensorFlow is and the very basics of doing anything in TensorFlow. If you'd like more, the next tutorial in the series is Getting Started with TensorFlow, also available in the [notebooks directory](..).
```
x = tf.placeholder(tf.string)
a = tf.identity(x)
with tf.Session() as sess:
    output = sess.run(a, feed_dict={x: 'Hello World'})
    print(output)
x = tf.placeholder(tf.string)
y = tf.placeholder(tf.int32)
z = tf.placeholder(tf.float32)
with tf.Session() as sess:
    output = sess.run(x, feed_dict={x: 'Test String', y: 123, z: 45.67})
    print(output)
# tf.subtract(tf.constant(2.0), tf.constant(1))  # fails: mismatched dtypes
output = tf.subtract(tf.cast(tf.constant(2.0), tf.int32), tf.constant(1))
with tf.Session() as sess:
    print(sess.run(output))  # 1
import tensorflow as tf
# TODO: Convert the following to TensorFlow:
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
z = tf.subtract(tf.div(x, y), tf.constant(1.0))
# TODO: Print z from a session
with tf.Session() as sess:
    output = sess.run(z, feed_dict={x: 10.0, y: 2.0})
    print(output)  # 4.0
init = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
n_labels = 5
bias = tf.Variable(tf.zeros(n_labels))
init = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
    output = sess.run(bias)
    print(output)
n_features = 120
n_labels = 5
weights = tf.Variable(tf.truncated_normal((n_features, n_labels)))
# Solution is available in the other "quiz_solution.py" tab
import tensorflow as tf
def get_weights(n_features, n_labels):
    """
    Return TensorFlow weights
    :param n_features: Number of features
    :param n_labels: Number of labels
    :return: TensorFlow weights
    """
    # TODO: Return weights
    return tf.Variable(tf.truncated_normal((n_features, n_labels)))

def get_biases(n_labels):
    """
    Return TensorFlow bias
    :param n_labels: Number of labels
    :return: TensorFlow bias
    """
    # TODO: Return biases
    return tf.Variable(tf.zeros(n_labels))

def linear(input, w, b):
    """
    Return linear function in TensorFlow
    :param input: TensorFlow input
    :param w: TensorFlow weights
    :param b: TensorFlow biases
    :return: TensorFlow linear function
    """
    # TODO: Linear Function (xW + b)
    return tf.add(tf.matmul(input, w), b)
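# (Illustrative numpy shape check -- an assumption added here, not part of
# the quiz -- that xW + b maps (batch, n_features) to (batch, n_labels).)
import numpy as np
_x = np.ones((5, 120)); _W = np.zeros((120, 3)); _b = np.zeros(3)
print((_x @ _W + _b).shape)  # (5, 3)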
# Solution is available in the other "sandbox_solution.py" tab
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
#from quiz import get_weights, get_biases, linear
def mnist_features_labels(n_labels):
    """
    Gets the first <n> labels from the MNIST dataset
    :param n_labels: Number of labels to use
    :return: Tuple of feature list and label list
    """
    mnist_features = []
    mnist_labels = []
    mnist = input_data.read_data_sets('/datasets/ud730/mnist', one_hot=True)
    # In order to make quizzes run faster, we're only looking at 10000 images
    for mnist_feature, mnist_label in zip(*mnist.train.next_batch(10000)):
        # Add features and labels if it's for the first <n>th labels
        if mnist_label[:n_labels].any():
            mnist_features.append(mnist_feature)
            mnist_labels.append(mnist_label[:n_labels])
    return mnist_features, mnist_labels
# Number of features (28*28 image is 784 features)
n_features = 784
# Number of labels
n_labels = 3
# Features and Labels
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# Weights and Biases
w = get_weights(n_features, n_labels)
b = get_biases(n_labels)
# Linear Function xW + b
logits = linear(features, w, b)
# Training data
train_features, train_labels = mnist_features_labels(n_labels)
with tf.Session() as session:
    # TODO: Initialize session variables
    session.run(tf.initialize_all_variables())
    # Softmax
    prediction = tf.nn.softmax(logits)
    # Cross entropy
    # This quantifies how far off the predictions were.
    # You'll learn more about this in future lessons.
    cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
    # Training loss
    # You'll learn more about this in future lessons.
    loss = tf.reduce_mean(cross_entropy)
    # Rate at which the weights are changed
    # You'll learn more about this in future lessons.
    learning_rate = 0.08
    # Gradient Descent
    # This is the method used to train the model
    # You'll learn more about this in future lessons.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
    # Run optimizer and get loss
    _, l = session.run(
        [optimizer, loss],
        feed_dict={features: train_features, labels: train_labels})
    # Print loss
    print('Loss: {}'.format(l))
# Solution is available in the other "solution.py" tab
import tensorflow as tf
def run():
    output = None
    logit_data = [2.0, 1.0, 0.1]
    logits = tf.placeholder(tf.float32)
    # TODO: Calculate the softmax of the logits
    softmax = tf.nn.softmax(logits)
    with tf.Session() as sess:
        # TODO: Feed in the logit data
        output = sess.run(softmax, feed_dict={logits: logit_data})
    return output
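# (Numpy cross-check of the softmax values -- an illustrative addition,
# not part of the quiz code above.)
import numpy as np
_logits = np.array([2.0, 1.0, 0.1])
_exp = np.exp(_logits - _logits.max())  # subtract max for numerical stability
print(_exp / _exp.sum())                # ~[0.66 0.24 0.10]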
# Solution is available in the other "solution.py" tab
import tensorflow as tf
softmax_data = [0.27, 0.11, 0.33, 0.10, 0.19]
one_hot_data = [0, 0, 0, 1, 0]
softmax = tf.placeholder(tf.float32)
one_hot = tf.placeholder(tf.float32)
# TODO: Print cross entropy from session
cross_entropy = -tf.reduce_sum(tf.multiply(one_hot, tf.log(softmax)))
with tf.Session() as sess:
print(sess.run(cross_entropy, feed_dict={softmax:softmax_data, one_hot:one_hot_data}))
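# (Numpy cross-check of the cross entropy -- an illustrative addition: with
# one-hot [0,0,0,1,0], the sum reduces to -log(softmax[3]) = -log(0.10).)
import numpy as np
_one_hot = np.array([0, 0, 0, 1, 0])
_softmax = np.array([0.27, 0.11, 0.33, 0.10, 0.19])
print(-np.sum(_one_hot * np.log(_softmax)))  # ~2.303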
import pandas as pd
# TODO: Set weight1, weight2, and bias
weight1 = 1.0
weight2 = 1.0
bias = -2.0
# DON'T CHANGE ANYTHING BELOW
# Inputs and outputs
test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
correct_outputs = [False, False, False, True]
outputs = []
# Generate and check output
for test_input, correct_output in zip(test_inputs, correct_outputs):
    linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias
    output = int(linear_combination >= 0)
    is_correct_string = 'Yes' if output == correct_output else 'No'
    outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])
# Print output
num_wrong = len([output[4] for output in outputs if output[4] == 'No'])
output_frame = pd.DataFrame(outputs, columns=['Input 1', ' Input 2', ' Linear Combination', ' Activation Output', ' Is Correct'])
if not num_wrong:
    print('Nice! You got it all correct.\n')
else:
    print('You got {} wrong. Keep trying!\n'.format(num_wrong))
print(output_frame.to_string(index=False))
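# (Quick standalone check -- an illustrative addition -- that the chosen
# weight1=1, weight2=1, bias=-2 implements logical AND.)
for _a, _b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(_a, _b, int(1.0 * _a + 1.0 * _b - 2.0 >= 0))  # 0, 0, 0, 1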
import numpy as np
def sigmoid(x):
    # TODO: Implement sigmoid function
    return 1 / (1 + np.exp(-x))
inputs = np.array([0.7, -0.3])
weights = np.array([0.1, 0.8])
bias = -0.1
# TODO: Calculate the output
output = sigmoid(np.dot(weights, inputs) + bias)
print('Output:')
print(output)
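# (Hand check of the value above -- an illustrative addition:
# sigmoid(0.1*0.7 + 0.8*(-0.3) - 0.1) = sigmoid(-0.27) ~ 0.433.)
import math
print(1 / (1 + math.exp(0.27)))  # ~0.433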
import numpy as np
def sigmoid(x):
    """
    Calculate sigmoid
    """
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    """
    Derivative of the sigmoid function
    """
    return sigmoid(x) * (1 - sigmoid(x))
learnrate = 0.5
x = np.array([1, 2, 3, 4])
y = np.array(0.5)
# Initial weights
w = np.array([0.5, -0.5, 0.3, 0.1])
### Calculate one gradient descent step for each weight
### Note: Some steps have been consolidated, so there are
### fewer variable names than in the above sample code
# TODO: Calculate the node's linear combination of inputs and weights
h = np.dot(x, w)
# TODO: Calculate output of neural network
nn_output = sigmoid(h)
# TODO: Calculate error of neural network
error = y - nn_output
# TODO: Calculate the error term
# Remember, this requires the output gradient, which we haven't
# specifically added a variable for.
error_term = error * sigmoid_prime(h)
# Note: The sigmoid_prime function calculates sigmoid(h) twice,
# but you've already calculated it once. You can make this
# code more efficient by calculating the derivative directly
# rather than calling sigmoid_prime, like this:
# error_term = error * nn_output * (1 - nn_output)
# TODO: Calculate change in weights
del_w = learnrate * error_term * x
print('Neural Network output:')
print(nn_output)
print('Amount of Error:')
print(error)
print('Change in Weights:')
print(del_w)
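# (Standalone numeric check of the step above -- an illustrative addition:
# h = x.w = 0.8, output = sigmoid(0.8) ~ 0.690, so each del_w component is
# 0.5 * (0.5 - output) * output * (1 - output) * x_i.)
import numpy as np
_x = np.array([1.0, 2.0, 3.0, 4.0])
_w = np.array([0.5, -0.5, 0.3, 0.1])
_h = _x @ _w                       # 0.8
_out = 1 / (1 + np.exp(-_h))       # ~0.690
print(0.5 * (0.5 - _out) * _out * (1 - _out) * _x)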
import numpy as np
import pandas as pd
admissions = pd.read_csv('binary.csv')
# Make dummy variables for rank
data = pd.concat([admissions, pd.get_dummies(admissions['rank'], prefix='rank')], axis=1)
data = data.drop('rank', axis=1)
# Standarize features
for field in ['gre', 'gpa']:
    mean, std = data[field].mean(), data[field].std()
    data.loc[:, field] = (data[field] - mean) / std
# Split off random 10% of the data for testing
np.random.seed(42)
sample = np.random.choice(data.index, size=int(len(data)*0.9), replace=False)
data, test_data = data.loc[sample], data.drop(sample)
# Split into features and targets
features, targets = data.drop('admit', axis=1), data['admit']
features_test, targets_test = test_data.drop('admit', axis=1), test_data['admit']
import numpy as np
from data_prep import features, targets, features_test, targets_test
def sigmoid(x):
    """
    Calculate sigmoid
    """
    return 1 / (1 + np.exp(-x))
# TODO: We haven't provided the sigmoid_prime function like we did in
# the previous lesson to encourage you to come up with a more
# efficient solution. If you need a hint, check out the comments
# in solution.py from the previous lecture.
# Use to same seed to make debugging easier
np.random.seed(42)
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5
for e in range(epochs):
    del_w = np.zeros(weights.shape)
    for x, y in zip(features.values, targets):
        # Loop through all records, x is the input, y is the target
        # Activation of the output unit
        # Notice we multiply the inputs and the weights here
        # rather than storing h as a separate variable
        output = sigmoid(np.dot(x, weights))
        # The error, the target minus the network output
        error = y - output
        # The error term
        # Notice we calculate f'(h) here instead of defining a separate
        # sigmoid_prime function. This just makes it faster because we
        # can re-use the result of the sigmoid function stored in
        # the output variable
        error_term = error * output * (1 - output)
        # The gradient descent step, the error times the gradient times the inputs
        del_w += error_term * x
    # Update the weights here. The learning rate times the
    # change in weights, divided by the number of records to average
    weights += learnrate * del_w / n_records
    # Printing out the mean square error on the training set
    if e % (epochs / 10) == 0:
        out = sigmoid(np.dot(features, weights))
        loss = np.mean((out - targets) ** 2)
        if last_loss and last_loss < loss:
            print("Train loss: ", loss, " WARNING - Loss Increasing")
        else:
            print("Train loss: ", loss)
        last_loss = loss
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
import numpy as np
def sigmoid(x):
    """
    Calculate sigmoid
    """
    return 1 / (1 + np.exp(-x))
# Network size
N_input = 4
N_hidden = 3
N_output = 2
np.random.seed(42)
# Make some fake data
X = np.random.randn(4)
weights_input_to_hidden = np.random.normal(0, scale=0.1, size=(N_input, N_hidden))
weights_hidden_to_output = np.random.normal(0, scale=0.1, size=(N_hidden, N_output))
# TODO: Make a forward pass through the network
hidden_layer_in = np.dot(X, weights_input_to_hidden)
hidden_layer_out = sigmoid(hidden_layer_in)
print('Hidden-layer Output:')
print(hidden_layer_out)
output_layer_in = np.dot(hidden_layer_out, weights_hidden_to_output)
output_layer_out = sigmoid(output_layer_in)
print('Output-layer Output:')
print(output_layer_out)
import numpy as np
def sigmoid(x):
    """
    Calculate sigmoid
    """
    return 1 / (1 + np.exp(-x))
x = np.array([0.5, 0.1, -0.2])
target = 0.6
learnrate = 0.5
weights_input_hidden = np.array([[0.5, -0.6],
[0.1, -0.2],
[0.1, 0.7]])
weights_hidden_output = np.array([0.1, -0.3])
## Forward pass
hidden_layer_input = np.dot(x, weights_input_hidden)
hidden_layer_output = sigmoid(hidden_layer_input)
output_layer_in = np.dot(hidden_layer_output, weights_hidden_output)
output = sigmoid(output_layer_in)
## Backwards pass
## TODO: Calculate output error
error = target - output
# TODO: Calculate error term for output layer
output_error_term = error * output * (1 - output)
# TODO: Calculate error term for hidden layer
hidden_error_term = np.dot(output_error_term, weights_hidden_output) * \
hidden_layer_output * (1 - hidden_layer_output)
# TODO: Calculate change in weights for hidden layer to output layer
delta_w_h_o = learnrate * output_error_term * hidden_layer_output
# TODO: Calculate change in weights for input layer to hidden layer
delta_w_i_h = learnrate * hidden_error_term * x[:, None]
print('Change in weights for hidden layer to output layer:')
print(delta_w_h_o)
print('Change in weights for input layer to hidden layer:')
print(delta_w_i_h)
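# (Standalone shape check of the backprop deltas -- an illustrative
# addition mirroring the arrays above: the weight changes must match the
# shapes of the weight matrices they update.)
import numpy as np
_x3 = np.array([0.5, 0.1, -0.2])
_w_ih = np.zeros((3, 2)); _w_ho = np.array([0.1, -0.3])
_hidden = 1 / (1 + np.exp(-(_x3 @ _w_ih)))
_out_term = 0.1                                  # scalar output error term
_hid_term = _out_term * _w_ho * _hidden * (1 - _hidden)
print((_out_term * _hidden).shape, (_hid_term * _x3[:, None]).shape)  # (2,) (3, 2)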
```
<a href="https://colab.research.google.com/github/BrunoGomesCoelho/mosquito-networking/blob/master/notebooks/1.2-BrunoGomesCoelho_Colab2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import time
start_time = time.time()
COLAB_IDX = 2
TESTING = False
COLAB = True
if COLAB:
    BASE_DIR = "/content/drive/My Drive/IC/mosquito-networking/"
else:
    BASE_DIR = "../"
from google.colab import drive
drive.mount('/content/drive')
import sys
sys.path.append("/content/drive/My Drive/IC/mosquito-networking/")
!python3 -m pip install -qr "/content/drive/My Drive/IC/mosquito-networking/drive_requirements.txt"
```
- - -
# Trying out a full PyTorch experiment, with TensorBoard, parallel processing, etc.
```
# OPTIONAL: Load the "autoreload" extension so that code can change
#%load_ext autoreload
# OPTIONAL: always reload modules so that as you change code in src, it gets loaded
#%autoreload 2
import numpy as np
import pandas as pd
from src.data import make_dataset
from src.data import read_dataset
from src.data import util
from src.data.colab_dataset import MosquitoDatasetColab
import joblib
from torchsummary import summary
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
```
# Experiment params
```
# Parameters
params = {'batch_size': 64,
'shuffle': True,
'num_workers': 0}
max_epochs = 1
if TESTING:
    params["num_workers"] = 0
version = !python3 --version
version = version[0].split(".")[1]
if int(version) < 7 and params["num_workers"]:
    print("WARNING\n" * 10)
    print("Parallel execution only works for python3.7 or above!")
    print("Running in parallel with other versions is not guaranteed to work")
    print("See https://discuss.pytorch.org/t/valueerror-signal-number-32-out-of-range-when-loading-data-with-num-worker-0/39615/2")
## Load gpu or cpu
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(f"Using device {device}")
```
# load data
```
# Load scaler
#scaler = joblib.load("../data/interim/scaler.pkl")
scaler = joblib.load(BASE_DIR + "data/interim/scaler.pkl")
data = np.load(BASE_DIR + "data/interim/all_wavs.npy", allow_pickle=True)
data = data[data[:, -1].argsort()]
df = pd.read_csv(BASE_DIR + "data/interim/file_names.csv")
df.sort_values("original_name", inplace=True)
errors = (df["original_name"].values != data[:, -1]).sum()
if errors:
    print(f"We have {errors} errors!")
    raise ValueError("Error in WAV/CSV")
x = data[:, 0]
y = df["label"]
train_idx = df["training"] == 1
# Generators
training_set = MosquitoDatasetColab(x[train_idx], y[train_idx].values,
device=device, scaler=scaler)
training_generator = torch.utils.data.DataLoader(training_set, **params,
pin_memory=True)
test_set = MosquitoDatasetColab(x[~train_idx], y[~train_idx].values,
device=device, scaler=scaler)
test_generator = torch.utils.data.DataLoader(test_set, **params,
pin_memory=True)
# Generate some example data
temp_generator = torch.utils.data.DataLoader(training_set, **params)
for (local_batch, local_labels) in temp_generator:
    example_x = local_batch
    example_y = local_labels
    break
```
# Load model
```
from src.models.BasicMosquitoNet2 import BasicMosquitoNet
# create your optimizer
net = BasicMosquitoNet()
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.9)
if device.type == "cuda":
    net.cuda()
summary(net, input_size=example_x.shape[1:])
```
# Start tensorboard
```
from torch.utils.tensorboard import SummaryWriter
save_path = BASE_DIR + f"runs/colab/{COLAB_IDX}/"
# default `log_dir` is "runs" - we'll be more specific here
writer = SummaryWriter(save_path)
```
# train function
```
# Simple train function
def train(net, optimizer, max_epochs, testing=False, testing_idx=0,
          save_idx=1, save_path=""):
    # Loop over epochs
    last_test_loss = 0
    for epoch in range(max_epochs):
        # Training
        cumulative_train_loss = 0
        cumulative_train_acc = 0
        amount_train_samples = 0
        for idx, (local_batch, local_labels) in enumerate(training_generator):
            amount_train_samples += len(local_batch)
            local_batch, local_labels = util.convert_cuda(local_batch,
                                                          local_labels,
                                                          device)
            optimizer.zero_grad()  # zero the gradient buffers
            output = net(local_batch)
            loss = criterion(output, local_labels)
            cumulative_train_loss += loss.data.item()
            loss.backward()
            optimizer.step()  # Does the update
            # Stores accuracy; the net outputs raw logits (BCEWithLogitsLoss),
            # so threshold at 0 (sigmoid(0) = 0.5)
            pred = output >= 0
            cumulative_train_acc += pred.float().eq(local_labels).sum().data.item()
            if testing and idx == testing_idx:
                break
        cumulative_train_loss /= (idx + 1)
        cumulative_train_acc /= amount_train_samples
        writer.add_scalar("Train Loss", cumulative_train_loss, epoch)
        writer.add_scalar("Train Acc", cumulative_train_acc, epoch)

        # Validation
        with torch.set_grad_enabled(False):
            cumulative_test_loss = 0
            cumulative_test_acc = 0
            amount_test_samples = 0
            for idx, (local_batch, local_labels) in enumerate(test_generator):
                amount_test_samples += len(local_batch)
                local_batch, local_labels = util.convert_cuda(local_batch,
                                                              local_labels,
                                                              device)
                output = net(local_batch)
                loss = criterion(output, local_labels)
                cumulative_test_loss += loss.data.item()
                # Stores accuracy (logits thresholded at 0)
                pred = output >= 0
                cumulative_test_acc += pred.float().eq(local_labels).sum().data.item()
                if testing:
                    break
            cumulative_test_loss /= (idx + 1)
            cumulative_test_acc /= amount_test_samples
            writer.add_scalar("Test Loss", cumulative_test_loss, epoch)
            writer.add_scalar("Test Acc", cumulative_test_acc, epoch)
            writer.flush()
        torch.save(net.state_dict(), save_path + f"model_epoch_{epoch}.pt")
    writer.close()
    return cumulative_test_loss
%%time
train(net, optimizer, 150, testing=TESTING, save_path=save_path)
print(start_time)
print(time.time() - start_time)
```
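Since `BCEWithLogitsLoss` consumes raw logits, 0/1 predictions should be thresholded at logit 0 rather than probability 0.5 applied to the logit. A minimal plain-Python check of that equivalence (an illustrative sketch, independent of the training code):

```python
import math

# For a raw logit z, sigmoid(z) >= 0.5 exactly when z >= 0, so predictions
# from a model trained with BCEWithLogitsLoss can be thresholded at 0.
for z in [-2.0, -0.1, 0.0, 0.3, 5.0]:
    prob = 1 / (1 + math.exp(-z))
    assert (z >= 0) == (prob >= 0.5)
print("logit threshold 0 matches probability threshold 0.5")
```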
```
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import math
import pingouin as pg
%matplotlib inline
```
# Patient 8 THY
## 3M Littmann Data
```
#image = Image.open('3M.bmp')
image = Image.open('3M_thy_post_s.bmp')
image
x = image.size[0]
y = image.size[1]
print(x)
print(y)
matrix = []
points = []
integrated_density = 0
for i in range(x):
    matrix.append([])
    for j in range(y):
        matrix[i].append(image.getpixel((i, j)))
        #integrated_density += image.getpixel((i,j))[1]
        #points.append(image.getpixel((i,j))[1])
```
### Extract Red Line Position
```
redMax = 0
xStore = 0
yStore = 0
for xAxis in range(x):
    for yAxis in range(y):
        currentPoint = matrix[xAxis][yAxis]
        if currentPoint[0] == 255 and currentPoint[1] < 10 and currentPoint[2] < 10:
            redMax = currentPoint[0]
            xStore = xAxis
            yStore = yAxis
print(xStore, yStore)
```
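The pixel-by-pixel scan above can also be vectorized with numpy (an illustrative sketch, assuming the image is loaded as an H x W x 3 array, e.g. via `np.array(image)`):

```python
import numpy as np

# Vectorized red-pixel search on a tiny synthetic image.
img = np.zeros((4, 5, 3), dtype=np.uint8)
img[2, 3] = (255, 0, 0)                    # plant one pure-red pixel
red = (img[..., 0] == 255) & (img[..., 1] < 10) & (img[..., 2] < 10)
ys, xs = np.where(red)                     # row (y) and column (x) indices
print(list(zip(xs.tolist(), ys.tolist())))  # [(3, 2)]
```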
### Extract Blue Points
```
redline_pos = 279
gain = 120
absMax = 0
littmannArr = []
points_vertical = []
theOne = 0
for xAxis in range(x):
    for yAxis in range(y):
        currentPoint = matrix[xAxis][yAxis]
        # Pickup Blue points
        if currentPoint[2] == 255 and currentPoint[0] < 220 and currentPoint[1] < 220:
            points_vertical.append(yAxis)
    #print(points_vertical)
    # Choose the largest amplitude
    for item in points_vertical:
        if abs(item - redline_pos) > absMax:
            absMax = abs(item - redline_pos)
            theOne = item
    littmannArr.append((theOne - redline_pos) * gain)
    absMax = 0
    theOne = 0
    points_vertical = []
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr, linewidth=0.6, color='blue')
```
# Ascul Pi Data
```
pathBase = 'C://Users//triti//OneDrive//Dowrun//Text//Manuscripts//Data//TianHaoyang//AusculPi_Post//'
filename = 'Numpy_Array_File_2020-06-24_18_15_52.npy'
line = pathBase + filename
arr = np.load(line)
arr
arr.shape
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(arr[0], linewidth=1.0, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(arr[:,100], linewidth=1.0, color='black')
start = 1675
end = 2040
start_adj = int(start * 2583 / 3000)
end_adj = int(end * 2583 / 3000)
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(arr[start_adj:end_adj,240], linewidth=0.6, color='black')
start_adj-end_adj
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr, linewidth=0.6, color='blue')
asculArr = arr[start_adj:end_adj,400]
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(asculArr, linewidth=0.6, color='black')
```
## Preprocess the Two Arrays
```
asculArr_processed = []
littmannArr_processed = []
for ascul in asculArr:
asculArr_processed.append(math.fabs(ascul))
for item in littmannArr:
littmannArr_processed.append(math.fabs(item))
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(asculArr_processed, linewidth=0.6, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr_processed, linewidth=0.6, color='blue')
len(littmannArr)
len(asculArr)
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(asculArr_processed[:170], linewidth=0.6, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr_processed[:170], linewidth=0.6, color='blue')
```
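The rectification loops above simply take the absolute value of each sample; with NumPy this is a single vectorized call (a sketch with hypothetical sample values):

```python
import numpy as np

# Hypothetical amplitude samples; np.abs rectifies them element-wise
littmann = np.array([-120, 240, -360])
littmann_rectified = np.abs(littmann)
```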
### Correlation Coefficient
```
stats.pearsonr(asculArr_processed, littmannArr_processed)
stats.pearsonr(asculArr_processed[:170], littmannArr_processed[:170])
```
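`stats.pearsonr` returns both the correlation coefficient and a two-sided p-value. The coefficient itself is just the covariance normalized by the two standard deviations; a stdlib sketch of the formula:

```python
import math

def pearson_r(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)
```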
### Goodness of Fit
```
stats.chisquare(asculArr_processed[:80], littmannArr_processed[2:82])
def cosCalculate(a, b):
    # Cosine similarity: dot(a, b) / (||a|| * ||b||)
    l = len(a)
    sumXY = 0
    sumXSquare = 0
    sumYSquare = 0
    for i in range(l):
        sumXY = sumXY + a[i]*b[i]
        sumXSquare = sumXSquare + a[i]**2
        sumYSquare = sumYSquare + b[i]**2
    cosValue = sumXY / (math.sqrt(sumXSquare) * math.sqrt(sumYSquare))
    return cosValue
cosCalculate(asculArr_processed, littmannArr_processed)
```
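Cosine similarity can also be computed concisely with NumPy, dividing the dot product by the product of the two vector norms (a sketch, assuming equal-length 1-D inputs):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```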
# Cross Comparing
### Volunteer 3M vs Patient 8 Ascul Post
```
#image = Image.open('3M.bmp')
image = Image.open('3Ms.bmp')
image
x = image.size[0]
y = image.size[1]
matrix = []
points = []
integrated_density = 0
for i in range(x):
matrix.append([])
for j in range(y):
matrix[i].append(image.getpixel((i,j)))
#integrated_density += image.getpixel((i,j))[1]
#points.append(image.getpixel((i,j))[1])
redline_pos = 51
absMax = 0
littmannArr2 = []
points_vertical = []
theOne = 0
for xAxis in range(x):
for yAxis in range(y):
currentPoint = matrix[xAxis][yAxis]
# Pickup Blue points
if currentPoint[2] == 255 and currentPoint[0] < 220 and currentPoint[1] < 220:
points_vertical.append(yAxis)
#print(points_vertical)
# Choose the largest amplitude
for item in points_vertical:
if abs(item-redline_pos) > absMax:
absMax = abs(item-redline_pos)
theOne = item
littmannArr2.append((theOne-redline_pos)*800)
absMax = 0
theOne = 0
points_vertical = []
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr2, linewidth=0.6, color='blue')
len(littmannArr2)
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr2[:400], linewidth=0.6, color='blue')
asculArr
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(asculArr, linewidth=0.6, color='black')
asculArr_processed = []
littmannArr2_processed = []
for ascul in asculArr:
asculArr_processed.append(math.fabs(ascul))
for item in littmannArr2[:400]:
littmannArr2_processed.append(math.fabs(item))
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(asculArr_processed, linewidth=1.0, color='black')
fig = plt.figure()
s = fig.add_subplot(111)
s.plot(littmannArr2_processed[:314], linewidth=1.0, color='blue')
len(asculArr_processed)
len(littmannArr2_processed[:314])
stats.pearsonr(asculArr_processed, littmannArr2_processed[:314])
```
| github_jupyter |
<a href="https://colab.research.google.com/github/shivammehta007/QuestionGenerator/blob/master/Classifier_to_detect_type_of_questions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Testing Classifier Model
```
# Essential Installation for working of notebook
!pip install -U tqdm
```
### Imports
```
import os
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import spacy
import seaborn as sns
import torch
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate, train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from sklearn.metrics import f1_score
from tqdm.auto import tqdm, trange
from wordcloud import WordCloud
from xgboost import XGBClassifier
```
### Environment Setup
```
SEED=1234
def seed_all(seed=1234):
"""Seed the results for duplication"""
random.seed(seed)
os.environ["PYTHONHASHSEED"] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
seed_all(SEED)
tqdm.pandas()
nlp = spacy.load("en_core_web_sm")
from google.colab import drive
drive.mount('/content/drive')
DATASET_LOCATION = '/content/drive/My Drive/Data/GrammarDataset.csv'
```
## Dataset Overview
```
original_dataset = pd.read_csv(DATASET_LOCATION, sep="\t")
original_dataset.head()
original_dataset.dtypes
```
#### EDA
```
fig, ax = plt.subplots(figsize=(10, 7))
question_class = original_dataset["Type of Question"].value_counts()
question_class.plot(kind='bar')
plt.title('Type of Question Counts')
plt.show()
key_wordcloud = WordCloud(width=600, height=400).generate(" ".join(original_dataset["key"]))
plt.figure(figsize=(10,8), facecolor='k')
plt.imshow(key_wordcloud)
plt.axis("off")
plt.tight_layout(pad=0)
plt.show()
```
### PreProcessing
```
def preprocessor(dataset):
    # Replace runs of underscores with a single one
    dataset["Question"] = dataset["Question"].str.replace("_{2,}", "_", regex=True)
    # Remove brackets
    dataset["Question"] = dataset["Question"].str.replace(r"[()]", "", regex=True)
    # Strip whitespace
    dataset["Question"] = dataset["Question"].str.strip()
    # Convert all text to lowercase
    for column in dataset.columns:
        dataset[column] = dataset[column].str.lower()
    return dataset
original_dataset = preprocessor(original_dataset)
original_dataset.columns
```
#### Encoding Labels
```
label_encoder = LabelEncoder()
original_dataset["Type of Question"] = label_encoder.fit_transform(original_dataset["Type of Question"])
```
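Under the hood, `LabelEncoder` assigns each distinct class string an integer code in sorted order and can invert the mapping later; a stdlib sketch of the same idea (the labels here are hypothetical):

```python
def encode_labels(values):
    """Map each distinct label to an integer code, like sklearn's LabelEncoder."""
    classes = sorted(set(values))
    to_code = {label: i for i, label in enumerate(classes)}
    return classes, [to_code[v] for v in values]

classes, codes = encode_labels(["tense", "article", "tense"])
```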
#### Split Training and Testing Data
```
X_train_orig_dataset, X_test_orig_dataset, y_train_orig_dataset, y_test_orig_dataset = train_test_split(original_dataset[["Question", "key", "answer"]], original_dataset["Type of Question"], random_state=SEED, test_size=0.15)
X_train_orig_dataset.shape, X_test_orig_dataset.shape, y_train_orig_dataset.shape, y_test_orig_dataset.shape
```
## Experiments:
```
X_train_orig_dataset.head()
```
### Experimentation Setup
#### Models
```
text_vectorizers = [
('CountVectorizer', CountVectorizer(tokenizer=lambda x: x.split())),
('TfIdFVectorize', TfidfVectorizer(tokenizer=lambda x: x.split()))
]
classifiers = [
    ('Multinomial Naive Bayes', MultinomialNB(alpha=0.1)),
('LogisticRegression', LogisticRegression(max_iter=5000)),
('SVM', SVC()), ('RandomForest', RandomForestClassifier()),
('XGBClassifier', XGBClassifier(random_state=SEED, learning_rate=0.01))
]
def check_classification(X_train, y_train, X_test, y_test):
# Result DataFrame
result_dataframe = pd.DataFrame({
'Vectorizer': [name for name, model in text_vectorizers]
}, columns = ['Vectorizer'] + [name for name, model in classifiers])
result_dataframe.set_index('Vectorizer', inplace=True)
best_score = 0
best_model = None
best_pipe = None
for classifier_name, classifier in classifiers:
for text_vectorizer_name, text_vectorizer in text_vectorizers:
pipe = Pipeline(steps=[
('text_vec', text_vectorizer),
('class', classifier)
])
pipe.fit(X_train, y_train)
f1_measure = f1_score(pipe.predict(X_test), y_test, average='micro')
# print('Model : {} -> {}: accuracy: {:.4f}'.format(text_vectorizer_name, classifier_name, acc*100))
            result_dataframe.loc[text_vectorizer_name, classifier_name] = '{:.4f}'.format(f1_measure)
if f1_measure > best_score:
best_score = f1_measure
best_pipe = pipe
best_model = '{} -> {}'.format(text_vectorizer_name, classifier_name)
print("\n\nBest F1 Measure was: {:.4f} with the Model: {}".format(best_score, best_model))
return result_dataframe
```
#### Result Placeholder
```
result_dataframe = pd.DataFrame({
'Vectorizer': [name for name, model in text_vectorizers]
}, columns = ['Vectorizer'] + [name for name, model in classifiers])
result_dataframe.set_index('Vectorizer', inplace=True)
results = []
```
#### N-Gram Generator
```
from itertools import cycle
from collections import deque
def ngrams(sentence, n=2):
words = [word.text for word in nlp(sentence)]
d = deque(maxlen=n)
d.extend(words[:n])
words = words[n:]
results = []
for window, word in zip(cycle((d,)), words):
results.append([ngram for ngram in window])
d.append(word)
results.append([ngram for ngram in d])
return results
ngrams("this sentence is a test sentence to check ngrams")
for n in range(2,5):
print(" {}-Grams:".format(n), end=" ")
print(ngrams("Hello World! This is a test example of N-Gram generator", n))
```
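The deque-based generator above can also be written as a plain sliding window over the token list (a sketch using whitespace splitting instead of spaCy tokenization):

```python
def simple_ngrams(sentence, n=2):
    """Return all contiguous n-grams of whitespace-separated tokens."""
    words = sentence.split()
    return [words[i:i + n] for i in range(len(words) - n + 1)]
```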
#### Sample Generator
```
def get_input_samples(X, y, n=5):
    results = []
    for i in random.sample(range(len(X)), n):
        results.append("{} --> {}".format(X.iloc[i], label_encoder.inverse_transform([y.iloc[i]])))
    return results
```
### Experiment 1: word#{word_i}
Concatenating Question + key + answer in a bag-of-words approach, with no feature engineering.
```
experiment_text = "Concatenating question + key + answer. like word#word_i"
```
#### Preprocessing
```
X_train = X_train_orig_dataset["Question"] + " " + X_train_orig_dataset["key"] + " " + X_train_orig_dataset["answer"]
y_train = y_train_orig_dataset
X_test = X_test_orig_dataset["Question"] + " " + X_test_orig_dataset["key"] + " " + X_test_orig_dataset["answer"]
y_test = y_test_orig_dataset
def add_word_template(text):
tokens = nlp(text)
text = []
for token in tokens:
text.append("words#{}".format(token.text))
return " ".join(text)
add_word_template("i _ the steak for dinner. choose i chose the steak for dinner.")
X_train = X_train.progress_apply(add_word_template)
X_test = X_test.progress_apply(add_word_template)
```
#### Input Samples
```
get_input_samples(X_train, y_train)
```
#### Experimentation Results
```
result = check_classification(X_train, y_train, X_test, y_test)
results.append((experiment_text, result))
result
```
### Experiment 2: word_pos#{word_i}_{pos_i}
With words and POS tags. `token.tag_` gives a detailed POS tag that distinguishes between verb forms.
More information about the tags: https://spacy.io/api/annotation#pos-tagging
```
experiment_text = "Adding Pos Tags along with word word_pos#{word_i}_{pos_i} + k#tags + a#tags Unigrams"
```
#### Preprocessing
```
X_train = X_train_orig_dataset["Question"] + " " + X_train_orig_dataset["key"] + " " + X_train_orig_dataset["answer"]
y_train = y_train_orig_dataset
X_test = X_test_orig_dataset["Question"] + " " + X_test_orig_dataset["key"] + " " + X_test_orig_dataset["answer"]
y_test = y_test_orig_dataset
def add_word_pos_template(text):
tokens = nlp(text)
text = []
for token in tokens:
text.append("word_pos#{}_{}".format(token.text, token.tag_))
return " ".join(text)
# Testing method
test_sentence = "Testing the Pos Tagger in this sentence let's see how it works!".lower()
add_word_pos_template(test_sentence)
X_train = X_train.progress_apply(add_word_pos_template)
X_test = X_test.progress_apply(add_word_pos_template)
```
#### Input Samples
```
get_input_samples(X_train, y_train)
```
#### Experimentation Results
```
result = check_classification(X_train, y_train, X_test, y_test)
results.append((experiment_text, result))
result
```
### Experiment 3: pos#{pos_i}
With POS tags only
```
experiment_text = "Classifying based on POS tags pos#{pos_i}"
```
#### Preprocessing
```
X_train = X_train_orig_dataset["Question"] + " " + X_train_orig_dataset["key"] + " " + X_train_orig_dataset["answer"]
y_train = y_train_orig_dataset
X_test = X_test_orig_dataset["Question"] + " " + X_test_orig_dataset["key"] + " " + X_test_orig_dataset["answer"]
y_test = y_test_orig_dataset
def add_pos_template(text):
tokens = nlp(text)
text = []
for token in tokens:
text.append("pos#{}".format(token.tag_))
return " ".join(text)
# Testing method
test_sentence = "Testing the Pos Tagger in this sentence let's see how it works!".lower()
add_pos_template(test_sentence)
X_train = X_train.progress_apply(add_pos_template)
X_test = X_test.progress_apply(add_pos_template)
```
#### Input Samples
```
get_input_samples(X_train, y_train)
```
#### Experimentation Results
```
result = check_classification(X_train, y_train, X_test, y_test)
results.append((experiment_text, result))
result
```
### Experiment 4: word_tag#{word_i}_{q/k/a}
With Question/Key/Answer tagging
```
experiment_text = "Tagging word with q, k, a example: word_tag#{word_i}_{q/k/a}"
```
#### Preprocessing
```
def add_word_tag_template(text, tag):
tokens = nlp(text)
text = []
for token in tokens:
text.append("word_tag#{}_{}".format(token.text, tag))
return " ".join(text)
add_word_tag_template("test sentence", "q")
X_train = X_train_orig_dataset["Question"].apply(lambda x: add_word_tag_template(x, "q")) + " " + X_train_orig_dataset["key"].apply(lambda x: add_word_tag_template(x, "k")) + " " + X_train_orig_dataset["answer"].apply(lambda x: add_word_tag_template(x, "a"))
y_train = y_train_orig_dataset
X_test = X_test_orig_dataset["Question"].apply(lambda x: add_word_tag_template(x, "q")) + " " + X_test_orig_dataset["key"].apply(lambda x: add_word_tag_template(x, "k")) + " " + X_test_orig_dataset["answer"].apply(lambda x: add_word_tag_template(x, "a"))
y_test = y_test_orig_dataset
```
#### Input Samples
```
get_input_samples(X_train, y_train)
```
#### Experimentation Results
```
result = check_classification(X_train, y_train, X_test, y_test)
results.append((experiment_text, result))
result
```
### Experiment 5: word_pos_tag#{word_i}_{pos_i}_{q/k/a}
POS tagging combined with q/k/a tagging
```
experiment_text = "Adding POS Tagging and qka tagging example: word_pos_tag#{word_i}_{pos_i}_{(q, k, a)}"
```
#### Preprocessing
```
def add_word_pos_tag_template(text, tag):
tokens = nlp(text)
text = []
for token in tokens:
text.append("word_pos_tag#{}_{}_{}".format(token.text, token.tag_, tag))
return " ".join(text)
add_word_pos_tag_template("this is test example", "a")
X_train = X_train_orig_dataset["Question"].apply(lambda x: add_word_pos_tag_template(x, "q")) + " " + X_train_orig_dataset["key"].apply(lambda x: add_word_pos_tag_template(x, "k")) + " " + X_train_orig_dataset["answer"].apply(lambda x: add_word_pos_tag_template(x, "a"))
y_train = y_train_orig_dataset
X_test = X_test_orig_dataset["Question"].apply(lambda x: add_word_pos_tag_template(x, "q")) + " " + X_test_orig_dataset["key"].apply(lambda x: add_word_pos_tag_template(x, "k")) + " " + X_test_orig_dataset["answer"].apply(lambda x: add_word_pos_tag_template(x, "a"))
y_test = y_test_orig_dataset
```
#### Input Samples
```
get_input_samples(X_train, y_train)
```
#### Experimentation Results
```
result = check_classification(X_train, y_train, X_test, y_test)
results.append((experiment_text, result))
result
```
### Experiment 6: word#{word_i} word_pos_tag#{word_i}_{pos_i}_{q/k/a}
```
experiment_text = "Combining word and word_pos_tags"
```
#### Preprocessing
```
def add_word_and_word_pos_tag_template(text, tag):
tokens = nlp(text)
text = []
for token in tokens:
text.append("word#{} word_pos_tag##{}_{}_{}".format(token.text, token.text, token.tag_, tag))
return " ".join(text)
add_word_and_word_pos_tag_template("This is a test sentence! We will see the difference between playing and played", "q")
X_train = X_train_orig_dataset["Question"].apply(lambda x: add_word_and_word_pos_tag_template(x, "q")) + " " + X_train_orig_dataset["key"].apply(lambda x: add_word_and_word_pos_tag_template(x, "k")) + " " + X_train_orig_dataset["answer"].apply(lambda x: add_word_and_word_pos_tag_template(x, "a"))
y_train = y_train_orig_dataset
X_test = X_test_orig_dataset["Question"].apply(lambda x: add_word_and_word_pos_tag_template(x, "q")) + " " + X_test_orig_dataset["key"].apply(lambda x: add_word_and_word_pos_tag_template(x, "k")) + " " + X_test_orig_dataset["answer"].apply(lambda x: add_word_and_word_pos_tag_template(x, "a"))
y_test = y_test_orig_dataset
```
#### Input Samples
```
get_input_samples(X_train, y_train)
```
#### Experimentation Results
```
result = check_classification(X_train, y_train, X_test, y_test)
results.append((experiment_text, result))
result
```
### Experiment 7: ngram#{word_i}_{word_i+1}
```
experiment_text = "Adding bigrams of words bigram#{word_i}_{word_i+1}"
```
#### Preprocessing
```
def add_ngram_template(text, n=2):
tokens = ngrams(text, n)
text = []
for token in tokens:
text.append("ngram#{}".format("_".join(token)))
return " ".join(text)
add_ngram_template("This is a test sentence! We will see the difference between playing and played")
X_train = X_train_orig_dataset["Question"].apply(lambda x: add_ngram_template(x)) + " " + X_train_orig_dataset["key"].apply(lambda x: add_ngram_template(x)) + " " + X_train_orig_dataset["answer"].apply(lambda x: add_ngram_template(x))
y_train = y_train_orig_dataset
X_test = X_test_orig_dataset["Question"].apply(lambda x: add_ngram_template(x)) + " " + X_test_orig_dataset["key"].apply(lambda x: add_ngram_template(x)) + " " + X_test_orig_dataset["answer"].apply(lambda x: add_ngram_template(x))
y_test = y_test_orig_dataset
```
#### Input Samples
```
get_input_samples(X_train, y_train)
```
#### Experimentation Results
```
result = check_classification(X_train, y_train, X_test, y_test)
results.append((experiment_text, result))
result
```
### Experiment 8: ngrampos#{pos_i}_{pos_i+1}
```
experiment_text = "Adding bigrams of pos tags ngrampos#{pos_i}_{pos_i+1}"
```
#### Preprocessing
```
def add_ngram_pos_template(text, n=2):
tokens = nlp(text)
text = []
for token in tokens:
text.append("{}".format(token.tag_))
text = " ".join(text)
tokens = ngrams(text, n)
text = []
for token in tokens:
text.append("ngrampos#{}".format("_".join(token)))
return " ".join(text)
add_ngram_pos_template("This is a test sentence! We will see the difference between playing and played")
X_train = X_train_orig_dataset["Question"].apply(lambda x: add_ngram_pos_template(x)) + " " + X_train_orig_dataset["key"].apply(lambda x: add_ngram_pos_template(x)) + " " + X_train_orig_dataset["answer"].apply(lambda x: add_ngram_pos_template(x))
y_train = y_train_orig_dataset
X_test = X_test_orig_dataset["Question"].apply(lambda x: add_ngram_pos_template(x)) + " " + X_test_orig_dataset["key"].apply(lambda x: add_ngram_pos_template(x)) + " " + X_test_orig_dataset["answer"].apply(lambda x: add_ngram_pos_template(x))
y_test = y_test_orig_dataset
```
#### Input Samples
```
get_input_samples(X_train, y_train)
```
#### Experimentation Results
```
result = check_classification(X_train, y_train, X_test, y_test)
results.append((experiment_text, result))
result
```
# Results
```
from IPython.display import display, HTML
for i, (description, result) in enumerate(results):
print("Experiment {}:".format(i+1))
print(description)
display(HTML(result.to_html()))
print("\n\n")
```
# Summary
After this series of experiments, we can identify the most important features:
1. Q/K/A tagging was one of the most important features
2. POS tagging that distinguishes verb forms improved performance over plain words
3. The combination of Q/K/A and POS tags is an important feature as well
4. Bigrams (of words or POS tags) were not very effective for classifying this data
5. Random Forest, Logistic Regression, and XGBClassifier are all valid classifier choices for this task
| github_jupyter |
# Train a Simple Audio Recognition model for microcontroller use
This notebook demonstrates how to train a 20kb [Simple Audio Recognition](https://www.tensorflow.org/tutorials/sequences/audio_recognition) model for [TensorFlow Lite for Microcontrollers](https://tensorflow.org/lite/microcontrollers/overview). It will produce the same model used in the [micro_speech](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/micro_speech) example application.
The model is designed to be used with [Google Colaboratory](https://colab.research.google.com).
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/train_speech_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/train_speech_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
</table>
The notebook runs Python scripts to train and freeze the model, and uses the TensorFlow Lite converter to convert it for use with TensorFlow Lite for Microcontrollers.
**Training is much faster using GPU acceleration.** Before you proceed, ensure you are using a GPU runtime by going to **Runtime -> Change runtime type** and selecting **GPU**. Training 18,000 iterations will take 1.5-2 hours on a GPU runtime.
## Configure training
The following `os.environ` lines can be customized to set the words that will be trained for, and the steps and learning rate of the training. The default values will result in the same model that is used in the micro_speech example. Run the cell to set the configuration:
```
import os
# A comma-delimited list of the words you want to train for.
# The options are: yes,no,up,down,left,right,on,off,stop,go
# All other words will be used to train an "unknown" category.
os.environ["WANTED_WORDS"] = "yes,no"
# The number of steps and learning rates can be specified as comma-separated
# lists to define the rate at each stage. For example,
# TRAINING_STEPS=15000,3000 and LEARNING_RATE=0.001,0.0001
# will run 18,000 training loops in total, with a rate of 0.001 for the first
# 15,000, and 0.0001 for the final 3,000.
os.environ["TRAINING_STEPS"]="15000,3000"
os.environ["LEARNING_RATE"]="0.001,0.0001"
# Calculate the total number of steps, which is used to identify the checkpoint
# file name.
total_steps = sum(map(lambda string: int(string),
os.environ["TRAINING_STEPS"].split(",")))
os.environ["TOTAL_STEPS"] = str(total_steps)
# Print the configuration to confirm it
!echo "Training these words: ${WANTED_WORDS}"
!echo "Training steps in each stage: ${TRAINING_STEPS}"
!echo "Learning rate in each stage: ${LEARNING_RATE}"
!echo "Total number of training steps: ${TOTAL_STEPS}"
```
## Install dependencies
Next, we'll install a GPU build of TensorFlow, so we can use GPU acceleration for training. We also clone the TensorFlow repository, which contains the scripts that train and freeze the model.
```
# Install the nightly build
!pip install -q tf-nightly-gpu==1.15.0.dev20190729
!git clone https://github.com/dansitu/tensorflow
```
## Load TensorBoard
Now, set up TensorBoard so that we can graph our accuracy and loss as training proceeds.
```
# Delete any old logs from previous runs
!rm -rf /content/retrain_logs
# Load TensorBoard
%load_ext tensorboard
%tensorboard --logdir /content/retrain_logs
```
## Begin training
Next, run the following script to begin training. The script will first download the training data:
```
!python tensorflow/tensorflow/examples/speech_commands/train.py \
--model_architecture=tiny_conv --window_stride=20 --preprocess=micro \
--wanted_words=${WANTED_WORDS} --silence_percentage=25 --unknown_percentage=25 \
--quantize=1 --verbosity=WARN --how_many_training_steps=${TRAINING_STEPS} \
--learning_rate=${LEARNING_RATE} --summaries_dir=/content/retrain_logs \
  --data_dir=/content/speech_dataset --train_dir=/content/speech_commands_train
```
## Freeze the graph
Once training is complete, run the following cell to freeze the graph.
```
!python tensorflow/tensorflow/examples/speech_commands/freeze.py \
--model_architecture=tiny_conv --window_stride=20 --preprocess=micro \
--wanted_words=${WANTED_WORDS} --quantize=1 --output_file=/content/tiny_conv.pb \
--start_checkpoint=/content/speech_commands_train/tiny_conv.ckpt-${TOTAL_STEPS}
```
## Convert the model
Run this cell to use the TensorFlow Lite converter to convert the frozen graph into the TensorFlow Lite format, fully quantized for use with embedded devices.
```
!toco \
--graph_def_file=/content/tiny_conv.pb --output_file=/content/tiny_conv.tflite \
--input_shapes=1,1960 --input_arrays=Reshape_1 --output_arrays='labels_softmax' \
--inference_type=QUANTIZED_UINT8 --mean_values=0 --std_dev_values=9.8077
```
The following cell will print the model size, which will be under 20 kilobytes.
```
import os
model_size = os.path.getsize("/content/tiny_conv.tflite")
print("Model is %d bytes" % model_size)
```
Finally, we use xxd to transform the model into a source file that can be included in a C++ project and loaded by TensorFlow Lite for Microcontrollers.
```
# Install xxd if it is not available
!apt-get -qq install xxd
# Save the file as a C source file
!xxd -i /content/tiny_conv.tflite > /content/tiny_conv.cc
# Print the source file
!cat /content/tiny_conv.cc
```
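If `xxd` is not available, the same bytes-to-C-array conversion can be sketched in pure Python (the variable name here is hypothetical; `xxd -i` derives it from the file path):

```python
def bytes_to_c_array(data, var_name="tiny_conv_tflite"):
    """Render raw bytes as a C source snippet, similar to `xxd -i`."""
    lines = []
    for i in range(0, len(data), 12):
        chunk = ", ".join("0x{:02x}".format(b) for b in data[i:i + 12])
        lines.append("  " + chunk)
    return ("unsigned char {0}[] = {{\n{1}\n}};\n"
            "unsigned int {0}_len = {2};\n").format(var_name, ",\n".join(lines), len(data))

snippet = bytes_to_c_array(b"\x00\x01\x02")
```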
| github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/get_image_resolution.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/get_image_resolution.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/get_image_resolution.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
naip = ee.Image('USDA/NAIP/DOQQ/m_3712213_sw_10_1_20140613')
Map.setCenter(-122.466123, 37.769833, 17)
Map.addLayer(naip, {'bands': ['N', 'R','G']}, 'NAIP')
naip_resolution = naip.select('N').projection().nominalScale()
print("NAIP resolution: ", naip_resolution.getInfo())
landsat = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
landsat_resolution = landsat.select('B1').projection().nominalScale()
print("Landsat resolution: ", landsat_resolution.getInfo())
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
| github_jupyter |
```
import os
import sys
dir_parent = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
sys.path.append(dir_parent) # add parent directory to path
import json
import winsound
import datetime
import numpy as np
import pandas as pd
import yfinance as yf
from tqdm import tqdm
from src.db import DataBase
from src.utils_beeps import beeps
from src.utils_date import add_days
from src.utils_stocks import get_ls_sym
from src.utils_stocks import get_df_prices
from src.utils_stocks import suppress_stdout
dir_db = os.path.join(dir_parent, 'data', 'db')
dir_db_demo = os.path.join(dir_parent, 'data', 'demo',)
ls_init_str = [
#prices_m
'''CREATE TABLE IF NOT EXISTS prices_m(
sym TEXT
,datetime TEXT
,open REAL
,high REAL
,low REAL
,adj_close REAL
,volume INTEGER
,is_reg_hours INTEGER)''',
'''CREATE INDEX IF NOT EXISTS index_prices_m_all
ON prices_m(sym, date(datetime), is_reg_hours)''',
'''CREATE INDEX IF NOT EXISTS index_prices_m_date
ON prices_m(date(datetime))''',
#prices_d
'''CREATE TABLE IF NOT EXISTS prices_d(
sym TEXT
,date TEXT
,open REAL
,high REAL
,low REAL
,adj_close REAL
,volume INTEGER)''',
'''CREATE INDEX IF NOT EXISTS index_prices_d_date
ON prices_d(sym, date(date))''',
#stocks
'''CREATE TABLE IF NOT EXISTS stocks(
sym TEXT
,long_name TEXT
,sec TEXT
,ind TEXT
,quote_type TEXT
,fund_family TEXT
,summary TEXT
,timestamp TEXT)''',
'''CREATE INDEX IF NOT EXISTS index_stocks
ON stocks(sym, quote_type)''',
#stocks_error
'''CREATE TABLE IF NOT EXISTS stocks_error(
sym TEXT)''',
#proba
'''CREATE TABLE IF NOT EXISTS proba(
sym TEXT
,datetime TEXT
,my_index INTEGER
,proba REAL
,datetime_update TEXT)''',
]
db = DataBase(ls_init_str, dir_db)
db_demo = DataBase(ls_init_str, dir_db_demo)
```
# Get Misc Dataframes
```
q = '''
select
sym
,count(*)
from
prices_m
where
date(datetime) = '2021-02-18'
group by
sym
order by
count(*) desc
limit 3
'''
df = pd.read_sql(q, db.conn)
df
```
# View Tables, Indexes
```
q = '''
SELECT *
FROM sqlite_master
'''
pd.read_sql(q, db.conn)
q='''
SELECT DISTINCT DATE(date)
FROM prices_d
WHERE sym='BYND'
AND DATE(date) >= '{}'
--AND DATE(date) <= '{}'
ORDER BY date
'''.format('2021-03-01', '2021-02-06')
df = pd.read_sql(q, db.conn)
df
```
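Interpolating dates into SQL with `.format()` works in a private notebook, but parameterized queries are the safer habit; a sketch against an in-memory SQLite database with hypothetical rows:

```python
import sqlite3
import pandas as pd

# Build a tiny stand-in for the prices_d table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices_d (sym TEXT, date TEXT, adj_close REAL)")
conn.executemany(
    "INSERT INTO prices_d VALUES (?, ?, ?)",
    [("BYND", "2021-03-01", 145.2), ("BYND", "2021-03-02", 147.9)],
)
# '?' placeholders are bound by the driver instead of string formatting
q = """
    SELECT DISTINCT DATE(date) AS date
    FROM prices_d
    WHERE sym = ?
      AND DATE(date) >= ?
    ORDER BY date
"""
df = pd.read_sql(q, conn, params=("BYND", "2021-03-01"))
```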
# Manual db view/modify
```
#execute
assert 0
q = '''
ALTER TABLE trading_days
RENAME COLUMN Date TO date
'''
q = '''
drop table trading_days
'''
q='''
ALTER TABLE prices_interday
RENAME TO prices_d
'''
q='''
drop index index_prices_interday
'''
q='''
UPDATE prices_m
SET is_reg_hours = CASE
WHEN time(datetime) < time('09:30:00') THEN 0
WHEN time(datetime) > time('15:59:00') THEN 0
ELSE 1
END
WHERE DATE(datetime) >= '2020-11-02'
AND DATE(datetime) <= '2020-11-24'
'''
q='''
--UPDATE prices_m
SET datetime = DATETIME(datetime, '-1 hours')
WHERE DATE(datetime) >= '2020-11-02'
AND DATE(datetime) <= '2020-11-24'
'''
db.execute(q)
beeps()
get_df_prices('BYND', '2020-11-28', '2020-12-03')
dt_info = yf.Ticker('GME').info
q = '''
DELETE
FROM prices_d
WHERE DATE(date) = '{}'
'''.format('2021-03-02')
db.execute(q)
dt_info
def get_df_info(sym):
'''Returns dataframe containing general info about input symbol
Args:
sym (str): e.g. BYND
Returns:
df_info (pandas.DataFrame)
sym (str)
long_name (str)
sec (str)
ind (str)
quote_type (str)
fund_family (str)
summary (str)
timestamp (datetime)
'''
dt_info = yf.Ticker(sym).info
dt_info['timestamp'] = datetime.datetime.now()
dt_info['sector'] = dt_info.get('sector')
dt_col = {
'symbol':'sym',
'longName':'long_name',
'sector':'sec',
'industry':'ind',
'quoteType':'quote_type',
'fundFamily':'fund_family',
'longBusinessSummary':'summary',
'timestamp':'timestamp',
}
dt_info = {key:dt_info.get(key) for key in dt_col}
df_info = pd.DataFrame([dt_info])
df_info = df_info.rename(columns=dt_col)
return df_info
test = get_df_info('GME')
test
```
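The `is_reg_hours` UPDATE above encodes the regular trading session in SQL; the same predicate can be sketched in Python (the 09:30–15:59 window mirrors the query's CASE expression):

```python
from datetime import datetime, time

def is_reg_hours(dt):
    """Mirror of the SQL CASE above: 1 inside 09:30:00-15:59:59, else 0."""
    if dt.time() < time(9, 30):
        return 0
    if dt.time() > time(15, 59):
        return 0
    return 1

print(is_reg_hours(datetime(2020, 11, 2, 9, 29)))   # 0 (pre-market)
print(is_reg_hours(datetime(2020, 11, 2, 12, 0)))   # 1 (regular session)
print(is_reg_hours(datetime(2020, 11, 2, 16, 0)))   # 0 (after hours)
```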
# Binary search practice
Let's get some practice doing binary search on an array of integers. We'll solve the problem two different ways—both iteratively and recursively.
Here is a reminder of how the algorithm works:
1. Find the center of the list (try setting an upper and lower bound to find the center)
2. Check to see if the element at the center is your target.
3. If it is, return the index.
4. If not, is the target greater or less than that element?
5. If greater, move the lower bound to just above the current center
6. If less, move the upper bound to just below the current center
7. Repeat steps 1-6 until you find the target or until the bounds are the same or cross (the upper bound is less than the lower bound).
## Problem statement:
Given a sorted array of integers, and a target value, find the index of the target value in the array. If the target value is not present in the array, return -1.
## Iterative solution
First, see if you can code an iterative solution (i.e., one that uses loops). If you get stuck, the solution is below.
```
def binary_search(array, target):
'''Write a function that implements the binary search algorithm using iteration
args:
array: a sorted array of items of the same type
target: the element you're searching for
returns:
int: the index of the target, if found, in the source
-1: if the target is not found
'''
start_index = 0
end_index = len(array) - 1
while start_index <= end_index:
mid_index = (start_index + end_index)//2 # integer division in Python 3
mid_element = array[mid_index]
if target == mid_element: # we have found the element
return mid_index
elif target < mid_element: # the target is less than mid element
end_index = mid_index - 1 # we will only search in the left half
else: # the target is greater than mid element
            start_index = mid_index + 1 # we will search only in the right half
return -1
```
Here's some code you can use to test the function:
```
def test_function(test_case):
answer = binary_search(test_case[0], test_case[1])
if answer == test_case[2]:
print("Pass!")
else:
print("Fail!")
array = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
target = 6
index = 6
test_case = [array, target, index]
test_function(test_case)
```
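For comparison, Python's standard library implements the same idea in the `bisect` module; a sketch of using `bisect_left` for exact-match lookup:

```python
from bisect import bisect_left

def binary_search_bisect(array, target):
    """Exact-match lookup built on bisect_left; returns -1 if absent."""
    i = bisect_left(array, target)  # leftmost insertion point for target
    if i < len(array) and array[i] == target:
        return i
    return -1

print(binary_search_bisect([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 6))  # 6
print(binary_search_bisect([0, 1, 3], 2))                           # -1
```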
## Recursive solution
Now, see if you can write a function that gives the same results, but that uses recursion to do so.
```
def binary_search_recursive(array, target, start_index=0, end_index=None):
    '''Implements the binary search algorithm using recursion

    args:
      array: a sorted array of items of the same type
      target: the element you're searching for

    returns:
      int: the index of the target, if found, in the source
      -1: if the target is not found
    '''
    if end_index is None:
        end_index = len(array) - 1
    if start_index > end_index:  # the bounds have crossed: not found
        return -1
    mid_index = (start_index + end_index) // 2
    if array[mid_index] == target:  # we have found the element
        return mid_index
    elif target < array[mid_index]:  # search only the left half
        return binary_search_recursive(array, target, start_index, mid_index - 1)
    else:  # search only the right half
        return binary_search_recursive(array, target, mid_index + 1, end_index)
```
Here's some code you can use to test the function:
```
def test_function(test_case):
answer = binary_search_recursive(test_case[0], test_case[1])
if answer == test_case[2]:
print("Pass!")
else:
print("Fail!")
array = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
target = 4
index = 4
test_case = [array, target, index]
test_function(test_case)
```
# MadMiner particle physics tutorial
# Part 4a: Limit setting
Johann Brehmer, Felix Kling, Irina Espejo, and Kyle Cranmer 2018-2019
In part 4a of this tutorial we will use the networks trained in steps 3a and 3b to calculate the expected limits on our theory parameters.
## 0. Preparations
```
from __future__ import absolute_import, division, print_function, unicode_literals
import six
import logging
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
from madminer.limits import AsymptoticLimits
from madminer.sampling import SampleAugmenter
from madminer import sampling
from madminer.plotting import plot_histograms
# MadMiner output
logging.basicConfig(
format='%(asctime)-5.5s %(name)-20.20s %(levelname)-7.7s %(message)s',
datefmt='%H:%M',
level=logging.INFO
)
# Output of all other modules (e.g. matplotlib)
for key in logging.Logger.manager.loggerDict:
if "madminer" not in key:
# print("Deactivating logging output for", key)
logging.getLogger(key).setLevel(logging.WARNING)
```
## 1. Calculating p-values
In the end, what we care about are not plots of the log likelihood ratio, but limits on parameters. But at least under some asymptotic assumptions, these are directly related. MadMiner makes it easy to calculate p-values in the asymptotic limit with the `AsymptoticLimits` class in the `madminer.limits`:
```
limits = AsymptoticLimits('data/delphes_data_shuffled.h5')
```
This class provides two high-level functions:
- `AsymptoticLimits.observed_limits()` lets us calculate p-values on a parameter grid for some observed events, and
- `AsymptoticLimits.expected_limits()` lets us calculate expected p-values on a parameter grid based on all data in the MadMiner file.
First we have to define the parameter grid on which we evaluate the p-values.
```
grid_ranges = [(0., 5.)]
grid_resolutions = [25]
```
What luminosity (in inverse pb) are we talking about?
```
lumi = 100.
p_values = {}
mle = {}
```
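For orientation, the asymptotic construction rests on the standard result (not MadMiner-specific): with $d$ parameters, the profile log likelihood ratio is approximately $\chi^2_d$-distributed by Wilks' theorem, so each grid point's p-value follows from

$$ q(\theta) = -2 \ln \frac{L(\theta)}{L(\hat{\theta})}, \qquad p(\theta) \approx 1 - F_{\chi^2_d}\big(q(\theta)\big), $$

with $d = 1$ for the single parameter scanned here.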
## 2. Expected limits based on histogram
First, as a baseline, let us calculate the expected limits based on a simple jet pT histogram. Right now, there are not a lot of options for this function; MadMiner even calculates the binning automatically. (We will add more functionality!)
The keyword `include_xsec` determines whether we include information from the total rate or just use the shapes. Since we don't model backgrounds and systematics in this tutorial, the rate information is unrealistically large, so we leave it out here.
```
theta_grid, p_values_expected_histo, best_fit_expected_histo, _, _, (histos, observed, observed_weights) = limits.expected_limits(
mode="histo",
hist_vars=["pt_j1"],
#hist_vars=["delta_eta_jj"],
theta_true=[5.],
grid_ranges=grid_ranges,
grid_resolutions=grid_resolutions,
luminosity=lumi,
include_xsec=False,
return_asimov=True
)
p_values["Histogram"] = p_values_expected_histo
mle["Histogram"] = best_fit_expected_histo
```
With `mode="rate"`, we could calculate limits based on only the rate -- but again, since the rate is extremely powerful when backgrounds and systematics are not taken into account, we don't do that in this tutorial.
Let's visualize the likelihood estimated with these histograms:
```
len(histos)
#indices = [12 + i * 25 for i in [6,9,12,15,18]]
#indices = [6,9,12,15,18]
indices = [0,5,10,15,20]
fig = plot_histograms(
histos=[histos[i] for i in indices],
observed=[observed[i] for i in indices],
observed_weights=observed_weights,
histo_labels=[r"$\theta_0 = {:.2f}$".format(theta_grid[i,0]) for i in indices],
xlabel="Jet $p_T$",
xrange=(0.,400.),
#xrange=(0.,5.),
)
plt.show()
```
## 3. Expected limits based on ratio estimators
Next, `mode="ml"` allows us to calculate limits based on any `ParameterizedRatioEstimator` instance like the ALICES estimator trained above:
```
theta_grid, p_values_expected_alices, best_fit_expected_alices, _, _, _ = limits.expected_limits(
mode="ml",
model_file='models/alices',
theta_true=[1.],
grid_ranges=grid_ranges,
grid_resolutions=grid_resolutions,
luminosity=lumi,
include_xsec=False,
)
p_values["ALICES"] = p_values_expected_alices
mle["ALICES"] = best_fit_expected_alices
```
## 4. Expected limits based on score estimators
To get p-values from a SALLY estimator, we have to use histograms of the estimated score:
```
theta_grid, p_values_expected_sally, best_fit_expected_sally, _, _, (histos, observed, observed_weights) = limits.expected_limits(
mode="sally",
model_file='models/sally',
theta_true=[1.],
grid_ranges=grid_ranges,
grid_resolutions=grid_resolutions,
luminosity=lumi,
include_xsec=False,
return_asimov=True,
)
p_values["SALLY"] = p_values_expected_sally
mle["SALLY"] = best_fit_expected_sally
```
Let's have a look at the underlying 2D histograms:
```
#indices = [12 + i * 25 for i in [0,6,12,18,24]]
indices = [0,5,10,15,20]
fig = plot_histograms(
histos=[histos[i] for i in indices],
observed=observed[0,:100,:],
observed_weights=observed_weights[:100],
histo_labels=[r"$\theta_0 = {:.2f}$".format(theta_grid[i,0]) for i in indices],
xlabel=r'$\hat{t}_0(x)$',
ylabel=r'$\hat{t}_1(x)$',
xrange=(0.,.5),
yrange=(-3.,3.),
log=True,
zrange=(1.e-3,1.),
markersize=10.
)
```
## 5. Toy signal
In addition to these expected limits (based on the SM), let us inject a mock signal. We first generate the data:
```
sampler = SampleAugmenter('data/delphes_data_shuffled.h5')
x_observed, _, _ = sampler.sample_test(
theta=sampling.morphing_point([4.]),
n_samples=1000,
folder=None,
filename=None,
)
obs_n = 79
_, p_values_observed, best_fit_observed, _, _, _ = limits.observed_limits(
n_observed=obs_n,
x_observed=x_observed,
mode="ml",
model_file='models/alices',
grid_ranges=grid_ranges,
grid_resolutions=grid_resolutions,
luminosity=lumi,
include_xsec=False,
)
p_values["ALICES signal"] = p_values_observed
mle["ALICES signal"] = best_fit_observed
theta_grid, p_values_expected_alices, best_fit_expected_alices, _, _, _ = limits.expected_limits(
mode="ml",
model_file='models/alices',
theta_true=[4.],
grid_ranges=grid_ranges,
grid_resolutions=grid_resolutions,
luminosity=lumi,
include_xsec=False,
)
p_values["ALICES signal"] = p_values_expected_alices
mle["ALICES signal"] = best_fit_expected_alices
theta_grid, p_values_expected_sally, best_fit_expected_sally, _, _, _ = limits.observed_limits(
mode="sally",
model_file='models/sally',
n_observed=obs_n,
x_observed=x_observed,
grid_ranges=grid_ranges,
grid_resolutions=grid_resolutions,
luminosity=lumi,
include_xsec=False,
)
p_values["SALLY signal"] = p_values_expected_sally
mle["SALLY signal"] = best_fit_expected_sally
theta_grid, p_values_expected_histo, best_fit_expected_histo, _, _, _ = limits.observed_limits(
mode="histo",
hist_vars=["pt_j1"],
n_observed=obs_n,
x_observed=x_observed,
grid_ranges=grid_ranges,
grid_resolutions=grid_resolutions,
luminosity=lumi,
include_xsec=False,
)
p_values["Histogram signal"] = p_values_expected_histo
mle["Histogram signal"] = best_fit_expected_histo
theta_grid, p_values_expected_histo, best_fit_expected_histo, _, _, (histos, observed, observed_weights) = limits.expected_limits(
mode="histo",
hist_vars=["pt_j1"],
theta_true=[4.],
grid_ranges=grid_ranges,
grid_resolutions=grid_resolutions,
luminosity=lumi,
include_xsec=False,
return_asimov=True
)
p_values["Histogram signal"] = p_values_expected_histo
mle["Histogram signal"] = best_fit_expected_histo
```
## Plot
Let's plot the results:
```
show = "ALICES"
cmin, cmax = 1.e-3, 1.
bin_size = (grid_ranges[0][1] - grid_ranges[0][0])/(grid_resolutions[0] - 1)
edges = np.linspace(grid_ranges[0][0] - bin_size/2, grid_ranges[0][1] + bin_size/2, grid_resolutions[0] + 1)
centers = np.linspace(grid_ranges[0][0], grid_ranges[0][1], grid_resolutions[0])
for i, (label, p_value) in enumerate(six.iteritems(p_values)):
if (label.find("signal") != -1):
continue
#plt.scatter(centers**2, p_value, label=label)
plt.scatter(centers, p_value, label=label)
print (theta_grid[mle[label]], label)
# plt.scatter(
# theta_grid[mle[label]], theta_grid[mle[label]][1],
# s=80., color='C{}'.format(i), marker='*',
# label=label
# )
plt.legend()
plt.xlabel(r'$\theta_0$')
#plt.xlabel(r'$\mu$')
plt.ylabel('p-value')
#plt.xlim(0,5)
plt.tight_layout()
plt.show()
bin_size = (grid_ranges[0][1] - grid_ranges[0][0])/(grid_resolutions[0] - 1)
edges = np.linspace(grid_ranges[0][0] - bin_size/2, grid_ranges[0][1] + bin_size/2, grid_resolutions[0] + 1)
centers = np.linspace(grid_ranges[0][0], grid_ranges[0][1], grid_resolutions[0])
for i, (label, p_value) in enumerate(six.iteritems(p_values)):
if (label.find("signal") == -1):
continue
plt.scatter(centers, p_value, label=label)
print (theta_grid[mle[label]], label)
# plt.scatter(
# theta_grid[mle[label]][0], theta_grid[mle[label]][1],
# s=80., color='C{}'.format(i), marker='*',
# label=label
# )
plt.legend()
plt.xlabel(r'$\theta_0$')
plt.ylabel('p-value')
plt.tight_layout()
plt.show()
```
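The edge/center bookkeeping in the plotting cells can be checked in isolation (same `grid_ranges` and `grid_resolutions` as above): the bin edges straddle the grid points, so there is one more edge than center and every center sits mid-bin.

```python
import numpy as np

grid_ranges = [(0., 5.)]
grid_resolutions = [25]

bin_size = (grid_ranges[0][1] - grid_ranges[0][0]) / (grid_resolutions[0] - 1)
edges = np.linspace(grid_ranges[0][0] - bin_size / 2,
                    grid_ranges[0][1] + bin_size / 2,
                    grid_resolutions[0] + 1)
centers = np.linspace(grid_ranges[0][0], grid_ranges[0][1], grid_resolutions[0])

assert len(edges) == len(centers) + 1                       # one more edge than center
assert np.allclose((edges[:-1] + edges[1:]) / 2, centers)   # centers sit mid-bin
```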
```
# python packages
import numpy as np
import matplotlib.pyplot as plt
import sys
import os
import inspect
import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D, Bidirectional, Activation
from keras.layers import CuDNNLSTM
from keras.utils.np_utils import to_categorical
# from keras.callbacks import EarlyStopping
from keras.layers import Dropout
from sklearn.model_selection import train_test_split
import importlib
import utilis
# custom
from keras import backend as K
from keras.layers import Layer
from keras.constraints import MinMaxNorm
from keras import initializers, regularizers, constraints, Input
from keras.models import Model
sys.path.append("..")
# custom python scripts
from packages import generator
importlib.reload(generator)
# # check version
# print(inspect.getsource(generator.Keras_DataGenerator))
```
# Bidirectional LSTM with Hypotheses
```
# Check that you are running GPU's
utilis.GPU_checker()
utilis.aws_setup()
```
# Config, generators and train
```
INPUT_TENSOR_NAME = "inputs_input"
SIGNATURE_NAME = "serving_default"
W_HYP = True
LEARNING_RATE = 0.001
BATCH_SIZE = 64
# constants
VOCAB_SIZE = 1254
INPUT_LENGTH = 3000 if W_HYP else 1000
EMBEDDING_DIM = 512
print(INPUT_LENGTH)
importlib.reload(generator)
# generators
training_generator = generator.Keras_DataGenerator(data_dir='', subset_frac=0.04, dataset='train_new', w_hyp=W_HYP)
print()
validation_generator = generator.Keras_DataGenerator(data_dir='', subset_frac=0.04, dataset='valid_new', w_hyp=W_HYP)
# custom dot product function
def dot_product(x, kernel):
if K.backend() == 'tensorflow':
return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
else:
return K.dot(x, kernel)
# find a way to return attention weight vector a
class AttentionWithContext(Layer):
def __init__(self,
W_regularizer=None, u_regularizer=None, b_regularizer=None,
W_constraint=None, u_constraint=None, b_constraint=None,
bias=True, **kwargs):
self.supports_masking = True
# initialization of all learnable params
self.init = initializers.get('glorot_uniform')
# regularizers for params, init as None
self.W_regularizer = regularizers.get(W_regularizer)
self.u_regularizer = regularizers.get(u_regularizer)
self.b_regularizer = regularizers.get(b_regularizer)
# constraints for params, init as None
self.W_constraint = constraints.get(W_constraint)
self.u_constraint = constraints.get(u_constraint)
self.b_constraint = constraints.get(b_constraint)
self.bias = bias
super(AttentionWithContext, self).__init__(**kwargs)
def build(self, input_shape):
# assert len(input_shape) == 3
# weight matrix
self.W = self.add_weight((input_shape[-1], input_shape[-1],),
initializer=self.init,
name='{}_W'.format(self.name),
regularizer=self.W_regularizer,
constraint=self.W_constraint)
# bias term
if self.bias:
self.b = self.add_weight((input_shape[-1],),
initializer='lecun_uniform',
name='{}_b'.format(self.name),
regularizer=self.b_regularizer,
constraint=self.b_constraint)
# context vector
self.u = self.add_weight((input_shape[-1],),
initializer=self.init,
name='{}_u'.format(self.name),
regularizer=self.u_regularizer,
constraint=self.u_constraint)
super(AttentionWithContext, self).build(input_shape)
def compute_mask(self, input, input_mask=None):
# do not pass the mask to the next layers
return None
def call(self, x, mask=None):
uit = dot_product(x, self.W)
if self.bias:
uit += self.b
uit = K.tanh(uit)
ait = dot_product(uit, self.u)
a = K.exp(ait)
# apply mask after the exp. will be re-normalized next
if mask is not None:
# Cast the mask to floatX to avoid float64 upcasting in theano
a *= K.cast(mask, K.floatx())
# in some cases especially in the early stages of training the sum may be almost zero
# and this results in NaN's. A workaround is to add a very small positive number ε to the sum.
# a /= K.cast(K.sum(a, axis=1, keepdims=True), K.floatx())
a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon() * 100, K.floatx())
a = K.expand_dims(a)
weighted_input = x * a
return K.sum(weighted_input, axis=1)
def compute_output_shape(self, input_shape):
return input_shape[0], input_shape[-1]
# model
def build_model(vocab_size, embedding_dim, input_length):
sequence_input = Input(shape=(input_length,), dtype='int32')
embedded_sequences = Embedding(vocab_size, embedding_dim, input_length=input_length)(sequence_input)
output_1 = SpatialDropout1D(0.2)(embedded_sequences)
output_2 = Bidirectional(CuDNNLSTM(512, return_sequences=True,
kernel_constraint=MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0),
recurrent_constraint=MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0),
bias_constraint = MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0)))(output_1)
context_vec = AttentionWithContext(
W_constraint = MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0),
u_constraint = MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0),
b_constraint = MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0))(output_2)
predictions = Dense(41, kernel_constraint = MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0), activation='softmax')(context_vec)
model = Model(inputs=sequence_input, outputs=predictions)
return model
```
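In equations, the `AttentionWithContext` layer above computes, for hidden states $h_t$ (this mirrors `call` directly, including the stabilizing $100\,\epsilon$ term in the normalization):

$$ u_t = \tanh(W h_t + b), \qquad a_t = \frac{\exp(u_t^\top u)}{\sum_{t'} \exp(u_{t'}^\top u) + 100\,\epsilon}, \qquad s = \sum_t a_t\, h_t $$

where $W$, $b$, and the context vector $u$ are learned parameters, and $s$ is the weighted summary passed to the dense output layer.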
### testing generator
```
model = build_model(VOCAB_SIZE, EMBEDDING_DIM, INPUT_LENGTH)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
## IF YOU ARE LOADING A MODEL, RUN THE FOLLOWING LINES
# from keras.models import model_from_json
# json_file = open('model.json', 'r')
# loaded_model_json = json_file.read()
# json_file.close()
# loaded_model = model_from_json(loaded_model_json)
# # load weights into new model
# loaded_model.load_weights("model.h5")
# print("Loaded model from disk")
# # REMEMBER TO COMPILE
# loaded_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#overwriting model
# model = loaded_model
model.layers[4].get_weights()
%%time
#try and make it run until 9 am GMT+1
n_epochs = 8
history = model.fit_generator(generator=training_generator,
validation_data=validation_generator,
verbose=1,
use_multiprocessing=True,
epochs=n_epochs)
```
## Save model
```
# FOR SAVING MODEL
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")
#WARNING_DECIDE_HOW_TO_NAME_LOG
#descriptionofmodel_personwhostartsrun
#e.g. LSTM_128encoder_etc_tanc
LOSS_FILE_NAME = "forjeff3"
#WARNING NUMBER 2 - CURRENTLY, EVERY TIME YOU RERUN THE CELLS BELOW, THE FILES WITH THOSE NAMES GET OVERWRITTEN
# save history - WARNING FILE NAME
utilis.history_saver_bad(history, LOSS_FILE_NAME)
Epoch 1/8
235/235 [==============================] - 633s 3s/step - loss: 2.5071 - acc: 0.2366 - val_loss: 2.2807 - val_acc: 0.2778
Epoch 2/8
235/235 [==============================] - 596s 3s/step - loss: 2.2096 - acc: 0.2912 - val_loss: 2.2231 - val_acc: 0.2837
Epoch 3/8
235/235 [==============================] - 599s 3s/step - loss: 2.1271 - acc: 0.3055 - val_loss: 2.1742 - val_acc: 0.2817
Epoch 4/8
235/235 [==============================] - 601s 3s/step - loss: 2.0601 - acc: 0.3216 - val_loss: 2.1557 - val_acc: 0.2869
Epoch 5/8
235/235 [==============================] - 582s 2s/step - loss: 1.9939 - acc: 0.3343 - val_loss: 2.0946 - val_acc: 0.3145
Epoch 6/8
235/235 [==============================] - 590s 3s/step - loss: 1.9377 - acc: 0.3496 - val_loss: 2.0405 - val_acc: 0.3240
Epoch 7/8
235/235 [==============================] - 602s 3s/step - loss: 1.8974 - acc: 0.3590 - val_loss: 2.0558 - val_acc: 0.3206
Epoch 8/8
235/235 [==============================] - 607s 3s/step - loss: 2.3994 - acc: 0.2478 - val_loss: 2.4301 - val_acc: 0.2576
CPU times: user 20min 56s, sys: 6min 13s, total: 27min 9s
Wall time: 1h 20min 10s
Epoch 1/8
235/235 [==============================] - 623s 3s/step - loss: 2.3551 - acc: 0.2728 - val_loss: 2.3344 - val_acc: 0.2639
Epoch 2/8
235/235 [==============================] - 598s 3s/step - loss: 2.2601 - acc: 0.2972 - val_loss: 2.2669 - val_acc: 0.2881
Epoch 3/8
235/235 [==============================] - 598s 3s/step - loss: 2.0409 - acc: 0.3342 - val_loss: 2.0943 - val_acc: 0.3230
Epoch 4/8
235/235 [==============================] - 598s 3s/step - loss: 1.9858 - acc: 0.3426 - val_loss: 2.1001 - val_acc: 0.3115
Epoch 5/8
235/235 [==============================] - 596s 3s/step - loss: 1.9489 - acc: 0.3591 - val_loss: 2.0977 - val_acc: 0.3164
Epoch 6/8
235/235 [==============================] - 592s 3s/step - loss: 1.9035 - acc: 0.3658 - val_loss: 2.0781 - val_acc: 0.3193
Epoch 7/8
235/235 [==============================] - 594s 3s/step - loss: 1.8696 - acc: 0.3728 - val_loss: 2.0858 - val_acc: 0.3237
Epoch 8/8
235/235 [==============================] - 598s 3s/step - loss: 1.8482 - acc: 0.3827 - val_loss: 2.1564 - val_acc: 0.2913
CPU times: user 20min 57s, sys: 6min 18s, total: 27min 16s
Wall time: 1h 19min 57s
test_generator = generator.Keras_DataGenerator(data_dir='', subset_frac=0.01, dataset='test_new', w_hyp=W_HYP)
model.evaluate_generator(test_generator, verbose =1)
scores = model.predict_generator(test_generator)
scores.shape
medians = np.mean(scores, axis = 0)
# note: despite the name, this is the mean over samples, not the median
import matplotlib.pyplot as plt
plt.hist(medians, bins = 41, )
medians
plt.plot(medians)
lis = [x for x in range(1,42)]
plt.bar(lis, medians)
plt.xlabel("Label")
plt.ylabel("Frequency")
plt.title("Distribution of Labels - Attention G&H")
```
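The printed history from the first run above can be replotted offline; the loss values here are transcribed from that log (note the divergence at epoch 8):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, no display needed
import matplotlib.pyplot as plt

# transcribed from the first run's log above
train_loss = [2.5071, 2.2096, 2.1271, 2.0601, 1.9939, 1.9377, 1.8974, 2.3994]
val_loss   = [2.2807, 2.2231, 2.1742, 2.1557, 2.0946, 2.0405, 2.0558, 2.4301]

epochs = range(1, len(train_loss) + 1)
plt.plot(epochs, train_loss, label='train')
plt.plot(epochs, val_loss, label='validation')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.savefig('loss_forjeff3.png')
```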
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import cv2
import os
import torch.optim as optim
from sklearn.metrics import confusion_matrix
import numpy as np
import matplotlib.pyplot as plt
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.pool = nn.MaxPool2d(2, 2)
self.conv1 = nn.Conv2d(1, 6, 5)
self.batch1 = nn.BatchNorm2d(6)
self.conv2 = nn.Conv2d(6, 16, 5)
self.batch2 = nn.BatchNorm2d(16)
self.drop1 = nn.Dropout(0.5)
self.fc1 = nn.Linear(16 * 13 * 13, 120)
self.drop2 = nn.Dropout(0.4)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 14)
def forward(self, x):
# conv
# x = self.pool(F.relu(self.conv1(x)))
# x = self.pool(F.relu(self.conv2(x)))
x = self.pool(self.batch1(F.relu(self.conv1(x))))
x = self.pool(self.batch2(F.relu(self.conv2(x))))
x = x.view(-1, 16 * 13 * 13)
# fc
# x = self.drop1(x)
x = F.relu(self.fc1(x))
# x = self.drop2(x)
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('device', device)
net = Net()
net.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
print(net)
import numbers
import numpy as np
from PIL import ImageFilter
class GaussianSmoothing(object):
def __init__(self, radius):
if isinstance(radius, numbers.Number):
self.min_radius = radius
self.max_radius = radius
elif isinstance(radius, list):
if len(radius) != 2:
raise Exception(
"`radius` should be a number or a list of two numbers")
if radius[1] < radius[0]:
raise Exception(
"radius[0] should be <= radius[1]")
self.min_radius = radius[0]
self.max_radius = radius[1]
else:
raise Exception(
"`radius` should be a number or a list of two numbers")
def __call__(self, image):
radius = np.random.uniform(self.min_radius, self.max_radius)
return image.filter(ImageFilter.GaussianBlur(radius))
from copy import copy, deepcopy
train_transform = torchvision.transforms.Compose([
torchvision.transforms.Grayscale(),
torchvision.transforms.Resize((64, 64)),
GaussianSmoothing([0, 0.5]),
# torchvision.transforms.RandomApply([
# torchvision.transforms.RandomResizedCrop((64, 64), scale=(0.8, 1.0), ratio=(0.9, 1.1)),
# ], p=0.75),
torchvision.transforms.ToTensor(),
# torchvision.transforms.RandomErasing(p=0.30, scale=(0.05, 0.2)),
# torchvision.transforms.Normalize((0.5, ), (0.5, ))
])
test_transform = torchvision.transforms.Compose([
torchvision.transforms.Grayscale(),
torchvision.transforms.Resize((64, 64)),
# GaussianSmoothing([0, 5]),
torchvision.transforms.ToTensor(),
# torchvision.transforms.Normalize((0.5, ), (0.5, ))
])
set1 = torchvision.datasets.ImageFolder('..\\data\\out\\1n_final', transform=None)
set2 = torchvision.datasets.ImageFolder('..\\data\\out\\23_final', transform=None)
set3 = torchvision.datasets.ImageFolder('..\\data\\out\\gm_final', transform=None)
set4 = torchvision.datasets.ImageFolder('..\\data\\out\\yasser_final', transform=None)
fullset = torch.utils.data.ConcatDataset([set1, set2, set3, set4])
train_size = int(0.8 * len(fullset))
# train_size = len(fullset)
test_size = len(fullset) - train_size
trainset, testset = torch.utils.data.random_split(fullset, [train_size, test_size])
trainset.dataset = deepcopy(fullset)
for dt in trainset.dataset.datasets:
dt.transform = train_transform
for dt in testset.dataset.datasets:
dt.transform = test_transform
trainloader = torch.utils.data.DataLoader(trainset,
batch_size = 100,
shuffle = True,
num_workers = 0,
pin_memory = True)
testloader = torch.utils.data.DataLoader(testset,
batch_size = 100,
shuffle = True,
num_workers = 0,
pin_memory = True)
print(len(trainset), len(testset))
print(id(trainset.dataset.datasets[0]), trainset.dataset.datasets[0].transform)
print(id(testset.dataset.datasets[0]), testset.dataset.datasets[0].transform)
exp = 'exp_cbnn1_d0.5_0.4_bn_rc'
%%time
x = 0
n = 100
for epoch in range(30): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data[0].to(device), data[1].to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
x += 1
# print statistics
running_loss += loss.item()
if x % n == n-1:
PATH = '..\\data\\out\\model_' + exp + str(x) + '.pth'
torch.save(net.state_dict(), PATH)
print('%d [%d, %5d] loss: %.3f' %
(x, epoch + 1, i + 1, running_loss / n))
running_loss = 0.0
PATH = '..\\data\\out\\model_' + exp + str(x) + '.pth'
print(PATH)
torch.save(net.state_dict(), PATH)
print('Finished Training')
test_transform = torchvision.transforms.Compose([
torchvision.transforms.Grayscale(),
torchvision.transforms.Resize((64, 64)),
torchvision.transforms.ToTensor(),
])
testset = torchvision.datasets.ImageFolder('..\\data\\out\\aagard_final', transform=test_transform)
testloader = torch.utils.data.DataLoader(testset,
batch_size = 100,
shuffle = True,
num_workers = 0,
pin_memory = True)
print(len(testset))
%%time
# evaluate on the held-out test set and accumulate a confusion matrix
correct = 0
total = 0
# 23
# num_images = 28032
# 1n
# num_images = 26816
cm = np.zeros((14, 14), dtype=np.int64)
net.eval()
with torch.no_grad():
for data in testloader:
images, labels = data[0].to(device), data[1].to(device)
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
        cm += confusion_matrix(labels.cpu().numpy(), predicted.cpu().numpy(), labels=np.arange(14))
total += labels.size(0)
if total % 3000 == 0:
print(total)
# if total > 3000:
# break
# correct = (predicted == labels).sum().item()
# if correct != predicted.shape[0]:
# break
# print('Accuracy of the network on the 10000 test images: %d %%' % (
# 100 * correct / total))
net.train()
total = np.sum(cm)
print(' 0 1 2 3 4 5 6 7 8 9 10 11 12 13')
print(cm.astype(np.int32))
print('accuracy', np.trace(cm) / total)
print('total', np.trace(cm), total)
cm.diagonal() / cm.sum(1)
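# The expression above gives per-class recall (row-normalized diagonal).
# A minimal illustration with a made-up 3-class confusion matrix:
import numpy as np
toy_cm = np.array([[5, 1, 0],
                   [2, 6, 2],
                   [0, 0, 4]])
toy_recall = toy_cm.diagonal() / toy_cm.sum(1)  # correct / total per true class
print(toy_recall)  # [0.8333..., 0.6, 1.0]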
%matplotlib inline
def sw(im):
plt.imshow(cv2.cvtColor(im, cv2.COLOR_BGR2RGB))
sw(net.conv1.weight[5,...].detach().numpy().squeeze(0))
# net.conv2.weight.shape
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.pool = nn.MaxPool2d(2, 2)
self.conv1 = nn.Conv2d(1, 6, 5)
self.batch1 = nn.BatchNorm2d(6)
self.conv2 = nn.Conv2d(6, 16, 5)
self.batch2 = nn.BatchNorm2d(16)
self.drop1 = nn.Dropout(0.5)
self.fc1 = nn.Linear(16 * 13 * 13, 120)
self.drop2 = nn.Dropout(0.4)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 14)
def forward(self, x):
# conv
# x = self.pool(F.relu(self.conv1(x)))
# x = self.pool(F.relu(self.conv2(x)))
x = self.pool(self.batch1(F.relu(self.conv1(x))))
x = self.pool(self.batch2(F.relu(self.conv2(x))))
x = x.view(-1, 16 * 13 * 13)
# fc
x = self.drop1(x)
x = F.relu(self.fc1(x))
x = self.drop2(x)
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('device', device)
net = Net()
net.to(device)
# PATH = '..//data//out//model_exp_model9_4390.pth'
# PATH = '..//data//out//model_exp_model12_6770.pth'
# PATH = '..//data//out//model_exp_cbnn1_d0.4_0.3_7800.pth'
PATH = '..//data//out//model_exp_cbnn1_d0.5_0.4_bn_rc7799.pth'
net.load_state_dict(torch.load(PATH))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
print(net)
net.eval()
from PIL import Image  # needed for Image.fromarray below
image_path = '..\\data\\out\\a22.png'
board_im = cv2.imread(image_path, 0)
transform = torchvision.transforms.Compose([
# torchvision.transforms.Grayscale(),
torchvision.transforms.Resize((8*64, 8*64)),
# GaussianSmoothing([0, 0.5]),
torchvision.transforms.ToTensor(),
# torchvision.transforms.Normalize((0.5, ), (0.5, ))
])
im = Image.fromarray(board_im)
im = transform(im)
dim = 64
tensors = [im[:, dim*k: dim*(k+1), dim*j: dim*(j+1)] for k in range(8) for j in range(8)]
# sw(tensors[0].numpy()[np.newaxis, ...] * 255)
# print(tensors[0].numpy()[np.newaxis, ...])
images = torch.stack(tensors)
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
print(predicted)
print(get_fen_str(predicted))
net(images[0,...].unsqueeze(0)).data
sw((images[0] * 255).view(64, 64, 1).numpy().astype(np.uint8))
import io
piecenames = ['bb', 'bk', 'bn', 'bp', 'bq', 'br', 'em', 'wb', 'wk', 'wn', 'wp', 'wq', 'wr', 'em']
def get_fen_str(predicted):
with io.StringIO() as s:
for row in range(8):
empty = 0
for cell in range(8):
c = piecenames[predicted[row*8 + cell]]
if c[0] in ('w', 'b'):
if empty > 0:
s.write(str(empty))
empty = 0
s.write(c[1].upper() if c[0] == 'w' else c[1].lower())
else:
empty += 1
if empty > 0:
s.write(str(empty))
s.write('/')
# Move one position back to overwrite last '/'
s.seek(s.tell() - 1)
# If you do not have the additional information choose what to put
# s.write(' w KQkq - 0 1')
s.write('%20w%20KQkq%20-%200%201')
return s.getvalue()
def pil_loader(path):
# open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('L')
# print(np.array(pil_loader(image_path)))
print(transform(pil_loader(image_path)))
from PIL import Image
%matplotlib inline
def sbw(im):
plt.imshow(im, cmap='gray', vmin=0, vmax=255)
def sw(im):
plt.imshow(cv2.cvtColor(im, cv2.COLOR_BGR2RGB))
plt.show()
piecenames = ['BlackBishop', 'BlackKing', 'BlackKnight', 'BlackPawn', 'BlackQueen', 'BlackRook', 'BlackSpace', 'WhiteBishop', 'WhiteKing', 'WhiteKnight', 'WhitePawn', 'WhiteQueen', 'WhiteRook', 'WhiteSpace']
from PIL import Image
folder = '..\\data\\out\\yasser_final'
x = 0
cm = np.zeros((14, 14), dtype=np.int64)
def pil_loader(path):
# open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGB')
for i, piece in enumerate(sorted(os.listdir(folder))):
folder2 = os.path.join(folder, piece)
if os.path.isfile(folder2):
continue
for j, filename in enumerate(sorted(os.listdir(folder2))):
fullname = os.path.join(folder2, filename)
im = pil_loader(fullname)
im = torchvision.transforms.functional.to_grayscale(im)
im = torchvision.transforms.functional.resize(im, (64, 64), Image.BILINEAR)
im = torchvision.transforms.functional.to_tensor(im)
im = im.unsqueeze(0)
# # im = im.unsqueeze(0).type(torch.float32)
# # im = cv2.imread(fullname, 0)
# # im = cv2.resize(im, (64, 64)).astype(np.float32)
# # im = torch.from_numpy(im).unsqueeze(0).unsqueeze(0)
outputs = net(im)
_, predicted = torch.max(outputs.data, 1)
# piece = piecenames[predicted]
# newpath = os.path.join(folder2, piece + '_con1_' + str(x) + '.png')
# print(newpath)
# os.rename(fullname, newpath)
cm[i][predicted] += 1
x += 1
if x % 1000 == 0:
print(x)
# print(predicted)
# break
# break
import torchvision.models as models
resnet50 = models.resnet50(pretrained=True, progress=False)
activation = {}
def get_activation(name):
def hook(model, input, output):
activation[name] = output.detach()
return hook
# vgg16.classifier[-2].register_forward_hook(get_activation('fc2'))  # vgg16 is not defined in this notebook; the resnet50 hook below is used instead
activation = {}
def get_activation(name):
def hook(model, input, output):
activation[name] = output.detach()
return hook
resnet50.avgpool.register_forward_hook(get_activation('fc2'))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
%%time
# res = np.empty((26816, 2048), dtype=np.float32)
# res = np.empty((28, 2048), dtype=np.float32)
x = 0
with torch.no_grad():
for data in trainloader:
images, labels = data[0].to(device), data[1].to(device)
n = images.shape[0]
output = resnet50(images)
# res[x: x + n,:] = activation['fc2'].view(-1, 2048)
act = activation['fc2'].view(-1, 2048)
break
x += n
if x % 1000 == 0:
print('x', x)
gt = res
import sklearn
# act[0].shape, gt.shape
# print(sklearn.metrics.pairwise.euclidean_distances(act, gt).argmin(axis=1) / 2)
# print(labels)
print(gt)
print(act[0])
%%time
cm = np.zeros((14, 14))
x = 0
with torch.no_grad():
for data in trainloader:
images, labels = data[0].to(device), data[1].to(device)
n = images.shape[0]
cm += confusion_matrix(labels, bestLabels[x: x+n], labels=np.arange(14))
x += n
if x % 1000 == 0:
print('x', x)
labels, bestLabels[-15:].flatten()
activation['fc2'].view(-1, 2048).shape
import pickle
pickle.dump(res, open('..\\data\\out\\res2.pkl', 'wb'))
# res = np.load('..\\data\\out\\res.pkl')
import pickle
res = pickle.load(open('..\\data\\out\\res2.pkl', 'rb'))
%%time
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, int(1e6), 1.0)
ret, bestLabels, centers = cv2.kmeans(res, 14, None, criteria, 10, cv2.KMEANS_PP_CENTERS)
```
# Bayesian Modelling
This notebook is for those interested in learning Bayesian statistics in Python. Much of this material is drawn from Mark Dregan's GitHub: https://github.com/markdregan/Bayesian-Modelling-in-Python. See here for more details.
Statistics often deploys frequentist techniques (such as p-values), which can feel contrived. As an alternative, we can deploy **Bayesian Statistics** - a branch of statistics quite different from the frequentist approach.
```
import numpy as np
import pandas as pd
import pymc3 as pm
import scipy.stats as stats
import matplotlib.pyplot as plt
%matplotlib inline
```
## How do Bayesians think about data?
Imagine the following scenario:
> A curious boy watches the number of cars that pass by his house every day. He diligently notes down the total count of cars that pass per day. Over the past week, his notebook contains the following counts: 12, 33, 20, 29, 20, 30, 18.
From a Bayesian's perspective, the data is generated by a random process. However, now that the data is observed, it is fixed and does not change. This random process has some model parameters that are fixed. However, the Bayesian uses probability distributions to represent their *uncertainty* in these parameters.
Because the boy is measuring counts (non-negative integers), it is common practice to use a Poisson distribution to model the data (i.e. the random process). A Poisson distribution takes a single parameter $\mu$ which describes both the mean and the variance of the data. You can see 3 Poisson distributions below with different values of $\mu$:
$$
p(x | \mu) = \frac{e^{-\mu}\mu^x}{x!}, \quad x = 0,1,2,\dots \\
\mu = E(x) = \text{Var}(x)
$$
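As the formula implies, a single parameter drives both moments: the mean and variance of a Poisson variable are equal. A quick sanity check with `scipy.stats` (any value of $\mu$ works):

```python
import scipy.stats as stats

# For a Poisson distribution, the mean and variance both equal mu
mu = 20
mean, var = stats.poisson.stats(mu, moments='mv')
print(mean, var)  # both 20.0
```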
```
colors = ['#348ABD', '#A60628', '#7A68A6', '#467821', '#D55E00',
'#CC79A7', '#56B4E9', '#009E73', '#F0E442', '#0072B2']
fig = plt.figure(figsize=(11,3))
ax = fig.add_subplot(111)
x_lim = 60
mu = [5, 20, 40]
for i in np.arange(x_lim):
    # pmf signature is pmf(k, mu): count first, rate parameter second
    plt.bar(i, stats.poisson.pmf(i, mu[0]), color=colors[3])
    plt.bar(i, stats.poisson.pmf(i, mu[1]), color=colors[4])
    plt.bar(i, stats.poisson.pmf(i, mu[2]), color=colors[5])
_ = ax.set_xlim(0, x_lim)
_ = ax.set_ylim(0, 0.2)
_ = ax.set_ylabel('Probability mass')
_ = ax.set_title('Poisson distribution')
_ = plt.legend(['$\mu$ = %s' % mu[0], '$\mu$ = %s' % mu[1], '$\mu$ = %s' % mu[2]])
```
Here we will work with some *Google Hangout Chat* data, where we are interested in modelling the time it takes to respond to messages. Given that `response_time` is count data, we can model it as a Poisson distribution and estimate its parameter $\mu$.
```
messages = pd.read_csv("hangout_chat_data.csv")
fig = plt.figure(figsize=(11,3))
_ = plt.title('Frequency of messages by response time')
_ = plt.xlabel('Response time (seconds)')
_ = plt.ylabel('Number of messages')
_ = plt.hist(messages['time_delay_seconds'].values,
range=[0, 60], bins=60, histtype='stepfilled')
```
## The Frequentist Approach
Before we jump into Bayesian techniques, let's analyse a frequentist method of estimating $\mu$. We will use an optimization technique that *maximises* the likelihood function.
Our function `poisson_logprob()` returns the overall likelihood of the observed data given a Poisson model and parameter value. We use the method `scipy.optimize.minimize_scalar` to find the value of $\mu$ that is most credible given the data observed.
```
from scipy import optimize as opt
y_obs = messages["time_delay_seconds"].values
def poisson_logprob(mu, sign=-1):
return np.sum(sign*stats.poisson.logpmf(y_obs, mu=mu))
freq_res = opt.minimize_scalar(poisson_logprob)
%time print("The estimated value of mu is: %s" % freq_res['x'])
```
The drawback with this method is that the optimization provides no measure of *uncertainty* - it simply returns a point estimate (albeit efficiently).
Below we illustrate the optimization the function is performing: at each value of $\mu$, the plot shows the log probability of $\mu$ given the data and the model. The optimizer hill-climbs starting at a random point on the curve and incrementally climbs until it cannot reach a higher point. This is known as finding the *local maximum*.
```
x = np.linspace(1, 60)
y_min = np.min([poisson_logprob(i, sign=1) for i in x])
y_max = np.max([poisson_logprob(i, sign=1) for i in x])
fig = plt.figure(figsize=(6,4))
_ = plt.plot(x, [poisson_logprob(i, sign=1) for i in x])
_ = plt.fill_between(x, [poisson_logprob(i, sign=1) for i in x],
y_min, color=colors[0], alpha=0.3)
_ = plt.title('Optimization of $\mu$')
_ = plt.xlabel('$\mu$')
_ = plt.ylabel('Log probability of $\mu$ given data')
_ = plt.vlines(freq_res['x'], y_max, y_min, colors='red', linestyles='dashed')
_ = plt.scatter(freq_res['x'], y_max, s=110, c='red', zorder=3)
_ = plt.ylim(bottom=y_min, top=0)
_ = plt.xlim(left=1, right=60)
_ = plt.grid()
```
The above optimization estimated the parameter $\mu$ to be 18. We know that for any Poisson distribution, the parameter $\mu$ represents both its mean and its variance.
```
fig = plt.figure(figsize=(11,3))
ax = fig.add_subplot(111)
x_lim = 60
mu = int(freq_res['x'])  # np.int is deprecated; use the builtin
for i in np.arange(x_lim):
    plt.bar(i, stats.poisson.pmf(i, mu), color=colors[3])
_ = ax.set_xlim(0, x_lim)
_ = ax.set_ylim(0, 0.1)
_ = ax.set_xlabel('Response time in seconds')
_ = ax.set_ylabel('Probability mass')
_ = ax.set_title('Estimated Poisson distribution for Hangout chat response time')
_ = plt.legend(['$\lambda$ = %s' % mu])
```
The above Poisson model concentrates its mass on values between roughly 10 and 30, which doesn't accurately reflect the distribution we observe in the actual data - whose observed values range from 0 to 60.
## Bayesian Method of estimating $\mu$
To estimate $\mu$ using a Bayesian method, it is important to first cover **Bayes' Theorem**. This states that the *posterior distribution* - i.e. the probability of the parameter $\mu$ given the data - is proportional to the likelihood of the data given the parameter, multiplied by some prior distribution:
$$
P(A|B)=\frac{P(B|A) P(A)}{P(B)}
$$
where $A$ and $B$ are events and $P(B) \neq 0$.
- $P(A|B)$ is the conditional probability of the likelihood of event $A$ happening given that $B$ is true.
- $P(B|A)$ is the conditional probability of the likelihood of event $B$ happening given that $A$ is true.
- $P(A)$ and $P(B)$ are the probabilities of observing $A$ and $B$ independently of each other; these are known as marginal probabilities.
In our case, $A=\mu$ and $B=X$, where $X$ is our data referring to response time in seconds.
Our model, is as follows:
1. Draw a prior from the Uniform distribution: $\mu \sim \mathcal{U}(0, 60)$
2. Represent data as a Poisson distribution (likelihood) with single parameter $\mu$: $y_i \sim \text{Poisson}(\mu)$
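With only one parameter, this posterior can also be approximated directly on a grid, which makes the prior-times-likelihood mechanics concrete before we hand things over to MCMC. A small sketch using the car counts from the opening example (the grid resolution is an arbitrary choice):

```python
import numpy as np
import scipy.stats as stats

counts = np.array([12, 33, 20, 29, 20, 30, 18])  # the boy's car counts

# Uniform prior over a grid of candidate mu values
mu_grid = np.linspace(0.1, 60, 600)
prior = np.ones_like(mu_grid) / mu_grid.size

# Log-likelihood of all observations at each candidate mu
log_like = np.array([stats.poisson.logpmf(counts, mu=m).sum() for m in mu_grid])

# Posterior is proportional to likelihood x prior
# (subtract the max log-likelihood before exponentiating for numerical stability)
posterior = np.exp(log_like - log_like.max()) * prior
posterior /= posterior.sum()

print(mu_grid[posterior.argmax()])  # posterior mode, near the sample mean ~23.1
```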
## The mechanics of MCMC
The process of Markov Chain Monte Carlo (MCMC) sampling draws parameter values from the prior distribution and computes the likelihood that the observed data came from a distribution with these parameter values.
$$
P(\mu | X) \propto P(X | \mu)P(\mu)
$$
This calculation acts as a guiding light for the MCMC sampler. As it draws values from the parameter priors, it computes the likelihood of these parameters given the data - and will try to guide the sampler towards areas of higher probability.
The MCMC sampler *wanders* towards areas of highest likelihood. However, the Bayesian method is not concerned with finding the absolute maximum - rather, it traverses and collects samples around the area of highest probability. Every sample collected is considered a credible value of the parameter.
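This guiding calculation can be sketched with a bare-bones Metropolis sampler - a simplified illustration of the idea, not PyMC's implementation, again using the car-count data from the opening example:

```python
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(0)
data = np.array([12, 33, 20, 29, 20, 30, 18])

def log_posterior(mu):
    # Uniform(0, 60) prior: flat inside the support, -inf outside
    if not (0 < mu < 60):
        return -np.inf
    return stats.poisson.logpmf(data, mu=mu).sum()

mu_current, samples = 30.0, []
logp_current = log_posterior(mu_current)
for _ in range(5000):
    mu_proposal = mu_current + rng.normal(scale=1.0)  # random-walk proposal
    logp_proposal = log_posterior(mu_proposal)
    # Accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < logp_proposal - logp_current:
        mu_current, logp_current = mu_proposal, logp_proposal
    samples.append(mu_current)

print(np.mean(samples[1000:]))  # hovers near the sample mean ~23
```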
```
with pm.Model() as model:
# draw mu from a uniform distribution
mu = pm.Uniform("mu", lower=0, upper=60)
# likelihood follows a Poisson distribution - we think
likelihood = pm.Poisson("likelihood", mu=mu, observed=y_obs)
# use maximum a posteriori estimate for start initial guess
start = pm.find_MAP()
# use metropolis algorithm
step = pm.Metropolis()
# generate
trace = pm.sample(50000, step, start=start, progressbar=True)
_ = pm.traceplot(trace, varnames=["mu"], lines={"mu":freq_res['x']})
```
The above has just gathered 50,000 credible samples of $\mu$ by traversing over the areas of high likelihood of the posterior distribution of $\mu$. The below plot (left) shows the distribution of collected values of $\mu$. The mean of this distribution is almost identical to the frequentist estimate (red line). We also get a measure of uncertainty, and can see that credible values of $\mu$ lie in the range $[17, 19]$. This measure of uncertainty is incredibly valuable.
## Discarding early samples (burnin)
You may have wondered what `pm.find_MAP()` does in the above MCMC code. MAP stands for maximum a posteriori estimation. It helps the MCMC sampler find a good place to start sampling. Ideally this will start the model off in an area of high likelihood - although nothing is guaranteed with local maxima issues. As a result, samples collected early in the trace are often discarded.
```
fig = plt.figure(figsize=(11,3))
plt.subplot(121)
_ = plt.title('Burnin trace')
_ = plt.ylim(bottom=16.5, top=19.5)
_ = plt.plot(trace.get_values('mu')[:1000])
fig = plt.subplot(122)
_ = plt.title('Full trace')
_ = plt.ylim(bottom=16.5, top=19.5)
_ = plt.plot(trace.get_values('mu'))
```
## Model convergence
### Trace
Just because the above model estimated a value for μ, doesn't mean the model estimated a good value given the data. There are some recommended checks that you can make. Firstly, look at the trace output. You should see the trace jumping around and generally looking like a hairy caterpillar. If you see the trace snake up and down or appear to be stuck in any one location - it is a sign that you have convergence issues and the estimations from the MCMC sampler cannot be trusted.
### Autocorrelation plot
The second test you can perform is the autocorrelation test (see below plot). It is a measure of correlation between successive samples in the MCMC sampling chain. When samples have low correlation with each other, they are adding more "information" to the estimate of your parameter value than samples that are highly correlated.
Visually, you are looking for an autocorrelation plot that tapers off to zero relatively quickly and then oscillates above and below zero correlation. If your autocorrelation plot does not taper off, it is generally a sign of poor mixing and you should revisit your model selection (e.g. likelihood) and sampling methods (e.g. Metropolis).
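What the autocorrelation plot measures can be computed directly with numpy: the correlation of the chain against lagged copies of itself. A sketch on synthetic chains - an independent chain versus a deliberately "sticky" random walk:

```python
import numpy as np

def autocorr(chain, max_lag=50):
    # Correlation of the chain with a lagged copy of itself, for each lag
    return np.array([1.0] + [
        np.corrcoef(chain[:-lag], chain[lag:])[0, 1] for lag in range(1, max_lag)
    ])

rng = np.random.default_rng(1)
white = rng.normal(size=5000)        # independent draws: autocorrelation near zero
sticky = np.cumsum(white) / 50       # random-walk chain: strong autocorrelation
print(autocorr(white)[1], autocorr(sticky)[1])
```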
```
_ = pm.autocorrplot(trace[:2000], varnames=["mu"])
```
## Model Checking
For checking whether a model is suitable for fitting the data, we can ask two questions:
1. Are the model and parameters estimated a good fit for the underlying data?
2. Given two separate models, which is a better fit for the underlying data?
### Model Check I: Posterior predictive check
Whereas in the previous model we drew 50,000 samples from the posterior distribution of $\mu$, here we generate *new* data from the fitted model. That is, we construct a Poisson distribution for each sampled value of $\mu$ and then randomly draw from these distributions, formally constructed as:
$$
P(\hat{y}|y)=\int P(\hat{y}|\theta)f(\theta|y)d\theta
$$
Conceptually, if the model is a good fit for the underlying data then the generated data should resemble the original observed data. PyMC provides a convenient way to sample from the fitted model:
```python
y_pred = pm.Poisson('y_pred', mu=mu)
```
This is almost identical to `y_est` except that we do not provide the observed data. PyMC considers this a stochastic node and, as the MCMC sampler runs, it also draws samples from `y_pred`.
```
with pm.Model() as model:
mu = pm.Uniform("mu", lower=0, upper=100)
# likelihood data
y_est = pm.Poisson("y_est", mu=mu, observed=y_obs)
# create predicted!
y_pred = pm.Poisson("y_pred", mu=mu)
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(50000, step, start=start, progressbar=True)
x_lim = 60
burnin=15000
y_pred = trace[burnin:].get_values("y_pred")
mu_mean = trace[burnin:].get_values("mu").mean()
fig = plt.figure(figsize=(10,6))
fig.add_subplot(211)
_ = plt.hist(y_pred, range=[0, x_lim], bins=x_lim, histtype='stepfilled', color=colors[1])
_ = plt.xlim(1, x_lim)
_ = plt.ylabel('Frequency')
_ = plt.title('Posterior predictive distribution')
fig.add_subplot(212)
_ = plt.hist(y_obs, range=[0, x_lim], bins=x_lim, histtype='stepfilled')
_ = plt.xlabel('Response time in seconds')
_ = plt.ylabel('Frequency')
_ = plt.title('Distribution of observed data')
plt.tight_layout()
```
## Choosing the right distribution
In this case, it's clear the Poisson distribution is doing a poor job of representing our observed data, no matter how much we tweak the parameters. Let's instead try the Negative Binomial distribution, which has similar characteristics to the Poisson distribution except that it has two parameters ($\mu$ and $\alpha$), enabling it to vary its variance independently of its mean. We compare the two like so:
```
fig = plt.figure(figsize=(10,5))
fig.add_subplot(211)
x_lim = 70
mu = [15, 40]
for i in np.arange(x_lim):
    plt.bar(i, stats.poisson.pmf(i, mu[0]), color=colors[3], alpha=.7)
    plt.bar(i, stats.poisson.pmf(i, mu[1]), color=colors[4], alpha=.7)
_ = plt.xlim(1, x_lim)
_ = plt.xlabel('Response time in seconds')
_ = plt.ylabel('Probability mass')
_ = plt.title('Poisson distribution')
_ = plt.legend(['$\lambda$ = %s' % mu[0],
'$\lambda$ = %s' % mu[1]])
# Scipy takes parameters n & p, not mu & alpha
def get_n(mu, alpha):
return 1. / alpha * mu
def get_p(mu, alpha):
return get_n(mu, alpha) / (get_n(mu, alpha) + mu)
fig.add_subplot(212)
a = [2, 4]
for i in np.arange(x_lim):
plt.bar(i, stats.nbinom.pmf(i, n=get_n(mu[0], a[0]), p=get_p(mu[0], a[0])), color=colors[3], alpha=.7)
plt.bar(i, stats.nbinom.pmf(i, n=get_n(mu[1], a[1]), p=get_p(mu[1], a[1])), color=colors[4], alpha=.7)
_ = plt.xlabel('Response time in seconds')
_ = plt.ylabel('Probability mass')
_ = plt.title('Negative Binomial distribution')
_ = plt.legend(['$\\mu = %s, \/ \\alpha = %s$' % (mu[0], a[0]),
'$\\mu = %s, \/ \\alpha = %s$' % (mu[1], a[1])])
plt.tight_layout()
```
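Under the `get_n`/`get_p` conversion used above, the mean of the resulting scipy distribution stays at $\mu$ while the variance works out to $\mu(1 + \alpha)$ - so $\alpha$ really does control the spread independently of the mean. A quick check (the values are chosen arbitrarily):

```python
import scipy.stats as stats

# scipy parameterizes the Negative Binomial by (n, p) rather than (mu, alpha)
def get_n(mu, alpha):
    return 1. / alpha * mu

def get_p(mu, alpha):
    return get_n(mu, alpha) / (get_n(mu, alpha) + mu)

mu, alpha = 15, 2
mean, var = stats.nbinom.stats(n=get_n(mu, alpha), p=get_p(mu, alpha), moments='mv')
print(mean, var)  # mean stays at mu = 15; variance grows to mu * (1 + alpha) = 45
```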
Let's try again to estimate the parameters, this time using a Negative Binomial distribution on the same dataset as before. We will use a Uniform prior for $\mu$ and an Exponential prior for $\alpha$. The model can be represented as:
$$
y_j \sim \text{NegBinomial}(\mu, \alpha) \\
\alpha \sim \text{Exponential}\left(\frac{1}{5}\right) \\
\mu \sim \mathcal{U}(0, 100)
$$
```
with pm.Model() as model:
alpha = pm.Exponential("alpha", lam=.2)
mu = pm.Uniform("mu", lower=0, upper=100)
y_pred = pm.NegativeBinomial("y_pred", mu=mu, alpha=alpha)
y_est = pm.NegativeBinomial("y_est", mu=mu, alpha=alpha, observed=y_obs)
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(100000, step, start=start, progressbar=True)
_ = pm.traceplot(trace[burnin:], varnames=["alpha", "mu"])
```
We can see that the model we have just developed has *less uncertainty* around the estimation of $\mu$ than the Poisson distribution model.
- Poisson: 10 to 30
- Negative Binomial: 16 to 21
Additionally, the Negative Binomial model estimates an $\alpha$ parameter of roughly 1.2 to 2.2, which allows the variance of the data distribution to exceed its mean. Let's look at the posterior predictive distribution and see if it more closely resembles the distribution from the observed data.
```
x_lim = 60
y_pred = trace[burnin:].get_values('y_pred')
fig = plt.figure(figsize=(10,6))
ax = fig.add_subplot(211)
_ = plt.hist(y_pred, range=[0, x_lim], bins=x_lim, histtype='stepfilled', color=colors[1])
_ = plt.xlim(1, x_lim)
_ = plt.ylabel('Frequency')
_ = plt.title('Posterior predictive distribution')
ax2 = fig.add_subplot(212)
_ = plt.hist(messages['time_delay_seconds'].values, range=[0, x_lim], bins=x_lim, histtype='stepfilled')
_ = plt.xlabel('Response time in seconds')
_ = plt.ylabel('Frequency')
_ = plt.title('Distribution of observed data')
plt.tight_layout()
```
These two distributions are looking more similar to one another. As per the posterior predictive check, this would suggest that the Negative Binomial model is a more appropriate fit for the underlying data.
We may wish to deploy a more rigorous approach to model checking however; bring forth the **Bayes Factor**.
### Model Check II: Bayes Factor
Another modeling technique is to compute the Bayes factor. This is an analytical method that aims to compare two models with each other.
The Bayes factor was typically a difficult metric to compute because it required integrating over the full joint probability distribution. In a low dimension space, integration is possible but once you begin to model in even modest dimensionality, integrating over the full joint posterior distribution becomes computationally costly and time-consuming.
There is an alternative and analogous technique for calculating the Bayes factor. It involves taking your two models for comparison and combining them into a hierarchical model with a model-index parameter ($\tau$). This index will switch between the two models throughout the MCMC process depending on which model it finds more credible. As such, the trace of the model index tells us a lot about the credibility of model $M_1$ over model $M_2$.
Our new model is as follows:
$$
\tau \sim \text{Bernoulli}\left(\frac{1}{2}\right) \\
\mu_p \sim \mathcal{U}(0, 60) \\
\alpha \sim \text{Exponential}(0.2) \\
\mu_{nb} \sim \mathcal{U}(0, 60) \\
y_j \sim \text{switch}\left(\tau, \text{Poisson}(\mu_p), \text{NegBinomial}(\mu_{nb}, \alpha) \right)\\
$$
```
with pm.Model() as model:
prior_model_prob = .5
# tau from bernoulli
tau = pm.Bernoulli("tau", prior_model_prob)
# poisson parameters
mu_p = pm.Uniform("mu_p", 0, 60)
# neg binomial parameters
alpha = pm.Exponential("alpha", lam=.2)
mu_nb = pm.Uniform("mu_nb", 0, 60)
# likelihood
y_like = pm.DensityDist("y_like",
lambda value: pm.math.switch(
tau,
pm.Poisson.dist(mu_p).logp(value),
pm.NegativeBinomial.dist(mu_nb, alpha).logp(value)
),
observed=y_obs
)
start = pm.find_MAP()
step1 = pm.Metropolis([mu_p, alpha, mu_nb])
step2 = pm.ElemwiseCategorical(vars=[tau], values=[0,1])
trace = pm.sample(50000, step=[step1,step2], start=start, progressbar=True)
_ = pm.traceplot(trace[burnin:], varnames=['tau'])
```
We can calculate the **Bayes Factor** for the above two models using the formulation below:
$$
\frac{P(X | M_1)}{P(X | M_2)} = F_B\frac{P(M_1)}{P(M_2)}
$$
where $F_B$ is the Bayes Factor. In the above example, we didn't apply prior probability to either model, hence the Bayes Factor is simply the quotient of the model likelihoods. If you find that your MCMC sampler is not traversing between the two models, you can introduce prior probabilities that will help you get sufficient exposure to both models.
```
def bayes_factor(tr, burnin=15000, prior_model_prob=0.5):
prob_pois = tr[burnin:]["tau"].mean()
prob_nb = 1 - prob_pois
BF = (prob_nb/prob_pois)*(prior_model_prob/(1-prior_model_prob))
return BF
print("Bayes Factor: %s" % (bayes_factor(trace)))
```
A Bayes Factor of $>1$ suggests that $M_1$ (Negative Binomial) is more strongly supported by the data than $M_2$ (Poisson). On Jeffreys' scale of evidence for Bayes factors, a BF of 1.6 counts as weak evidence for $M_1$ over $M_2$ given the data.
| Bayes Factor | Interpretation |
| --- | --- |
| $F_B(M_1, M_2) < 1/10$ | Strong evidence for $M_2$ |
| $1/10 < F_B(M_1, M_2) < 1/3$ | Moderate evidence for $M_2$ |
| $1/3 < F_B(M_1, M_2) < 1$ | Weak evidence for $M_2$ |
| $1 < F_B(M_1, M_2) < 3$ | Weak evidence for $M_1$ |
| $3 < F_B(M_1, M_2) < 10$ | Moderate evidence for $M_1$ |
| $F_B(M_1, M_2) > 10$ | Strong evidence for $M_1$ |
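For convenience, the Jeffreys scale above can be wrapped in a small helper function (a sketch; the thresholds are exactly those tabulated):

```python
def interpret_bayes_factor(bf):
    """Map a Bayes factor F_B(M1, M2) onto Jeffreys' scale of evidence."""
    thresholds = [
        (1 / 10, 'Strong evidence for M2'),
        (1 / 3,  'Moderate evidence for M2'),
        (1,      'Weak evidence for M2'),
        (3,      'Weak evidence for M1'),
        (10,     'Moderate evidence for M1'),
    ]
    for bound, label in thresholds:
        if bf < bound:
            return label
    return 'Strong evidence for M1'

print(interpret_bayes_factor(1.6))  # Weak evidence for M1
```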
## Hierarchical Modelling
A key strength of Bayesian modelling is the ease and flexibility with which one can implement a hierarchical model.
### Model Pooling
Let's explore a different way of modeling the response time for my hangout conversations. My intuition would suggest that my tendency to reply quickly to a chat depends on who I'm talking to. I might be more likely to respond quickly to someone I care about than to a distant friend. As such, I could decide to model each conversation independently, estimating parameters $\mu_i$ and $\alpha_i$ for each conversation $i$.
One consideration we must make, is that some conversations have very few messages compared to others. As such, our estimates of response time for conversations with few messages will have a higher degree of uncertainty than conversations with a large number of messages. The below plot illustrates the discrepancy in sample size per conversation.
```
ax = messages.groupby('prev_sender')['conversation_id'].size().plot(
kind='bar', figsize=(12,3), title='Number of messages sent per recipient', color=colors[0])
_ = ax.set_xlabel('Previous Sender')
_ = ax.set_ylabel('Number of messages')
_ = plt.xticks(rotation=45)
```
For each message $j$ and each conversation $i$, we can represent the model as:
$$
y_{ji} \sim \text{NegBinomial}(\mu_i, \alpha_i) \\
\mu_i \sim \mathcal{U}(0, 100) \\
\alpha_i \sim \mathcal{U}(0, 100)
$$
```
from sklearn import preprocessing
indiv_traces = {}
le = preprocessing.LabelEncoder()
part_idx = le.fit_transform(messages["prev_sender"])
part = le.classes_
n_part = len(part)
for p in part:
with pm.Model() as model:
# parameters
alpha = pm.Uniform("alpha", lower=0, upper=100)
mu = pm.Uniform("mu", lower=0, upper=100)
data = messages[messages["prev_sender"]==p]["time_delay_seconds"].values
y_est = pm.NegativeBinomial("y_est", mu=mu, alpha=alpha, observed=data)
y_pred = pm.NegativeBinomial("y_pred", mu=mu, alpha=alpha)
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(10000, step, start=start, progressbar=False)
indiv_traces[p] = trace
```
Now we will sample 3 of the people and see their respective distributions:
```
fig, axs = plt.subplots(3,2, figsize=(12, 6))
axs = axs.ravel()
y_left_max = 2
y_right_max = 8000
x_lim = 60
ix = [2,4,6]
for i, j, p in zip([0,1,2], [0,2,4], part[ix]):
axs[j].set_title('Observed: %s' % p)
axs[j].hist(messages[messages['prev_sender']==p]['time_delay_seconds'].values, range=[0, x_lim], bins=x_lim, histtype='stepfilled')
axs[j].set_ylim([0, y_left_max])
for i, j, p in zip([0,1,2], [1,3,5], part[ix]):
axs[j].set_title('Posterior predictive distribution: %s' % p)
axs[j].hist(indiv_traces[p].get_values('y_pred'), range=[0, x_lim], bins=x_lim, histtype='stepfilled', color=colors[1])
axs[j].set_ylim([0, y_right_max])
axs[4].set_xlabel('Response time (seconds)')
axs[5].set_xlabel('Response time (seconds)')
plt.tight_layout()
```
The above plots show the observed data and the posterior predictive distribution for three sample conversations. The posterior predictive can vary considerably across conversations. This could accurately reflect the characteristics of the conversation or it could be inaccurate due to small sample size.
If we combine the posterior predictive distributions across these models, we would expect the result to resemble the distribution of the overall observed dataset.
```
combined_y_pred = np.concatenate([v.get_values('y_pred') for k, v in indiv_traces.items()])
x_lim = 60
y_max = combined_y_pred.max()
y_pred = trace.get_values('y_pred')
fig = plt.figure(figsize=(12,6))
fig.add_subplot(211)
_ = plt.hist(combined_y_pred, range=[0, x_lim], bins=x_lim, histtype='stepfilled', color=colors[1])
_ = plt.xlim(1, x_lim)
_ = plt.ylim(0, 80000)
_ = plt.ylabel('Frequency')
_ = plt.title('Posterior predictive distribution')
fig.add_subplot(212)
_ = plt.hist(messages['time_delay_seconds'].values, range=[0, x_lim], bins=x_lim, histtype='stepfilled')
_ = plt.xlim(0, x_lim)
_ = plt.xlabel('Response time in seconds')
_ = plt.ylim(0, 30)
_ = plt.ylabel('Frequency')
_ = plt.title('Distribution of observed data')
plt.tight_layout()
```
The posterior predictive distribution now resembles the distribution of the observed data. However, there is a concern that some of the conversations have very little data, and hence their estimates are likely to have high variance. One way to mitigate this risk is to share information across conversations while still estimating $\mu_i$ for each conversation. We call this **partial pooling**.
### Partial Pooling
Just like in the previous model, a partially pooled model has parameter values estimated for each conversation $i$. However, the parameters are connected together via hyperparameters. This reflects our belief that my `response_time` per conversation has similarities across conversations, via my natural tendency to respond either quickly or slowly.
$$
y_{ji} \sim \text{NegBinomial}(\mu_i, \alpha_i)
$$
Following on from the above example, we will estimate parameter values $\mu_i$ and $\alpha_i$ for a Negative Binomial distribution. Rather than using uniform priors, I will use a Gamma distribution for both $\mu$ and $\alpha$. This enables the introduction of more prior knowledge into the model, as we have reasonable expectations as to what $\mu$ and $\alpha$ will be.
Here's the Gamma distribution $\Gamma$:
```
mu = [5,25,50]
sd = [3,7,2]
plt.figure(figsize=(11,3))
_ = plt.title('Gamma distribution')
with pm.Model() as model:
for i, (j, k) in enumerate(zip(mu, sd)):
samples = pm.Gamma('gamma_%s' % i, mu=j, sd=k).random(size=10**5)
plt.hist(samples, bins=100, range=(0,60), color=colors[i], alpha=1)
_ = plt.legend(['$\mu$ = %s, $\sigma$ = %s' % (mu[a], sd[a]) for a in [0,1,2]])
```
The partially-pooled model can be formally described by:
$$
y_{ji} \sim \text{NegBinomial}(\mu_i, \alpha_i) \\
\mu_i \sim \Gamma(\mu_{\mu}, \sigma_{\mu}) \\
\alpha_i \sim \Gamma(\mu_{\alpha}, \sigma_{\alpha}) \\
\mu_{\mu} \sim \mathcal{U}(0, 60) \\
\sigma_{\mu} \sim \mathcal{U}(0, 50) \\
\mu_{\alpha} \sim \mathcal{U}(0, 10) \\
\sigma_{\alpha} \sim \mathcal{U}(0, 50) \\
$$
```
with pm.Model() as model:
# hyperparameters
# alpha params
hyper_alpha_sd = pm.Uniform("hyper_alpha_sd", lower=0, upper=50)
hyper_alpha_mu = pm.Uniform("hyper_alpha_mu", lower=0, upper=10)
# mu params
hyper_mu_sd = pm.Uniform("hyper_mu_sd", lower=0, upper=50)
hyper_mu_mu = pm.Uniform("hyper_mu_mu", lower=0, upper=60)
# prior params
alpha = pm.Gamma("alpha", mu=hyper_alpha_mu, sd=hyper_alpha_sd, shape=n_part)
mu = pm.Gamma("mu", mu=hyper_mu_mu, sd=hyper_mu_sd, shape=n_part)
# likelihood
y_est = pm.NegativeBinomial("y_est", mu=mu[part_idx], alpha=alpha[part_idx], observed=messages["time_delay_seconds"].values)
y_pred = pm.NegativeBinomial("y_pred", mu=mu[part_idx], alpha=alpha[part_idx], shape=messages["prev_sender"].shape)
# begin
start = pm.find_MAP()
step = pm.Metropolis()
hierarchical_trace = pm.sample(20000, step, progressbar=True)
_ = pm.traceplot(hierarchical_trace[12000:],
varnames=['mu','alpha','hyper_mu_mu',
'hyper_mu_sd','hyper_alpha_mu',
'hyper_alpha_sd'])
```
We can see for estimates of $\mu$ and $\alpha$ we have multiple plots - one for each conversation $i$. The difference between pooled and partially-pooled is that the parameters of the partially pooled model have a hyperparameter that is shared across all conversations $i$. This brings a number of benefits:
1. Information is shared across conversations, so for conversations that have limited sample size, they "borrow" knowledge from other conversations during estimation to help reduce the variance of the estimate.
2. We get an estimate for each conversation and an overall estimate for all conversations.
```
x_lim = 60
y_pred = hierarchical_trace.get_values('y_pred')[::1000].ravel()
fig = plt.figure(figsize=(12,6))
fig.add_subplot(211)
_ = plt.hist(y_pred, range=[0, x_lim], bins=x_lim, histtype='stepfilled', color=colors[1])
_ = plt.xlim(1, x_lim)
_ = plt.ylabel('Frequency')
_ = plt.title('Posterior predictive distribution')
fig.add_subplot(212)
_ = plt.hist(messages['time_delay_seconds'].values, range=[0, x_lim], bins=x_lim, histtype='stepfilled')
_ = plt.xlabel('Response time in seconds')
_ = plt.ylabel('Frequency')
_ = plt.title('Distribution of observed data')
plt.tight_layout()
```
## Shrinkage effect: pooled vs hierarchical model
As discussed, the partially pooled model shares hyperparameters for both $\mu$ and $\alpha$. By sharing knowledge across conversations, this has the effect of shrinking the estimates closer together - particularly for conversations that have little data.
This shrinkage effect is illustrated in the below plot. You can see how the $\mu$ and $\alpha$ parameters are drawn together by the effect of the hyperparameters.
```
hier_mu = hierarchical_trace['mu'][500:].mean(axis=0)
hier_alpha = hierarchical_trace['alpha'][500:].mean(axis=0)
indv_mu = [indiv_traces[p]['mu'][500:].mean() for p in part]
indv_alpha = [indiv_traces[p]['alpha'][500:].mean() for p in part]
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, xlabel='mu', ylabel='alpha',
title='Pooled vs. Partially Pooled Negative Binomial Model',
xlim=(5, 45), ylim=(0, 10))
ax.scatter(indv_mu, indv_alpha, c=colors[5], s=50, label = 'Pooled', zorder=3)
ax.scatter(hier_mu, hier_alpha, c=colors[6], s=50, label = 'Partially Pooled', zorder=4)
for i in range(len(indv_mu)):
ax.arrow(indv_mu[i], indv_alpha[i], hier_mu[i] - indv_mu[i], hier_alpha[i] - indv_alpha[i],
fc="grey", ec="grey", length_includes_head=True, alpha=.5, head_width=0)
_ = ax.legend()
```
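The same compromise can be sketched numerically. In a normal-normal hierarchy, each group's partially pooled mean is a precision-weighted average of its own sample mean and the grand mean. The helper below is illustrative and not part of the model code above:

```python
def shrink(group_mean, group_n, grand_mean, between_var, within_var):
    # Weight each group's own mean by its precision: groups with many
    # observations keep their estimate, while sparse groups are pulled
    # toward the grand mean shared through the hyperparameter.
    w = (group_n / within_var) / (group_n / within_var + 1.0 / between_var)
    return w * group_mean + (1.0 - w) * grand_mean
```

A conversation with a thousand messages barely moves, while one with a single message is pulled most of the way toward the overall mean.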
## Asking questions of the posterior
Let's start to take advantage of one of the best aspects of Bayesian statistics - the posterior distribution. Unlike frequentist techniques, we get a full posterior distribution as opposed to a single point estimate. In essence, we have a basket full of credible parameter values. This enables us to ask some questions in a fairly natural and intuitive manner.
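For instance, with posterior samples in hand, the probability that the expected response time exceeds some threshold is just the fraction of credible values above it. A stdlib sketch with made-up sample values, not output from the model above:

```python
def prob_greater_than(samples, threshold):
    # With a full posterior, P(parameter > t) is simply the share
    # of credible parameter values that exceed t.
    return sum(1 for s in samples if s > threshold) / len(samples)

# hypothetical posterior draws of mu for one conversation
mu_samples = [12.0, 18.0, 22.0, 25.0, 31.0]
```

Here `prob_greater_than(mu_samples, 20)` would answer "how likely is it that this person's mean response time exceeds 20 seconds?" directly from the samples.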
```
%matplotlib inline
import matplotlib.pyplot as plt
import nibabel as nib
from nilearn.plotting import plot_stat_map
cut_coords = [0, 0, 0]
mfx_glm = nib.load('results/mfx_glm_thresh_z.nii.gz')
rfx_glm = nib.load('results/rfx_glm_z.nii.gz')
z_mfx = nib.load('results/stouffers_rfx_z.nii.gz')
con_perm = nib.load('results/contrast_perm_z.nii.gz')
z_perm = nib.load('results/z_perm_z.nii.gz')
ffx_glm = nib.load('results/ffx_glm_z.nii.gz')
fishers = nib.load('results/fishers_z.nii.gz')
stouffers = nib.load('results/stouffers_ffx_z.nii.gz')
weighted_stouffers = nib.load('results/stouffers_weighted_z.nii.gz')
mkda_chi2_fdr = nib.load('results/mkda_chi2_fdr_consistency_z_FDR.nii.gz')
mkda_chi2_fwe = nib.load('results/mkda_chi2_fwe_consistency_z_FWE.nii.gz')
mkda_density = nib.load('results/mkda_density_vfwe.nii.gz')
ale = nib.load('results/ale_z_vfwe.nii.gz')
scale = nib.load('results/scale_z.nii.gz')
kda = nib.load('results/kda_vfwe.nii.gz')
thr = 1.96 # two-tailed tests
thr2 = 1.65 # one-tailed tests
```
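The 1.96 and 1.65 cutoffs correspond to $p \approx 0.05$ for two- and one-tailed tests respectively; the conversion can be checked with the standard normal survival function (a stdlib sketch, independent of the nilearn code above):

```python
import math

def z_to_p(z, two_tailed=True):
    # Survival function of the standard normal, via the error function.
    p_one = 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
    return 2.0 * p_one if two_tailed else p_one
```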
## Image-based meta-analyses
```
'''
fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(25, 12))
plot_stat_map(mfx_glm,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[0, 0],
title="MFX GLM")
plot_stat_map(rfx_glm, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[0, 1],
title="RFX GLM")
plot_stat_map(z_mfx, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[0, 2],
title="Z MFX")
plot_stat_map(con_perm, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[1, 0],
title="Contrast Permutation")
plot_stat_map(z_perm, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[1, 1],
title="Z Permutation")
plot_stat_map(ffx_glm, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[1, 2],
title="FFX GLM")
plot_stat_map(fishers, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[2, 0],
title="Fisher's")
plot_stat_map(stouffers, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[2, 1],
title="Stouffer's")
plot_stat_map(weighted_stouffers, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[2, 2],
title="Weighted Stouffer's")
fig.suptitle('Image-based meta-analyses', fontsize=24, y=0.95)
fig.savefig('figures/ibmas.png', dpi=400, bbox_inches='tight')
'''
"""fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(25, 8))
plot_stat_map(mkda_chi2_fdr, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[0, 0],
title="MKDA Chi2 Analysis with FDR")
plot_stat_map(mkda_chi2_fwe, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[0, 1],
title="MKDA Chi2 Analysis with FWE")
plot_stat_map(mkda_density,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[0, 2],
title="MKDA Density Analysis")
plot_stat_map(ale, threshold=thr2,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[1, 0],
title="ALE")
plot_stat_map(scale, threshold=thr2,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[1, 1],
title="SCALE")
plot_stat_map(kda,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[1, 2],
title="KDA Density Analysis")
fig.suptitle('Coordinate-based meta-analyses', fontsize=24)
fig.savefig('figures/cbmas.png', dpi=400, bbox_inches='tight')"""
fig, axes = plt.subplots(nrows=5, ncols=3, figsize=(30, 20))
plot_stat_map(mfx_glm,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[0, 0],
title="MFX GLM")
plot_stat_map(rfx_glm, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[0, 1],
title="RFX GLM")
plot_stat_map(ffx_glm, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[0, 2],
title="FFX GLM")
plot_stat_map(con_perm, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[1, 0],
title="Contrast Permutation")
plot_stat_map(z_perm, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[1, 1],
title="Z Permutation")
plot_stat_map(z_mfx, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[1, 2],
title="Z MFX")
plot_stat_map(fishers, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[2, 0],
title="Fisher's")
plot_stat_map(stouffers, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[2, 1],
title="Stouffer's")
plot_stat_map(weighted_stouffers, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[2, 2],
title="Weighted Stouffer's")
plot_stat_map(ale, threshold=thr2,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[3, 0],
title="ALE")
plot_stat_map(scale, threshold=thr2,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[3, 1],
title="SCALE")
plot_stat_map(kda,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[3, 2],
title="KDA Density Analysis")
plot_stat_map(mkda_chi2_fdr, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[4, 0],
title="Neurosynth Chi2 Analysis")
plot_stat_map(mkda_chi2_fwe, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[4, 1],
title="MKDA Chi2 Analysis")
plot_stat_map(mkda_density,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[4, 2],
title="MKDA Density Analysis")
#fig.suptitle('Meta-analyses', fontsize=40, y=0.925)
fig.savefig('figures/metas.png', dpi=400, bbox_inches='tight')
fig, axes = plt.subplots(nrows=5, ncols=3, figsize=(30, 22))
plot_stat_map(mfx_glm,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[0, 0])
plot_stat_map(rfx_glm, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[0, 1])
plot_stat_map(ffx_glm, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[0, 2])
plot_stat_map(con_perm, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[1, 0])
plot_stat_map(z_perm, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[1, 1])
plot_stat_map(z_mfx, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[1, 2])
plot_stat_map(fishers, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[2, 0])
plot_stat_map(stouffers, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[2, 1])
plot_stat_map(weighted_stouffers, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[2, 2])
plot_stat_map(ale, threshold=thr2,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[3, 0])
plot_stat_map(scale, threshold=thr2,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[3, 1])
plot_stat_map(kda,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[3, 2])
plot_stat_map(mkda_chi2_fdr, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[4, 0])
plot_stat_map(mkda_chi2_fwe, threshold=thr,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[4, 1])
plot_stat_map(mkda_density,
cut_coords=cut_coords, draw_cross=False,
cmap='RdBu_r', axes=axes[4, 2])
#fig.suptitle('Meta-analyses', fontsize=40, y=0.925)
fig.savefig('figures/metas_unlabeled.png', dpi=400, bbox_inches='tight')
```
```
import pandas as pd
import glob
from geopandas import GeoDataFrame
import geopandas as gpd
from math import sin, cos, sqrt, atan2, radians
import numpy as np
import matplotlib.pyplot as plt
import dateutil
import datetime as dt
from datetime import datetime, timedelta
%matplotlib inline
plt.style.use('ggplot')
# approximate radius of earth in km
R = 6373.0
```
# Creating first df
```
df = pd.read_csv('serverdata/2017-09-07 23:00:022000obikesZH.csv')
del df['Unnamed: 0']
del df['index']
del df['countyId']
del df['helmet']
del df['imei']
def split1(elem):
elem = elem.replace('POINT (', '')
elem = elem.replace(')', '')
return elem.split(' ')[0]
def split2(elem):
elem = elem.replace('POINT (', '')
elem = elem.replace(')', '')
return elem.split(' ')[1]
df['Long'] = df['2017-09-07 23:00:02'].apply(split1)
df['Lat'] = df['2017-09-07 23:00:02'].apply(split2)
df['Timestamp'] = '2017-09-07 23:00:02'
del df['2017-09-07 23:00:02']
df.head(3)
```
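The two `split` helpers above make separate passes over each string; a single regex-based parser returning both coordinates at once might look like this (`parse_point` is an illustrative helper, not part of the notebook above):

```python
import re

_POINT_RE = re.compile(r'POINT\s*\(\s*(-?\d+(?:\.\d+)?)\s+(-?\d+(?:\.\d+)?)\s*\)')

def parse_point(wkt):
    # Parse a WKT string like 'POINT (8.54 47.37)' into (lon, lat) floats.
    m = _POINT_RE.match(wkt.strip())
    if m is None:
        raise ValueError('not a WKT point: %r' % (wkt,))
    return float(m.group(1)), float(m.group(2))
```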
# Creating list of names in file
```
file_names = []
for name in glob.glob('serverdata/*'):
name = name.split('/')[-1]
file_names.append(name)
```
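Splitting on `'/'` works on Linux and macOS but not with Windows path separators; `os.path.basename` is the portable equivalent (the path below is just an example):

```python
import os

path = 'serverdata/2017-09-07 23:00:022000obikesZH.csv'
name = os.path.basename(path)  # strips any leading directory components
```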
# Iterating through contents of all the server files
```
for file in file_names[0:]:
df_new = pd.read_csv('serverdata/'+ file)
del df_new['Unnamed: 0']
del df_new['index']
del df_new['countyId']
del df_new['helmet']
del df_new['imei']
df_new['Long'] = df_new[file[:19]].apply(split1)
df_new['Lat'] = df_new[file[:19]].apply(split2)
df_new['Timestamp'] = file[:19]
del df_new[file[:19]]
#print(file[:19])
frames = [df, df_new]
df = pd.concat(frames)
df.info()
df = df.drop_duplicates()
df.info()
df.to_csv('timestamps.csv')
```
# The most popular locations
```
df['Long'].value_counts(ascending=False).head(5)
df['Lat'].value_counts(ascending=False).head(5)
```
# How many bikes are there in total?
```
id_list = list(set(list(df['id'])))
len(id_list)
```
# Which bikes have the most location changes?
```
id_list = list(set(list(df['id'])))
list_count = []
for elem in id_list:
count = df[df['id'] == elem]['Long'].value_counts().count()
mini_dict = {'Count':count,
'ID': elem}
list_count.append(mini_dict)
df_change_count = pd.DataFrame(list_count)
print("Startzeit:", df['Timestamp'].head(1)[0],
"Endzeit:",str(df['Timestamp'].tail(1))[:26][7:])
df_change_count.sort_values(by='Count', ascending=False).head(10)
```
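Filtering the full dataframe once per bike makes the loop above quadratic in practice; the same per-bike count of distinct locations can be computed in a single pass with a dict of sets. `location_changes` is an illustrative helper, not used elsewhere in this notebook:

```python
from collections import defaultdict

def location_changes(rows):
    # rows: iterable of (bike_id, longitude) pairs.
    seen = defaultdict(set)
    for bike_id, lon in rows:
        seen[bike_id].add(lon)
    return {bike_id: len(lons) for bike_id, lons in seen.items()}
```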
# Average amount of time bikes are used in the above defined time period?
```
df_change_count['Count'].mean()
```
# How many bikes never moved in the past three days?
```
df_change_count[df_change_count['Count'] == 1].count()
```
# Percentage of total? (Bikes that haven't moved in the above mentioned time frame)
```
total = len(list(set(list(df['id']))))
never_moved = df_change_count[df_change_count['Count'] == 1]['Count'].count()
round(never_moved / total * 100, 1)
```
# Save a list of bikes that have never moved, and create a delete list
```
Del_list = list(df_change_count[df_change_count['Count'] == 1]['ID'])
```
# Creating DataFrame of the bikes that have never moved, with location data
```
Del_list[0:5]
df_never_moved = df.head(0)
for id_never_moved in Del_list:
df_new = df[df['id'] == id_never_moved]
frames = [df_never_moved, df_new]
df_never_moved = pd.concat(frames)
df_never_moved = df_never_moved.drop_duplicates(subset='id')
df_never_moved.info() #remove Time
df_never_moved.to_csv('never_moved.csv')
df_never_moved['Long'].value_counts().head(5)
df_never_moved['Lat'].value_counts().head(10)
```
# Delete these bikes from the list
```
for bike in Del_list:
id_list.remove(bike)
len(id_list)
```
# Create list of bikes that have moved
```
rel_list_count = []
for elem in id_list:
count = df[df['id'] == elem]['Long'].value_counts().count()
mini_dict = {'Count':count,
'ID': elem}
rel_list_count.append(mini_dict)
df_moved = pd.DataFrame(rel_list_count)
df_moved.sort_values(by='Count', ascending=False).head()
```
# The bike with the most movements
```
df[df['id']==41000425].to_csv('BikeID41000425.csv')
```
# What is the average distance per bike move?
# First the top bike
```
top_bike = df[df['id']==41000425]
long_list = list(top_bike['Long'])
last_elem = long_list[-1]
long_list.append(last_elem)
long_list.pop(0)
top_bike['newLong'] = long_list
lat_list = list(top_bike['Lat'])
last_elem = lat_list[-1]
lat_list.append(last_elem)
lat_list.pop(0)
top_bike['newLat'] = lat_list
la1 = list(top_bike['Lat'])
lo1 = list(top_bike['Long'])
la2 = list(top_bike['newLat'])
lo2 = list(top_bike['newLong'])
distance_list = []
for lat1, lon1, lat2, lon2 in zip(la1, lo1, la2, lo2):
lat1 = radians(float(lat1))
lon1 = radians(float(lon1))
lat2 = radians(float(lat2))
lon2 = radians(float(lon2))
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
c = 2 * atan2(sqrt(a), sqrt(1 - a))
distance = R * c
distance_list.append(distance)
top_bike['Distance'] = distance_list
final_topbike = top_bike[top_bike['Distance'] != 0.000000]
final_topbike.head(0)
```
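The distance computation inside the loop is the haversine formula; pulled out into a reusable function (with the same $R$ defined at the top of the notebook) it reads:

```python
from math import radians, sin, cos, sqrt, atan2

R = 6373.0  # approximate radius of earth in km

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle (straight-line over the globe) distance between
    # two latitude/longitude points, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return R * 2 * atan2(sqrt(a), sqrt(1 - a))
```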
# Now with all the bikes
```
final_topbikes = final_topbike.head(0)
moved_bikes = list(df_moved['ID'])
for bike in moved_bikes:
top_bike = df[df['id']== bike]
long_list = list(top_bike['Long'])
last_elem = long_list[-1]
long_list.append(last_elem)
long_list.pop(0)
top_bike['newLong'] = long_list
lat_list = list(top_bike['Lat'])
last_elem = lat_list[-1]
lat_list.append(last_elem)
lat_list.pop(0)
top_bike['newLat'] = lat_list
la1 = list(top_bike['Lat'])
lo1 = list(top_bike['Long'])
la2 = list(top_bike['newLat'])
lo2 = list(top_bike['newLong'])
distance_list = []
for lat1, lon1, lat2, lon2 in zip(la1, lo1, la2, lo2):
lat1 = radians(float(lat1))
lon1 = radians(float(lon1))
lat2 = radians(float(lat2))
lon2 = radians(float(lon2))
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
c = 2 * atan2(sqrt(a), sqrt(1 - a))
distance = R * c
distance_list.append(distance)
top_bike['Distance'] = distance_list
final_topbike = top_bike[top_bike['Distance'] != 0.000000]
frames = [final_topbikes, final_topbike]
final_topbikes = pd.concat(frames)
final_topbikes = final_topbikes.drop_duplicates()
```
# Mean distance
```
final_topbikes['Distance'].mean()
```
# Number of movements
```
final_topbikes['Distance'].count()
final_topbikes['Distance'].count() / len(list(set(list(final_topbikes['id'])))) / 14
final_topbikes['Distance'].count() * 1.5
```
# Total kilometres (straight-line distance)
```
round(final_topbikes['Distance'].sum())
```
# At what time of day are the bikes used the most?
```
def pmam(x):
x = str(x)
#x = (':'.join(a+b for a,b in zip(x[::2], x[1::2])))
if x == 'NaN':
pass
try:
x = str(x[:2] + ':' + x[2:])
date = dateutil.parser.parse(x)
return str(date.strftime('%d/%m/%Y %H:%M %p'))
except:
return 'NaN'
print(pmam('2017-09-09 07:00:03'))
```
### Adding two hours, because the time on the server is wrong
```
def addtwo(elem):
mytime = datetime.strptime(elem, "%Y-%m-%d %H:%M:%S")
mytime += timedelta(hours=2)
return mytime.strftime("%Y.%m.%d %H:%M:%S")
final_topbikes['Timestamp +2'] = final_topbikes['Timestamp'].apply(addtwo)
final_topbikes['Timestamp index'] = final_topbikes['Timestamp +2'].apply(pmam)
final_topbikes['Timestamp index'] = final_topbikes['Timestamp index'].apply(lambda x:
dt.datetime.strptime(x,'%d/%m/%Y %H:%M %p'))
final_topbikes.index = final_topbikes['Timestamp index']
```
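Adding a fixed two hours goes wrong across the late-October switch from CEST back to CET. A daylight-saving-aware alternative using the stdlib `zoneinfo` module (Python 3.9+) could look like this; `server_to_local` is a hypothetical replacement for `addtwo` and assumes the server clock is UTC:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

def server_to_local(elem, tz='Europe/Zurich'):
    # Interpret the naive server timestamp as UTC, then convert to local
    # time so the offset is right on both sides of a DST change.
    utc = datetime.strptime(elem, '%Y-%m-%d %H:%M:%S').replace(tzinfo=timezone.utc)
    return utc.astimezone(ZoneInfo(tz)).strftime('%Y.%m.%d %H:%M:%S')
```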
# When are the most bikes rented?
```
final_topbikes.groupby(by=final_topbikes.index.hour)['id'].count().plot(kind='bar', figsize=(12,6))
```
# Median Distance according to time of day
```
final_topbikes.groupby(by=final_topbikes.index.hour)['Distance'].median().plot(kind='bar', figsize=(12,6))
```
# What about day of week?
```
final_topbikes.groupby(by=final_topbikes.index.weekday)['id'].count().plot(kind='bar', figsize=(6,3))
final_topbikes.groupby(by=final_topbikes.index.weekday)['Distance'].count().plot(kind='bar', figsize=(12,6))
final_topbikes.groupby(by=final_topbikes.index.weekday)['Distance'].median().plot(kind='bar', figsize=(12,6))
len(list(set(list(final_topbikes['id']))))
```
# The longest trips
```
final_topbikes.sort_values(by='Distance', ascending=False)
```
# Saving Top Bikes
```
final_topbikes.to_csv('final_top_bikes.csv')
```
# Looking for Info for Carto
```
df.info()
#Getting right time
del_id = list(df_never_moved['id'])
df['Timestamp +2'] = df['Timestamp'].apply(addtwo)
#Developing list for the dates
times = df['Timestamp +2'].value_counts().reset_index()
times['date'] = times['index'].apply(lambda x:
dt.datetime.strptime(x,'%Y.%m.%d %H:%M:%S'))
date_list = list(times.sort_values(by='date')['index'])
final_date_list = []
for elem in date_list:
if '07:00' in elem:
final_date_list.append(elem)
elif '12:00' in elem:
final_date_list.append(elem)
elif '15:00' in elem:
final_date_list.append(elem)
elif '00:00' in elem:
final_date_list.append(elem)
else:
pass
#Pulling out relevant files to make final df for carto
df_carto = df.head(0)
for elem in final_date_list:
df_new = df[df['Timestamp +2'] == elem]
df_carto = pd.concat([df_carto, df_new])
df_carto.head()
del df['Timestamp']
df_carto.to_csv('obike_carto.csv')
len(final_date_list)
36390 / 55
```
# Model simplification
## Introduction
In biology, we often use statistics to compare competing hypotheses in order to work out the simplest explanation for some data. This often involves collecting several explanatory variables that describe different hypotheses and then fitting them together in a single model, and often including interactions between those variables.
In all likelihood, not all of these model *terms* will be important. If we remove unimportant terms, then the explanatory power of the model will get worse, but might not get significantly worse.
```{epigraph}
"It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience."
-- Albert Einstein
```
Or to paraphrase:
```{epigraph}
"Everything should be made as simple as possible, but no simpler."
```
The approach we will look at is to start with a *maximal model* — the model that contains everything that might be important — and simplify it towards the *null model* — the
model that says that none of your variables are important. Hopefully, there is a point somewhere in between where you can't remove any further terms without making the model significantly worse: this is called the *minimum adequate model*.
<img src="./graphics/minmodflow.png" width="600px">
### Chapter aims
The main aim of this chapter[$^{[1]}$](#fn1) is to learn how to build and then simplify complex models by removing non-explanatory terms, to arrive at the *Minimum Adequate Model*.
### The process of model simplification
Model simplification is an iterative process. The flow diagram below shows how it works: at each stage you try and find an acceptable simplification. If successful, then you start again with the new simpler model and try and find a way to simplify this, until eventually, you can't find anything more to remove.
<img src="./graphics/maxmodflow.png" width="600px">
As always, we can use an $F$-test to compare two models and see if they have significantly different explanatory power (there are also other ways to do this, such as using the Akaike Information Criterion, but we will not cover this here). In this context, the main thing to remember is that significance of the $F$-test used to compare a model and its simplified counterpart is a *bad* thing — it means that we've removed a term from the fitted model that makes it *significantly* worse.
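Concretely, for nested models where the simpler model has residual sum of squares $RSS_1$ with $p_1$ parameters and the more complex one has $RSS_2$ with $p_2$ parameters, both fitted to $n$ observations, the statistic compared against an $F_{p_2-p_1,\,n-p_2}$ distribution is:

$$F = \frac{(RSS_1 - RSS_2)/(p_2 - p_1)}{RSS_2 / (n - p_2)}$$

A large $F$ (small p-value) means the extra terms explain a significant amount of variation, which is why a significant result when dropping a term tells you to keep that term.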
## An example
We'll be using the mammal dataset for this practical, so once again:
$\star$ Make sure you have changed the working directory to your stats module `code` folder.
$\star$ Create a new blank script called `MyModelSimp.R`.
$\star$ Load the mammals data into a data frame called `mammals`:
```
mammals <- read.csv('../data/MammalData.csv', stringsAsFactors = T)
```
In previous chapters, we looked at how the categorical variables `GroundDwelling` and `TrophicLevel` predicted genome size in mammals. In this chapter, we will add in two more continuous variables: litter size and body mass. The first thing we will do is to log both variables and reduce the dataset to the rows for which all of these data are available:
```
#get logs of continuous variables
mammals$logLS <- log(mammals$LitterSize)
mammals$logCvalue <- log(mammals$meanCvalue)
mammals$logBM <- log(mammals$AdultBodyMass_g)
# reduce dataset to five key variables
mammals <- subset(mammals, select = c(logCvalue, logLS, logBM,
TrophicLevel, GroundDwelling))
# remove the row with missing data
mammals <- na.omit(mammals)
```
$\star$ Copy the code above into your script and run it.
Check that the data you end up with has this structure:
```
str(mammals)
```
## A Maximal model
First let's fit a model including all of these variables and all of the interactions:
```
model <- lm(formula = logCvalue ~ logLS * logBM * TrophicLevel * GroundDwelling, data = mammals)
```
$\star$ Add this model-fitting step in your script.
$\star$ Look at the output of `anova(model)` and `summary(model)`.
Scared? Don't be! There are a number of points to this exercise:
1. These tables show exactly the kind of output you've seen before. Sure, there are lots of rows but each row is just asking whether a model term (`anova`) or a model coefficient (`summary`) is significant.
2. Some of the rows are significant, others aren't: some of the model terms are not explanatory.
3. The two tables show slightly different things - lots of stars for the `anova` table and only a few for the `summary` table.
4. That last line in the `anova` table: `logLS:logBM:TrophicLevel:GroundDwelling`. This is an interaction of four variables capturing how the slope for litter size changes for different body masses for species in different trophic groups and which are arboreal or ground dwelling. Does this seem easy to understand?
The real lesson here is that it is easy to fit complicated models in R.
*Understanding and explaining them is a different matter*.
The temptation is always to start with the most complex possible model but this is rarely a good idea.
## A better maximal model
Instead of all possible interactions, we'll consider two-way interactions: how do pairs of variables affect each other?
There is a shortcut for this: `y ~ (a + b + c)^2` gets all two way combinations of the variables in the brackets, so is a quicker way of getting this model:
`y ~ a + b + c + a:b + a:c + b:c`
So let's use this to fit a simpler maximal model:
```
model <- lm(logCvalue ~ (logLS + logBM + TrophicLevel + GroundDwelling)^2, data = mammals)
```
The `anova` table for this model looks like this:
```
anova(model)
```
The first lines are the *main effects*, which are all significant or near significant. Then there are the six interactions. One of these is very significant: `logBM:GroundDwelling`,
which suggests that the slope of log C value with body mass differs between ground dwelling and non-ground dwelling species. The other interactions are non-significant although some are close.
$\star$ Run this model in your script.
$\star$ Look at the output of `anova(model)` and `summary(model)`.
$\star$ Generate and inspect the model diagnostic plots.
## Model simplification
Now let's simplify the model we fitted above. Model simplification is not as straightforward as just dropping terms. Each time you remove a term from a model, the model will change: the model will get worse, since some of the sums of squares are no longer explained, but the remaining variables may partly compensate for this loss of explanatory power. The main point is that if it gets only a little worse, it's OK, as the tiny amount of additional variation explained by the term you removed is not really worth it.
But how much is "tiny amount"? This is what we will learn now by using the $F$-test. Again, remember: significance of the $F$-test used to compare a model and its simplified counterpart is a *bad* thing — it means that we've removed a term from the fitted model that makes it *significantly* worse.
The first question is: *what terms can you remove from a model*? Obviously, you only want to remove non-significant terms, but there is another rule – you cannot remove a main effect or an interaction while those main effects or interactions are present in a more complex interaction. For example, in the model `y ~ a + b + c + a:b + a:c + b:c`, you cannot drop `c` without dropping both `a:c` and `b:c`.
The R function `drop.scope` tells you what you can drop from a model. Some examples:
```
drop.scope(model)
drop.scope(y ~ a + b + c + a:b)
drop.scope(y ~ a + b + c + a:b + b:c + a:b:c)
```
The last thing we need to do is work out how to remove a term from a model. We could type out the model again, but there is a shortcut using the function `update`:
```
# a simple model
f <- y ~ a + b + c + b:c
# remove b:c from the current model
update(f, . ~ . - b:c)
# model g as a response using the same explanatory variables.
update(f, g ~ .)
```
Yes, the syntax is weird. The function uses a model or a formula and then allows you to alter the current formula. The dots in the code `. ~ . ` mean 'use whatever is currently in the response or explanatory variables'. It gives a simple way of changing a model.
Now that you have learned the syntax, let's try model simplification with the mammals dataset.
From the above `anova` and `drop.scope` output, we know that the interaction `TrophicLevel:GroundDwelling` is not significant and is a valid term to drop. So, let's remove this term:
```
model2 <- update(model, . ~ . - TrophicLevel:GroundDwelling)
```
And now use ANOVA to compare the two models:
```
anova(model, model2)
```
This tells us that `model2` is *not* significantly worse than `model`. That is, dropping that one interaction term did not result in much of a loss of predictability.
Now let's look at this simplified model and see what else can be removed:
```
anova(model2)
drop.scope(model2)
```
$\star$ Add this first simplification to your script and re-run it.
$\star$ Look at the output above and decide which term could be deleted next.
$\star$ Using the code above as a model, create `model3` as the next simplification. (remember to use `model2` in your `update` call and not `model`).
## Exercise
Now for a more difficult exercise:
$\star$ Using the code above to guide you, try and find a minimal adequate model that you are happy with. In each step, the output of `anova(model, modelN)` should be non-significant (where $N$ is the current step).
It can be important to consider both `anova` and `summary` tables. It can be worth trying to remove things that look significant in one table but not the other — some terms can explain significant variation on the `anova` table but the coefficients are not significant.
Remember to remove *terms*: with categorical variables, several coefficients in the `summary` table may come from one term in the model and have to be removed together.
When you have got your final model, save the model as an R data file: `save(modelN, file='myFinalModel.Rda')`.
-----
<a id="fn1"></a>
[1]: Here you work with the script file `ModelSimp.R`
```
import torch
from rllib.policy import NNPolicy, FelixPolicy
from rllib.value_function import NNValueFunction
from rllib.util import tensor_to_distribution
from rllib.util.neural_networks.utilities import random_tensor, zero_bias, init_head_weight
import matplotlib.pyplot as plt
from matplotlib import rcParams
# If the figures are not nicely visualized in your browser, change the following line.
rcParams['font.size'] = 16
%load_ext autoreload
%autoreload 2
def plot_policy(policy, num_states=10, num_samples=1000):
dim_state, dim_action = policy.dim_state, policy.dim_action
fig, ax = plt.subplots(num_states, 2, figsize=(18, 10), dpi= 80, facecolor='w', edgecolor='k', sharex='col')
for i in range(num_states):
if i == 0:
state = torch.zeros(dim_state)
else:
state = random_tensor(discrete=False, dim=dim_state, batch_size=None)
out = policy(state)
print(out[1])
normal = tensor_to_distribution(out)
tanh = tensor_to_distribution(out, tanh=True)
ax[i, 0].hist(normal.sample((1000,)).squeeze().clamp_(-1, 1), density=True)
ax[i, 1].hist(tanh.sample((1000,)).squeeze(), density=True)
ax[i, 0].set_xlim([-1.1, 1.1])
ax[i, 1].set_xlim([-1.1, 1.1])
ax[0, 0].set_title('TruncNormal')
ax[0, 1].set_title('Tanh')
ax[-1, 0].set_xlabel('Action')
ax[-1, 1].set_xlabel('Action')
plt.show()
def plot_value_function(value_function, num_samples=1000):
dim_state = value_function.dim_state
fig, ax = plt.subplots(1, 1, figsize=(18, 10), dpi= 80, facecolor='w', edgecolor='k')
state = random_tensor(discrete=False, dim=dim_state, batch_size=num_samples)
value = value_function(state)
ax.hist(value.squeeze().detach().numpy(), density=True)
ax.set_xlabel('Value')
plt.show()
```
# Policy Initialization
```
dim_state, dim_action = 4, 1
policy = FelixPolicy(dim_state, dim_action)
plot_policy(policy)
```
## NNPolicy with Default Initialization
```
dim_state, dim_action = 4, 1
policy = NNPolicy(dim_state, dim_action, biased_head=True)
plot_policy(policy)
dim_state, dim_action = 4, 1
policy = NNPolicy(dim_state, dim_action, biased_head=False) # Unbias the head?
plot_policy(policy)
```
## NNPolicy with Zero Bias Initialization
```
dim_state, dim_action = 4, 1
policy = NNPolicy(dim_state, dim_action, biased_head=True)
zero_bias(policy)
plot_policy(policy)
```
## NNPolicy with Default Head Initialization
```
dim_state, dim_action = 4, 1
policy = NNPolicy(dim_state, dim_action, biased_head=True)
# zero_bias(policy)
init_head_weight(policy)
plot_policy(policy)
```
## NNPolicy with Zero Bias and Default Weight Initialization
```
dim_state, dim_action = 4, 1
policy = NNPolicy(dim_state, dim_action, biased_head=True)
zero_bias(policy)
init_head_weight(policy)
plot_policy(policy)
```
## Effect of Initial Std Dev
```
dim_state, dim_action = 4, 1
policy = NNPolicy(dim_state, dim_action, initial_scale=0.1)
zero_bias(policy)
init_head_weight(policy) # Increase scale weight
plot_policy(policy)
dim_state, dim_action = 4, 1
policy = NNPolicy(dim_state, dim_action, initial_scale=0.01)
zero_bias(policy)
init_head_weight(policy) # Increase scale weight
plot_policy(policy)
dim_state, dim_action = 4, 1
policy = NNPolicy(dim_state, dim_action, initial_scale=0.5)
zero_bias(policy)
init_head_weight(policy) # Increase scale weight
plot_policy(policy)
```
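The `Tanh` histograms above come from squashing samples of the underlying Normal through $\tanh$; the bounding effect can be reproduced with the stdlib alone (a sketch independent of `rllib`):

```python
import math
import random

random.seed(0)
# Draw from a raw Normal policy head, then squash through tanh:
# whatever the scale, every squashed action lands strictly in (-1, 1).
raw = [random.gauss(0.0, 0.5) for _ in range(1000)]
squashed = [math.tanh(x) for x in raw]
```

Shrinking the initial scale concentrates both histograms near zero, which is what the `initial_scale=0.01` plots show.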
# Value Functions
```
dim_state, dim_action = 4, 1
value_function = NNValueFunction(dim_state)
zero_bias(value_function)
init_head_weight(value_function) # Increase scale weight
torch.nn.init.uniform_(value_function.nn.head.bias, 2 + -0.1, 2 + 0.1)
plot_value_function(value_function)
```
# Training Batch Reinforcement Learning Policies with Amazon SageMaker RL and Coach library
For many real-world problems, the reinforcement learning (RL) agent needs to learn from historical data that was generated by some deployed policy. For example, we may have historical data of experts playing games, users interacting with a website or sensor data from a control system. This notebook shows an example of how to use batch RL to train a new policy from an offline dataset [1]. We use gym `CartPole-v0` as a fake simulated system to generate the offline dataset, and the RL agents are trained using Amazon SageMaker RL.
We may want to evaluate the policy learned from historical data before deployment. Since simulators may not be available in all use cases, we need to evaluate how good the learned policy is by using held-out historical data. This is called off-policy evaluation or counterfactual evaluation. In this notebook, we evaluate the policy during training using several off-policy evaluation metrics.
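One common counterfactual metric is the ordinary importance-sampling estimator, which reweights each logged reward by how much more (or less) likely the new policy is to take the logged action. A minimal sketch of the idea (a hypothetical helper, not Coach's implementation):

```python
def is_estimate(rewards, behavior_probs, target_probs):
    # Reweight each logged reward by the likelihood ratio of the
    # target policy to the behavior policy that generated the data.
    weights = [t / b for t, b in zip(target_probs, behavior_probs)]
    return sum(w * r for w, r in zip(weights, rewards)) / len(rewards)
```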
We can deploy the policy using SageMaker Hosting endpoint. However, some use cases may not require a persistent serving endpoint with sub-second latency. Here we demonstrate how to deploy the policy with [SageMaker Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html), where large volumes of input state features can be inferenced with high throughput.
Figure below shows an overview of the entire notebook.

## Pre-requisites
### Roles and permissions
To get started, we'll import the Python libraries we need, set up the environment with a few pre-requisites for permissions and configurations.
```
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import HTML
import time
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
# install gym environments if needed
!pip install gym
from env_utils import VectoredGymEnvironment
```
### Setup S3 buckets
Set up the linkage and authentication to the S3 bucket that you want to use for checkpoints and metadata.
```
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
region_name = sage_session.boto_region_name
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
print("S3 bucket path: {}".format(s3_output_path))
```
### Define Variables
We define variables such as the job prefix for the training jobs and the image path for the container (only when using BYOC).
```
# create unique job name
job_name_prefix = 'rl-batch-cartpole'
```
### Configure settings
You can run your RL training jobs on a SageMaker notebook instance or on your own machine. In both of these scenarios, you can run the following in either `local` or `SageMaker` modes. The `local` mode uses the SageMaker Python SDK to run your code in a local container before deploying to SageMaker. This can speed up iterative testing and debugging while using the same familiar Python SDK interface. You just need to set `local_mode = True`.
```
%%time
# run in local mode?
local_mode = False
```
### Create an IAM role
Either get the execution role when running from a SageMaker notebook `role = sagemaker.get_execution_role()` or, when running from local machine, use utils method `role = get_execution_role()` to create an execution role.
```
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role()
print("Using IAM role arn: {}".format(role))
```
### Install docker for `local` mode
In order to work in `local` mode, you need to have docker installed. When running from your local machine, please make sure that you have docker or docker-compose (for local CPU machines) and nvidia-docker (for local GPU machines) installed. Alternatively, when running from a SageMaker notebook instance, you can simply run the following script to install the dependencies.
Note that you can only run a single local notebook at a time.
```
# only run from SageMaker notebook instance
if local_mode:
!/bin/bash ./common/setup.sh
```
## Collect offline data
In order to do Batch RL training, we need to first prepare the dataset generated by a deployed policy. In real-world scenarios, customers can collect this offline data by interacting with the live environment using the already-deployed agent. In this notebook, we use OpenAI gym `CartPole-v0` as the environment to mimic a live environment, and a random policy with a uniform action distribution to mimic a deployed agent. By interacting with multiple environments simultaneously, we can gather more trajectories from the environments.
Here is a short introduction to the cart-pole balancing problem, where a pole is attached by an un-actuated joint to a cart, moving along a frictionless track.
1. *Objective*: Prevent the pole from falling over
2. *Environment*: The environment used in this example is part of OpenAI Gym, corresponding to the version of the cart-pole problem described by Barto, Sutton, and Anderson [2]
3. *State*: Cart position, cart velocity, pole angle, pole velocity at tip
4. *Action*: Push cart to the left, push cart to the right
5. *Reward*: Reward is 1 for every step taken, including the termination step
```
# initiate 100 environments to collect rollout data
NUM_ENVS = 100
NUM_EPISODES = 5
vectored_envs = VectoredGymEnvironment('CartPole-v0', NUM_ENVS)
```
Now we have 100 environments of `Cartpole-v0` ready. We'll collect 5 episodes from each environment so we’ll have 500 episodes of data for training. We start from a random policy that generates the same uniform action probabilities regardless of the state features.
```
# initiate a random policy by setting action probabilities as uniform distribution
action_probs = [[1/2, 1/2] for _ in range(NUM_ENVS)]
df = vectored_envs.collect_rollouts_with_given_action_probs(action_probs=action_probs, num_episodes=NUM_EPISODES)
# the rollout dataframes contain attributes: action, action_probs, episode_id, reward, cumulative_rewards, state_features
# only show cumulative rewards at the last step of the episode
df.head()
```
We can use the average cumulative reward of the random policy as a baseline for the Batch RL trained policy.
```
# average cumulative rewards for each episode
avg_rewards = df['cumulative_rewards'].sum() / (NUM_ENVS * NUM_EPISODES)
print("Average cumulative rewards over {} episodes rollouts was {}.".format((NUM_ENVS * NUM_EPISODES), avg_rewards))
```
### Save Dataframe as CSV for Batch RL Training
Coach Batch RL supports reading off-policy data in CSV format, so we will dump our collected rollout data as CSV.
```
# dump dataframe as csv file
df.to_csv("src/cartpole_dataset.csv", index=False)
```
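For illustration, a CSV with the same attributes as the rollout dataframe above can be built by hand; the values here are made up, not real rollout data:

```python
import pandas as pd

# Tiny two-row dataset mirroring the rollout attributes: action, action_probs,
# episode_id, reward, cumulative_rewards, state_features (illustrative values).
df = pd.DataFrame({
    'action': [0, 1],
    'action_probs': [[0.5, 0.5], [0.5, 0.5]],
    'episode_id': [0, 0],
    'reward': [1.0, 1.0],
    'cumulative_rewards': [0.0, 2.0],
    'state_features': [[0.1, 0.0, -0.2, 0.3], [0.2, 0.1, -0.1, 0.2]],
})
df.to_csv('toy_rollout.csv', index=False)
```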
## Configure the presets for RL algorithm
The presets that configure the Batch RL training jobs are defined in the `preset-cartpole-ddqnbcq.py` file, which is also uploaded in the `src` directory. Using the preset file, you can define agent parameters to select the specific agent algorithm. You can also set the environment parameters, define the schedule and visualization parameters, and define the graph manager. The schedule presets define the number of heat-up steps, periodic evaluation steps, and training steps between evaluations.
These presets can be overridden at runtime through the `RLCOACH_PRESET` hyperparameter, which can additionally be used to define custom hyperparameters.
```
!pygmentize src/preset-cartpole-ddqnbcq.py
```
In this notebook, we use DDQN [6] to update the policy in an off-policy manner, combined with BCQ [5] to address the error induced by inaccurately estimated values for unseen state-action pairs. The training is completely offline.
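Schematically, DDQN selects the next action with the online network but evaluates it with the target network, while BCQ restricts that selection to actions the batch (behavior) policy is likely enough to take. The sketch below illustrates the combined one-step target with made-up numbers; it is a simplification, not Coach's actual implementation, and the `threshold` parameter is hypothetical:

```python
import numpy as np

def ddqn_bcq_target(reward, gamma, q_online, q_target, action_probs, threshold=0.3):
    """One-step target: r + gamma * Q_target(s', argmax over admissible actions)."""
    # BCQ-style filter: only actions likely enough under the batch policy are admissible
    admissible = action_probs >= threshold * action_probs.max()
    masked_q = np.where(admissible, q_online, -np.inf)  # DDQN: select with the online net
    a_star = int(np.argmax(masked_q))
    return reward + gamma * q_target[a_star]            # evaluate with the target net

target = ddqn_bcq_target(
    reward=1.0, gamma=0.99,
    q_online=np.array([2.0, 5.0, 3.0]),
    q_target=np.array([1.5, 4.0, 2.5]),
    action_probs=np.array([0.6, 0.05, 0.35]),  # action 1 is rarely seen in the batch
)
# action 1 has the highest Q-value but is filtered out; action 2 is selected instead
```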
## Write the Training Code
The training code is written in the file `train-coach.py`, which is uploaded in the `src` directory.
First import the environment files and the preset files, and then define the `main()` function.
```
!pygmentize src/train-coach.py
```
## Train the RL model using the Python SDK Script mode
If you are using local mode, the training will run on the notebook instance. When using SageMaker for training, you can select a GPU or CPU instance. The RLEstimator is used for training RL jobs.
1. Specify the source directory where the environment, presets, and training code are uploaded.
2. Specify the entry point as the training code
3. Define the training parameters such as the instance count, base job name, and S3 output path.
4. Specify the hyperparameters for the RL agent algorithm. The `RLCOACH_PRESET` can be used to specify the RL agent algorithm you want to use.
```
%%time
if local_mode:
instance_type = 'local'
else:
instance_type = "ml.m4.xlarge"
estimator = RLEstimator(entry_point="train-coach.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='1.0.0',
framework=RLFramework.TENSORFLOW,
role=role,
instance_type=instance_type,
instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
hyperparameters = {
"RLCOACH_PRESET": "preset-cartpole-ddqnbcq",
"save_model": 1
}
)
estimator.fit()
```
## Store intermediate training output and model checkpoints
The output from the training job above is stored on S3. The intermediate folder contains gifs and metadata of the training. We'll need this metadata for metrics visualization and model evaluation.
```
job_name=estimator._current_job_name
print("Job name: {}".format(job_name))
s3_url = "s3://{}/{}".format(s3_bucket,job_name)
if local_mode:
output_tar_key = "{}/output.tar.gz".format(job_name)
else:
output_tar_key = "{}/output/output.tar.gz".format(job_name)
intermediate_folder_key = "{}/output/intermediate/".format(job_name)
output_url = "s3://{}/{}".format(s3_bucket, output_tar_key)
intermediate_url = "s3://{}/{}".format(s3_bucket, intermediate_folder_key)
print("S3 job path: {}".format(s3_url))
print("Output.tar.gz location: {}".format(output_url))
print("Intermediate folder path: {}".format(intermediate_url))
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
```
## Visualization
### Plot metrics for training job
We can pull the Off-Policy Evaluation (OPE) metrics of the training and plot them to see the performance of the model over time.
```
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
csv_file_name = "worker_0.batch_rl_graph.main_level.main_level.agent_0.csv"
key = os.path.join(intermediate_folder_key, csv_file_name)
wait_for_s3_object(s3_bucket, key, tmp_dir, training_job_name=job_name)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Sequential Doubly Robust'])
df = df.dropna(subset=['Weighted Importance Sampling'])
plt.figure(figsize=(12,5))
plt.xlabel('Number of epochs')
ax1 = df['Weighted Importance Sampling'].plot(color='blue', grid=True, label='WIS')
ax2 = df['Sequential Doubly Robust'].plot(color='red', grid=True, secondary_y=True, label='SDR')
h1, l1 = ax1.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
plt.legend(h1+h2, l1+l2, loc=1)
plt.show()
```
There is a set of methods for investigating the performance of the current trained policy without interacting with a simulator or live environment. They can be used to estimate the goodness of the policy, based on the dataset collected from another policy. Here we show two of these OPE metrics: WIS (Weighted Importance Sampling) [3] and SDR (Sequential Doubly Robust) [4]. As the plot shows, these metrics improve as the learning agent iterates over the given dataset.
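To make the distinction concrete: weighted importance sampling normalizes by the sum of the per-episode importance weights instead of the episode count, trading a small bias for much lower variance. A standalone numeric sketch with illustrative weights and returns (not real rollout values):

```python
import numpy as np

# Hypothetical per-episode importance-sampling weights and observed returns.
weights = np.array([2.0, 0.5, 1.5])
returns = np.array([10.0, 20.0, 15.0])

is_estimate  = float(np.mean(weights * returns))                    # ordinary IS
wis_estimate = float(np.sum(weights * returns) / np.sum(weights))   # WIS (self-normalized)
# is_estimate = 17.5, wis_estimate = 13.125
```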
## Evaluation of RL models
To evaluate the model trained with off-policy data, we need to see the cumulative rewards the agent obtains by interacting with the environment. We use the last checkpointed model to run the evaluation of the RL agent. We use a different preset file here, `preset-cartpole-ddqnbcq-env.py`, to let the RL agent interact with the environment and collect rewards.
### Load checkpointed model
The checkpoint is passed on for evaluation / inference in the checkpoint channel. In local mode, we can simply use the local directory, whereas in SageMaker mode, it needs to be moved to S3 first.
```
wait_for_s3_object(s3_bucket, output_tar_key, tmp_dir, training_job_name=job_name)
if not os.path.isfile("{}/output.tar.gz".format(tmp_dir)):
raise FileNotFoundError("File output.tar.gz not found")
os.system("tar -xvzf {}/output.tar.gz -C {}".format(tmp_dir, tmp_dir))
if local_mode:
checkpoint_dir = "{}/data/checkpoint".format(tmp_dir)
else:
checkpoint_dir = "{}/checkpoint".format(tmp_dir)
print("Checkpoint directory {}".format(checkpoint_dir))
if local_mode:
checkpoint_path = 'file://{}'.format(checkpoint_dir)
print("Local checkpoint file path: {}".format(checkpoint_path))
else:
checkpoint_path = "s3://{}/{}/checkpoint/".format(s3_bucket, job_name)
if not os.listdir(checkpoint_dir):
raise FileNotFoundError("Checkpoint files not found under the path")
os.system("aws s3 cp --recursive {} {}".format(checkpoint_dir, checkpoint_path))
print("S3 checkpoint file path: {}".format(checkpoint_path))
estimator_eval = RLEstimator(entry_point="evaluate-coach.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='1.0.0',
framework=RLFramework.TENSORFLOW,
role=role,
instance_type=instance_type,
instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
hyperparameters = {
"RLCOACH_PRESET": "preset-cartpole-ddqnbcq-env",
"evaluate_steps": 1000
}
)
estimator_eval.fit({'checkpoint': checkpoint_path})
```
### Batch Transform
As we can see from the above evaluation job, the trained agent gets a total reward of around `200`, compared to a total reward of around `25` in our offline dataset. Therefore, we can confirm that the agent has learned a better policy from the off-policy data.
After we get the trained model, we can use it to do SageMaker Batch Transform, where customers can provide large volumes of input state features and get predictions with high throughput.
```
import time
from sagemaker.tensorflow.model import TensorFlowModel
if local_mode:
sage_session = sagemaker.local.LocalSession()
# Create SageMaker model entity by using model data generated by the estimator
model = TensorFlowModel(model_data=estimator.model_data,
framework_version="1.15",
sagemaker_session=sage_session,
role=role)
prefix = "batch_test"
# setup input data prefix and output data prefix for batch transform
batch_input = 's3://{}/{}/{}/input/'.format(s3_bucket, job_name, prefix) # The location of the test dataset
batch_output = 's3://{}/{}/{}/output/'.format(s3_bucket, job_name, prefix) # The location to store the results of the batch transform job
print("Inputpath for batch transform: {}".format(batch_input))
print("Outputpath for batch transform: {}".format(batch_output))
```
In this notebook, we use the states of the environments as input for the Batch Transform.
```
import time
file_name = 'env_states_{}.json'.format(int(time.time()))
# resetting the environments
vectored_envs.reset_all_envs()
# dump environment states into jsonlines file
vectored_envs.dump_environment_states(tmp_dir, file_name)
```
In order to use SageMaker Batch Transform, we first need to upload the input data from the local machine to the S3 bucket.
```
%%time
from pathlib import Path
local_input_file_path = Path(tmp_dir) / file_name
s3_input_file_path = batch_input + file_name # Path library will remove :// from s3 path
print("Copy file from local path '{}' to s3 path '{}'".format(local_input_file_path, s3_input_file_path))
assert os.system("aws s3 cp {} {}".format(local_input_file_path, s3_input_file_path)) == 0
print("S3 batch input file path: {}".format(s3_input_file_path))
```
Similar to how we launch a training job on SageMaker, we can initiate a batch transform job in either `local` or `SageMaker` mode.
```
if local_mode:
instance_type = 'local'
else:
instance_type = "ml.m4.xlarge"
transformer = model.transformer(instance_count=1, instance_type=instance_type, output_path=batch_output, assemble_with = 'Line', accept = 'application/jsonlines', strategy='SingleRecord')
transformer.transform(data=batch_input, data_type='S3Prefix', content_type='application/jsonlines', split_type='Line', join_source='Input')
transformer.wait()
```
After the batch transform job finishes, we can download the prediction output from the S3 bucket to the local machine.
```
import subprocess
# get the latest generated output file
cmd = "aws s3 ls {} --recursive | sort | tail -n 1".format(batch_output)
result = subprocess.check_output(cmd, shell=True).decode("utf-8").split(' ')[-1].strip()
local_output_file_path = Path(tmp_dir) / f"{file_name}.out"
s3_output_file_path = 's3://{}/{}'.format(s3_bucket,result)
print("Copy file from s3 path '{}' to local path '{}'".format(s3_output_file_path, local_output_file_path))
os.system("aws s3 cp {} {}".format(s3_output_file_path, local_output_file_path))
print("S3 batch output file local path: {}".format(local_output_file_path))
import subprocess
batcmd="cat {}".format(local_output_file_path)
results = subprocess.check_output(batcmd, shell=True).decode("utf-8").split('\n')
results[:10]
```
In this notebook, we used simulated environments to collect rollout data from a random policy. Assuming the updated policy is now deployed, we can use Batch Transform to collect rollout data from this policy.
Here are the steps on how to collect rollout data with Batch Transform:
1. Use Batch Transform to get action predictions, provided observation features from the live environment at timestep *t*
2. Deployed agent takes suggested actions against the environment (simulator / real) at timestep *t*
3. Environment returns new observation features at timestep *t+1*
4. Return back to step 1. Use Batch Transform to get action predictions at timestep *t+1*
This iterative procedure enables us to collect a set of data that covers whole episodes, similar to what we showed at the beginning of the notebook. Once the data is sufficient, we can use it to kick off Batch RL training again.
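The loop above can be sketched with toy stand-ins, where `batch_predict` plays the role of the Batch Transform call and `env_step` the live environment (both names are hypothetical, and the dynamics are trivial placeholders):

```python
import random

def batch_predict(observations):
    """Stand-in for Batch Transform: one action prediction per observation."""
    return [random.choice([0, 1]) for _ in observations]

def env_step(obs, action):
    """Stand-in for the live environment: returns the observation at t+1."""
    return obs + 1

observations = [0] * 5   # 5 concurrent "episodes"
rollout = []
for t in range(3):
    actions = batch_predict(observations)                               # step 1 / 4
    next_obs = [env_step(o, a) for o, a in zip(observations, actions)]  # steps 2-3
    rollout.append(list(zip(observations, actions)))                    # logged data
    observations = next_obs
```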
Batch Transform works well when there are multiple episodes interacting with the environments concurrently. A typical use case is an email campaign, where each email user is an independent episode interacting with the deployed policy. Batch Transform can efficiently collect rollout data from millions of user contexts. The collected rollout data can then be supplied to Batch RL training to train a better policy to serve the email users.
### Reference
1. Batch Reinforcement Learning with Coach: https://github.com/NervanaSystems/coach/blob/master/tutorials/4.%20Batch%20Reinforcement%20Learning.ipynb
2. AG Barto, RS Sutton and CW Anderson, "Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problem", IEEE Transactions on Systems, Man, and Cybernetics, 1983.
3. Thomas, Philip, Georgios Theocharous, and Mohammad Ghavamzadeh. "High confidence policy improvement." International Conference on Machine Learning. 2015.
4. Jiang, Nan, and Lihong Li. "Doubly robust off-policy value evaluation for reinforcement learning." arXiv preprint arXiv:1511.03722 (2015).
5. Fujimoto, Scott, David Meger, and Doina Precup. "Off-policy deep reinforcement learning without exploration." arXiv preprint arXiv:1812.02900 (2018).
6. Van Hasselt, Hado, Arthur Guez, and David Silver. "Deep reinforcement learning with double q-learning." Thirtieth AAAI conference on artificial intelligence. 2016.
# Q-Learning
## Cliff walking
This is a simple implementation of the Gridworld Cliff reinforcement learning task, adapted from Example 6.6 (page 106) of Reinforcement Learning: An Introduction by Sutton and Barto (http://incompleteideas.net/book/bookdraft2018jan1.pdf), with inspiration from https://github.com/dennybritz/reinforcement-learning/blob/master/lib/envs/cliff_walking.py.
The board is a 4x12 matrix, with (using NumPy matrix indexing):
1. [3, 0] as the start at bottom-left
2. [3, 11] as the goal at bottom-right
3. [3, 1..10] as the cliff at bottom-center
Each time step incurs -1 reward, and stepping into the cliff incurs -100 reward and a reset to the start. An episode terminates when the agent reaches the goal.


```
%matplotlib inline
import gym
import matplotlib
import numpy as np
import sys
from collections import defaultdict
import itertools
import seaborn as sns
class Q_Learning:
def __init__(self,
num_episodes = 10000,
verbose=False,
discount_factor=0.9,
alpha=0.5,
epsilon=0.1):
self.verbose = verbose
self.nE = num_episodes
self.df=discount_factor
self.alpha=alpha
self.epsilon=epsilon
self.greedy_pol = None
self.Q = None
######################################################################
############################ Core Functions ##########################
######################################################################
def _td_update(self, state, action, env):
'''
Update the Q table for a given state/action pair
Input:
state : the current state
action: the selected action
env: game environment
'''
# take action
next_state, reward, done, _ = env.step(action)
# get best action in next state
best_next_action = np.argmax(self.Q[next_state])
# calculate TD target = r + gamma * max_a Q(s',a)
td_target = reward + self.df * self.Q[next_state][best_next_action]
# calculate error: target - Q(s,a)
td_delta = td_target - self.Q[state][action]
# updated Q-Table: Q(s,a) += alpha * td_error
self.Q[state][action] += self.alpha * td_delta
self.greedy_pol.set_Q(self.Q.copy())
return done, next_state
######################################################################
############################ Helper Functions ########################
######################################################################
def _init_model(self, env):
self.Q = defaultdict(lambda: np.zeros(env.action_space.n))
self.greedy_pol = Epsilon_greedy_policy(self.Q.copy(), self.epsilon)
def run_episode(self, env):
'''
Plays one round of the game and updates the Q table
Input: the environment to learn on
'''
state = env.reset()
# t is the step count
for t in itertools.count():
# Take a step
action_probs = self.greedy_pol.get_policy_for_state(state)
action = np.random.choice(np.arange(len(action_probs)), p=action_probs)
# update Q table
done, next_state = self._td_update(state, action, env)
# stop after 50 steps or when game is finished
if done or t == 50:
break
state = next_state
def train(self, env, force_init=False):
'''
Trains the Q learner for a given environment
Input: env - the environment you want to train on
'''
if self.Q is None or force_init:
self._init_model(env)
for i_episode in range(1, self.nE + 1):
if i_episode % 10 == 0 and self.verbose:
print("Episode %i" % (i_episode))
sys.stdout.flush()
self.run_episode(env)
######################################################################
############################ Helper Functions ########################
######################################################################
class Greedy_policy():
def __init__(self, Q):
self.Q = Q
def set_Q(self, Q):
self.Q = Q
def get_policy_for_state(self, s):
A = np.zeros_like(self.Q[s], dtype=float)
best_action = np.argmax(self.Q[s])
A[best_action] = 1.0
return A
class Epsilon_greedy_policy(Greedy_policy):
def __init__(self, Q, epsilon):
super(Epsilon_greedy_policy, self).__init__(Q)
self.epsilon = epsilon
def get_policy_for_state(self, s):
greedy_policy = super(Epsilon_greedy_policy, self).get_policy_for_state(s)
return self.make_greedy_policy_epsilon_greedy(greedy_policy)
def make_greedy_policy_epsilon_greedy(self, greedy_pol):
best_action = np.argmax(greedy_pol)
greedy_pol += self.epsilon/len(greedy_pol)
greedy_pol[best_action] -= self.epsilon
return greedy_pol
def mk_heatmap(Q, action, action_name):
world = np.zeros(env.shape)
for s, v in Q.items():
pos = np.unravel_index(s, env.shape)
world[pos] = v[action]
sns.set(rc={'figure.figsize':(11.7,5.27)})
sns.heatmap(world, vmax = 10, vmin = -10, annot=True, cmap = sns.diverging_palette(240, 10, n=9), linewidths=.5, linecolor='black').set_title(action_name)
from gym.envs.toy_text.cliffwalking import CliffWalkingEnv
env = CliffWalkingEnv()
q_learner = Q_Learning(num_episodes=100)
q_learner.train(env)
env.render()
world = np.zeros(env.shape)
for s, v in q_learner.Q.items():
pos = np.unravel_index(s, env.shape)
world[pos] = np.argmax(v)
# UP:0, RIGHT: 1, DOWN: 2, LEFT: 3
world
mk_heatmap(q_learner.Q, 0, 'UP')
mk_heatmap(q_learner.Q, 1, 'RIGHT')
mk_heatmap(q_learner.Q, 2, 'DOWN')
mk_heatmap(q_learner.Q, 3, 'LEFT')
```
# Deep Q Networks
```
# https://github.com/cyoon1729/deep-Q-networks
# https://github.com/andri27-ts/Reinforcement-Learning
# https://github.com/Kaixhin/Rainbow
# https://github.com/dennybritz/reinforcement-learning
# https://github.com/the-computer-scientist/OpenAIGym/blob/master/PrioritizedExperienceReplayInOpenAIGym.ipynb
# https://jaromiru.com/2016/11/07/lets-make-a-dqn-double-learning-and-prioritized-experience-replay/
```



```
import gym
import utils
import numpy as np
import random
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
from collections import namedtuple
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T
import torch
import torch.autograd as autograd
import torch.nn as nn
from enum import Enum
import copy
```
## Replay Buffer
```
Transition = namedtuple('Transition',
('state', 'action', 'next_state', 'reward', 'done'))
class ReplayMemory(object):
'''
A simple memory for storing transitions, where each transition
is a namedtuple of (state, action, next_state, reward, done)
'''
def __init__(self, capacity):
'''
Initialize memory of size capacity
Input: Capacity : int
size of the memory
output: initialized ReplayMemory object
'''
self.capacity = capacity
self.memory = []
self.position = 0
def push(self, *args):
'''
Input: *args : the fields of one transition
(state, action, next_state, reward, done); the transition
is added to memory, overwriting the oldest entry when full.
Returns : None
'''
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = Transition(*args)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
'''
Randomly sample transitions from memory
Input batch_size : int
number of transitions to sample
Output: namedtuple
Namedtuple where each field contains a list of data points
'''
batch = random.sample(self.memory, batch_size)
return Transition(*zip(*batch))
def __len__(self):
'''
returns current size of memory
'''
return len(self.memory)
```
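The overwrite behaviour of `ReplayMemory` comes entirely from the `(position + 1) % capacity` index arithmetic; here is a standalone trace of that ring buffer with a capacity of 3:

```python
from collections import namedtuple

Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward', 'done'))

# Standalone trace of the ring-buffer logic used by ReplayMemory above.
capacity = 3
memory, position = [], 0
for step in range(5):
    t = Transition(step, 0, step + 1, 1.0, False)
    if len(memory) < capacity:
        memory.append(None)      # grow until capacity is reached
    memory[position] = t         # then overwrite the oldest slot
    position = (position + 1) % capacity
# after 5 pushes into a capacity-3 buffer, transitions 0 and 1 have been overwritten
```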
## Deep Q Neural network
```
class Abstract_DQNN(nn.Module):
'''
Abstract class giving the skeleton that all DQN variants should implement/inherit.
DQNNs are the neural-network part of Deep Q-learning, which consists of a network and an agent.
'''
def __init__(self, input_dim, output_dim):
'''
input_dim : tuple, shape of the environment state
output_dim: int, number of actions
'''
super(Abstract_DQNN, self).__init__()
self.input_dim = input_dim
self.output_dim = output_dim
self._init_fc()
self._init_o()
def _init_fc(self):
'''
Initialize the feature generation part of the NN
'''
pass
def _init_o(self):
'''
Initialize the output layer of the NN
'''
pass
def _forward_fc(self, state):
'''
Pass the state through the feature layers, generating the hidden state (i.e. features)
'''
pass
def _forward_o(self, fc):
'''
Pass features from the fc layers through the output layer
'''
pass
def forward(self, state):
'''
Complete forward pass from state to output
Input: state: the state of the environment
Output: Value function for each action in input state
'''
features = self._forward_fc(state)
features = features.view(features.size(0), -1)
out = self._forward_o(features)
return out
class FCDQN(Abstract_DQNN):
'''
Fully connected neural network for Q-Learning
'''
def _init_fc(self):
'''
Initialize the feature generation part of the NN
Here it is two dense layers
'''
self.fc = nn.Sequential(
nn.Linear(self.input_dim[0], 128),
nn.ReLU(),
nn.Linear(128, 256),
nn.ReLU(),
)
def _init_o(self):
'''
Initialize the output layer of the NN
Here it is a single dense layer
'''
self.o = nn.Sequential(
nn.Linear(256, self.output_dim)
)
def _forward_fc(self, state):
'''
Pass the state through the feature layers, generating the hidden state (i.e. features)
Input: state: env.state, the state of the environment
Returns: the hidden state of the fc layers
'''
return self.fc(state)
def _forward_o(self, fcs):
'''
Pass features from the fc layers through the output layer
Input: the hidden output of the fc layers
Output: the state-action value for each action in the state
'''
return self.o(fcs)
class ConvDQN(Abstract_DQNN):
def _init_fc(self):
self.conv_net = nn.Sequential(
nn.Conv2d(self.input_dim[0], 32, kernel_size=8, stride=4),
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=4, stride=2),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, stride=1),
nn.ReLU()
)
self.fc_input_dim = self.feature_size()
self.fc = nn.Sequential(
nn.Linear(self.fc_input_dim, 128),
nn.ReLU(),
nn.Linear(128, 128),
nn.ReLU()
)
def _init_o(self):
self.o = nn.Sequential(
nn.Linear(128, self.output_dim)
)
def _forward_fc(self, state):
features = self.conv_net(state)
features = features.view(features.size(0), -1)
return self.fc(features)
def _forward_o(self, fcs):
return self.o(fcs)
def feature_size(self):
return self.conv_net(autograd.Variable(torch.zeros(1, *self.input_dim))).view(1, -1).size(1)
class RL_NN(Enum):
FCDQN = 1
ConvDQN = 2
Dllng_FCDQN = 3
Noiy_Dllng_FCDQN = 4
class NN_Factory:
'''
Factory for generating Deep Q neural networks
'''
def __init__(self):
self.registry = {}
self.register(RL_NN.FCDQN, FCDQN)
self.register(RL_NN.ConvDQN, ConvDQN)
def register(self, key, const):
'''
Registers a constructor to a specific RL_NN enum
key : Enum key for the constructor
const: constructor for the NN class
'''
self.registry[key] = const
def get(self, key):
'''
retrieves a registered constructor for a given Enum key
Input: key: Enum key for constructor
'''
if key in self.registry:
return self.registry[key]
else:
raise Exception('%s constructor not registered' % key)
```
## Deep Q Agent
```
class DQN_Agent:
'''
Base model for Deep Q-Learning Agents
'''
def __init__(self, env, NN_class, replay_buffer,
learning_rate=0.001, gamma=0.99, update_target = 100, eps=1,
eps_decay=0.9, eps_min= 0.01):
'''
Initialize Deep Q Agent
Input:
env: openai gym environment
NN_class: Abstract_DQNN subclass for the neural network part of Deep Q-learning
replay_buffer: replay buffer to store Transitions
learning_rate: learning rate used to update the neural network
gamma: future discount rate
update_target: the target network is updated whenever step % update_target == 0
eps: epsilon for choosing a random action
eps_decay: rate at which epsilon decays
eps_min: minimal epsilon value
'''
self.env = env
self.learning_rate = learning_rate
self.gamma = gamma
self.replay_buffer = replay_buffer
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.model = NN_class(env.observation_space.shape, env.action_space.n)
self.target_model = NN_class(env.observation_space.shape, env.action_space.n)
self.optimizer = torch.optim.Adam(self.model.parameters())
self.MSE_loss = nn.MSELoss()
self.update_target = update_target
self.i = 0
self.eps = eps
self.eps_decay = eps_decay
self.eps_min = eps_min
######################################################################
############################ Core Functions ##########################
######################################################################
def get_action(self, state):
'''
epsilon greedy selection of action:
with P(1-epsilon):
select greedy action
else:
select random action
Input: state: env.state, the state of the environment
Output: the action with the highest value function in the input state
'''
if(np.random.uniform() < self.eps):
return self.env.action_space.sample()
else:
state = torch.FloatTensor(state).float().unsqueeze(0).to(self.device)
qvals = self.model.forward(state)
return np.argmax(qvals.cpu().detach().numpy())
def compute_loss(self, batch):
'''
Calculates the loss for a given batch
Input: batch: a batch of Transitions
Output: MSE loss of the predicted value functions against r + gamma * value function(next state)
'''
states, actions, rewards, next_states, dones = self.__get_tensors_from_batch__(batch)
# get model predictions of Q
q_predictions = self._predict_q(states, actions)
# get y_i (i.e. a better estimate of Q values)
expected_Q = self._yi(next_states, dones, rewards)
loss = F.mse_loss(q_predictions, expected_Q.detach())
return loss
def _yi(self, next_states, dones, rewards):
'''
Returns the TD target y_i for each Transition
Input:
next_states: list of next states
dones: list of done flags
rewards: list of rewards
Output:
TD target for each Transition in the batch
'''
max_next_Q = self._next_max_Q(next_states)
y_i = rewards + (1 - dones) * self.gamma * max_next_Q
return y_i
def train(self, env, max_episodes, max_steps, batch_size):
'''
Trains the deep Q agent on a given environment.
Input:
env: the environment to train on
max_episodes: number of episodes to play
max_steps: maximum number of steps per episode
batch_size: size of each training batch
'''
episode_rewards = []
for episode in range(max_episodes):
state = env.reset()
episode_reward = 0
for step in range(max_steps):
action = self.get_action(state)
next_state, reward, done, _ = env.step(action)
reward = reward if not done else -10
self.replay_buffer.push(state, action, next_state, reward, done)
episode_reward += reward
if len(self.replay_buffer) > batch_size:
self.update(batch_size)
if done or step == max_steps-1:
episode_rewards.append(episode_reward)
self.update_eps()
print("Episode " + str(episode) + ": " + str(episode_reward), '\t', self.eps)
break
state = next_state
return episode_rewards
######################################################################
############################ Helper Functions #######################
######################################################################
def _predict_q(self, states, actions):
'''
Gets the value function for each state/action pair in a batch.
Input: states: list of states
actions: list of actions
Output: list of Q-values, one per state/action pair
'''
return self.model.forward(states).gather(1, actions).to(self.device)
def __get_tensors_from_batch__(self, batch):
'''
Maps the list values from a batch to PyTorch tensors.
Input: batch: a list of Transitions
Output: PyTorch tensors of all Transition values
'''
# the legacy *Tensor constructors do not accept a device keyword; move with .to()
states = torch.FloatTensor(batch.state).to(self.device)
actions = torch.LongTensor(batch.action).to(self.device)
rewards = torch.FloatTensor(batch.reward).to(self.device)
next_states = torch.FloatTensor(batch.next_state).to(self.device)
dones = torch.FloatTensor(batch.done).to(self.device)
# resize tensors
actions = actions.view(actions.size(0), 1)
dones = dones.view(dones.size(0), 1)
rewards = rewards.view(rewards.size(0), 1)
return states, actions, rewards, next_states, dones
def _next_max_Q(self, next_states):
'''
Returns the max Q-value for a batch of future states.
Input: next_states: list of future states
Output: max Q-value for each future state
'''
next_Q = self.target_model.forward(next_states).to(self.device)
max_next_Q = torch.max(next_Q, 1)[0]
return max_next_Q.view(max_next_Q.size(0), 1)
def update(self, batch_size):
'''
Forward and backward pass for a batch
Input:
batch_size: number of Transitions to be sampled from memory
'''
batch = self.replay_buffer.sample(batch_size)
loss = self.compute_loss(batch)
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
# Copy the online network's weights into the target network
if self.i % self.update_target == 0:
self.target_model.load_state_dict(self.model.state_dict())
self.i += 1
def update_eps(self):
'''
perform epsilon decay
'''
if self.eps > self.eps_min:
self.eps *= self.eps_decay
env_id = "CartPole-v1"
MAX_EPISODES = 100
MAX_STEPS = 500
BATCH_SIZE = 32
nn_fac = NN_Factory()
env = gym.make(env_id)
buffer = ReplayMemory(10000)
agent = DQN_Agent(env, nn_fac.get(RL_NN.FCDQN), buffer)
episode_scores = agent.train(env, MAX_EPISODES, MAX_STEPS, BATCH_SIZE)
env = utils.wrap_env(gym.make("CartPole-v1"))
observation = env.reset()
total_reward = 0
agent.eps = 0  # act greedily during evaluation
while True:
env.render()
action = agent.get_action(observation)
observation, reward, done, info = env.step(action)
total_reward += reward
if done:
break
print(total_reward)
env.close()
utils.show_video()
```
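To make the TD target computed in `_yi` above concrete, here is a minimal plain-Python sketch; the reward and Q numbers are invented for illustration:

```python
gamma = 0.99

def td_target(reward, done, max_next_q, gamma=gamma):
    # y_i = r + (1 - done) * gamma * max_a Q_target(s', a)
    return reward + (1 - done) * gamma * max_next_q

# non-terminal transition: bootstrap from the next state's best Q-value
y_mid = td_target(reward=1.0, done=0, max_next_q=10.0)   # 1.0 + 0.99 * 10.0 = 10.9

# terminal transition: the target collapses to the immediate reward
y_end = td_target(reward=1.0, done=1, max_next_q=10.0)   # 1.0
```

The `(1 - done)` factor is what stops the agent from bootstrapping past the end of an episode.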
# Rainbow
The Rainbow algorithm combines a list of improvements to DQN:
* Double deep Q-learning
* Duelling deep Q-learning
* Noisy nets
* N-step look-ahead buffer
* Prioritized experience replay
* Distributional Q-learning
## Double Deep Q Learning

```
class Double_DQN_Agent(DQN_Agent):
'''
Double deep Q agent extending the original deep Q agent
'''
def _next_max_Q(self, next_states):
'''
Calculates the max Q-value for next_states using double deep Q-learning
Input:
next_states: list of next states
'''
# actions the online model would select based on the next state
double_max_action = self.model(next_states).max(1)[1]
double_max_action = double_max_action.detach()
# evaluate those actions with the target network (decoupling selection from evaluation)
target_output = self.target_model(next_states)
max_next_Q = torch.gather(target_output, 1, double_max_action[:,None])
return max_next_Q.view(max_next_Q.size(0), 1)
env_id = "CartPole-v1"
MAX_EPISODES = 200
MAX_STEPS = 500
BATCH_SIZE = 32
nn_fac = NN_Factory()
env = gym.make(env_id)
buffer = ReplayMemory(10000)
agent = Double_DQN_Agent(env, nn_fac.get(RL_NN.FCDQN), buffer)
episode_scores = agent.train(env, MAX_EPISODES, MAX_STEPS, BATCH_SIZE)
```
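The decoupling is easiest to see numerically. A plain-Python sketch with invented Q-values standing in for the two networks' outputs for a single next state:

```python
# Q-values for one next state (illustrative numbers only)
online_q = [1.0, 5.0, 2.0]   # the online net happens to overestimate action 1
target_q = [1.5, 2.0, 4.0]

# vanilla DQN: take the max over the target network's own estimates
vanilla_max = max(target_q)                                          # 4.0

# double DQN: the online net *selects* the action, the target net *evaluates* it
best_action = max(range(len(online_q)), key=lambda a: online_q[a])   # action 1
double_max = target_q[best_action]                                   # 2.0
```

Because selection and evaluation use different networks, an overestimate in one of them is less likely to dominate the target.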
## Duelling Deep Q-Learning


```
class Duelling_DQNN(Abstract_DQNN):
'''
Extends the deep Q-learning network to a duelling network
'''
def _init_fc(self):
'''
Feature generation is unchanged from vanilla deep Q-learning
'''
self.fc = nn.Sequential(
nn.Linear(self.input_dim[0], 128),
nn.ReLU(),
nn.Linear(128, 128),
nn.ReLU(),
)
def _init_o(self):
'''
The output layer consists of a state-value estimate and an advantage function over actions
value_stream: estimates the state value (one output)
advantage_stream: calculates the advantage value for each action in the state
'''
self.value_stream = nn.Sequential(
nn.Linear(128, 128),
nn.ReLU(),
nn.Linear(128, 1)
)
self.advantage_stream = nn.Sequential(
nn.Linear(128, 128),
nn.ReLU(),
nn.Linear(128, self.output_dim)
)
self.o = nn.Sequential(
nn.Linear(128, self.output_dim)
)
def _forward_fc(self, state):
return self.fc(state)
def _forward_o(self, fcs):
'''
returns q-value for each action for a given hidden feature state
'''
values = self.value_stream(fcs)
advantages = self.advantage_stream(fcs)
# return value + (advantage - mean(advantage))
return values + (advantages - advantages.mean())
env_id = "CartPole-v1"
MAX_EPISODES = 200
MAX_STEPS = 500
BATCH_SIZE = 32
# Add new NN architecture
nn_fac = NN_Factory()
nn_fac.register(RL_NN.Dllng_FCDQN, Duelling_DQNN)
env = gym.make(env_id)
buffer = ReplayMemory(10000)
agent = Double_DQN_Agent(env, nn_fac.get(RL_NN.Dllng_FCDQN), buffer)
episode_scores = agent.train(env, MAX_EPISODES, MAX_STEPS, BATCH_SIZE)
```
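The aggregation in `_forward_o` can be checked by hand. A small plain-Python sketch with invented value and advantage outputs (subtracting the mean advantage keeps the value/advantage decomposition identifiable):

```python
# illustrative value and advantage outputs for one state
value = 3.0
advantages = [1.0, -1.0, 0.0]
mean_adv = sum(advantages) / len(advantages)             # 0.0

# Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))
q_values = [value + (a - mean_adv) for a in advantages]  # [4.0, 2.0, 3.0]
```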
## Noisy Nets

```
import math
# Factorised NoisyLinear layer with bias
# source https://github.com/Kaixhin/Rainbow/blob/master/model.py
class NoisyLinear(nn.Module):
def __init__(self, in_features, out_features, std_init=0.5):
super(NoisyLinear, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.std_init = std_init
self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
self.weight_sigma = nn.Parameter(torch.empty(out_features, in_features))
self.register_buffer('weight_epsilon', torch.empty(out_features, in_features))
self.bias_mu = nn.Parameter(torch.empty(out_features))
self.bias_sigma = nn.Parameter(torch.empty(out_features))
self.register_buffer('bias_epsilon', torch.empty(out_features))
self.reset_parameters()
self.reset_noise()
def reset_parameters(self):
mu_range = 1 / math.sqrt(self.in_features)
self.weight_mu.data.uniform_(-mu_range, mu_range)
self.weight_sigma.data.fill_(self.std_init / math.sqrt(self.in_features))
self.bias_mu.data.uniform_(-mu_range, mu_range)
self.bias_sigma.data.fill_(self.std_init / math.sqrt(self.out_features))
def _scale_noise(self, size):
x = torch.randn(size)
return x.sign().mul_(x.abs().sqrt_())
def reset_noise(self):
epsilon_in = self._scale_noise(self.in_features)
epsilon_out = self._scale_noise(self.out_features)
self.weight_epsilon.copy_(epsilon_out.ger(epsilon_in))
self.bias_epsilon.copy_(epsilon_out)
def forward(self, input):
if self.training:
return F.linear(input, self.weight_mu + self.weight_sigma * self.weight_epsilon, self.bias_mu + self.bias_sigma * self.bias_epsilon)
else:
return F.linear(input, self.weight_mu, self.bias_mu)
class Noisy_Duelling_DQNN(Duelling_DQNN):
'''
Adds noisy linear layers to the duelling deep Q-learning network
'''
def _init_o(self):
self.value_stream = nn.Sequential(
NoisyLinear(128, 128),
nn.ReLU(),
NoisyLinear(128, 1)
)
self.advantage_stream = nn.Sequential(
NoisyLinear(128, 128),
nn.ReLU(),
NoisyLinear(128, self.output_dim)
)
self.o = nn.Sequential(
nn.Linear(128, self.output_dim)
)
env_id = "CartPole-v1"
MAX_EPISODES = 200
MAX_STEPS = 500
BATCH_SIZE = 32
# Add new NN architecture
nn_fac = NN_Factory()
nn_fac.register(RL_NN.Dllng_FCDQN, Duelling_DQNN)
nn_fac.register(RL_NN.Noiy_Dllng_FCDQN, Noisy_Duelling_DQNN)
env = gym.make(env_id)
buffer = ReplayMemory(10000)
agent = Double_DQN_Agent(env, nn_fac.get(RL_NN.Noiy_Dllng_FCDQN), buffer, eps=0)
episode_scores = agent.train(env, MAX_EPISODES, MAX_STEPS, BATCH_SIZE)
```
## N-step Look-ahead Buffer

```
from collections import deque
from dataclasses import dataclass
@dataclass
class Transition_dclass:
state: list
action: list
next_state: list
reward: list
done: list
class N_step_ReplayMemory(object):
'''
N-step look-ahead buffer for experience replay
'''
def __init__(self, capacity, n=3, gamma = 0.99):
'''
Creates the buffer
Input:
capacity: size of the memory
n: number of steps to look ahead
gamma: future-reward discount
'''
self.capacity = capacity
self.memory = deque([])
self.position = 0
self.n = n
self.gamma = gamma
self.buffer_initialized = False
######################################################################
############################ Core Functions #########################
######################################################################
def push(self, *args):
'''
Adds a Transition to the buffer
Input: *args: the transition fields, in the order
[state, action, next_state, reward, done]
'''
self.add_transition(*args)
if self.n > 0:
if not self.buffer_initialized:
self.check_buffer()
if self.buffer_initialized:
self._look_ahead()
self.position = (self.position + 1) % self.capacity
def _look_ahead_vals(self):
'''
Computes the look-ahead values for a given transition
'''
dones = []
rewards = 0
pntr = self._pointer()
for i in range(self.n):
self.memory.rotate(-1)
d, r = self.memory[pntr].done, self.memory[pntr].reward
dones.append(d)
rewards += (self.gamma ** (i + 1)) * r  # the discount grows with look-ahead depth
self.memory.rotate(self.n)  # restore the original ordering
rewards += self.memory[pntr].reward  # include the transition's own immediate reward
dones.append(self.memory[pntr].done)
return dones, rewards, pntr
######################################################################
############################ Helper Functions #######################
######################################################################
def add_transition(self, *args):
"""Saves a transition."""
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = Transition_dclass(*args)
def _update_done(self, dones, pntr):
self.memory[pntr].done = max(dones)
def _update_reward(self, reward, pntr):
self.memory[pntr].reward = reward
def _look_ahead(self):
ds, r, p = self._look_ahead_vals()
self._update_done(ds, p)
self._update_reward(r, p)
def _pointer(self):
if self.position >= self.n:
return self.position - self.n
else:
return self.capacity + self.position - self.n
def check_buffer(self):
if self.position >= self.n:
self.buffer_initialized = True
def sample(self, batch_size):
batch = random.sample(self.memory, batch_size)
# zip the sampled transitions into per-field tuples (one entry per transition)
return Transition_dclass(tuple(t.state for t in batch),
tuple(t.action for t in batch),
tuple(t.next_state for t in batch),
tuple(t.reward for t in batch),
tuple(t.done for t in batch))
def __len__(self):
return len(self.memory)
env_id = "CartPole-v1"
MAX_EPISODES = 200
MAX_STEPS = 500
BATCH_SIZE = 32
# Add new NN architecture
nn_fac = NN_Factory()
nn_fac.register(RL_NN.Dllng_FCDQN, Duelling_DQNN)
nn_fac.register(RL_NN.Noiy_Dllng_FCDQN, Noisy_Duelling_DQNN)
env = gym.make(env_id)
buffer = N_step_ReplayMemory(100000)
agent = Double_DQN_Agent(env, nn_fac.get(RL_NN.Noiy_Dllng_FCDQN), buffer, eps=0)
episode_scores = agent.train(env, MAX_EPISODES, MAX_STEPS, BATCH_SIZE)
```
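The quantity the buffer approximates for each stored transition is the n-step discounted return (on top of which the agent still bootstraps with a Q-value). A minimal plain-Python sketch with invented rewards:

```python
gamma = 0.99

def n_step_return(rewards, gamma=gamma):
    # G = r_0 + gamma * r_1 + ... + gamma^(n-1) * r_(n-1)
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

g = n_step_return([1.0, 1.0, 1.0])   # 1 + 0.99 + 0.9801 = 2.9701
```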
## Prioritized Experience Replay

```
class PrioritizedBuffer(N_step_ReplayMemory):
'''
N-step look-ahead prioritized replay buffer
'''
def __init__(self, capacity, n=3, gamma = 0.99, alpha = 0.6, beta = 0.4, min_prob = 0.01):
super(PrioritizedBuffer, self).__init__(capacity = capacity, n = n, gamma = gamma)
self.weights = np.array([min_prob]*capacity)
self.alpha = alpha
self.beta = beta
self.min_prob = min_prob
self.max_pos = 0
######################################################################
############################ Core Functions #########################
######################################################################
def get_importance(self, sample_probs):
# importance-sampling weight: w_i = (N * P(i))^(-beta), normalized by the largest weight
importance = (len(sample_probs) * sample_probs) ** (-self.beta)
return importance / importance.max()
def update_priority(self, idx, td_error):
self.weights[idx] = ( td_error ** self.alpha) + self.min_prob
def sample(self, batch_size):
sample_probs = self.weights/sum(self.weights)
sample_indices = random.choices(range(self.max_pos), k=batch_size, weights=sample_probs[0:self.max_pos])
weights = self.get_importance(sample_probs[sample_indices])
samples = self.samples_from_indices(sample_indices)
return samples, sample_indices, weights
######################################################################
############################ Helper Functions #######################
######################################################################
def samples_from_indices(self, sample_indices):
batch = [self.memory[i] for i in sample_indices]
# zip the selected transitions into per-field tuples (one entry per transition)
return Transition_dclass(tuple(t.state for t in batch),
tuple(t.action for t in batch),
tuple(t.next_state for t in batch),
tuple(t.reward for t in batch),
tuple(t.done for t in batch))
def push(self, *args):
super(PrioritizedBuffer, self).push(*args)
self.max_pos = max(self.max_pos, self.position)
class PerExp_Double_DQN_Agent(Double_DQN_Agent):
'''
Deep Q agent modified to work with the prioritized replay buffer:
the addition is updating the buffer priorities after the backward pass
'''
def update(self, batch_size):
batch, idxs, weights = self.replay_buffer.sample(batch_size)
td_errors = self.compute_loss(batch)
weights = torch.FloatTensor(weights).to(self.device)
weights = weights.view(weights.size(0), -1)
td_errors = td_errors*weights
td_errors_mean = td_errors.mean()
self.optimizer.zero_grad()
td_errors_mean.backward()
self.optimizer.step()
for idx, td_error in zip(idxs, td_errors.cpu().detach().numpy()):
self.replay_buffer.update_priority(idx, td_error)
# Copy the online network's weights into the target network
if self.i % self.update_target == 0:
self.target_model.load_state_dict(self.model.state_dict())
self.i += 1
def compute_loss(self, batch):
states, actions, rewards, next_states, dones = self.__get_tensors_from_batch__(batch)
# get model predictions of Q
q_predictions = self._predict_q(states, actions)
# get y_i (i.e. a better estimate of Q values)
expected_Q = self._yi(next_states, dones, rewards)
td_errors = torch.pow(q_predictions - expected_Q, 2)
return td_errors
env_id = "CartPole-v1"
MAX_EPISODES = 200
MAX_STEPS = 500
BATCH_SIZE = 32
# Add new NN architecture
nn_fac = NN_Factory()
nn_fac.register(RL_NN.Dllng_FCDQN, Duelling_DQNN)
nn_fac.register(RL_NN.Noiy_Dllng_FCDQN, Noisy_Duelling_DQNN)
env = gym.make(env_id)
buffer = PrioritizedBuffer(100000)
agent = PerExp_Double_DQN_Agent(env, nn_fac.get(RL_NN.Dllng_FCDQN), buffer, eps=1)
episode_scores = agent.train(env, MAX_EPISODES, MAX_STEPS, BATCH_SIZE)
```
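The two formulas behind prioritized sampling, in isolation, as a numpy sketch with invented priorities: sampling probabilities P(i) proportional to p_i^alpha, and importance-sampling weights w_i = (N * P(i))^(-beta) normalized by the largest weight:

```python
import numpy as np

priorities = np.array([2.0, 1.0, 0.5, 0.5])   # invented TD-error-based priorities
alpha, beta = 0.6, 0.4

# sampling probability: P(i) = p_i^alpha / sum_j p_j^alpha
probs = priorities ** alpha / np.sum(priorities ** alpha)

# importance-sampling weight: w_i = (N * P(i))^(-beta), normalized by the max weight
weights = (len(probs) * probs) ** (-beta)
weights = weights / weights.max()
```

The rarely sampled (low-priority) transitions receive the largest correction weight, compensating for the non-uniform sampling.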
```
import pyGMs as gm
import numpy as np
import matplotlib.pyplot as plt # use matplotlib for plotting with inline plots
%matplotlib inline
```
Let's define the Burglar & Earthquake model from lecture:
```
J,M,A,E,B = tuple(gm.Var(i,2) for i in range(0,5)) # all binary variables
X = [J,M,A,E,B] # we'll often refer to variables as Xi or X[i]
# sometimes it's useful to have a reverse look-up from ID to "name string" (e.g. when drawing the graph)
IDtoName = dict( (eval(n).label,n) for n in ['J','M','A','E','B'])
pE = gm.Factor([E], [.998, .002]) # probability of earthquake (false,true)
pB = gm.Factor([B], [.999, .001]) # probability of burglary
pAgEB = gm.Factor([A,E,B], 0.0)
# Set A,E,B # Note: it's important to refer to tuples like
pAgEB[:,0,0] = [.999, .001] # (A,E,B)=(0,0,0) in the order of the variables' ID numbers
pAgEB[:,0,1] = [.710, .290] # So, since A=X[2], E=X[3], etc., A,E,B is the correct order
pAgEB[:,1,0] = [.060, .940]
pAgEB[:,1,1] = [.050, .950] # ":" refers to an entire row of the table
pJgA = gm.Factor([J,A], 0.0) # Probability that John calls given the alarm's status
pJgA[:,0] = [.95, .05]
pJgA[:,1] = [.10, .90]
pMgA = gm.Factor([M,A], 0.0) # Probability that Mary calls given the alarm's status
pMgA[:,0] = [.99, .01]
pMgA[:,1] = [.30, .70]
#factors = [pE, pB, pAgEB, pJgA, pMgA] # collect all the factors that define the model
factors = [pJgA, pMgA, pAgEB, pE, pB] # collect all the factors that define the model
fg = gm.GraphModel(factors)
```
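As a quick sanity check that needs nothing beyond plain Python, the joint probability of one full configuration is simply the product of the corresponding entries of the five tables (values transcribed from the factors above):

```python
import math

# P(B=1, E=0, A=1, J=1, M=1)
#   = pB[1] * pE[0] * pAgEB[A=1 | E=0, B=1] * pJgA[J=1 | A=1] * pMgA[M=1 | A=1]
p = 0.001 * 0.998 * 0.290 * 0.90 * 0.70
log_p = math.log(p)   # the log-value scale used by heur() and logValue() below
```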
## A* search for the MAP configuration
Let's find the most likely configuration using a simple fixed-order A* search:
```
import heapq
class PriorityQueue:
'''Priority queue wrapper; returns highest priority element.'''
def __init__(self): self.elements = []
def __len__(self): return len(self.elements)
def empty(self): return len(self.elements) == 0
def push(self, item, pri): heapq.heappush(self.elements, (-pri, item))
def pop(self): return heapq.heappop(self.elements)[1]
def astar(model, order):
'''Basic A-star search for graphical model 'model' using search order 'order'.'''
def heur(model,config):
return sum([np.log(f.condition(config).max()) for f in model.factors]);
frontier = PriorityQueue()
frontier.push({}, heur(model,{}))
while frontier:
current = frontier.pop()
if len(current) == len(model.X): break # if a full configuration, done:
Xi = order[len(current)] # get the next variable to assign (fixed order)
for xi in range(Xi.states): # generate each child node (assignment Xi=xi)
next = current.copy();
next[Xi] = xi; # and add it to the frontier / queue
frontier.push( next, heur(model,next) )
return model.logValue(current), current # found a leaf: return value & config
print( astar(fg, fg.X) )
```
We can verify that this is indeed the value of the most probable configuration via variable elimination:
```
fg_copy = fg.copy() # make a deep copy (VE changes the graph & factors)
fg_copy.eliminate(fg.X, 'max') # eliminate via "max" over order 0...n-1
print(fg_copy.logValue([])) # no variables left: only one (scalar) factor
```
# Session 7: Tuning and Optimizing Spark Applications
> Chapter 6 covered how Spark manages memory and how datasets are built with the high-level APIs. In this chapter we look at Spark configurations for optimization and the available join strategies, and use the Spark UI for hints about what may be hurting performance.
```
from pyspark.sql import *
from pyspark.sql.functions import *
from pyspark.sql.types import *
from IPython.display import display, display_pretty, clear_output, JSON
spark = (
SparkSession
.builder
.config("spark.sql.session.timeZone", "Asia/Seoul")
.getOrCreate()
)
# Configure the notebook to render DataFrames as tables
spark.conf.set("spark.sql.repl.eagerEval.enabled", True) # display enabled
spark.conf.set("spark.sql.repl.eagerEval.truncate", 100) # display output columns size
# Common data locations
home_jovyan = "/home/jovyan"
work_data = f"{home_jovyan}/work/data"
work_dir=!pwd
work_dir = work_dir[0]
# Local-environment optimizations
spark.conf.set("spark.sql.shuffle.partitions", 5) # the number of partitions to use when shuffling data for joins or aggregations.
spark.conf.set("spark.sql.streaming.forceDeleteTempCheckpointLocation", "true")
spark
```
## 7.1 Optimizing and Tuning Spark for Efficiency
> Spark exposes many knobs for [tuning](https://spark.apache.org/docs/latest/tuning.html); the full list of properties is in the [configuration](https://spark.apache.org/docs/latest/configuration.html) reference
### 7.1.1 Viewing and Setting Apache Spark Configurations
> Spark reads configuration values in the order below, and the value set last wins
#### 1. Create or edit conf/spark-defaults.conf under the Spark installation directory
#### 2. Pass options when launching the application
```bash
$ spark-submit --conf spark.sql.shuffle.partitions=5 --conf "spark.executor.memory=2g" --class main.scala.chapter7.SparkConfig_7_1 jars/mainscala-chapter7_2.12-1.0.jar
```
#### 3. Set them directly in application code
```scala
SparkSession.builder
.config("spark.sql.shuffle.partitions", 5)
.config("spark.executor.memory", "2g")
...
```
```
# In PySpark the settings can be read through the sparkContext
def printConfigs(session):
for x in sorted(session.sparkContext.getConf().getAll()):
print(x)
printConfigs(spark)
# Spark SQL keeps its own internal configuration values, so this prints many more entries
spark.sql("SET -v").select("key", "value").where("key like '%spark.sql%'").show(n=5, truncate=False)
```
* They can also be inspected in the Spark UI

```
# Read the default spark.sql.shuffle.partitions value, change it programmatically, and verify
num_partitions = spark.conf.get("spark.sql.shuffle.partitions")
spark.conf.set("spark.sql.shuffle.partitions", 5)
mod_partitions = spark.conf.get("spark.sql.shuffle.partitions")
spark.conf.set("spark.sql.shuffle.partitions", num_partitions)
print(num_partitions, mod_partitions)
```
### 7.1.2 Scaling Spark for Large Workloads
#### 1. Choosing between static and dynamic resource allocation
> Static allocation pins CPU and memory per application, while dynamic allocation adjusts them at runtime. The choice depends on the characteristics of the data being processed, and each requires a different configuration.
* Data volumes that are not constant but fluctuate
* Streaming workloads in particular, where data sizes are uneven
* Resource management for multi-tenant analytics clusters
#### 2. Configuring dynamic resource allocation
* It is disabled by default, so the values below must be set explicitly; some of them are not supported from a REPL and must be set programmatically at startup
```
spark.dynamicAllocation.enabled true
spark.dynamicAllocation.minExecutors 2
spark.dynamicAllocation.schedulerBacklogTimeout 1m
spark.dynamicAllocation.maxExecutors 20
spark.dynamicAllocation.executorIdleTimeout 2min
```
* Dynamic resources are then managed through the following steps:
- 1. The Spark driver asks the cluster manager for 2 executors (minExecutors)
- 2. When the task-queue backlog grows and the backlog timeout (schedulerBacklogTimeout) fires, requests for new executors are issued
- 3. If scheduled tasks stay pending for more than a minute, the driver requests new executors, up to 20 (maxExecutors)
- 4. The driver terminates executors that have had no work assigned for more than 2 minutes (executorIdleTimeout)
#### 3. Configuring executor memory and the shuffle service

* The settings below can be adjusted so that the map, spill and merge processes do not starve on I/O, and buffer memory is secured before the final shuffle partitions are written to disk

#### 4. Maximizing Spark parallelism
> This requires understanding how Spark loads data from storage into memory and how partitions are used in Spark
* Each stage contains many tasks, but Spark assigns at most one thread per core to a task, and each task processes exactly one partition.
* To optimize resource usage and maximize parallelism, there should be at least as many partitions as there are executor cores (so that no core sits idle).

#### 5. Understanding and reshaping partitions
* Partitions created when writing to distributed storage
- The default file-block size on HDFS, S3, etc. is 64 MB or 128 MB; as files become smaller and more numerous there are not enough cores to dedicate to each partition, so the "small file problem" should be avoided
* Partitions created by Spark shuffling
- Shuffling occurs during wide transformations such as aggregations and joins (network and disk I/O cost)
- The default number of shuffle partitions is 200, which is **far more than enough** for small datasets or streaming workloads, so it should be tuned down
- The partition count can be adjusted deliberately (repartition, coalesce) based on the size and intended use of the final result table
#### Questions and answers
* Wouldn't dynamic allocation be a good default in most cases?
- If the workload is predictable, dynamic allocation adds unnecessary resource and management overhead and can hurt performance
* What is a REPL?
- Short for Read-Evaluate-Print Loop
* Can dynamic allocation be toggled at will? Why or why not?
* If off-heap memory is so good, why not use it everywhere instead of so much JVM memory?
- To combine the benefits of the structured APIs on the JVM with the efficient data transfer and I/O of native libraries
* How can the execution vs. storage memory ratio be checked? Isn't hand-tuning it wasted effort?
- Rather than setting it directly, tune by adjusting the related options
* What is the spill process in a Spark job, why does it happen, and how can it be addressed?
- A spill is when an executor has exhausted the memory allocated to each of the layers above and writes to disk instead
- Disk I/O has a large performance impact, so fast SSDs can bring a noticeable improvement
```
operations, the shuffle will spill results to executors’ local disks at the location specified in spark.local.directory. Having performant SSD disks for this operation will boost the performance.
```
```
numDF = spark.range(1000).repartition(16)
numDF.rdd.getNumPartitions()
```
## 7.2 Caching and Persistence of Data
> cache() and persist() are nearly identical, but persist() lets you choose the storage level (memory, disk, serialized, deserialized, etc.)
### 7.2.1 DataFrame.cache()
* A DataFrame can be partially cached, but an individual partition cannot. For example, with memory for about 4.5 of 8 partitions, only 4 partitions are cached
- Reading the uncached portion is not a problem, but it must be recomputed every time, which has a cost
* Note that after cache() or persist(), an action such as take(1) materializes only the first partition, while an action such as count() caches all of the data
- rdd.cache() behaves as persist(StorageLevel.MEMORY_ONLY)
- df.cache() behaves as persist(StorageLevel.MEMORY_AND_DISK)

```
# When run repeatedly the notebook may hit cached results, so add a random seed each time to force a fresh run.
import random
seed = random.randint(1,100)
print("seed number is {}".format(seed))
cached = spark.range(10 * 1000 * 1000 + seed).toDF("id").withColumn("square", expr("id * id"))
import time
start = time.time()
cached.cache() # cache the data
cached.count() # Materialize the cache
print(time.time()-start)
start = time.time()
cached.count()
print(time.time()-start)
```
### 7.2.2 DataFrame.persist()


* Caching a table with SQL gives the same result as cache()

```
# When run repeatedly the notebook may hit cached results, so add a random seed each time to force a fresh run.
import random
seed = random.randint(1,100)
print("seed number is {}".format(seed))
persisted = spark.range(10 * 1000 * 1000 + seed).toDF("id").withColumn("square", expr("id * id"))
import time
start = time.time()
from pyspark import StorageLevel
persisted.persist(StorageLevel.DISK_ONLY) # persist the data, to disk only
persisted.count() # Materialize the cache
print(time.time()-start)
start = time.time()
persisted.count()
print(time.time()-start)
# When run repeatedly the notebook may hit cached results, so add a random seed each time to force a fresh run.
import random
seed = random.randint(1,100)
print("seed number is {}".format(seed))
table_cached = spark.range(10 * 1000 * 1000 + seed).toDF("id").withColumn("square", expr("id * id"))
table_cached.createOrReplaceTempView("square")
import time
start = time.time()
spark.sql("CACHE TABLE square") # cache the table
spark.sql("SELECT COUNT(1) FROM square") # Materialize the cache
print(time.time()-start)
start = time.time()
spark.sql("SELECT COUNT(1) FROM square")
print(time.time()-start)
```
### 7.2.3 When to Cache and Persist
> Use them when a large table is queried frequently or reused across transformations
* DataFrames read repeatedly, as during machine-learning training
* Common tables used frequently in the transformation steps of ETL data pipelines
### 7.2.4 When Not to Cache and Persist
> Avoid them for tables that are too large or rarely reused: the cost of serializing and deserializing the data is substantial and can end up hurting overall processing time.
* Data too large to fit in memory
* Data used too rarely to justify its size
## 7.3 A Family of Spark Joins
```text
/**
* Select the proper physical plan for join based on join strategy hints, the availability of
* equi-join keys and the sizes of joining relations. Below are the existing join strategies,
* their characteristics and their limitations.
*
* - Broadcast hash join (BHJ):
* Only supported for equi-joins, while the join keys do not need to be sortable.
* Supported for all join types except full outer joins.
* BHJ usually performs faster than the other join algorithms when the broadcast side is
* small. However, broadcasting tables is a network-intensive operation and it could cause
* OOM or perform badly in some cases, especially when the build/broadcast side is big.
*
* - Shuffle hash join:
* Only supported for equi-joins, while the join keys do not need to be sortable.
* Supported for all join types.
* Building hash map from table is a memory-intensive operation and it could cause OOM
* when the build side is big.
*
* - Shuffle sort merge join (SMJ):
* Only supported for equi-joins and the join keys have to be sortable.
* Supported for all join types.
*
* - Broadcast nested loop join (BNLJ):
* Supports both equi-joins and non-equi-joins.
* Supports all the join types, but the implementation is optimized for:
* 1) broadcasting the left side in a right outer join;
* 2) broadcasting the right side in a left outer, left semi, left anti or existence join;
* 3) broadcasting either side in an inner-like join.
* For other cases, we need to scan the data multiple times, which can be rather slow.
*
* - Shuffle-and-replicate nested loop join (a.k.a. cartesian product join):
* Supports both equi-joins and non-equi-joins.
* Supports only inner like joins.
*/
```
### 7.3.1 Broadcast Hash Join
> When one dataset is comfortably smaller than driver/executor memory, it is placed in a broadcast variable and shipped to the nodes holding the larger dataset, so the join happens in the map phase, hence the name *map-side join*. No shuffle, the dominant cost in join performance, takes place, so performance is good.
#### When to use a broadcast hash join
* When each key of the small and large datasets is hashed by Spark into the same partition
- Presumably the case where the data is already co-located on the same nodes, e.g. via bucketing
- It does not appear to build a hash table the way a classic hash join does; this needs verification
* When one dataset is much smaller than the other (and fits in the configured memory, 10 MB by default)
* When you want to perform an equi-join, combining two datasets on unsorted matching keys
- Because it is a hash join, an equi-join works even on unsorted keys
- [Non equi-joins](https://www.essentialsql.com/non-equi-join-sql-purpose/) include anti-joins and range-joins
* When you are confident the small dataset can be broadcast to every Spark executor without worrying about network bandwidth or OOM errors
#### The behavior is controlled by spark.sql.autoBroadcastJoinThreshold; the default is 10 MB

```
from pyspark.sql.functions import *
animal = spark.createDataFrame([("Cat", 1), ("Dog", 1), ("Monkey", 2), ("Lion", 3), ("Tiger", 3)], ["name", "type"])
animal.show(truncate=False)
category = spark.createDataFrame([("Fat", 1), ("Animal", 2), ("Beast", 3)], ["category", "id"])
animal.join(category, animal.type == category.id, "left_outer").select("name", "category").show()
animal.join(broadcast(category), animal.type == category.id, "left_outer").select("name", "category").show()
```
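Conceptually, a broadcast hash join is an ordinary hash join run independently inside each task: build a hash map from the broadcast (small) side, then probe it while scanning the large side. A plain-Python sketch (not Spark code) with the same toy data as above:

```python
# broadcast side: small lookup table, keyed by the join column
category = {1: "Fat", 2: "Animal", 3: "Beast"}

# large side: scanned row by row, probing the hash map (a left outer join)
animals = [("Cat", 1), ("Dog", 1), ("Monkey", 2), ("Lion", 3), ("Tiger", 3)]
joined = [(name, category.get(type_id)) for name, type_id in animals]
```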
* Spark 3.0 added the ability to pass an explain mode: simple, extended, codegen, cost, formatted, etc.
```
animal.join(category, animal.type == category.id, "left_outer").explain("simple")
animal.join(broadcast(category), animal.type == category.id, "left_outer").explain("formatted")
```
| Sort-merge join | Broadcast join |
| --- | --- |
|  |  |
### 7.3.2 Shuffle Sort Merge Join
> The most effective algorithm for joining two large datasets; spark.sql.join.preferSortMergeJoin is enabled by default.
```
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "10485760b") # default value
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1") # force sortMergeJoin
states = spark.createDataFrame([(0, "AZ"), (1, "CO"), (3, "TX"), (4, "N"), (5, "MI")], ["id", "state"])
items = spark.createDataFrame([(0, "SKU-0"), (1, "SKU-1"), (2, "SKU-2"), (3, "SKU-3"), (4, "SKU-4"), (5, "SKU-5")], ["id", "item"])
states.show()
items.show()
import random
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1") # force sortMergeJoin
states = {0:"AZ", 1:"CO", 2:"CA", 3: "TX", 4: "NY", 5:"MI"}
items = {0:"SKU-0", 1:"SKU-1", 2:"SKU-2", 3: "SKU-3", 4: "SKU-4", 5:"SKU-5"}
usersDF = spark.range(0, 10000).rdd.map(lambda id: (id[0], "user_{}".format(id[0]), "user_{}@databricks.com".format(id[0]), states[random.randint(0, 5)])).toDF(["uid", "login", "email", "user_state"])
ordersDF = spark.range(0, 10000).rdd.map(lambda r: (r[0], r[0], random.randint(0, 10000), 10 * r[0] * 0.2, states[random.randint(0, 5)], items[random.randint(0,5)])).toDF(["transaction_id", "quantity", "users_id", "amount", "state", "items"])
# usersDF.show(truncate=False)
# ordersDF.show(truncate=False)
usersOrdersDF = ordersDF.join(usersDF, ordersDF.users_id == usersDF.uid)
usersOrdersDF.show(truncate=False)
usersOrdersDF.explain("formatted")
```
#### Optimizing the shuffle sort merge join
> Performance can be improved by eliminating the Exchange stage, the largest cost of a sort-merge join. The approach is to pre-sort the data at write time via bucketing: sort at the bucket level on the columns used most often for equi-joins.


```
# spark.conf.set("spark.sql.legacy.allowCreatingManagedTableUsingNonemptyLocation","true")
%rm -rf "spark-warehouse/userstbl"
%rm -rf "spark-warehouse/orderstbl"
from pyspark.sql.types import *
(
usersDF.orderBy(asc("uid"))
.write
.mode("overwrite")
.format("parquet")
.bucketBy(8, "uid")
.saveAsTable("UsersTbl")
)
(
ordersDF.orderBy(asc("users_id"))
.write
.mode("overwrite")
.format("parquet")
.bucketBy(8, "users_id")
.saveAsTable("OrdersTbl")
)
spark.sql("cache table UsersTbl")
spark.sql("cache table OrdersTbl")
usersBucketDF = spark.table("UsersTbl")
ordersBucketDF = spark.table("OrdersTbl")
joinUsersOrdersBucketDF = ordersBucketDF.join(usersBucketDF, ordersBucketDF.users_id == usersBucketDF.uid)
joinUsersOrdersBucketDF.show(truncate=False)
```
#### When to use a shuffle sort merge join
* When each key of two large datasets can be sorted and hashed into the same partition
* When you want to perform an equi-join, combining two datasets on sorted matching keys
- Because it is a sort-merge join, an equi-join on already-sorted keys is possible
* When you want to avoid the Exchange and Sort operations incurred when large shuffle files are sent over the network
- Presumably a pointer to using the bucketing technique shown above
## 7.4 Inspecting the Spark UI
### 7.4.1 Journey Through the Spark UI Tabs
#### Jobs and Stages
> Use the Duration column to spot problematic jobs, stages, and tasks
* Metrics to check and monitor
    - Average task duration
    - Time spent in GC
    - Shuffle bytes/records
## 7.5 Summary
## Further Reading
* [Tuning Apache Spark for Large Scale Workloads - Sital Kedia & Gaoxiang Liu](https://www.youtube.com/watch?v=5dga0UT4RI8)
* [Hive Bucketing in Apache Spark - Tejas Patil](https://www.youtube.com/watch?v=6BD-Vv-ViBw)
* [How does Facebook tune Apache Spark for Large-Scale Workloads?](https://towardsdatascience.com/how-does-facebook-tune-apache-spark-for-large-scale-workloads-3238ddda0830)
* [External Shuffle Service in Apache Spark](https://www.waitingforcode.com/apache-spark/external-shuffle-service-apache-spark/read)
* [Spark Internal Part 2. Spark의 메모리 관리(2)](https://medium.com/@leeyh0216/spark-internal-part-2-spark%EC%9D%98-%EB%A9%94%EB%AA%A8%EB%A6%AC-%EA%B4%80%EB%A6%AC-2-db1975b74d2f)
* [Why You Should Care about Data Layout in the Filesystem](https://databricks.com/session/why-you-should-care-about-data-layout-in-the-filesystem)
* [Five distinct join strategies](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala#L111)
***
<img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
## _*Shor's Algorithm for Integer Factorization*_
The latest version of this tutorial notebook is available on https://github.com/qiskit/qiskit-tutorial.
In this tutorial, we first introduce the problem of [integer factorization](#factorization) and describe how [Shor's algorithm](#shorsalgorithm) solves it in detail. We then [implement](#implementation) a version of it in Qiskit.
***
### Contributors
Anna Phan
### Qiskit Package Versions
```
import qiskit
qiskit.__qiskit_version__
```
## Integer Factorization <a id='factorization'></a>
Integer factorization is the decomposition of a composite integer into a product of smaller integers, for example, the integer $100$ can be factored into $10 \times 10$. If these factors are restricted to prime numbers, the process is called prime factorization, for example, the prime factorization of $100$ is $2 \times 2 \times 5 \times 5$.
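As a quick illustration (this small helper is our own sketch, not part of the original notebook), trial division recovers the prime factorization of $100$:

```python
def prime_factors(n):
    """Return the prime factorization of n as a list, via trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(100))  # [2, 2, 5, 5]
```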
When the integers are very large, no efficient classical integer factorization algorithm is known. The hardest factorization problems are semiprime numbers, the product of two prime numbers. In [2009](https://link.springer.com/chapter/10.1007/978-3-642-14623-7_18), a team of researchers factored a 232 decimal digit semiprime number (768 bits), spending the computational equivalent of more than two thousand years on a single core 2.2 GHz AMD Opteron processor with 2 GB RAM:
```
RSA-768 = 12301866845301177551304949583849627207728535695953347921973224521517264005
07263657518745202199786469389956474942774063845925192557326303453731548268
50791702612214291346167042921431160222124047927473779408066535141959745985
6902143413
= 33478071698956898786044169848212690817704794983713768568912431388982883793
878002287614711652531743087737814467999489
× 36746043666799590428244633799627952632279158164343087642676032283815739666
511279233373417143396810270092798736308917
```
The presumed difficulty of this semiprime factorization problem underlies many encryption algorithms, such as [RSA](https://www.google.com/patents/US4405829), which is used in online credit card transactions, amongst other applications.
***
## Shor's Algorithm <a id='shorsalgorithm'></a>
Shor's algorithm, named after mathematician Peter Shor, is a polynomial time quantum algorithm for integer factorization formulated in [1994](http://epubs.siam.org/doi/10.1137/S0097539795293172). It is arguably the most dramatic example of how the paradigm of quantum computing changed our perception of which computational problems should be considered tractable, motivating the study of new quantum algorithms and efforts to design and construct quantum computers. It also has expedited research into new cryptosystems not based on integer factorization.
Shor's algorithm has been experimentally realised by multiple teams for specific composite integers. The composite $15$ was first factored into $3 \times 5$ in [2001](https://www.nature.com/nature/journal/v414/n6866/full/414883a.html) using seven NMR qubits, and has since been implemented using four photon qubits in 2007 by [two](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.99.250504) [teams](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.99.250505), three solid state qubits in [2012](https://www.nature.com/nphys/journal/v8/n10/full/nphys2385.html) and five trapped ion qubits in [2016](http://science.sciencemag.org/content/351/6277/1068). The composite $21$ has also been factored into $3 \times 7$ in [2012](http://www.nature.com/nphoton/journal/v6/n11/full/nphoton.2012.259.html) using a photon qubit and qutrit (a three level system). Note that these experimental demonstrations rely on significant optimisations of Shor's algorithm based on a priori knowledge of the expected results. In general, [$2 + \frac{3}{2}\log_2N$](https://link-springer-com.virtual.anu.edu.au/chapter/10.1007/3-540-49208-9_15) qubits are needed to factor the composite integer $N$, meaning at least $1,154$ qubits would be needed to factor $RSA-768$ above.
```
from IPython.display import IFrame
IFrame("https://www.youtube.com/embed/hOlOY7NyMfs?start=75&end=126",560,315)
```
As Peter Shor describes in the video above from [PhysicsWorld](http://physicsworld.com/cws/article/multimedia/2015/sep/30/what-is-shors-factoring-algorithm), Shor’s algorithm is composed of three parts. The first part turns the factoring problem into a period finding problem using number theory, which can be computed on a classical computer. The second part finds the period using the quantum Fourier transform and is responsible for the quantum speedup of the algorithm. The third part uses the period found to calculate the factors.
The following sections go through the algorithm in detail. For those who just want the steps, without the lengthy explanation, refer to the [blue](#stepsone) [boxes](#stepstwo) before jumping down to the [implementation](#implementation).
### From Factorization to Period Finding
The number theory that underlies Shor's algorithm relates to periodic modulo sequences. Let's have a look at an example of such a sequence. Consider the sequence of the powers of two:
$$1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, ...$$
Now let's look at the same sequence 'modulo 15', that is, the remainder after fifteen divides each of these powers of two:
$$1, 2, 4, 8, 1, 2, 4, 8, 1, 2, 4, ...$$
This is a modulo sequence that repeats every four numbers, that is, a periodic modulo sequence with a period of four.
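Both sequences above are easy to reproduce in a couple of lines (a quick sanity check, not part of the original notebook):

```python
# Powers of two, and the same powers modulo 15
powers = [2**a for a in range(11)]
powers_mod_15 = [p % 15 for p in powers]
print(powers)         # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
print(powers_mod_15)  # [1, 2, 4, 8, 1, 2, 4, 8, 1, 2, 4]  -> period 4
```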
Reduction of factorization of $N$ to the problem of finding the period of an integer $x$ less than $N$ and greater than $1$ depends on the following result from number theory:
> The function $\mathcal{F}(a) = x^a \bmod N$ is a periodic function, where $x$ is an integer coprime to $N$ and $a \ge 0$.
Note that two numbers are coprime, if the only positive integer that divides both of them is 1. This is equivalent to their greatest common divisor being 1. For example, 8 and 15 are coprime, as they don't share any common factors (other than 1). However, 9 and 15 are not coprime, since they are both divisible by 3 (and 1).
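These coprimality checks are just calls to `math.gcd`:

```python
import math

# 8 and 15 share no common factor other than 1, so they are coprime
assert math.gcd(8, 15) == 1
# 9 and 15 are both divisible by 3, so they are not coprime
assert math.gcd(9, 15) == 3
print("coprimality checks passed")
```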
> Since $\mathcal{F}(a)$ is a periodic function, it has some period $r$. Knowing that $x^0 \bmod N = 1$, this means that $x^r \bmod N = 1$ since the function is periodic, and thus $r$ is just the first nonzero power where $x^r = 1 (\bmod N)$.
Given this information and through the following algebraic manipulation:
$$ x^r \equiv 1 \bmod N $$
$$ x^r = (x^{r/2})^2 \equiv 1 \bmod N $$
$$ (x^{r/2})^2 - 1 \equiv 0 \bmod N $$
and if $r$ is an even number:
$$ (x^{r/2} + 1)(x^{r/2} - 1) \equiv 0 \bmod N $$
From this, the product $(x^{r/2} + 1)(x^{r/2} - 1)$ is an integer multiple of $N$, the number to be factored. Thus, so long as neither $(x^{r/2} + 1)$ nor $(x^{r/2} - 1)$ is itself a multiple of $N$, at least one of them must have a nontrivial factor in common with $N$.
So computing $\text{gcd}(x^{r/2} - 1, N)$ and $\text{gcd}(x^{r/2} + 1, N)$ will obtain a factor of $N$, where $\text{gcd}$ is the greatest common divisor function, which can be calculated by the polynomial time [Euclidean algorithm](https://en.wikipedia.org/wiki/Euclidean_algorithm).
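For the sequence above ($x = 2$, $N = 15$, period $r = 4$), this recovers the factors directly:

```python
import math

x, N, r = 2, 15, 4               # period of 2 mod 15, from the sequence above
p = math.gcd(x**(r//2) - 1, N)   # gcd(3, 15) = 3
q = math.gcd(x**(r//2) + 1, N)   # gcd(5, 15) = 5
print(p, q)  # 3 5
```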
#### Classical Steps to Shor's Algorithm
Let's assume for a moment that a period finding machine exists that takes as input coprime integers $x, N$ and outputs the period of $x \bmod N$, implemented as a brute-force search below. Let's show how to use the machine to find all prime factors of $N$ using the number theory described above.
```
# Brute force period finding algorithm
def find_period_classical(x, N):
    n = 1
    t = x
    while t != 1:
        t *= x
        t %= N
        n += 1
    return n
```
For simplicity, assume that $N$ has only two distinct prime factors: $N = pq$.
<div class="alert alert-block alert-info"> <a id='stepsone'></a>
<ol>
<li>Pick a random integer $x$ between $1$ and $N$ and compute the greatest common divisor $\text{gcd}(x,N)$ using Euclid's algorithm.</li>
<li>If $x$ and $N$ have some common prime factors, $\text{gcd}(x,N)$ will equal $p$ or $q$. Otherwise $\text{gcd}(x,N) = 1$, meaning $x$ and $N$ are coprime. </li>
<li>Let $r$ be the period of $x \bmod N$ computed by the period finding machine. Repeat the above steps with different random choices of $x$ until $r$ is even.</li>
<li>Now $p$ and $q$ can be found by computing $\text{gcd}(x^{r/2} \pm 1, N)$ as long as $x^{r/2} \neq \pm 1$.</li>
</ol>
</div>
As an example, consider $N = 15$. Let's look at all values of $1 < x < 15$ where $x$ is coprime with $15$:
| $x$ | $x^a \bmod 15$ | Period $r$ |$\text{gcd}(x^{r/2}-1,15)$|$\text{gcd}(x^{r/2}+1,15)$ |
|:-----:|:----------------------------:|:----------:|:------------------------:|:-------------------------:|
| 2 | 1,2,4,8,1,2,4,8,1,2,4... | 4 | 3 | 5 |
| 4 | 1,4,1,4,1,4,1,4,1,4,1... | 2 | 3 | 5 |
| 7 | 1,7,4,13,1,7,4,13,1,7,4... | 4 | 3 | 5 |
| 8 | 1,8,4,2,1,8,4,2,1,8,4... | 4 | 3 | 5 |
| 11 | 1,11,1,11,1,11,1,11,1,11,1...| 2 | 5 | 3 |
| 13 | 1,13,4,7,1,13,4,7,1,13,4,... | 4 | 3 | 5 |
| 14 | 1,14,1,14,1,14,1,14,1,14,1...| 2 | 1 | 15 |
As can be seen, any value of $x$ except $14$ will return the factors of $15$, that is, $3$ and $5$. $14$ is an example of the special case where $(x^{r/2} + 1)$ or $(x^{r/2} - 1)$ is a multiple of $N$ and thus another $x$ needs to be tried.
In general, it can be shown that this special case occurs infrequently, so on average only two calls to the period finding machine are sufficient to factor $N$.
For a more interesting example, let's first find a larger $N$ that is semiprime yet still relatively small. Using a [Sieve of Eratosthenes](https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) [Python implementation](http://archive.oreilly.com/pub/a/python/excerpt/pythonckbk_chap1/index1.html?page=last), let's generate a list of all the prime numbers less than a thousand, randomly select two, and multiply them.
```
import random, itertools
# Sieve of Eratosthenes algorithm
def sieve():
    D = {}
    yield 2
    for q in itertools.islice(itertools.count(3), 0, None, 2):
        p = D.pop(q, None)
        if p is None:
            D[q*q] = q
            yield q
        else:
            x = p + q
            while x in D or not (x & 1):
                x += p
            D[x] = p

# Creates a list of prime numbers up to the given argument
def get_primes_sieve(n):
    return list(itertools.takewhile(lambda p: p < n, sieve()))

def get_semiprime(n):
    primes = get_primes_sieve(n)
    l = len(primes)
    p = primes[random.randrange(l)]
    q = primes[random.randrange(l)]
    return p*q

N = get_semiprime(1000)
print("semiprime N =", N)
```
Now implement the [above steps](#stepsone) of Shor's Algorithm:
```
import math

def shors_algorithm_classical(N):
    x = random.randint(2, N - 1)        # step one: x must lie strictly between 1 and N
    if math.gcd(x, N) != 1:             # step two
        return x, 0, math.gcd(x, N), N // math.gcd(x, N)
    r = find_period_classical(x, N)     # step three
    while r % 2 != 0:                   # retry with a different coprime x until r is even
        x = random.randint(2, N - 1)
        while math.gcd(x, N) != 1:
            x = random.randint(2, N - 1)
        r = find_period_classical(x, N)
    p = math.gcd(x**(r//2) + 1, N)      # step four, ignoring the case where (x^(r/2) +/- 1) is a multiple of N
    q = math.gcd(x**(r//2) - 1, N)
    return x, r, p, q
x,r,p,q = shors_algorithm_classical(N)
print("semiprime N = ",N,", coprime x = ",x,", period r = ",r,", prime factors = ",p," and ",q,sep="")
```
### Quantum Period Finding <a id='quantumperiodfinding'></a>
Let's first describe the quantum period finding algorithm, and then go through a few of the steps in detail, before going through an example. This algorithm takes two coprime integers, $x$ and $N$, and outputs $r$, the period of $\mathcal{F}(a) = x^a\bmod N$.
<div class="alert alert-block alert-info"><a id='stepstwo'></a>
<ol>
<li> Choose $T = 2^t$ such that $N^2 \leq T \le 2N^2$. Initialise two registers of qubits, first an argument register with $t$ qubits and second a function register with $n = \lceil \log_2 N \rceil$ qubits. These registers start in the initial state:
$$\vert\psi_0\rangle = \vert 0 \rangle \vert 0 \rangle$$ </li>
<li> Apply a Hadamard gate on each of the qubits in the argument register to yield an equally weighted superposition of all integers from $0$ to $T-1$:
$$\vert\psi_1\rangle = \frac{1}{\sqrt{T}}\sum_{a=0}^{T-1}\vert a \rangle \vert 0 \rangle$$ </li>
<li> Implement the modular exponentiation function $x^a \bmod N$ on the function register, giving the state:
$$\vert\psi_2\rangle = \frac{1}{\sqrt{T}}\sum_{a=0}^{T-1}\vert a \rangle \vert x^a \bmod N \rangle$$
This $\vert\psi_2\rangle$ is highly entangled and exhibits quantum parallelism, i.e. the function entangled in parallel all the input values from $0$ to $T-1$ with the corresponding values of $x^a \bmod N$, even though the function was only executed once. </li>
<li> Perform a quantum Fourier transform on the argument register, resulting in the state:
$$\vert\psi_3\rangle = \frac{1}{T}\sum_{a=0}^{T-1}\sum_{z=0}^{T-1}e^{(2\pi i)(az/T)}\vert z \rangle \vert x^a \bmod N \rangle$$
where due to the interference, only the terms $\vert z \rangle$ with
$$z = qT/r $$
have significant amplitude where $q$ is a random integer ranging from $0$ to $r-1$ and $r$ is the period of $\mathcal{F}(a) = x^a\bmod N$. </li>
<li> Measure the argument register to obtain a classical result $z$. With reasonable probability, the continued fraction approximation of $z / T$ will be $q / r$ (possibly in reduced form), so the period $r$ can be recovered from its denominator.</li>
</ol>
</div>
Note how quantum parallelism and constructive interference have been used to detect and measure periodicity of the modular exponentiation function. The fact that interference makes it easier to measure periodicity should not come as a big surprise. After all, physicists routinely use scattering of electromagnetic waves and interference measurements to determine periodicity of physical objects such as crystal lattices. Likewise, Shor's algorithm exploits interference to measure periodicity of arithmetic objects, a computational interferometer of sorts.
#### Modular Exponentiation
The modular exponentiation, step 3 above, that is the evaluation of $x^a \bmod N$ for $2^t$ values of $a$ in parallel, is the most demanding part of the algorithm. It can be performed using the binary representation of the exponent: $a = a_{t-1}2^{t-1} + \ldots + a_12^1 + a_02^0$, where $a_k$ are the binary digits of $a$. From this, it follows that:
\begin{aligned}
x^a \bmod N & = x^{2^{(t-1)}a_{t-1}} ... x^{2a_1}x^{a_0} \bmod N \\
& = x^{2^{(t-1)}a_{t-1}} ... [x^{2a_1}[x^{2a_0} \bmod N] \bmod N] ... \bmod N \\
\end{aligned}
This means that 1 is first multiplied by $x^1 \bmod N$ if and only if $a_0 = 1$, then the result is multiplied by $x^2 \bmod N$ if and only if $a_1 = 1$, and so forth, until finally the result is multiplied by $x^{2^{(t-1)}}\bmod N$ if and only if $a_{t-1} = 1$.
Therefore, the modular exponentiation consists of $t$ serial multiplications modulo $N$, each of them controlled by the qubit $a_k$. The values $x,x^2,...,x^{2^{(t-1)}} \bmod N$ can be found efficiently on a classical computer by repeated squaring.
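The classical side of this precomputation is just repeated squaring; a minimal sketch (the helper name is ours, not from the notebook):

```python
def repeated_squares_mod(x, t, N):
    """Return [x^(2^0) mod N, x^(2^1) mod N, ..., x^(2^(t-1)) mod N]."""
    powers = []
    cur = x % N
    for _ in range(t):
        powers.append(cur)
        cur = (cur * cur) % N  # square the previous power each step
    return powers

print(repeated_squares_mod(7, 4, 15))  # [7, 4, 1, 1]
```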
#### Quantum Fourier Transform
The Fourier transform occurs in many different versions throughout classical computing, in areas ranging from signal processing to data compression to complexity theory. The quantum Fourier transform (QFT), step 4 above, is the quantum implementation of the discrete Fourier transform over the amplitudes of a wavefunction.
The classical discrete Fourier transform acts on a vector $(x_0, ..., x_{N-1})$ and maps it to the vector $(y_0, ..., y_{N-1})$ according to the formula
$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$
where $\omega_N^{jk} = e^{2\pi i \frac{jk}{N}}$.
Similarly, the quantum Fourier transform acts on a quantum state $\sum_{i=0}^{N-1} x_i \vert i \rangle$ and maps it to the quantum state $\sum_{i=0}^{N-1} y_i \vert i \rangle$ according to the formula
$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$
with $\omega_N^{jk}$ defined as above. Note that only the amplitudes of the state were affected by this transformation.
This can also be expressed as the map:
$$\vert x \rangle \mapsto \frac{1}{\sqrt{N}}\sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle$$
Or the unitary matrix:
$$ U_{QFT} = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \omega_N^{xy} \vert y \rangle \langle x \vert$$
As an example, we've actually already seen the quantum Fourier transform for $N = 2$: it is the Hadamard operator ($H$):
$$H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$
Suppose we have the single qubit state $\alpha \vert 0 \rangle + \beta \vert 1 \rangle$, if we apply the $H$ operator to this state, we obtain the new state:
$$\frac{1}{\sqrt{2}}(\alpha + \beta) \vert 0 \rangle + \frac{1}{\sqrt{2}}(\alpha - \beta) \vert 1 \rangle
\equiv \tilde{\alpha}\vert 0 \rangle + \tilde{\beta}\vert 1 \rangle$$
Notice how the Hadamard gate performs the discrete Fourier transform for $N = 2$ on the amplitudes of the state.
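We can verify numerically that the $N = 2$ DFT matrix built from $\omega_N = e^{2\pi i/N}$ coincides with $H$ (a pure-Python check, not part of the original notebook):

```python
import cmath

N = 2
omega = cmath.exp(2j * cmath.pi / N)  # omega = e^{2 pi i / 2} = -1 (up to rounding)
# DFT matrix F[j][k] = omega^(jk) / sqrt(N)
F = [[omega**(j * k) / N**0.5 for k in range(N)] for j in range(N)]
H = [[1 / 2**0.5, 1 / 2**0.5],
     [1 / 2**0.5, -1 / 2**0.5]]
for j in range(N):
    for k in range(N):
        assert abs(F[j][k] - H[j][k]) < 1e-12
print("DFT matrix for N = 2 matches the Hadamard matrix")
```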
So what does the quantum Fourier transform look like for larger $N$? Let's derive a circuit for $N=2^n$, $QFT_N$ acting on the state $\vert x \rangle = \vert x_1...x_n \rangle$ where $x_1$ is the most significant bit.
\begin{aligned}
QFT_N\vert x \rangle & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle \\
& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i xy / 2^n} \vert y \rangle \:\text{since}\: \omega_N^{xy} = e^{2\pi i \frac{xy}{N}} \:\text{and}\: N = 2^n\\
& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i \left(\sum_{k=1}^n y_k/2^k\right) x} \vert y_1 ... y_n \rangle \:\text{rewriting in fractional binary notation}\: y = y_1...y_n, y/2^n = \sum_{k=1}^n y_k/2^k \\
& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \prod_{k=1}^n e^{2 \pi i x y_k/2^k } \vert y_1 ... y_n \rangle \:\text{after expanding the exponential of a sum to a product of exponentials} \\
& = \frac{1}{\sqrt{N}} \bigotimes_{k=1}^n \left(\vert0\rangle + e^{2 \pi i x /2^k } \vert1\rangle \right) \:\text{after rearranging the sum and products, and expanding} \\
& = \frac{1}{\sqrt{N}} \left(\vert0\rangle + e^{2 \pi i[0.x_n]} \vert1\rangle\right) \otimes...\otimes \left(\vert0\rangle + e^{2 \pi i[0.x_1.x_2...x_{n-1}.x_n]} \vert1\rangle\right) \:\text{as}\: e^{2 \pi i x/2^k} = e^{2 \pi i[0.x_k...x_n]}
\end{aligned}
This is a very useful form of the QFT for $N=2^n$, as only the last qubit depends on the values of all the other input qubits, and each further bit depends less and less on the input qubits. Furthermore, note that $e^{2 \pi i[0.x_n]}$ is either $+1$ or $-1$, which resembles the Hadamard transform.
Before we create the circuit code for general $N=2^n$, let's look at $N=8,n=3$:
$$QFT_8\vert x_1x_2x_3\rangle = \frac{1}{\sqrt{8}} \left(\vert0\rangle + e^{2 \pi i[0.x_3]} \vert1\rangle\right) \otimes \left(\vert0\rangle + e^{2 \pi i[0.x_2.x_3]} \vert1\rangle\right) \otimes \left(\vert0\rangle + e^{2 \pi i[0.x_1.x_2.x_3]} \vert1\rangle\right) $$
The steps to creating the circuit for $\vert y_1y_2y_3\rangle = QFT_8\vert x_1x_2x_3\rangle$, remembering the [controlled phase rotation gate](../tools/quantum_gates_and_linear_algebra.ipynb) $CU_1$, would be:
1. Apply a Hadamard to $\vert x_3 \rangle$, giving the state $\frac{1}{\sqrt{2}}\left(\vert0\rangle + e^{2 \pi i.0.x_3} \vert1\rangle\right) = \frac{1}{\sqrt{2}}\left(\vert0\rangle + (-1)^{x_3} \vert1\rangle\right)$
2. Apply a Hadamard to $\vert x_2 \rangle$, then depending on $x_3$ (before its Hadamard gate) a $CU_1(\frac{\pi}{2})$, giving the state $\frac{1}{\sqrt{2}}\left(\vert0\rangle + e^{2 \pi i[0.x_2.x_3]} \vert1\rangle\right)$.
3. Apply a Hadamard to $\vert x_1 \rangle$, then $CU_1(\frac{\pi}{2})$ depending on $x_2$, and $CU_1(\frac{\pi}{4})$ depending on $x_3$.
4. Measure the bits in reverse order, that is $y_3 = x_1, y_2 = x_2, y_1 = x_3$.
In Qiskit, this is:
```
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
import math

q3 = QuantumRegister(3, 'q3')
c3 = ClassicalRegister(3, 'c3')
qft3 = QuantumCircuit(q3, c3)
qft3.h(q3[0])
qft3.cu1(math.pi/2.0, q3[1], q3[0])
qft3.h(q3[1])
qft3.cu1(math.pi/4.0, q3[2], q3[0])
qft3.cu1(math.pi/2.0, q3[2], q3[1])
qft3.h(q3[2])
```
For $N=2^n$, this can be generalised, as in the `qft` function in [tools.qi](https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/tools/qi/qi.py):
```
def qft(circ, q, n):
    """n-qubit QFT on q in circ."""
    for j in range(n):
        for k in range(j):
            circ.cu1(math.pi/float(2**(j-k)), q[j], q[k])
        circ.h(q[j])
```
#### Example
Let's factorize $N = 21$ with coprime $x=2$, following the [above steps](#stepstwo) of the quantum period finding algorithm, which should return $r = 6$. This example follows one from [this](https://arxiv.org/abs/quant-ph/0303175) tutorial.
1. Choose $T = 2^t$ such that $N^2 \leq T \le 2N^2$. For $N = 21$, the smallest value of $t$ is 9, meaning $T = 2^t = 512$. Initialise two registers of qubits, first an argument register with $t = 9$ qubits, and second a function register with $n = \lceil \log_2 N \rceil = 5$ qubits:
$$\vert\psi_0\rangle = \vert 0 \rangle \vert 0 \rangle$$
2. Apply a Hadamard gate on each of the qubits in the argument register:
$$\vert\psi_1\rangle = \frac{1}{\sqrt{T}}\sum_{a=0}^{T-1}\vert a \rangle \vert 0 \rangle = \frac{1}{\sqrt{512}}\sum_{a=0}^{511}\vert a \rangle \vert 0 \rangle$$
3. Implement the modular exponentiation function $x^a \bmod N$ on the function register:
\begin{eqnarray}
\vert\psi_2\rangle
& = & \frac{1}{\sqrt{T}}\sum_{a=0}^{T-1}\vert a \rangle \vert x^a \bmod N \rangle
= \frac{1}{\sqrt{512}}\sum_{a=0}^{511}\vert a \rangle \vert 2^a \bmod 21 \rangle \\
& = & \frac{1}{\sqrt{512}} \bigg( \;\; \vert 0 \rangle \vert 1 \rangle + \vert 1 \rangle \vert 2 \rangle +
\vert 2 \rangle \vert 4 \rangle + \vert 3 \rangle \vert 8 \rangle + \;\; \vert 4 \rangle \vert 16 \rangle + \;\,
\vert 5 \rangle \vert 11 \rangle \, + \\
& & \;\;\;\;\;\;\;\;\;\;\;\;\;\, \vert 6 \rangle \vert 1 \rangle + \vert 7 \rangle \vert 2 \rangle + \vert 8 \rangle \vert 4 \rangle + \vert 9 \rangle \vert 8 \rangle + \vert 10 \rangle \vert 16 \rangle + \vert 11 \rangle \vert 11 \rangle \, +\\
& & \;\;\;\;\;\;\;\;\;\;\;\;\, \vert 12 \rangle \vert 1 \rangle + \ldots \bigg)\\
\end{eqnarray}
Notice that the above expression has the following pattern: the states of the second register of each “column” are the same. Therefore we can rearrange the terms in order to collect the second register:
\begin{eqnarray}
\vert\psi_2\rangle
& = & \frac{1}{\sqrt{512}} \bigg[ \big(\,\vert 0 \rangle + \;\vert 6 \rangle + \vert 12 \rangle \ldots + \vert 504 \rangle + \vert 510 \rangle \big) \, \vert 1 \rangle \, + \\
& & \;\;\;\;\;\;\;\;\;\;\; \big(\,\vert 1 \rangle + \;\vert 7 \rangle + \vert 13 \rangle \ldots + \vert 505 \rangle + \vert 511 \rangle \big) \, \vert 2 \rangle \, + \\
& & \;\;\;\;\;\;\;\;\;\;\; \big(\,\vert 2 \rangle + \;\vert 8 \rangle + \vert 14 \rangle \ldots + \vert 506 \rangle \big) \, \vert 4 \rangle \, + \\
& & \;\;\;\;\;\;\;\;\;\;\; \big(\,\vert 3 \rangle + \;\vert 9 \rangle + \vert 15 \rangle \ldots + \vert 507 \rangle \big) \, \vert 8 \rangle \, + \\
& & \;\;\;\;\;\;\;\;\;\;\; \big(\,\vert 4 \rangle + \vert 10 \rangle + \vert 16 \rangle \ldots + \vert 508 \rangle \big) \vert 16 \rangle \, + \\
& & \;\;\;\;\;\;\;\;\;\;\; \big(\,\vert 5 \rangle + \vert 11 \rangle + \vert 17 \rangle \ldots + \vert 509 \rangle \big) \vert 11 \rangle \, \bigg]\\
\end{eqnarray}
4. To simplify the following equations, we'll measure the function register before performing a quantum Fourier transform on the argument register. This will yield one of the following numbers with equal probability: $\{1,2,4,8,16,11\}$. Suppose that the result of the measurement was $2$, then:
$$\vert\psi_3\rangle = \frac{1}{\sqrt{86}}(\vert 1 \rangle + \;\vert 7 \rangle + \vert 13 \rangle \ldots + \vert 505 \rangle + \vert 511 \rangle)\, \vert 2 \rangle $$
It does not matter what the result of the measurement is; what matters is the periodic pattern. The period of the states of the first register is the solution to the problem, and the quantum Fourier transform can reveal the value of the period.
5. Perform a quantum Fourier transform on the argument register:
$$
\vert\psi_4\rangle
= QFT(\vert\psi_3\rangle)
= QFT(\frac{1}{\sqrt{86}}\sum_{a=0}^{85}\vert 6a+1 \rangle)\vert 2 \rangle
= \frac{1}{\sqrt{512}}\sum_{j=0}^{511}\bigg(\big[ \frac{1}{\sqrt{86}}\sum_{a=0}^{85} e^{-2 \pi i \frac{6ja}{512}} \big] e^{-2\pi i\frac{j}{512}}\vert j \rangle \bigg)\vert 2 \rangle
$$
6. Measure the argument register. The probability of measuring a result $j$ is:
$$ \rm{Probability}(j) = \frac{1}{512 \times 86} \bigg\vert \sum_{a=0}^{85}e^{-2 \pi i \frac{6ja}{512}} \bigg\vert^2$$
This peaks at $j=0,85,171,256,341,427$. Suppose that the result of the measurement yielded $j = 85$; then using the continued fraction approximation of $\frac{512}{85}$, we obtain $r=6$, as expected.
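The continued-fraction step can be reproduced with the standard library's `fractions` module: approximating $j/T = 85/512$ by a fraction whose denominator is at most $N = 21$ yields $1/6$, and the denominator is the period.

```python
from fractions import Fraction

T, j, N = 512, 85, 21
approx = Fraction(j, T).limit_denominator(N)  # best fraction with denominator <= N
print(approx, "-> r =", approx.denominator)   # 1/6 -> r = 6
```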
## Implementation <a id='implementation'></a>
```
from qiskit import BasicAer, execute
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit.tools.visualization import plot_histogram
```
As mentioned [earlier](#shorsalgorithm), many of the experimental demonstrations of Shor's algorithm rely on significant optimisations based on a priori knowledge of the expected results. We will follow the formulation in [this](http://science.sciencemag.org/content/351/6277/1068) paper, which demonstrates a reasonably scalable realisation of Shor's algorithm using $N = 15$. Below is the first figure from the paper, showing various quantum circuits, with the following caption: _Diagrams of Shor’s algorithm for factoring $N = 15$, using a generic textbook approach (**A**) compared with Kitaev’s approach (**B**) for a generic base $a$. (**C**) The actual implementation for factoring $15$ to base $11$, optimized for the corresponding single-input state. Here $q_i$ corresponds to the respective qubit in the computational register. (**D**) Kitaev’s approach to Shor’s algorithm for the bases ${2, 7, 8, 13}$. Here, the optimized map of the first multiplier is identical in all four cases, and the last multiplier is implemented with full modular multipliers, as depicted in (**E**). In all cases, the single QFT qubit is used three times, which, together with the four qubits in the computation register, totals seven effective qubits. (**E**) Circuit diagrams of the modular multipliers of the form $a \bmod N$ for bases $a = {2, 7, 8, 11, 13}$._
<img src="images/shoralgorithm.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="center">
Note that we cannot run this version of Shor's algorithm on an IBM Quantum Experience device at the moment, as we currently lack the ability to do measurement feedforward and qubit resetting, so for now we'll just be building the circuits to run on the simulators. This implementation is based on Pinakin Padalia & Amitabh Yadav's, found [here](https://github.com/amitabhyadav/Shor-Algorithm-on-IBM-Quantum-Experience).
First we'll construct the $a^1 \bmod 15$ circuits for $a = 2,7,8,11,13$ as in **E**:
```
# qc = quantum circuit, qr = quantum register, cr = classical register, a = 2, 7, 8, 11 or 13
def circuit_amod15(qc,qr,cr,a):
    if a == 2:
        qc.cswap(qr[4],qr[3],qr[2])
        qc.cswap(qr[4],qr[2],qr[1])
        qc.cswap(qr[4],qr[1],qr[0])
    elif a == 7:
        qc.cswap(qr[4],qr[1],qr[0])
        qc.cswap(qr[4],qr[2],qr[1])
        qc.cswap(qr[4],qr[3],qr[2])
        qc.cx(qr[4],qr[3])
        qc.cx(qr[4],qr[2])
        qc.cx(qr[4],qr[1])
        qc.cx(qr[4],qr[0])
    elif a == 8:
        qc.cswap(qr[4],qr[1],qr[0])
        qc.cswap(qr[4],qr[2],qr[1])
        qc.cswap(qr[4],qr[3],qr[2])
    elif a == 11: # this is included for completeness
        qc.cswap(qr[4],qr[2],qr[0])
        qc.cswap(qr[4],qr[3],qr[1])
        qc.cx(qr[4],qr[3])
        qc.cx(qr[4],qr[2])
        qc.cx(qr[4],qr[1])
        qc.cx(qr[4],qr[0])
    elif a == 13:
        qc.cswap(qr[4],qr[3],qr[2])
        qc.cswap(qr[4],qr[2],qr[1])
        qc.cswap(qr[4],qr[1],qr[0])
        qc.cx(qr[4],qr[3])
        qc.cx(qr[4],qr[2])
        qc.cx(qr[4],qr[1])
        qc.cx(qr[4],qr[0])
```
Next we'll build the rest of the period finding circuit as in **D**:
```
# qc = quantum circuit, qr = quantum register, cr = classical register, a = 2, 7, 8, 11 or 13
def circuit_aperiod15(qc,qr,cr,a):
    if a == 11:
        circuit_11period15(qc,qr,cr)
        return
    # Initialize q[0] to |1>
    qc.x(qr[0])
    # Apply a**4 mod 15
    qc.h(qr[4])
    # controlled identity on the remaining 4 qubits, which is equivalent to doing nothing
    qc.h(qr[4])
    # measure
    qc.measure(qr[4],cr[0])
    # reinitialise q[4] to |0>
    qc.reset(qr[4])
    # Apply a**2 mod 15
    qc.h(qr[4])
    # controlled unitary
    qc.cx(qr[4],qr[2])
    qc.cx(qr[4],qr[0])
    # feed forward
    qc.u1(math.pi/2.,qr[4]).c_if(cr, 1)
    qc.h(qr[4])
    # measure
    qc.measure(qr[4],cr[1])
    # reinitialise q[4] to |0>
    qc.reset(qr[4])
    # Apply a mod 15
    qc.h(qr[4])
    # controlled unitary
    circuit_amod15(qc,qr,cr,a)
    # feed forward
    qc.u1(3.*math.pi/4.,qr[4]).c_if(cr, 3)
    qc.u1(math.pi/2.,qr[4]).c_if(cr, 2)
    qc.u1(math.pi/4.,qr[4]).c_if(cr, 1)
    qc.h(qr[4])
    # measure
    qc.measure(qr[4],cr[2])
```
Next we build the optimised circuit for $11 \bmod 15$ as in **C**.
```
def circuit_11period15(qc,qr,cr):
    # Initialize q[0] to |1>
    qc.x(qr[0])
    # Apply a**4 mod 15
    qc.h(qr[4])
    # controlled identity on the remaining 4 qubits, which is equivalent to doing nothing
    qc.h(qr[4])
    # measure
    qc.measure(qr[4],cr[0])
    # reinitialise q[4] to |0>
    qc.reset(qr[4])
    # Apply a**2 mod 15
    qc.h(qr[4])
    # controlled identity on the remaining 4 qubits, which is equivalent to doing nothing
    # feed forward
    qc.u1(math.pi/2.,qr[4]).c_if(cr, 1)
    qc.h(qr[4])
    # measure
    qc.measure(qr[4],cr[1])
    # reinitialise q[4] to |0>
    qc.reset(qr[4])
    # Apply 11 mod 15
    qc.h(qr[4])
    # controlled unitary
    qc.cx(qr[4],qr[3])
    qc.cx(qr[4],qr[1])
    # feed forward
    qc.u1(3.*math.pi/4.,qr[4]).c_if(cr, 3)
    qc.u1(math.pi/2.,qr[4]).c_if(cr, 2)
    qc.u1(math.pi/4.,qr[4]).c_if(cr, 1)
    qc.h(qr[4])
    # measure
    qc.measure(qr[4],cr[2])
```
Let's build and run a circuit for $a = 7$, and plot the circuit and results:
```
q = QuantumRegister(5, 'q')
c = ClassicalRegister(5, 'c')
shor = QuantumCircuit(q, c)
circuit_aperiod15(shor,q,c,7)
shor.draw(output='mpl')
backend = BasicAer.get_backend('qasm_simulator')
sim_job = execute([shor], backend)
sim_result = sim_job.result()
sim_data = sim_result.get_counts(shor)
plot_histogram(sim_data)
```
We see the measurements yield $x = 0, 2, 4$ and $6$ with equal(ish) probability. Using the continued fraction expansion for $x/2^3$, we note that only $x = 2$ and $6$ give the correct period $r = 4$, and thus the factors $p = \text{gcd}(a^{r/2}+1,15) = 3$ and $q = \text{gcd}(a^{r/2}-1,15) = 5$.
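The same continued-fraction bookkeeping can be checked with the standard library (a quick sketch, not part of the original notebook):

```python
from fractions import Fraction

for x in (0, 2, 4, 6):
    frac = Fraction(x, 2**3).limit_denominator(15)  # approximate x/8 with denominator <= N
    print(x, frac, frac.denominator)
# Denominators come out as 1, 4, 2, 4: only x = 2 and x = 6 reveal the full period r = 4.
```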
Why don't you try seeing what you get for $a = 2, 8, 11, 13$?
***
## Regular Expressions
-----------------------------------
Regular expressions, also known as 'regex' or 'regexp', are search patterns defined with a formal syntax. As long as we follow their rules, we can perform both simple and advanced searches, which, combined with other functionality, make them one of the most useful and important features of any language.
<center><img src='./img/regex.PNG'></center>
```
import re
```
**Basic Methods**
- **search**: looks for a pattern in another string
```
texto = "En esta cadena se encuentra una palabra mágica"
re.search('mágica', texto)
```
As we can see, the search returns a Match object, rather than a simple True or False.
```
palabra = "mágica"
encontrado = re.search(palabra, texto)
if encontrado:
    print("Found the word:", palabra)
else:
    print("Did not find the word:", palabra)
```
However, returning to the Match object, it offers some interesting options.
```
# Position where the match starts
print( encontrado.start() )
# Position where the match ends
print( encontrado.end() )
# Tuple with the positions where the match starts and ends
print( encontrado.span() )
# String the search was performed on
print( encontrado.string )
```
- **findall**: finds all the matches in a string
```
# re.findall(r'regex','string')
texto= "Love #movies! I had fun yesterday going to the #movies"
re.findall(r"#movies", texto)
```
- **split**: splits a text string according to a pattern
```
# re.split(r'regex', 'string')
texto="Nice Place to eat! I'll come back! Excellent meat!"
re.split(r"!", texto)
```
- **sub**: replaces part of a text with another
```
# re.sub(r'regex', 'sub' , 'string')
texto = "I have a yellow car and a yellow house in a yellow neighborhood"
sub = 'nice'
re.sub(r"yellow", sub ,texto)
```
**Metacharacters**
These allow searching for patterns with special characteristics
<img src='./img/codigo_escapado.PNG'>
```
texto = "The winners are: User9, UserN, User8,UserÑ , User!"
# finding User[digit] values in the text
patron = r"User\d"
print(re.findall(patron, texto))
# finding User[non-digit] values in the text
print(re.findall(r"User\D", texto))
print(re.findall(r"User\w", texto))
re.findall(r"User[0-9a-zA-Z]", texto) # matches only English letters and digits
```
**Repetitions**
Suppose we have to validate the following string
```
password = "el texto password1234"
re.search(r"password\d\d\d\d", password)
```
To make that search easier there are repetition qualifiers, which indicate how many times a specific character or metacharacter is repeated.
- With a fixed number of repetitions <code>{n}</code>
**n** -> indicates how many times a character is repeated
```
# four digits after 'password'
re.search(r"password\d{4}", password)
```
**Quantifiers**
Like the repetition qualifiers, these indicate how many times a given expression is repeated:
- <code>+</code> : one or more times
- <code>*</code> : zero or more times
- <code>?</code> : zero or one time
- <code>{n,m}</code> : at least n times, at most m times
**note**
<code>r"apple+"</code> : <code>+</code> applies to the expression on its left
```
# "+" -> digits repeated one or more times
text = "Date of start: 4-3. Date of registration: 10-04 , 100-4., 4-"
re.findall(r"\d+-[0-9]+", text)
# "*" -> zero or more non-word characters between word characters
my_string = "The concert was amazing! @ameli!a @joh&&n @mary90"
re.findall(r"@\w+\W*\w+", my_string)
# "?" -> the 'u' is optional, so both spellings match
text = "The color of this image is amazing. However, the colour blue could be brighter."
re.findall(r"colou?r", text)
# {n,m} -> bounded repetition counts for each group of digits
phone_number = "John: 1-966-847-3131 Michelle: 54-908-42-42424"
re.findall(r"[0-9]{1,2}-\d{3}-\d{2,3}-\d{4,}", phone_number)
```
### Special Characters
- Matches any character (except newline): <code>.</code>
```
my_links = "Just check out this link: www.amazingpics.com. It has amazing photos!"
re.findall(r"w{3}.+com", my_links)
```
- Start of the text string: <code>^</code>
- End of the text string: <code>$</code>
```
my_string = "the 80s music was much better that the 90s"
# finds any text of the form 'the (num)s'
print(re.findall(r"the\s\d+s", my_string))
# the text string starts with 'the'
print(re.findall(r"^the\s\d+s", my_string))
# the text string ends with 'the (num)s'
re.findall(r"the\s\d+s$", my_string)
```
- Special escape character: <code>\</code>
```
my_string = "I love the music of Mr.Go. However, the sound was too loud."
# Splitting the text by '.\s' -> the intent is to split on '.'
print(re.split(r".\s", my_string))
# using '\' to escape the dot
print(re.split(r"\.\s", my_string))
```
- OR operator: <code>|</code>
```
my_string = "Elephants are the world's largest land animal! I would love to see an elephant one day"
# choosing between 'Elephant' or 'elephant'
re.findall(r"Elephant|elephant", my_string)
re.findall(r"[Ee]lephant", my_string)
```
- Character set: <code>[]</code>
<img src='./img/rango_regex.PNG'>
```
# Replacing special characters in the text with " "
my_string = "My&name&is#John Smith. I%live$in#London."
re.sub(r"[#$%&]", " ", my_string)
# [^] - negates the set
my_links = "Bad website: www.99.com. Favorite site: www.hola.com"
re.findall(r"www[^0-9]+com", my_links) # links that contain no digits
```
## Documentation
There are dozens and dozens of special codes; if you want to take a look at all of them, you can check the official documentation:
- https://docs.python.org/3.5/library/re.html#regular-expression-syntax
A summary from Google Education:
- https://developers.google.com/edu/python/regular-expressions
Another very interesting summary on the subject:
- https://www.tutorialspoint.com/python/python_reg_expressions.htm
- http://w3.unpocodetodo.info/utiles/regex.php
A couple of thorough documents with basic and advanced examples:
- http://www.python-course.eu/python3_re.php
- http://www.python-course.eu/python3_re_advanced.php
Testing
- https://regex101.com/
# Problems
--------------------------------
1. **Phone Number Validation**: Write a program that validates whether a character string is a phone number or not
- A phone number is one that has 10 numeric characters and starts with the digit 7, 8 or 9
Validate the cases:
- 9587456281 -> YES
- 1252478965 -> NO
- 8F54698745 -> NO
- 9898959398 -> YES
- 879546242 -> NO
```
regex = r'^[789]\d{9}$' # define the search pattern
evaluar = ['9587456281','1252478965','8F54698745','9898959398','879546242']
for t in evaluar:
x = re.findall(regex, t )
if x:
print(f'{t} -> Yes')
else:
print(f'{t} -> No')
```
2. CSS colors are defined using hexadecimal (HEX) notation for the combination of red, green, and blue (RGB) color values.
HEX color code specification
- It must begin with a '#' symbol.
- It can have 3 or 6 digits.
- Each digit is in the range 0 to F (0,1,2,3,4,5,6,7,8,9,A,B,C,D,E and F).
- Letters may be lowercase (a,b,c,d,e and f are also valid digits).
**Input:** input_regex.css in the src folder
**Expected output:** #FfFdF8, #aef, #f9f9f9, #fff, #ABC, #fff
**Explanation:** #BED and #Cab satisfy the criteria, but they are used as selectors.
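A possible solution sketch for problem 2. Since `input_regex.css` is not provided here, a small sample stylesheet (an assumption) stands in; the pattern requires a ':' earlier in the same declaration, so selectors like `#BED` are skipped:

```python
import re

# Assumed stand-in for the contents of input_regex.css
css = """
#BED { color: #FfFdF8; background-color: #aef; }
#Cab { background-color: #f9f9f9; border: 1px solid #fff; }
span { color: #ABC; outline-color: #fff; }
"""

# A HEX code used as a value (not a selector) appears after a ':' in a declaration;
# try 6 hex digits first so that e.g. #f9f9f9 is not cut down to #f9f
pattern = r":\s*[^;#]*(#(?:[0-9a-fA-F]{6}|[0-9a-fA-F]{3}))\b"
print(re.findall(pattern, css))  # ['#FfFdF8', '#aef', '#f9f9f9', '#fff', '#ABC', '#fff']
```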
# Simon algorithm(Overview)
We explain Simon's algorithm.
In this algorithm, for a function $f_s(x)$ with $n$-bit input and output ($s$ is any $n$-bit sequence), one of the following is assumed to be true.
Case1: Always return different outputs for different inputs (one-to-one correspondence)
Case2: For input $x, x'$, if $x' = x\oplus s$, then $f_s(x) = f_s(x')$. That is, it returns the same output for two inputs.
This algorithm determines whether the oracle is in case 1 or case 2 above.
The concrete quantum circuit is as follows. The contents of $U_f$ are shown for case 2 above with $s=1001$.
The number of qubits is $2 n$.
<img src="../tutorial-ja/img/103_img.png" width="50%">
Check the state.
$$
\begin{align}
\lvert \psi_1\rangle &= \biggl(\otimes^n H\lvert 0\rangle \biggr) \lvert 0\rangle^{\otimes n} \\
&= \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} \lvert x\rangle \lvert 0\rangle^{\otimes n}
\end{align}
$$
Next, consider $\lvert \psi_2 \rangle$.
Here, for $f_s(x)$, we have the following oracle gate $U_f$.
$$
U_f \lvert x \rangle \lvert 0 \rangle = \lvert x \rangle \lvert f_s(x) \rangle
$$
Using this $U_f$, we get
$$
\lvert \psi_2 \rangle = \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} \lvert x\rangle \lvert f_s(x)\rangle
$$
Therefore, $\lvert \psi_3 \rangle$ is as follows
$$
\lvert \psi_3 \rangle = \frac{1}{2^n} \sum_{x=0}^{2^n-1}\sum_{y=0}^{2^n-1} (-1)^{x\cdot y} \lvert y\rangle \lvert f_s(x)\rangle
$$
Now, consider what the measurement result of $\lvert y \rangle$ would be if $f_s(x)$ were as follows.
Case 1: Always return different outputs for different inputs (one-to-one correspondence)
All measurement results are obtained with equal probability.
Case 2: For input $x, x'$, if $x' = x\oplus s$, then $f_s(x) = f_s(x')$. That is, it returns the same output for two inputs.
Notice the amplitude $A(y, x)$ of the state $\lvert y \rangle \lvert f_s(x) \rangle = \lvert y \rangle \lvert f_s(x\oplus s) \rangle$.
$$
A(y, x) = \frac{1}{2^n} \{(-1)^{x\cdot y} + (-1)^{(x\oplus s) \cdot y}\}
$$
As you can see from the equation, the amplitude of $y$ such that $y\cdot s \equiv 1 \bmod2$ is $0$ due to cancellation.
Therefore, only $y$ is measured such that $y\cdot s \equiv 0 \bmod2$.
In both case 1 and case 2, once enough linearly independent $y$ are obtained by measurement (in general $n-1$ of them, excluding $00\ldots0$), we can determine $s'$ such that $y\cdot s' \equiv 0 \bmod 2$ for all those $y$.
In case 1, $s'$ is completely random.
However, in case 2, $f_s(s') = f_s(0)$ always holds because $s' = 0\oplus s'$.
Thus, except for the case where an $s'$ with $f_s(s') = f_s(0)$ happens to be obtained from case 1, which occurs with probability $1/2^n$, we can check whether $s'$ comes from case 1 or case 2 using the oracle gate.
The oracle can be determined from the above.
Finally, we consider the implementation of the oracle gate $U_f$.
In case 1, it is enough that the output has a one-to-one correspondence with the input $x$.
For simplicity, let's consider a circuit that randomly inserts $X$ gates.
Case 2 is a bit more complicated.
First, the $CX$ gate creates the following state.
$$
\lvert \psi_{1a} \rangle = \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} \lvert x\rangle \lvert x\rangle
$$
Next, for the lowest index $i'$ where $s_{i'}=1$, XOR the auxiliary register with $s$ only if $x_{i'} = 0$.
As a result, we get the following $\lvert\psi_2\rangle$.
$$
\begin{align}
\lvert \psi_{2} \rangle &= \frac{1}{\sqrt{2^n}} \biggl(\sum_{\{x_{i'}=0\}} \lvert x\rangle \lvert x \oplus s\rangle + \sum_{\{x_{i'}=1\}} \lvert x\rangle \lvert x\rangle \biggr) \\
&= \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} \lvert x\rangle \lvert f_s(x)\rangle
\end{align}
$$
We can confirm that $f_s(x)$ satisfies case 2 by calculation.
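That calculation can also be done classically; a minimal sketch where `f_s` is a hypothetical helper mirroring the construction above (bit `i'` is the lowest set bit of `s`):

```python
def f_s(x, s, n):
    """Classical model of the case-2 oracle: for the lowest index i' with
    s_{i'} = 1, XOR the output with s only when x_{i'} = 0."""
    ip = next(i for i in range(n) if (s >> i) & 1)  # lowest set bit of s
    return x ^ s if not (x >> ip) & 1 else x

s, n = 0b1001, 4
for x in range(2 ** n):
    assert f_s(x, s, n) == f_s(x ^ s, s, n)  # f_s(x) = f_s(x XOR s) for every x
print("case-2 property verified for s = 1001")
```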
Let's implement this with blueqat.
```
from blueqat import Circuit
import numpy as np
```
Prepare a function for two types of oracle gates $U_f$.
```
def oracle_1(c, s):
_n = len(s)
for i in range(_n):
if np.random.rand() > 0.5:
c.x[i]
for i in range(_n):
c.cx[i, i + _n]
def oracle_2(c, s):
_n = len(s)
flag = 0
for i, si in enumerate(reversed(s)):
c.cx[i, i + _n]
if si == '1' and flag == 0:
c.x[i]
for j, sj in enumerate(s):
if sj == '1':
c.cx[i, j + _n]
c.x[i]
flag = 1
```
The following is the main body of the algorithm.
First, use a random number to determine the oracle (one of the two types) and the $s$ you want to find.
(In the following, the values are fixed to reproduce the quantum circuit shown in the figure above.)
```
n = 4
N = np.random.randint(1, 2**n-1)
s = bin(N)[2:].zfill(n)
which_oracle = np.random.rand()
### to reproduce the quantum circuit shown in the figure above ###
### Erasing these two lines will randomly determine s and oracle###
s = "1001"
which_oracle = 0
######
c = Circuit(n * 2)
c.h[:n]
if which_oracle > 0.5:
oracle_1(c, s)
oracle = "oracle 1"
else:
oracle_2(c, s)
oracle = "oracle 2"
c.h[:n].m[:n]
res = c.run(shots = 1000)
res
```
Extract $n$ results other than '00...0' from the sampling result.
```
res_list = list(res.keys())
_res_list = []
for i in res_list:
    if i[:n] != '0' * n:
        _res_list.append(i[:n])
    if len(_res_list) == n:
        break
print(_res_list)
```
Find $s'$ from the extracted result.
(Here, we are simply looking for $s'$ that matches the condition by brute force, but it is possible to find it efficiently in linear algebra.)
If the oracle in case 2 is selected, the resulting $s'$ should be equal to $s$.
```
for i in range(2**n):
l = bin(i)[2:].zfill(n)
flag = 1
for sampled in _res_list:
        mod = np.sum(np.array(list(l), dtype=int) * np.array(list(sampled), dtype=int)) % 2
if mod:
flag = 0
break
if flag:
output_s = l
print("s' =", output_s)
print("s =", s)
```
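The brute-force search above scales exponentially in $n$. The linear-algebra alternative mentioned above can be sketched as Gaussian elimination over GF(2); the function below is an assumption, not part of the notebook, and assumes the measured rows do not have full rank (true in case 2):

```python
import numpy as np

def nullspace_gf2(ys):
    """Return a nonzero s with y . s = 0 (mod 2) for every row y in ys."""
    A = np.array(ys, dtype=int) % 2
    n = A.shape[1]
    pivots, row = [], 0
    for col in range(n):
        sel = np.nonzero(A[row:, col])[0]          # a row with a 1 in this column
        if len(sel) == 0:
            continue
        A[[row, row + sel[0]]] = A[[row + sel[0], row]]
        for r in range(A.shape[0]):                # clear the column everywhere else
            if r != row and A[r, col]:
                A[r] ^= A[row]
        pivots.append(col)
        row += 1
        if row == A.shape[0]:
            break
    free = next(c for c in range(n) if c not in pivots)  # a free (non-pivot) column
    s = np.zeros(n, dtype=int)
    s[free] = 1
    for r, col in enumerate(pivots):
        s[col] = A[r, free]                        # forced by the free variable
    return s

# Example: y vectors consistent with s = 1001
print(nullspace_gf2([[0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 1]]))  # -> [1 0 0 1]
```

With the sampling results above, each bitstring in `_res_list` would first be converted with `[int(b) for b in y]`.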
```
%env AZURE_EXTENSION_DIR=/home/schrodinger/automl/sdk-cli-v2/src/cli/src
%env AZURE_ML_CLI_PRIVATE_FEATURES_ENABLED=true
```
# Problem
The dataset in this experiment is taken from a Kaggle competition sponsored by Porto Seguro, one of the largest auto and homeowner insurance companies in Brazil. Given an anonymized dataset about drivers, cars etc., the challenge is to build a model that predicts the probability that a driver will initiate an auto insurance claim in the next year.
Kaggle link: https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/overview
# Imports
```
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
# pandas options
pd.set_option("display.max_columns", None)
pd.set_option('display.max_colwidth', 5000)
pd.options.display.float_format = "{:.5f}".format
```
# Load Data
The training dataset has been uploaded to an Azure Blob Store, from where we can download and process it directly using Pandas.
```
train_df = pd.read_csv(
"https://azmlworkshopdata.blob.core.windows.net/safedriverdata/porto_seguro_safe_driver_prediction_train.csv",
index_col="id")
train_df.head()
```
The features have been named in such a way that they indicate the type of the feature.
- The feature that we want to predict is named as 'target'
- Features with 'bin' are binary features (i.e. have 0 or 1 values)
- Features with 'cat' are categorical features
- Features with 'calc' are engineered (calculated) columns
- Remaining features are numerical (continuous)
Also, all missing values are already imputed by a common value of -1.
```
def dtype_dist(df):
def _l(x): return "{%s}" % ', '.join(x["index"])
return df.dtypes.reset_index(name="dtype").groupby(
["dtype"]).apply(lambda x: _l(x)).reset_index(name="feature")
dtype_dist(train_df)
```
# Exploratory Data Analysis
```
X = train_df.drop("target", axis=1)
y = train_df["target"]
X.shape
# separate features based on types
cat_features = [c for c in X.columns if '_cat' in c]
bin_features = [c for c in X.columns if '_bin' in c]
num_features = list(set(X.columns) - set(cat_features + bin_features))
```
## Target Imbalance
The target value is highly skewed, i.e., there are far fewer individuals who claimed insurance (target=1) than those who didn't (target=0). This means that we might have to use advanced methods, like upsampling, to make the model learn the behavior of the minority class instead of always predicting the majority class.
```
display(y.value_counts())
sns.countplot(x=y)
```
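One such method is upsampling the minority class. A minimal sketch on toy data (the frame below is an assumption standing in for `train_df`):

```python
import pandas as pd

# Toy stand-in (assumption) for the imbalanced training frame
df = pd.DataFrame({"feature": range(10), "target": [0] * 8 + [1] * 2})

majority = df[df["target"] == 0]
minority = df[df["target"] == 1]

# Draw minority rows with replacement until both classes have the same size
minority_up = minority.sample(n=len(majority), replace=True, random_state=42)
balanced = pd.concat([majority, minority_up], ignore_index=True)
print(balanced["target"].value_counts())
```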
## Feature Correlations
Understanding feature correlations can help us figure out complex relationships between features (e.g. if they are positively or negatively correlated, or neutral - not dependent on each other at all), and ultimately help in narrowing our choices for model selection. For instance, Linear models (like LinearRegression) may not work well for two or more features that are tightly correlated, and may require some processing (e.g. dropping) before these models can work effectively on them.
The below graphs show the pearson correlations among the features (grouped by their type). The values range from -1 (strong negative correlation) to 1 (strong positive correlation) with 0 being neutral.
There aren't any features that are very tightly correlated with each other, which means we don't need to worry too much about them.
```
def get_masked_corr(df):
corr = df.corr(method='pearson')
mask = np.zeros_like(corr, dtype=bool) # Mask for hiding the upper half of the diagonal
mask[np.triu_indices_from(mask)] = True
corr[mask] = np.nan
return corr
fig, axs = plt.subplots(ncols=2, nrows=2, figsize=(20,20))
axs[1, 1].set_visible(False)
corr = get_masked_corr(X[num_features])
axs[0, 0].set_title("Correlation between numeric features", fontsize=14)
sns.heatmap(corr, cmap=sns.diverging_palette(220, 10, as_cmap=True), square=True, ax=axs[0, 0])
corr = get_masked_corr(X[bin_features])
axs[0, 1].set_title("Correlation between binary features", fontsize=14)
sns.heatmap(corr, cmap=sns.diverging_palette(220, 10, as_cmap=True), square=True, ax=axs[0, 1])
corr = get_masked_corr(X[cat_features])
axs[1, 0].set_title("Correlation between categorical features", fontsize=14)
sns.heatmap(corr, cmap=sns.diverging_palette(220, 10, as_cmap=True),square=True, ax=axs[1, 0])
```
## Missing Values
The dataset contains missing values, which are populated by the value -1. These would need to be handled separately (e.g. by filling them with appropriate values, adding missing value indicators etc.)
We will convert them to np.nan, which can later be handled by the featurization pipeline.
```
summary = X.describe()
summary
cols_missing_vals = list(summary.T[summary.T["min"] == -1].T.columns)
print("The following columns contain missing values: ", cols_missing_vals)
print("\nReplacing with np.nan")
X[cols_missing_vals] = X[cols_missing_vals].replace(-1, np.nan)
assert (X.describe().loc["min"] == -1).sum() == 0, "Missing values were not replaced with np.nan"
```
A large percent of missing values in some columns (e.g. ps_car_03_cat, ps_car_05_cat) indicates that this can be encoded as a separate feature (e.g. by adding a missing values indicator column)
```
percent_missing = X[cols_missing_vals].isnull().mean() *100
percent_missing_df = pd.DataFrame({'column_name': cols_missing_vals, 'percent_missing': percent_missing}).reset_index(drop=True)
display(percent_missing_df.sort_values(by='percent_missing', ascending=False))
```
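Such an indicator column only takes a couple of lines of pandas; a hedged sketch on a toy frame (column names borrowed from the dataset, values assumed):

```python
import numpy as np
import pandas as pd

# Toy stand-in (assumption): -1 marks a missing value, as in the original dataset
df = pd.DataFrame({"ps_car_03_cat": [1, -1, 2, -1], "ps_reg_03": [0.5, 0.7, -1.0, 0.9]})
df = df.replace(-1, np.nan)

# One indicator column per feature that actually contains missing values
for col in df.columns[df.isnull().any()]:
    df[col + "_missing"] = df[col].isnull().astype(int)
print(df)
```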
## Binary Features
### Distribution
Most of the binary features are skewed and appear to be dominated by a single class (0).
```
columns = X[bin_features].columns.values
# Calculating required amount of rows to display all feature plots
cols = 3
rows = 6
fig, axs = plt.subplots(ncols=cols, nrows=rows, figsize=(15,15))
# Adding some distance between plots
plt.subplots_adjust(hspace = 0.5, wspace = 0.5)
for i, col in enumerate(columns):
r = i // cols
c = i % cols
sns.countplot(x=col, data=X[bin_features], hue=y, ax=axs[r][c])
axs[r, c+1].set_visible(False) # Hide the last (empty) plot
```
## Categorical Features
### Cardinalities
If the threshold for high cardinality column is considered to be 10 or more, it seems we have four high cardinality columns - ps_car_11_cat, ps_car_06_cat, ps_car_04_cat, ps_car_01_cat, with ps_car_11_cat having more than 100 categories.
```
bars_pos = np.arange(len(cat_features))
width=0.3
fig, ax = plt.subplots(figsize=(6, 3))
bars = ax.bar(bars_pos-width/2, X[cat_features].nunique().values, width=width,
color="darkorange", edgecolor="black")
ax.set_title("Categorical Feature Cardinalities", fontsize=15, pad=15)
ax.set_xlabel("Categorical feature", fontsize=10, labelpad=10)
ax.set_ylabel("Cardinality", fontsize=10, labelpad=10)
ax.set_xticks(bars_pos)
ax.set_xticklabels(cat_features, rotation=90, fontsize=10)
ax.tick_params(axis="y", labelsize=12)
ax.grid(axis="y")
plt.margins(0.01, 0.05)
```
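For high-cardinality columns like ps_car_11_cat, one-hot encoding would explode the feature count; frequency encoding is one common alternative. A sketch on assumed toy data:

```python
import pandas as pd

# Toy stand-in (assumption) for a high-cardinality column such as ps_car_11_cat
s = pd.Series(["a", "a", "b", "c", "a", "b"])

# Replace each category with its relative frequency in the training data -
# a single numeric column regardless of cardinality
freq = s.value_counts(normalize=True)
encoded = s.map(freq)
print(encoded.round(3).tolist())  # [0.5, 0.5, 0.333, 0.167, 0.5, 0.333]
```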
### Distribution
```
columns = X[cat_features].columns.values
# Calculating required amount of rows to display all feature plots
cols = 3
rows = 5
fig, axs = plt.subplots(ncols=cols, nrows=rows, figsize=(15,15))
# Adding some distance between plots
plt.subplots_adjust(hspace = 0.5, wspace = 0.5)
for i, col in enumerate(columns):
r = i // cols
c = i % cols
sns.countplot(x=col, data=X[cat_features], hue=y, ax=axs[r][c])
axs[r, c+1].set_visible(False) # Hide the last (empty) plot
```
## Numerical Features
### Distribution
The distributions below show that several features (like ps_reg_15, ps_car_11, ps_ind_01, ps_reg_01) contain only a few unique values. Perhaps these could also be treated as categorical?
A few other columns, most predominantly ps_ind_14, are dominated by a single value (0).
```
only_nums = [c for c in num_features if '_calc' not in c]
columns = X[only_nums].columns.values
# Calculating required amount of rows to display all feature plots
cols = 3
rows = 4
fig, axs = plt.subplots(ncols=cols, nrows=rows, figsize=(15,15))
# Adding some distance between plots
plt.subplots_adjust(hspace = 0.5, wspace = 0.5)
for i, col in enumerate(columns):
r = i // cols
c = i % cols
sns.kdeplot(x=col, data=X[columns], ax=axs[r][c])
```
# Preprocessing
Based on the exploratory data analysis above, we now preprocess the data.
This is the place that will distill the above findings into an AutoML Featurization Config, and create a featurization job to get the preprocessed dataset.
## Split training data into train and valid splits
```
train_df, valid_df = train_test_split(train_df, test_size=0.2, random_state=42)
train_df.shape, valid_df.shape
```
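Because the target is highly imbalanced (as seen in the EDA above), passing `stratify` keeps the positive-class ratio identical across splits; note the split above does not stratify. A sketch on assumed toy data:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-in (assumption) with a 9:1 class ratio
df = pd.DataFrame({"x": range(100), "target": [0] * 90 + [1] * 10})

tr, va = train_test_split(df, test_size=0.2, random_state=42, stratify=df["target"])
print(tr["target"].mean(), va["target"].mean())  # both 0.1: class ratio preserved
```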
## Featurization
```
def featurize(df, cat_features, num_features):
    # Minimal sketch (assumption): impute numeric NaNs with the median
    # and one-hot encode the categorical columns
    df = df.copy()
    df[num_features] = df[num_features].fillna(df[num_features].median())
    return pd.get_dummies(df, columns=cat_features)
```
## Save the files as CSV
Save the CSV file locally, so that it can be uploaded to create a tabular dataset
```
if not os.path.isdir('data'):
os.mkdir('data')
# Save the train-test-valid data to a csv to be uploaded to the datastore
train_df.to_csv("data/train_data.csv", index=False)
valid_df.to_csv("data/valid_data.csv", index=False)
# test_df is assumed to come from a separate holdout split, not created above
test_df.to_csv("data/test_data.csv", index=False)
```
# Data Loading in Azure ML
Once Tabular Datasets are available, the above CSV files can be used to generate Datasets, that can be fed into AutoML
```
!pip install pytorch-adapt
```
### Helper function and data for demo
```
from pprint import pprint
import torch
from pytorch_adapt.utils import common_functions as c_f
from pytorch_adapt.utils.common_functions import get_lr
def print_optimizers_slim(adapter):
for k, v in adapter.optimizers.items():
print(f"{k}: {v.__class__.__name__} with lr={get_lr(v)}")
data = {
"src_imgs": torch.randn(32, 1000),
"target_imgs": torch.randn(32, 1000),
"src_labels": torch.randint(0, 10, size=(32,)),
"src_domain": torch.zeros(32),
"target_domain": torch.zeros(32),
}
device = torch.device("cuda")
```
### Adapters Initialization
Models are usually the only required argument when initializing adapters. Optimizers are created using the default that is defined in the adapter.
```
from pytorch_adapt.adapters import DANN
from pytorch_adapt.containers import Models
G = torch.nn.Linear(1000, 100)
C = torch.nn.Linear(100, 10)
D = torch.nn.Sequential(torch.nn.Linear(100, 1), torch.nn.Flatten(start_dim=0))
models = Models({"G": G, "C": C, "D": D})
adapter = DANN(models=models)
print_optimizers_slim(adapter)
```
### Modifying optimizers using the Optimizers container
We can use the Optimizers container if we don't want to use the defaults.
For example: SGD with lr 0.1 for all 3 models
```
from pytorch_adapt.containers import Optimizers
optimizers = Optimizers((torch.optim.SGD, {"lr": 0.1}))
adapter = DANN(models=models, optimizers=optimizers)
print_optimizers_slim(adapter)
```
SGD with lr 0.1 for the G and C models only. The default optimizer will be used for D.
```
optimizers = Optimizers((torch.optim.SGD, {"lr": 0.1}), keys=["G", "C"])
adapter = DANN(models=models, optimizers=optimizers)
print_optimizers_slim(adapter)
```
SGD with lr 0.1 for G, and SGD with lr 0.5 for C
```
optimizers = Optimizers(
{"G": (torch.optim.SGD, {"lr": 0.1}), "C": (torch.optim.SGD, {"lr": 0.5})}
)
adapter = DANN(models=models, optimizers=optimizers)
print_optimizers_slim(adapter)
```
You can also create the optimizers yourself and pass them into the Optimizers container
```
optimizers = Optimizers({"G": torch.optim.SGD(G.parameters(), lr=0.123)})
adapter = DANN(models=models, optimizers=optimizers)
print_optimizers_slim(adapter)
```
### Adding LR Schedulers
LR schedulers can be added with the LRSchedulers container.
```
from pytorch_adapt.containers import LRSchedulers
optimizers = Optimizers((torch.optim.Adam, {"lr": 1}))
lr_schedulers = LRSchedulers(
{
"G": (torch.optim.lr_scheduler.ExponentialLR, {"gamma": 0.99}),
"C": (torch.optim.lr_scheduler.StepLR, {"step_size": 2}),
},
scheduler_types={"per_step": ["G"], "per_epoch": ["C"]},
)
adapter = DANN(models=models, optimizers=optimizers, lr_schedulers=lr_schedulers)
print(adapter.lr_schedulers)
```
If you don't wrap the adapter with a framework, then you have to step the lr schedulers manually as shown below.
(Here we're just demonstrating how the lr scheduler container works, so we're stepping it without computing a loss or stepping the optimizers etc.)
```
for epoch in range(4):
for i in range(5):
adapter.lr_schedulers.step("per_step")
adapter.lr_schedulers.step("per_epoch")
print(f"End of epoch={epoch}")
print_optimizers_slim(adapter)
```
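Under the hood this is plain PyTorch scheduler stepping; the same per-step/per-epoch pattern can be reproduced without pytorch-adapt (the parameter and learning rates below are arbitrary assumptions):

```python
import torch

param = torch.nn.Parameter(torch.zeros(1))
opt_g = torch.optim.SGD([param], lr=1.0)
opt_c = torch.optim.SGD([param], lr=1.0)
sched_g = torch.optim.lr_scheduler.ExponentialLR(opt_g, gamma=0.99)  # stepped per step
sched_c = torch.optim.lr_scheduler.StepLR(opt_c, step_size=2)        # stepped per epoch

for epoch in range(4):
    for step in range(5):
        sched_g.step()   # 20 per-step updates in total
    sched_c.step()       # 4 per-epoch updates in total

print(opt_g.param_groups[0]["lr"])  # 0.99 ** 20
print(opt_c.param_groups[0]["lr"])  # 1.0 * 0.1 ** (4 // 2) with StepLR's default gamma
```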
### Training step
```
adapter.models.to(device)
data = c_f.batch_to_device(data, device)
loss = adapter.training_step(data)
pprint(loss)
```
### Customizing the wrapped hook
```
from pytorch_adapt.hooks import BNMHook, MCCHook
post_g = [BNMHook(), MCCHook()]
adapter = DANN(models=models, hook_kwargs={"post_g": post_g})
data = c_f.batch_to_device(data, device)
loss = adapter.training_step(data)
pprint(loss)
```
### Inference
```
inference_data = torch.randn(32, 1000).to(device)
features, logits = adapter.inference(inference_data)
print(features.shape)
print(logits.shape)
```
### Custom inference function
```
def custom_inference_fn(cls):
def return_fn(x):
print("using custom_inference_fn")
return cls.models["G"](x)
return return_fn
adapter_custom = DANN(models=models, inference=custom_inference_fn)
features = adapter_custom.inference(inference_data)
print(features.shape)
```
# Feature Engineering and Labeling
We'll use the price-volume data and generate features that we can feed into a model. We'll use this notebook for all the coding exercises of this lesson, so please open this notebook in a separate tab of your browser.
Please run the following code up to and including "Make Factors." Then continue on with the lesson.
```
import sys
!{sys.executable} -m pip install --quiet -r requirements.txt
import numpy as np
import pandas as pd
import time
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (14, 8)
```
#### Registering data
```
import os
import project_helper
from zipline.data import bundles
os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(), '..', '..', 'data', 'module_4_quizzes_eod')
ingest_func = bundles.csvdir.csvdir_equities(['daily'], project_helper.EOD_BUNDLE_NAME)
bundles.register(project_helper.EOD_BUNDLE_NAME, ingest_func)
print('Data Registered')
from zipline.pipeline import Pipeline
from zipline.pipeline.factors import AverageDollarVolume
from zipline.utils.calendars import get_calendar
universe = AverageDollarVolume(window_length=120).top(500)
trading_calendar = get_calendar('NYSE')
bundle_data = bundles.load(project_helper.EOD_BUNDLE_NAME)
engine = project_helper.build_pipeline_engine(bundle_data, trading_calendar)
universe_end_date = pd.Timestamp('2016-01-05', tz='UTC')
universe_tickers = engine\
.run_pipeline(
Pipeline(screen=universe),
universe_end_date,
universe_end_date)\
.index.get_level_values(1)\
.values.tolist()
from zipline.data.data_portal import DataPortal
data_portal = DataPortal(
bundle_data.asset_finder,
trading_calendar=trading_calendar,
first_trading_day=bundle_data.equity_daily_bar_reader.first_trading_day,
equity_minute_reader=None,
equity_daily_reader=bundle_data.equity_daily_bar_reader,
adjustment_reader=bundle_data.adjustment_reader)
def get_pricing(data_portal, trading_calendar, assets, start_date, end_date, field='close'):
end_dt = pd.Timestamp(end_date.strftime('%Y-%m-%d'), tz='UTC', offset='C')
start_dt = pd.Timestamp(start_date.strftime('%Y-%m-%d'), tz='UTC', offset='C')
end_loc = trading_calendar.closes.index.get_loc(end_dt)
start_loc = trading_calendar.closes.index.get_loc(start_dt)
return data_portal.get_history_window(
assets=assets,
end_dt=end_dt,
bar_count=end_loc - start_loc,
frequency='1d',
field=field,
data_frequency='daily')
```
# Make Factors
- We'll use the same factors we have been using in the lessons about alpha factor research. Factors can be features that we feed into the model.
```
from zipline.pipeline.factors import CustomFactor, DailyReturns, Returns, SimpleMovingAverage
from zipline.pipeline.data import USEquityPricing
factor_start_date = universe_end_date - pd.DateOffset(years=3, days=2)
sector = project_helper.Sector()
def momentum_1yr(window_length, universe, sector):
return Returns(window_length=window_length, mask=universe) \
.demean(groupby=sector) \
.rank() \
.zscore()
def mean_reversion_5day_sector_neutral(window_length, universe, sector):
return -Returns(window_length=window_length, mask=universe) \
.demean(groupby=sector) \
.rank() \
.zscore()
def mean_reversion_5day_sector_neutral_smoothed(window_length, universe, sector):
unsmoothed_factor = mean_reversion_5day_sector_neutral(window_length, universe, sector)
return SimpleMovingAverage(inputs=[unsmoothed_factor], window_length=window_length) \
.rank() \
.zscore()
class CTO(Returns):
"""
Computes the overnight return, per hypothesis from
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2554010
"""
inputs = [USEquityPricing.open, USEquityPricing.close]
def compute(self, today, assets, out, opens, closes):
"""
The opens and closes matrix is 2 rows x N assets, with the most recent at the bottom.
As such, opens[-1] is the most recent open, and closes[0] is the earlier close
"""
out[:] = (opens[-1] - closes[0]) / closes[0]
class TrailingOvernightReturns(Returns):
"""
Sum of trailing 1m O/N returns
"""
window_safe = True
def compute(self, today, asset_ids, out, cto):
out[:] = np.nansum(cto, axis=0)
def overnight_sentiment(cto_window_length, trail_overnight_returns_window_length, universe):
cto_out = CTO(mask=universe, window_length=cto_window_length)
return TrailingOvernightReturns(inputs=[cto_out], window_length=trail_overnight_returns_window_length) \
.rank() \
.zscore()
def overnight_sentiment_smoothed(cto_window_length, trail_overnight_returns_window_length, universe):
unsmoothed_factor = overnight_sentiment(cto_window_length, trail_overnight_returns_window_length, universe)
return SimpleMovingAverage(inputs=[unsmoothed_factor], window_length=trail_overnight_returns_window_length) \
.rank() \
.zscore()
universe = AverageDollarVolume(window_length=120).top(500)
sector = project_helper.Sector()
pipeline = Pipeline(screen=universe)
pipeline.add(
momentum_1yr(252, universe, sector),
'Momentum_1YR')
pipeline.add(
mean_reversion_5day_sector_neutral_smoothed(20, universe, sector),
'Mean_Reversion_Sector_Neutral_Smoothed')
pipeline.add(
overnight_sentiment_smoothed(2, 10, universe),
'Overnight_Sentiment_Smoothed')
all_factors = engine.run_pipeline(pipeline, factor_start_date, universe_end_date)
all_factors.head()
```
#### Stop here and continue with the lesson section titled "Features".
# Universal Quant Features
* stock volatility: zipline has a custom factor called AnnualizedVolatility. The [source code is here](https://github.com/quantopian/zipline/blob/master/zipline/pipeline/factors/basic.py) and also pasted below:
```
class AnnualizedVolatility(CustomFactor):
"""
Volatility. The degree of variation of a series over time as measured by
the standard deviation of daily returns.
https://en.wikipedia.org/wiki/Volatility_(finance)
**Default Inputs:** :data:`zipline.pipeline.factors.Returns(window_length=2)` # noqa
Parameters
----------
annualization_factor : float, optional
The number of time units per year. Defaults is 252, the number of NYSE
trading days in a normal year.
"""
inputs = [Returns(window_length=2)]
params = {'annualization_factor': 252.0}
window_length = 252
def compute(self, today, assets, out, returns, annualization_factor):
out[:] = nanstd(returns, axis=0) * (annualization_factor ** .5)
```
```
from zipline.pipeline.factors import AnnualizedVolatility
AnnualizedVolatility()
```
#### Quiz
We can see that the returns `window_length` is 2 because we're dealing with daily returns, which are calculated as the percent change from one day to the following day (2 days). The `AnnualizedVolatility` `window_length` is 252 by default because it's the one-year volatility. Try to adjust the call to the `AnnualizedVolatility` constructor so that it represents one-month volatility (still annualized, but calculated over a time window of 20 trading days).
#### Answer
```
# TODO
AnnualizedVolatility(window_length=20)
```
#### Quiz: Create one-month and six-month annualized volatility.
Create `AnnualizedVolatility` objects for 20-day and 120-day (one-month and six-month) time windows. Remember to set the `mask` parameter to the `universe` object created earlier (this filters the stocks to match the list in `universe`). Convert these to ranks, and then convert the ranks to z-scores.
```
# TODO
volatility_20d = AnnualizedVolatility(window_length=20, mask=universe).rank().zscore()
volatility_120d = AnnualizedVolatility(window_length=120, mask=universe).rank().zscore()
```
#### Add to the pipeline
```
pipeline.add(volatility_20d, 'volatility_20d')
pipeline.add(volatility_120d, 'volatility_120d')
```
#### Quiz: Average Dollar Volume feature
We've been using [AverageDollarVolume](http://www.zipline.io/appendix.html#zipline.pipeline.factors.AverageDollarVolume) to choose the stock universe based on stocks that have the highest dollar volume. We can also use it as a feature that is input into a predictive model.
Use 20-day and 120-day `window_length` values for average dollar volume. Then rank each and convert it to a z-score.
```
"""already imported earlier, but shown here for reference"""
#from zipline.pipeline.factors import AverageDollarVolume
# TODO: 20-day and 120 day average dollar volume
adv_20d = AverageDollarVolume(window_length=20, mask=universe).rank().zscore()
adv_120d = AverageDollarVolume(window_length=120, mask=universe).rank().zscore()
```
#### Add average dollar volume features to pipeline
```
pipeline.add(adv_20d, 'adv_20d')
pipeline.add(adv_120d, 'adv_120d')
```
### Market Regime Features
We are going to try to capture market-wide regimes; market-wide means we'll look at the aggregate movement of the universe of stocks.
High and low dispersion: dispersion is the standard deviation of the cross section of all stock returns at each period of time (on each day). We'll inherit from [CustomFactor](http://www.zipline.io/appendix.html?highlight=customfactor#zipline.pipeline.CustomFactor). We'll feed in [DailyReturns](http://www.zipline.io/appendix.html?highlight=dailyreturns#zipline.pipeline.factors.DailyReturns) as the `inputs`.
#### Quiz
If the `inputs` to our market dispersion factor are the daily returns, and we plan to calculate the market dispersion on each day, what should be the `window_length` of the market dispersion class?
#### Answer
window_length = 1
#### Quiz: market dispersion feature
Create a class that inherits from `CustomFactor`. Override the `compute` function to calculate the population standard deviation of all the stocks over a specified window of time.
**Mean returns**
$\mu = \frac{1}{T}\sum_{t=0}^{T}\frac{1}{N}\sum_{i=1}^{N}r_{i,t}$
**Market Dispersion**
$\sqrt{\frac{1}{T} \sum_{t=0}^{T} \frac{1}{N}\sum_{i=1}^{N}(r_{i,t} - \mu)^2}$
Use [numpy.nanmean](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.nanmean.html) to calculate the average market return $\mu$ and to calculate the average of the squared differences.
```
import numpy as np
class MarketDispersion(CustomFactor):
inputs = [DailyReturns()]
window_length = 1
window_safe = True
def compute(self, today, assets, out, returns):
# TODO: calculate average returns
mean_returns = np.nanmean(returns)
#TODO: calculate standard deviation of returns
out[:] = np.sqrt(np.nanmean((returns - mean_returns)**2))
```
#### Quiz
Create the MarketDispersion object. Apply two separate smoothing operations using [SimpleMovingAverage](https://www.zipline.io/appendix.html?highlight=simplemovingaverage#zipline.pipeline.factors.SimpleMovingAverage). One with a one-month window, and another with a 6-month window. Add both to the pipeline.
```
# TODO: create MarketDispersion object
dispersion = MarketDispersion(mask=universe)
# TODO: apply one-month simple moving average
dispersion_20d = SimpleMovingAverage(inputs=[dispersion], window_length=20)
# TODO: apply 6-month simple moving average
dispersion_120d = SimpleMovingAverage(inputs=[dispersion], window_length=120)
# Add to pipeline
pipeline.add(dispersion_20d, 'dispersion_20d')
pipeline.add(dispersion_120d, 'dispersion_120d')
```
#### Market volatility feature
* High and low volatility
We'll also build a class for market volatility, which inherits from [CustomFactor](http://www.zipline.io/appendix.html?highlight=customfactor#zipline.pipeline.CustomFactor). This will measure the standard deviation of the returns of the "market". In this case, we're approximating the "market" as the equal weighted average return of all the stocks in the stock universe.
##### Market return
$r_{m,t} = \frac{1}{N}\sum_{i=1}^{N}r_{i,t}$ for each day $t$ in `window_length`.
##### Average market return
Also calculate the average market return over the `window_length` $T$ of days:
$\mu_{m} = \frac{1}{T}\sum_{t=1}^{T} r_{m,t}$
##### Standard deviation of market return
Then calculate the standard deviation of the market return and annualize it:
$\sigma_{m} = \sqrt{252 \times \frac{1}{T} \sum_{t=1}^{T}(r_{m,t} - \mu_{m})^2 } $
##### Hints
* Please use [numpy.nanmean](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.nanmean.html) so that it ignores null values.
* When using `numpy.nanmean`:
  - `axis=0` will calculate one average for every column (think of it like creating a new row in a spreadsheet)
  - `axis=1` will calculate one average for every row (think of it like creating a new column in a spreadsheet)
* The returns data in `compute` has one day in each row, and one stock in each column.
* Notice that we defined a dictionary `params` that has a key `annualization_factor`. This `annualization_factor` can be used as a regular variable, and you'll be using it in the `compute` function. This is also done in the definition of AnnualizedVolatility (as seen earlier in the notebook).
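To make the axis convention concrete, here is a tiny standalone numpy check (the returns matrix below is made up for illustration):

```python
import numpy as np

# Hypothetical returns: one day per row, one stock per column,
# matching the layout `compute` receives
returns = np.array([[0.01,  0.02, np.nan],
                    [0.03, -0.01, 0.00]])

per_stock = np.nanmean(returns, axis=0)  # shape (3,): one average per column (stock)
per_day = np.nanmean(returns, axis=1)    # shape (2,): one average per row (day)
```

For `MarketVolatility`, `axis=1` is the one we want, since the equal-weighted market return is an average across stocks on each day.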
```
class MarketVolatility(CustomFactor):
inputs = [DailyReturns()]
window_length = 1 # We'll want to set this in the constructor when creating the object.
window_safe = True
params = {'annualization_factor': 252.0}
def compute(self, today, assets, out, returns, annualization_factor):
# TODO
"""
For each row (each row represents one day of returns),
calculate the average of the cross-section of stock returns
So that market_returns has one value for each day in the window_length
So choose the appropriate axis (please see hints above)
"""
mkt_returns = np.nanmean(returns, axis=1)
# TODO
# Calculate the mean of market returns
mkt_returns_mu = np.nanmean(mkt_returns)
# TODO
# Calculate the standard deviation of the market returns, then annualize them.
out[:] = np.sqrt(annualization_factor * np.nanmean((mkt_returns-mkt_returns_mu)**2))
# TODO: create market volatility features using one month and six-month windows
market_vol_20d = MarketVolatility(window_length=20, mask=universe)
market_vol_120d = MarketVolatility(window_length=120, mask=universe)
# add market volatility features to pipeline
pipeline.add(market_vol_20d, 'market_vol_20d')
pipeline.add(market_vol_120d, 'market_vol_120d')
```
#### Stop here and continue with the lesson section "Sector and Industry"
# Sector and Industry
#### Add sector code
Note that after we run the pipeline and get the data in a dataframe, we can work on enhancing the sector code feature with one-hot encoding.
```
pipeline.add(sector, 'sector_code')
```
#### Run pipeline to calculate features
```
all_factors = engine.run_pipeline(pipeline, factor_start_date, universe_end_date)
all_factors.head()
```
#### One-hot encode sector
Let's get all the unique sector codes. Then we'll use the `==` comparison operator to check when the sector code equals a particular value. This returns a series of True/False values. For some functions that we'll use in a later lesson, it's easier to work with numbers instead of booleans, so we can convert the booleans to type int: False becomes 0, and True becomes 1.
```
sector_code_l = set(all_factors['sector_code'])
sector_0 = all_factors['sector_code'] == 0
sector_0[0:5]
sector_0_numeric = sector_0.astype(int)
sector_0_numeric[0:5]
```
#### Quiz: One-hot encode sector
Choose column names that look like "sector_code_0", "sector_code_1" etc. Store the values as 1 when the row matches the sector code of the column, 0 otherwise.
```
# TODO: one-hot encode sector and store into dataframe
for s in sector_code_l:
all_factors[f'sector_code_{s}'] = (all_factors['sector_code'] == s).astype(int)
all_factors.head()
```
#### Stop here and continue with the lesson section "Date Parts".
# Date Parts
* We will make features that might capture trader/investor behavior due to calendar anomalies.
* We can get the dates from the index of the dataframe that is returned from running the pipeline.
#### Accessing index of dates
* Note that we can access the date index using `Dataframe.index.get_level_values(0)`, since the date is stored as index level 0, and the asset name is stored in index level 1. This is of type [DateTimeIndex](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DatetimeIndex.html).
```
all_factors.index.get_level_values(0)
```
#### [DateTimeIndex attributes](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DatetimeIndex.html)
* The `month` attribute is an integer array with 1 for January, 2 for February, ... 12 for December.
* We can use a comparison operator such as `==` to return True or False.
* It's usually easier to have all data of a similar type (numeric), so we recommend converting booleans to integers.
The numpy ndarray has a function `.astype()` that can cast the data to a specified type.
For instance, `astype(int)` converts False to 0 and True to 1.
```
# Example
print(all_factors.index.get_level_values(0).month)
print(all_factors.index.get_level_values(0).month == 1)
print( (all_factors.index.get_level_values(0).month == 1).astype(int) )
```
## Quiz
* Create a numpy array that has 1 when the month is January, and 0 otherwise. Store it as a column in the all_factors dataframe.
* Add another similar column to indicate when the month is December
```
# TODO: create a feature that indicate whether it's January
all_factors['is_January'] = (all_factors.index.get_level_values(0).month == 1).astype(int)
# TODO: create a feature to indicate whether it's December
all_factors['is_December'] = (all_factors.index.get_level_values(0).month == 12).astype(int)
```
## Weekday, quarter
* add columns to the all_factors dataframe that specify the weekday, quarter and year.
* As you can see in the [documentation for DateTimeIndex](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DatetimeIndex.html), `weekday`, `quarter`, and `year` are attributes that you can use here.
```
# we can see that 0 is for Monday, 4 is for Friday
set(all_factors.index.get_level_values(0).weekday)
# Q1, Q2, Q3 and Q4 are represented by integers too
set(all_factors.index.get_level_values(0).quarter)
```
#### Quiz
Add features for weekday, quarter and year.
```
# TODO
all_factors['weekday'] = all_factors.index.get_level_values(0).weekday
all_factors['quarter'] = all_factors.index.get_level_values(0).quarter
all_factors['year'] = all_factors.index.get_level_values(0).year
```
## Start-of and end-of period features
* The start and end of the week, month, and quarter may have structural differences in trading activity.
* [Pandas.date_range](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html) takes the start_date, end_date, and frequency.
* The [frequency](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases) for end of month is `BM`.
```
# Example
tmp = pd.date_range(start=factor_start_date, end=universe_end_date, freq='BM')
tmp
```
#### Example
Create a DatetimeIndex that stores the dates which are the last business day of each month.
Use the `.isin` function, passing in these last days of the month, to create a series of booleans.
Convert the booleans to integers.
```
last_day_of_month = pd.date_range(start=factor_start_date, end=universe_end_date, freq='BM')
last_day_of_month
tmp_month_end = all_factors.index.get_level_values(0).isin(last_day_of_month)
tmp_month_end
tmp_month_end_int = tmp_month_end.astype(int)
tmp_month_end_int
all_factors['month_end'] = tmp_month_end_int
```
#### Quiz: Start of Month
Create a feature that indicates the first business day of each month.
**Hint:** The frequency for first business day of the month uses the code `BMS`.
```
# TODO: month_start feature
first_day_of_month = pd.date_range(start=factor_start_date, end=universe_end_date, freq='BMS')
all_factors['month_start'] = (all_factors.index.get_level_values(0).isin(first_day_of_month)).astype(int)
```
#### Quiz: Quarter end and quarter start
Create features for the last business day of each quarter, and first business day of each quarter.
**Hint**: use `freq=BQ` for business day end of quarter, and `freq=BQS` for business day start of quarter.
```
# TODO: qtr_end feature
last_day_qtr = pd.date_range(start=factor_start_date, end=universe_end_date, freq='BQ')
all_factors['qtr_end'] = (all_factors.index.get_level_values(0).isin(last_day_qtr)).astype(int)
# TODO: qtr_start feature
first_day_qtr = pd.date_range(start=factor_start_date, end=universe_end_date, freq='BQS')
all_factors['qtr_start'] = (all_factors.index.get_level_values(0).isin(first_day_qtr)).astype(int)
```
## View all features
```
list(all_factors.columns)
```
Note that we can skip the sector_code feature, since we one-hot encoded it into separate features.
```
features = ['Mean_Reversion_Sector_Neutral_Smoothed',
'Momentum_1YR',
'Overnight_Sentiment_Smoothed',
'adv_120d',
'adv_20d',
'dispersion_120d',
'dispersion_20d',
'market_vol_120d',
'market_vol_20d',
#'sector_code', # removed sector_code
'volatility_120d',
'volatility_20d',
'sector_code_0',
'sector_code_1',
'sector_code_2',
'sector_code_3',
'sector_code_4',
'sector_code_5',
'sector_code_6',
'sector_code_7',
'sector_code_8',
'sector_code_9',
'sector_code_10',
'sector_code_-1',
'is_January',
'is_December',
'weekday',
'quarter',
'year',
'month_start',
'qtr_end',
'qtr_start']
```
#### Stop here and continue to the lesson section "Targets"
# Targets (Labels)
- We are going to try to predict the go-forward 1-week return.
- Very important! Quantize the target. Why do we do this?
  - It makes the return market neutral
  - It normalizes changing volatility and dispersion over time
  - It makes the target robust to changes in market regimes
- The factor we create is the trailing 5-day return.
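The effect of quantizing can be sketched outside of zipline with pandas, whose `qcut` plays the role of zipline's `.quantiles()` (the returns here are synthetic):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Two synthetic daily cross-sections of 1000 stock returns:
# the second day has twice the dispersion of the first
low_vol_day = pd.Series(rng.normal(0, 0.01, 1000))
high_vol_day = pd.Series(rng.normal(0, 0.02, 1000))

# Quantizing each cross-section yields the same label distribution
# on both days, regardless of the volatility regime
q_low = pd.qcut(low_vol_day, q=5, labels=False)
q_high = pd.qcut(high_vol_day, q=5, labels=False)
# each of the 5 buckets holds exactly 200 stocks on both days
```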
```
# we'll create a separate pipeline to handle the target
pipeline_target = Pipeline(screen=universe)
```
#### Example
We'll convert weekly returns into 2-quantiles.
```
return_5d_2q = Returns(window_length=5, mask=universe).quantiles(2)
return_5d_2q
pipeline_target.add(return_5d_2q, 'return_5d_2q')
```
#### Quiz
Create another weekly return target that's converted to 5-quantiles.
```
# TODO: create a target using 5-quantiles
return_5d_5q = Returns(window_length=5, mask=universe).quantiles(5)
# TODO: add the feature to the pipeline
pipeline_target.add(return_5d_5q, 'return_5d_5q')
# Let's run the pipeline to get the dataframe
targets_df = engine.run_pipeline(pipeline_target, factor_start_date, universe_end_date)
targets_df.head()
targets_df.columns
```
## Solution
[solution notebook](feature_engineering_solution.ipynb)
<a href="https://colab.research.google.com/github/cateto/python4NLP/blob/main/kobert/news_classifications_pytorch_kobert_ipynb_210802_9590.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
!pip install mxnet
!pip install gluonnlp pandas tqdm
!pip install sentencepiece
# Change: installing the latest version raises the error "Input: must be Tensor, not str"
!pip install transformers==3
!pip install torch
!pip install git+https://git@github.com/SKTBrain/KoBERT.git@master
import torch
from torch import nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
import gluonnlp as nlp
import numpy as np
from tqdm import tqdm, tqdm_notebook
from kobert.utils import get_tokenizer
from kobert.pytorch_kobert import get_pytorch_kobert_model
from transformers import AdamW
from transformers.optimization import get_cosine_schedule_with_warmup
## When using a GPU
device = torch.device("cuda:0")
bertmodel, vocab = get_pytorch_kobert_model()
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import pandas as pd
dataset = pd.read_csv('/content/drive/MyDrive/rsn_nlp_project/sample_0728.csv')
# Convert the 'category' label column to integers
encoder = LabelEncoder()
encoder.fit(dataset['category'])
dataset['category'] = encoder.transform(dataset['category'])
# possible_labels = dataset['category'].unique()
# label_dict = {}
# for index, possible_label in enumerate(possible_labels):
# label_dict[possible_label] = index
# dataset['category'] = dataset['category'].replace(label_dict)
# Next, preprocess (clean) the 'contents' column
dataset['contents'] = dataset['contents'].str.replace("[^ㄱ-ㅎㅏ-ㅣ가-힣 ]"," ")
dataset['contents'] = dataset['contents'].str.replace("[ ]{2,}"," ")
dataset.head()
mapping = dict(zip(range(len(encoder.classes_)), encoder.classes_))
mapping
# split into train / test data
# reduce the dataset size
dataset = dataset[:30000]
train, test = train_test_split(dataset, test_size=0.3, random_state=42)
print("Number of training samples:", len(train))
print("Number of test samples:", len(test))
train.to_csv('/content/drive/MyDrive/rsn_nlp_project/sample_train.txt', sep = '\t' , index = False)
test.to_csv('/content/drive/MyDrive/rsn_nlp_project/sample_test.txt', sep = '\t' , index = False)
```
Retry
```
dataset_train = nlp.data.TSVDataset("/content/drive/MyDrive/rsn_nlp_project/sample_train.txt", field_indices=[1,2], num_discard_samples=1)
dataset_test = nlp.data.TSVDataset("/content/drive/MyDrive/rsn_nlp_project/sample_test.txt", field_indices=[1,2], num_discard_samples=1)
dataset_train[:5]
tokenizer = get_tokenizer()
tok = nlp.data.BERTSPTokenizer(tokenizer, vocab, lower=False)
class BERTDataset(Dataset):
def __init__(self, dataset, sent_idx, label_idx, bert_tokenizer, max_len,
pad, pair):
transform = nlp.data.BERTSentenceTransform(
bert_tokenizer, max_seq_length=max_len, pad=pad, pair=pair)
self.sentences = [transform([i[sent_idx]]) for i in dataset]
self.labels = [np.int32(i[label_idx]) for i in dataset]
def __getitem__(self, i):
return (self.sentences[i] + (self.labels[i], ))
def __len__(self):
return (len(self.labels))
## Setting parameters
max_len = 128 # BERT supports max_length up to 512, but set to 128 given limited compute resources
batch_size = 16 # kept small to avoid GPU CUDA memory issues
warmup_ratio = 0.1
num_epochs = 4
max_grad_norm = 1
log_interval = 200
learning_rate = 1e-5
data_train = BERTDataset(dataset_train, 0, 1, tok, max_len, True, False)
data_test = BERTDataset(dataset_test, 0, 1, tok, max_len, True, False)
data_train[0]
train_dataloader = torch.utils.data.DataLoader(data_train, batch_size=batch_size, num_workers=5)
test_dataloader = torch.utils.data.DataLoader(data_test, batch_size=batch_size, num_workers=5)
class BERTClassifier(nn.Module):
def __init__(self,
bert,
hidden_size = 768, # size of BERT's hidden layer
# num_classes=2,
num_classes=9, # number of classification classes
dr_rate=None,
params=None):
super(BERTClassifier, self).__init__()
self.bert = bert
self.dr_rate = dr_rate
self.classifier = nn.Linear(hidden_size , num_classes) #nn.Linear
if dr_rate:
self.dropout = nn.Dropout(p=dr_rate)
def gen_attention_mask(self, token_ids, valid_length):
attention_mask = torch.zeros_like(token_ids)
for i, v in enumerate(valid_length):
attention_mask[i][:v] = 1
return attention_mask.float()
def forward(self, token_ids, valid_length, segment_ids):
attention_mask = self.gen_attention_mask(token_ids, valid_length)
_, pooler = self.bert(input_ids = token_ids, token_type_ids = segment_ids.long(), attention_mask = attention_mask.float().to(token_ids.device))
if self.dr_rate:
out = self.dropout(pooler)
return self.classifier(out)
model = BERTClassifier(bertmodel, dr_rate=0.5).to(device)
# Prepare optimizer and schedule (linear warmup and decay)
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=learning_rate)
loss_fn = nn.CrossEntropyLoss() # softmax-style loss function; can also be used for binary classification
t_total = len(train_dataloader) * num_epochs
warmup_step = int(t_total * warmup_ratio)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=warmup_step, num_training_steps=t_total)
# function to evaluate how well the model predicts the target values
def calc_accuracy(X,Y):
max_vals, max_indices = torch.max(X, 1)
train_acc = (max_indices == Y).sum().data.cpu().numpy()/max_indices.size()[0]
return train_acc
for e in range(num_epochs):
train_acc = 0.0
test_acc = 0.0
model.train()
for batch_id, (token_ids, valid_length, segment_ids, label) in enumerate(tqdm_notebook(train_dataloader)):
optimizer.zero_grad()
token_ids = token_ids.long().to(device)
segment_ids = segment_ids.long().to(device)
valid_length= valid_length
label = label.long().to(device)
out = model(token_ids, valid_length, segment_ids)
loss = loss_fn(out, label)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
optimizer.step()
scheduler.step() # Update learning rate schedule
train_acc += calc_accuracy(out, label)
if batch_id % log_interval == 0:
print("epoch {} batch id {} loss {} train acc {}".format(e+1, batch_id+1, loss.data.cpu().numpy(), train_acc / (batch_id+1)))
print("epoch {} train acc {}".format(e+1, train_acc / (batch_id+1)))
model.eval()
for batch_id, (token_ids, valid_length, segment_ids, label) in enumerate(tqdm_notebook(test_dataloader)):
token_ids = token_ids.long().to(device)
segment_ids = segment_ids.long().to(device)
valid_length= valid_length
label = label.long().to(device)
out = model(token_ids, valid_length, segment_ids)
test_acc += calc_accuracy(out, label)
print("epoch {} test acc {}".format(e+1, test_acc / (batch_id+1)))
# predict on a test sentence
test_sentence = '''올여름 유일한 재난 버스터 '싱크홀'이 지금껏 접하지 못한 짜릿한 볼거리로 관객을 초대한다.
2일 오후 서울 용산구 CGV아이파크몰에서 영화 '싱크홀' 언론 시사회가 열렸다. 시사 직후 온라인으로 생중계된 기자 간담회에는 배우 차승원, 김성균, 이광수, 김혜준, 권소현, 남다름과 김지훈 감독이 참석했다.
'싱크홀'은 11년 만에 마련한 내 집이 지하 500m 초대형 싱크홀로 추락하며 벌어지는 영화다. '타워'로 한국형 재난 영화의 새 지평을 연 김지훈 감독이 연출을 맡았으며 '명량', '더 테러 라이브'의 서경훈 시각특수효과(VFX) 감독이 힘을 합쳐 완성도를 높였다.
제작진은 리얼한 재난 상황을 연출하기 위해 지상 세트의 사전 제작 단계부터 심혈을 기울인 것은 물론, 싱크홀 발생 이후의 모습을 보여주고자 지하 500m 지반의 모습을 담은 대규모 암벽 세트를 만들었다. 그리고 건물이 무너지며 발생하는 흔들림을 그대로 전달하고자 짐벌 세트 위에 빌라 세트를 짓는 대규모 프로덕션을 진행했다.'''
test_label = '7'
temp_set = []
temp_set.append(test_sentence)
temp_set.append(test_label)
test_set = []
test_set.append(temp_set)
test_set
test_set = BERTDataset(test_set, 0, 1, tok, max_len, True, False)
test_input = torch.utils.data.DataLoader(test_set, batch_size=1, num_workers=9)
for batch_id, (token_ids, valid_length, segment_ids, label) in enumerate(tqdm_notebook(test_input)):
token_ids = token_ids.long().to(device)
segment_ids = segment_ids.long().to(device)
valid_length= valid_length
out = model(token_ids, valid_length, segment_ids)
print(out)
model.eval() # switch to evaluation mode
for batch_id, (token_ids, valid_length, segment_ids, label) in enumerate(tqdm_notebook(test_dataloader)):
token_ids = token_ids.long().to(device)
segment_ids = segment_ids.long().to(device)
valid_length= valid_length
label = label.long().to(device)
out = model(token_ids, valid_length, segment_ids)
test_acc += calc_accuracy(out, label)
print("epoch {} test acc {}".format(e+1, test_acc / (batch_id+1)))
torch.save(model, '/content/drive/MyDrive/rsn_nlp_project/20210802model_95.pt')
```
## Update
**Bug found**: I intended to use the whole clip of `site_3`, but I found that the current implementation only uses the last 5 seconds. This version (v3) fixes that.
## About
I spent several days trying to make a successful submission and finally found a way after three days of struggle.
I want Kaggle competitors to feel it is easy to participate in this competition, so I decided to share my notebook.
I would like to thank [@radek1](https://www.kaggle.com/radek1) for creating [a good starter notebook](https://www.kaggle.com/c/birdsong-recognition/discussion/160222), [@shonenkov](https://www.kaggle.com/shonenkov) for [a nice checking dataset and notebook](https://www.kaggle.com/shonenkov/sample-submission-using-custom-check) and several discussions to make this competition better, [@cwthompson](https://www.kaggle.com/cwthompson) for [showing the way to submit](https://www.kaggle.com/cwthompson/birdsong-making-a-prediction) using `test_audio`.
I also would like to thank [@stefankahl](https://www.kaggle.com/stefankahl), [@tomdenton](https://www.kaggle.com/tomdenton), [@sohier](https://www.kaggle.com/sohier) for hosting a really interesting competition.
In this notebook I tried to make a submission using a ResNet-based model trained on log melspectrograms. I will create a notebook to show the way I trained the model, but here I briefly describe my approach.
* Randomly crop 5 seconds for each train audio clip each epoch.
* No augmentation.
* Use pretrained weight of `torchvision.models.resnet50`.
* Used `BCELoss`.
* Trained for 100 epochs and used the weights with the best F1 (at epoch 92).
* `Adam` optimizer (`lr=0.001`) with `CosineAnnealingLR` (`T_max=10`).
* Used `StratifiedKFold(n_splits=5)` to split the dataset and used only the first fold
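The random 5-second crop in the first bullet can be sketched as follows (a minimal numpy version; the sample-rate constant and the zero-padding of short clips are my assumptions, not taken from the actual training code):

```python
import numpy as np

SR = 32000        # competition audio sample rate
CLIP_SECONDS = 5

def random_crop(y: np.ndarray, sr: int = SR, seconds: int = CLIP_SECONDS) -> np.ndarray:
    """Return a random `seconds`-long window from waveform `y`."""
    target = sr * seconds
    if len(y) <= target:
        # short clips: pad the tail with zeros (one possible choice)
        return np.pad(y, (0, target - len(y)))
    start = np.random.randint(0, len(y) - target + 1)
    return y[start:start + target]

crop = random_crop(np.zeros(SR * 12))  # 12-second clip -> random 5-second window
```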
Here are the parameter details.
* `batch_size`: 100 (on a V100, it took 2-3 hours to run 100 epochs)
* melspectrogram parameters
- `n_mels`: 128
- `fmin`: 20
- `fmax`: 16000
* image size: 224 x 541 (I don't remember the exact width)
### Future direction
There is a lot left to do to make improvements. It was a big challenge for me to make a successful submission with very little feedback signal (like `Submission CSV Not Found` or `Notebook Exceeded Allowed Compute`), but this is just the beginning of the real challenge.
As described in https://www.kaggle.com/c/birdsong-recognition/discussion/160222#895234 , data augmentation is key. I worked on [Freesound Audio Tagging 2019](https://www.kaggle.com/c/freesound-audio-tagging-2019) last year, which was also an audio competition (comparatively rare on Kaggle), and at that time data augmentations like pitch shift or reverb effects gave us a boost. This competition is not about environmental sound but about bird song, so we need to check by experiment what augmentation works best on this data. Maybe we can get a boost with different augmentations for different audio classes.
Mixup / BC-learning, or mixing different audio classes, may give us a boost, I believe, since the test set has multiple sounds in each clip whereas the train set basically has one class per clip (of course we can use the background-sound information to treat the train set as a multilabel problem).
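A waveform-level mixup sketch (numpy only; the `Beta(0.2, 0.2)` mixing weight is a common convention, not something from this notebook):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Mix two waveforms and their one-hot label vectors with a Beta-drawn weight."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

num_classes = 264
x1, x2 = np.random.randn(32000), np.random.randn(32000)
y1, y2 = np.eye(num_classes)[3], np.eye(num_classes)[17]  # hypothetical bird classes
x_mix, y_mix = mixup(x1, y1, x2, y2)
# y_mix spreads probability mass over both classes, which resembles
# the multi-bird clips expected in the test set
```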
Training procedure also has an important role, whether we use a procedure for multilabel problem (by using background sound) or for multiclass problem. The challenge of this competition can also be treated as *Domain Adaptation* problem, so we can use techniques for that.
Model selection is also important; a deeper model may give us a boost, but from my experience, a *too deep* model is sometimes defeated by a shallower model in audio classification.
## Libraries
```
import cv2
import audioread
import logging
import os
import random
import time
import warnings
import librosa
import numpy as np
import pandas as pd
import soundfile as sf
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
from contextlib import contextmanager
from pathlib import Path
from typing import Optional
from fastprogress import progress_bar
from sklearn.metrics import f1_score
from torchvision import models
```
## Utilities
```
def set_seed(seed: int = 42):
random.seed(seed)
np.random.seed(seed)
os.environ["PYTHONHASHSEED"] = str(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed) # type: ignore
torch.backends.cudnn.deterministic = True # type: ignore
torch.backends.cudnn.benchmark = True # type: ignore
def get_logger(out_file=None):
logger = logging.getLogger()
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
logger.handlers = []
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(formatter)
handler.setLevel(logging.INFO)
logger.addHandler(handler)
if out_file is not None:
fh = logging.FileHandler(out_file)
fh.setFormatter(formatter)
fh.setLevel(logging.INFO)
logger.addHandler(fh)
logger.info("logger set up")
return logger
@contextmanager
def timer(name: str, logger: Optional[logging.Logger] = None):
t0 = time.time()
msg = f"[{name}] start"
if logger is None:
print(msg)
else:
logger.info(msg)
yield
msg = f"[{name}] done in {time.time() - t0:.2f} s"
if logger is None:
print(msg)
else:
logger.info(msg)
logger = get_logger("main.log")
set_seed(1213)
```
## Data Loading
```
TARGET_SR = 32000
TEST = Path("../input/birdsong-recognition/test_audio").exists()
if TEST:
DATA_DIR = Path("../input/birdsong-recognition/")
else:
# dataset created by @shonenkov, thanks!
DATA_DIR = Path("../input/birdcall-check/")
test = pd.read_csv(DATA_DIR / "test.csv")
test_audio = DATA_DIR / "test_audio"
test.head()
sub = pd.read_csv("../input/birdsong-recognition/sample_submission.csv")
sub.to_csv("submission.csv", index=False) # this will be overwritten if everything goes well
```
## Define Model
### Model
```
class ResNet(nn.Module):
def __init__(self, base_model_name: str, pretrained=False,
num_classes=264):
super().__init__()
base_model = models.__getattribute__(base_model_name)(
pretrained=pretrained)
layers = list(base_model.children())[:-2]
layers.append(nn.AdaptiveMaxPool2d(1))
self.encoder = nn.Sequential(*layers)
in_features = base_model.fc.in_features
self.classifier = nn.Sequential(
nn.Linear(in_features, 1024), nn.ReLU(), nn.Dropout(p=0.2),
nn.Linear(1024, 1024), nn.ReLU(), nn.Dropout(p=0.2),
nn.Linear(1024, num_classes))
def forward(self, x):
batch_size = x.size(0)
x = self.encoder(x).view(batch_size, -1)
x = self.classifier(x)
multiclass_proba = F.softmax(x, dim=1)
multilabel_proba = torch.sigmoid(x) # F.sigmoid is deprecated in recent PyTorch
return {
"logits": x,
"multiclass_proba": multiclass_proba,
"multilabel_proba": multilabel_proba
}
```
## Parameters
```
model_config = {
"base_model_name": "resnet50",
"pretrained": False,
"num_classes": 264
}
melspectrogram_parameters = {
"n_mels": 128,
"fmin": 20,
"fmax": 16000
}
weights_path = "../input/birdcall-resnet50-init-weights/best.pth"
BIRD_CODE = {
'aldfly': 0, 'ameavo': 1, 'amebit': 2, 'amecro': 3, 'amegfi': 4,
'amekes': 5, 'amepip': 6, 'amered': 7, 'amerob': 8, 'amewig': 9,
'amewoo': 10, 'amtspa': 11, 'annhum': 12, 'astfly': 13, 'baisan': 14,
'baleag': 15, 'balori': 16, 'banswa': 17, 'barswa': 18, 'bawwar': 19,
'belkin1': 20, 'belspa2': 21, 'bewwre': 22, 'bkbcuc': 23, 'bkbmag1': 24,
'bkbwar': 25, 'bkcchi': 26, 'bkchum': 27, 'bkhgro': 28, 'bkpwar': 29,
'bktspa': 30, 'blkpho': 31, 'blugrb1': 32, 'blujay': 33, 'bnhcow': 34,
'boboli': 35, 'bongul': 36, 'brdowl': 37, 'brebla': 38, 'brespa': 39,
'brncre': 40, 'brnthr': 41, 'brthum': 42, 'brwhaw': 43, 'btbwar': 44,
'btnwar': 45, 'btywar': 46, 'buffle': 47, 'buggna': 48, 'buhvir': 49,
'bulori': 50, 'bushti': 51, 'buwtea': 52, 'buwwar': 53, 'cacwre': 54,
'calgul': 55, 'calqua': 56, 'camwar': 57, 'cangoo': 58, 'canwar': 59,
'canwre': 60, 'carwre': 61, 'casfin': 62, 'caster1': 63, 'casvir': 64,
'cedwax': 65, 'chispa': 66, 'chiswi': 67, 'chswar': 68, 'chukar': 69,
'clanut': 70, 'cliswa': 71, 'comgol': 72, 'comgra': 73, 'comloo': 74,
'commer': 75, 'comnig': 76, 'comrav': 77, 'comred': 78, 'comter': 79,
'comyel': 80, 'coohaw': 81, 'coshum': 82, 'cowscj1': 83, 'daejun': 84,
'doccor': 85, 'dowwoo': 86, 'dusfly': 87, 'eargre': 88, 'easblu': 89,
'easkin': 90, 'easmea': 91, 'easpho': 92, 'eastow': 93, 'eawpew': 94,
'eucdov': 95, 'eursta': 96, 'evegro': 97, 'fiespa': 98, 'fiscro': 99,
'foxspa': 100, 'gadwal': 101, 'gcrfin': 102, 'gnttow': 103, 'gnwtea': 104,
'gockin': 105, 'gocspa': 106, 'goleag': 107, 'grbher3': 108, 'grcfly': 109,
'greegr': 110, 'greroa': 111, 'greyel': 112, 'grhowl': 113, 'grnher': 114,
'grtgra': 115, 'grycat': 116, 'gryfly': 117, 'haiwoo': 118, 'hamfly': 119,
'hergul': 120, 'herthr': 121, 'hoomer': 122, 'hoowar': 123, 'horgre': 124,
'horlar': 125, 'houfin': 126, 'houspa': 127, 'houwre': 128, 'indbun': 129,
'juntit1': 130, 'killde': 131, 'labwoo': 132, 'larspa': 133, 'lazbun': 134,
'leabit': 135, 'leafly': 136, 'leasan': 137, 'lecthr': 138, 'lesgol': 139,
'lesnig': 140, 'lesyel': 141, 'lewwoo': 142, 'linspa': 143, 'lobcur': 144,
'lobdow': 145, 'logshr': 146, 'lotduc': 147, 'louwat': 148, 'macwar': 149,
'magwar': 150, 'mallar3': 151, 'marwre': 152, 'merlin': 153, 'moublu': 154,
'mouchi': 155, 'moudov': 156, 'norcar': 157, 'norfli': 158, 'norhar2': 159,
'normoc': 160, 'norpar': 161, 'norpin': 162, 'norsho': 163, 'norwat': 164,
'nrwswa': 165, 'nutwoo': 166, 'olsfly': 167, 'orcwar': 168, 'osprey': 169,
'ovenbi1': 170, 'palwar': 171, 'pasfly': 172, 'pecsan': 173, 'perfal': 174,
'phaino': 175, 'pibgre': 176, 'pilwoo': 177, 'pingro': 178, 'pinjay': 179,
'pinsis': 180, 'pinwar': 181, 'plsvir': 182, 'prawar': 183, 'purfin': 184,
'pygnut': 185, 'rebmer': 186, 'rebnut': 187, 'rebsap': 188, 'rebwoo': 189,
'redcro': 190, 'redhea': 191, 'reevir1': 192, 'renpha': 193, 'reshaw': 194,
'rethaw': 195, 'rewbla': 196, 'ribgul': 197, 'rinduc': 198, 'robgro': 199,
'rocpig': 200, 'rocwre': 201, 'rthhum': 202, 'ruckin': 203, 'rudduc': 204,
'rufgro': 205, 'rufhum': 206, 'rusbla': 207, 'sagspa1': 208, 'sagthr': 209,
'savspa': 210, 'saypho': 211, 'scatan': 212, 'scoori': 213, 'semplo': 214,
'semsan': 215, 'sheowl': 216, 'shshaw': 217, 'snobun': 218, 'snogoo': 219,
'solsan': 220, 'sonspa': 221, 'sora': 222, 'sposan': 223, 'spotow': 224,
'stejay': 225, 'swahaw': 226, 'swaspa': 227, 'swathr': 228, 'treswa': 229,
'truswa': 230, 'tuftit': 231, 'tunswa': 232, 'veery': 233, 'vesspa': 234,
'vigswa': 235, 'warvir': 236, 'wesblu': 237, 'wesgre': 238, 'weskin': 239,
'wesmea': 240, 'wessan': 241, 'westan': 242, 'wewpew': 243, 'whbnut': 244,
'whcspa': 245, 'whfibi': 246, 'whtspa': 247, 'whtswi': 248, 'wilfly': 249,
'wilsni1': 250, 'wiltur': 251, 'winwre3': 252, 'wlswar': 253, 'wooduc': 254,
'wooscj2': 255, 'woothr': 256, 'y00475': 257, 'yebfly': 258, 'yebsap': 259,
'yehbla': 260, 'yelwar': 261, 'yerwar': 262, 'yetvir': 263
}
INV_BIRD_CODE = {v: k for k, v in BIRD_CODE.items()}
```
## Define Dataset
For `site_3`, I decided to use the same procedure as for `site_1` and `site_2`: crop a 5-second window out of the clip and predict on that short clip.
The only difference is that for `site_3` I crop 5-second windows from the start to the end of the clip, and aggregate the predictions over all of those windows after predicting on each one.
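The crop-and-aggregate procedure can be sketched on its own before looking at the dataset class. The snippet below uses a made-up `predict_labels` stand-in for the model; it splits a clip into complete 5-second windows, discards the incomplete tail the same way the dataset's loop does, and takes the union of the predicted labels over windows:

```python
import numpy as np

SR = 32000  # sample rate used throughout this notebook

def chunk_clip(clip, sr=SR, seconds=5):
    """Split a clip into complete `seconds`-long windows; drop the tail."""
    window = sr * seconds
    chunks = []
    start, end = 0, window
    while start < len(clip):
        y = clip[start:end]
        if len(y) != window:  # incomplete tail window is discarded
            break
        chunks.append(y)
        start, end = end, end + window
    return chunks

def predict_labels(chunk):
    # hypothetical stand-in for the model: pretend loud chunks contain bird 0
    return {0} if np.abs(chunk).mean() > 0.5 else set()

clip = np.ones(SR * 12)  # a fake 12-second clip
chunks = chunk_clip(clip)
all_events = set().union(*(predict_labels(c) for c in chunks))
print(len(chunks), all_events)  # -> 2 {0}
```

A 12-second clip yields two full windows; the 2-second remainder is dropped, mirroring the `break` in the dataset's `while` loop.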
```
def mono_to_color(X: np.ndarray,
                  mean=None,
                  std=None,
                  norm_max=None,
                  norm_min=None,
                  eps=1e-6):
    """
    Code from https://www.kaggle.com/daisukelab/creating-fat2019-preprocessed-data
    """
    # Stack X as [X, X, X] to get a 3-channel image
    X = np.stack([X, X, X], axis=-1)
    # Standardize (explicit None checks so that 0 is a valid argument)
    mean = mean if mean is not None else X.mean()
    X = X - mean
    std = std if std is not None else X.std()
    Xstd = X / (std + eps)
    _min, _max = Xstd.min(), Xstd.max()
    norm_max = norm_max if norm_max is not None else _max
    norm_min = norm_min if norm_min is not None else _min
    if (_max - _min) > eps:
        # Normalize to [0, 255]
        V = Xstd
        V[V < norm_min] = norm_min
        V[V > norm_max] = norm_max
        V = 255 * (V - norm_min) / (norm_max - norm_min)
        V = V.astype(np.uint8)
    else:
        # Just zeros
        V = np.zeros_like(Xstd, dtype=np.uint8)
    return V
class TestDataset(data.Dataset):
    def __init__(self, df: pd.DataFrame, clip: np.ndarray,
                 img_size=224, melspectrogram_parameters={}):
        self.df = df
        self.clip = clip
        self.img_size = img_size
        self.melspectrogram_parameters = melspectrogram_parameters

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx: int):
        SR = 32000
        sample = self.df.loc[idx, :]
        site = sample.site
        row_id = sample.row_id
        if site == "site_3":
            y = self.clip.astype(np.float32)
            len_y = len(y)
            start = 0
            end = SR * 5
            images = []
            while len_y > start:
                y_batch = y[start:end].astype(np.float32)
                if len(y_batch) != (SR * 5):
                    break
                start = end
                end = end + SR * 5
                melspec = librosa.feature.melspectrogram(y_batch,
                                                         sr=SR,
                                                         **self.melspectrogram_parameters)
                melspec = librosa.power_to_db(melspec).astype(np.float32)
                image = mono_to_color(melspec)
                height, width, _ = image.shape
                image = cv2.resize(image, (int(width * self.img_size / height), self.img_size))
                image = np.moveaxis(image, 2, 0)
                image = (image / 255.0).astype(np.float32)
                images.append(image)
            images = np.asarray(images)
            return images, row_id, site
        else:
            end_seconds = int(sample.seconds)
            start_seconds = int(end_seconds - 5)
            start_index = SR * start_seconds
            end_index = SR * end_seconds
            y = self.clip[start_index:end_index].astype(np.float32)
            melspec = librosa.feature.melspectrogram(y, sr=SR, **self.melspectrogram_parameters)
            melspec = librosa.power_to_db(melspec).astype(np.float32)
            image = mono_to_color(melspec)
            height, width, _ = image.shape
            image = cv2.resize(image, (int(width * self.img_size / height), self.img_size))
            image = np.moveaxis(image, 2, 0)
            image = (image / 255.0).astype(np.float32)
            return image, row_id, site
```
## Prediction loop
```
def get_model(config: dict, weights_path: str):
    model = ResNet(**config)
    checkpoint = torch.load(weights_path)
    model.load_state_dict(checkpoint["model_state_dict"])
    device = torch.device("cuda")
    model.to(device)
    model.eval()
    return model


def prediction_for_clip(test_df: pd.DataFrame,
                        clip: np.ndarray,
                        model: ResNet,
                        mel_params: dict,
                        threshold=0.5):
    dataset = TestDataset(df=test_df,
                          clip=clip,
                          img_size=224,
                          melspectrogram_parameters=mel_params)
    loader = data.DataLoader(dataset, batch_size=1, shuffle=False)
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.eval()
    prediction_dict = {}
    for image, row_id, site in progress_bar(loader):
        site = site[0]
        row_id = row_id[0]
        if site in {"site_1", "site_2"}:
            image = image.to(device)
            with torch.no_grad():
                prediction = model(image)
                proba = prediction["multilabel_proba"].detach().cpu().numpy().reshape(-1)
            events = proba >= threshold
            labels = np.argwhere(events).reshape(-1).tolist()
        else:
            # to avoid prediction on one large batch, split into mini-batches
            image = image.squeeze(0)
            batch_size = 16
            whole_size = image.size(0)
            if whole_size % batch_size == 0:
                n_iter = whole_size // batch_size
            else:
                n_iter = whole_size // batch_size + 1
            all_events = set()
            for batch_i in range(n_iter):
                batch = image[batch_i * batch_size:(batch_i + 1) * batch_size]
                if batch.ndim == 3:
                    batch = batch.unsqueeze(0)
                batch = batch.to(device)
                with torch.no_grad():
                    prediction = model(batch)
                    proba = prediction["multilabel_proba"].detach().cpu().numpy()
                events = proba >= threshold
                for i in range(len(events)):
                    event = events[i, :]
                    labels = np.argwhere(event).reshape(-1).tolist()
                    for label in labels:
                        all_events.add(label)
            labels = list(all_events)
        if len(labels) == 0:
            prediction_dict[row_id] = "nocall"
        else:
            labels_str_list = list(map(lambda x: INV_BIRD_CODE[x], labels))
            label_string = " ".join(labels_str_list)
            prediction_dict[row_id] = label_string
    return prediction_dict


def prediction(test_df: pd.DataFrame,
               test_audio: Path,
               model_config: dict,
               mel_params: dict,
               weights_path: str,
               threshold=0.5):
    model = get_model(model_config, weights_path)
    unique_audio_id = test_df.audio_id.unique()
    warnings.filterwarnings("ignore")
    prediction_dfs = []
    for audio_id in unique_audio_id:
        with timer(f"Loading {audio_id}", logger):
            clip, _ = librosa.load(test_audio / (audio_id + ".mp3"),
                                   sr=TARGET_SR,
                                   mono=True,
                                   res_type="kaiser_fast")
        test_df_for_audio_id = test_df.query(
            f"audio_id == '{audio_id}'").reset_index(drop=True)
        with timer(f"Prediction on {audio_id}", logger):
            prediction_dict = prediction_for_clip(test_df_for_audio_id,
                                                  clip=clip,
                                                  model=model,
                                                  mel_params=mel_params,
                                                  threshold=threshold)
        row_id = list(prediction_dict.keys())
        birds = list(prediction_dict.values())
        prediction_df = pd.DataFrame({
            "row_id": row_id,
            "birds": birds
        })
        prediction_dfs.append(prediction_df)
    prediction_df = pd.concat(prediction_dfs, axis=0, sort=False).reset_index(drop=True)
    return prediction_df
```
## Prediction
```
submission = prediction(test_df=test,
test_audio=test_audio,
model_config=model_config,
mel_params=melspectrogram_parameters,
weights_path=weights_path,
threshold=0.8)
submission.to_csv("submission.csv", index=False)
submission
```
## EOF
# **[Assignment] How an e-commerce site can use an A/B test to validate a new page design**
## **Can eCommerce UX change boost the conversion rate from 0.13 to 0.15?**
Key concepts:
* effect size
* sample size for A/B test
* type I error = 0.05 and Power = 0.8
* z-score, confidence interval
Reference: A/B testing: A step-by-step guide in Python by Renato Fillinich @ medium.com
Data: ab_data.csv from Kaggle
# **[Assignment Goals]**
1. Understand the Binomial distribution and how to obtain statistical answers with the normal approximation
2. Interpret the results of an A/B test
# **[Assignment Focus]**
1. How to decide the minimum sample size
2. How to judge whether the A/B result is significant using the z value, p-value, and confidence interval
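Before walking through the statsmodels version in the code below, it may help to see the underlying arithmetic. A sketch of the pooled two-proportion z-test, the same statistic that `proportions_ztest` computes; the counts here are made up for illustration:

```python
from math import sqrt

# hypothetical counts for illustration: conversions and sample size per group
conv_a, n_a = 600, 4720  # control
conv_b, n_b = 620, 4720  # treatment

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # pooled standard error
z = (p_a - p_b) / se
print(round(z, 3))  # |z| < 1.96 here, so this made-up difference is not significant
```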
```
# Packages imports
#
import numpy as np
import pandas as pd
import scipy.stats as stats
import statsmodels.stats.api as sms
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from math import ceil
%matplotlib inline
# Some plot styling preferences
plt.style.use('seaborn-whitegrid')
font = {'family' : 'Helvetica',
'weight' : 'bold',
'size' : 14}
mpl.rc('font', **font)
# Determine the required sample size
effect_size = sms.proportion_effectsize(0.12, 0.11)  # Calculating effect size based on our expected rates
required_n = sms.NormalIndPower().solve_power(
    effect_size,
    power=0.8,
    alpha=0.05,
    ratio=1
)  # Calculating sample size needed
required_n = ceil(required_n)  # Rounding up to next whole number
print(required_n)
# Load and preview the experiment data
df = pd.read_csv('./sample_data/ab_data.csv')
df.head()
df.info()
# To make sure everyone in the control group sees the old page and vice versa,
# use crosstab with landing_page as the columns and group as the rows
pd.crosstab(df['group'], df['landing_page'])
# Detect users that appear multiple times
session_counts = df['user_id'].value_counts(ascending=False)
multi_users = session_counts[session_counts > 1].count()
print(f'There are {multi_users} users that appear multiple times in the dataset')
# Remove users that appear multiple times
users_to_drop = session_counts[session_counts > 1].index
df = df[~df['user_id'].isin(users_to_drop)]
print(f'The updated dataset now has {df.shape[0]} entries')
# Sample equal numbers for the control and treatment groups: 4720 * 2 = 9440
control_sample = df[df['group'] == 'control'].sample(n=required_n, random_state=22)
treatment_sample = df[df['group'] == 'treatment'].sample(n=required_n, random_state=22)
ab_test = pd.concat([control_sample, treatment_sample], axis=0)
ab_test.reset_index(drop=True, inplace=True)
ab_test
ab_test.info()
# Confirm ab_test has equal control and treatment counts
ab_test['group'].value_counts()
# Compute the conversion rate mean, standard deviation, and standard error
conversion_rates = ab_test.groupby('group')['converted']
std_p = lambda x: np.std(x, ddof=0) # Std. deviation of the proportion
se_p = lambda x: stats.sem(x, ddof=0) # Std. error of the proportion (std / sqrt(n))
conversion_rates = conversion_rates.agg([np.mean, std_p, se_p])
conversion_rates.columns = ['conversion_rate', 'std_deviation', 'std_error']
conversion_rates.style.format('{:.3f}')
# Plot a bar chart of the conversion rate by group
plt.figure(figsize=(8,6))
sns.barplot(x=ab_test['group'], y=ab_test['converted'], ci=False)
plt.ylim(0, 0.17)
plt.title('Conversion rate by group', pad=20)
plt.xlabel('Group', labelpad=15)
plt.ylabel('Converted (proportion)', labelpad=15);
# Compute z_stat, pval, and confidence intervals with statsmodels functions
from statsmodels.stats.proportion import proportions_ztest, proportion_confint
control_results = ab_test[ab_test['group'] == 'control']['converted']
treatment_results = ab_test[ab_test['group'] == 'treatment']['converted']
n_con = control_results.count()
n_treat = treatment_results.count()
successes = [control_results.sum(), treatment_results.sum()]
nobs = [n_con, n_treat]
z_stat, pval = proportions_ztest(successes, nobs=nobs)
(lower_con, lower_treat), (upper_con, upper_treat) = proportion_confint(successes, nobs=nobs, alpha=0.05)
print(f'z statistic: {z_stat:.2f}')
print(f'p-value: {pval:.3f}')
print(f'ci 95% for control group: [{lower_con:.3f}, {upper_con:.3f}]')
print(f'ci 95% for treatment group: [{lower_treat:.3f}, {upper_treat:.3f}]')
# Interpret the statistical results
```
# Assignment: interpret the final statistics of the program. Is the A/B test significant?
(0.13, 0.15)
With alpha = 0.05, the result counts as significant only when z > 1.96 or z < -1.96,
and p must also be smaller than 0.05.
The result is therefore judged not significant.
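The same decision rule can be checked with a few lines of standard-library Python: the two-sided p-value follows from the normal CDF, so the z cutoff of 1.96 and the p cutoff of 0.05 are two views of one criterion. A small sketch:

```python
from math import erf, sqrt

def two_sided_p(z):
    """Two-sided p-value of a z statistic under the standard normal."""
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # normal CDF at |z|
    return 2 * (1 - phi)

print(round(two_sided_p(1.96), 3))  # 0.05: the boundary of significance
print(round(two_sided_p(0.07), 3))  # far above 0.05: not significant
```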
# Assignment: repeat the calculation with (0.12, 0.11). Is the result significant?
z statistic: 0.07
p-value: 0.945
ci 95% for control group: [0.116, 0.126]
ci 95% for treatment group: [0.116, 0.126]
z does not exceed +/-1.96
p is greater than alpha = 0.05
The confidence intervals show that the control group's 0.12 falls inside its interval, so that rate is credible,
while the treatment group's 0.11 falls outside its interval, so that figure is not credible.
# Assignment: which modules/functions were used to compute the sample size?
```
# Determine the required sample size
effect_size = sms.proportion_effectsize(0.13, 0.15) # Calculating effect size based on our expected rates
required_n = sms.NormalIndPower().solve_power(
effect_size,
power=0.8,
alpha=0.05,
ratio=1
) # Calculating sample size needed
required_n = ceil(required_n) # Rounding up to next whole number
print(required_n)
```
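As a cross-check on the statsmodels call above, the same number can be reproduced by hand: `proportion_effectsize` is Cohen's h (an arcsine transform of the two rates), and for two equal groups the required size per group is n = 2((z_alpha + z_power) / h)^2. A sketch using only the standard library, with the normal quantiles hard-coded:

```python
from math import asin, sqrt, ceil

p1, p2 = 0.13, 0.15
h = 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))  # Cohen's h, what proportion_effectsize returns
z_alpha = 1.959964  # norm.ppf(1 - 0.05/2), hard-coded
z_power = 0.841621  # norm.ppf(0.8), hard-coded
n = 2 * ((z_alpha + z_power) / h) ** 2  # required size per group, equal group sizes
print(ceil(n))  # 4720, matching solve_power
```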
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
This notebook demonstrates how to run a batch scoring job. The __[Inception-V3 model](https://arxiv.org/abs/1512.00567)__ and unlabeled images from the __[ImageNet](http://image-net.org/)__ dataset are used. The notebook registers a pretrained Inception model in the model registry, then uses the model to batch-score the images in a blob container.
## Prerequisites
Make sure you go through the [00. Installation and Configuration](./00.configuration.ipynb) Notebook first if you haven't.
```
import os
from azureml.core import Workspace, Run, Experiment
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
# Also create a Project and attach to Workspace
scripts_folder = "scripts"
if not os.path.isdir(scripts_folder):
    os.mkdir(scripts_folder)
from azureml.core.compute import BatchAiCompute, ComputeTarget
from azureml.core.datastore import Datastore
from azureml.data.data_reference import DataReference
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import PythonScriptStep
from azureml.core.runconfig import CondaDependencies, RunConfiguration
```
## Create and attach Compute targets
Use the below code to create and attach Compute targets.
```
# Batch AI compute
cluster_name = "gpu_cluster"
try:
    cluster = BatchAiCompute(ws, cluster_name)
    print("found existing cluster.")
except:
    print("creating new cluster")
    provisioning_config = BatchAiCompute.provisioning_configuration(vm_size="STANDARD_NC6",
                                                                    autoscale_enabled=True,
                                                                    cluster_min_nodes=0,
                                                                    cluster_max_nodes=1)
    # create the cluster
    cluster = ComputeTarget.create(ws, cluster_name, provisioning_config)
    cluster.wait_for_completion(show_output=True)
```
# Python scripts to run
The Python script that runs the batch scoring: `batchai_score.py` reads input images from `dataset_path` and the pretrained model from `model_dir`, and writes a `result-labels.txt` to `output_dir`.
```
%%writefile $scripts_folder/batchai_score.py
import os
import argparse
import datetime,time
import tensorflow as tf
from math import ceil
import numpy as np
import shutil
from tensorflow.contrib.slim.python.slim.nets import inception_v3
from azureml.core.model import Model
slim = tf.contrib.slim
parser = argparse.ArgumentParser(description="Start a tensorflow model serving")
parser.add_argument('--model_name', dest="model_name", required=True)
parser.add_argument('--label_dir', dest="label_dir", required=True)
parser.add_argument('--dataset_path', dest="dataset_path", required=True)
parser.add_argument('--output_dir', dest="output_dir", required=True)
parser.add_argument('--batch_size', dest="batch_size", type=int, required=True)
args = parser.parse_args()
image_size = 299
num_channel = 3
# create output directory if it does not exist
os.makedirs(args.output_dir, exist_ok=True)
def get_class_label_dict(label_file):
    label = []
    proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()
    for l in proto_as_ascii_lines:
        label.append(l.rstrip())
    return label


class DataIterator:
    def __init__(self, data_dir):
        self.file_paths = []
        image_list = os.listdir(data_dir)
        total_size = len(image_list)
        self.file_paths = [data_dir + '/' + file_name.rstrip() for file_name in image_list]
        self.labels = [1 for file_name in self.file_paths]

    @property
    def size(self):
        return len(self.labels)

    def input_pipeline(self, batch_size):
        images_tensor = tf.convert_to_tensor(self.file_paths, dtype=tf.string)
        labels_tensor = tf.convert_to_tensor(self.labels, dtype=tf.int64)
        input_queue = tf.train.slice_input_producer([images_tensor, labels_tensor], shuffle=False)
        labels = input_queue[1]
        images_content = tf.read_file(input_queue[0])
        image_reader = tf.image.decode_jpeg(images_content, channels=num_channel, name="jpeg_reader")
        float_caster = tf.cast(image_reader, tf.float32)
        new_size = tf.constant([image_size, image_size], dtype=tf.int32)
        images = tf.image.resize_images(float_caster, new_size)
        images = tf.divide(tf.subtract(images, [0]), [255])
        image_batch, label_batch = tf.train.batch([images, labels], batch_size=batch_size, capacity=5 * batch_size)
        return image_batch


def main(_):
    start_time = datetime.datetime.now()
    label_file_name = os.path.join(args.label_dir, "labels.txt")
    label_dict = get_class_label_dict(label_file_name)
    classes_num = len(label_dict)
    test_feeder = DataIterator(data_dir=args.dataset_path)
    total_size = len(test_feeder.labels)
    count = 0
    # get model from model registry
    model_path = Model.get_model_path(args.model_name)
    with tf.Session() as sess:
        test_images = test_feeder.input_pipeline(batch_size=args.batch_size)
        with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
            input_images = tf.placeholder(tf.float32, [args.batch_size, image_size, image_size, num_channel])
            logits, _ = inception_v3.inception_v3(input_images,
                                                  num_classes=classes_num,
                                                  is_training=False)
            probabilities = tf.argmax(logits, 1)
        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        saver = tf.train.Saver()
        saver.restore(sess, model_path)
        out_filename = os.path.join(args.output_dir, "result-labels.txt")
        with open(out_filename, "w") as result_file:
            i = 0
            while count < total_size and not coord.should_stop():
                test_images_batch = sess.run(test_images)
                file_names_batch = test_feeder.file_paths[i * args.batch_size: min(test_feeder.size, (i + 1) * args.batch_size)]
                results = sess.run(probabilities, feed_dict={input_images: test_images_batch})
                new_add = min(args.batch_size, total_size - count)
                count += new_add
                i += 1
                for j in range(new_add):
                    result_file.write(os.path.basename(file_names_batch[j]) + ": " + label_dict[results[j]] + "\n")
                result_file.flush()
        coord.request_stop()
        coord.join(threads)
        # copy the file to artifacts
        shutil.copy(out_filename, "./outputs/")
        # Move the processed data out of the blob so that the next run can process the data.


if __name__ == "__main__":
    tf.app.run()
```
## Prepare Model and Input data
### Download Model
Download and extract model from http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz to `"models"`
```
# create directory for model
model_dir = 'models'
if not os.path.isdir(model_dir):
    os.mkdir(model_dir)
import tarfile
import urllib.request
url="http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz"
response = urllib.request.urlretrieve(url, "model.tar.gz")
tar = tarfile.open("model.tar.gz", "r:gz")
tar.extractall(model_dir)
```
### Create a datastore that points to blob container containing sample images
We have created a public blob container `sampledata` on an account named `pipelinedata` containing images from ImageNet evaluation set. In the next step, we create a datastore with name `images_datastore` that points to this container. The `overwrite=True` step overwrites any datastore that was created previously with that name.
This step can be changed to point to your blob container by providing an additional `account_key` parameter with `account_name`.
```
account_name = "pipelinedata"
sample_data = Datastore.register_azure_blob_container(ws, datastore_name="images_datastore", container_name="sampledata",
account_name=account_name,
overwrite=True)
```
# Output datastore
We write the outputs to the default datastore
```
default_ds = ws.get_default_datastore()
```
# Specify where the data is stored or will be written to
```
from azureml.core.conda_dependencies import CondaDependencies
from azureml.data.data_reference import DataReference
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.core import Datastore
from azureml.core import Experiment
input_images = DataReference(datastore=sample_data,
data_reference_name="input_images",
path_on_datastore="batchscoring/images",
mode="download"
)
model_dir = DataReference(datastore=sample_data,
data_reference_name="input_model",
path_on_datastore="batchscoring/models",
mode="download"
)
label_dir = DataReference(datastore=sample_data,
data_reference_name="input_labels",
path_on_datastore="batchscoring/labels",
mode="download"
)
output_dir = PipelineData(name="scores",
datastore_name=default_ds.name,
output_path_on_compute="batchscoring/results")
```
## Register the model with Workspace
```
import shutil
from azureml.core.model import Model
# register downloaded model
model = Model.register(model_path = "models/inception_v3.ckpt",
model_name = "inception", # this is the name the model is registered as
tags = {'pretrained': "inception"},
description = "Imagenet trained tensorflow inception",
workspace = ws)
# remove the downloaded dir after registration if you wish
shutil.rmtree("models")
```
# Specify environment to run the script
```
cd = CondaDependencies.create(pip_packages=["tensorflow-gpu==1.4.0", "azureml-defaults"])
# Runconfig
batchai_run_config = RunConfiguration(conda_dependencies=cd)
batchai_run_config.environment.docker.enabled = True
batchai_run_config.environment.docker.gpu_support = True
batchai_run_config.environment.docker.base_image = "microsoft/mmlspark:gpu-0.12"
batchai_run_config.environment.spark.precache_packages = False
```
# Steps to run
A subset of the parameters to the Python script can be given as input when we re-run a `PublishedPipeline`. In the current example, we expose the script's `batch_size` argument as such a parameter.
```
from azureml.pipeline.core.graph import PipelineParameter
batch_size_param = PipelineParameter(name="param_batch_size", default_value=20)
inception_model_name = "inception_v3.ckpt"
batch_score_step = PythonScriptStep(
name="batch ai scoring",
script_name="batchai_score.py",
arguments=["--dataset_path", input_images,
"--model_name", "inception",
"--label_dir", label_dir,
"--output_dir", output_dir,
"--batch_size", batch_size_param],
target=cluster,
inputs=[input_images, label_dir],
outputs=[output_dir],
runconfig=batchai_run_config,
source_directory=scripts_folder
)
pipeline = Pipeline(workspace=ws, steps=[batch_score_step])
pipeline_run = Experiment(ws, 'batch_scoring').submit(pipeline, pipeline_params={"param_batch_size": 20})
```
# Monitor run
```
from azureml.train.widgets import RunDetails
RunDetails(pipeline_run).show()
pipeline_run.wait_for_completion(show_output=True)
```
# Download and review output
```
step_run = list(pipeline_run.get_children())[0]
step_run.download_file("./outputs/result-labels.txt")
import pandas as pd
df = pd.read_csv("result-labels.txt", delimiter=":", header=None)
df.columns = ["Filename", "Prediction"]
df.head()
```
# Publish a pipeline and rerun using a REST call
## Create a published pipeline
```
published_pipeline = pipeline_run.publish_pipeline(
name="Inception v3 scoring", description="Batch scoring using Inception v3 model", version="1.0")
published_id = published_pipeline.id
```
## Rerun using REST call
## Get AAD token
```
from azureml.core.authentication import AzureCliAuthentication
import requests
cli_auth = AzureCliAuthentication()
aad_token = cli_auth.get_authentication_header()
```
## Run published pipeline using its REST endpoint
```
from azureml.pipeline.core import PublishedPipeline
rest_endpoint = PublishedPipeline.get_endpoint(published_id, ws)
# specify batch size when running the pipeline
response = requests.post(rest_endpoint, headers=aad_token, json={"param_batch_size": 50})
run_id = response.json()["Id"]
```
## Monitor the new run
```
from azureml.pipeline.core.run import PipelineRun
published_pipeline_run = PipelineRun(ws.experiments()["batch_scoring"], run_id)
RunDetails(published_pipeline_run).show()
```
```
# standard imports
import numpy as np
from scipy.optimize import minimize
import matplotlib.pyplot as plt
import time as time
#import json
import pickle
# 3rd party packages
import qiskit as qk
import qiskit.providers.aer.noise as noise
import networkx as nx
import tenpy as tp
## custom things
from networks.networks import IsoMPS
from networks.isonetwork import QKParamCircuit
import circuits.basic_circuits as circuits
```
## HoloPy demo: spin-chain
XXZ: $$H_\text{XXZ}=\sum_{i}J\left(\sigma^x_{i}\sigma^x_{i+1}+\sigma^y_{i}\sigma^y_{i+1} + \Delta \sigma^z_i\sigma^z_{i+1}\right)+h_z\sigma^z_i$$
Ising: $$H_\text{TFIM}=\sum_i -J\sigma^x_i\sigma^x_{i+1}-h\sigma^z_{i}$$
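As a quick sanity check on these definitions, a tiny chain can be diagonalized exactly with numpy Kronecker products. The sketch below builds a two-site open Ising chain with x-x coupling and a field along z (the convention used by `ising_impo` further down) at J = h = 1; its exact ground-state energy is -sqrt(5):

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

J, h = 1.0, 1.0
# H = -J sx(x)sx - h (sz(x)I + I(x)sz) for two sites, open boundary
H = -J * np.kron(sx, sx) - h * (np.kron(sz, I2) + np.kron(I2, sz))
E0 = np.linalg.eigvalsh(H)[0]
print(E0)  # -sqrt(5) = -2.2360679...
```

The same construction scales to a handful of sites and gives an independent reference energy for the variational circuit.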
```
# ansatz parameters
nb = 1 # number of bond-qubits
L = 1 # number of unit cells
l_uc = 2 # number of sites in unit cell
## Setup IsoMPS ##
# initialize registers
preg = qk.QuantumRegister(1,'p') # physical qubits
breg = qk.QuantumRegister(nb,'b') # bond qubits
creg = qk.ClassicalRegister(L*l_uc+nb,'m') # classical register to hold measurement outcomes
## Initialize parameterized circuits
# bond-prep circuit (sets left-boundary vector of MPS)
bond_prep_circ = QKParamCircuit(qk.QuantumCircuit(breg),[])
circs = []#[bond_prep_circ]
# circuits that generate tensors
param_names= [] # list of circuit parameters
for j in range(l_uc):
    # circuit to initialize state (start in state 101010101...)
    init_circ = qk.QuantumCircuit(preg)
    init_circ.h(preg)
    #if j%2==0: init_circ.x(preg)
    circ_tmp, params_tmp = circuits.star_circ(preg,
                                              breg,
                                              label='[c{}]'.format(j),
                                              circ_type='xxz',
                                              init_circ=init_circ)
    circs += [circ_tmp]
    param_names += params_tmp
# setup circuit-generated isoMPS
psi = IsoMPS(preg,breg,circs,L=L)
## Example (resolve parameters => random values and print circuit)
# pick some random parameter values
param_vals = [4*np.pi*np.random.rand() for j in range(len(param_names))]
param_dict = dict(zip(param_names,param_vals))
psi.construct_circuit(param_dict,include_measurements=False)
print(param_dict)
psi.circ.draw('mpl')
```
## 1. Compute energy by exporting to tenpy
```
## Functions to create tenpy models
def ising_impo(J, h):
    site = tp.networks.site.SpinHalfSite(conserve=None)
    #Id, Sp, Sm, Sz = site.Id, site.Sp, site.Sm, 2*site.Sz
    #Sx = Sp + Sm
    Id, Sx, Sz = site.Id, site.Sigmax, site.Sigmaz
    W = [[Id, Sx, -h*Sz],
         [None, None, -J*Sx],
         [None, None, Id]]
    H = tp.networks.mpo.MPO.from_grids([site], [W], bc='infinite', IdL=0, IdR=-1)
    return H


def xxz_impo(Delta):
    """
    Doesn't work! (leg-charge error?)
    """
    site = tp.networks.site.SpinHalfSite(conserve=None)
    Id, Sp, Sm, Sz = site.Id, site.Sp, site.Sm, site.Sigmaz
    Sx = Sp + Sm
    W = [[Id, Sp, Sm, Sz],
         [None, None, None, 1/2*Sm],
         [None, None, None, 1/2*Sp],
         [None, None, None, Delta*Sz],
         [None, None, None, Id]]
    H = tp.networks.mpo.MPO.from_grids([site], [W], bc='infinite', IdL=0, IdR=-1)
    return H


def xxz_mpo(J=1.0, Delta=1.0, hz=0.0, N=1, bc='infinite'):
    site = tp.networks.site.SpinHalfSite(None)
    Id, Sp, Sm, Sz = site.Id, site.Sp, site.Sm, 2*site.Sz
    W_bulk = [[Id, Sp, Sm, Sz, -hz * Sz],
              [None, None, None, None, 0.5 * J * Sm],
              [None, None, None, None, 0.5 * J * Sp],
              [None, None, None, None, J * Delta * Sz],
              [None, None, None, None, Id]]
    #W_first = [W_bulk[0]] # first row
    #W_last = [[row[-1]] for row in W_bulk] # last column
    #Ws = [W_first] + [W_bulk]*(N - 2) + [W_last]
    #H = tp.networks.mpo.MPO.from_grids([site]*N, Ws, bc, IdL=0, IdR=-1) # (probably leave the IdL,IdR)
    H = tp.networks.mpo.MPO.from_grids([site], [W_bulk], bc, IdL=0, IdR=-1)  # (probably leave the IdL,IdR)
    return H

test = xxz_mpo()

# energy calculator
def energy_tp(param_vals, *args):
    """
    function to calculate energy using MPO/MPS contraction in tenpy
    inputs:
        - param_vals = dict {parameter:value}
        - *args,
            args[0] should be psi: state as IsoMPS
            args[1] should be H_mpo: Hamiltonian as MPO
        (input made this way to be compatible w/ scipy.optimize)
    outputs:
        - float, <psi|H|psi> computed w/ tenpy
    """
    # parse inputs
    psi = args[0]  # state as isoMPS
    H_mpo = args[1]  # Hamiltonian as tenpy MPO
    param_dict = dict(zip(param_names, param_vals))
    # convert state from holoPy isoMPS to tenpy MPS
    psi_tp = psi.to_tenpy(param_dict, L=np.inf)
    # compute energy
    E = (H_mpo.expectation_value(psi_tp)).real
    return E
```
## Define the Model
```
## define model ##
J = -1.0
Delta = 1.3
hz = 0.0
```
### classically optimize the variational circuit using tenpy
```
H_mpo = xxz_mpo(J=J,hz=hz,Delta=Delta,N=1)#ising_impo(J=1,h=1) # Hamiltonian as tenpy MPO
# file saving info
subdir = 'test_data'  # subfolder to hold data
filename = 'xxzjobs_Delta1p3.dat'  # file name for saved data (Delta = 1.3 encoded as "1p3")
# optimize ansatz using tenpy
x0 = 0.03*np.random.randn(len(param_names)) # starting point for parameters
t0 = time.time()
opt_result = minimize(energy_tp, # function to minimize
x0, # starting point for parameters
args=(psi,H_mpo), # must take form (isoMPS,tenpy MPO)
method='BFGS'
)
tf = time.time()
# set parameters to previously optimized values
opt_vals = opt_result.x
opt_params = dict(zip(param_names,opt_vals))
print('Optimization done, elapsed time: {}'.format(tf-t0))
print('Optimized energy = {}'.format(opt_result.fun))
```
## 2. Qiskit simulations
Now compute the energy for the optimized parameter values in Qiskit.
```
## Specify hyperparameters ##
L=10 # length of chain to simulate
shots = 1000 # number of shots for each measurement
# list of Pauli strings to measure
# example format for L = 3, l_uc = 4: [['xxxy'],['zzzz'],['yzxz']]
measurement_strings = [['x']*L,
['y']*L,
['z']*L]
# Create meta-data
model_data = {'type':'xxz',
'J':J,
'Delta':Delta,
'hz':hz,
'L':L,
'shots':shots,
}
vqe_data = {'architecture':'su4_star',
'nb':nb,
'params':opt_params}
## Create jobs ##
# loop through measurement strings, and create list of jobs to run
jobs = []
for m in measurement_strings:
psi_curr = IsoMPS(preg,breg,circs,bases=m,L=L)
circ_curr = psi_curr.construct_circuit(opt_params)
jobs += [{'name':'xxz_xxzstar_hz{}'.format(hz)+'_basis_'+m[0],
'isoMPS':psi_curr,
'vqe_data':vqe_data,
'qiskit_circuit':circ_curr,
'qasm':circ_curr.qasm(),
'model':model_data,
'basis':m,
'shots':shots,
'job_id':None, # job-id when submitted to honeywell
'qiskit_results':None, # qiskit simulator results
'H1_results':None # Honeywell results
}]
# save jobs dict to pickle file
file = open(subdir+'/' + filename, 'wb')
pickle.dump(jobs, file)
file.close()
```
### Qiskit Simulation
```
## Define Noise Model ##
# errors (simulation)
perr_1q = 0.000 # 1-qubit gate error
perr_2q = 0.00 # 2-qubit gate error
# depolarizing errors
depol_1q = noise.depolarizing_error(perr_1q, 1)
depol_2q = noise.depolarizing_error(perr_2q, 2)
noise_model = noise.NoiseModel()
noise_model.add_all_qubit_quantum_error(depol_1q, ['u1', 'u2', 'u3'])
noise_model.add_all_qubit_quantum_error(depol_2q, ['cx','cz'])
## Run Jobs (qiskit version) ##
# load job files
file = open(subdir+'/' + filename, 'rb')
jobs = pickle.load(file)
file.close()
# setup qiskit simulator
simulator = qk.Aer.get_backend('qasm_simulator')
for job in jobs:
shots = job['shots']
job['qiskit_results'] = qk.execute(job['qiskit_circuit'],
simulator,
shots=shots,
noise_model=noise_model).result()
# save jobs dict to pickle file
file = open(subdir+'/' + filename, 'wb')
pickle.dump(jobs, file)
file.close()
```
# Post-processing
First, define some functions to extract one- and two-point correlators from the counts dictionary.
### (Note that this will need to be adjusted depending on the structure of the registers in the circuit.)
```
def counts_to_correlators(counts,shots):
"""
converts qiskit-style counts result
to NxN numpy array of 2-point correlators
w/ N = # of sites in isoMPS = L*l_uc
"""
# number of sites (compute from input dictionary shape)
N = len(list(counts.keys())[0].split(" "))
C = np.zeros((N,N))
# loop over each measurement outcome
for k in counts.keys():
split_list = k.split(" ")[::-1] # split bits from each register
# note that qiskit typically orders in reverse order
# NOTE: WILL NEED TO REVISIT CREG ORDERING IF WE HAVE OTHER CREGs
# compute all pairs of correlators
for x in range(N):
for y in range(x+1,N): # use symmetry C[x,y]=C[y,x] to only compute 1/2 of entries
C[x,y] += counts[k] * (2.0*(split_list[x]==split_list[y])-1.0)
C /= shots # normalize
C += C.T + np.eye(N) # we've constructed only the upper-right triangular part
return C
def counts_to_mean(counts,shots):
"""
converts qiskit-type counts result to
one point correlator (mean spin component)
on each site
"""
N = len(list(counts.keys())[0].split(" "))
m = np.zeros(N)
for k in counts.keys():
split_array = np.array(k.split(" ")[::-1]) # split bits from each register
m += 2.0*(split_array=='1')-1.0
m /= shots
return m
```
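As a quick sanity check, the ±1 mapping used by `counts_to_correlators` (agreeing bits contribute +1, disagreeing bits −1) can be exercised on a hand-built counts dictionary. This is a hypothetical 2-site, single-register example, and the helper is restated here so the snippet runs standalone:

```python
import numpy as np

def counts_to_correlators(counts, shots):
    # same logic as above: +1 when bits agree, -1 when they differ
    N = len(list(counts.keys())[0].split(" "))
    C = np.zeros((N, N))
    for k in counts.keys():
        bits = k.split(" ")[::-1]  # qiskit orders registers in reverse
        for x in range(N):
            for y in range(x + 1, N):
                C[x, y] += counts[k] * (2.0 * (bits[x] == bits[y]) - 1.0)
    C /= shots
    C += C.T + np.eye(N)  # fill lower triangle and diagonal
    return C

# 100 shots: 60 aligned outcomes ("0 0"), 40 anti-aligned ("0 1")
counts = {"0 0": 60, "0 1": 40}
C = counts_to_correlators(counts, 100)
print(C[0, 1])  # 0.6*(+1) + 0.4*(-1) = 0.2
```

The diagonal is 1 by construction, consistent with each spin being perfectly correlated with itself.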
### Example: Compute E from correlators
```
## Post-process results ##
# load job files
file = open(subdir+'/' + filename, 'rb')
jobs = pickle.load(file)
file.close()
# compute two-point correlators from counts
Cs = {} # dictionary of 2-point correlators
ms = {} # dictionary of 1-spin correlators ('magnetizations')
for job in jobs:
counts = job['qiskit_results'].get_counts()
key = job['basis'][0] # 'x', 'y', or 'z' (assumes measurements are the same type on each bond)
Cs[key] = counts_to_correlators(counts,job['shots'])
ms[key] = counts_to_mean(counts,job['shots'])
N = len(list(counts.keys())[0].split(" "))
# estimate <H>
burn_in = 4 # number of sites to "burn in" MPS channel before measuring
sites = np.arange(burn_in,L*l_uc-1) # remaining sites
E = 0
for j in sites:
E += job['model']['J']*(Cs['x'][j,j+1]+Cs['y'][j,j+1])
E += job['model']['J'] * job['model']['Delta']*Cs['z'][j,j+1]
E += job['model']['hz'] * np.sum(ms['z'])
E = E/sites.size # convert to energy density
print('Energy density - estimate = {}'.format(E))
# save jobs dict to pickle file
#file = open(subdir+'/' + filename, 'wb')
#pickle.dump(jobs, file)
#file.close()
## Plot results ##
```
### <font color = "darkblue">Updates to Assignment</font>
#### If you were working on the older version:
* Please click on the "Coursera" icon in the top right to open up the folder directory.
* Navigate to the folder: Week 3/ Planar data classification with one hidden layer. You can see your prior work in version 6b: "Planar data classification with one hidden layer v6b.ipynb"
#### List of bug fixes and enhancements
* Clarifies that the classifier will learn to classify regions as either red or blue.
* compute_cost function fixes np.squeeze by casting it as a float.
* compute_cost instructions clarify the purpose of np.squeeze.
* compute_cost clarifies that "parameters" parameter is not needed, but is kept in the function definition until the auto-grader is also updated.
* nn_model removes extraction of parameter values, as the entire parameter dictionary is passed to the invoked functions.
# Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
**You will learn how to:**
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
## 1 - Packages ##
Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.
- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis.
- [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provides various useful functions used in this assignment
```
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
```
## 2 - Dataset ##
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.
```
X, Y = load_planar_dataset()
```
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data. In other words, we want the classifier to define regions as either red or blue.
```
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).
Let's first get a better sense of what our data is like.
**Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`?
**Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
```
### START CODE HERE ### (≈ 3 lines of code)
shape_X = X.shape
shape_Y = Y.shape
m = X.shape[1] # training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**shape of X**</td>
<td> (2, 400) </td>
</tr>
<tr>
<td>**shape of Y**</td>
<td>(1, 400) </td>
</tr>
<tr>
<td>**m**</td>
<td> 400 </td>
</tr>
</table>
## 3 - Simple Logistic Regression
Before building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
```
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
```
You can now plot the decision boundary of these models. Run the code below.
```
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**Accuracy**</td>
<td> 47% </td>
</tr>
</table>
**Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!
## 4 - Neural Network model
Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.
**Here is our model**:
<img src="images/classification_kiank.png" style="width:600px;height:300px;">
**Mathematically**:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
**Reminder**: The general methodology to build a Neural Network is to:
1. Define the neural network structure ( # of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data.
### 4.1 - Defining the neural network structure ####
**Exercise**: Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer
**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
```
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
### START CODE HERE ### (≈ 3 lines of code)
n_x = X.shape[0] # size of input layer
n_h = 4
n_y = Y.shape[0] # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
```
**Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width:20%">
<tr>
<td>**n_x**</td>
<td> 5 </td>
</tr>
<tr>
<td>**n_h**</td>
<td> 4 </td>
</tr>
<tr>
<td>**n_y**</td>
<td> 2 </td>
</tr>
</table>
### 4.2 - Initialize the model's parameters ####
**Exercise**: Implement the function `initialize_parameters()`.
**Instructions**:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros(shape=(n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros(shape=(n_y, 1))
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>**W1**</td>
<td> [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]] </td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.]
[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.]] </td>
</tr>
</table>
### 4.3 - The Loop ####
**Question**: Implement `forward_propagation()`.
**Instructions**:
- Look above at the mathematical representation of your classifier.
- You can use the function `sigmoid()`. It is built-in (imported) in the notebook.
- You can use the function `np.tanh()`. It is part of the numpy library.
- The steps you have to implement are:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`.
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function.
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = W1 @ X + b1
A1 = np.tanh(Z1)
Z2 = W2 @ A1 + b2
A2 = sigmoid(Z2)
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td> 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 </td>
</tr>
</table>
Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
**Exercise**: Implement `compute_cost()` to compute the value of the cost $J$.
**Instructions**:
- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented
$- \sum\limits_{i=1}^{m} y^{(i)}\log(a^{[2](i)})$:
```python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs) # no need to use a for loop!
```
(you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`).
Note that if you use `np.multiply` followed by `np.sum`, the end result will be of type `float`, whereas if you use `np.dot`, the result will be a 2D numpy array. We can use `np.squeeze()` to remove redundant dimensions (in the case of a single float, this will be reduced to a zero-dimensional array). We can then cast the array to type `float` using `float()`.
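For instance, a small standalone check of the two styles (the numbers here are arbitrary):

```python
import numpy as np

A2 = np.array([[0.8, 0.4, 0.9]])
Y = np.array([[1, 0, 1]])

# elementwise multiply + sum -> scalar
cost_a = -np.sum(np.multiply(np.log(A2), Y))

# dot product -> 2-D array of shape (1, 1)
cost_b = -np.dot(np.log(A2), Y.T)

print(cost_b.shape)               # (1, 1)
print(float(np.squeeze(cost_b)))  # same value as cost_a, as a plain float
```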
```
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
[Note that the parameters argument is not used in this function,
but the auto-grader currently expects this parameter.
Future version of this notebook will fix both the notebook
and the auto-grader so that `parameters` is not needed.
For now, please include `parameters` in the function signature,
and also when invoking this function.]
Returns:
cost -- cross-entropy cost given equation (13)
"""
m = Y.shape[1] # number of examples
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
logprobs = Y * np.log(A2) + (1-Y) * np.log(1-A2)
cost = -1/m * np.sum(logprobs)
### END CODE HERE ###
cost = float(np.squeeze(cost)) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**cost**</td>
<td> 0.693058761... </td>
</tr>
</table>
Using the cache computed during forward propagation, you can now implement backward propagation.
**Question**: Implement the function `backward_propagation()`.
**Instructions**:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="images/grad_summary.png" style="width:600px;height:300px;">
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
!-->
- Tips:
- To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
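A quick numerical check of this identity, comparing the closed form against a finite-difference estimate:

```python
import numpy as np

Z1 = np.array([-1.0, 0.0, 0.5, 2.0])
A1 = np.tanh(Z1)

analytic = 1 - np.power(A1, 2)  # g'(z) = 1 - a^2
eps = 1e-6
numeric = (np.tanh(Z1 + eps) - np.tanh(Z1 - eps)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # tiny: the two agree
```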
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1 = parameters['W1']
W2 = parameters['W2']
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
A1 = cache['A1']
A2 = cache['A2']
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
dZ2 = A2 - Y
dW2 = 1/m * dZ2 @ A1.T
db2 = 1/m * np.sum(dZ2, axis=1, keepdims=True)
dZ1 = W2.T @ dZ2 * (1 - A1**2)
dW1 = 1/m * dZ1 @ X.T
db1 = 1/m * np.sum(dZ1, axis=1, keepdims=True)
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td>**dW1**</td>
<td> [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]] </td>
</tr>
<tr>
<td>**db1**</td>
<td> [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]] </td>
</tr>
<tr>
<td>**dW2**</td>
<td> [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] </td>
</tr>
<tr>
<td>**db2**</td>
<td> [[-0.16655712]] </td>
</tr>
</table>
**Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
**General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
**Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
<img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;">
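The effect shown in these animations can be reproduced on a 1-D toy problem. This is a hypothetical example (the function $J(\theta) = \theta^2$ and the learning rates are chosen purely for illustration): the update multiplies $\theta$ by $(1 - 2\alpha)$ each step, so it converges when $|1 - 2\alpha| < 1$ and diverges otherwise.

```python
def gradient_descent(theta0, alpha, steps=25):
    """Minimize J(theta) = theta**2, whose gradient is dJ/dtheta = 2*theta."""
    theta = theta0
    for _ in range(steps):
        theta = theta - alpha * (2 * theta)
    return theta

good = gradient_descent(theta0=5.0, alpha=0.1)  # converges toward 0
bad = gradient_descent(theta0=5.0, alpha=1.1)   # diverges: |1 - 2*alpha| > 1
print(abs(good), abs(bad))
```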
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
dW1 = grads['dW1']
db1 = grads['db1']
dW2 = grads['dW2']
db2 = grads['db2']
## END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
W1 = W1 - learning_rate * dW1
b1 = b1 - learning_rate * db1
W2 = W2 - learning_rate * dW2
b2 = b2 - learning_rate * db2
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:80%">
<tr>
<td>**W1**</td>
<td> [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ -1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[ -3.20136836e-06]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.00010457]] </td>
</tr>
</table>
### 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() ####
**Question**: Build your neural network model in `nn_model()`.
**Instructions**: The neural network model has to use the previous functions in the right order.
```
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters
### START CODE HERE ### (≈ 1 line of code)
parameters = initialize_parameters(n_x, n_h, n_y)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = forward_propagation(X, parameters)
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = compute_cost(A2, Y, parameters)
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = backward_propagation(parameters, cache, X, Y)
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = update_parameters(parameters, grads)
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>
**cost after iteration 0**
</td>
<td>
0.692739
</td>
</tr>
<tr>
<td>
<center> $\vdots$ </center>
</td>
<td>
<center> $\vdots$ </center>
</td>
</tr>
<tr>
<td>**W1**</td>
<td> [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-2.45566237 -3.27042274 2.00784958 3.36773273]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.20459656]] </td>
</tr>
</table>
### 4.5 Predictions
**Question**: Use your model to predict by building predict().
Use forward propagation to predict results.
**Reminder**: predictions = $y_{prediction} = \mathbb 1 \text{{activation > 0.5}} = \begin{cases}
1 & \text{if}\ activation > 0.5 \\
0 & \text{otherwise}
\end{cases}$
As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
```
# GRADED FUNCTION: predict
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
A2, cache = forward_propagation(X, parameters)
predictions = ( A2 > 0.5 ) * 1
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**predictions mean**</td>
<td> 0.666666666667 </td>
</tr>
</table>
It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
```
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**Cost after iteration 9000**</td>
<td> 0.218607 </td>
</tr>
</table>
```
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
```
**Expected Output**:
<table style="width:15%">
<tr>
<td>**Accuracy**</td>
<td> 90% </td>
</tr>
</table>
Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression.
Now, let's try out several hidden layer sizes.
### 4.6 - Tuning hidden layer size (optional/ungraded exercise) ###
Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
```
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
```
**Interpretation**:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.
- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.
**Optional questions**:
**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right.
Some optional/ungraded questions that you can explore if you wish:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 5 below!)
<font color='blue'>
**You've learnt to:**
- Build a complete neural network with a hidden layer
- Make good use of a non-linear unit
- Implement forward propagation and backpropagation, and train a neural network
- See the impact of varying the hidden layer size, including overfitting
</font>
Nice work!
## 5) Performance on other datasets
If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
```
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
Congrats on finishing this Programming Assignment!
Reference:
- http://scs.ryerson.ca/~aharley/neural-networks/
- http://cs231n.github.io/neural-networks-case-study/
```
#IMPORT ALL LIBRARIES
#IMPORT THE PANDAS LIBRARY
import pandas as pd
#IMPORT LIBRARIES FOR POSTGRES
from sqlalchemy import create_engine
import psycopg2
#IMPORT CHART LIBRARIES
from matplotlib import pyplot as plt
from matplotlib import style
#IMPORT BASE-PATH LIBRARIES
import os
import io
#IMPORT THE PDF LIBRARY
from fpdf import FPDF
#IMPORT THE LIBRARY FOR ENCODING CHARTS TO BASE64
import base64
#IMPORT THE EXCEL LIBRARY
import xlsxwriter
#FUNCTION TO UPLOAD DATA FROM CSV TO POSTGRESQL
def uploadToPSQL(columns, table, filePath, engine):
    #READ THE CSV
    df = pd.read_csv(
        os.path.abspath(filePath),
        names=columns,
        keep_default_na=False
    )
    #EMPTY FIELDS ARE FILTERED HERE
    df = df.fillna('')
    #DROP THE COLUMNS THAT ARE NOT USED
    del df['kategori']
    del df['jenis']
    del df['pengiriman']
    del df['satuan']
    #MOVE THE DATA FROM THE CSV INTO POSTGRESQL
    df.to_sql(
        table,
        engine,
        if_exists='replace'
    )
    #IF THE UPLOADED DATA IS NOT EMPTY RETURN TRUE, OTHERWISE FALSE
    if len(df) == 0:
        return False
    else:
        return True
#FUNCTION TO BUILD THE CHARTS; DATA IS FETCHED FROM THE DATABASE, ORDERED BY DATE AND LIMITED
#THIS FUNCTION ALSO CALLS MAKEEXCEL AND MAKEPDF
def makeChart(host, username, password, db, port, table, judul, columns, filePath, name, subjudul, limit, negara, basePath):
    #TEST THE DATABASE CONNECTION
    try:
        #CONNECT TO THE DATABASE
        connection = psycopg2.connect(user=username, password=password, host=host, port=port, database=db)
        cursor = connection.cursor()
        #FETCH DATA FROM THE TABLE DEFINED BELOW, ORDERED BY DATE
        #A LIMIT CAN BE ADDED SO THE QUERY DOES NOT FETCH TOO MUCH DATA
        postgreSQL_select_Query = "SELECT * FROM "+table+" ORDER BY tanggal ASC LIMIT " + str(limit)
        cursor.execute(postgreSQL_select_Query)
        mobile_records = cursor.fetchall()
        uid = []
        lengthx = []
        lengthy = []
        #LOOP OVER THE FETCHED ROWS
        #AND APPEND THE VALUES TO THE VARIABLES ABOVE
        for row in mobile_records:
            uid.append(row[0])
            lengthx.append(row[1])
            if row[2] == "":
                lengthy.append(float(0))
            else:
                lengthy.append(float(row[2]))
        #BUILD THE CHARTS
        #bar
        style.use('ggplot')
        fig, ax = plt.subplots()
        #USE THE ID VALUES FROM THE DATABASE TOGETHER WITH THE DATE VALUES
        ax.bar(uid, lengthy, align='center')
        #CHART TITLE
        ax.set_title(judul)
        ax.set_ylabel('Total')
        ax.set_xlabel('Tanggal')
        ax.set_xticks(uid)
        #THE TOTALS FETCHED FROM THE DATABASE GO HERE
        ax.set_xticklabels((lengthx))
        b = io.BytesIO()
        #SAVE THE CHART AS PNG
        plt.savefig(b, format='png', bbox_inches="tight")
        #CONVERT THE PNG CHART TO BASE64
        barChart = base64.b64encode(b.getvalue()).decode("utf-8").replace("\n", "")
        #SHOW THE CHART
        plt.show()
        #line
        #USE THE DATA FROM THE DATABASE
        plt.plot(lengthx, lengthy)
        plt.xlabel('Tanggal')
        plt.ylabel('Total')
        #CHART TITLE
        plt.title(judul)
        plt.grid(True)
        l = io.BytesIO()
        #SAVE THE CHART AS PNG
        plt.savefig(l, format='png', bbox_inches="tight")
        #CONVERT THE PNG CHART TO BASE64
        lineChart = base64.b64encode(l.getvalue()).decode("utf-8").replace("\n", "")
        #SHOW THE CHART
        plt.show()
        #pie
        #CHART TITLE
        plt.title(judul)
        #USE THE DATA FROM THE DATABASE
        plt.pie(lengthy, labels=lengthx, autopct='%1.1f%%',
                shadow=True, startangle=180)
        plt.axis('equal')
        p = io.BytesIO()
        #SAVE THE CHART AS PNG
        plt.savefig(p, format='png', bbox_inches="tight")
        #CONVERT THE PNG CHART TO BASE64
        pieChart = base64.b64encode(p.getvalue()).decode("utf-8").replace("\n", "")
        #SHOW THE CHART
        plt.show()
        #READ THE CSV AGAIN; ITS FIELDS ARE USED AS TABLE HEADERS FOR THE EXCEL AND PDF OUTPUT
        header = pd.read_csv(
            os.path.abspath(filePath),
            names=columns,
            keep_default_na=False
        )
        #DROP THE COLUMNS THAT ARE NOT USED
        header = header.fillna('')
        del header['tanggal']
        del header['total']
        #CALL THE EXCEL FUNCTION
        makeExcel(mobile_records, header, name, limit, basePath)
        #CALL THE PDF FUNCTION
        makePDF(mobile_records, header, judul, barChart, lineChart, pieChart, name, subjudul, limit, basePath)
    #IF THE DATABASE CONNECTION FAILS, PRINT THE ERROR HERE
    except (Exception, psycopg2.Error) as error:
        print(error)
    #CLOSE THE CONNECTION
    finally:
        if(connection):
            cursor.close()
            connection.close()
#THE MAKEEXCEL FUNCTION TURNS DATA FROM THE DATABASE INTO AN EXCEL TABLE (FORMAT F2)
#THE PLUGIN USED IS XLSXWRITER
def makeExcel(datarow, dataheader, name, limit, basePath):
    #CREATE THE EXCEL FILE
    workbook = xlsxwriter.Workbook(basePath+'jupyter/BLOOMBERG/SektorIndustri/excel/'+name+'.xlsx')
    #ADD A WORKSHEET TO THE EXCEL FILE
    worksheet = workbook.add_worksheet('sheet1')
    #FORMATS FOR BORDERS AND BOLD FONT
    row1 = workbook.add_format({'border': 2, 'bold': 1})
    row2 = workbook.add_format({'border': 2})
    #TURN THE DATA INTO ARRAYS
    data = list(datarow)
    isihead = list(dataheader.values)
    header = []
    body = []
    #LOOP AND COLLECT THE DATA INTO THE VARIABLES ABOVE
    for rowhead in dataheader:
        header.append(str(rowhead))
    for rowhead2 in datarow:
        header.append(str(rowhead2[1]))
    for rowbody in isihead[1]:
        body.append(str(rowbody))
    for rowbody2 in data:
        body.append(str(rowbody2[2]))
    #WRITE THE COLLECTED DATA INTO THE EXCEL COLUMNS AND ROWS
    for col_num, data in enumerate(header):
        worksheet.write(0, col_num, data, row1)
    for col_num, data in enumerate(body):
        worksheet.write(1, col_num, data, row2)
    #CLOSE THE EXCEL FILE
    workbook.close()
#FUNCTION TO TURN DATA FROM THE DATABASE INTO A PDF TABLE (FORMAT F2)
#THE PLUGIN USED IS FPDF
def makePDF(datarow, dataheader, judul, bar, line, pie, name, subjudul, lengthPDF, basePath):
    #SET THE PAPER SIZE; HERE A4 IN LANDSCAPE ORIENTATION
    pdf = FPDF('L', 'mm', [210, 297])
    #ADD A PAGE TO THE PDF
    pdf.add_page()
    #PADDING AND FONT-SIZE SETTINGS
    pdf.set_font('helvetica', 'B', 20.0)
    pdf.set_xy(145.0, 15.0)
    #WRITE THE TITLE INTO THE PDF
    pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=judul, border=0)
    #FONT-SIZE AND PADDING SETTINGS
    pdf.set_font('arial', '', 14.0)
    pdf.set_xy(145.0, 25.0)
    #WRITE THE SUBTITLE INTO THE PDF
    pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=subjudul, border=0)
    #DRAW A LINE UNDER THE SUBTITLE
    pdf.line(10.0, 30.0, 287.0, 30.0)
    pdf.set_font('times', '', 10.0)
    pdf.set_xy(17.0, 37.0)
    #FONT-SIZE AND PADDING SETTINGS
    pdf.set_font('Times', '', 10.0)
    #GET THE PDF HEADER DATA DEFINED ABOVE
    datahead = list(dataheader.values)
    pdf.set_font('Times', 'B', 12.0)
    pdf.ln(0.5)
    th1 = pdf.font_size
    #BUILD THE PDF TABLE AND SHOW THE DATA THAT WAS PASSED IN
    pdf.cell(100, 2*th1, "Kategori", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][0], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Jenis", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][1], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Pengiriman", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][2], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Satuan", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][3], border=1, align='C')
    pdf.ln(2*th1)
    #PADDING SETTINGS
    pdf.set_xy(17.0, 75.0)
    #FONT-SIZE AND PADDING SETTINGS
    pdf.set_font('Times', 'B', 11.0)
    data = list(datarow)
    epw = pdf.w - 2*pdf.l_margin
    col_width = epw/(lengthPDF+1)
    #PADDING SETTINGS
    pdf.ln(0.5)
    th = pdf.font_size
    #WRITE THE HEADER DATA PASSED IN ABOVE INTO THE PDF
    pdf.cell(50, 2*th, str("Negara"), border=1, align='C')
    for row in data:
        pdf.cell(40, 2*th, str(row[1]), border=1, align='C')
    pdf.ln(2*th)
    #WRITE THE BODY DATA PASSED IN ABOVE INTO THE PDF
    pdf.set_font('Times', 'B', 10.0)
    pdf.set_font('Arial', '', 9)
    #NOTE: negara IS READ FROM THE GLOBAL SCOPE DEFINED BELOW
    pdf.cell(50, 2*th, negara, border=1, align='C')
    for row in data:
        pdf.cell(40, 2*th, str(row[2]), border=1, align='C')
    pdf.ln(2*th)
    #DECODE THE CHARTS AND SAVE THEM AS PNG FILES IN THE DIRECTORIES BELOW
    #BAR CHART
    bardata = base64.b64decode(bar)
    barname = basePath+'jupyter/BLOOMBERG/SektorIndustri/img/'+name+'-bar.png'
    with open(barname, 'wb') as f:
        f.write(bardata)
    #LINE CHART
    linedata = base64.b64decode(line)
    linename = basePath+'jupyter/BLOOMBERG/SektorIndustri/img/'+name+'-line.png'
    with open(linename, 'wb') as f:
        f.write(linedata)
    #PIE CHART
    piedata = base64.b64decode(pie)
    piename = basePath+'jupyter/BLOOMBERG/SektorIndustri/img/'+name+'-pie.png'
    with open(piename, 'wb') as f:
        f.write(piedata)
    #FONT-SIZE AND PADDING SETTINGS
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    widthcol = col/3
    #PLACE THE IMAGES SAVED IN THE DIRECTORIES ABOVE
    pdf.image(barname, link='', type='', x=8, y=100, w=widthcol)
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    pdf.image(linename, link='', type='', x=103, y=100, w=widthcol)
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    pdf.image(piename, link='', type='', x=195, y=100, w=widthcol)
    pdf.ln(2*th)
    #WRITE OUT THE PDF FILE
    pdf.output(basePath+'jupyter/BLOOMBERG/SektorIndustri/pdf/'+name+'.pdf', 'F')
#THIS IS WHERE THE VARIABLES ARE DEFINED BEFORE BEING PASSED TO THE FUNCTIONS
#FIRST CALL UPLOADTOPSQL; ON SUCCESS CALL MAKECHART,
#AND MAKECHART IN TURN CALLS MAKEEXCEL AND MAKEPDF
#DEFINE THE COLUMNS BASED ON THE CSV FIELDS
columns = [
    "kategori",
    "jenis",
    "tanggal",
    "total",
    "pengiriman",
    "satuan",
]
#FILE NAME
name = "SektorIndustri1_2"
#DATABASE CONNECTION VARIABLES
host = "localhost"
username = "postgres"
password = "1234567890"
port = "5432"
database = "bloomberg_SektorIndustri"
table = name.lower()
#TITLE FOR THE PDF AND EXCEL OUTPUT
judul = "Data Sektor Eksternal"
subjudul = "Badan Perencanaan Pembangunan Nasional"
#ROW LIMIT FOR THE DATABASE SELECT
limitdata = int(8)
#COUNTRY NAME SHOWN IN THE EXCEL AND PDF OUTPUT
negara = "Indonesia"
#BASE PATH DIRECTORY
basePath = 'C:/Users/ASUS/Documents/bappenas/'
#CSV FILE
filePath = basePath + 'data mentah/BLOOMBERG/SektorIndustri/' + name + '.csv'
#CONNECT TO THE DATABASE
engine = create_engine('postgresql://'+username+':'+password+'@'+host+':'+port+'/'+database)
#CALL THE UPLOAD-TO-PSQL FUNCTION
checkUpload = uploadToPSQL(columns, table, filePath, engine)
#CHECK THE UPLOAD RESULT; IF IT SUCCEEDED BUILD THE CHARTS, OTHERWISE PRINT AN ERROR
if checkUpload == True:
    makeChart(host, username, password, database, port, table, judul, columns, filePath, name, subjudul, limitdata, negara, basePath)
else:
    print("Error When Upload CSV")
```
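One caveat on the code above: the SELECT query is built by string concatenation, which is fragile and injection-prone. psycopg2 can bind values such as the LIMIT server-side; here is a hedged sketch (the helper name and whitelist are ours, not part of the script):

```python
# Sketch: pass the LIMIT as a bound parameter instead of concatenating it.
# Table names cannot be bound parameters, so validate them against a whitelist.
ALLOWED_TABLES = {"sektorindustri1_2"}  # hypothetical whitelist

def build_select(table, limit):
    if table not in ALLOWED_TABLES:
        raise ValueError("unknown table: " + table)
    # The %s placeholder is filled by cursor.execute(query, params), not by Python.
    return "SELECT * FROM " + table + " ORDER BY tanggal ASC LIMIT %s", (int(limit),)

query, params = build_select("sektorindustri1_2", 8)
# cursor.execute(query, params)  # with a live psycopg2 cursor
```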
# Exogenous Process modeling
## ETH price stochastic process
1. Acquire a time series of tick-by-tick (or as close as possible) ETH price data, e.g. 2017-2020 from the [Kaggle Hourly Dataset](https://www.kaggle.com/prasoonkottarathil/ethereum-historical-dataset)
2. Fit a parametric distribution to the ETH price (a gamma fit plus a Kalman prediction step), resulting in a distribution $F^p_{ETH}(t; \mu_{ETH})$ with fitted parameters $\mu_{ETH}$.
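Step 2 can be prototyped before touching the full pipeline. A minimal method-of-moments sketch on synthetic data illustrates the gamma fit (the notebook below uses `scipy.stats.gamma.fit`, i.e. maximum likelihood; the shape and scale here are made-up stand-ins for ETH closes):

```python
import numpy as np

# Synthetic stand-in for hourly ETH closing prices.
rng = np.random.default_rng(42)
prices = rng.gamma(shape=2.0, scale=150.0, size=10000)

# Method of moments for gamma(shape k, scale theta): mean = k*theta, var = k*theta**2.
mean, var = prices.mean(), prices.var()
k_hat = mean**2 / var      # recovered shape, should be close to 2.0
theta_hat = var / mean     # recovered scale, should be close to 150.0
print(k_hat, theta_hat)
```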
```
# import libraries
import pandas as pd
import numpy as np
from scipy.stats import gamma
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
from math import sqrt
%matplotlib inline
```
## Data preprocessing
```
# import data
historical_eth_hourly = pd.read_csv('data/ETH_1H.csv')
historical_eth_hourly.head(5)
historical_eth_hourly['Date'] = pd.to_datetime(historical_eth_hourly['Date'])
historical_eth_hourly = historical_eth_hourly[historical_eth_hourly['Date']>'2017-01-01']
# sort by date from earliest to latest
sorted_historical_eth_hourly = historical_eth_hourly.sort_values(by='Date')
# split into training and test data.
train, test= np.split(sorted_historical_eth_hourly, [int(.9 *len(sorted_historical_eth_hourly))])
train.tail()
train.plot(x='Date',y='Close',title='Eth Hourly data')
```
## Kalman filter
Kalman filters are a lightweight algorithm, often used in an economic context, for reducing noise in signals. The Kalman filter is trained on a subset of the data, and the resulting parameters are then passed into a prediction function for use on subsequent samples. Because Kalman filters are one-step predictors, at each time step the filter is retrained and the system state and parameters are updated.
This implementation was refined by Andrew Clark in a [recent paper](https://ideas.repec.org/p/rdg/emxxdp/em-dp2020-22.html) that embedded Kalman filters in a cadCAD model for forecasting exchange rates.
```
import pandas as pd
import numpy as np
from scipy.stats import gamma
def kalman_filter(observations, initialValue, truthValues=None, plot=False, paramExport=False):
    '''
    Description:
        Function to create a Kalman Filter for smoothing currency timestamps in order to search for the
        intrinsic value.
    Parameters:
        observations: Array of observations, i.e. predicted secondary market prices.
        initialValue: Initial starting value of the filter
        truthValues: Array of truth values, i.e. GPS location or secondary market prices. Can be left
                     blank if none exist
        plot: If True, plot the observations, truth values and Kalman filter.
        paramExport: If True, the parameters xhat, P, xhatminus, Pminus, K are returned to use in training.
    Example:
        xhat, P, xhatminus, Pminus, K = kalman_filter(observations=train.Close.values[0:-1],
                                                      initialValue=train.Close.values[-1], paramExport=True)
    '''
    # initial parameters
    n_iter = len(observations)
    sz = (n_iter,)  # size of array
    if isinstance(truthValues, np.ndarray):
        x = truthValues  # truth value
    z = observations  # observations (normal about x, sigma=0.1)
    Q = 1e-5  # process variance
    # allocate space for arrays
    xhat = np.zeros(sz)       # a posteriori estimate of x
    P = np.zeros(sz)          # a posteriori error estimate
    xhatminus = np.zeros(sz)  # a priori estimate of x
    Pminus = np.zeros(sz)     # a priori error estimate
    K = np.zeros(sz)          # gain or blending factor
    R = 0.5**2  # estimate of measurement variance, change to see effect
    # initial guesses
    xhat[0] = initialValue
    P[0] = 1.0
    for k in range(1, n_iter):
        # time update
        xhatminus[k] = xhat[k-1]
        Pminus[k] = P[k-1] + Q
        # measurement update
        K[k] = Pminus[k] / (Pminus[k] + R)
        xhat[k] = xhatminus[k] + K[k]*(z[k] - xhatminus[k])
        P[k] = (1 - K[k])*Pminus[k]
    if plot == True:
        plt.figure()
        plt.plot(z, 'k+', label='Actual data')
        plt.plot(xhat, 'b-', label='a posteriori estimate')
        if isinstance(truthValues, np.ndarray):
            plt.plot(x, color='g', label='truth value')
        plt.legend()
        plt.title('Kalman Filter Estimates', fontweight='bold')
        plt.xlabel('Iteration')
        plt.ylabel('USD')
        plt.show()
    if paramExport == True:
        return xhat, P, xhatminus, Pminus, K
    else:
        return xhat
def kalman_filter_predict(xhat, P, xhatminus, Pminus, K, observations, truthValues=None, paramExport=False):
    '''
    Description:
        Function to predict a pre-trained Kalman Filter 1 step forward.
    Parameters:
        xhat: Trained Kalman filter values - array
        P: Trained Kalman variance - array
        xhatminus: Trained Kalman xhat delta - array
        Pminus: Trained Kalman variance delta - array
        K: Kalman gain - array
        observations: Array of observations, i.e. predicted secondary market prices.
        truthValues: Array of truth values, i.e. GPS location or secondary market prices. Can be left
                     blank if none exist
        paramExport: If True, the parameters xhat, P, xhatminus, Pminus, K are returned to use in the next predicted step.
    Example:
        xhat, P, xhatminus, Pminus, K = kalman_filter_predict(xhatInput, PInput,
                                                              xhatminusInput, PminusInput, KInput, observation,
                                                              paramExport=True)
    '''
    # initial parameters
    if isinstance(truthValues, np.ndarray):
        x = truthValues  # truth value
    z = observations  # observations (normal about x, sigma=0.1)
    Q = 1e-5  # process variance
    R = 0.5**2  # estimate of measurement variance, change to see effect
    # time update
    xhatminus = np.append(xhatminus, xhat[-1])
    Pminus = np.append(Pminus, P[-1] + Q)
    # measurement update
    K = np.append(K, Pminus[-1] / (Pminus[-1] + R))
    xhat = np.append(xhat, xhatminus[-1] + K[-1]*(z[-1] - xhatminus[-1]))
    P = np.append(P, (1 - K[-1])*Pminus[-1])
    if paramExport == True:
        return xhat, P, xhatminus, Pminus, K
    else:
        return xhat
```
## Process training
Fit the gamma distribution off of the training data.
```
timesteps = 24 * 365 # 24 hours a day * 365 days a year
# fit eth distribution
fit_shape, fit_loc, fit_scale = gamma.fit(train.Close.values)
sample = np.random.gamma(fit_shape, fit_scale, 100)[0]
sample
# generate 100 samples for initialization of Kalman
samples = np.random.gamma(fit_shape, fit_scale, 100)
plt.hist(samples)
plt.title('Histogram of Eth Price IID Samples')
# train kalman
xhat,P,xhatminus,Pminus,K = kalman_filter(observations=samples[0:-1],
initialValue=samples[-1],paramExport=True,plot=True)
```
## Validation
To test how our generator is working, we will make 100 predictions and compare to the test data.
```
eth_values = []
filter_values = {'xhat':xhat,'P':P,
'xhatminus':xhatminus,'Pminus':Pminus,
'K':K}
for i in range(0, 100):
    sample = np.random.gamma(fit_shape, fit_scale, 1)[0]
    eth_values.append(sample)
    xhat, P, xhatminus, Pminus, K = kalman_filter_predict(filter_values['xhat'],
                                                          filter_values['P'],
                                                          filter_values['xhatminus'],
                                                          filter_values['Pminus'],
                                                          filter_values['K'],
                                                          eth_values,
                                                          paramExport=True)
    filter_values = {'xhat': xhat, 'P': P,
                     'xhatminus': xhatminus, 'Pminus': Pminus,
                     'K': K}
plt.plot(xhat[100:], label = 'Predicted')
plt.plot(test.head(100)['Close'].values, label = 'Actual')
plt.xlabel('Predictions')
plt.ylabel('Eth value in USD')
# Set a title of the current axes.
plt.title('Predicted vs actual')
plt.legend()
# Display a figure.
plt.show()
```
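`mean_squared_error` and `sqrt` are imported at the top of the notebook but never called; a natural use is to score the predictions against the held-out closes. A numpy-only RMSE check on stand-in arrays (in the notebook these would be `xhat[100:]` and `test.head(100)['Close'].values`) might look like:

```python
import numpy as np
from math import sqrt

# Stand-in arrays for predicted vs. actual hourly closes.
predicted = np.array([100.0, 102.0, 101.0])
actual = np.array([99.0, 103.0, 102.0])

# Root mean squared error of the one-step predictions.
rmse = sqrt(np.mean((predicted - actual) ** 2))
print(rmse)  # -> 1.0
```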
## Generate data for simulation
```
samples = np.random.gamma(fit_shape, fit_scale, 100)
# train kalman
xhat,P,xhatminus,Pminus,K = kalman_filter(observations=samples[0:-1],
initialValue=samples[-1],paramExport=True,plot=True)
eth_values = []
filter_values = {'xhat':xhat,'P':P,
'xhatminus':xhatminus,'Pminus':Pminus,
'K':K}
for i in range(0, timesteps+1):
    sample = np.random.gamma(fit_shape, fit_scale, 1)[0]
    eth_values.append(sample)
    xhat, P, xhatminus, Pminus, K = kalman_filter_predict(filter_values['xhat'],
                                                          filter_values['P'],
                                                          filter_values['xhatminus'],
                                                          filter_values['Pminus'],
                                                          filter_values['K'],
                                                          eth_values,
                                                          paramExport=True)
    filter_values = {'xhat': xhat, 'P': P,
                     'xhatminus': xhatminus, 'Pminus': Pminus,
                     'K': K}
plt.hist(xhat[100:])
plt.title('Histogram of Eth Price IID Samples')
plt.plot(xhat[100:])
plt.title('Predicted Eth Prices')
timesteps
eth_prices = pd.DataFrame(eth_values,columns=['Eth_price'])
eth_prices.head()
# export data
eth_prices.to_csv('data/eth_prices.csv')
```
### Generate Monte Carlo runs
```
def generate_eth_timeseries(xhat, P, xhatminus, Pminus, K):
    eth_values = []
    filter_values = {'xhat': xhat, 'P': P,
                     'xhatminus': xhatminus, 'Pminus': Pminus,
                     'K': K}
    for i in range(0, timesteps+1):
        sample = np.random.gamma(fit_shape, fit_scale, 1)[0]
        eth_values.append(sample)
        xhat, P, xhatminus, Pminus, K = kalman_filter_predict(filter_values['xhat'],
                                                              filter_values['P'],
                                                              filter_values['xhatminus'],
                                                              filter_values['Pminus'],
                                                              filter_values['K'],
                                                              eth_values,
                                                              paramExport=True)
        filter_values = {'xhat': xhat, 'P': P,
                         'xhatminus': xhatminus, 'Pminus': Pminus,
                         'K': K}
    return eth_values, xhat, P, xhatminus, Pminus, K
monte_carlo_runs = 10
eth_values_mc = {}
for run in range(0, monte_carlo_runs):
    np.random.seed(seed=run)
    buffer_for_transcients = 100
    samples = np.random.gamma(fit_shape, fit_scale, timesteps + buffer_for_transcients)
    # train kalman
    xhat, P, xhatminus, Pminus, K = kalman_filter(observations=samples[0:-1],
                                                  initialValue=samples[-1], paramExport=True, plot=False)
    # eth_values, _, _, _, _, _ = generate_eth_timeseries(xhat, P, xhatminus, Pminus, K)
    eth_values_mc[run] = xhat[buffer_for_transcients:]
    eth_values, xhat, P, xhatminus, Pminus, K = generate_eth_timeseries(xhat, P, xhatminus, Pminus, K)
eth_values_mc_df = pd.DataFrame(eth_values_mc)
eth_values_mc_df.to_csv('data/eth_values_mc.csv')
eth_values_mc_df
```
## Implementation information
Below is an example of how to integrate these univariate time series datasets into the exogenous process section of a cadCAD model, assuming each timestep is an hour.
```
# partial_state_update_block.py
partial_state_update_block = {
    # Exogenous
    'Exogenous': {
        'policies': {
        },
        'variables': {
            'eth_price': eth_price_mech,
        }
    }
}
# exogenousProcesses.py
# import libraries
import pandas as pd
# import data
eth_prices = pd.read_csv('data/eth_prices.csv')
# mechanisms
def eth_price_mech(params, step, sL, s, _input):
    y = 'eth_price'
    timestep = s['timestep']
    x = eth_prices.Eth_price.values[timestep]
    return (y, x)
```
## Conclusion
In this notebook, we've read in hourly historical ETH data from Kaggle, defined functions for fitting and sampling from a gamma distribution, which is commonly used in random-walk calculations, and defined functions for a de-noising Kalman filter. We then validated these functions and predicted 100 timesteps for evaluation and demonstration purposes. Finally, we provided an overview of how to fit this code into cadCAD. Next steps could include:
* Refining the Kalman filter hyperparameters
* Refining the gamma prediction tuning parameter
* More thorough model validation
* Adding seasonality
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/geremarioMoment.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
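As a quick illustration of what `cv2.inRange()` computes — a binary mask marking pixels that fall inside a per-channel range — here is a numpy-only sketch on a tiny synthetic image (cv2 itself is not required to follow the idea; the image values are made up):

```python
import numpy as np

# 2x2 RGB image: two nearly-white pixels, two darker ones.
img = np.array([[[250, 250, 250], [120, 80, 60]],
                [[245, 248, 252], [30, 30, 30]]], dtype=np.uint8)

lower = np.array([200, 200, 200])
upper = np.array([255, 255, 255])
# cv2.inRange sets 255 where lower <= pixel <= upper holds on every channel, else 0.
mask = np.where(np.all((img >= lower) & (img <= upper), axis=-1), 255, 0).astype(np.uint8)
print(mask)  # [[255   0], [255   0]]
```

Such a mask can then be combined with the original image via `cv2.bitwise_and()` to keep only the selected (e.g. white lane) pixels.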
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    (assuming your grayscaled image is called 'gray')
    you should call plt.imshow(gray, cmap='gray')"""
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # Or use BGR2GRAY if you read an image with cv2.imread()
    # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
    return cv2.Canny(img, low_threshold, high_threshold)

def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
    """
    Applies an image mask.
    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    `vertices` should be a numpy array of integer points.
    """
    #defining a blank mask to start with
    mask = np.zeros_like(img)
    #defining a 3 channel or 1 channel color to fill the mask with depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255
    #filling pixels inside the polygon defined by "vertices" with the fill color
    cv2.fillPoly(mask, vertices, ignore_mask_color)
    #returning the image only where mask pixels are nonzero
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=8):
    """
    NOTE: this is the function you might want to use as a starting point once you want to
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).
    Think about things like separating line segments by their
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line. Then, you can average the position of each of
    the lines and extrapolate to the top and bottom of the lane.
    This function draws `lines` with `color` and `thickness`.
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below
    """
    for line in lines:
        for x1, y1, x2, y2 in line:
            print(x1, y1, x2, y2)  # my modification
            cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """
    `img` should be the output of a Canny transform.
    Returns an image with hough lines drawn.
    """
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
    draw_lines(line_img, lines)
    return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
    """
    `img` is the output of hough_lines(): an image with lines drawn on it.
    Should be a blank image (all black) with lines drawn on it.
    `initial_img` should be the image before any processing.
    The result image is computed as follows:
        initial_img * α + img * β + γ
    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, γ)
def average_lines(lines, img):
    '''
    img should be a regioned canny output
    '''
    if lines is None: return lines
    positive_slopes = []
    positive_xs = []
    positive_ys = []
    negative_slopes = []
    negative_xs = []
    negative_ys = []
    min_slope = .3
    max_slope = 1000
    for line in lines:
        for x1, y1, x2, y2 in line:
            slope = (y2-y1)/(x2-x1)
            if abs(slope) < min_slope or abs(slope) > max_slope: continue  # Filter out extreme slopes
            # We only need one point sample and the slope to determine the line equation
            positive_slopes.append(slope) if slope > 0 else negative_slopes.append(slope)
            positive_xs.append(x1) if slope > 0 else negative_xs.append(x1)
            positive_ys.append(y1) if slope > 0 else negative_ys.append(y1)
    # We need to calculate our region_top_y from the canny image so we know where to extend our lines to
    ysize, xsize = img.shape[0], img.shape[1]
    XX, YY = np.meshgrid(np.arange(0, xsize), np.arange(0, ysize))
    white = img == 255
    YY[~white] = ysize*2  # Large number because we will take the min
    region_top_y = np.amin(YY)
    new_lines = []
    if len(positive_slopes) > 0:
        m = np.mean(positive_slopes)
        avg_x = np.mean(positive_xs)
        avg_y = np.mean(positive_ys)
        b = avg_y - m*avg_x
        # We have m and b, so with a y we can get x = (y-b)/m
        x1 = int((region_top_y - b)/m)
        x2 = int((ysize - b)/m)
        new_lines.append([(x1, region_top_y, x2, ysize)])
    if len(negative_slopes) > 0:
        m = np.mean(negative_slopes)
        avg_x = np.mean(negative_xs)
        avg_y = np.mean(negative_ys)
        b = avg_y - m*avg_x
        # We have m and b, so with a y we can get x = (y-b)/m
        x1 = int((region_top_y - b)/m)
        x2 = int((ysize - b)/m)
        new_lines.append([(x1, region_top_y, x2, ysize)])
    return np.array(new_lines)
def avg_hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """
    `img` should be the output of a Canny transform.
    Returns an image with averaged hough lines drawn.
    """
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
    avg_lines = average_lines(lines, img)
    #draw_lines(line_img, lines)
    draw_lines(line_img, avg_lines)
    return line_img
```
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
os.listdir("test_images/")
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
```
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
# Canny edge detection requires a grayscale image:
imageGray = grayscale(image)
plt.imshow(imageGray, cmap='gray')
# Using Gaussian blur to "smooth" the image:
kernel_size = 5
blur_gray = gaussian_blur(imageGray, kernel_size)
plt.figure(figsize=(20,10))
plt.imshow(blur_gray, cmap='gray')
# Defining the Canny function thresholds:
low_threshold = 50
high_threshold = 120
edges = canny(blur_gray, low_threshold, high_threshold)
# Display the image:
plt.imshow(edges, cmap='Greys_r')
#Defining the area (mask) that is used to apply the functions:
imshape = image.shape
vertices = np.array([[(.12*imshape[1],0.52*imshape[0]),
(0.35*imshape[1], 0.45*imshape[0]),
(0.43*imshape[1], 0.45*imshape[0]),
(.85*imshape[1],0.50*imshape[0])]], dtype=np.int32)
# vertices = np.array([[(0+30,imshape[0]),
# (imshape[1]/2-20,imshape[0]/2+20),
# (imshape[1]/2+20, imshape[0]/2+20),
# (imshape[1]-30,imshape[0])]], dtype=np.int32)
#vertices = np.array([[(0,imshape[0]),(450, 290), (490, 290), (imshape[1],imshape[0])]], dtype=np.int32)
# Applying the mask to the image:
maskedEdges = region_of_interest(edges, vertices)
plt.figure(figsize=(10,5))
plt.imshow(maskedEdges, cmap='Greys_r')
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 10 # minimum number of votes (intersections in Hough grid cell)
min_line_len = 15 #minimum number of pixels making up a line
max_line_gap = 10 # maximum gap in pixels between connectable line segments
line_image = np.copy(image)*0 # creating a blank to draw lines on
# Run the Hough transform on the edge-detected image
# `hough_lines` returns an image with the detected line segments drawn
#lines = avg_hough_lines(maskedEdges, rho, theta, threshold, min_line_len, max_line_gap)
lines = hough_lines(maskedEdges, rho, theta, threshold, min_line_len, max_line_gap)
plt.imshow(lines)
final = weighted_img(lines, image, α=0.8, β=1., γ=0.)
plt.imshow(final)
```
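The cell above processes a single, already-loaded `image`. To produce the copies in `test_images_output` requested earlier, a small driver loop helps. The sketch below is one way to wire it up (the name `run_pipeline_on_dir` and the use of `matplotlib.image` for file I/O are illustrative assumptions; `pipeline` would be the processing function built above):

```python
import os
import matplotlib.image as mpimg

def run_pipeline_on_dir(pipeline, in_dir="test_images", out_dir="test_images_output"):
    """Apply `pipeline` (image array -> annotated image array) to every image
    in `in_dir` and save the result under the same file name in `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(in_dir)):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue  # skip non-image files
        img = mpimg.imread(os.path.join(in_dir, name))
        mpimg.imsave(os.path.join(out_dir, name), pipeline(img))
```

Wrapping the cell above in a `process_image(image)` style function and passing it as `pipeline` saves every annotated test image in one call.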
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    imageGray = grayscale(image)
    kernel_size = 5
    blur_gray = gaussian_blur(imageGray, kernel_size)
    # Defining the Canny function thresholds:
    low_threshold = 50
    high_threshold = 120
    edges = canny(blur_gray, low_threshold, high_threshold)
    imshape = image.shape
    vertices = np.array([[(.12*imshape[1], 0.52*imshape[0]),
                          (0.35*imshape[1], 0.44*imshape[0]),
                          (0.43*imshape[1], 0.44*imshape[0]),
                          (.85*imshape[1], 0.50*imshape[0])]], dtype=np.int32)
    maskedEdges = region_of_interest(edges, vertices)
    # Hough transform parameters
    rho = 2            # distance resolution in pixels of the Hough grid
    theta = np.pi/180  # angular resolution in radians of the Hough grid
    threshold = 10     # minimum number of votes (intersections in a Hough grid cell)
    min_line_len = 15  # minimum number of pixels making up a line
    max_line_gap = 10  # maximum gap in pixels between connectable line segments
    # Run the Hough transform on the edge-detected image;
    # `hough_lines` returns an image with the detected line segments drawn
    #lines = avg_hough_lines(maskedEdges, rho, theta, threshold, min_line_len, max_line_gap)
    lines = hough_lines(maskedEdges, rho, theta, threshold, min_line_len, max_line_gap)
    final = weighted_img(lines, image, α=0.8, β=1., γ=0.)
    return final
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/geremario.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/geremario.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="540" height="960" controls>
<source src="{0}">
</video>
""".format(white_output))
```
### Comparing PCA and t-SNE using scikit-learn
### Edgar Acuna
#### Dataset: MNIST
#### February 2021
```
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from sklearn.datasets import fetch_openml
#mnist = fetch_openml("MNIST original")
#X = mnist.data / 255.0
#y = mnist.target
X, y = fetch_openml('mnist_784', version=1, return_X_y=True)
X = X / 255.
# rescale the data, use the traditional train/test split
#X_train, X_test = X[:60000], X[60000:]
#y_train, y_test = y[:60000], y[60000:]
print(X.shape, y.shape)
X.head()
import pandas as pd
print(X.shape[1])
feat_cols = [ 'pixel'+str(i) for i in range(X.shape[1]) ]
df = pd.DataFrame(X,columns=feat_cols)
df['label'] = y
df['label'] = df['label'].apply(lambda i: str(i))
print('Size of the dataframe: {}'.format(df.shape))
rndperm = np.random.permutation(df.shape[0])
%matplotlib inline
import matplotlib.pyplot as plt
# Plot the graph
plt.gray()
fig = plt.figure(figsize=(16,7))
for i in range(0,30):
    ax = fig.add_subplot(3,10,i+1, title='Digit: ' + str(df.loc[rndperm[i],'label']))
    ax.matshow(df.loc[rndperm[i],feat_cols].values.reshape((28,28)).astype(float))
plt.show()
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
pca_result = pca.fit_transform(X)
df['pca-one'] = pca_result[:,0]
df['pca-two'] = pca_result[:,1]
df['pca-three'] = pca_result[:,2]
print('Explained variation per principal component: {}'.format(pca.explained_variance_ratio_))
from plotnine import *
chart = ggplot( df.loc[rndperm[:3000],:], aes(x='pca-one', y='pca-two', color='label') ) \
+ geom_point(size=3,alpha=0.6) \
+ ggtitle("First and Second Principal Components colored by digit")
chart
import time
from sklearn.manifold import TSNE
n_sne = 7000
time_start = time.time()
tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
tsne_results = tsne.fit_transform(X.iloc[rndperm[:n_sne]])
df_tsne = df.loc[rndperm[:n_sne],:].copy()
df_tsne['x-tsne'] = tsne_results[:,0]
df_tsne['y-tsne'] = tsne_results[:,1]
chart = ggplot( df_tsne, aes(x='x-tsne', y='y-tsne', color='label') ) \
+ geom_point(size=3,alpha=0.6) \
+ ggtitle("tSNE dimensions colored by digit")
chart
pca_50 = PCA(n_components=50)
pca_result_50 = pca_50.fit_transform(X)
print('Cumulative explained variation for 50 principal components: {}'.format(np.sum(pca_50.explained_variance_ratio_)))
n_sne = 10000
time_start = time.time()
tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
tsne_pca_results = tsne.fit_transform(pca_result_50[rndperm[:n_sne]])
print('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start))
df_tsne = None
df_tsne = df.loc[rndperm[:n_sne],:].copy()
df_tsne['x-tsne-pca'] = tsne_pca_results[:,0]
df_tsne['y-tsne-pca'] = tsne_pca_results[:,1]
chart = ggplot( df_tsne, aes(x='x-tsne-pca', y='y-tsne-pca', color='label') ) \
+ geom_point(size=3,alpha=0.6) \
+ ggtitle("tSNE dimensions colored by Digit (PCA)")
chart
```
```
# Empty Cell
# Execute this cell to process training data. Skip next cell.
import numpy as np
import csv
input_directory = "/home/haruki/Documents/2018-19_Autumn/CS_230/pedestrian_prediction/social-cnn-pytorch/data/train/raw/"
output_directory = "/home/haruki/Documents/2018-19_Autumn/CS_230/pedestrian_prediction/social-cnn-pytorch/data/train/processed/"
dataset_file_names = ["biwi/biwi_hotel.txt",
"crowds/arxiepiskopi1.txt",
"crowds/crowds_zara02.txt",
"crowds/crowds_zara03.txt",
"crowds/students001.txt",
"crowds/students003.txt",
"mot/PETS09-S2L1.txt",
"stanford/bookstore_0.txt",
"stanford/bookstore_1.txt",
"stanford/bookstore_2.txt",
"stanford/bookstore_3.txt",
"stanford/coupa_3.txt",
"stanford/deathCircle_0.txt",
"stanford/deathCircle_1.txt",
"stanford/deathCircle_2.txt",
"stanford/deathCircle_3.txt",
"stanford/deathCircle_4.txt",
"stanford/gates_0.txt",
"stanford/gates_1.txt",
"stanford/gates_3.txt",
"stanford/gates_4.txt",
"stanford/gates_5.txt",
"stanford/gates_6.txt",
"stanford/gates_7.txt",
"stanford/gates_8.txt",
"stanford/hyang_4.txt",
"stanford/hyang_5.txt",
"stanford/hyang_6.txt",
"stanford/hyang_7.txt",
"stanford/hyang_9.txt",
"stanford/nexus_0.txt",
"stanford/nexus_1.txt",
"stanford/nexus_2.txt",
"stanford/nexus_3.txt",
"stanford/nexus_4.txt",
"stanford/nexus_7.txt",
"stanford/nexus_8.txt",
"stanford/nexus_9.txt"]
# Store paths to each dataset.
dataset_paths = []
for ii in range(len(dataset_file_names)):
    filename = input_directory + dataset_file_names[ii]
    dataset_paths.append(filename)
# Find x_max, x_min, y_max, y_min across all the data.
x_max_global, x_min_global, y_max_global, y_min_global = -1000, 1000, -1000, 1000
for ii in range(len(dataset_file_names)):
    txtfile = open(dataset_paths[ii], 'r')
    lines = txtfile.read().splitlines()
    data = [line.split() for line in lines]
    data = np.transpose(sorted(data, key=lambda line: int(line[0])))
    data[[2,3]] = data[[3,2]]  # swap rows so row 2 holds y and row 3 holds x
    y = data[2,:].astype(float)
    y_min, y_max = min(y), max(y)
    if y_min < y_min_global:
        y_min_global = y_min
    if y_max > y_max_global:
        y_max_global = y_max
    x = data[3,:].astype(float)
    x_min, x_max = min(x), max(x)
    if x_min < x_min_global:
        x_min_global = x_min
    if x_max > x_max_global:
        x_max_global = x_max
scale_factor_x = (x_max_global - x_min_global)/(100 + 100)
scale_factor_y = (y_max_global - y_min_global)/(100 + 100)
print((y_min_global, y_max_global))
print(scale_factor_y)
print((x_min_global, x_max_global))
print(scale_factor_x)
# Rescale data and save.
output_paths = []
for ii in range(len(dataset_file_names)):
    txtfile = open(dataset_paths[ii], 'r')
    lines = txtfile.read().splitlines()
    data = [line.split() for line in lines]
    data = np.transpose(sorted(data, key=lambda line: int(line[0])))
    data[[2,3]] = data[[3,2]]
    y = data[2,:].astype(float)
    y = (100 + 100)*(y - y_min_global)/(y_max_global - y_min_global)
    y = y - 100.0
    y = y*(-1.0)  # flip the y-axis
    for jj in range(len(y)):
        if abs(y[jj]) < 0.0001:
            data[2,jj] = 0.0
        else:
            data[2,jj] = y[jj]
    x = data[3,:].astype(float)
    x = (100 + 100)*(x - x_min_global)/(x_max_global - x_min_global)
    x = x - 100.0
    for jj in range(len(x)):
        if abs(x[jj]) < 0.0001:
            data[3,jj] = 0.0
        else:
            data[3,jj] = x[jj]
    path_new = output_directory + dataset_file_names[ii][0:-4] + "/world_pos_normalized.csv"
    with open(path_new,'w') as out:
        csv_out = csv.writer(out)
        for row in data:
            csv_out.writerow(row)
    output_paths.append(path_new)
# Sanity-check the rescaled files: every coordinate should lie in [-100, 100].
for path in output_paths:
    print(path)
    with open(path,'r') as f:
        csv_in = csv.reader(f)
        data = []
        for row in csv_in:
            data.append(row)
    y_min, y_max = min(np.array(data[2], dtype=float)), max(np.array(data[2], dtype=float))
    x_min, x_max = min(np.array(data[3], dtype=float)), max(np.array(data[3], dtype=float))
    print(y_min, y_max)
    print(x_min, x_max)
```
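The normalization above is a per-axis min-max rescale of the world coordinates into [-100, 100], with the y-axis additionally flipped. Factored out as a stand-alone helper (the function name is illustrative, not part of the original pipeline), the transformation is:

```python
import numpy as np

def rescale_to_range(v, v_min, v_max, lo=-100.0, hi=100.0, flip=False):
    """Min-max rescale the values in v from [v_min, v_max] into [lo, hi];
    flip=True mirrors the axis, as done for the y coordinate above."""
    out = (hi - lo) * (v - v_min) / (v_max - v_min) + lo
    if flip:
        out = -out
    return out
```

Applied with `flip=True` for y and `flip=False` for x, this reproduces the arithmetic in the loop above.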
```
from decodes.core import *
from decodes.io.jupyter_out import JupyterOut
import math
out = JupyterOut.unit_square( )
```
# Geometric Properties of Curves
To move beyond visual evaluations of a curve we require means of quantification. Here we describe the essential mathematical quantities that capture the geometric features of curves most often used in design.
These include quantities such as ***length***, ***curvature***, and ***Frenet frame*** (which includes the ***unit tangent*** and ***normal*** vectors).
All of these are typically expressed using calculus and implemented using numerical methods. Here we will limit ourselves to a discrete setting, in which the curve is treated as a sampling of points, sometimes called a ***discrete approximation***.
An easy way to understand how this works is to look at how curve length is calculated.
<img src="http://geometric-computation-images.s3-website-us-east-1.amazonaws.com/1.11.P10.jpg" style="width: 200px; display: inline;">
### Curve Length
The simplest way of calculating the length of a curve is to sample the curve to produce a dense collection of points, and then sum up the distance between each pair of points.
In code, this would be expressed as such:
```
"""
Curve Length
"""
def appx_length(self):
    length = 0
    for ival in Interval() // resolution:
        length += self.eval(ival.a).distance(self.eval(ival.b))
    return length
```
The more samples we take, the more accurate our length calculation.
Take, for example, a series of length calculations for a parabolic curve, each with an increasing number of samples. The results are shown in the table below, which demonstrates two properties of this discrete approach to calculating curve length:
* the additional number of samples required to achieve an additional decimal point of precision increases exponentially
* the calculated length increases with additional samples, converging from below on the actual length of the Curve.
<table style="width:600px">
<tr>
<th colspan="2" style="text-align:left">*Curve Length Accuracy*</th>
</tr>
<tr>
<th style="text-align:left">*Number of Divisions*</th>
<th style="text-align:left">*Calculated Length*</th>
</tr>
<tr>
<td>2</td>
<td><span style="color:blue">4.57649122254</span></td>
</tr>
<tr>
<td>4</td>
<td>4.<span style="color:blue">62672348734</span></td>
</tr>
<tr>
<td>16</td>
<td>4.6<span style="color:blue">4552053219</span></td>
</tr>
<tr>
<td>32</td>
<td>4.64<span style="color:blue">646795934</span></td>
</tr>
<tr>
<td>128</td>
<td>4.646<span style="color:blue">76402483</span></td>
</tr>
<tr>
<td>512</td>
<td>4.6467<span style="color:blue">8252883</span></td>
</tr>
<tr>
<td>1084</td>
<td>4.64678<span style="color:blue">345403</span></td>
</tr>
</table>
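This convergence behavior is easy to reproduce outside of the decodes library. The sketch below sums chord lengths for the parabola $y = x^2$ on $[-1, 1]$ (an assumed example curve, not necessarily the one used for the table):

```python
import numpy as np

def appx_length(f, n):
    """Approximate the arc length of y = f(x) on [-1, 1] using n chord segments."""
    xs = np.linspace(-1.0, 1.0, n + 1)
    pts = np.column_stack([xs, f(xs)])
    # sum the straight-line distances between consecutive sample points
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

parabola = lambda x: x**2
for n in (2, 16, 128, 1024):
    print(n, appx_length(parabola, n))  # increases toward the true length
```

As in the table, each refinement increases the computed length, converging from below on the true arc length.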
## Calculating Properties by Nearest Neighbors Approximation
To illustrate and compute the remainder of the geometric properties, we build upon this basic logic of extracting information on neighboring points, usually in groups of three points at a time.
The properties of ***curvature***, ***unit tangent***, ***normal vector***, and ***Frenet frame*** all describe certain local qualities of a curve near a particular point. The discrete approximation of these local properties requires additional information besides the point itself; the addition of two of its ***nearest neighbors***, points that are just ahead and just behind our point of interest.
To ensure that the neighboring points are close, we set the step-size $\Delta t$ to be a fraction of `Curve.tol`.
```
"""
Calculation of Nearest Neighbors
Returns a Point on the Curve and two neighboring Points on either side at a distance related to
the tolerance of the Curve
"""
def nearest_neighbors(crv,t):
    pt = crv.eval(t)
    pt_plus = crv.eval(t + crv.tol_nudge)
    pt_minus = crv.eval(t - crv.tol_nudge)
    return pt, Vec(pt_plus,pt), Vec(pt,pt_minus)
```
<img src="http://geometric-computation-images.s3-website-us-east-1.amazonaws.com/1.11.P16.jpg" style="width: 200px; display: inline;">
### Tangent Vector
The ***tangent vector*** at a curve point measures the rate of change of the curve.
Geometrically, this is a vector that just touches the curve at the curve point. In a discrete setting, the vectors that connect the point to its nearest neighbors are approximately vectors tangent to the curve.
If all we want is the direction of the tangent, then normalizing any one of these vectors gives an approximation for the ***unit tangent vector***, denoted as $\vec{T}$
```
"""
Unit Tangent Approximation
"""
pt_t, vec_plus, vec_minus = nearest_neighbors(crv,t)
unit_tangent = vec_plus.normalized()
```
<img src="http://geometric-computation-images.s3-website-us-east-1.amazonaws.com/1.11.P07.jpg" style="width: 200px; display: inline;">
### Normal Vector
Unlike a tangent vector, ***there are many possible normal vectors*** to a curve. In fact, when the curve is in three dimensions, there is *an entire plane of eligible vectors*.
To define a *special normal vector* using a discrete approach, take three points – the point and its nearest neighbors – which determine a plane and a circle. These are called the ***osculating plane*** and ***osculating circle*** at the curve point.
The vector that *connects the curve point to the center of the circle* is perpendicular to the tangent vector, and is thus normal to the curve.
The unit vector in this normal direction is called the ***principal normal vector***, denoted as $\vec{N}$
<img src="http://geometric-computation-images.s3-website-us-east-1.amazonaws.com/1.11.P09.jpg" style="width: 200px; display: inline;">
### Curvature
The curvature at a curve point gives a numerical measurement of the turning of a curve, and is defined to be the reciprocal value $\kappa = 1/R$, where $R$ is the radius of the osculating circle at the curve point.
The smaller the circle, the tighter the turning and in turn, the bigger the curvature becomes.
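As an independent cross-check on the built-in method shown below, the curvature at a sample point can be computed directly from the point and its two nearest neighbors as the reciprocal of their circumradius, $\kappa = 4A/(abc)$, where $A$ is the triangle area and $a, b, c$ are its side lengths. This is the Menger curvature; the function below is an illustrative sketch, not part of the Curve class:

```python
import numpy as np

def menger_curvature(p1, p2, p3):
    """Discrete curvature at p2 given its two nearest neighbors:
    the reciprocal of the circumradius of triangle (p1, p2, p3)."""
    a = np.linalg.norm(p2 - p1)
    b = np.linalg.norm(p3 - p2)
    c = np.linalg.norm(p3 - p1)
    # twice the triangle area, via the 2D cross product
    area2 = abs((p2 - p1)[0]*(p3 - p1)[1] - (p2 - p1)[1]*(p3 - p1)[0])
    if area2 == 0:
        return 0.0  # collinear samples: locally straight, zero curvature
    return 2.0*area2/(a*b*c)  # kappa = 4*A/(a*b*c) with A = area2/2
```

Three points sampled from a circle of radius $R$ return exactly $1/R$, since their circumcircle is that circle.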
Calculating curvature using nearest neighbor vectors is a built-in method of the Curve class and amounts to finding the center and radius of the approximate osculating circle.
```
"""
Curvature From Vectors Method in Curve Class
Returns curvature and circle determined by point and nearest neighbors on either side
"""
def _curvature_from_vecs(pt, vec_pos, vec_neg, calc_circles=False):
    pt_plus = pt + vec_pos
    pt_minus = pt + vec_neg
    v1 = vec_pos
    v2 = vec_neg
    v3 = Vec(vec_pos - vec_neg)
    xl = v1.cross(v3).length
    if xl == 0: return 0, Ray(pt, vec_pos)
    rad_osc = 0.5*v1.length*v2.length*v3.length/xl
    if not calc_circles: return 1/rad_osc
    denom = 2*xl*xl
    a1 = v3.length*v3.length*v1.dot(v2)/denom
    a2 = v2.length*v2.length*v1.dot(v3)/denom
    a3 = v1.length*v1.length*(-v2.dot(v3))/denom
    center_osc = pt*a1 + pt_plus*a2 + pt_minus*a3
    pln_out = Plane(center_osc, v1.cross(v2))
    circ_out = Circle(pln_out, rad_osc)
    return (1/rad_osc, circ_out)
"""
Curvature
Evaluates curvature at a given t-value
"""
def deval_crv(self,t):
    pt, vec_pos, vec_neg = self._nudged(t)
    # nudge vectors to avoid zero curvature at endpoints
    if (t-self.tol_nudge <= self.domain.a):
        nhood = self._nudged(self.tol_nudge)
        vec_pos = nhood[1]
        vec_neg = nhood[2]
    if (t+self.tol_nudge >= self.domain.b):
        nhood = self._nudged(self.domain.b-self.tol_nudge)
        vec_pos = nhood[1]
        vec_neg = nhood[2]
    return Curve._curvature_from_vecs(pt,vec_pos,vec_neg)
"""
Curvature
Evaluates curvature at normalized t-value
"""
def eval_crv(self,t):
    return self.deval_crv(Interval.remap(t,Interval(),self.domain))
```
<img src="http://geometric-computation-images.s3-website-us-east-1.amazonaws.com/1.11.P18.jpg" style="width: 800px; display: inline;">
### Frenet Frame
The Frenet frame is linked to the shape of the curve, and describes how it turns and twists in space. It is composed of a set of three orthonormal vectors.
We have already seen two of these vectors: the ***unit tangent vector $\vec{T}$*** and the ***principal unit normal $\vec{N}$***.
The third vector in the set, called the ***binormal vector***, is a simple product of these first two. Denoted by $\vec{B}$, the binormal vector results from taking the cross product $\vec{T} \times \vec{N}$ .
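A minimal stand-alone sketch of this construction, assuming only raw point samples and NumPy (the function name is illustrative): the unit tangent comes from a central difference, the normal from the non-tangential part of the second difference, and the binormal from their cross product.

```python
import numpy as np

def frenet_frame(p_prev, p, p_next):
    """Approximate Frenet frame (T, N, B) at p from its two nearest neighbors."""
    T = p_next - p_prev
    T = T / np.linalg.norm(T)       # unit tangent: central difference
    acc = p_next - 2.0*p + p_prev   # discrete second difference
    N = acc - np.dot(acc, T)*T      # remove the tangential component
    N = N / np.linalg.norm(N)       # principal unit normal
    B = np.cross(T, N)              # binormal: T x N
    return T, N, B
```

By construction the three vectors are mutually orthogonal unit vectors, which is what makes them usable as a coordinate system along the curve.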
<img src="http://geometric-computation-images.s3-website-us-east-1.amazonaws.com/1.11.P11.jpg" style="width: 200px; display: inline;">
```
"""
Curve CS
Evaluates a CS aligned with the Frenet frame at a given t-value
"""
def deval_cs(self,t):
    pt, vec_pos, vec_neg = self._nudged(t)
    # nudge vectors to avoid zero curvature at endpoints
    if (t-self.tol_nudge <= self.domain.a):
        nhood = self._nudged(self.tol_nudge)
        vec_pos = nhood[1]
        vec_neg = nhood[2]
    if (t+self.tol_nudge >= self.domain.b):
        nhood = self._nudged(self.domain.b-self.tol_nudge)
        vec_pos = nhood[1]
        vec_neg = nhood[2]
    vec_T = self.tangent(t)
    k, circ = Curve._curvature_from_vecs(pt,vec_pos,vec_neg,calc_circles=True)
    center_osc = circ.plane.origin
    vec_N = Vec(center_osc-pt).normalized()
    vec_B = vec_T.cross(vec_N)
    return CS(pt, vec_N, vec_B)
"""
Curve CS
Evaluates a CS aligned with Frenet Frame at normalized t-value
"""
def eval_cs(self,t):
    return self.deval_cs(Interval.remap(t,Interval(),self.domain))
```
<img src="http://geometric-computation-images.s3-website-us-east-1.amazonaws.com/1.11.P12.jpg" style="width: 200px; display: inline;">
```
"""
Propagate Polygons Along Curve
Generates a collection of polygons using the built-in CS evaluated along curve
"""
pgons = []
for t in Interval().divide(pgon_count):
    cs = crv.eval_cs(t)
    rad_out = Interval(r1,r0*0.25).eval(t**(1/pow))
    rad_in = Interval(r0,r1*0.25).eval(t**(1/pow))
    pgon = PGon.doughnut(cs,Interval(rad_out,rad_in),res=edge_count)
    pgons.append(pgon)
"""
Variable Pipe Around Curve
Meshes Resulting Collection of Polygons
"""
for n in range(len(pgons)-1):
    pgon_a = pgons[n]
    pgon_b = pgons[n+1]
    off = len(pgon_a.pts)
    msh = Mesh()
    msh.append(pgon_a.pts)
    msh.append(pgon_b.pts)
    for e in range(off):
        if e != (off/2)-1:
            msh.add_face(e,e+1,e+off+1)
            msh.add_face(e+off+1,e+off,e)
```
<img src="http://geometric-computation-images.s3-website-us-east-1.amazonaws.com/1.11.P17.jpg" style="width: 800px; display: inline;">
# Notebook 11: Introduction to Deep Neural Networks with Keras
## Learning Goals
The goal of this notebook is to introduce deep neural networks (DNNs) using the high-level Keras package. The reader will become familiar with how to choose an architecture, cost function, and optimizer in Keras. We will also learn how to train neural networks.
# MNIST with Keras
We will once again work with the MNIST dataset of handwritten digits introduced in *Notebook 7: Logistic Regression (MNIST)*. The goal is to find a statistical model which recognizes and distinguishes between the ten handwritten digits (0-9).
The MNIST dataset comprises $70000$ handwritten digits, each of which comes in a square image, divided into a $28\times 28$ pixel grid. Every pixel can take on $256$ nuances of the gray color, interpolating between white and black, and hence each data point assumes any value in the set $\{0,1,\dots,255\}$. Since there are $10$ categories in the problem, corresponding to the ten digits, this problem represents a generic classification task.
In this Notebook, we show how to use the Keras python package to tackle the MNIST problem with the help of deep neural networks.
The following code is a slight modification of a Keras tutorial, see [https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py](https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py). We invite the reader to read Sec. IX of the review to acquire a broad understanding of what the separate parts of the code do.
```
from __future__ import print_function
import keras,sklearn
# suppress tensorflow compilation warnings
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
seed=0
#np.random.seed(seed) # fix random seed
tf.random.set_seed(seed)
import matplotlib.pyplot as plt
```
## Structure of the Procedure
Constructing a Deep Neural Network to solve ML problems is a multiple-stage process. Quite generally, one can identify the key steps as follows:
* ***step 1:*** Load and process the data
* ***step 2:*** Define the model and its architecture
* ***step 3:*** Choose the optimizer and the cost function
* ***step 4:*** Train the model
* ***step 5:*** Evaluate the model performance on the *unseen* test data
* ***step 6:*** Modify the hyperparameters to optimize performance for the specific data set
We would like to emphasize that, while it is always possible to view steps 1-5 as independent of the particular task we are trying to solve, it is only when they are put together in ***step 6*** that the real gain of using Deep Learning is revealed, compared to less sophisticated methods such as the regression models or bagging, described in Secs. VII and VIII of the review. With this remark in mind, we shall focus predominantly on steps 1-5 below. We show how one can use grid search methods to find optimal hyperparameters in ***step 6***.
### Step 1: Load and Process the Data
Keras can conveniently download the MNIST data from the web. All we need to do is import the `mnist` module and call its `load_data()` function, and it will create the training and test data sets for us.
The MNIST set has pre-defined test and training sets, in order to facilitate the comparison of the performance of different models on the data.
Once we have loaded the data, we need to format it in the correct shape. This differs from one package to the other and, as we see in the case of Keras, it can even be different depending on the backend used.
While choosing the correct `datatype` can help improve the computational speed, we emphasize the rescaling step, which is necessary to avoid large variations in the minimal and maximal possible values of each feature. In other words, we want to make sure a feature is not being over-represented just because it is "large".
Last, we cast the label vectors $y$ to binary class matrices (a.k.a. one-hot format), as explained in Sec. VII on SoftMax regression.
```
from keras.datasets import mnist
# input image dimensions
num_classes = 10 # 10 digits
img_rows, img_cols = 28, 28 # number of pixels
# the data, shuffled and split between train and test sets
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
# reshape data, depending on Keras backend
X_train = X_train.reshape(X_train.shape[0], img_rows*img_cols)
X_test = X_test.reshape(X_test.shape[0], img_rows*img_cols)
# cast floats to single precision
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# rescale data in interval [0,1]
X_train /= 255
X_test /= 255
# look at an example of data point
print('an example of a data point with label', Y_train[20])
plt.matshow(X_train[20,:].reshape(28,28),cmap='binary')
plt.show()
# convert class vectors to binary class matrices
Y_train = keras.utils.np_utils.to_categorical(Y_train, num_classes)
Y_test = keras.utils.np_utils.to_categorical(Y_test, num_classes)
print('X_train shape:', X_train.shape)
print('Y_train shape:', Y_train.shape)
print()
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
```
### Step 2: Define the Neural Net and its Architecture
We can now move on to construct our deep neural net. We shall use Keras's `Sequential()` class to instantiate a model, and will add different deep layers one by one.
At this stage, we refrain from using convolutional layers. This is done further below.
Let us create an instance of Keras' `Sequential()` class, called `model`. As the name suggests, this class allows us to build DNNs layer by layer. We use the `add()` method to attach layers to our model. For the purposes of our introductory example, it suffices to focus on `Dense` layers for simplicity. Every `Dense()` layer accepts as its first required argument an integer which specifies the number of neurons. The type of activation function for the layer is defined using the `activation` optional argument, the input of which is the name of the activation function in `string` format. Examples include `relu`, `tanh`, `elu`, `sigmoid`, `softmax`.
In order for our DNN to work properly, we have to make sure that the numbers of input and output neurons for each layer match. Therefore, we specify the shape of the input in the first layer of the model explicitly using the optional argument `input_shape=(N_features,)`. The sequential construction of the model then allows Keras to infer the correct input/output dimensions of all hidden layers automatically. Hence, we only need to specify the size of the softmax output layer to match the number of categories.
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
def create_DNN():
    # instantiate model
    model = Sequential()
    # add a dense all-to-all relu layer
    model.add(Dense(400, input_shape=(img_rows*img_cols,), activation='relu'))
    # add a dense all-to-all relu layer
    model.add(Dense(100, activation='relu'))
    # apply dropout with rate 0.5
    model.add(Dropout(0.5))
    # soft-max layer
    model.add(Dense(num_classes, activation='softmax'))
    return model
print('Model architecture created successfully!')
print('Model architecture created successfully!')
```
### Step 3: Choose the Optimizer and the Cost Function
Next, we choose the loss function according to which to train the DNN. For classification problems, this is the cross entropy, and since the output data was cast in categorical form, we choose the `categorical_crossentropy` defined in Keras' `losses` module. Depending on the problem of interest one can pick any other suitable loss function. To optimize the weights of the net, we choose SGD. This algorithm is already available to use under Keras' `optimizers` module, but we could use `Adam()` or any other built-in one as well. The parameters for the optimizer, such as `lr` (learning rate) or `momentum`, are passed using the corresponding optional arguments of the `SGD()` function. All available arguments can be found in Keras' online documentation at [https://keras.io/](https://keras.io/).

While the loss function and the optimizer are essential for the training procedure, to test the performance of the model one may want to look at a particular `metric` of performance. For instance, in categorical tasks one typically looks at the `accuracy`, which is defined as the percentage of correctly classified data points.

To complete the definition of our model, we use the `compile()` method, with optional arguments for the `optimizer`, `loss`, and the validation `metric` as follows:
```
def compile_model(optimizer=keras.optimizers.SGD()):
    # create the model
    model = create_DNN()
    # compile the model
    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=optimizer,
                  metrics=['accuracy'])
    return model

print('Model compiled successfully and ready to be trained.')
```
### Step 4: Train the model
We train our DNN in minibatches, the advantages of which were explained in Sec. IV.
Shuffling the training data between epochs improves the stability of training, and Keras' `fit()` shuffles by default. We then train over a number of training epochs.
Training the DNN is a one-liner using the `fit()` method of the `Sequential` class. The first two required arguments are the training input and output data. As optional arguments, we specify the mini-`batch_size`, the number of training `epochs`, and the test or `validation_data`. To monitor the training procedure for every epoch, we set `verbose=True`.
```
# training parameters
batch_size = 64
epochs = 10
# create the deep neural net
model_DNN=compile_model()
# train DNN and store training info in history
history=model_DNN.fit(X_train, Y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(X_test, Y_test))
```
### Step 5: Evaluate the Model Performance on the *Unseen* Test Data
Next, we evaluate the model and read off the loss and accuracy on the test data using the `evaluate()` method.
```
# evaluate model
score = model_DNN.evaluate(X_test, Y_test, verbose=1)
# print performance
print()
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# look into training history
# summarize history for accuracy
# note: newer Keras versions store these under 'accuracy'/'val_accuracy' instead of 'acc'/'val_acc'
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.ylabel('model accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('model loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()
```
### Step 6: Modify the Hyperparameters to Optimize Performance of the Model
Last, we show how to use the grid-search option of scikit-learn to optimize the hyperparameters of our model. An excellent blog post on this by Jason Brownlee can be found at [https://machinelearningmastery.com/grid-search-hyperparameters-deep-learning-models-python-keras/](https://machinelearningmastery.com/grid-search-hyperparameters-deep-learning-models-python-keras/).
```
from sklearn.model_selection import GridSearchCV
from keras.wrappers.scikit_learn import KerasClassifier
# call Keras scikit wrapper
model_gridsearch = KerasClassifier(build_fn=compile_model,
epochs=1,
batch_size=batch_size,
verbose=1)
# list of allowed optional arguments for the optimizer, see `compile_model()`
optimizer = ['SGD', 'RMSprop', 'Adagrad', 'Adadelta', 'Adam', 'Adamax', 'Nadam']
# define parameter dictionary
param_grid = dict(optimizer=optimizer)
# call scikit grid search module
grid = GridSearchCV(estimator=model_gridsearch, param_grid=param_grid, n_jobs=1, cv=4)
grid_result = grid.fit(X_train,Y_train)
# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))
```
## Creating Convolutional Neural Nets with Keras
We have so far considered each MNIST data sample as a flattened 1d vector of length $28\times 28=784$. This approach neglects any spatial structure in the image. On the other hand, we do know that in every one of the hand-written digits there are *local* spatial correlations between the pixels, which we would like to take advantage of to improve the accuracy of our classification model. To this end, we first need to reshape the training and test input data as follows:
```
# reshape data, depending on Keras backend
if keras.backend.image_data_format() == 'channels_first':
    X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
    X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
    X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
print('X_train shape:', X_train.shape)
print('Y_train shape:', Y_train.shape)
print()
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
```
One can ask whether a neural net can learn to recognize such local patterns. As we saw in Sec. X of the review, this can be achieved using convolutional layers. Luckily, all we need to do is change the architecture of our DNN, i.e., introduce small changes to the function `create_DNN()`. We can also merge **Step 2** and **Step 3** for convenience:
```
def create_CNN():
    # instantiate model
    model = Sequential()
    # add first convolutional layer with 10 filters (dimensionality of output space)
    model.add(Conv2D(10, kernel_size=(5, 5),
                     activation='relu',
                     input_shape=input_shape))
    # add 2D pooling layer
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # add second convolutional layer with 20 filters
    model.add(Conv2D(20, (5, 5), activation='relu'))
    # apply dropout with rate 0.5
    model.add(Dropout(0.5))
    # add 2D pooling layer
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # flatten data
    model.add(Flatten())
    # add a dense all-to-all relu layer
    model.add(Dense(20*4*4, activation='relu'))
    # apply dropout with rate 0.5
    model.add(Dropout(0.5))
    # soft-max layer
    model.add(Dense(num_classes, activation='softmax'))
    # compile the model
    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer='Adam',
                  metrics=['accuracy'])
    return model
```
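As a sanity check on the `Dense(20*4*4)` choice above, we can trace the spatial dimensions by hand: each 5×5 'valid' convolution shrinks the map by 4 pixels per dimension, and each 2×2 pooling halves it. A minimal sketch of that arithmetic, assuming the 28×28 MNIST inputs:

```
# Trace the feature-map size through the CNN above (28x28 MNIST input).
def conv_out(size, kernel):
    # 'valid' convolution: output shrinks by kernel - 1
    return size - kernel + 1

def pool_out(size, pool):
    # non-overlapping max pooling: integer division
    return size // pool

size = 28
size = conv_out(size, 5)   # Conv2D(10, (5, 5)) -> 24
size = pool_out(size, 2)   # MaxPooling2D((2, 2)) -> 12
size = conv_out(size, 5)   # Conv2D(20, (5, 5)) -> 8
size = pool_out(size, 2)   # MaxPooling2D((2, 2)) -> 4

flattened = 20 * size * size  # 20 channels of 4x4 maps
print(flattened)  # 320, matching Dense(20*4*4)
```

So the dense layer size is exactly the flattened feature volume at that point in the network.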
Training the deep conv net (**Step 4**) and evaluating its performance (**Step 5**) proceed exactly as before:
```
# training parameters
batch_size = 64
epochs = 10
# create the deep conv net
model_CNN=create_CNN()
# train CNN
model_CNN.fit(X_train, Y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(X_test, Y_test))
# evaluate model
score = model_CNN.evaluate(X_test, Y_test, verbose=1)
# print performance
print()
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
```
import tensorflow as tf
import cv2
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras import backend as K
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [5, 5]
import numpy as np
model=tf.keras.models.load_model("01densenet.h5")
model.summary()
def grad_cam(input_model, image, category_index, layer_name):
    """
    GradCAM method for visualizing input saliency.
    Args:
        input_model (keras.Model): model to compute the cam for
        image (tensor): input to model, shape (1, H, W, C)
        category_index (int): class to compute the cam with respect to
        layer_name (str): name of the relevant layer in the model
    Return:
        cam (ndarray): class activation map, shape (H, W)
    """
    cam = None
    # 1. Get placeholders for class output and last layer
    # Get the model's output
    output_with_batch_dim = input_model.output
    # Remove the batch dimension
    output_all_categories = output_with_batch_dim[0]
    # Retrieve only the category at the given category index
    y_c = output_all_categories[category_index]
    # Get the input model's layer specified by layer_name, and retrieve the layer's output tensor
    spatial_map_layer = input_model.get_layer(layer_name).output
    # 2. Get gradients of the class output with respect to the selected layer
    # K.gradients returns a list of length 1
    grads_l = K.gradients(y_c, spatial_map_layer)
    grads = grads_l[0]
    # 3. Build a function mapping the model input to the selected layer's activations and gradients
    spatial_map_and_gradient_function = K.function([input_model.input], [spatial_map_layer, grads])
    # Feed in the image to compute the spatial maps (selected layer) and the gradients
    spatial_map_all_dims, grads_val_all_dims = spatial_map_and_gradient_function([image])
    # Remove the batch dimension: shape goes from (B, H, W, C) to (H, W, C)
    spatial_map_val = spatial_map_all_dims[0]
    grads_val = grads_val_all_dims[0]
    # 4. Compute weights using global average pooling on the gradient
    # grads_val has shape (H, W, C); take the mean across height and width for each channel,
    # so that weights has shape (C,)
    weights = np.mean(grads_val, axis=(0, 1))
    # 5. Compute the dot product of the spatial map values with the weights
    cam = np.dot(spatial_map_val, weights)
    # Postprocessing
    H, W = image.shape[1], image.shape[2]
    cam = np.maximum(cam, 0)  # ReLU so we only keep positive importance
    cam = cv2.resize(cam, (W, H), interpolation=cv2.INTER_NEAREST)
    cam = cam / cam.max()
    return cam
def prepare2(ima):
    IMG_SIZE = 300
    ima = ima/255.0  # scale grayscale pixel values to [0, 1]
    new_array = cv2.resize(ima, (IMG_SIZE, IMG_SIZE))  # resize image to match model's expected sizing
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)
im = cv2.imread("/home/paa/COVID19-upgraded/Dataset/TRAIN/COVID19/61.jpeg",0)
im = cv2.resize(im,(300,300))
cv2.normalize(im, im, 0, 255, cv2.NORM_MINMAX)
im=prepare2(im)
cam = grad_cam(model, im, 0, 'conv5_block16_concat')
im_path=cv2.imread("/home/paa/COVID19-upgraded/Dataset/TRAIN/COVID19/61.jpeg")
im_path = cv2.resize(im_path,(300,300))
plt.imshow(im_path, cmap='gray')
plt.imshow(cam, cmap='magma', alpha=0.5)
plt.title("GradCAM")
plt.axis('off')
plt.savefig('GRADCAM.jpeg')
plt.show()
```
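Steps 4 and 5 inside `grad_cam` (global-average-pool the gradients, then take a weighted sum of the spatial maps) are just two NumPy operations. A sketch isolating them on synthetic arrays, with an assumed 7×7×16 feature map:

```
import numpy as np

rng = np.random.default_rng(1)
spatial_map_val = rng.standard_normal((7, 7, 16))   # (H, W, C) layer activations
grads_val = rng.standard_normal((7, 7, 16))         # (H, W, C) gradients

weights = np.mean(grads_val, axis=(0, 1))           # global average pool over H, W -> shape (C,)
cam = np.dot(spatial_map_val, weights)              # weighted channel sum -> shape (H, W)

print(weights.shape, cam.shape)                     # (16,) (7, 7)
```

Each output pixel of the cam is the activation vector at that location projected onto the pooled-gradient weights.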
```
!pip install transformers
!pip install torch==1.4.0
!pip install flask_ngrok
!pip install flask
!pip install sentencepiece
#final_summarizer
from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config
from flask import Flask, render_template, request,jsonify
import requests
from flask_ngrok import run_with_ngrok
# initialize the model architecture and weights
model = T5ForConditionalGeneration.from_pretrained("t5-base")
# initialize the model tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-base")
app = Flask(__name__,template_folder='template')
url ="https://spoonacular-recipe-food-nutrition-v1.p.rapidapi.com/"
headers = {
'x-rapidapi-host': "aylien-text.p.rapidapi.com",
'x-rapidapi-key': "e83a081f21msh7b1bc6b3ccc6f18p1302c9jsnaab174d2dcce",
}
# GET /recipes/mealplans/generate?timeFrame=day&targetCalories=2000&diet=vegetarian&exclude=shellfish%2C%20olives HTTP/1.1
# X-Rapidapi-Key: e83a081f21msh7b1bc6b3ccc6f18p1302c9jsnaab174d2dcce
# X-Rapidapi-Host: spoonacular-recipe-food-nutrition-v1.p.rapidapi.com
# Host: spoonacular-recipe-food-nutrition-v1.p.rapidapi.com
summarizer="/summarize"
label='/classify'
@app.route('/', methods=['POST'])
def get_summary():
    text = request.form.get('text')
    lines = list(text.split('\n'))
    cnt = 0
    abstractive_summary = ''
    if len(lines) > 200:
        # long input: summarize in windows of 200 lines and concatenate the partial summaries
        threshold_lines = 200
        while cnt < len(lines):
            sub_str = lines[cnt:min(cnt + threshold_lines, len(lines))]
            sub_str = "\n".join(sub_str)
            inputs = tokenizer.encode("summarize: " + sub_str, return_tensors="pt", max_length=512, truncation=True)
            outputs = model.generate(
                inputs,
                max_length=700,
                min_length=100,
                length_penalty=2.0,
                num_beams=4,
                early_stopping=True)
            abstractive_summary = abstractive_summary + tokenizer.decode(outputs[0])
            cnt = cnt + threshold_lines
    else:
        # short input: summarize the whole text in one pass
        print("abstractive only")
        inputs = tokenizer.encode("summarize: " + text, return_tensors="pt", max_length=512, truncation=True)
        outputs = model.generate(
            inputs,
            max_length=700,
            min_length=100,
            length_penalty=2.0,
            num_beams=4,
            early_stopping=True)
        abstractive_summary = tokenizer.decode(outputs[0])
    print("Response received")
    return jsonify(abstractive_summary)
run_with_ngrok(app)
app.run()
```
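The chunking logic inside `get_summary()` can be isolated for clarity: split the text into lines and walk over it in fixed-size windows. A standalone sketch (the 200-line threshold mirrors the handler above; `chunk_lines` is a hypothetical helper, not part of the app):

```
def chunk_lines(text, threshold=200):
    """Split text into windows of at most `threshold` lines each."""
    lines = text.split('\n')
    chunks = []
    cnt = 0
    while cnt < len(lines):
        sub = lines[cnt:min(cnt + threshold, len(lines))]
        chunks.append('\n'.join(sub))
        cnt += threshold
    return chunks

doc = '\n'.join(f'line {i}' for i in range(450))
chunks = chunk_lines(doc)
print([len(c.split('\n')) for c in chunks])  # [200, 200, 50]
```

Each window is then prefixed with `"summarize: "` and passed through the T5 model, and the partial summaries are concatenated.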
## OSR test for case 1a
> ### Model performance worsens when fitting an in-sample model with the exogenous "outdoor temperature"
Apparently the outdoor temperature does not fit the MW usage; potentially the outdoor temperature is not mediating the MW controls. It is as if the MW consumption controls were not directly affected by the external temperature. More analysis is necessary.
>>#### Test RMSE: 0.136
>>#### Test MAPE: inf
>>#### Test SMAPE: 6.462
>>#### Correlation: 0.936
>>#### R-squared: 0.875
> #### target feature: MW (thermic)
> #### exogenous feature: OUT_TEMP (outdoor temperature)
> ### statistical estimator: SARIMAX - Seasonal Auto-Regressive Integrated Moving Average with eXogenous regressors
> RENergetic Project: fitting a forecasting estimator to predict MW over time, using outdoor temperature as an exogenous feature.
> This model is based on model prototyping run in IBM Modeler 18.2 software on 07/05/21.
> Dataframe from: XXX building complex
> ### time window covered: 15 Aug 2020 -> 09 Nov 2020
> Other buildings in OXXX and dataframes are available (contact Dr D. Baranzini)
>> Coding by Dr Daniele Baranzini
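The `inf` MAPE reported above deserves a note: MAPE divides by the observed value, so any zero in the observed MW series drives the metric to infinity, which is why SMAPE is reported alongside it. A minimal illustration on synthetic data (not the project's series), redefining the same metric formulas used below:

```
import numpy as np

def mape(obs, pred):
    return np.mean(np.abs((obs - pred) / obs)) * 100

def smape(obs, pred):
    return 100/len(obs) * np.sum(2 * np.abs(pred - obs) / (np.abs(obs) + np.abs(pred)))

obs = np.array([0.0, 1.0, 2.0])   # one zero observation
pred = np.array([0.1, 1.1, 1.9])
with np.errstate(divide='ignore'):
    print(np.isinf(mape(obs, pred)))   # True: division by the zero observation
print(np.isfinite(smape(obs, pred)))   # True: SMAPE's denominator stays positive here
```

SMAPE only degenerates when an observation and its prediction are both exactly zero.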
```
# method to check working directory
import os
CURR_DIR = os.getcwd()
print(CURR_DIR)
%matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import norm
import matplotlib.pyplot as plt
from datetime import datetime # maybe necessary for future actions on dates and indexing
from math import sqrt
from sklearn.metrics import mean_squared_error
from statsmodels.tsa.statespace.sarimax import SARIMAX # toy-model for SARIMAX estimator
from random import random
import openpyxl
# read data and encode timestamp as index
df_base = pd.read_excel('Summer_Period_Dibit2_V04_REPLICA.xlsx')
df_base.index = df_base['timestamp']
ts1 = df_base[['OUT_TEMP', 'MW']] #the timestamp column is removed as it is doubled with index
endog = ts1['MW']
exog = ts1['OUT_TEMP']
ts1
```
### In-Sample fitting
```
# fit the complete model
model = SARIMAX(endog, exog, order=(0,1,0), seasonal_order=(1,0,1,24))
fit_res = model.fit(disp=False, maxiter=250)
print(fit_res.summary())
# In-sample forecast (baseline approach, no train/test split for backtesting)
yhat = fit_res.predict(start=0, end=2087, exog=exog)  # in-sample forecast with exogenous regressor
yhat  # predict() can do in- or out-of-sample forecasts
ts1['Forecast_SARIMAX']=yhat # appending forecast values to ts1 dataframe
ts1
# evaluate generalization performance of SARIMAX model above (In-sample forecast)
obs=ts1['MW']
pred=ts1['Forecast_SARIMAX']
# RMSE
rmse = sqrt(mean_squared_error(obs, pred)) # algo for RMSE
print('Test RMSE: %.3f' % rmse)
# MAPE - from 'https://vedexcel.com/how-to-calculate-mape-in-python/'
def mape(obs, pred):
    return np.mean(np.abs((obs - pred) / (obs)))*100  # algo for MAPE
result = mape(obs, pred)
print('Test MAPE: %.3f' % result)
#SMAPE - from 'https://vedexcel.com/how-to-calculate-smape-in-python/'
def smape(obs, pred):
    return 100/len(obs) * np.sum(2 * np.abs(pred - obs) / (np.abs(obs) + np.abs(pred)))  # algo for SMAPE
result = smape(obs,pred)
print('Test SMAPE: %.3f' % result)
# Pearson Correlation
corr = np.corrcoef(obs, pred)[0,1]
print('Correlation: %.3f' % corr)
# R2
r2_result = corr**2 # algo for R-squared
print('R-squared: %.3f' % r2_result)
# plot forecasts against actual outcomes
f = plt.figure()
f.set_figwidth(16)
f.set_figheight(9)
plt.plot(obs, label = 'MW observed')
plt.plot(pred, color='red', label = 'MW forecast')
plt.legend()
plt.show()
# end
```
<a href="https://colab.research.google.com/github/justin-hsieh/DS-Unit-2-Applied-Modeling/blob/master/assignment_applied_modeling_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science, Unit 2: Predictive Modeling
# Applied Modeling, Module 3
You will use your portfolio project dataset for all assignments this sprint.
## Assignment
Complete these tasks for your project, and document your work.
- [ ] Continue to iterate on your project: data cleaning, exploration, feature engineering, modeling.
- [ ] Make at least 1 partial dependence plot to explain your model.
- [ ] Share at least 1 visualization on Slack.
(If you have not yet completed an initial model for your portfolio project, then do today's assignment using your Tanzania Waterpumps model.)
## Stretch Goals
- [ ] Make multiple PDPs with 1 feature in isolation.
- [ ] Make multiple PDPs with 2 features in interaction.
- [ ] If you log-transformed your regression target, then convert your PDP back to original units.
- [ ] Use Plotly to make a 3D PDP.
- [ ] Make PDPs with categorical feature(s). Use Ordinal Encoder, outside of a pipeline, to encode your data first. If there is a natural ordering, then take the time to encode it that way, instead of random integers. Then use the encoded data with pdpbox. Get readable category names on your plot, instead of integer category codes.
## Links
- [Christoph Molnar: Interpretable Machine Learning — Partial Dependence Plots](https://christophm.github.io/interpretable-ml-book/pdp.html) + [animated explanation](https://twitter.com/ChristophMolnar/status/1066398522608635904)
- [Kaggle / Dan Becker: Machine Learning Explainability — Partial Dependence Plots](https://www.kaggle.com/dansbecker/partial-plots)
- [Plotly: 3D PDP example](https://plot.ly/scikit-learn/plot-partial-dependence/#partial-dependence-of-house-value-on-median-age-and-average-occupancy)
```
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
    # Install required python packages:
    # category_encoders, version >= 2.0
    !pip install --upgrade category_encoders pdpbox plotly
    # Pull files from Github repo
    os.chdir('/content')
    !git init .
    !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Applied-Modeling.git
    !git pull origin master
    # Change into directory for module
    os.chdir('module3')
import pandas as pd
import seaborn as sns
import category_encoders as ce
#import plotly.express as px
%matplotlib inline
import matplotlib.pyplot as plt
from pdpbox.pdp import pdp_isolate, pdp_plot
from sklearn.preprocessing import StandardScaler
#import eli5
#from eli5.sklearn import PermutationImportance
from sklearn.metrics import accuracy_score
from sklearn.metrics import mean_absolute_error,r2_score,mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
df = pd.read_csv('/content/openpowerlifting.csv')
drops = ['Squat4Kg', 'Bench4Kg', 'Deadlift4Kg','Country','Place','Squat1Kg',
'Squat2Kg','Squat3Kg','Bench1Kg','Bench2Kg','Bench3Kg','Deadlift1Kg',
'Deadlift2Kg','Deadlift3Kg']
df = df.drop(columns=drops)
df.dropna(inplace=True)
df.shape
X = df.drop(columns='Best3SquatKg')
y = df['Best3SquatKg']
Xtrain, X_test, ytrain,y_test = train_test_split(X,y, test_size=0.20,
random_state=42)
X_train, X_val, y_train,y_val = train_test_split(Xtrain,ytrain, test_size=0.25,
random_state=42)
X_train.shape, X_test.shape, X_val.shape
model = LinearRegression()
features = ['Sex','Equipment','Age','BodyweightKg','Best3BenchKg','Best3DeadliftKg']
X = X_train[features].replace({'M':0,'F':1,'Raw':2,'Single-ply':3,
'Wraps':4,'Multi-ply':5})
y = y_train
model.fit(X,y)
y_pred = model.predict(X_val[features].replace({'M':0,'F':1,'Raw':2,'Single-ply':3,
'Wraps':4,'Multi-ply':5}))
print('Validation R^2:', r2_score(y_val, y_pred))
print('Mean Absolute Error:', mean_absolute_error(y_val, y_pred))
lr = make_pipeline(
ce.OrdinalEncoder(), # Not ideal for Linear Regression
#StandardScaler(),
LinearRegression()
)
lr.fit(X_train, y_train)
feature = 'TotalKg'
isolated = pdp_isolate(
model=lr,
dataset=X_val,
model_features=X_val.columns,
feature=feature
)
pdp_plot(isolated, feature_name=feature, plot_lines=True,frac_to_plot=100);
```
**Notes for the docker container:**
Docker command to run this notebook locally:
Note: replace `dir_montar` with the path of the directory you want to map to `/datos` inside the docker container.
```
dir_montar=<full path on my machine to my directory> # put here the path of the directory to mount, for example:
#dir_montar=/Users/erick/midirectorio
```
Run:
```
$docker run --rm -v $dir_montar:/datos --name jupyterlab_prope_r_kernel_tidyverse -p 8888:8888 -d palmoreck/jupyterlab_prope_r_kernel_tidyverse:3.0.16
```
Go to `localhost:8888` and enter the jupyterlab password: `qwerty`
To stop the docker container:
```
docker stop jupyterlab_prope_r_kernel_tidyverse
```
Documentation for the docker image `palmoreck/jupyterlab_prope_r_kernel_tidyverse:3.0.16` is at this [link](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/prope_r_kernel_tidyverse).
---
To run this notebook use:
[docker](https://www.docker.com/) (installed **locally** with [Get docker](https://docs.docker.com/install/)) and run the commands at the top of this notebook **locally**.
Or click one of the following buttons:
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/palmoreck/dockerfiles-for-binder/jupyterlab_prope_r_kernel_tidyerse?urlpath=lab/tree/Propedeutico/Python/clases/3_algebra_lineal/3_minimos_cuadrados.ipynb) this option creates an individual machine on a Google server, clones the repository, and allows running the jupyter notebooks.
[![Run on Repl.it](https://repl.it/badge/github/palmoreck)](https://repl.it/languages/python3) this option does not clone the repository and does not run the jupyter notebooks, but it allows collaborative execution of Python instructions with [repl.it](https://repl.it/). Clicking it will create new ***repls*** under your ***repl.it*** user.
### The following is based on chapter 8 of the Linear Algebra book by S. Grossman and J. Flores and the book Matrix Analysis and Applied Linear Algebra by C. D. Meyer.
# General definitions
In what follows we assume a square matrix $A \in \mathbb{R}^{n \times n}$.
## Eigenvalue (characteristic value)
The number $\lambda$ (real or complex) is called an *eigenvalue* of $A$ if there exists $v \in \mathbb{C}^n \setminus \{0\}$ such that $Av = \lambda v$. The vector $v$ is called an eigenvector (characteristic vector) of $A$ corresponding to the eigenvalue $\lambda$.
---
**Observation**
* A matrix with real components can have eigenvalues and eigenvectors with values in $\mathbb{C}$ or $\mathbb{C}^n$, respectively.
* The set of eigenvalues is called the **spectrum of the matrix**.
* $A$ always has at least one eigenvalue with an associated eigenvector.
---
**Note**
If $A$ is symmetric then it has real eigenvalues, and moreover $A$ has real, linearly independent eigenvectors that form an orthonormal set. Then $A$ can be written as a product of three matrices, called the spectral decomposition: $$A = Q \Lambda Q^T$$ where $Q$ is an orthogonal matrix whose columns are eigenvectors of $A$ and $\Lambda$ is a diagonal matrix with the eigenvalues of $A$.
---
### In *NumPy*...
**With the [eig](https://numpy.org/doc/stable/reference/generated/numpy.linalg.eig.html) function we can obtain eigenvalues and eigenvectors.**
```
import numpy as np
import pprint
```
### Example
```
A=np.array([[10,-18],[6,-11]])
pprint.pprint(A)
evalue, evector = np.linalg.eig(A)
print('eigenvalues:')
pprint.pprint(evalue)
print('eigenvectors:')
pprint.pprint(evector)
```
We verify: $Av_1 = \lambda_1 v_1$, $Av_2 = \lambda_2 v_2$.
```
print('matrix * eigenvector:')
pprint.pprint(A@evector[:,0])
print('eigenvalue * eigenvector:')
pprint.pprint(evalue[0]*evector[:,0])
print('matrix * eigenvector:')
pprint.pprint(A@evector[:,1])
print('eigenvalue * eigenvector:')
pprint.pprint(evalue[1]*evector[:,1])
```
---
**Observation**
If $v$ is an eigenvector then $cv$ is also an eigenvector, where $c$ is any nonzero constant.
---
```
const = -2
const_evector = const*evector[:,0]
pprint.pprint(const_evector)
print('matrix * (constant * eigenvector):')
pprint.pprint(A@const_evector)
print('eigenvalue * (constant * eigenvector):')
pprint.pprint(evalue[0]*const_evector)
```
### Example 2
A matrix with real entries can have complex eigenvalues and eigenvectors.
```
A=np.array([[3,-5],[1,-1]])
pprint.pprint(A)
evalue, evector = np.linalg.eig(A)
print('eigenvalues:')
pprint.pprint(evalue)
print('eigenvectors:')
pprint.pprint(evector)
```
### Example 3
A symmetric matrix and its spectral decomposition. See [eigh](https://numpy.org/doc/stable/reference/generated/numpy.linalg.eigh.html#numpy.linalg.eigh).
```
A=np.array([[5,4,2],[4,5,2],[2,2,2]])
pprint.pprint(A)
evalue, evector = np.linalg.eigh(A)
print('eigenvalues:')
pprint.pprint(evalue)
print('eigenvectors:')
pprint.pprint(evector)
print('spectral decomposition:')
Lambda = np.diag(evalue)
Q = evector
print('QLambdaQ^T:')
pprint.pprint(Q@Lambda@Q.T)
print('A:')
pprint.pprint(A)
```
# Singular values and singular vectors of a matrix
In what follows we assume $A \in \mathbb{R}^{m \times n}$.
## Singular value
The number $\sigma$ is called a *singular value* of $A$ if $\sigma = \sqrt{\lambda_{A^TA}} = \sqrt{\lambda_{AA^T}}$, where $\lambda_{A^TA}$ and $\lambda_{AA^T}$ are eigenvalues of $A^TA$ and $AA^T$, respectively.
---
**Observation**
The definition is made over $A^TA$ or $AA^T$ because these matrices have the same nonzero spectrum, and their eigenvalues are real and nonnegative, so $\sigma \in \mathbb{R}$ and in fact $\sigma \geq 0$ (the square root is taken of a nonnegative eigenvalue).
---
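This claim can be verified numerically: for a rectangular matrix, the nonzero eigenvalues of $A^TA$ and $AA^T$ coincide, and the larger product carries extra zero eigenvalues. A quick sketch with an assumed $2\times 3$ example:

```
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 1.]])          # 2x3, so A A^T is 2x2 and A^T A is 3x3

eig_small = np.sort(np.linalg.eigvalsh(A @ A.T))   # 2 eigenvalues
eig_big = np.sort(np.linalg.eigvalsh(A.T @ A))     # 3 eigenvalues

# the nonzero eigenvalues coincide; A^T A has one extra zero eigenvalue
print(np.allclose(eig_big[1:], eig_small))   # True
print(abs(eig_big[0]) < 1e-10)               # True
```

The singular values of $A$ are then the square roots of the shared nonzero eigenvalues.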
## Left and right singular vectors
Associated with each singular value $\sigma$ there exist singular vectors $u, v$ satisfying the equality $$Av = \sigma u .$$ The vector $u$ is called a *left* singular vector and $v$ a *right* singular vector.
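The relation $Av = \sigma u$ can be checked directly from NumPy's SVD output, pairing the $i$-th columns of $U$ and $V$ with $\sigma_i$ (remembering that `svd` returns $V^T$, not $V$):

```
import numpy as np

A = np.array([[1., -1.],
              [1., -2.],
              [1., -1.]])

U, S, V_T = np.linalg.svd(A)      # note: svd returns V^T, not V
for i in range(len(S)):
    v_i = V_T[i, :]               # i-th right singular vector (row of V^T)
    u_i = U[:, i]                 # i-th left singular vector
    assert np.allclose(A @ v_i, S[i] * u_i)
print('A v_i = sigma_i u_i holds for all singular pairs')
```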
## Singular value decomposition (SVD)
If $A \in \mathbb{R}^{m \times n}$ then there exist orthogonal matrices $U \in \mathbb{R}^{m \times m}, V \in \mathbb{R}^{n \times n}$ such that $A = U\Sigma V^T$ with $\Sigma = \text{diag}(\sigma_1, \sigma_2, \dots, \sigma_p) \in \mathbb{R}^{m \times n}$, $p = \min\{m,n\}$ and $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_p \geq 0$.
---
**Observations**
* The notation $\sigma_1$ refers to the largest singular value of $A$, $\sigma_2$ to the second largest, and so on.
* The SVD defined above is called the *full SVD*; there is a **truncated** form in which $U \in \mathbb{R}^{m \times k}$, $V \in \mathbb{R}^{n \times k}$ and $\Sigma \in \mathbb{R}^{k \times k}$.
---
There are several properties of singular values and singular vectors; some are listed here:
* If $rank(A) = r$ then $r \leq p$ and $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_r > \sigma_{r+1} = \sigma_{r+2} = \dots = \sigma_p = 0$.
* If $rank(A) = r$ then $A = \displaystyle \sum_{i=1}^r \sigma_i u_i v_i^T$ with $u_i$ the $i$-th column of $U$ and $v_i$ the $i$-th column of $V$.
* Geometrically, the singular values of a matrix $A \in \mathbb{R}^{m \times n}$ are the lengths of the semi-axes of the hyperellipsoid $E$ defined by $E = \{Ax : ||x|| \leq 1, \text{ with } ||\cdot || \text{ the Euclidean norm}\}$, and the vectors $u_i$ are the directions of these semi-axes; the vectors $v_i$ have norm equal to $1$, so they lie on a circumference of radius $1$, and since $Av_i = \sigma_i u_i$, $A$ maps the vectors $v_i$ to the respective semi-axes $u_i$:
<img src="https://dl.dropboxusercontent.com/s/xbuepon355pralw/svd_2.jpg?dl=0" height="700" width="700">
* The SVD gives orthogonal bases for the $4$ fundamental subspaces of a matrix: the column space, the left null space, the null space, and the row space:
<img src="https://dl.dropboxusercontent.com/s/uo9s9f0nqi43s6d/svd_four_spaces_of_matrix.png?dl=0" height="600" width="600">
* If $t < r$ and $r = rank(A)$ then $A_t = \displaystyle \sum_{i=1}^t \sigma_i u_i v_i^T$ is, among all matrices with rank equal to $t$, the one *closest* to $A$ (closeness measured with a **matrix** norm).
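The last property can be illustrated by comparing Frobenius errors: by the Eckart–Young theorem, the error of the truncated sum $A_t$ equals $\sqrt{\sigma_{t+1}^2 + \dots + \sigma_r^2}$. A sketch with an assumed random $5\times 4$ matrix:

```
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))

U, S, V_T = np.linalg.svd(A)
t = 2
# best rank-t approximation: sum of the first t rank-one terms sigma_i u_i v_i^T
A_t = sum(S[i] * np.outer(U[:, i], V_T[i, :]) for i in range(t))

err = np.linalg.norm(A - A_t, 'fro')
expected = np.sqrt(np.sum(S[t:] ** 2))   # Eckart-Young: sqrt(sigma_{t+1}^2 + ...)
print(np.allclose(err, expected))        # True
```

Any other rank-$t$ matrix has a Frobenius error at least as large as `expected`.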
Applications of the SVD include:
* Image and signal processing.
* Recommender systems (Netflix).
* Least squares.
* Principal components.
* Image reconstruction.
## In *NumPy* ...
**With [numpy.linalg.svd](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.svd.html) we can compute the SVD of A. Note that it returns $V^T$, not $V$.**
### Example 1: the product $U \Sigma V^T$ equals $A$
```
A = np.array([[1,-1],[1,-2],[1,-1]])
print('A:')
pprint.pprint(A)
U,S,V_T = np.linalg.svd(A) #V^T is what we get, not V
print('U:')
pprint.pprint(U)
print('Sigma:')
pprint.pprint(S)
print('V^T:')
pprint.pprint(V_T)
print('dimensions of U:', U.shape)
print('dimensions of S:', S.shape)
print('dimensions of V:', V_T.T.shape)
```
Check:
```
r, =S.shape
print('U*S*V^T:')
pprint.pprint(U[:,:r]*S@V_T)
print('A:')
pprint.pprint(A)
```
### Example 2: the product $U \Sigma V^T$ equals $A$
**Truncated SVD.**
```
A = np.array([[1,1,-2],[1,-1,1]])
print('A:')
pprint.pprint(A)
U,S,V_T = np.linalg.svd(A,full_matrices=False) #we want a truncated version
#equal to rank of A
print('U:')
pprint.pprint(U)
print('Sigma:')
pprint.pprint(S)
print('V^T:') #V^T is what we get, not V
pprint.pprint(V_T)
print('dimensiones de U:', U.shape)
print('dimensiones de S:', S.shape)
print('dimensiones de V:', V_T.T.shape)
```
Check:
```
r, =S.shape
print('U*S*V^T:')
pprint.pprint(U*S@V_T)
print('A:')
pprint.pprint(A)
```
**Non-truncated SVD.**
```
A = np.array([[1,1,-2],[1,-1,1]])
print('A:')
pprint.pprint(A)
U,S,V_T = np.linalg.svd(A)
print('U:')
pprint.pprint(U)
print('Sigma:')
pprint.pprint(S)
print('V^T:') #V^T is what we get, not V
pprint.pprint(V_T)
print('dimensiones de U:', U.shape)
print('dimensiones de S:', S.shape)
print('dimensiones de V:', V_T.T.shape)
```
Check:
```
r, =S.shape
print('U*S*V^T:')
pprint.pprint(U*S@V_T[:r,:])
print('A:')
pprint.pprint(A)
```
### Example 3: the *rank* of $A$ equals the number of nonzero singular values
```
A = np.array([[1, 1],
[1, 2],
[1, 3],
[1, 4]])
np.linalg.matrix_rank(A)
U,S,V_T = np.linalg.svd(A) #V^T is what we get, not V
print(S)
tol = 1e-8
np.sum(S>tol)
pprint.pprint(S)
```
### Example 4: the rank of $A$ equals the number of nonzero singular values
```
A
new_column = A[:,0] + .5*A[:,1]
m,n = A.shape
new_column = new_column.reshape(m, 1)
A = np.hstack((A, new_column))
pprint.pprint(A)
np.linalg.matrix_rank(A)
U,S,V_T = np.linalg.svd(A) #V^T is what we get, not V
print(S)
np.sum(S>tol)
pprint.pprint(S)
pprint.pprint(U)
pprint.pprint(V_T)
```
Columns three and four of the matrix $U$ belong to the left null space of $A$:
```
A.T@U[:,2] #third column of U
A.T@U[:,3] #fourth column of U
```
Column three of the matrix $V$ belongs to the null space of $A$:
```
A@V_T[2, :] #third row of V_T, i.e. the third column of V, since numpy's svd returns V^T
```
# Application: image reconstruction
Among the applications of the SVD of a matrix is image reconstruction.
## In *NumPy* and *Matplotlib* ...
Note: *matplotlib only supports PNG images* natively, according to [this link](https://matplotlib.org/users/image_tutorial.html), but the [Pillow](https://pypi.org/project/Pillow/) package can be installed with pip to support other formats (`pip3 install --user pillow`).
```
import matplotlib.pyplot as plt
img=plt.imread('Kiara.png')
```
see: [matplotlib.pyplot.imread](https://matplotlib.org/3.1.0/api/_as_gen/matplotlib.pyplot.imread.html)
```
img #Matplotlib has rescaled the 8 bit data from each channel to floating point data between 0.0 and 1.0
```
The data have four channels:
```
img.shape
```
We will use only one of them:
```
img[:,:,0].shape
img = img[:,:,0]
plt.imshow(img, cmap='gray')
plt.title('Imagen en escala de grises')
plt.show()
```
### Applying the SVD to the image
```
U,S,V_T = np.linalg.svd(img, full_matrices=False)
print(U.shape)
print(S.shape)
print(V_T.shape)
img_svd = U*S@V_T
img_svd.shape
plt.imshow(img_svd, cmap='gray')
plt.title('Imagen utilizando SVD')
plt.show()
```
### Using only 1 left singular vector, 1 right singular vector, and 1 singular value
```
(U[:,0]*S[0]).shape
img_svd_1=np.outer((U[:,0]*S[0]),V_T[0,:])
img_svd_1.shape
plt.imshow(img_svd_1, cmap='gray')
plt.title('SVD truncada a 1')
plt.show()
```
### Using 2 left singular vectors, 2 right singular vectors, and 2 singular values
```
img_svd_2=np.outer((U[:,0]*S[0]),V_T[0,:]) + np.outer((U[:,1]*S[1]),V_T[1,:])
plt.imshow(img_svd_2, cmap='gray')
plt.title('SVD truncada a 2')
plt.show()
fig, axes = plt.subplots(1, 2, sharey=True) #two subplots will share values in vertical axis
fig.set_figheight(8)
fig.set_figwidth(8)
axes[0].imshow(img_svd_1, cmap="gray") #index 0 for plot at the left
axes[0].set_title("SVD truncada a 1")
axes[1].imshow(img_svd_2, cmap="gray") #index 1 for plot at the left
axes[1].set_title("SVD truncada a 2")
plt.show()
```
### Another example
```
img=plt.imread('hummingbird.png')
img #Matplotlib has rescaled the 8 bit data from each channel to floating point data between 0.0 and 1.0
```
The data have four channels:
```
img.shape
```
We will use only one of them:
```
img[:,:,0].shape
img = img[:,:,0]
plt.imshow(img, cmap='gray')
plt.title('Imagen en escala de grises')
plt.show()
```
### Applying the SVD to the image
```
U,S,V_T = np.linalg.svd(img, full_matrices=False)
print(U.shape)
print(S.shape)
print(V_T.shape)
img_svd = U*S@V_T
img_svd.shape
plt.imshow(img_svd, cmap='gray')
plt.title('Imagen utilizando SVD')
plt.show()
```
### Using only 1 left singular vector, 1 right singular vector, and 1 singular value
```
(U[:,0]*S[0]).shape
img_svd_1=np.outer((U[:,0]*S[0]),V_T[0,:])
img_svd_1.shape
plt.imshow(img_svd_1, cmap='gray')
plt.title('SVD truncada a 1')
plt.show()
```
### Using 2 left singular vectors, 2 right singular vectors, and 2 singular values
```
img_svd_2=np.outer((U[:,0]*S[0]),V_T[0,:]) + np.outer((U[:,1]*S[1]),V_T[1,:])
plt.imshow(img_svd_2, cmap='gray')
plt.title('SVD truncada a 2')
plt.show()
fig, axes = plt.subplots(1, 2, sharey=True) #two subplots will share values in vertical axis
fig.set_figheight(8)
fig.set_figwidth(8)
axes[0].imshow(img_svd_1, cmap="gray") #index 0 for plot at the left
axes[0].set_title("SVD truncada a 1")
axes[1].imshow(img_svd_2, cmap="gray") #index 1 for plot at the left
axes[1].set_title("SVD truncada a 2")
plt.show()
```
### Using 10, 20, 30, 40 and 50 left singular vectors, right singular vectors, and singular values
**(Homework) Exercise 1: solve this case with an image of your choice (if its dimensions allow it) in a Jupyter notebook.**
**(Homework) Exercise 2: with an image of your choice (it may be the same as in Exercise 1), choose a number of left singular vectors, right singular vectors, and singular values so that the image looks "good".**
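As a hedged sketch of the exercise, the loop below sweeps those truncation ranks over a synthetic low-rank-plus-noise matrix standing in for an image (the actual image file is your choice):

```python
import numpy as np

# Sketch for the exercise: a synthetic (approximately) rank-10 matrix plus
# noise plays the role of the image; the relative error drops as k grows.
rng = np.random.default_rng(1)
img = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 150))
img += 0.01 * rng.standard_normal(img.shape)

U, S, V_T = np.linalg.svd(img, full_matrices=False)

errs = []
for k in (10, 20, 30, 40, 50):
    img_k = U[:, :k] * S[:k] @ V_T[:k, :]   # rank-k reconstruction
    errs.append(np.linalg.norm(img - img_k) / np.linalg.norm(img))
    print(k, errs[-1])
```

For a real image, replace `img` with a single channel from `plt.imread` as done above.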
## Compute corrected sea surface heights & anomalies from Jason-3 ##
**Import libraries**
```
import os
import numpy as np
from netCDF4 import Dataset
import matplotlib
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import dates
%matplotlib inline
from mpl_toolkits.basemap import Basemap
plt.rcParams["figure.figsize"] = (16,10)
plt.ioff()
input_root = '../data/'
input_path = ''
input_file = 'JA3_GPN_2PdP050_126_20170622_042327_20170622_051940.nc'
my_file = os.path.join(input_root,input_path,input_file)
nc = Dataset(my_file, 'r')
```
**Define the variables to use**<br>
We have to define the variables available within the file that we wish to use - here to compute corrected SSH and SLA<br>
Corrected sea surface height =
altitude - range - atmospheric propagation corrections (dry & wet troposphere, and the ionosphere: electrons delay the radar wave propagation) - tides (ocean, solid earth and pole) - atmospheric loading (pressure & high-frequency winds) - sea state bias (NB. ALL corrections are subtracted from the raw SSH).<br>
Then, depending on the reference you want to work with respect to, you can also subtract either the mean sea surface (giving SLA) or the geoid (giving ADT), or nothing (SSH with respect to the ellipsoid). For ADT, you can also add the mean topography instead of subtracting the geoid.
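The recipe above can be written compactly as a function; the names below mirror the netCDF variables used later, but the arrays here are tiny synthetic placeholders, not real altimetry data:

```python
import numpy as np

# Hedged sketch of the correction recipe; names mirror the netCDF fields
# read below, but the values are synthetic placeholders.
def corrected_sla(alt, range_ku, propagation, tides, atm_loading, ssb, mss):
    """SLA = (altitude - range) - propagation - tides - atm. loading
             - sea state bias - mean sea surface."""
    return alt - range_ku - propagation - tides - atm_loading - ssb - mss

alt = np.array([1347000.0, 1347001.0])        # satellite altitude (m)
range_ku = np.array([1346950.0, 1346949.5])   # measured range (m)

# with all corrections and references set to zero, SLA reduces to alt - range
sla = corrected_sla(alt, range_ku, 0.0, 0.0, 0.0, 0.0, 0.0)
print(sla)  # [50.  51.5]
```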
```
#below we define the parameters we will need to compute a sea level anomaly, edit it from erroneous data
lat = nc.variables['lat'][:]
lon = nc.variables['lon'][:]
range_ku = nc.variables['range_ku'][:]
alt = nc.variables['alt'][:]
iono_corr_alt_ku = nc.variables['iono_corr_alt_ku'][:]
model_dry_tropo_corr = nc.variables['model_dry_tropo_corr'][:]
rad_wet_tropo_corr = nc.variables['rad_wet_tropo_corr'][:]
ocean_tide_sol1 = nc.variables['ocean_tide_sol1'][:]
solid_earth_tide = nc.variables['solid_earth_tide'][:]
pole_tide = nc.variables['pole_tide'][:]
hf_fluctuations_corr = nc.variables['hf_fluctuations_corr'][:]
inv_bar_corr = nc.variables['inv_bar_corr'][:]
sea_state_bias_ku = nc.variables['sea_state_bias_ku'][:]
geoid = nc.variables['geoid'][:]
mean_sea_surface = nc.variables['mean_sea_surface'][:]
mean_topography = nc.variables['mean_topography'] [:]
#visualize the variables above
fig = plt.figure()
plt.ylabel('correction value (in m)')
plt.xlabel('Latitude')
plt.plot(lat, iono_corr_alt_ku)
#plt.plot(lat, model_dry_tropo_corr)
#plt.plot(lat, rad_wet_tropo_corr)
#plt.plot(lat, ocean_tide_sol1)
#plt.plot(lat, solid_earth_tide)
#plt.plot(lat, pole_tide)
#plt.plot(lat, hf_fluctuations_corr)
#plt.plot(lat, inv_bar_corr)
#plt.plot(lat, sea_state_bias_ku)
plt.show()
# here are intermediate computations of ssh and sla
# usually, only the full corrected ones are used
raw_ssh = alt - range_ku
raw_sla = (alt - range_ku) - mean_sea_surface
sla_ionocorr = ((alt - range_ku) - mean_sea_surface) - iono_corr_alt_ku
sla_iono_drytropocorr = sla_ionocorr - model_dry_tropo_corr
sla_iono_radtropocorr = sla_iono_drytropocorr - rad_wet_tropo_corr
sla_iono_tropo_tides_corr = sla_iono_radtropocorr - ocean_tide_sol1 - solid_earth_tide - pole_tide
tides = ocean_tide_sol1 + solid_earth_tide + pole_tide # total tide correction (each term is subtracted above)
sla_iono_tropo_tides_ssb_corr = sla_iono_tropo_tides_corr - sea_state_bias_ku
sla_fullcorr = sla_iono_tropo_tides_ssb_corr - (hf_fluctuations_corr + inv_bar_corr)
dynatmcorr = hf_fluctuations_corr + inv_bar_corr
adt = sla_fullcorr + mean_topography
#plot e.g. a "raw" version of SSH (un-corrected)
lat = nc.variables['lat'][:]
fig = plt.figure()
plt.plot(lat, raw_ssh)
plt.show()
```
Note the amplitude, and note also the very extreme data.
**Have a look at the SLA**
Note the same features: the amplitude, and the extreme data points.
```
#or corrected sla
lat = nc.variables['lat'][:]
fig = plt.figure()
plt.plot(lat, sla_fullcorr)
plt.show()
```
**Looking at a major current**
The graph above shows a whole track (half orbit); a smaller area can be easier to interpret. We will focus on the North Atlantic (Gulf Stream), i.e. restrict the graph to latitudes between 0 and 45°N.
You may need to adapt the axes to the area/date you are looking at. Zoom over another region and look at the extreme values.
```
# zoom on an area and adapt the scale
lat = nc.variables['lat'][:]
fig = plt.figure()
plt.plot(lat, raw_ssh)
# we will focus on the North Atlantic (Gulf Stream), lat between 0 and 45
# Activity
# modify the third and fourth values below to adapt the y scale (NB. we are working in meters)
plt.axis([0, 45, -60, -20])
plt.show()
```
**Tests on the variables over the Gulf Stream area**
Try and test the different variables defined above - corrections and intermediate SSH values.
Modify the min/max in y to better see the amplitude of the curves.
We end up with sla_fullcorr.
```
lat = nc.variables['lat'][:]
fig1 = plt.figure()
plt.plot(lat, iono_corr_alt_ku)
plt.title('iono_corr_alt_ku')
plt.axis([0, 45, -0.5, 0.5])
fig2 = plt.figure()
plt.plot(lat, sla_ionocorr)
plt.title('sla_ionocorr')
plt.axis([0, 45, -5, 5])
plt.show()
```
**Editing**
The data with extreme values do not seem relevant - sea level anomalies of more than 100 meters are not possible. Thus, to use the data, you need to remove the spurious points. Two approaches can be used: the provided quality flags, or physical thresholds (based on the limits of the physics or of the algorithms).
```
#remove spurious data by "editing"
surface_type = nc.variables['surface_type'][:]
range_numval_ku = nc.variables['range_numval_ku'][:]
range_rms_ku = nc.variables['range_rms_ku'][:]
swh_ku = nc.variables['swh_ku'][:]
sig0_ku = nc.variables['sig0_ku'][:]
wind_speed_alt = nc.variables['wind_speed_alt'][:]
off_nadir_angle_wf_ku = nc.variables['off_nadir_angle_wf_ku'][:]
sig0_numval_ku = nc.variables['sig0_numval_ku'][:]
sig0_rms_ku = nc.variables['sig0_rms_ku'][:]
mask = (surface_type == 0) & (range_numval_ku >= 10) & (range_rms_ku > 0) & (range_rms_ku < 0.2) & (model_dry_tropo_corr > -2.5) & (model_dry_tropo_corr < -1.9) & (rad_wet_tropo_corr > -0.500) & (rad_wet_tropo_corr < -0.001) & (iono_corr_alt_ku > -0.400) & (iono_corr_alt_ku < 0.040) & (sea_state_bias_ku > -0.500) & (sea_state_bias_ku < 0) & (ocean_tide_sol1 > -5) & (ocean_tide_sol1 < 5) & (solid_earth_tide > -1) & (solid_earth_tide < 1) & (pole_tide > -0.150) & (pole_tide < 0.150) & (swh_ku > 0) & (swh_ku < 11) & (sig0_ku > 7) & (sig0_ku < 30) & (wind_speed_alt > 0) & (wind_speed_alt < 30) & ((alt-range_ku) > -130) & ((alt-range_ku) < 100) & (off_nadir_angle_wf_ku > -0.2) & (off_nadir_angle_wf_ku < 0.16) & (( hf_fluctuations_corr + inv_bar_corr) > -2) & (( hf_fluctuations_corr + inv_bar_corr) < 2) & (sig0_numval_ku >= 10) & (sig0_rms_ku > 0) & (sig0_rms_ku < 1) & (sla_fullcorr > -2) & (sla_fullcorr < 2)
sshedited = raw_ssh[mask]
fig = plt.figure()
plt.plot(lat[mask], raw_ssh[mask])
plt.show()
#now, test the "adt" (ssh with respect to the geoid)
lat = nc.variables['lat'][:]
fig1 = plt.figure()
plt.plot(lat, adt)
plt.title('ADT')
plt.axis([0, 45, -2.5, 2.5])
mask = (surface_type == 0) & (model_dry_tropo_corr > -2.5) & (model_dry_tropo_corr < -1.9) & (rad_wet_tropo_corr > -0.500) & (rad_wet_tropo_corr < -0.001) & (iono_corr_alt_ku > -0.400) & (iono_corr_alt_ku < 0.040) & (sea_state_bias_ku > -0.500) & (sea_state_bias_ku < 0) & (ocean_tide_sol1 > -5) & (ocean_tide_sol1 < 5) & (solid_earth_tide > -1) & (solid_earth_tide < 1) & (pole_tide > -0.150) & (pole_tide < 0.150) & (swh_ku > 0) & (swh_ku < 11) & (sig0_ku > 7) & (sig0_ku < 30) & (wind_speed_alt > 0) & (wind_speed_alt < 30) & ((alt-range_ku) > -130) & ((alt-range_ku) < 100) & (( hf_fluctuations_corr + inv_bar_corr) > -2) & (( hf_fluctuations_corr + inv_bar_corr) < 2) & (sla_fullcorr > -2) & (sla_fullcorr < 2)
adtedited = adt[mask]
fig = plt.figure()
plt.plot(lat[mask], adt[mask])
plt.title('edited ADT')
plt.axis([0, 45, -2.5, 2.5])
plt.show()
```
# Implementation of Anomaly detection using Autoencoders
Dataset used here is Credit Card Fraud Detection from Kaggle.
### Import required libraries
```
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler,normalize, MinMaxScaler
from sklearn.metrics import confusion_matrix, recall_score, accuracy_score, precision_score
from keras import backend as K
print("-------------------------------------------")
print("GPU available: ", tf.test.is_gpu_available())
print("Keras backend: ", K.backend())
print("-------------------------------------------")
# Load layers from keras
from keras.layers import Dense, Input, Concatenate, Flatten, BatchNormalization, Dropout, LeakyReLU
from keras.models import Sequential, Model
from keras.losses import binary_crossentropy
from Disco_tf import distance_corr
from keras.optimizers import Adam
from sklearn.metrics import roc_auc_score
RANDOM_SEED = 2021
TEST_PCT = 0.3
LABELS = ["Normal","Fraud"]
# define new loss with distance decorrelation
def decorr(var_1, var_2, weights,kappa):
def loss(y_true, y_pred):
#return binary_crossentropy(y_true, y_pred) + distance_corr(var_1, var_2, weights)
#return distance_corr(var_1, var_2, weights)
return binary_crossentropy(y_true, y_pred) + kappa * distance_corr(var_1, var_2, weights)
#return binary_crossentropy(y_true, y_pred)
return loss
```
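The `distance_corr` term imported from `Disco_tf` penalizes statistical dependence between two variables. As a hedged illustration (not the library's actual code), the biased sample distance correlation it approximates can be written in plain numpy:

```python
import numpy as np

# Numpy sketch of the (biased) sample distance correlation; this is an
# illustration of the statistic, not Disco_tf's implementation.
def distance_correlation(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])    # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # double centering: subtract row/column means, add the grand mean
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
print(distance_correlation(x, x**2))                      # high: y depends on x
print(distance_correlation(x, rng.standard_normal(500)))  # small: independent
```

Unlike Pearson correlation, this is large even for the purely nonlinear dependence `y = x**2`, which is why it is useful as a decorrelation penalty.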
### Read the dataset
```
##########################################################
# ------------------------------------------------------ #
# ----------------------- LOADING ---------------------- #
# ------------------------------------------------------ #
##########################################################
# First, the model loads the background and signal data,
# then it drops the first row (the column names) in
# order to avoid NaN values in the array.
print(' ==== Commencing Initiation ====\n')
### Background
b_name='Input_Background_1.csv'
background = np.genfromtxt(b_name, delimiter=',')
background = background[1:,:]
print(" .Background Loaded..." )
print(" .Background shape: {}".format(background.shape))
### Signal
s_name='Input_Signal_1.csv'
signal = np.genfromtxt(s_name, delimiter=',')
signal = signal[1:,:]
print(" .Signal Loaded...")
print(" .Signal shape: {}\n".format(signal.shape))
### Selecting the most pertinent attributes
pertinent_features = [2,3,4,6,10,11,12,14,15,16,18,19,20]
background = background[:,pertinent_features]
signal = signal[:,pertinent_features]
##########################################################
# ------------------------------------------------------ #
# --------------------- INITIATION --------------------- #
# ------------------------------------------------------ #
##########################################################
# Number of events
total = 1000
# Percentage of background samples on the testing phase
background_percent = 0.99
# Percentage of samples on the training phase
test_size = 0.3
print('\n ==== Initiation Complete ====\n')
print('=*='*17 )
print(' ==== Commencing Data Processing ====')
# Percentage of background samples to divide the data-set
dat_set_percent = total/len(background)
# Reducing background samples
_,reduced_background = train_test_split(background, test_size=dat_set_percent)
# Inserting the correct number of signal events into the stream
n_signal_samples = int(len(reduced_background)*(1-background_percent))
_,reduced_background = train_test_split(reduced_background, test_size=background_percent)
_,reduced_signal = train_test_split(signal, test_size=n_signal_samples/len(signal))
# Concatenating Signal and the Background sub-sets
data = np.vstack((reduced_background,reduced_signal))
print(".formatted data shape: {}".format(data.shape))
print(".Background shape: {}".format(reduced_background.shape))
print(".Signal shape: {}".format(reduced_signal.shape))
# Normalize Data
print('.Normalizing Data')
data = normalize(data,norm='max',axis=0)
# Creating Labels
print('.Creating Labels')
labels =np.ones((len(data)))
labels[:len(reduced_background)] = 0
attributes = np.array(["px1","py1","pz1","E1","eta1","phi1","pt1","px2","py2","pz2","E2","eta2",
                       "phi2","pt2","Delta_R","M12","MET","S","C","HT","A"])
# keep only the names of the selected features, then append the label column
columns = np.append(attributes[pertinent_features], 'Class')
dataset = pd.DataFrame(np.hstack((data, labels.reshape(-1,1))), columns=columns)
```
# Exploratory Data Analysis
```
dataset.head()
#check for any nullvalues
print("Any nulls in the dataset ",dataset.isnull().values.any() )
print('-------')
print("No. of unique labels ", len(dataset['Class'].unique()))
print("Label values ",dataset.Class.unique())
#0 is for normal credit card transaction
#1 is for fraudulent credit card transaction
print('-------')
print("Break down of the Normal and Fraud Transactions")
print(pd.value_counts(dataset['Class'], sort = True) )
```
### Visualize the dataset
plotting the number of normal and fraud transactions in the dataset.
```
#Visualizing the imbalanced dataset
count_classes = pd.value_counts(dataset['Class'], sort = True)
count_classes.plot(kind = 'bar', rot=0)
plt.xticks(range(len(dataset['Class'].unique())), dataset.Class.unique())
plt.title("Frequency by observation number")
plt.xlabel("Class")
plt.ylabel("Number of Observations");
```
Visualizing a feature distribution for normal and fraud rows.
# Left for a future version: add a Bayesian approach here
```
#Save the normal and fradulent transactions in separate dataframe
normal_dataset = dataset[dataset.Class == 0]
fraud_dataset = dataset[dataset.Class == 1]
#Visualize a feature distribution for normal and fraudulent rows
#(this dataset has no 'Amount' column, so 'MET' is used as an example feature)
bins = np.linspace(0, 1, 100)
plt.hist(normal_dataset.MET, bins=bins, alpha=1, density=True, label='Normal')
plt.hist(fraud_dataset.MET, bins=bins, alpha=0.5, density=True, label='Fraud')
plt.legend(loc='upper right')
plt.title("Feature value vs percentage of rows")
plt.xlabel("MET (normalized)")
plt.ylabel("Percentage of rows");
plt.show()
```
### Create train and test dataset
Checking on the dataset
The last column in the dataset is our target variable.
```
def probability(df):
mu = np.mean(df, axis=0)
variance = np.mean((df - mu)**2, axis=0)
var_dia = np.diag(variance)
k = len(mu)
X = df - mu
p = 1/((2*np.pi)**(k/2)*(np.linalg.det(var_dia)**0.5))* np.exp(-0.5* np.sum(X @ np.linalg.pinv(var_dia) * X,axis=1))
return p
raw_data = dataset.values
# All columns except the last (the label) are the feature data
data = raw_data[:, 0:-1]
train_data, test_data, train_labels, test_labels = train_test_split(
data, labels, test_size=0.3, random_state=2021 ,stratify = labels)
p_train = probability(train_data)
p_test = probability(test_data)
```
> Use only normal transactions to train the Autoencoder.
Normal data has a value of 0 in the target variable. Using the target variable to create a normal and fraud dataset.
```
train_labels = train_labels.astype(bool)
test_labels = test_labels.astype(bool)
#creating normal and fraud datasets
normal_train_data = train_data[~train_labels]
normal_p_train = p_train[~train_labels]
normal_test_data = test_data[~test_labels]
normal_p_test = p_test[~test_labels]
fraud_train_data = train_data[train_labels]
fraud_p_train = p_train[train_labels]
fraud_test_data = test_data[test_labels]
fraud_p_test = p_test[test_labels]
print(" No. of records in Fraud Train Data=",len(fraud_train_data))
print(" No. of records in Normal Train data=",len(normal_train_data))
print(" No. of records in Fraud Test Data=",len(fraud_test_data))
print(" No. of records in Normal Test data=",len(normal_test_data))
```
### Set the training parameter values
```
nb_epoch = 150
batch_size = 200
input_dim = normal_train_data.shape[1]
encoding_dim = 6
hidden_dim_1 = int(encoding_dim / 2) #
hidden_dim_2=10
learning_rate = 1e-7
```
### Create the Autoencoder
The architecture of the autoencoder is shown below.

```
#input Layer
input_layer = tf.keras.layers.Input(shape=(input_dim, ))
#Encoder
encoder = tf.keras.layers.Dense(encoding_dim, activation="tanh",
activity_regularizer=tf.keras.regularizers.l2(learning_rate))(input_layer)
encoder=tf.keras.layers.Dropout(0.2)(encoder)
encoder = tf.keras.layers.Dense(hidden_dim_1, activation='relu')(encoder)
encoder = tf.keras.layers.Dense(hidden_dim_2, activation=tf.nn.leaky_relu)(encoder)
# Decoder
decoder = tf.keras.layers.Dense(hidden_dim_1, activation='relu')(encoder)
decoder=tf.keras.layers.Dropout(0.2)(decoder)
decoder = tf.keras.layers.Dense(encoding_dim, activation='relu')(decoder)
decoder = tf.keras.layers.Dense(input_dim, activation='tanh')(decoder)
#Autoencoder
autoencoder = tf.keras.Model(inputs=input_layer, outputs=decoder)
autoencoder.summary()
```
### Define the callbacks for checkpoints and early stopping
```
cp = tf.keras.callbacks.ModelCheckpoint(filepath="autoencoder_fraud.h5",
mode='min', monitor='val_loss', verbose=2, save_best_only=True)
# define our early stopping
early_stop = tf.keras.callbacks.EarlyStopping(
monitor='val_loss',
min_delta=0.0001,
patience=10,
verbose=1,
mode='min',
restore_best_weights=True
)
```
### Compile the Autoencoder
```
autoencoder.compile(metrics=['accuracy'],
loss='mean_squared_error',
optimizer='adam')
```
### Train the Autoencoder
```
history = autoencoder.fit(normal_train_data, normal_train_data,
epochs=nb_epoch,
batch_size=batch_size,
shuffle=True,
validation_data=(test_data, test_data),
verbose=1,
callbacks=[cp, early_stop]
).history
```
Plot training and test loss
```
plt.plot(history['loss'], linewidth=2, label='Train')
plt.plot(history['val_loss'], linewidth=2, label='Test')
plt.legend(loc='upper right')
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
#plt.ylim(ymin=0.70,ymax=1)
plt.show()
```
# Detect Anomalies on test data
> Anomalies are data points where the reconstruction loss is higher
To calculate the reconstruction loss on test data, predict the test data and calculate the mean square error between the test data and the reconstructed test data.
```
test_x_predictions = autoencoder.predict(test_data)
mse = np.mean(np.power(test_data - test_x_predictions, 2), axis=1)
error_df = pd.DataFrame({'Reconstruction_error': mse,
'True_class': test_labels})
```
Plotting the test data points and their respective reconstruction error sets a threshold value to visualize if the threshold value needs to be adjusted.
```
threshold_fixed = 0.04
groups = error_df.groupby('True_class')
fig, ax = plt.subplots()
for name, group in groups:
ax.plot(group.index, group.Reconstruction_error, marker='o', ms=3.5, linestyle='',
label= "Fraud" if name == 1 else "Normal")
ax.hlines(threshold_fixed, ax.get_xlim()[0], ax.get_xlim()[1], colors="r", zorder=100, label='Threshold')
ax.legend()
plt.title("Reconstruction error for normal and fraud data")
plt.ylabel("Reconstruction error")
plt.xlabel("Data point index")
plt.show()
```
Detect anomalies as points where the reconstruction loss is greater than a fixed threshold. Here we see that a threshold of about 0.04 is a reasonable choice.
### Evaluating the performance of the anomaly detection
```
threshold_fixed =0.04
pred_y = [1 if e > threshold_fixed else 0 for e in error_df.Reconstruction_error.values]
error_df['pred'] =pred_y
conf_matrix = confusion_matrix(error_df.True_class, pred_y)
plt.figure(figsize=(4, 4))
sns.heatmap(conf_matrix, xticklabels=LABELS, yticklabels=LABELS, annot=True, fmt="d");
plt.title("Confusion matrix")
plt.ylabel('True class')
plt.xlabel('Predicted class')
plt.show()
# print Accuracy, precision and recall
print(" Accuracy: ",accuracy_score(error_df['True_class'], error_df['pred']))
print(" Recall: ",recall_score(error_df['True_class'], error_df['pred']))
print(" Precision: ",precision_score(error_df['True_class'], error_df['pred']))
```
As our dataset is highly imbalanced, we see a high accuracy but low recall and precision.
To further improve precision and recall, one could add more relevant features, try a different autoencoder architecture, tune the hyperparameters, or use a different algorithm.
# Conclusion:
Autoencoder can be used as an anomaly detection algorithm when we have an unbalanced dataset where we have a lot of good examples and only a few anomalies. Autoencoders are trained to minimize reconstruction error. When we train the autoencoders on normal data or good data, we can hypothesize that the anomalies will have higher reconstruction errors than the good or normal data.
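This hypothesis can be illustrated without Keras: a linear autoencoder with a k-unit bottleneck learns the same subspace as rank-k PCA, so a small numpy sketch (with made-up data, purely for illustration) already shows an anomaly receiving a much larger reconstruction error than the normal points it was "trained" on:

```python
import numpy as np

# Minimal illustration: project onto the top principal direction of the
# NORMAL data (the linear-autoencoder bottleneck) and measure the error.
rng = np.random.default_rng(0)
normal = rng.standard_normal((1000, 2)) @ np.array([[3.0, 1.0], [0.0, 0.1]])  # thin 2-D cloud
mu = normal.mean(axis=0)
_, _, V_T = np.linalg.svd(normal - mu, full_matrices=False)
W = V_T[:1]                        # top-1 principal direction = "bottleneck"

def recon_error(x):
    z = (x - mu) @ W.T             # encode
    x_hat = z @ W + mu             # decode
    return np.sum((x - x_hat) ** 2, axis=-1)

typical = recon_error(normal).mean()
anomaly = recon_error(np.array([[0.0, 10.0]]))[0]   # point far off the subspace
print(typical, anomaly)            # the anomaly error is much larger
```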
```
# Setup network
# make inputs
jets = Input(shape=X_train[0].shape[1:])
f_jets = Flatten()(jets)
leps = Input(shape=X_train[1].shape[1:])
f_leps = Flatten()(leps)
met = Input(shape=X_train[2].shape[1:])
i = Concatenate(axis=-1)([f_jets, f_leps, met])
sample_weights = Input(shape=(1,))
#setup trainable layers
d1 = get_block(i, 1024)
d2 = get_block(d1, 1024)
d3 = get_block(d2, 512)
d4 = get_block(d3, 256)
d5 = get_block(d4, 128)
o = Dense(1, activation="sigmoid")(d5)
model = Model(inputs=[jets,leps,met, sample_weights], outputs=o)
model.summary()
# Compile model
opt = Adam(lr=0.001)
model.compile(optimizer=opt, loss=decorr(jets[:,0,0], o[:,0], sample_weights[:,0], 1.0)) # decorr requires kappa; 1.0 is an assumed value
#model.compile(optimizer=opt, loss="binary_crossentropy")
# Train model
model.fit(x=X_train, y=y_train, epochs=20, batch_size=10000, validation_split=0.1)
# Evaluate model
y_train_predict = model.predict(X_train, batch_size=10000)
y_test_predict = model.predict(X_test, batch_size=10000)
auc_train = roc_auc_score(y_train, y_train_predict)
auc_test = roc_auc_score(y_test, y_test_predict)
print("area under ROC curve (train sample): ", auc_train)
print("area under ROC curve (test sample): ", auc_test)
# plot correlation
x = X_test[0][:,0,0]
y = y_test_predict[:,0]
corr = np.corrcoef(x, y)
print("correlation ", corr[0][1])
fig = plt.figure()
plt.scatter(x,y)
plt.xlabel("leading jet pt")
plt.ylabel("classifier output")
plt.show()
#fig.savefig("/work/creissel/TTH/sw/CMSSW_9_4_9/src/TTH/DNN/DisCo/corr.png")
#input Layer
input_layer = Input(shape=(input_dim, ))
sample_weights = Input(shape=(1,))
#Encoder
encoder = Dense(encoding_dim, activation="sigmoid",
activity_regularizer=tf.keras.regularizers.l2(learning_rate))(input_layer)
encoder=tf.keras.layers.Dropout(0.2)(encoder)
encoder = tf.keras.layers.Dense(hidden_dim_1, activation='relu')(encoder)
encoder = tf.keras.layers.Dense(hidden_dim_2, activation=tf.nn.leaky_relu)(encoder)
# Decoder
decoder = tf.keras.layers.Dense(hidden_dim_1, activation='relu')(encoder)
decoder=tf.keras.layers.Dropout(0.2)(decoder)
decoder = tf.keras.layers.Dense(encoding_dim, activation='relu')(decoder)
decoder = tf.keras.layers.Dense(input_dim, activation='sigmoid')(decoder)
#Autoencoder
autoencoder = tf.keras.Model(inputs=[input_layer, sample_weights], outputs=decoder)
autoencoder.summary()
cp = tf.keras.callbacks.ModelCheckpoint(filepath="autoencoder_fraud.h5",
mode='min', monitor='val_loss', verbose=2, save_best_only=True)
# define our early stopping
early_stop = tf.keras.callbacks.EarlyStopping(
monitor='val_loss',
min_delta=0.0001,
patience=10,
verbose=1,
mode='min',
restore_best_weights=True
)
opt = Adam(lr=0.001)
autoencoder.compile(optimizer=opt, loss=decorr(input_layer, decoder, sample_weights,0.5))
# Train model
autoencoder.fit(x=normal_train_data, y=normal_train_data, epochs=20, batch_size=10000, validation_split=0.1)
history = autoencoder.fit([normal_train_data,normal_p_train], [normal_train_data,normal_p_train],
epochs=nb_epoch,
batch_size=batch_size,
shuffle=True,
validation_data=([test_data,p_test], [test_data,p_test]),
verbose=1,
).history
# load and split dataset
input_dir = "/work/creissel/DATA/binaryclassifier"
allX = { feat : np.load(input_dir+'/%s.npy' % feat) for feat in ["jets","leps","met"] }
X = list(allX.values())
y = np.load(input_dir+'/target.npy')
from sklearn.model_selection import train_test_split
split = train_test_split(*X,y , test_size=0.1, random_state=42)
train = [ split[ix] for ix in range(0,len(split),2) ]
test = [ split[ix] for ix in range(1,len(split),2) ]
X_train, y_train = train[0:3], train[-1]
X_test, y_test = test[0:3], test[-1]
X_train.append(np.ones(len(y_train)))
X_test.append(np.ones(len(y_test))) # weights must match the test set size
# load and split dataset
allX = { feat : np.genfromtxt('%s.csv' % feat, delimiter=',') for feat in ["Input_Background_1",
"Input_Signal_1"]}
type(allX)
allX
X = list(allX.values())
y = np.load(input_dir+'/target.npy')
from sklearn.model_selection import train_test_split
split = train_test_split(*X,y , test_size=0.1, random_state=42)
train = [ split[ix] for ix in range(0,len(split),2) ]
test = [ split[ix] for ix in range(1,len(split),2) ]
X_train, y_train = train[0:3], train[-1]
X_test, y_test = test[0:3], test[-1]
X_train.append(np.ones(len(y_train)))
X_test.append(np.ones(len(y_test))) # weights must match the test set size
```
# Tutorial on connecting DeepPavlov bot to Yandex.Alice
In this tutorial we show how to develop a simple bot and connect it to Yandex.Alice. Our bot is able to greet, say goodbye, and answer questions based on an FAQ list. In our case we take typical questions of MIPT entrant as an example. This list consists of 15 questions with corresponding answers with manually generated 4-6 paraphrases for each question. More details on how to build bots with DeepPavlov can be found [here](https://medium.com/deeppavlov).
## Requirements
To connect your bot to Yandex.Alice you need to have a dedicated IP address. You also need to create an ssl certificate and a key (more details in corresponding section).
First, create a virtual env with python3.6 (`source activate py36`). Then install DeepPavlov:
```
!pip install deeppavlov
```
## Implementation
Create skills for greeting, saying goodbye, and fallback (in case a bot doesn't have an appropriate answer).
```
from deeppavlov.skills.pattern_matching_skill import PatternMatchingSkill
hello = PatternMatchingSkill(responses=['Привет!', 'Приветик', 'Здравствуйте', 'Привет, с наступающим Новым Годом!'],
patterns=['Привет', 'Здравствуйте', 'Добрый день'])
bye = PatternMatchingSkill(responses=['Пока!', 'До свидания, надеюсь смог вам помочь', 'C наступающим Новым Годом!'],
patterns=['До свидания', 'Пока', 'Спасибо за помощь'])
fallback = PatternMatchingSkill(responses=['Я не понял, но могу попробовать ответить на другой вопрос',
'Я не понял, задайте другой вопрос'])
```
Download '.csv' table with questions and answers
```
import pandas as pd
faq_table = pd.read_csv('http://files.deeppavlov.ai/faq/school/faq_school.csv')
# You may also specify path to a local file instead of a link
faq_table[:10]
```
There are several autoFAQ models in DeepPavlov. In this tutorial we use a tf-idf-based model.
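For intuition, tf-idf matching represents each stored question as a weighted bag of words and returns the answer whose question is closest to the user query by cosine similarity. A toy pure-Python sketch of that idea (hypothetical FAQ entries, not DeepPavlov's implementation):

```python
import math
from collections import Counter

# Toy FAQ: stored question -> answer (made-up examples).
faq = {
    "how do i apply": "Submit the application form online.",
    "what documents do i need": "Passport and school certificate.",
    "when is the deadline": "Applications close on July 20.",
}

def tfidf_vectors(docs):
    """Build tf-idf vectors (dicts term -> weight) for a list of token lists."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # +1 keeps common terms nonzero
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] / len(doc) * idf[t] for t in tf})
    return vecs, idf

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

questions = [q.split() for q in faq]
vecs, idf = tfidf_vectors(questions)

def answer(query):
    tf = Counter(query.split())
    total = sum(tf.values())
    qvec = {t: tf[t] / total * idf.get(t, 0.0) for t in tf}
    best = max(range(len(vecs)), key=lambda i: cosine(qvec, vecs[i]))
    return faq[list(faq)[best]]

print(answer("what is the application deadline"))
```

The real model adds n-grams, tokenization, and a trained vectorizer, but the nearest-question lookup is the same idea.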
```
from deeppavlov.core.common.file import read_json
from deeppavlov import configs
from deeppavlov import train_model
model_config = read_json(configs.faq.tfidf_autofaq)
```
You can specify the following fields in the config (for explanations look [here](http://docs.deeppavlov.ai/en/master/intro/config_description.html#variables)):
1. `['dataset_reader']['data_url']` - path/link to '.csv' table
2. `['dataset_reader']['x_col_name']` - name of the questions column in the table
3. `['dataset_reader']['y_col_name']` - name of the answers column in the table
4. `['metadata']['variables']['ROOT_PATH']` - path, where models will be saved
After setting these fields, train the model:
```
model_config['dataset_reader']['data_url'] = 'http://files.deeppavlov.ai/faq/school/faq_school.csv'
model_config['dataset_reader']['x_col_name'] = 'Question'
model_config['dataset_reader']['y_col_name'] = 'Answer'
model_config['metadata']['variables']['ROOT_PATH'] = './models/'
faq = train_model(model_config)
```
The last step is to bring all skills together using Agent.
```
from deeppavlov.agents.default_agent.default_agent import DefaultAgent
from deeppavlov.agents.processors.highest_confidence_selector import HighestConfidenceSelector
agent = DefaultAgent([hello, bye, faq, fallback], skills_selector=HighestConfidenceSelector())
```
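`HighestConfidenceSelector` simply returns, for each utterance, the response from the skill that reported the largest confidence. Conceptually (a plain-Python sketch, not DeepPavlov's actual code):

```python
def select_highest_confidence(candidates):
    """candidates: list of (response_text, confidence) pairs, one per skill."""
    response, confidence = max(candidates, key=lambda pair: pair[1])
    return response

# Each skill proposes its best response plus a confidence score;
# the selector keeps the most confident proposal.
print(select_highest_confidence([("Привет!", 0.5), ("Я не понял", 0.3)]))
```

This is why the fallback skill works: pattern-matching skills report zero confidence when nothing matches, so the fallback's small constant confidence wins only then.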
Before connecting to Alice, check that everything works correctly. The agent accepts a list of messages.
```
print(agent(['Привет', 'У меня болит живот', 'Пока', 'бла бла бла']))
```
In order to connect to Alice we need an ssl key and a certificate. Replace `<MACHINE_IP_ADDRESS>` with the actual IP address or domain name of your server.
```
!openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 -subj "/CN=<MACHINE_IP_ADDRESS>" -keyout my.key -out my.crt
from utils.alice import start_agent_server
start_agent_server(agent, host='0.0.0.0', port=5000, endpoint='/faq', ssl_key='my.key', ssl_cert='my.crt')
```
Now you can specify webhook (`<MACHINE_IP_ADDRESS>:5000/faq`) in [Yandex.Alice skill settings](https://dialogs.yandex.ru/developer/) and test it!
# Useful links
[DeepPavlov repository](https://github.com/deepmipt/DeepPavlov)
[DeepPavlov demo page](https://demo.ipavlov.ai)
[DeepPavlov documentation](https://docs.deeppavlov.ai)
[DeepPavlov blog](https://medium.com/deeppavlov)
| github_jupyter |
## Model 4.3: SEIR in an open population in Python/PyGOM
*Author*: Thomas Finnie @twomagpi
*Date*: 2018-10-03
```
# housekeeping
import numpy
from pygom import DeterministicOde, TransitionType, SimulateOde, Transition
```
### Build the model system
```
# first set up the states
states = ['Susceptible',
'Preinfectious',
'Infectious',
'Immune'
]
# now set up the parameters
parameters = ['beta',
'infectious_rate',
'rec_rate',
'total_popn',
'b_rate',
'm_rate']
# setup the ODEs
odes = [
Transition(origin='Susceptible',
equation='-beta*Susceptible*Infectious + total_popn*b_rate - Susceptible*m_rate',
transition_type=TransitionType.ODE),
Transition(origin='Preinfectious',
equation='beta*Susceptible*Infectious - Preinfectious*infectious_rate - Preinfectious*m_rate',
transition_type=TransitionType.ODE),
Transition(origin='Infectious',
equation='Preinfectious*infectious_rate - Infectious*rec_rate - Infectious*m_rate',
transition_type=TransitionType.ODE),
Transition(origin='Immune',
equation='Infectious*rec_rate - Immune*m_rate',
transition_type=TransitionType.ODE)
]
# now create the system
ode_obj = DeterministicOde(states,
parameters,
ode=odes)
ode_obj.print_ode()
```
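Written out, the transitions above define the open-population SEIR system (with $f$ = `infectious_rate`, $r$ = `rec_rate`, $b$ = `b_rate`, $m$ = `m_rate`, $N$ = `total_popn`):

$$
\begin{aligned}
\frac{dS}{dt} &= bN - \beta S I - mS \\
\frac{dE}{dt} &= \beta S I - fE - mE \\
\frac{dI}{dt} &= fE - rI - mI \\
\frac{dR}{dt} &= rI - mR
\end{aligned}
$$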
### Now set up the parameters
```
## Demography ##
# total population size
total_popn = 100000
# life expectancy in years
life_expectancy_yrs = 70
# per capita mortality rate
m_rate = 1/(life_expectancy_yrs*365)
# per capita birth rate
b_rate = m_rate
## Transmission and Infection parameters ##
# average pre-infectious period
preinfectious_period = 8
# average infectious period
infectious_period = 7
# R0
R0 = 13
# rate at which individuals become infectious
infectious_rate = 1/preinfectious_period
#rate at which individuals recover to become immune
rec_rate = 1/infectious_period
# rate at which two specific individuals come into effective contact per unit time
beta = R0/(total_popn*infectious_period)
prop_immune_0 = 0.0
## Inital Values ##
#initial number of immune individuals
Immune_0 = total_popn*prop_immune_0
# initial number of infectious individuals
Infectious_0 = 1
# initial number of susceptible individuals
Sus_0 = total_popn-Infectious_0 -Immune_0
# initial number of pre-infectious individuals
Preinfectious_0 = 0
```
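Since `beta` is derived from `R0`, a quick sanity check is to invert the relation $\beta = R_0 / (N \cdot D)$, where $D$ is the infectious period (a sketch repeating the values above):

```python
# Recompute beta from R0 and invert it, mirroring the parameter cell above.
total_popn = 100000
infectious_period = 7   # days
R0 = 13

beta = R0 / (total_popn * infectious_period)
R0_check = beta * total_popn * infectious_period

print(beta)      # per-pair effective contact rate
print(R0_check)  # should recover R0 = 13
```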
### Solve the model system
```
# Create the timeline
t = numpy.linspace(0, 36500, 1001)
# Set the inital values for the states
ode_obj.initial_values = ([Sus_0,
Preinfectious_0,
Infectious_0,
Immune_0],
t[0])
#set the values for the parameters
ode_obj.parameters = {'beta': beta,
'infectious_rate': infectious_rate,
'rec_rate': rec_rate,
'total_popn': total_popn,
'b_rate': b_rate,
'm_rate': m_rate
}
solution,output = ode_obj.integrate(t[1::], full_output=True)
ode_obj.plot()
# Now for a pretty graph
from bokeh.plotting import figure, show, output_notebook
from bokeh.palettes import Accent4
output_notebook()
colors = Accent4
p = figure(title="Measles Example",
y_axis_type="log"
)
for i in range(solution.shape[1]):
p.line(t,
solution[:,i],
line_color=colors[i],
legend=states[i],
line_width=2)
show(p)
```
| github_jupyter |
```
import stuett
from stuett.global_config import get_setting, setting_exists
import argparse
from pathlib import Path
from datetime import datetime
import plotly.graph_objects as go
import pandas as pd
import matplotlib.pyplot as plt
import utils
import numpy as np
# load data
temp_north = utils.load_data("MH30_temperature_rock_2017.csv")
sensor_idx = [1,3,4,5]
temp_north = temp_north[:,sensor_idx].data # take only 4 sensors to match south data, could do interpolation later
temp_south = utils.load_data("MH10_temperature_rock_2017.csv")
temp_south = temp_south[:,7:].data # only keep temperature data
temp_diff = temp_south - temp_north # north/south difference (assumed definition; used in the plots below)
rmeter = utils.load_data("MH15_radiometer__conv_2017.csv")
solar = rmeter[:,0].data
temp_r = rmeter[:,7].data
weather = utils.load_data("MH25_vaisalawxt520windpth_2017.csv")
wind = weather[:,4].data
hum = weather[:,8].data
temp = weather[:,6].data
timestamp = np.asarray([i for j in range(365) for i in range(24)])
rmeter.coords["name"]
np.mean(np.reshape(wind,(24,365)))
depth_north = [5,10,20,30,50,100]
depth_south = [10,35,60,85]
plt.figure()
vmin = -15
vmax = 10
plt.subplot(3,1,1)
plt.imshow(temp_north.transpose(),aspect='auto',vmin=vmin,vmax=vmax)
plt.colorbar()
plt.title('North')
plt.subplot(3,1,2)
plt.imshow(temp_south.transpose(),aspect='auto',vmin=vmin,vmax=vmax)
plt.colorbar()
plt.title('South')
plt.subplot(3,1,3)
plt.imshow(temp_diff.transpose(),aspect='auto',vmin=vmin,vmax=vmax)
plt.colorbar()
plt.title('Diff')
plt.tight_layout()
plt.show()
# relation between temperature difference and sunshine
plt.figure(figsize=[10, 3])
d = 3
plt.subplot(1,3,1)
plt.scatter(temp_north[:,d],solar,marker='.',s=1)
plt.title('North')
plt.subplot(1,3,2)
plt.scatter(temp_south[:,d],solar,marker='.',s=1)
plt.title('South')
plt.subplot(1,3,3)
plt.scatter(temp_diff[:,d],solar,marker='.',s=1)
plt.title('Diff')
plt.tight_layout()
plt.show()
plt.figure()
plt.subplot(2,1,1)
plt.plot(temp_north)
plt.ylim(-20,20)
#plt.xlim(1000,1023)
plt.subplot(2,1,2)
plt.plot(temp_south)
plt.ylim(-20,20)
#plt.xlim(1000,1023)
plt.show()
# calc the mean temperature according to daytime
mean_temp = np.zeros([24,4,3]) # hours, depths, n/s/diff
for hour in range(24):
for d in range(4):
mean_temp[hour,d,0] = np.nanmean(temp_north[timestamp==hour,d])
mean_temp[hour,d,1] = np.nanmean(temp_south[timestamp==hour,d])
mean_temp[hour,d,2] = np.nanmean(temp_diff[timestamp==hour,d])
# mean subtract for better visualization
for d in range(4):
for i in range(3):
mean_temp[:,d,i] = mean_temp[:,d,i] - np.nanmean(mean_temp[:,d,i])
plt.figure()
plt.plot(temp_north[1000:2000,0]/-np.max(np.abs(temp_north[1000:2000,0])))
plt.plot(temp_south[1000:2000,0]/np.max(np.abs(temp_south[1000:2000,0])))
plt.plot(solar[1000:2000]/max(solar))
plt.show()
# relation between temperature difference and daytime
plt.figure(figsize=[10, 3])
for i in range(3):
plt.subplot(1,3,i+1)
plt.imshow(mean_temp[:,:,i].transpose(),aspect='auto',vmin=-5,vmax=5)
plt.colorbar()
plt.tight_layout()
plt.show()
# quantify the volatility of temperatures over the day
# then look for relations between volatility and other factors such as
# sun / wind / snow / rain
ttn = np.reshape(temp_north,(24,365,4))
tts = np.reshape(temp_south,(24,365,4))
vol_n = np.nanstd(ttn,axis=0)
vol_s = np.nanstd(tts,axis=0)
plt.figure()
plt.imshow(ttn[:,:,0],aspect='auto')
sort_criterion = np.nanmean(np.reshape(solar,(24,365)),axis=0)
#sort_criterion = np.nanmean(np.reshape(temp,(24,365)),axis=0)
#sort_criterion = np.nanmean(np.reshape(hum,(24,365)),axis=0)
#sort_criterion = np.nanmean(np.reshape(wind,(24,365)),axis=0)
sortIdx = np.argsort(sort_criterion)
#sortIdx = range(365)
plt.figure(figsize=[10,6])
plt.subplot(2,2,1)
plt.imshow(vol_n[sortIdx,:].transpose(),aspect='auto',vmin=4,vmax=8)
plt.subplot(2,2,2)
plt.imshow(vol_s[sortIdx,:].transpose(),aspect='auto',vmin=4,vmax=8)
plt.subplot(2,2,3)
plt.plot(vol_n[sortIdx,:])
plt.ylim(2,10)
plt.subplot(2,2,4)
plt.plot(vol_s[sortIdx,:])
plt.ylim(2,10)
plt.tight_layout()
plt.show()
plt.figure()
sort_criterion = np.nanmean(np.reshape(solar,(24,365)),axis=0)
plt.scatter(vol_s[:,0],sort_criterion)
vol_s.shape
np.mean(ttn,axis=0).shape
```
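The reshape-by-hour trick used above depends on row ordering: with 365 days of 24 hourly samples where the hour index cycles fastest, grouping by hour of day and by day can be written as follows (a sketch on synthetic data, not the notebook's sensor files):

```python
import numpy as np

# Synthetic year of hourly values: a day-night cycle plus noise.
rng = np.random.default_rng(0)
hours = np.tile(np.arange(24), 365)                 # hour-of-day index, cycling fastest
series = np.sin(2 * np.pi * hours / 24) + 0.1 * rng.standard_normal(365 * 24)

# Reshape so axis 0 is day and axis 1 is hour-of-day, then aggregate.
daily = series.reshape(365, 24)
mean_by_hour = np.nanmean(daily, axis=0)            # one value per hour of day
volatility_by_day = np.nanstd(daily, axis=1)        # spread within each day

print(mean_by_hour.shape, volatility_by_day.shape)
```

With hour cycling fastest, `(365, 24)` keeps each day's 24 samples in one row; a `(24, 365)` reshape would scatter them across rows instead.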
| github_jupyter |
---
_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-social-network-analysis/resources/yPcBs) course resource._
---
# Assignment 1 - Creating and Manipulating Graphs
Eight employees at a small company were asked to choose 3 movies that they would most enjoy watching for the upcoming company movie night. These choices are stored in the file `Employee_Movie_Choices.txt`.
A second file, `Employee_Relationships.txt`, has data on the relationships between different coworkers.
The relationship score ranges from `-100` (Enemies) to `+100` (Best Friends). A value of zero means the two employees haven't interacted or are indifferent.
Both files are tab delimited.
```
import networkx as nx
import pandas as pd
import numpy as np
from networkx.algorithms import bipartite
# This is the set of employees
employees = set(['Pablo',
'Lee',
'Georgia',
'Vincent',
'Andy',
'Frida',
'Joan',
'Claude'])
# This is the set of movies
movies = set(['The Shawshank Redemption',
'Forrest Gump',
'The Matrix',
'Anaconda',
'The Social Network',
'The Godfather',
'Monty Python and the Holy Grail',
'Snakes on a Plane',
'Kung Fu Panda',
'The Dark Knight',
'Mean Girls'])
# you can use the following function to plot graphs
# make sure to comment it out before submitting to the autograder
def plot_graph(G, weight_name=None):
'''
G: a networkx G
weight_name: name of the attribute for plotting edge weights (if G is weighted)
'''
%matplotlib notebook
import matplotlib.pyplot as plt
plt.figure()
pos = nx.spring_layout(G)
edges = G.edges()
weights = None
if weight_name:
weights = [int(G[u][v][weight_name]) for u,v in edges]
labels = nx.get_edge_attributes(G,weight_name)
nx.draw_networkx_edge_labels(G,pos,edge_labels=labels)
nx.draw_networkx(G, pos, edges=edges, width=weights);
else:
nx.draw_networkx(G, pos, edges=edges);
```
### Question 1
Using NetworkX, load in the bipartite graph from `Employee_Movie_Choices.txt` and return that graph.
*This function should return a networkx graph with 19 nodes and 24 edges*
```
def answer_one():
G1_df = pd.read_csv('Employee_Movie_Choices.txt', sep='\t',
header=None, skiprows = 1, names=['Employees', 'Movies'])
G1 = nx.from_pandas_dataframe(G1_df, 'Employees', 'Movies')
return G1
# answer_one()
```
### Question 2
Using the graph from the previous question, add nodes attributes named `'type'` where movies have the value `'movie'` and employees have the value `'employee'` and return that graph.
*This function should return a networkx graph with node attributes `{'type': 'movie'}` or `{'type': 'employee'}`*
```
def answer_two():
G2 = answer_one()
G2.add_nodes_from(employees, bipartite=0, type = 'employee')
G2.add_nodes_from(movies, bipartite=1, type = 'movie')
return G2
# answer_two()
```
### Question 3
Find a weighted projection of the graph from `answer_two` which tells us how many movies different pairs of employees have in common.
*This function should return a weighted projected graph.*
```
def answer_three():
G3 = answer_two()
weighted_projection = bipartite.weighted_projected_graph(G3, employees)
return weighted_projection
# plot_graph(answer_three())
# answer_three()
```
### Question 4
Suppose you'd like to find out if people that have a high relationship score also like the same types of movies.
Find the Pearson correlation ( using `DataFrame.corr()` ) between employee relationship scores and the number of movies they have in common. If two employees have no movies in common it should be treated as a 0, not a missing value, and should be included in the correlation calculation.
*This function should return a float.*
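`DataFrame.corr()` defaults to Pearson's $r$: the covariance of the two series divided by the product of their standard deviations. Written out directly (a sketch independent of the assignment data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation: covariance of x and y over the product of their stds."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # perfectly linear relationship
```

Treating missing movie counts as 0 (rather than dropping them) changes both means and therefore the correlation, which is why the fill step matters.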
```
def answer_four():
G4 = pd.DataFrame(answer_three().edges(data=True), columns=['Emp1', 'Emp2', 'movies_in_common'])
G4['movies_in_common'] = G4['movies_in_common'].map(lambda x: x['weight'])
G4_duplicate = G4.copy()
G4_duplicate.rename(columns={'Emp1': 'Emp2', 'Emp2': 'Emp1'}, inplace=True)
G4_common_movies = pd.concat([G4, G4_duplicate], ignore_index=True)
G4_employee_rel = pd.read_csv('Employee_Relationships.txt', sep='\t',
header=None, names=['Emp1', 'Emp2', 'Relationship'])
G4_merged = pd.merge(G4_employee_rel, G4_common_movies, how='left')
G4_merged['movies_in_common'].fillna(value=0, inplace=True)
correlation_score = G4_merged['movies_in_common'].corr(G4_merged['Relationship'])
return correlation_score
# answer_four()
```
| github_jupyter |
# Installing Packages
```
# Installing required packages
pip install psutil
pip install numpy
pip install matplotlib
pip install scikit-learn
pip install kmeanstf
pip install tensorflow-gpu
```
# Importing Packages
```
# Library Importing
import time
import psutil as ps
import platform
from datetime import datetime
```
# CPU Testing
```
#Printing number of cores
print("\n")
print("Physical cores:", ps.cpu_count(logical=False))
print("Logical cores:", ps.cpu_count(logical=True),"\n")
# CPU usage
print("Core Based CPU Usage Percentages:", "\n")
for i, percentage in enumerate(ps.cpu_percent(percpu=True)):
print(f"Core {i}: {percentage}%")
print("\n")
print(f"CPU Usage in Total: {ps.cpu_percent()}%")
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
start_time = time.time()
kmeansRnd = KMeans(n_clusters=100, init='random',random_state=12345)
CPU_Test = np.random.normal(size=(500000,5))
kmeansRnd.fit_predict(CPU_Test)
plt.scatter(CPU_Test[:,0],CPU_Test[:,1],s=1)
plt.scatter(kmeansRnd.cluster_centers_[:,0],kmeansRnd.cluster_centers_[:,1],s=5,c="r")
plt.title("Scikit-Learn Random K-means")
plt.show()
end_time = time.time()
print('Sklearn K-means (Random) execution time in seconds: {}'.format(end_time - start_time))
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
start_time = time.time()
kmeansPl = KMeans(n_clusters=100, random_state=12345)
CPU_Test = np.random.normal(size=(500000,5))
kmeansPl.fit_predict(CPU_Test)
plt.scatter(CPU_Test[:,0],CPU_Test[:,1],s=1)
plt.scatter(kmeansPl.cluster_centers_[:,0],kmeansPl.cluster_centers_[:,1],s=5,c="r")
plt.title("Scikit-Learn K-means++")
plt.show()
end_time = time.time()
print('Sklearn K-means (Kmeans++) execution time in seconds: {}'.format(end_time - start_time))
```
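The repeated `start_time`/`end_time` bookkeeping in the cells above can be factored into a small helper (a convenience sketch, not part of the original benchmark):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print the wall-clock time a block takes, tagged with a label."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f'{label} execution time in seconds: {elapsed}')

# Usage: wrap any benchmark body in the context manager.
with timed('Toy workload'):
    sum(i * i for i in range(100000))
```

`time.perf_counter()` is preferred over `time.time()` for benchmarking because it is monotonic and has higher resolution.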
# GPU Testing
```
import tensorflow as tf
config = tf.compat.v1.ConfigProto()
tf.config.list_physical_devices('GPU')
```
# All Processors Information
```
from tensorflow.python.client import device_lib as dev_lib
print (dev_lib.list_local_devices())
import time
import numpy as np
import matplotlib.pyplot as plt
from kmeanstf import KMeansTF
start_time = time.time()
kmeanstf = KMeansTF(n_clusters=100, random_state=12345)
GPU_Test = np.random.normal(size=(500000,5))
kmeanstf.fit(GPU_Test)
plt.scatter(GPU_Test[:,0],GPU_Test[:,1],s=1)
plt.scatter(kmeanstf.cluster_centers_[:,0],kmeanstf.cluster_centers_[:,1],s=5,c="r")
plt.title("TensorFlow-GPU k-means++")
plt.show()
end_time = time.time()
print('Kmeans++ execution time in seconds: {}'.format(end_time - start_time))
import numpy as np
import matplotlib.pyplot as plt
from kmeanstf import TunnelKMeansTF
start_time = time.time()
kmeansTunnel = TunnelKMeansTF(n_clusters=100, random_state=12345)
GPU_Test = np.random.normal(size=(500000,5))
kmeansTunnel.fit(GPU_Test)
plt.scatter(GPU_Test[:,0],GPU_Test[:,1],s=1)
plt.scatter(kmeansTunnel.cluster_centers_[:,0],kmeansTunnel.cluster_centers_[:,1],s=5,c="r")
plt.title("TensorFlow-GPU Tunnel k-means")
plt.show()
end_time = time.time()
print('Tunnel Kmeans execution time in seconds: {}'.format(end_time - start_time))
```
| github_jupyter |
# Simple Q-learner: training and evaluation
In this notebook a simple Q-learner will be trained and evaluated. The Q-learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact, it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
```
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
NUM_THREADS = 1
LOOKBACK = 252*2 + 28
STARTING_DAYS_AHEAD = 20
POSSIBLE_FRACTIONS = [0.0, 1.0]
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
agents = [Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.999,
dyna_iterations=0,
name='Agent_{}'.format(i)) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
```
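The `random_actions_rate` above implements an epsilon-greedy policy: with probability epsilon the agent explores a random action, otherwise it exploits the best-known one, and epsilon decays after each step. A sketch of the idea (not the `Agent` class's actual code):

```python
import random

def choose_action(q_values, epsilon, rng=random):
    """Epsilon-greedy: explore with probability epsilon, else pick the argmax."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

epsilon = 0.98          # mirrors random_actions_rate
decay = 0.999           # mirrors random_actions_decrease
for _ in range(5):
    action = choose_action([0.1, 0.7, 0.3], epsilon)
    epsilon *= decay    # exploration shrinks as training progresses
print(round(epsilon, 6))
```

Starting near 1.0 and decaying slowly, as configured above, means early epochs are almost entirely exploration.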
## Let's show the symbols data, to see how good the recommender has to be.
```
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
```
## Let's run the trained agent, with the test set
### First, a non-learning test: this scenario is worse than what is achievable (in fact, the Q-learner can learn from past samples in the test set without compromising causality).
```
TEST_DAYS_AHEAD = 20
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
```
### And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
```
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
```
## What are the metrics for "holding the position"?
```
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
```
## Conclusion: the Sharpe ratio is clearly better than the benchmark. Cumulative return is similar (a bit better with the non-learner, a bit worse with the learner).
```
import pickle
with open('../../data/simple_q_learner_fast_learner.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
```
| github_jupyter |
# Dijkstra's algorithm:
Reference: [Dijkstra's algorithm, Wikipedia, the free encyclopedia](https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm)
- Author: Israel Oliveira [e-mail](mailto:prof.israel@gmail.com)
```
%load_ext watermark
# Run this cell before close.
%watermark
%watermark --iversion
%watermark -b -r -g
from random import randint
import string
# Only to show the graph:
import networkx as nx
import matplotlib.pyplot as plt
```
#### Simple class for (undirected) graphs:
```
class Graph:
def __init__(self):
self.nodes = set()
self.edges = {}
self.distances = {}
def add_node(self, value):
self.nodes.add(value)
self.edges[value] = []
def add_edge(self, from_node, to_node, distance):
self.edges[from_node].append(to_node)
self.edges[from_node] = list(set(self.edges[from_node]))
self.edges[to_node].append(from_node)
self.edges[to_node] = list(set(self.edges[to_node]))
self.distances[tuple(sorted((from_node, to_node)))] = distance
def distance(self, from_node, to_node):
'''
Considering only a undirected graph.
'''
        try:
            return self.distances[tuple(sorted((from_node, to_node)))] if from_node != to_node else 0
        except KeyError:
            return float('inf')
def path_distance(self,nodes):
return sum([self.distance(nodes[i],nodes[i+1]) for i in range(0,len(nodes)-1)])
```
#### Dijkstra algorithm for computing all shortest path from start node:
```
def dijkstra(graph: Graph, node_start: str):
dist = {node: float('Inf') for node in graph.nodes}
dist[node_start] = 0
pi = {}
Q = list(graph.nodes)
while Q:
for current_node,_ in sorted(dist.items(), key=lambda x: x[1]):
if current_node in Q:
Q.remove(current_node)
break
        for neighbor in graph.edges[current_node]:
            if dist[neighbor] > dist[current_node] + graph.distance(neighbor, current_node):
                dist[neighbor] = dist[current_node] + graph.distance(neighbor, current_node)
pi[neighbor] = current_node
return pi, dist
```
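The linear scan over `dist` in each iteration makes every extraction O(V); the standard refinement keeps candidates in a priority queue. A heap-based sketch over a plain adjacency dict (independent of the `Graph` class above):

```python
import heapq

def dijkstra_heap(adj, start):
    """adj: {node: {neighbor: weight}}. Returns shortest distances from start."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue                      # stale heap entry, already improved
        for neighbor, w in adj[node].items():
            nd = d + w
            if nd < dist.get(neighbor, float('inf')):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

adj = {'A': {'B': 1, 'C': 4}, 'B': {'A': 1, 'C': 2}, 'C': {'A': 4, 'B': 2}}
print(dijkstra_heap(adj, 'A'))
```

With a binary heap the running time drops to O((V + E) log V), which matters on larger graphs than the 10-node example here.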
#### For get a path for a given start and end nodes:
```
def dijkstra_shortest_path(graph: Graph, node_start: str, node_end: str):
    ver, dist = dijkstra(graph, node_start)
if dist[node_end] == float('Inf'):
print('There is no way to the end node from start node.')
return None
shortest_path = [node_end]
current_node = node_end
while current_node != node_start:
shortest_path.append(ver[current_node])
current_node = ver[current_node]
return shortest_path[::-1], dist[node_end]
```
#### Generating a reasonably balanced graph (without too many edges).
```
G = Graph()
Gnx = nx.Graph()
nodes = [c for c in string.ascii_uppercase][:10]
for node in nodes:
G.add_node(node)
for nodefrom in nodes:
for nodeto in nodes:
if nodeto != nodefrom:
n = randint(-400,100)
if (n >0) and (tuple(sorted((nodefrom, nodeto))) not in G.distances.keys()):
G.add_edge(nodefrom, nodeto, n)
Gnx.add_edge(nodefrom, nodeto, distance=n)
n = 8
plt.figure(figsize=(n,n))
pos = nx.circular_layout(Gnx)
labels = nx.get_edge_attributes(Gnx,'distance')
nx.draw(Gnx, pos, with_labels=True)
nx.draw_networkx_edge_labels(Gnx, pos, edge_labels=labels)
```
#### From A
```
ver, dist = dijkstra(G,'A')
dist
```
#### to 'D'
```
G.path_distance(['A','D'])
dijkstra_shortest_path(G,'A','D')
```
#### Using the NetworkX method:
```
print(nx.shortest_path(Gnx,source='A',target='D',weight='distance'))
```
| github_jupyter |
## This is a demo that demonstrates end to end workflow from training to serving of MNIST model with Tensorflow and Kubernetes
```
import os
import sys
# This is a placeholder for a Google-internal import.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Base path for exporting model
# Set it to /home/jovyan/tf-serving-base/models to easily create a container and deploy
export_path_base = "/home/jovyan/tf-serving-base/models"
# Model Version
model_version = 1
```
### Let's look at the devices available locally
```
from tensorflow.python.client import device_lib
local_device_protos = device_lib.list_local_devices()
[print(x.name) for x in local_device_protos if x.device_type == 'GPU']
```
### Let's train a model based on mnist data
```
# Training iterations
training_iteration = 1000
print('Training model...')
sess = tf.InteractiveSession()
serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
feature_configs = {'x': tf.FixedLenFeature(shape=[784], dtype=tf.float32),}
tf_example = tf.parse_example(serialized_tf_example, feature_configs)
x = tf.identity(tf_example['x'], name='x') # use tf.identity() to assign name
y_ = tf.placeholder('float', shape=[None, 10])
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.global_variables_initializer())
y = tf.nn.softmax(tf.matmul(x, w) + b, name='y')
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
values, indices = tf.nn.top_k(y, 10)
table = tf.contrib.lookup.index_to_string_table_from_tensor(
tf.constant([str(i) for i in range(10)]))
prediction_classes = table.lookup(tf.to_int64(indices))
for _ in range(training_iteration):
batch = mnist.train.next_batch(50)
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
print('training accuracy %g' % sess.run(
accuracy, feed_dict={x: mnist.test.images,
y_: mnist.test.labels}))
print('Done training!')
```
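The loss above is softmax cross-entropy. The same math in plain numpy (an illustrative sketch of one step, not the TensorFlow graph):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # shift logits for numerical stability
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    """labels: one-hot rows; mirrors -sum(y_ * log(y)), averaged per example here."""
    return -np.mean(np.sum(labels * np.log(probs), axis=1))

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])
probs = softmax(logits)
print(probs.round(3), cross_entropy(probs, labels))
```

Note that `tf.reduce_sum` in the training cell sums over the whole batch rather than averaging; gradient descent only differs by the effective learning rate.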
### Let's export the model to a pre-defined directory.
```
export_path = os.path.join(
tf.compat.as_bytes(export_path_base),
tf.compat.as_bytes(str(model_version)))
print('Exporting trained model to', export_path)
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
# Build the signature_def_map.
classification_inputs = tf.saved_model.utils.build_tensor_info(serialized_tf_example)
classification_outputs_classes = tf.saved_model.utils.build_tensor_info(prediction_classes)
classification_outputs_scores = tf.saved_model.utils.build_tensor_info(values)
classification_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs={
tf.saved_model.signature_constants.CLASSIFY_INPUTS:
classification_inputs
},
outputs={
tf.saved_model.signature_constants.CLASSIFY_OUTPUT_CLASSES:
classification_outputs_classes,
tf.saved_model.signature_constants.CLASSIFY_OUTPUT_SCORES:
classification_outputs_scores
},
method_name=tf.saved_model.signature_constants.CLASSIFY_METHOD_NAME))
tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
tensor_info_y = tf.saved_model.utils.build_tensor_info(y)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs={'images': tensor_info_x},
outputs={'scores': tensor_info_y},
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
builder.add_meta_graph_and_variables(
sess, [tf.saved_model.tag_constants.SERVING],
signature_def_map={
'predict_images':
prediction_signature,
tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
classification_signature,
},
legacy_init_op=legacy_init_op)
builder.save()
print('Done exporting!')
```
## Let's build a container with the model just generated.
### Open a terminal and run `cd tf-serving-base && ./build_and_export.sh <gcr.io-project-name>`
## Now let's start a Kubernetes Deployment that runs TensorFlow Serving with our model
```
from os import path
import yaml
import time
from kubernetes import client, config
config.load_incluster_config()
with open("/home/jovyan/tf-serving-base/deployment.yaml") as f:
dep = yaml.load(f)
# Specify the newly built image.
dep['spec']['template']['spec']['containers'][0]['image'] = "gcr.io/vishnuk-cloud/tf-model-server:latest"
k8s_beta = client.ExtensionsV1beta1Api()
try:
resp = k8s_beta.delete_namespaced_deployment(body={}, namespace="jupyterhub", name=dep['metadata']['name'])
time.sleep(10)
except:
pass
resp = k8s_beta.create_namespaced_deployment(body=dep, namespace="jupyterhub")
print("Deployment created. status='%s'" % str(resp.status))
with open("/home/jovyan/tf-serving-base/service.yaml") as f:
dep = yaml.load(f)
v1 = client.CoreV1Api()
v1.delete_namespaced_service(namespace="jupyterhub", name=dep['metadata']['name'])
resp = v1.create_namespaced_service(body=dep, namespace="jupyterhub")
print("Service created. status='%s'" % str(resp.status))
```
| github_jupyter |
# Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! Previously you trained a 2-layer Neural Network with a single hidden layer. This week, you will build a deep neural network with as many layers as you want!
- In this notebook, you'll implement all the functions required to build a deep neural network.
- For the next assignment, you'll use these functions to build a deep neural network for image classification.
**By the end of this assignment, you'll be able to:**
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
**Notation**:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
    - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations.
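As a purely illustrative numpy sketch of this notation (the values below are made up), a layer's activation matrix stores one column per example and one row per unit:

```python
import numpy as np

# Illustrative values only: activations of layer 1 for a network with
# 3 hidden units (rows) and m = 2 training examples (columns).
A1 = np.array([[0.1, 0.4],
               [0.2, 0.5],
               [0.3, 0.6]])          # shape (n_h, m) = (3, 2)

a1_example_0 = A1[:, 0]              # a^{[1](0)}: layer-1 activations for example 0
entry_2 = A1[2, 0]                   # a^{[1]}_2 for example 0

print(a1_example_0)                  # [0.1 0.2 0.3]
print(entry_2)                       # 0.3
```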
Let's get started!
## Table of Contents
- [1 - Packages](#1)
- [2 - Outline](#2)
- [3 - Initialization](#3)
- [3.1 - 2-layer Neural Network](#3-1)
- [Exercise 1 - initialize_parameters](#ex-1)
- [3.2 - L-layer Neural Network](#3-2)
- [Exercise 2 - initialize_parameters_deep](#ex-2)
- [4 - Forward Propagation Module](#4)
- [4.1 - Linear Forward](#4-1)
- [Exercise 3 - linear_forward](#ex-3)
- [4.2 - Linear-Activation Forward](#4-2)
- [Exercise 4 - linear_activation_forward](#ex-4)
- [4.3 - L-Layer Model](#4-3)
- [Exercise 5 - L_model_forward](#ex-5)
- [5 - Cost Function](#5)
- [Exercise 6 - compute_cost](#ex-6)
- [6 - Backward Propagation Module](#6)
- [6.1 - Linear Backward](#6-1)
- [Exercise 7 - linear_backward](#ex-7)
- [6.2 - Linear-Activation Backward](#6-2)
- [Exercise 8 - linear_activation_backward](#ex-8)
- [6.3 - L-Model Backward](#6-3)
- [Exercise 9 - L_model_backward](#ex-9)
- [6.4 - Update Parameters](#6-4)
- [Exercise 10 - update_parameters](#ex-10)
<a name='1'></a>
## 1 - Packages
First, import all the packages you'll need during this assignment.
- [numpy](https://www.numpy.org) is the main package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions.
- np.random.seed(1) is used to keep all the random function calls consistent. It helps grade your work. Please don't change the seed!
```
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases import *
from dnn_utils import sigmoid, sigmoid_backward, relu, relu_backward
from public_tests import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
```
<a name='2'></a>
## 2 - Outline
To build your neural network, you'll be implementing several "helper functions." These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network.
Each small helper function will have detailed instructions to walk you through the necessary steps. Here's an outline of the steps in this assignment:
- Initialize the parameters for a two-layer network and for an $L$-layer neural network
- Implement the forward propagation module (shown in purple in the figure below)
- Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
- The ACTIVATION function is provided for you (relu/sigmoid)
- Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
- Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
- Compute the loss
- Implement the backward propagation module (denoted in red in the figure below)
- Complete the LINEAR part of a layer's backward propagation step
- The gradient of the ACTIVATION function is provided for you (relu_backward/sigmoid_backward)
- Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function
- Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
- Finally, update the parameters
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center><b>Figure 1</b></center></caption><br>
**Note**:
For every forward function, there is a corresponding backward function. This is why at every step of your forward module you will be storing some values in a cache. These cached values are useful for computing gradients.
In the backpropagation module, you can then use the cache to calculate the gradients. Don't worry, this assignment will show you exactly how to carry out each of these steps!
<a name='3'></a>
## 3 - Initialization
You will write two helper functions to initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one generalizes this initialization process to $L$ layers.
<a name='3-1'></a>
### 3.1 - 2-layer Neural Network
<a name='ex-1'></a>
### Exercise 1 - initialize_parameters
Create and initialize the parameters of the 2-layer neural network.
**Instructions**:
- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*.
- Use this random initialization for the weight matrices: `np.random.randn(shape)*0.01` with the correct shape
- Use zero initialization for the biases: `np.zeros(shape)`
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y, multiplier=0.01):
    """
    Argument:
    n_x -- size of the input layer
    n_h -- size of the hidden layer
    n_y -- size of the output layer
    multiplier -- scaling factor applied to the random weight initialization (default 0.01)

    Returns:
    parameters -- python dictionary containing your parameters:
                    W1 -- weight matrix of shape (n_h, n_x)
                    b1 -- bias vector of shape (n_h, 1)
                    W2 -- weight matrix of shape (n_y, n_h)
                    b2 -- bias vector of shape (n_y, 1)
    """
    np.random.seed(1)

    #(≈ 4 lines of code)
    # W1 = ...
    # b1 = ...
    # W2 = ...
    # b2 = ...
    # YOUR CODE STARTS HERE
    W1 = np.random.randn(n_h, n_x) * multiplier
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_y, n_h) * multiplier
    b2 = np.zeros((n_y, 1))
    # YOUR CODE ENDS HERE

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}

    return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
initialize_parameters_test(initialize_parameters)
```
***Expected output***
```
W1 = [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]]
b1 = [[0.]
[0.]]
W2 = [[ 0.01744812 -0.00761207]]
b2 = [[0.]]
```
<a name='3-2'></a>
### 3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep` function, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. For example, if the size of your input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
    <tr>
        <td>  </td>
        <td> <b>Shape of W</b> </td>
        <td> <b>Shape of b</b> </td>
        <td> <b>Activation</b> </td>
        <td> <b>Shape of Activation</b> </td>
    </tr>
    <tr>
        <td> <b>Layer 1</b> </td>
        <td> $(n^{[1]},12288)$ </td>
        <td> $(n^{[1]},1)$ </td>
        <td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
        <td> $(n^{[1]},209)$ </td>
    </tr>
    <tr>
        <td> <b>Layer 2</b> </td>
        <td> $(n^{[2]}, n^{[1]})$ </td>
        <td> $(n^{[2]},1)$ </td>
        <td> $Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
        <td> $(n^{[2]}, 209)$ </td>
    </tr>
    <tr>
        <td> $\vdots$ </td>
        <td> $\vdots$ </td>
        <td> $\vdots$ </td>
        <td> $\vdots$ </td>
        <td> $\vdots$ </td>
    </tr>
    <tr>
        <td> <b>Layer L-1</b> </td>
        <td> $(n^{[L-1]}, n^{[L-2]})$ </td>
        <td> $(n^{[L-1]}, 1)$ </td>
        <td> $Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
        <td> $(n^{[L-1]}, 209)$ </td>
    </tr>
    <tr>
        <td> <b>Layer L</b> </td>
        <td> $(n^{[L]}, n^{[L-1]})$ </td>
        <td> $(n^{[L]}, 1)$ </td>
        <td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$ </td>
        <td> $(n^{[L]}, 209)$ </td>
    </tr>
</table>
Remember that when you compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
w_{00} & w_{01} & w_{02} \\
w_{10} & w_{11} & w_{12} \\
w_{20} & w_{21} & w_{22}
\end{bmatrix}\;\;\; X = \begin{bmatrix}
x_{00} & x_{01} & x_{02} \\
x_{10} & x_{11} & x_{12} \\
x_{20} & x_{21} & x_{22}
\end{bmatrix} \;\;\; b =\begin{bmatrix}
b_0 \\
b_1 \\
b_2
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(w_{00}x_{00} + w_{01}x_{10} + w_{02}x_{20}) + b_0 & (w_{00}x_{01} + w_{01}x_{11} + w_{02}x_{21}) + b_0 & \cdots \\
(w_{10}x_{00} + w_{11}x_{10} + w_{12}x_{20}) + b_1 & (w_{10}x_{01} + w_{11}x_{11} + w_{12}x_{21}) + b_1 & \cdots \\
(w_{20}x_{00} + w_{21}x_{10} + w_{22}x_{20}) + b_2 & (w_{20}x_{01} + w_{21}x_{11} + w_{22}x_{21}) + b_2 & \cdots
\end{bmatrix}\tag{3} $$
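You can check this broadcasting behavior with a tiny numpy sketch (the values below are arbitrary):

```python
import numpy as np

W = np.arange(9).reshape(3, 3)          # (3, 3) weight matrix
X = np.ones((3, 3))                     # (3, 3): 3 features, 3 examples
b = np.array([[10.0], [20.0], [30.0]])  # (3, 1) column vector

# b is broadcast across the 3 columns (examples) of WX
Z = np.dot(W, X) + b
print(Z)
# [[13. 13. 13.]
#  [32. 32. 32.]
#  [51. 51. 51.]]
```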
<a name='ex-2'></a>
### Exercise 2 - initialize_parameters_deep
Implement initialization for an L-layer Neural Network.
**Instructions**:
- The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.
- Use zeros initialization for the biases. Use `np.zeros(shape)`.
- You'll store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for last week's Planar Data classification model would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
```python
if L == 1:
    parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
    parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
```
```
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                    bl -- bias vector of shape (layer_dims[l], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)  # number of layers in the network

    for l in range(1, L):
        #(≈ 2 lines of code)
        # parameters['W' + str(l)] = ...
        # parameters['b' + str(l)] = ...
        # YOUR CODE STARTS HERE
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
        # YOUR CODE ENDS HERE

        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))
        assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))

    return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
initialize_parameters_deep_test(initialize_parameters_deep)
```
***Expected output***
```
W1 = [[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]
b2 = [[0.]
[0.]
[0.]]
```
<a name='4'></a>
## 4 - Forward Propagation Module
<a name='4-1'></a>
### 4.1 - Linear Forward
Now that you have initialized your parameters, you can do the forward propagation module. Start by implementing some basic functions that you can use again later when implementing the model. Now, you'll complete three functions in this order:
- LINEAR
- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
<a name='ex-3'></a>
### Exercise 3 - linear_forward
Build the linear part of forward propagation.
**Reminder**:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.
```
# GRADED FUNCTION: linear_forward
def linear_forward(A_prev, W, b):
    """
    Implement the linear part of a layer's forward propagation.

    Arguments:
    A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)

    Returns:
    Z -- the input of the activation function, also called pre-activation parameter
    linear_cache -- a python tuple containing "A_prev", "W" and "b"; stored for computing the backward pass efficiently
    """
    Z = np.dot(W, A_prev) + b
    linear_cache = (A_prev, W, b)

    return Z, linear_cache
t_A, t_W, t_b = linear_forward_test_case()
t_Z, t_linear_cache = linear_forward(t_A, t_W, t_b)
print("Z = " + str(t_Z))
linear_forward_test(linear_forward)
```
***Expected output***
```
Z = [[ 3.26295337 -1.23429987]]
```
<a name='4-2'></a>
### 4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
- **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. You've been provided with the `sigmoid` function, which returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what you'll feed into the corresponding backward function). To use it you could just call:
``` python
A, activation_cache = sigmoid(Z)
```
- **ReLU**: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. You've been provided with the `relu` function. This function returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what you'll feed in to the corresponding backward function). To use it you could just call:
``` python
A, activation_cache = relu(Z)
```
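The actual `sigmoid` and `relu` are supplied by `dnn_utils`, but if you want to see roughly what they compute, here is a minimal sketch (the `_sketch` names are mine, not part of the assignment; like the real helpers, each returns the activation plus `Z` as the cache):

```python
import numpy as np

def sigmoid_sketch(Z):
    """Sigmoid activation: returns the activation A and Z as the cache."""
    A = 1 / (1 + np.exp(-Z))
    return A, Z

def relu_sketch(Z):
    """ReLU activation: returns the activation A and Z as the cache."""
    A = np.maximum(0, Z)
    return A, Z

A_sig, _ = sigmoid_sketch(np.array([[0.0]]))
A_rel, _ = relu_sketch(np.array([[-1.0, 2.0]]))
print(A_sig)  # [[0.5]]
print(A_rel)  # [[0. 2.]]
```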
For added convenience, you're going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you'll implement a function that does the LINEAR forward step, followed by an ACTIVATION forward step.
<a name='ex-4'></a>
### Exercise 4 - linear_activation_forward
Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use `linear_forward()` and the correct activation function.
```
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
    """
    Implement the forward propagation for the LINEAR->ACTIVATION layer

    Arguments:
    A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    A -- the output of the activation function, also called the post-activation value
    cache -- a python tuple containing "linear_cache" and "activation_cache";
             stored for computing the backward pass efficiently
    """
    if activation == "sigmoid":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid(Z)
    elif activation == "relu":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = relu(Z)

    cache = (linear_cache, activation_cache)

    return A, cache
t_A_prev, t_W, t_b = linear_activation_forward_test_case()
t_A, t_linear_activation_cache = linear_activation_forward(t_A_prev, t_W, t_b, activation = "sigmoid")
print("With sigmoid: A = " + str(t_A))
t_A, t_linear_activation_cache = linear_activation_forward(t_A_prev, t_W, t_b, activation = "relu")
print("With ReLU: A = " + str(t_A))
linear_activation_forward_test(linear_activation_forward)
```
***Expected output***
```
With sigmoid: A = [[0.96890023 0.11013289]]
With ReLU: A = [[3.43896131 0. ]]
```
**Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
<a name='4-3'></a>
### 4.3 - L-Layer Model
For even *more* convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> <b>Figure 2</b> : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model</center></caption><br>
<a name='ex-5'></a>
### Exercise 5 - L_model_forward
Implement the forward propagation of the above model.
**Instructions**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.)
**Hints**:
- Use the functions you've previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`.
```
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
    """
    Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation

    Arguments:
    X -- data, numpy array of shape (input size, number of examples)
    parameters -- output of initialize_parameters_deep()

    Returns:
    AL -- activation value from the output (last) layer
    caches -- list of caches containing:
              every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1)
    """
    caches = []
    A = X
    L = len(parameters) // 2  # number of layers in the neural network

    # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
    # The for loop starts at 1 because layer 0 is the input
    for l in range(1, L):
        A_prev = A
        #(≈ 2 lines of code)
        # A, cache = ...
        # caches ...
        # YOUR CODE STARTS HERE
        A, cache = linear_activation_forward(A_prev,
                                             parameters['W' + str(l)],
                                             parameters['b' + str(l)],
                                             activation='relu')
        caches.append(cache)
        # YOUR CODE ENDS HERE

    # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
    #(≈ 2 lines of code)
    # AL, cache = ...
    # caches ...
    # YOUR CODE STARTS HERE
    AL, cache = linear_activation_forward(A,
                                          parameters['W' + str(L)],
                                          parameters['b' + str(L)],
                                          activation='sigmoid')
    caches.append(cache)
    # YOUR CODE ENDS HERE

    return AL, caches
t_X, t_parameters = L_model_forward_test_case_2hidden()
t_AL, t_caches = L_model_forward(t_X, t_parameters)
print("AL = " + str(t_AL))
L_model_forward_test(L_model_forward)
```
***Expected output***
```
AL = [[0.03921668 0.70498921 0.19734387 0.04728177]]
```
**Awesome!** You've implemented a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
<a name='5'></a>
## 5 - Cost Function
Now you can implement forward and backward propagation! You need to compute the cost, in order to check whether your model is actually learning.
<a name='ex-6'></a>
### Exercise 6 - compute_cost
Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$
```
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
    """
    Implement the cost function defined by equation (7).

    Arguments:
    AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
    Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)

    Returns:
    cost -- cross-entropy cost
    """
    m = Y.shape[1]

    # Compute loss from AL and Y.
    cost = -1/m * (np.dot(np.log(AL), Y.T) + np.dot((1 - Y), np.log(1 - AL).T))
    cost = np.squeeze(cost)  # To make sure the cost's shape is what we expect (e.g. this turns [[17]] into 17).

    return cost
t_Y, t_AL = compute_cost_test_case()
t_cost = compute_cost(t_AL, t_Y)
print("Cost: " + str(t_cost))
compute_cost_test(compute_cost)
```
**Expected Output**:
<table>
<tr>
<td><b>cost</b> </td>
<td> 0.2797765635793422</td>
</tr>
</table>
<a name='6'></a>
## 6 - Backward Propagation Module
Just as you did for the forward propagation, you'll implement helper functions for backpropagation. Remember that backpropagation is used to calculate the gradient of the loss function with respect to the parameters.
**Reminder**:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center><font color='purple'><b>Figure 3</b>: Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> <i>The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.</font></center></caption>
<!--
For those of you who are experts in calculus (which you don't need to be to do this assignment!), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
-->
Now, similarly to forward propagation, you're going to build the backward propagation in three steps:
1. LINEAR backward
2. LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
3. [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
For the next exercise, you will need to remember that:
- `b` is a matrix (`np.ndarray`) with 1 column and n rows, e.g., `b = [[1.0], [2.0]]` (a column vector)
- `np.sum` performs a sum over the elements of an ndarray
- `axis=1` sums across each row (one result per row), while `axis=0` sums down each column (one result per column)
- `keepdims` specifies whether the original number of dimensions of the array must be kept
- Look at the following example to clarify:
```
A = np.array([[1, 2], [3, 4]])

print('axis=1 and keepdims=True')
print(np.sum(A, axis=1, keepdims=True))   # [[3]
                                          #  [7]]
print('axis=1 and keepdims=False')
print(np.sum(A, axis=1, keepdims=False))  # [3 7]
print('axis=0 and keepdims=True')
print(np.sum(A, axis=0, keepdims=True))   # [[4 6]]
print('axis=0 and keepdims=False')
print(np.sum(A, axis=0, keepdims=False))  # [4 6]
```
<a name='6-1'></a>
### 6.1 - Linear Backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center><font color='purple'><b>Figure 4</b></font></center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$.
Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{J} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{J} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
$A^{[l-1] T}$ is the transpose of $A^{[l-1]}$.
<a name='ex-7'></a>
### Exercise 7 - linear_backward
Use the 3 formulas above to implement `linear_backward()`.
**Hint**:
- In numpy you can get the transpose of an ndarray `A` using `A.T` or `A.transpose()`
```
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, linear_cache):
    """
    Implement the linear portion of backward propagation for a single layer (layer l)

    Arguments:
    dZ -- Gradient of the cost with respect to the linear output (of current layer l)
    linear_cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    A_prev, W, b = linear_cache
    m = A_prev.shape[1]

    dW = 1/m * np.dot(dZ, A_prev.T)
    db = 1/m * np.sum(dZ, axis=1, keepdims=True)
    dA_prev = np.dot(W.T, dZ)

    return dA_prev, dW, db
t_dZ, t_linear_cache = linear_backward_test_case()
t_dA_prev, t_dW, t_db = linear_backward(t_dZ, t_linear_cache)
print("dA_prev: " + str(t_dA_prev))
print("dW: " + str(t_dW))
print("db: " + str(t_db))
linear_backward_test(linear_backward)
```
**Expected Output**:
```
dA_prev: [[-1.15171336 0.06718465 -0.3204696 2.09812712]
[ 0.60345879 -3.72508701 5.81700741 -3.84326836]
[-0.4319552 -1.30987417 1.72354705 0.05070578]
[-0.38981415 0.60811244 -1.25938424 1.47191593]
[-2.52214926 2.67882552 -0.67947465 1.48119548]]
dW: [[ 0.07313866 -0.0976715 -0.87585828 0.73763362 0.00785716]
[ 0.85508818 0.37530413 -0.59912655 0.71278189 -0.58931808]
[ 0.97913304 -0.24376494 -0.08839671 0.55151192 -0.10290907]]
db: [[-0.14713786]
[-0.11313155]
[-0.13209101]]
```
<a name='6-2'></a>
### 6.2 - Linear-Activation Backward
Next, you will implement **`linear_activation_backward`**, a function that merges the helper function **`linear_backward`** with the backward step for the activation.
To help you implement `linear_activation_backward`, two backward functions have been provided:
- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:
```python
dZ = sigmoid_backward(dA, activation_cache)
```
- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:
```python
dZ = relu_backward(dA, activation_cache)
```
If $g(.)$ is the activation function,
`sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}). \tag{11}$$
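The provided `relu_backward` and `sigmoid_backward` implement exactly equation (11); a minimal sketch of that formula, assuming the cached value is `Z` (the `_sketch` names are mine, not part of the assignment), could look like:

```python
import numpy as np

def relu_backward_sketch(dA, Z):
    # g'(Z) is 1 where Z > 0 and 0 elsewhere
    return dA * (Z > 0)

def sigmoid_backward_sketch(dA, Z):
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)  # g'(Z) = s * (1 - s)

dZ_relu = relu_backward_sketch(np.array([[1.0, 1.0]]), np.array([[-2.0, 3.0]]))
dZ_sig = sigmoid_backward_sketch(np.array([[1.0]]), np.array([[0.0]]))
print(dZ_relu)  # [[0. 1.]]
print(dZ_sig)   # [[0.25]]
```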
<a name='ex-8'></a>
### Exercise 8 - linear_activation_backward
Implement the backpropagation for the *LINEAR->ACTIVATION* layer.
```
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
    """
    Implement the backward propagation for the LINEAR->ACTIVATION layer.

    Arguments:
    dA -- post-activation gradient for current layer l
    cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    linear_cache, activation_cache = cache

    if activation == "relu":
        dZ = relu_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
    elif activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)

    return dA_prev, dW, db
t_dAL, t_linear_activation_cache = linear_activation_backward_test_case()
t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache, activation = "sigmoid")
print("With sigmoid: dA_prev = " + str(t_dA_prev))
print("With sigmoid: dW = " + str(t_dW))
print("With sigmoid: db = " + str(t_db))
t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache, activation = "relu")
print("With relu: dA_prev = " + str(t_dA_prev))
print("With relu: dW = " + str(t_dW))
print("With relu: db = " + str(t_db))
linear_activation_backward_test(linear_activation_backward)
```
**Expected output:**
```
With sigmoid: dA_prev = [[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]]
With sigmoid: dW = [[ 0.10266786 0.09778551 -0.01968084]]
With sigmoid: db = [[-0.05729622]]
With relu: dA_prev = [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]]
With relu: dW = [[ 0.44513824 0.37371418 -0.10478989]]
With relu: db = [[-0.20837892]]
```
<a name='6-3'></a>
### 6.3 - L-Model Backward
Now you will implement the backward function for the whole network!
Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (A,W,b, and z). In the back propagation module, you'll use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you'll iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center><font color='purple'><b>Figure 5</b>: Backward pass</font></center></caption>
**Initializing backpropagation**:
To backpropagate through this network, you know that the output is:
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which, again, you don't need in-depth knowledge of!):
```python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
```
You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function).
After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in `grads["dW3"]`.
<a name='ex-9'></a>
### Exercise 9 - L_model_backward
Implement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model.
```
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
    """
    Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group

    Arguments:
    AL -- probability vector, output of the forward propagation (L_model_forward())
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
    caches -- list of caches containing:
              every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1), i.e. l = 0...L-2)
              the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])

    Returns:
    grads -- A dictionary with the gradients
             grads["dA" + str(l)] = ...
             grads["dW" + str(l)] = ...
             grads["db" + str(l)] = ...
    """
    grads = {}
    L = len(caches)  # the number of layers
    m = AL.shape[1]
    Y = Y.reshape(AL.shape)  # after this line, Y is the same shape as AL

    # Initializing the backpropagation
    dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))  # derivative of cost with respect to AL

    # Last layer (SIGMOID -> LINEAR) gradients
    current_cache = caches[L-1]
    dA_prev_temp, dW_temp, db_temp = linear_activation_backward(dA=dAL, cache=current_cache, activation='sigmoid')
    grads["dA" + str(L-1)] = dA_prev_temp
    grads["dW" + str(L)] = dW_temp
    grads["db" + str(L)] = db_temp

    # Loop from l=L-2 to l=0
    for l in reversed(range(L-1)):
        # lth layer: (RELU -> LINEAR) gradients.
        # Inputs: grads["dA" + str(l + 1)], current_cache.
        # Outputs: grads["dA" + str(l)], grads["dW" + str(l + 1)], grads["db" + str(l + 1)]
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(dA=grads["dA" + str(l+1)], cache=current_cache, activation='relu')
        grads["dA" + str(l)] = dA_prev_temp
        grads["dW" + str(l + 1)] = dW_temp
        grads["db" + str(l + 1)] = db_temp

    return grads
t_AL, t_Y_assess, t_caches = L_model_backward_test_case()
grads = L_model_backward(t_AL, t_Y_assess, t_caches)
print("dA0 = " + str(grads['dA0']))
print("dA1 = " + str(grads['dA1']))
print("dW1 = " + str(grads['dW1']))
print("dW2 = " + str(grads['dW2']))
print("db1 = " + str(grads['db1']))
print("db2 = " + str(grads['db2']))
L_model_backward_test(L_model_backward)
```
**Expected output:**
```
dA0 = [[ 0. 0.52257901]
[ 0. -0.3269206 ]
[ 0. -0.32070404]
[ 0. -0.74079187]]
dA1 = [[ 0.12913162 -0.44014127]
[-0.14175655 0.48317296]
[ 0.01663708 -0.05670698]]
dW1 = [[0.41010002 0.07807203 0.13798444 0.10502167]
[0. 0. 0. 0. ]
[0.05283652 0.01005865 0.01777766 0.0135308 ]]
dW2 = [[-0.39202432 -0.13325855 -0.04601089]]
db1 = [[-0.22007063]
[ 0. ]
[-0.02835349]]
db2 = [[0.15187861]]
```
<a name='6-4'></a>
### 6.4 - Update Parameters
In this section, you'll update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate.
After computing the updated parameters, store them in the parameters dictionary.
<a name='ex-10'></a>
### Exercise 10 - update_parameters
Implement `update_parameters()` to update your parameters using gradient descent.
**Instructions**:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
```
# GRADED FUNCTION: update_parameters
def update_parameters(params, grads, learning_rate):
    """
    Update parameters using gradient descent

    Arguments:
    params -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients, output of L_model_backward
    learning_rate -- the learning rate, alpha, used in the update rule

    Returns:
    parameters -- python dictionary containing your updated parameters
                  parameters["W" + str(l)] = ...
                  parameters["b" + str(l)] = ...
    """
    parameters = params.copy()
    L = len(parameters) // 2  # 2 parameters (W and b) per layer

    for l in range(L):
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]

    return parameters
t_parameters, grads = update_parameters_test_case()
t_parameters = update_parameters(t_parameters, grads, 0.1)
print ("W1 = "+ str(t_parameters["W1"]))
print ("b1 = "+ str(t_parameters["b1"]))
print ("W2 = "+ str(t_parameters["W2"]))
print ("b2 = "+ str(t_parameters["b2"]))
update_parameters_test(update_parameters)
```
**Expected output:**
```
W1 = [[-0.59562069 -0.09991781 -2.14584584 1.82662008]
[-1.76569676 -0.80627147 0.51115557 -1.18258802]
[-1.0535704 -0.86128581 0.68284052 2.20374577]]
b1 = [[-0.04659241]
[-1.28888275]
[ 0.53405496]]
W2 = [[-0.55569196 0.0354055 1.32964895]]
b2 = [[-0.84610769]]
```
### Congratulations!
You've just implemented all the functions required for building a deep neural network, including:
- Using non-linear units to improve your model
- Building a deeper neural network (with more than 1 hidden layer)
- Implementing an easy-to-use neural network class
This was indeed a long assignment, but the next part of the assignment is easier. ;)
In the next assignment, you'll be putting all these together to build two models:
- A two-layer neural network
- An L-layer neural network
You will in fact use these models to classify cat vs non-cat images! (Meow!) Great work and see you next time.
# Explore and create ML datasets
In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.
## Learning Objectives
1. Access and explore a public BigQuery dataset on NYC Taxi Cab rides
2. Visualize your dataset using the Seaborn library
3. Inspect and clean-up the dataset for future ML model training
4. Create a benchmark to judge future ML model performance off of
Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solution/explore_data.ipynb).
Let's start off with the Python imports that we need.
```
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
from google.cloud import bigquery
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
```
<h3> Extract sample data from BigQuery </h3>
The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.
Let's write a SQL query to pick up interesting fields from the dataset. It's a good idea to get the timestamp in a predictable format.
```
%%bigquery
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude, dropoff_longitude,
dropoff_latitude, passenger_count, trip_distance, tolls_amount,
fare_amount, total_amount
# TODO 1: Set correct BigQuery public dataset for nyc-tlc yellow taxi cab trips
# Tip: For projects with hyphens '-' be sure to escape with backticks ``
FROM
LIMIT 10
```
Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this.
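The same idea can be sketched client-side in plain Python. Here `hashlib.md5` stands in for BigQuery's FARM_FINGERPRINT (the two hashes select different rows, but the deterministic-bucket behaviour is identical), and 10 buckets are used instead of 100,000 so the sample is visible on a small list:

```python
import hashlib

def keep_row(pickup_datetime, buckets=10, target=1):
    # Deterministic bucket from the hash of the timestamp string, mimicking
    # ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), buckets)) = target.
    h = int(hashlib.md5(pickup_datetime.encode()).hexdigest(), 16)
    return h % buckets == target

timestamps = ["2014-05-{:02d} 08:{:02d}:00".format(d, m)
              for d in range(1, 31) for m in range(60)]
sample = [ts for ts in timestamps if keep_row(ts)]

# With 10 buckets we keep roughly 1/10 of the 1,800 rows, and the same
# rows are selected on every run -- unlike LIMIT, which has no such guarantee.
print(len(timestamps), len(sample))
```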
We will also store the BigQuery result in a Pandas dataframe named "trips"
```
%%bigquery trips
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
print(len(trips))
# We can slice Pandas dataframes as if they were arrays
trips[:10]
```
<h3> Exploring data </h3>
Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
```
# TODO 2: Visualize your dataset using the Seaborn library.
# Plot the distance of the trip as X and the fare amount as Y.
ax = sns.regplot(x="", y="", fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8)
```
Hmm ... do you see something wrong with the data that needs addressing?
It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).
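The same filter can also be expressed client-side in pandas. A small sketch on a toy frame (the column names match the query; the values are invented for illustration):

```python
import pandas as pd

# Toy frame mirroring the query's columns; values invented for illustration.
toy = pd.DataFrame({
    "trip_distance": [0.0, 2.5, 1.1, 0.0],
    "fare_amount":   [52.0, 9.5, 1.0, 2.5],
})

# Keep only trips longer than zero miles with at least the $2.50 minimum fare.
clean = toy[(toy["trip_distance"] > 0) & (toy["fare_amount"] >= 2.5)]
print(len(clean))
```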
Note the extra WHERE clauses.
```
%%bigquery trips
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
# TODO 3: Filter the data to only include non-zero distance trips and fares above $2.50
AND
print(len(trips))
ax = sns.regplot(
x="trip_distance", y="fare_amount",
fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8)
```
What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, so they are to be expected. Let's list the data to make sure the values look reasonable.
Let's also examine whether the toll amount is captured in the total amount.
```
tollrides = trips[trips["tolls_amount"] > 0]
tollrides[tollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"]
notollrides = trips[trips["tolls_amount"] == 0]
notollrides[notollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"]
```
Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.
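In pandas terms, that target is simply the row-wise sum of the two columns (toy values for illustration):

```python
import pandas as pd

# Toy rows; values invented for illustration.
toy = pd.DataFrame({"fare_amount": [9.5, 52.0], "tolls_amount": [0.0, 5.54]})

# Target for the fare-estimation tool: fare plus tolls, tips excluded.
toy["target"] = toy["fare_amount"] + toy["tolls_amount"]
print(toy["target"].tolist())
```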
Let's also look at the distribution of values within the columns.
```
trips.describe()
```
Hmm ... The min, max of longitude look strange.
Finally, let's actually look at the start and end of a few of the trips.
```
def showrides(df, numlines):
    lats = []
    lons = []
    for iter, row in df[:numlines].iterrows():
        lons.append(row["pickup_longitude"])
        lons.append(row["dropoff_longitude"])
        lons.append(None)
        lats.append(row["pickup_latitude"])
        lats.append(row["dropoff_latitude"])
        lats.append(None)

    sns.set_style("darkgrid")
    plt.figure(figsize=(10, 8))
    plt.plot(lons, lats)
showrides(notollrides, 10)
showrides(tollrides, 10)
```
As you'd expect, rides that involve a toll are longer than the typical ride.
<h3> Quality control and other preprocessing </h3>
We need to do some clean-up of the data:
<ol>
<li>New York city longitudes are around -74 and latitudes are around 41.</li>
<li>We shouldn't have zero passengers.</li>
<li>Clean up the total_amount column to reflect only fare_amount and tolls_amount, and then remove those two columns.</li>
<li>Before the ride starts, we'll know the pickup and dropoff locations, but not the trip distance (that depends on the route taken), so remove it from the ML dataset</li>
<li>Discard the timestamp</li>
</ol>
We could do preprocessing in BigQuery, similar to how we removed the zero-distance rides, but just to show you another option, let's do this in Python. In production, we'll have to carry out the same preprocessing on the real-time input data.
This sort of preprocessing of input data is quite common in ML, especially if the quality-control is dynamic.
```
def preprocess(trips_in):
    trips = trips_in.copy(deep=True)
    trips.fare_amount = trips.fare_amount + trips.tolls_amount
    del trips["tolls_amount"]
    del trips["total_amount"]
    del trips["trip_distance"]  # we won't know this in advance!
    qc = np.all([
        trips["pickup_longitude"] > -78,
        trips["pickup_longitude"] < -70,
        trips["dropoff_longitude"] > -78,
        trips["dropoff_longitude"] < -70,
        trips["pickup_latitude"] > 37,
        trips["pickup_latitude"] < 45,
        trips["dropoff_latitude"] > 37,
        trips["dropoff_latitude"] < 45,
        trips["passenger_count"] > 0
    ], axis=0)
    return trips[qc]
tripsqc = preprocess(trips)
tripsqc.describe()
```
The quality control has removed about 300 rows (11400 - 11101) or about 3% of the data. This seems reasonable.
Let's move on to creating the ML datasets.
<h3> Create ML datasets </h3>
Let's split the QCed data randomly into training, validation and test sets.
Note that this is not the entire data. We have 1 billion taxicab rides; here we are only splitting the ~10,000 sampled rides to show how it's done on smaller datasets. In reality, we'd need to split all 1 billion rides, and an in-memory approach like this won't scale to that.
```
shuffled = tripsqc.sample(frac=1)
trainsize = int(len(shuffled["fare_amount"]) * 0.70)
validsize = int(len(shuffled["fare_amount"]) * 0.15)
df_train = shuffled.iloc[:trainsize, :]
df_valid = shuffled.iloc[trainsize:(trainsize + validsize), :]
df_test = shuffled.iloc[(trainsize + validsize):, :]
df_train.head(n=1)
df_train.describe()
df_valid.describe()
df_test.describe()
```
Let's write out the three dataframes to appropriately named csv files. We can use these csv files for local training (recall that these files represent only 1/100,000 of the full dataset) just to verify our code works, before we run it on all the data.
```
def to_csv(df, filename):
    outdf = df.copy(deep=False)
    outdf.loc[:, "key"] = np.arange(0, len(outdf))  # rownumber as key
    # Reorder columns so that target is first column
    cols = outdf.columns.tolist()
    cols.remove("fare_amount")
    cols.insert(0, "fare_amount")
    print(cols)  # new order of columns
    outdf = outdf[cols]
    outdf.to_csv(filename, header=False, index_label=False, index=False)
to_csv(df_train, "taxi-train.csv")
to_csv(df_valid, "taxi-valid.csv")
to_csv(df_test, "taxi-test.csv")
!head -10 taxi-valid.csv
```
<h3> Verify that datasets exist </h3>
```
!ls -l *.csv
```
We have 3 .csv files corresponding to train, valid, test. The ratio of file-sizes correspond to our split of the data.
```
%%bash
head taxi-train.csv
```
Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them.
<h3> Benchmark </h3>
Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark.
My model is going to be to simply divide the mean fare_amount by the mean trip_distance to come up with a rate and use that to predict. Let's compute the RMSE of such a model.
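The benchmark in miniature, on made-up numbers, before applying it to the real data in the next cell:

```python
import numpy as np

# Toy fares and distances; values invented for illustration.
fares = np.array([10.0, 20.0, 15.0])
dists = np.array([2.0, 5.0, 3.0])

# The benchmark rule: one global rate, predicted fare = rate * distance.
rate = fares.mean() / dists.mean()           # $/mile on the toy data
pred = rate * dists
rmse = np.sqrt(np.mean((fares - pred) ** 2))
print(round(rate, 2), round(rmse, 2))
```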
```
def distance_between(lat1, lon1, lat2, lon2):
    # Great-circle ("as the crow flies") distance via the spherical law of cosines.
    lat1_r = np.radians(lat1)
    lat2_r = np.radians(lat2)
    lon_diff_r = np.radians(lon2 - lon1)
    sin_prod = np.sin(lat1_r) * np.sin(lat2_r)
    cos_prod = np.cos(lat1_r) * np.cos(lat2_r) * np.cos(lon_diff_r)
    minimum = np.minimum(1, sin_prod + cos_prod)
    dist = np.degrees(np.arccos(minimum)) * 60 * 1.515 * 1.609344
    return dist

def estimate_distance(df):
    return distance_between(
        df["pickuplat"], df["pickuplon"], df["dropofflat"], df["dropofflon"])

def compute_rmse(actual, predicted):
    return np.sqrt(np.mean((actual - predicted) ** 2))

def print_rmse(df, rate, name):
    print("{1} RMSE = {0}".format(
        compute_rmse(df["fare_amount"], rate * estimate_distance(df)), name))
# TODO 4: Create a benchmark to judge future ML model performance off of
# Specify the five feature columns
FEATURES = ["", "", "", "", ""]
# Specify the one target column for prediction
TARGET = ""
columns = list([TARGET])
columns.append("pickup_datetime")
columns.extend(FEATURES) # in CSV, target is first column, after the features
columns.append("key")
df_train = pd.read_csv("taxi-train.csv", header=None, names=columns)
df_valid = pd.read_csv("taxi-valid.csv", header=None, names=columns)
df_test = pd.read_csv("taxi-test.csv", header=None, names=columns)
rate = df_train["fare_amount"].mean() / estimate_distance(df_train).mean()
print ("Rate = ${0}/km".format(rate))
print_rmse(df_train, rate, "Train")
print_rmse(df_valid, rate, "Valid")
print_rmse(df_test, rate, "Test")
```
<h2>Benchmark on same dataset</h2>
The RMSE depends on the dataset, and for comparison, we have to evaluate on the same dataset each time. We'll use this query in later labs:
```
validation_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
"unused" AS key
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
client = bigquery.Client()
df_valid = client.query(validation_query).to_dataframe()
print_rmse(df_valid, 2.59988, "Final Validation Set")
```
The simple distance-based rule gives us a RMSE of <b>$8.14</b>. We have to beat this, of course, but you will find that simple rules of thumb like this can be surprisingly difficult to beat.
Let's be ambitious, though, and make our goal to build ML models that have a RMSE of less than $6 on the test set.
Copyright 2019 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# TensorFlow Estimators Deep Dive
The purpose of this tutorial is to explain the details of how to create a premade TensorFlow estimator, how training and evaluation work with different configurations, and how the model is exported for serving. The tutorial covers the following points:
1. Implementing **Input function** with tf.data APIs.
2. Creating **Feature columns**.
3. Creating a **Wide and Deep** model with a premade estimator.
4. Configuring **Train and evaluate** parameters.
5. **Exporting** trained models for **serving**.
6. Implementing **Early stopping**.
7. Distribution Strategy for **multi-GPUs**.
8. **Extending** premade estimators.
9. Adaptive **learning rate**.
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/sme_academy/01_tf_estimator_deepdive.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img valign="middle" src="images/tf-layers.jpeg" width="400">
```
try:
    COLAB = True
    from google.colab import auth
    auth.authenticate_user()
except:
    COLAB = False  # not running on Colab; the later `if COLAB:` check needs this
RANDOM_SEED = 19831006
import os
import math
import multiprocessing
import pandas as pd
from datetime import datetime
import tensorflow as tf
print("TensorFlow: {}".format(tf.__version__))
tf.enable_eager_execution()
print("Eager Execution Enabled: {}".format(tf.executing_eagerly()))
```
## Download Data
### UCI Adult Dataset: https://archive.ics.uci.edu/ml/datasets/adult
Predict whether income exceeds $50K/yr based on census data. Also known as "Census Income" dataset.
```
DATA_DIR='data'
!mkdir $DATA_DIR
!gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.data.csv $DATA_DIR
!gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.test.csv $DATA_DIR
TRAIN_DATA_FILE = os.path.join(DATA_DIR, 'adult.data.csv')
EVAL_DATA_FILE = os.path.join(DATA_DIR, 'adult.test.csv')
!wc -l $TRAIN_DATA_FILE
!wc -l $EVAL_DATA_FILE
```
The **training** data includes **32,561** records, while the **evaluation** data includes **16,278** records.
```
HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'gender',
'capital_gain', 'capital_loss', 'hours_per_week',
'native_country', 'income_bracket']
pd.read_csv(TRAIN_DATA_FILE, names=HEADER).head()
```
## Dataset Metadata
```
HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'gender',
'capital_gain', 'capital_loss', 'hours_per_week',
'native_country', 'income_bracket']
HEADER_DEFAULTS = [[0], [''], [0], [''], [0], [''], [''], [''], [''], [''],
[0], [0], [0], [''], ['']]
NUMERIC_FEATURE_NAMES = ['age', 'education_num', 'capital_gain', 'capital_loss', 'hours_per_week']
CATEGORICAL_FEATURE_WITH_VOCABULARY = {
'workclass': ['State-gov', 'Self-emp-not-inc', 'Private', 'Federal-gov', 'Local-gov', '?', 'Self-emp-inc', 'Without-pay', 'Never-worked'],
'relationship': ['Not-in-family', 'Husband', 'Wife', 'Own-child', 'Unmarried', 'Other-relative'],
'gender': ['Male', 'Female'],
'marital_status': ['Never-married', 'Married-civ-spouse', 'Divorced', 'Married-spouse-absent', 'Separated', 'Married-AF-spouse', 'Widowed'],
'race': ['White', 'Black', 'Asian-Pac-Islander', 'Amer-Indian-Eskimo', 'Other'],
'education': ['Bachelors', 'HS-grad', '11th', 'Masters', '9th', 'Some-college', 'Assoc-acdm', 'Assoc-voc', '7th-8th', 'Doctorate', 'Prof-school', '5th-6th', '10th', '1st-4th', 'Preschool', '12th'],
}
CATEGORICAL_FEATURE_WITH_HASH_BUCKETS = {
'native_country': 60,
'occupation': 20
}
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + list(CATEGORICAL_FEATURE_WITH_VOCABULARY.keys()) + list(CATEGORICAL_FEATURE_WITH_HASH_BUCKETS.keys())
TARGET_NAME = 'income_bracket'
TARGET_LABELS = [' <=50K', ' >50K']
WEIGHT_COLUMN_NAME = 'fnlwgt'
```
## 1. Data Input Function
* Use [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) APIs: **list_files()**, **skip()**, **map()**, **filter()**, **batch()**, **shuffle()**, **repeat()**, **prefetch()**, **cache()**, etc.
* Use [tf.data.experimental.make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset) to read and parse CSV data files.
* Use [tf.data.experimental.make_batched_features_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_batched_features_dataset) to read and parse TFRecords data files.
```
def process_features(features, target):
    for feature_name in (list(CATEGORICAL_FEATURE_WITH_VOCABULARY.keys())
                         + list(CATEGORICAL_FEATURE_WITH_HASH_BUCKETS.keys())):
        features[feature_name] = tf.strings.strip(features[feature_name])
    features['capital_total'] = features['capital_gain'] - features['capital_loss']
    return features, target

def make_input_fn(file_pattern, batch_size, num_epochs=1, shuffle=False):
    def _input_fn():
        dataset = tf.data.experimental.make_csv_dataset(
            file_pattern=file_pattern,
            batch_size=batch_size,
            column_names=HEADER,
            column_defaults=HEADER_DEFAULTS,
            label_name=TARGET_NAME,
            field_delim=',',
            use_quote_delim=True,
            header=False,
            num_epochs=num_epochs,
            shuffle=shuffle,
            shuffle_buffer_size=(5 * batch_size),
            shuffle_seed=RANDOM_SEED,
            num_parallel_reads=multiprocessing.cpu_count(),
            sloppy=True,
        )
        return dataset.map(process_features).cache()
    return _input_fn
# You need to run tf.enable_eager_execution() at the top.
dataset = make_input_fn(TRAIN_DATA_FILE, batch_size=1)()
for features, target in dataset.take(1):
    print("Input Features:")
    for key in features:
        print("{}: {}".format(key, features[key]))
    print("")
    print("Target:")
    print(target)
```
## 2. Create feature columns
<br/>
<img valign="middle" src="images/tf-feature-columns.jpeg" width="800">
Base feature columns
1. [numeric_column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column)
2. [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list)
3. [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file)
4. [categorical_column_with_identity](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_identity)
5. [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket)
Extended feature columns
1. [bucketized_column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column)
2. [indicator_column](https://www.tensorflow.org/api_docs/python/tf/feature_column/indicator_column)
3. [crossed_column](https://www.tensorflow.org/api_docs/python/tf/feature_column/crossed_column)
4. [embedding_column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column)
```
def create_feature_columns():
    wide_columns = []
    deep_columns = []

    for column in NUMERIC_FEATURE_NAMES:
        # Create numeric columns.
        numeric_column = tf.feature_column.numeric_column(column)
        deep_columns.append(numeric_column)

    for column in CATEGORICAL_FEATURE_WITH_VOCABULARY:
        # Create categorical columns with vocabulary.
        vocabulary = CATEGORICAL_FEATURE_WITH_VOCABULARY[column]
        categorical_column = tf.feature_column.categorical_column_with_vocabulary_list(
            column, vocabulary)
        wide_columns.append(categorical_column)

        # Create embeddings of the categorical columns.
        embed_size = int(math.sqrt(len(vocabulary)))
        embedding_column = tf.feature_column.embedding_column(
            categorical_column, embed_size)
        deep_columns.append(embedding_column)

    for column in CATEGORICAL_FEATURE_WITH_HASH_BUCKETS:
        # Create categorical columns with hashing.
        hash_columns = tf.feature_column.categorical_column_with_hash_bucket(
            column,
            hash_bucket_size=CATEGORICAL_FEATURE_WITH_HASH_BUCKETS[column])
        wide_columns.append(hash_columns)

        # Create indicators for the hashed columns.
        indicator_column = tf.feature_column.indicator_column(hash_columns)
        deep_columns.append(indicator_column)

    # Create bucketized column.
    age_bucketized = tf.feature_column.bucketized_column(
        deep_columns[0],  # the numeric 'age' column created first above
        boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60]
    )
    wide_columns.append(age_bucketized)

    # Create crossed column.
    education_X_occupation = tf.feature_column.crossed_column(
        ['education', 'workclass'], hash_bucket_size=int(1e4))
    wide_columns.append(education_X_occupation)

    # Create embeddings for the crossed column.
    education_X_occupation_embedded = tf.feature_column.embedding_column(
        education_X_occupation, dimension=10)
    deep_columns.append(education_X_occupation_embedded)

    return wide_columns, deep_columns
wide_columns, deep_columns = create_feature_columns()
print("")
print("Wide columns:")
for column in wide_columns:
    print(column)

print("")
print("Deep columns:")
for column in deep_columns:
    print(column)
```
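The `embed_size = int(math.sqrt(len(vocabulary)))` line above follows a common rule of thumb: embedding dimension ≈ √(vocabulary size). In isolation, with the vocabulary sizes from the metadata cell:

```python
import math

# The sqrt heuristic used in create_feature_columns:
# embedding dimension ~ sqrt(vocabulary size).
vocab_sizes = {"workclass": 9, "education": 16, "relationship": 6}
embed_dims = {name: int(math.sqrt(n)) for name, n in vocab_sizes.items()}
print(embed_dims)
```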
## 3. Instantiate a [Wide and Deep Estimator](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNLinearCombinedClassifier)
<br/>
<img valign="middle" src="images/dnn-wide-deep.jpeg">
```
def create_estimator(params, run_config):
    wide_columns, deep_columns = create_feature_columns()

    estimator = tf.estimator.DNNLinearCombinedClassifier(
        n_classes=len(TARGET_LABELS),
        label_vocabulary=TARGET_LABELS,
        weight_column=WEIGHT_COLUMN_NAME,
        dnn_feature_columns=deep_columns,
        dnn_optimizer=tf.train.AdamOptimizer(
            learning_rate=params.learning_rate),
        dnn_hidden_units=params.hidden_units,
        dnn_dropout=params.dropout,
        dnn_activation_fn=tf.nn.relu,
        batch_norm=True,
        linear_feature_columns=wide_columns,
        linear_optimizer='Ftrl',
        config=run_config
    )

    return estimator
```
## 4. Implement Train and Evaluate Experiment
<img valign="middle" src="images/tf-estimators.jpeg" width="900">
**Delete** the **model_dir** directory if you don't want a **Warm Start**.
* If it is not deleted and you **change** the model, training will fail with an error.
[TrainSpec](https://www.tensorflow.org/api_docs/python/tf/estimator/TrainSpec)
* Set **shuffle** in the **input_fn** to **True**
* Set **num_epochs** in the **input_fn** to **None**
* Set **max_steps**. One batch (feed-forward pass & backpropagation)
corresponds to 1 training step.
[EvalSpec](https://www.tensorflow.org/api_docs/python/tf/estimator/EvalSpec)
* Set **shuffle** in the **input_fn** to **False**
* Set **num_epochs** in the **input_fn** to **1**
* Set **steps** to **None** if you want to use all the evaluation data.
* Otherwise, set **steps** to the number of batches you want to use for evaluation, and set **shuffle** to True.
* Set **start_delay_secs** to 0 to start evaluation as soon as a checkpoint is produced.
* Set **throttle_secs** to 0 to re-evaluate as soon as a new checkpoint is produced.
```
def run_experiment(estimator, params, run_config,
                   resume=False, train_hooks=None, exporters=None):
    print("Resume training: {}".format(resume))
    print("Epochs: {}".format(epochs))  # epochs and steps_per_epoch are globals, set in the parameters cell below
    print("Batch size: {}".format(params.batch_size))
    print("Steps per epoch: {}".format(steps_per_epoch))
    print("Training steps: {}".format(params.max_steps))
    print("Learning rate: {}".format(params.learning_rate))
    print("Hidden Units: {}".format(params.hidden_units))
    print("Dropout probability: {}".format(params.dropout))
    print("Save a checkpoint and evaluate after {} step(s)".format(run_config.save_checkpoints_steps))
    print("Keep the last {} checkpoint(s)".format(run_config.keep_checkpoint_max))
    print("")

    tf.logging.set_verbosity(tf.logging.INFO)

    if not resume:
        if tf.gfile.Exists(run_config.model_dir):
            print("Removing previous artefacts...")
            tf.gfile.DeleteRecursively(run_config.model_dir)
    else:
        print("Resuming training...")

    # Create train spec.
    train_spec = tf.estimator.TrainSpec(
        input_fn=make_input_fn(
            TRAIN_DATA_FILE,
            batch_size=params.batch_size,
            num_epochs=None,  # Run until max_steps is reached.
            shuffle=True
        ),
        max_steps=params.max_steps,
        hooks=train_hooks
    )

    # Create eval spec.
    eval_spec = tf.estimator.EvalSpec(
        input_fn=make_input_fn(
            EVAL_DATA_FILE,
            batch_size=params.batch_size,
        ),
        exporters=exporters,
        start_delay_secs=0,
        throttle_secs=0,
        steps=None  # Set to limit the number of steps used for evaluation.
    )

    time_start = datetime.utcnow()
    print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
    print(".......................................")

    # Run train and evaluate.
    tf.estimator.train_and_evaluate(
        estimator=estimator,
        train_spec=train_spec,
        eval_spec=eval_spec)

    time_end = datetime.utcnow()
    print(".......................................")
    print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
    print("")

    time_elapsed = time_end - time_start
    print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
```
## Set Parameters and Run Configurations.
* Set **model_dir** in the **run_config**
* If the data **size is known**, training **steps**, with respect to **epochs** would be: **(training_size / batch_size) * epochs**
* By default, a **checkpoint** is saved every 600 secs. That is, the model is **evaluated** only every 10mins.
* To change this behaviour, set one of the following parameters in the **run_config**
* **save_checkpoints_secs**: Save checkpoints every this many **seconds**.
* **save_checkpoints_steps**: Save checkpoints every this many **steps**.
* Set the number of the checkpoints to keep using **keep_checkpoint_max**
```
class Parameters():
    pass
MODELS_LOCATION = 'gs://ksalama-gcs-cloudml/others/models/census'
MODEL_NAME = 'dnn_classifier'
model_dir = os.path.join(MODELS_LOCATION, MODEL_NAME)
os.environ['MODEL_DIR'] = model_dir
TRAIN_DATA_SIZE = 32561
params = Parameters()
params.learning_rate = 0.001
params.hidden_units = [128, 128, 128]
params.dropout = 0.15
params.batch_size = 128
# Set number of steps with respect to epochs.
epochs = 5
steps_per_epoch = int(math.ceil(TRAIN_DATA_SIZE / params.batch_size))
params.max_steps = steps_per_epoch * epochs
run_config = tf.estimator.RunConfig(
tf_random_seed=RANDOM_SEED,
save_checkpoints_steps=steps_per_epoch, # Save a checkpoint after each epoch, evaluate the model after each epoch.
keep_checkpoint_max=3, # Keep the 3 most recently produced checkpoints.
model_dir=model_dir,
save_summary_steps=100, # Summary steps for Tensorboard.
log_step_count_steps=50
)
```
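A worked version of that arithmetic for the sizes used here. One caveat: under Python 2's integer division, `32561 / 128` truncates to 254 before `ceil` is applied; with Python 3's true division the ceiling is taken correctly:

```python
import math

TRAIN_DATA_SIZE = 32561
batch_size = 128
epochs = 5

# One checkpoint (and hence one evaluation) per epoch:
steps_per_epoch = int(math.ceil(TRAIN_DATA_SIZE / batch_size))  # ceil(254.38) = 255
max_steps = steps_per_epoch * epochs                            # 255 * 5 = 1275
print(steps_per_epoch, max_steps)
```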
## Run Experiment
```
if COLAB:
    from tensorboardcolab import *
    TensorBoardColab(graph_path=model_dir)

estimator = create_estimator(params, run_config)
run_experiment(estimator, params, run_config)
print(model_dir)
!gsutil ls {model_dir}
```
## 5. Export your trained model
### Implement serving input receiver function
```
def make_serving_input_receiver_fn():
    inputs = {}
    for feature_name in FEATURE_NAMES:
        dtype = tf.float32 if feature_name in NUMERIC_FEATURE_NAMES else tf.string
        inputs[feature_name] = tf.placeholder(shape=[None], dtype=dtype)
    # What is wrong here?
    return tf.estimator.export.build_raw_serving_input_receiver_fn(inputs)
```
### Export to saved_model
```
export_dir = os.path.join(model_dir, 'export')
# Delete export directory if exists.
if tf.gfile.Exists(export_dir):
tf.gfile.DeleteRecursively(export_dir)
# Export the estimator as a saved_model.
estimator.export_savedmodel(
export_dir_base=export_dir,
serving_input_receiver_fn=make_serving_input_receiver_fn()
)
!gsutil ls gs://ksalama-gcs-cloudml/others/models/census/dnn_classifier/export/1552582374
%%bash
saved_models_base=${MODEL_DIR}/export/
saved_model_dir=$(gsutil ls ${saved_models_base} | tail -n 1)
saved_model_cli show --dir=${saved_model_dir} --all
```
### Test saved_model
```
export_dir = os.path.join(model_dir, 'export')
tf.gfile.ListDirectory(export_dir)[-1]
saved_model_dir = os.path.join(export_dir, tf.gfile.ListDirectory(export_dir)[-1])
print(saved_model_dir)
print("")
predictor_fn = tf.contrib.predictor.from_saved_model(
export_dir = saved_model_dir,
signature_def_key="predict"
)
output = predictor_fn(
{
'age': [34.0],
'workclass': ['Private'],
'education': ['Doctorate'],
'education_num': [10.0],
'marital_status': ['Married-civ-spouse'],
'occupation': ['Prof-specialty'],
'relationship': ['Husband'],
'race': ['White'],
'gender': ['Male'],
'capital_gain': [0.0],
'capital_loss': [0.0],
'hours_per_week': [40.0],
'native_country':['Egyptian']
}
)
print(output)
```
## Export the Model during Training and Evaluation
Saved models are exported under `<model_dir>/export/<folder_name>`.
* [Latest Exporter](https://www.tensorflow.org/api_docs/python/tf/estimator/LatestExporter): exports a model **after each evaluation**.
* specify the **maximum number** of exported models to keep using **exports_to_keep** param.
* [Final Exporter](https://www.tensorflow.org/api_docs/python/tf/estimator/FinalExporter): exports only the very **last** evaluated checkpoint of the model.
* [Best Exporter](https://www.tensorflow.org/api_docs/python/tf/estimator/BestExporter): exports every time the **newly evaluated checkpoint** is **better** than any existing model.
* specify the **maximum number** of exported models to keep using **exports_to_keep** param.
* It uses the **evaluation events** stored under the **eval** folder.
```
def _accuracy_bigger(best_eval_result, current_eval_result):
metric = 'accuracy'
return best_eval_result[metric] < current_eval_result[metric]
params.max_steps = 1000
params.hidden_units = [128, 128]
params.dropout = 0
run_config = tf.estimator.RunConfig(
tf_random_seed=RANDOM_SEED,
save_checkpoints_steps=200,
keep_checkpoint_max=1,
model_dir=model_dir,
log_step_count_steps=50
)
exporter = tf.estimator.BestExporter(
compare_fn=_accuracy_bigger,
event_file_pattern='eval_{}/*.tfevents.*'.format(datetime.utcnow().strftime("%H%M%S")),
name="estimate", # Saved models are exported under /export/estimate/
serving_input_receiver_fn=make_serving_input_receiver_fn(),
exports_to_keep=1
)
estimator = create_estimator(params, run_config)
run_experiment(estimator, params, run_config, exporters = [exporter])
!gsutil ls {model_dir}/export/estimate
```
## 6. Early Stopping
* [stop_if_higher_hook](https://www.tensorflow.org/api_docs/python/tf/contrib/estimator/stop_if_higher_hook)
* [stop_if_lower_hook](https://www.tensorflow.org/api_docs/python/tf/contrib/estimator/stop_if_lower_hook)
* [stop_if_no_increase_hook](https://www.tensorflow.org/api_docs/python/tf/contrib/estimator/stop_if_no_increase_hook)
* [stop_if_no_decrease_hook](https://www.tensorflow.org/api_docs/python/tf/contrib/estimator/stop_if_no_decrease_hook)
```
early_stopping_hook = tf.contrib.estimator.stop_if_no_increase_hook(
estimator,
'accuracy',
max_steps_without_increase=100,
run_every_secs=None,
run_every_steps=500
)
params.max_steps = 1000000
params.hidden_units = [128, 128]
params.dropout = 0
run_config = tf.estimator.RunConfig(
tf_random_seed=RANDOM_SEED,
save_checkpoints_steps=500,
keep_checkpoint_max=1,
model_dir=model_dir,
log_step_count_steps=100
)
run_experiment(estimator, params, run_config, exporters = [exporter], train_hooks=[early_stopping_hook])
```
## 7. Using Distribution Strategy for Utilising Multiple GPUs
```
strategy = None
num_gpus = len([device_name for device_name in tf.contrib.eager.list_devices()
if '/device:GPU' in device_name])
print("GPUs available: {}".format(num_gpus))
if num_gpus > 1:
strategy = tf.distribute.MirroredStrategy()
params.batch_size = int(math.ceil(params.batch_size / num_gpus))
run_config = tf.estimator.RunConfig(
tf_random_seed=RANDOM_SEED,
save_checkpoints_steps=200,
model_dir=model_dir,
train_distribute=strategy
)
estimator = create_estimator(params, run_config)
run_experiment(estimator, params, run_config)
```
## 8. Extending a Premade Estimator
### Add an evaluation metric
* [tf.metrics](https://www.tensorflow.org/api_docs/python/tf/metrics)
* [tf.contrib.estimator.add_metrics](https://www.tensorflow.org/api_docs/python/tf/contrib/estimator/add_metrics)
```
def metric_fn(labels, predictions):
metrics = {}
label_index = tf.contrib.lookup.index_table_from_tensor(tf.constant(TARGET_LABELS)).lookup(labels)
one_hot_labels = tf.one_hot(label_index, len(TARGET_LABELS))
metrics['micro_accuracy'] = tf.metrics.mean_per_class_accuracy(
labels=label_index,
predictions=predictions['class_ids'],
num_classes=2)
metrics['f1_score'] = tf.contrib.metrics.f1_score(
labels=one_hot_labels,
predictions=predictions['probabilities'])
return metrics
params.max_steps = 1
estimator = create_estimator(params, run_config)
estimator = tf.contrib.estimator.add_metrics(estimator, metric_fn)
run_experiment(estimator, params, run_config)
```
### Add Forward Features
* [tf.contrib.estimator.forward_features](https://www.tensorflow.org/api_docs/python/tf/contrib/estimator/forward_features)
This is very useful for batch prediction, in order to match instances to their predictions.
```
estimator = tf.contrib.estimator.forward_features(estimator, keys="row_identifier")
def make_serving_input_receiver_fn():
inputs = {}
for feature_name in FEATURE_NAMES:
dtype = tf.float32 if feature_name in NUMERIC_FEATURE_NAMES else tf.string
inputs[feature_name] = tf.placeholder(shape=[None], dtype=dtype)
processed_inputs,_ = process_features(inputs, None)
processed_inputs['row_identifier'] = tf.placeholder(shape=[None], dtype=tf.string)
return tf.estimator.export.build_raw_serving_input_receiver_fn(processed_inputs)
export_dir = os.path.join(model_dir, 'export')
if tf.gfile.Exists(export_dir):
tf.gfile.DeleteRecursively(export_dir)
estimator.export_savedmodel(
export_dir_base=export_dir,
serving_input_receiver_fn=make_serving_input_receiver_fn()
)
%%bash
saved_models_base=${MODEL_DIR}/export/
saved_model_dir=$(gsutil ls ${saved_models_base} | tail -n 1)
saved_model_cli show --dir=${saved_model_dir} --all
export_dir = os.path.join(model_dir, 'export')
tf.gfile.ListDirectory(export_dir)[-1]
saved_model_dir = os.path.join(export_dir, tf.gfile.ListDirectory(export_dir)[-1])
print(saved_model_dir)
print("")
predictor_fn = tf.contrib.predictor.from_saved_model(
export_dir = saved_model_dir,
signature_def_key="predict"
)
output = predictor_fn(
{ 'row_identifier': ['key0123'],
'age': [34.0],
'workclass': ['Private'],
'education': ['Doctorate'],
'education_num': [10.0],
'marital_status': ['Married-civ-spouse'],
'occupation': ['Prof-specialty'],
'relationship': ['Husband'],
'race': ['White'],
'gender': ['Male'],
'capital_gain': [0.0],
'capital_loss': [0.0],
'hours_per_week': [40.0],
'native_country':['Egyptian']
}
)
print(output)
```
## 9. Adaptive learning rate
* [exponential_decay](https://www.tensorflow.org/api_docs/python/tf/train/exponential_decay)
* [cosine_decay](https://www.tensorflow.org/api_docs/python/tf/train/cosine_decay)
* [linear_cosine_decay](https://www.tensorflow.org/api_docs/python/tf/train/linear_cosine_decay)
* [cosine_decay_restarts](https://www.tensorflow.org/api_docs/python/tf/train/cosine_decay_restarts)
* [polynomial decay](https://www.tensorflow.org/api_docs/python/tf/train/polynomial_decay)
* [piecewise_constant_decay](https://www.tensorflow.org/api_docs/python/tf/train/piecewise_constant_decay)
```
def create_estimator(params, run_config):
wide_columns, deep_columns = create_feature_columns()
def _update_optimizer(initial_learning_rate, decay_steps):
# learning_rate = tf.train.exponential_decay(
# initial_learning_rate,
# global_step=tf.train.get_global_step(),
# decay_steps=decay_steps,
# decay_rate=0.9
# )
learning_rate = tf.train.cosine_decay_restarts(
initial_learning_rate,
tf.train.get_global_step(),
first_decay_steps=50,
t_mul=2.0,
m_mul=1.0,
alpha=0.0,
)
tf.summary.scalar('learning_rate', learning_rate)
return tf.train.AdamOptimizer(
learning_rate=learning_rate)  # use the decayed rate, not the initial one
estimator = tf.estimator.DNNLinearCombinedClassifier(
n_classes=len(TARGET_LABELS),
label_vocabulary=TARGET_LABELS,
weight_column=WEIGHT_COLUMN_NAME,
dnn_feature_columns=deep_columns,
dnn_optimizer=lambda: _update_optimizer(params.learning_rate, params.max_steps),
dnn_hidden_units=params.hidden_units,
dnn_dropout=params.dropout,
batch_norm=True,
linear_feature_columns=wide_columns,
linear_optimizer='Ftrl',
config=run_config
)
return estimator
params.learning_rate = 0.1
params.max_steps = 1000
run_config = tf.estimator.RunConfig(
tf_random_seed=RANDOM_SEED,
save_checkpoints_steps=200,
model_dir=model_dir,
)
if COLAB:
from tensorboardcolab import *
TensorBoardColab(graph_path=model_dir)
estimator = create_estimator(params, run_config)
run_experiment(estimator, params, run_config)
```
## License
Author: Khalid Salama
---
**Disclaimer**: This is not an official Google product. The sample code is provided for educational purposes.
---
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
### Web-based Tools for Teaching and Research: Jupyter Notebooks and GitHub
A workshop of the Academy of Data Sciences at the College of Science
## Jupyter Notebooks & Python
Susanna Werth
---
# Lesson 2: Storing Multiple Values in Lists
This lesson is following the carpentry tutorial "Programming with Python":
https://software-carpentry.org/lessons/
---
## Python Lists
We create a **list** by putting values inside **square brackets** and separating the values with commas:
```
odds = [1, 3, 5, 7]
print('odds are:', odds)
```
We can access elements of a list using **indices** – numbered positions of elements in the list. These positions are numbered starting at 0, so the first element has an index of 0:
```
print('first element:', odds[0])
print('element at index 3:', odds[3])
print('"-1" element:', odds[-1])
```
Yes, we can use negative numbers as indices in Python. When we do so, the index `-1` gives us the last element in the list, `-2` the second to last, and so on.
Because of this, `odds[3]` and `odds[-1]` point to the same element here.
### Mutability of objects
There is one important difference between lists and strings, called **mutability**: we can change the values in a list, but we cannot change individual characters in a string.
For example:
```
names = ['Curie', 'Darwing', 'Turing']  # typo in Darwin's name
names[1] = 'Darwin'                     # correct the name
print('final value of names:', names)
```
works, but:
```
name = 'Darwin'
name[0] = 'd'   # TypeError: 'str' object does not support item assignment
```
You receive a `TypeError`, because `string` object types are immutable, and they cannot be changed.
#### Note
Strings and numbers are **immutable**. Lists and arrays (next lesson) are **mutable**: we can modify them after they have been created.
For lists, we can change individual elements, append new elements, or reorder the whole list. For some operations, like sorting, we can choose whether to use a function that modifies the data in-place or a function that returns a modified copy and leaves the original unchanged.
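Python's two sorting options illustrate this choice: `list.sort()` modifies the list in place, while the built-in `sorted()` returns a modified copy and leaves the original unchanged.

```
values = [3, 1, 2]

new_list = sorted(values)   # returns a sorted *copy*; the original is unchanged
print(values, new_list)     # [3, 1, 2] [1, 2, 3]

values.sort()               # sorts the list *in place* and returns None
print(values)               # [1, 2, 3]
```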
Be careful when modifying data in-place. If two variables refer to the same list, and you modify the list value, it will change for both variables!
```
# assigning a list
salsa = ['peppers', 'onions', 'cilantro', 'tomatoes']
# copy salsa to my_salsa
my_salsa = salsa
# now, my_salsa and salsa point to the *same* list data in memory
```
If you want variables with mutable values to be independent, you must make an **explicit copy** of the value when you assign it.
```
# assigning a list
salsa = ['peppers', 'onions', 'cilantro', 'tomatoes']
# make an explicit copy of the list
my_salsa = list(salsa)
```
Because of pitfalls like this, code which modifies data in place can be more difficult to understand.
However, it is often far more efficient to modify a large data structure in place than to create a modified copy for every small change.
You should consider both of these aspects when writing your code.
### Comment your code!
Everything in a line of code following the ‘#’ symbol is a **comment** that is ignored by Python. Comments allow programmers to leave explanatory notes for other programmers or their future selves.
---
## Slicing
An index like `[2]` selects a single element of a list, but we can select whole sections as well. For example, we can select the first two values of a list with `odds[0:2]`.
The **slice** `0:2` means, “Start at index 0 and go up to, but not including, index 2”.
The up-to-but-not-including takes a bit of getting used to.
Programming languages like **Fortran, MATLAB and R start counting at 1** because that’s what human beings have done for thousands of years.
Languages in the **C family (including C++, Java, Perl, and Python) count from 0** because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays.
As a result, if we have a **list with N elements in Python, its indices go from 0 to N-1**.
One way to remember the rule is that the index is how many steps we have to take from the start to get the item we want, or in other words, **indices indicate the place where the list is cut**.
The difference between the upper and lower bounds is the number of values in the slice.
<img src="img/slicing_lists_python.png" alt="Slicing" title="Slicing Concept, (Lutz 2013), Figure 5-1" width="500" />
*Slicing Concept, (Lutz 2013), Figure 5-1*
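Running that slice on a concrete list (the `odds` list used later in this lesson):

```
odds = [1, 3, 5, 7, 9, 11]

first_two = odds[0:2]   # start at index 0, stop before index 2
print(first_two)        # [1, 3]
```

Note that the slice contains 2 - 0 = 2 values, matching the rule above.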
We don’t have to start slices at 0:
We also don’t have to include the upper and lower bound on the slice.
- if we don’t include the lower bound, Python uses 0 by default;
- if we don’t include the upper, the slice runs to the end of the axis, and
- if we don’t include either (i.e., if we use ‘:’ on its own), the slice includes everything.
Try below:
Or we can slice from the end:
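A sketch of these variations, again using the `odds` list:

```
odds = [1, 3, 5, 7, 9, 11]

print(odds[2:])    # [5, 7, 9, 11] -- no upper bound: runs to the end
print(odds[:3])    # [1, 3, 5]     -- no lower bound: starts at 0
print(odds[:])     # the whole list
print(odds[-2:])   # [9, 11]       -- slicing from the end: the last two elements
```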
---
## Heterogeneous Lists & List Methods
Lists in Python can contain elements of different types.
Example:
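A minimal example (the values here are illustrative): one list holding an int, a float, and a string.

```
sample_ages = [10, 12.5, 'Unknown']   # an int, a float, and a string in one list

print(type(sample_ages[0]))   # <class 'int'>
print(type(sample_ages[1]))   # <class 'float'>
print(type(sample_ages[2]))   # <class 'str'>
```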
There are many ways to change the contents of lists besides assigning new values to individual elements.
For example we can use type specific methods for lists:
```
# method .append() adds an element to the end of the list
odds.append(9)
# method .pop() removes and returns the last element
odds.pop()
# method .reverse() reverses the list in place
odds.reverse()
print(odds)
```
We just made use of mutability of lists and modified the list `odds` in place.
You can find some more list methods here: https://www.tutorialspoint.com/python/python_lists.htm
---
## Nested Lists
Since a **list can contain any Python variables** (objects), it can even contain other lists.
For example, we could represent the products in the shelves of a small grocery shop:
```
x = [['pepper', 'zucchini', 'onion'],
['cabbage', 'lettuce', 'garlic'],
['apple', 'pear', 'banana']]
```
Here is a visual example of how indexing a list of lists `x` works:
<img src="img/indexing_lists_python.png" alt="Nested Lists" title="Nested Lists" width="600" />
Using the previously declared list x, these would be the results of the index operations shown in the image:
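A sketch of the index operations the image illustrates, run on `x` directly:

```
x = [['pepper', 'zucchini', 'onion'],
     ['cabbage', 'lettuce', 'garlic'],
     ['apple', 'pear', 'banana']]

print(x[0])      # the first sub-list: ['pepper', 'zucchini', 'onion']
print(x[0][0])   # the first element of the first sub-list: 'pepper'
print(x[-1][-1]) # the last element of the last sub-list: 'banana'
```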
---
## Reflection
You have a list of odd numbers
```
odds = [1, 3, 5, 7, 9, 11] # defines the list
```
How can you print out only the second to second last element from the list `odds`?
Type the code below.
## Challenge
How can you print out a subset of every second element from the list `odds`?
Any idea?
---
# Summary
<div class="alert alert-info">
**Key Points**
- `[value1, value2, value3, ...]` creates a list.
- Lists can contain any Python object, including lists (i.e., list of lists).
- Lists are indexed and sliced with square brackets (e.g., list[0] and list[2:9]), in the same way as strings and arrays.
- Lists are mutable (i.e., their values can be changed in place).
- Strings are immutable (i.e., the characters in them cannot be changed).
- Use `# some kind of explanation` to add comments to programs.
</div>
# Lecture 3: Basics of Python
CBIO (CSCI) 4835/6835: Introduction to Computational Biology
## Overview and Objectives
In this lecture, I'll introduce the Python programming language and how to interact with it; aka, the proverbial [Hello, World!](https://en.wikipedia.org/wiki/%22Hello,_World!%22_program) lecture. By the end, you should be able to:
- Recall basic history and facts about Python (relevance in scientific computing, comparison to other languages)
- Print arbitrary strings in a Python environment
- Create and execute basic arithmetic operations
- Understand and be able to use variable assignment and update
- Define variables of string and numerical types, convert between them, and use them in basic operations
- Explain the different variants of typing in programming languages, and what "duck-typing" in Python does
- Understand how Python uses whitespace in its syntax
- Demonstrate how smart variable-naming and proper use of comments can effectively document your code
## Part 1: Background
Python as a language was implemented from the start by Guido van Rossum. What was originally something of a [snarkily-named hobby project to pass the holidays](https://www.python.org/doc/essays/foreword/) turned into a huge open source phenomenon used by millions.

### Python's history
The original project began in 1989.
- Release of Python 2.0 in 2000
- Release of Python 3.0 in 2008
- Latest stable release of these branches are **2.7.15**--which Guido *emphatically* insists is the final, final, final release of the 2.x branch--and **3.6** (which is what we're using in this course)
Wondering why a 2.x branch has survived *almost two decades* after its initial release?
Python 3 was designed as backwards-incompatible; a good number of syntax changes and other internal improvements made the majority of code written for Python 2 unusable in Python 3.
This made it difficult for power users and developers to upgrade, particularly when they relied on so many third-party libraries for much of the heavy-lifting in Python.
Until these third-party libraries were themselves converted to Python 3 (really only in the past couple years!), most developers stuck with Python 2.
### Python, the Language
Python is an **intepreted** language.
- Contrast with **compiled** languages
- Performance, ease-of-use
- Modern intertwining and blurring of compiled vs interpreted languages
Python is a very **general** language.
- Not designed as a specialized language for performing a specific task. Instead, it relies on third-party developers to provide these extras.

Instead, as [Jake VanderPlas](http://jakevdp.github.io/) put it:
> "Python syntax is the glue that holds your data science code together. As many scientists and statisticians have found, Python excels in that role because it is powerful, intuitive, quick to write, fun to use, and above all extremely useful in day-to-day data science tasks."
### Zen of Python
One of the biggest reasons for Python's popularity is its overall simplicity and ease of use.
Python was designed *explicitly* with this in mind!
It's so central to the Python ethos, in fact, that it's baked into every Python installation. Tim Peters wrote a "poem" of sorts, *The Zen of Python*, that anyone with Python installed can read.
To see it, just type one line of Python code:
```
import this
```
Lack of any discernible meter or rhyming scheme aside, it nonetheless encapsulates the spirit of the Python language. These two lines are particular favorites of mine:
<pre>
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
</pre>
Line 1:
- If you wrote the code and can't explain it\*, go back and fix it.
- If you didn't write the code and can't explain it, get the person who wrote it to fix it.
Line 2:
- "Easy to explain": necessary and sufficient for good code?
Don't you just feel so zen right now?
## Part 2: Hello, World!
Enough reading, time for some coding, amirite?
So what does it take to write your first program, your "Hello, World!"? Pound-include iostream dot h? Import java dot io? Define a main function with command-line parameters? Wrap the whole thing in a class?
```
print("Hello, world!")
```
Yep! That's all there is to it.
Just for the sake of being thorough, though, let's go through this command in painstaking detail.
**Functions**: `print()` is a function.
- Functions take input, perform an operation on it, and give back (return) output.
You can think of it as a direct analog of the mathematical term, $f(x) = y$. In this case, $f$ is the function; $x$ is the input, and $y$ is the output.
Later in the course, we'll see how to create our own functions, but for now we'll make use of the ones Python provides us by default.
**Arguments**: the input to the function.
- Interchangeable with "parameters".
In this case, there is only one argument to `print()`: a string of text that we want printed out. This text, in Python parlance, is called a "string". I can only presume it is so named because it is a *string* of individual characters.
We can very easily change the argument we pass to `print()`:
```
print("This is not the same argument as before.")
```
We could also print out an empty string, or even no string at all.
```
print("") # this is an empty string
print() # this is just nothing
```
In both cases, the output looks pretty much the same...because it is: just a blank line.
- After `print()` finishes printing your input, it prints one final character--a *newline*.
This is basically the programmatic equivalent of hitting `Enter` at the end of a line, moving the cursor down to the start of the next line.
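One way to see this trailing newline in action is `print()`'s optional `end` parameter, which replaces it:

```
print("Hello,", end="")   # end="" suppresses the trailing newline
print(" world!")          # so this continues on the same line
print("next line")        # default end="\n" starts a new line afterwards
```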
### What are "strings"?
Briefly--a type of data in Python used to represent text: a sequence of characters (letters, digits, punctuation, spaces, and so on).
Look for the double-quotes (or even single-quotes; unlike the command-line, they do the same thing in Python):
```
"5" # This is a string.
'5' # This is also a string.
5 # This is NOT a string.
```
### What are the hashtags? (`#`)
They indicate *comments*.
- Comments are lines in your program that the language ignores entirely.
- When you type a `#` in Python, everything *after* that symbol on the same line is ignored.
They're there purely for the coders as a way to put documentation and clarifying statements directly into the code. It's a practice I **strongly** encourage everyone to do--even just to remind yourself what you were thinking! (I can't count the number of times I've worked on code, set it aside for a month, then come back to it and had absolutely no idea what I was doing.)
## Part 3: Beyond "Hello, World!"
Ok, so Python can print strings. That's cool. Can it do anything that's actually useful?
Python has a lot of built-in objects and data structures that are very useful for more advanced operations--and we'll get to them soon enough!--but for now, you can use Python to perform basic arithmetic operations.
Addition, subtraction, multiplication, division--they're all there. You can use it as a glorified calculator:
```
3 + 4
3 - 4
3 * 4
3 / 4
```
Python respects order of operations, too, performing them as you'd expect:
```
3 + 4 * 6 / 2 - 5
(3 + 4) * 6 / (2 - 5)
```
Python even has a really cool exponent operator, denoted by using two stars right next to each other:
```
2 ** 3 # 2 raised to the 3rd power
3 ** 2 # 3 squared
25 ** (1 / 2) # Square root of 25
```
Now for something really neat:
```
x = 2
x * 3
```
This is an example of using Python *variables*.
- Variables store and maintain values that can be updated and manipulated as the program runs.
- You can name a variable whatever you like, as long as it doesn't start with a number ("`5var`" would be illegal, but "`var5`" would be fine) or conflict with reserved Python words (like `print`).
Here's an operation that involves two variables:
```
x = 2
y = 3
x * y
```
We can assign the result of operations with variables to other variables:
```
x = 2
y = 3
z = x * y
print(z)
```
The use of the equals sign `=` is called the *assignment operator*.
- "Assignment" takes whatever value is being computed on the _right-hand_ side of the equation and **assigns** it to the variable on the _left-hand_ side.
- Multiplication (`*`), Division (`/`), Addition (`+`), and Subtraction (`-`) are also *operators*.
What happens if I perform an assignment on something that can't be assigned a different value...such as, say, a number?
```
x = 2
y = 3
```
<pre>5 = x * y # will this work?</pre>
**CRASH!**
Ok, not really; Python technically did what it was supposed to do. It threw an error, alerting you that something in your program didn't work for some reason. In this case, the error message is `can't assign to literal`.
Parsing out the `SyntaxError` message:
- `Error` is an obvious hint. `Syntax` gives us some context.
- We did something wrong that involves Python's syntax, or the structure of its language.
The "`literal`" being referred to is the number 5 in the statement: `5 = x * y`
- We are attempting to assign the result of the computation of `x * y` to the number 5
- However, 5 is known internally to Python as a "literal"
- 5 is literally 5; you can't change the value of 5! (5 = 8? NOPE)
So we can't assign values to numbers. What about assigning values to a variable that's used in the very same calculation?
```
x = 2
y = 3
x = x * y
print(x)
```
This works just fine! In fact, it's more than fine--this is such a standard operation, it has its own operator:
```
x = 2
y = 3
x *= y
print(x)
```
Out loud, it's pretty much what it sounds like: "x times equals y". Put another way, you're _updating_ or _reassigning_ an existing variable with a new value. **This happens A LOT.**
This is an instance of a shorthand operator.
- We multiplied `x` by `y` and stored the product in `x`, effectively updating it.
- There are many instances where you'll want to increment a variable: for example, when counting how many of some "thing" you have.
- All the other operators have the same shorthand-update versions: `+=` for addition, `-=` for subtraction, and `/=` for division.
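A quick sketch of those shorthand operators in sequence:

```
count = 10
count += 2    # same as count = count + 2  -> 12
count -= 4    # same as count = count - 4  -> 8
count /= 2    # same as count = count / 2  -> 4.0 (division yields a float in Python 3)
print(count)  # 4.0
```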
## Part 4: Variables and Types
We've seen how we can create variables to assign, update, and combine values using specific operators. It's important to note that all variables have **types** that will dictate much (if not all) of the operations you can perform on and with that variable.
There are two critical components to every single variable you will ever create in Python: the variable's *name* and its *type*.
```
x = 2
```
It's easy to determine the name of the variable; in this case, the name is $x$. It can be a bit more complicated to determine the type of the variable, as it depends on the value the variable is storing. In this case, it's storing the number 2. Since there's no decimal point on the number, we call this number an *integer*, or *int* for short.
### Numerical types
What other types of variables are there?
```
y = 2.0
```
`y` is assigned a value of 2.0: it is referred to as a *floating-point* variable, or *float* for short. Any number with a decimal (i.e. fractional number) is considered a float.
Floats do the heavy-lifting of much of the computation in data science. Whenever you're computing probabilities or fractions or normalizations, floats are the types of variables you're using. In general, you tend to use floats for heavy computation, and ints for counting things.
There is an explicit connection between ints and floats. Let's illustrate with an example:
```
x = 2
y = 3
z = x / y
```
In this case, we've defined two variables `x` and `y` and assigned them integer values, so they are both of type `int`. However, we've used them both in a division operation and assigned the result to a variable named `z`. If we were to check the type of `z`, what type do you think it would be?
```
type(z)
```
**Why?**
In general, an operation involving two things of one type will give you a result that's the same type. However, in cases where a decimal number is outputted, Python implicitly "promotes" the variable storing the result.
Changing the type of a variable is known as **casting**, and it can take two forms: implicit casting (as we just saw), or explicit casting, where you (the programmer) tell Python to change the type of a variable.
### Casting
Implicit casting is done in such a way as to try to abide by "common sense": if you're dividing two numbers, you would expect to receive a fraction, or decimal, on the other end. If you're multiplying two numbers, the type of the output depends on the types of the inputs--two floats multiplied will produce a float, while two ints multiplied will produce an int.
```
x = 2
y = 3
z = x * y
type(z)
x = 2.5
y = 3.5
z = x * y
type(z)
```
In both cases, the type of the inputs dictated the type of the outputs.
Explicit casting, on the other hand, is a little trickier. In this case, it's *you the programmer* who are making explicit (hence the name) what type you want your variables to be.
Python has a couple special built-in functions for performing explicit casting on variables, and they're named what you would expect: `int()` for casting a variable as an int, and `float()` for casting it as a float.
```
x = 2.5
y = 3.5
z = x * y
print(z) # What type is this?
print(int(z)) # Now what type is this?
```
With explicit casting, you are telling Python to override its default behavior. In doing so, it has to make some decisions as to how to do so in a way that still makes sense.
When you cast a `float` to an `int`, some information is lost; namely, the decimal. So the way Python handles this is by quite literally **discarding the entire decimal portion**.
In this way, even if your number was 9.999999999 and you performed an explicit cast to `int()`, Python would hand you back a 9.
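A sketch of this truncation behavior:

```
z = 9.999999999
print(int(z))      # 9  -- the decimal portion is discarded, not rounded
print(int(-2.7))   # -2 -- truncation moves toward zero, not downward
```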
### Language typing mechanisms
Python as a language is known as *dynamically typed*. This means you don't have to specify the type of the variable when you define it; rather, Python infers the type based on how you've defined it and how you use it.
As we've already seen, Python creates a variable of type `int` when you assign it an integer number like 5, and it automatically converts the type to a `float` whenever the operations produce decimals.
Other languages, like C++ and Java, are *statically typed*, meaning in addition to naming a variable when it is declared, the programmer must also explicitly state the type of the variable.
Pros and cons of *dynamic* typing (as opposed to *static* typing)?
Pros:
- Streamlined
- Flexible
Cons:
- Easier to make mistakes
- Potential for malicious bugs
All languages have some form of *type-checking*: that is, ensuring that the types of the variables are compatible with the operations you're performing on them.
After all--you can't multiply strings together, right? What would the output of `"this" * "that"` be? Nothing that makes any sense, surely. So [most] languages simply don't allow it, by way of type-checking.
For type-checking, Python implements what is known as [duck typing](https://en.wikipedia.org/wiki/Duck_typing): if it walks like a duck and quacks like a duck, it's a duck.
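A minimal sketch of duck typing (the class and function names here are purely illustrative): the function never checks types, it only cares that its argument can quack.

```
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I'm quacking!"

def make_it_quack(thing):
    # No isinstance() check: anything with a .quack() method is accepted
    return thing.quack()

print(make_it_quack(Duck()))     # Quack!
print(make_it_quack(Person()))   # I'm quacking!
```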
This brings us to a concept known as **type safety**. A particularly fun example is known as a roundoff error, or more specifically to our case, a [representation error](https://en.wikipedia.org/wiki/Round-off_error#Representation_error). This occurs when we are attempting to represent a value for which we simply don't have enough precision to accurately store.
- When there are too many decimal values to represent (usually because the number we're trying to store is very, very small), we get an *underflow error*.
- When there are too many whole numbers to represent (usually because the number we're trying to store is very, very large), we get an *overflow error*.
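Python's 64-bit floats let us see both failure modes directly (these particular literals are just chosen to sit past the limits):

```python
print(1e400)      # inf -- too large for a 64-bit float: overflow
print(1e-400)     # 0.0 -- too small for a 64-bit float: underflow
print(0.1 + 0.2)  # 0.30000000000000004 -- a classic representation error
# Python's ints, by contrast, have arbitrary precision and never overflow:
print(2 ** 100)
```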
One of the most popular examples of an overflow error was the [Y2K bug](https://en.wikipedia.org/wiki/Year_2000_problem). In this case, many computer systems internally stored the year using only its last two digits. Thus, when the year 2000 rolled around, the two digits representing the year rolled over to 00. A similar problem is [anticipated for 2038](https://en.wikipedia.org/wiki/Year_2038_problem), when the 32-bit time counters on many Unix machines will overflow and wrap around.
In these cases, and especially in dynamically typed languages like Python, it is very important to know what types of variables you're working with and what the limitations of those types are.
### String types
Strings, as we've also seen previously, are the variable types used in Python to represent text.
```
x = "this is a string"
type(x)
```
Unlike numerical types like ints and floats, you can't really perform arithmetic operations on strings, *with one exception*:
```
x = "some string"
y = "another string"
z = x + y
print(z)
```
The `+` operator, when applied to strings, is called *string concatenation*.
This means that it glues or *concatenates* two strings together to create a new string. In this case, we took the string in `x` and concatenated that to the string in `y`, storing the whole thing in a final string `z`.
It's a blunt operation, but it behaves fairly intuitively.
That said, don't try to subtract, multiply, or divide strings.
**And don't add numbers that happen to be strings together, either.** This is where _knowing the type_ of your variables is very important!
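Here's what happens when you forget the types involved--and what happens when you try one of the forbidden operations:

```python
print("1" + "2")  # '12' -- concatenation, NOT 3!
try:
    "this" - "that"
except TypeError as err:
    print("Subtraction isn't defined for strings:", err)
```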
Casting, however, is alive and well with strings. In particular, if you know the string you're working with is a *string representation of a number*, you can cast it from a string to a numeric type:
```
s = "2" # Don't accidentally add this!!! Since it's a string, it would CONCATENATE.
print(s)
print(type(s))
x = int(s) # Now, we'll cast the string to an integer.
print(x)
print(type(x))
```
And back again:
```
x = 2 # Start with an integer.
print(x)
print(type(x))
s = str(x) # Cast it back into a string.
print(s)
print(type(s))
```
### Variable Comparisons and Boolean Types
We can also compare variables! By comparing variables, we can ask whether two things are equal, or greater than or less than some other value.
This sort of true-or-false comparison gives rise to yet another type in Python: the *boolean* type. A variable of this type takes only two possible values: `True` or `False`.
Comparisons work much like addition `+`, subtraction `-`, multiplication `*`, or division `/`, except instead of getting back a numeric type (`int` or `float`), we get back a *boolean* type that indicates whether the comparison was `True` or `False`.
Let's say we have two numeric variables, `x` and `y`, and want to check if they're equal. To do this, we use the _comparison operator_, the double-equals `==` sign:
```
x = 2
y = 2
z = (x == y) # COMPARE whether x is equal to y, and store the resulting True/False in z
print(z)
```
The `==` sign is the equality comparison operator, and it will return `True` or `False` depending on whether or not the two values are exactly equal. This works for strings as well:
```
s1 = "a string"
s2 = "a string"
z = (s1 == s2)
print(z)
s3 = "another string"
z = (s1 == s3)
print(z)
```
**Be careful with this operator.** It looks a lot like the assignment operator, `=`. But it is very different.
- With _assignment_, you are **ordering**: put the value on the right into the variable on the left
- With _comparison_, you are **asking**: are these two things exactly equal?
In addition to asking if things are equal, we can also ask if variables are less than or greater than each other, using the `<` and `>` operators, respectively.
```
x = 1
y = 2
z = (x < y) # Just as before, the result of this operator is a True/False.
print(z)
z = (x > y)
print(z)
```
In a small twist of relative magnitude comparisons, we can also ask if something is less than *or equal to* or greater than *or equal to* some other value. To do this, in addition to the comparison operators `<` or `>`, we also add an equal sign:
```
x = 2
y = 3
z = (x <= y) # Less than or equal to
print(z)
x = 3
z = (x >= y) # Greater than or equal to
print(z)
x = 3.00001
z = (x <= y) # Less than or equal to
print(z)
```
Interestingly, these operators also work for strings. Be careful, though: their behavior may be somewhat unexpected until you know what's actually going on: strings are compared character by character, using the numeric codes that represent each character.
```
s1 = "some string"
s2 = "another string"
z = (s1 > s2)
print(z)
s1 = "Some string"
z = (s1 > s2)
print(z)
```
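The trick: uppercase letters have smaller character codes than lowercase ones, so any capitalized string sorts "before" a lowercase one. The built-in `ord` function reveals those codes:

```python
print(ord('S'), ord('a'), ord('s'))      # 83 97 115 -- uppercase comes first
print("some string" > "another string")  # True:  's' (115) > 'a' (97)
print("Some string" > "another string")  # False: 'S' (83)  < 'a' (97)
```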
## Part 5: Naming Conventions and Documentation
[There are some rules](https://www.youtube.com/watch?v=yUfec8S10MI) regarding what can and cannot be used as a variable name.
Beyond those rules, [there are guidelines](https://www.youtube.com/watch?v=jl0hMfqNQ-g).
### Naming Rules
- Names can contain only letters, numbers, and underscores.
All the letters a-z (upper and lowercase), the numbers 0-9, and underscores are at your disposal. Anything else is illegal. No special characters like pound signs, dollar signs, or percents are allowed. Hashtag alphanumerics only.
- Variable names can only *start* with letters or underscores.
Numbers cannot be the first character of a variable name. `message_1` is a perfectly valid variable name; however, `1_message` is not and will throw an error.
- Spaces are not allowed in variable names.
Underscores are how Python programmers tend to "simulate" spaces in variable names, but simply put there's no way to name a variable with multiple words separated by spaces.
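Handily, Python can check the character rules for us via the built-in `str.isidentifier` method (though it won't flag reserved words--the standard `keyword` module handles those):

```python
import keyword

print("message_1".isidentifier())   # True
print("1_message".isidentifier())   # False -- can't start with a number
print("my message".isidentifier())  # False -- no spaces allowed
# isidentifier() doesn't catch reserved words, so check those separately:
print("class".isidentifier(), keyword.iskeyword("class"))  # True True
```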
### Naming Guidelines
These are not hard-and-fast rules, but rather suggestions to help "standardize" code and make it easier to read by people who aren't necessarily familiar with the code you've written.
- Make variable names short, but descriptive.
I've been giving a lot of examples using variables named `x`, `s`, and so forth. **This is bad.** Don't do it--unless, for example, you're defining `x` and `y` to be points in a 2D coordinate axis, or as a counter; one-letter variable names for counters are quite common.
Outside of those narrow use-cases, the variable names should constitute a pithy description that reflects their function in your program. A variable storing a name, for example, could be `name` or even `student_name`, but don't go as far as to use `the_name_of_the_student`.
- Be careful with the lowercase `l` or uppercase `O`.
This is one of those annoying rules that largely only applies to one-letter variables: stay away from using letters that also bear striking resemblance to numbers. Naming your variable `l` or `O` may confuse downstream readers of your code, making them think you're sprinkling 1s and 0s throughout your code.
- Variable names should be all lowercase, using underscores for multiple words.
Java programmers may take umbrage with this point: the convention there is to `useCamelCase` for multi-word variable names.
Since Python takes quite a bit from the C language (and its back-end is implemented in C), it also borrows a lot of C conventions, one of which is to use underscores and all lowercase letters in variable names. So rather than `multiWordVariable`, we do `multi_word_variable`.
The one exception to this rule is when you define variables that are *constant*; that is, their values don't change. In this case, the variable name is usually in all-caps. For example: `PI = 3.14159`.
- Avoid using Python keywords or function names as variables.
This might take some trial-and-error. Basically, if you try to name a variable `print` or `float` or `str`, you'll run into a lot of problems down the road. *Technically* this isn't outlawed in Python, but it will cause a lot of headaches later in your program.
### Self-documenting code
The practice of pithy but precise variable naming strategies is known as "self-documenting code."
We've learned before that we can insert comments into our code to explain things that might otherwise be confusing:
```
# Adds two numbers that are initially strings by converting them to an int and a float,
# then converting the final result to an int and storing it in the variable x.
x = int(int("1345") + float("31.5"))
print(x)
```
Comments are important to good coding style and should be used often for clarification.
However, even more preferable to the liberal use of comments is a good variable naming convention. For instance, instead of naming a variable "x" or "y" or "c", give it a name that describes its purpose.
```
str_length = len("some string")
```
I could've used a comment to explain how this variable was storing the length of the string, but by naming the variable itself in terms of what it was doing, I don't even need such a comment. It's self-evident from the name itself what this variable is doing.
## Part 6: Whitespace in Python
Whitespace (no, not [that Whitespace](https://en.wikipedia.org/wiki/Whitespace_(programming_language))) is important in the Python language.
Some languages like C++ and Java use semi-colons to delineate the end of a single statement. Python, however, does not, but still needs some way to identify when we've reached the end of a statement.
In Python, it's a **newline**--what you get when you press the return key--that denotes the end of a statement.
Returns, tabs, and spaces are all collectively known as "whitespace", and each can drastically change how your Python program runs. Especially when we get into loops, conditionals, and functions, this will become critical and may be the source of many insidious bugs.
For example, the following code won't run:
<pre>
x = 5
    x += 10
</pre>
Python sees the indentation--it's important to Python in terms of delineating blocks of code--but in this case the indentation doesn't make any sense. It doesn't highlight a new function, or a conditional, or a loop. It's just "there", making it unexpected and hence causing the error.
This can be particularly pernicious when writing longer Python programs, full of functions and loops and conditionals, where the indentation of your code is constantly changing. For this reason, I am giving you the following mandate:
**DO NOT MIX TABS AND SPACES!!!**
If you're indenting your code using 2 spaces, *ALWAYS USE SPACES.*
If you're indenting your code using 4 spaces, *ALWAYS USE SPACES.*
If you're indenting your code with a single tab, *ALWAYS USE TABS.*
Mixing the two in the same file will cause **ALL THE HEADACHES**. Your code will crash but will be coy as to the reason why.
## Administrivia
- **How is Assignment 1 going?** Ask on `#questions` if you're having trouble with anything!
- **Not on Slack? Can't access JupyterHub?** Email me and we'll get it sorted out.
- **Next Tuesday we'll have a guest lecture from someone at GACRC.** Some of what we covered in the command line will come up as you learn how to run computational experiments on the GACRC hardware.
## Additional Resources
1. Guido's PyCon 2016 talk on the future of Python: https://www.youtube.com/watch?v=YgtL4S7Hrwo
2. VanderPlas, Jake. *Python Data Science Handbook*. https://github.com/jakevdp/PythonDataScienceHandbook
3. Matthes, Eric. *Python Crash Course*. 2016. ISBN-13: 978-1593276034
4. Grus, Joel. *Data Science from Scratch*. 2015. ISBN-13: 978-1491901427
# 3. Predicting Bike Sharing
In this project, we'll build a neural network and use it to predict daily bike rental ridership.
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Loading and preparing the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights.
```
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
```
## Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
```
rides[:24*10].plot(x='dteday', y='cnt')
```
### Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
```
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
```
### Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
```
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
```
### Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
```
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
```
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
```
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
```
## Time to build the network
Below you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
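As a quick sanity check (this snippet is not part of the project code), a finite-difference approximation confirms that the slope of $f(x) = x$ is 1 everywhere, which is why no extra derivative factor shows up in the output-layer error term:

```python
def f(x):
    # The output layer's activation: the identity function
    return x

h = 1e-6
for x in [-3.0, 0.0, 2.5]:
    slope = (f(x + h) - f(x)) / h  # numerical estimate of the derivative
    print(x, round(slope, 6))      # the slope is 1.0 at every point
```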
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
4. Implement the forward pass in the `run` method.
```
import numpy as np
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
self.activation_function = lambda x : 1.0/(1.0+np.exp(-x))
self.output_function = lambda x: x
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
final_outputs, hidden_outputs = self.forward_pass_train(X)
delta_weights_i_h, delta_weights_h_o = self.backpropagation(final_outputs, hidden_outputs, X, y,
delta_weights_i_h, delta_weights_h_o)
self.update_weights(delta_weights_i_h, delta_weights_h_o, n_records)
def forward_pass_train(self, X):
''' Implement forward pass here
Arguments
---------
X: features batch
'''
### Forward pass ###
hidden_inputs = np.dot(X,self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
final_inputs = np.dot(hidden_outputs,self.weights_hidden_to_output) # signals into final output layer
final_outputs = self.output_function(final_inputs) # signals from final output layer
return final_outputs, hidden_outputs
def backpropagation(self, final_outputs, hidden_outputs, X, y, delta_weights_i_h, delta_weights_h_o):
''' Implement backpropagation
Arguments
---------
final_outputs: output from forward pass
y: target (i.e. label) batch
delta_weights_i_h: change in weights from input to hidden layers
delta_weights_h_o: change in weights from hidden to output layers
'''
### Backward pass ###
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
hidden_error = np.dot(error,self.weights_hidden_to_output.T)
output_error_term = error
hidden_error_term = hidden_error*hidden_outputs*(1-hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term*X[:,None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term*hidden_outputs[:,None]
return delta_weights_i_h, delta_weights_h_o
def update_weights(self, delta_weights_i_h, delta_weights_h_o, n_records):
''' Update weights on gradient descent step
Arguments
---------
delta_weights_i_h: change in weights from input to hidden layers
delta_weights_h_o: change in weights from hidden to output layers
n_records: number of records
'''
self.weights_hidden_to_output += self.lr*delta_weights_h_o/n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr*delta_weights_i_h/n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
hidden_inputs = np.dot(features,self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs,self.weights_hidden_to_output) # signals into final output layer
final_outputs = self.output_function(final_inputs) # signals from final output layer
return final_outputs
#########################################################
# Setting our hyperparameters here
##########################################################
iterations = 3000
learning_rate = 0.7
hidden_nodes = 22
output_nodes = 1
def MSE(y, Y):
return np.mean((y-Y)**2)
```
## Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
```
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```
## Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
### Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
### Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
### Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
```
import sys
####################
### Set the hyperparameters in your myanswers.py file ###
####################
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.iloc[batch].values, train_targets.iloc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim(0,2)
```
## Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
```
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.iloc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```
```
import pandas as pd
import numpy as np
import pickle
import sys
from sklearn.feature_extraction.text import TfidfVectorizer
import nltk
from nltk.stem.porter import *
import string
import re
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer as VS
from textstat.textstat import *
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import classification_report
from sklearn.svm import LinearSVC
import matplotlib.pyplot as plt
import seaborn
df = pd.read_csv("./data/labeled_data.csv")
df.head()
df.describe()
df.columns
df['class'].hist()
tweets = df.tweet
stopwords = nltk.corpus.stopwords.words("english")
other_exclusions = ["#ff", "ff", "rt"]
stopwords.extend(other_exclusions)
stemmer = PorterStemmer()
def preprocess(text_string):
"""
Accepts a text string and replaces:
1) urls with URLHERE
2) lots of whitespace with one instance
3) mentions with MENTIONHERE
This allows us to get standardized counts of urls and mentions
Without caring about specific people mentioned
"""
space_pattern = r'\s+'
giant_url_regex = (r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|'
r'[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+')
mention_regex = r'@[\w\-]+'
parsed_text = re.sub(space_pattern, ' ', text_string)
parsed_text = re.sub(giant_url_regex, '', parsed_text)
parsed_text = re.sub(mention_regex, '', parsed_text)
return parsed_text
def tokenize(tweet):
"""Removes punctuation & excess whitespace, sets to lowercase,
and stems tweets. Returns a list of stemmed tokens."""
tweet = " ".join(re.split("[^a-zA-Z]*", tweet.lower())).strip()
tokens = [stemmer.stem(t) for t in tweet.split()]
return tokens
def basic_tokenize(tweet):
"""Same as tokenize but without the stemming"""
tweet = " ".join(re.split("[^a-zA-Z.,!?]*", tweet.lower())).strip()
return tweet.split()
vectorizer = TfidfVectorizer(
tokenizer=tokenize,
preprocessor=preprocess,
ngram_range=(1, 3),
stop_words=stopwords,
use_idf=True,
smooth_idf=False,
norm=None,
decode_error='replace',
max_features=10000,
min_df=5,
max_df=0.75
)
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
#Construct tfidf matrix and get relevant scores
tfidf = vectorizer.fit_transform(tweets).toarray()
vocab = {v:i for i, v in enumerate(vectorizer.get_feature_names())}
idf_vals = vectorizer.idf_
idf_dict = {i:idf_vals[i] for i in vocab.values()} #keys are indices; values are IDF scores
#Get POS tags for tweets and save as a string
tweet_tags = []
for t in tweets:
tokens = basic_tokenize(preprocess(t))
tags = nltk.pos_tag(tokens)
tag_list = [x[1] for x in tags]
tag_str = " ".join(tag_list)
tweet_tags.append(tag_str)
#We can use the TFIDF vectorizer to get a token matrix for the POS tags
pos_vectorizer = TfidfVectorizer(
tokenizer=None,
lowercase=False,
preprocessor=None,
ngram_range=(1, 3),
stop_words=None,
use_idf=False,
smooth_idf=False,
norm=None,
decode_error='replace',
max_features=5000,
min_df=5,
max_df=0.75,
)
#Construct POS TF matrix and get vocab dict
pos = pos_vectorizer.fit_transform(pd.Series(tweet_tags)).toarray()
pos_vocab = {v:i for i, v in enumerate(pos_vectorizer.get_feature_names())}
#Now get other features
sentiment_analyzer = VS()
def count_twitter_objs(text_string):
"""
Accepts a text string and replaces:
1) urls with URLHERE
2) lots of whitespace with one instance
3) mentions with MENTIONHERE
4) hashtags with HASHTAGHERE
This allows us to get standardized counts of urls and mentions
Without caring about specific people mentioned.
Returns counts of urls, mentions, and hashtags.
"""
space_pattern = r'\s+'
giant_url_regex = (r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|'
r'[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+')
mention_regex = r'@[\w\-]+'
hashtag_regex = r'#[\w\-]+'
parsed_text = re.sub(space_pattern, ' ', text_string)
parsed_text = re.sub(giant_url_regex, 'URLHERE', parsed_text)
parsed_text = re.sub(mention_regex, 'MENTIONHERE', parsed_text)
parsed_text = re.sub(hashtag_regex, 'HASHTAGHERE', parsed_text)
return(parsed_text.count('URLHERE'),parsed_text.count('MENTIONHERE'),parsed_text.count('HASHTAGHERE'))
def other_features(tweet):
"""This function takes a string and returns a list of features.
These include Sentiment scores, Text and Readability scores,
as well as Twitter specific features"""
sentiment = sentiment_analyzer.polarity_scores(tweet)
words = preprocess(tweet) #Get text only
syllables = textstat.syllable_count(words)
num_chars = sum(len(w) for w in words) # words is a string here, so this equals len(words)
num_chars_total = len(tweet)
num_terms = len(tweet.split())
num_words = len(words.split())
avg_syl = round(float((syllables+0.001))/float(num_words+0.001),4)
num_unique_terms = len(set(words.split()))
###Modified FK grade, where avg words per sentence is just num words/1
FKRA = round(float(0.39 * float(num_words)/1.0) + float(11.8 * avg_syl) - 15.59,1)
##Modified FRE score, where sentence fixed to 1
FRE = round(206.835 - 1.015*(float(num_words)/1.0) - (84.6*float(avg_syl)),2)
twitter_objs = count_twitter_objs(tweet)
retweet = 0
if "rt" in words:
retweet = 1
features = [FKRA, FRE,syllables, avg_syl, num_chars, num_chars_total, num_terms, num_words,
num_unique_terms, sentiment['neg'], sentiment['pos'], sentiment['neu'], sentiment['compound'],
twitter_objs[2], twitter_objs[1],
twitter_objs[0], retweet]
#features = pandas.DataFrame(features)
return features
def get_feature_array(tweets):
feats=[]
for t in tweets:
feats.append(other_features(t))
return np.array(feats)
other_features_names = ["FKRA", "FRE","num_syllables", "avg_syl_per_word", "num_chars", "num_chars_total", \
"num_terms", "num_words", "num_unique_words", "vader neg","vader pos","vader neu", \
"vader compound", "num_hashtags", "num_mentions", "num_urls", "is_retweet"]
feats = get_feature_array(tweets)
#Now join them all up
M = np.concatenate([tfidf, pos, feats],axis=1)
M.shape
#Finally get a list of variable names
variables = ['']*len(vocab)
for k,v in vocab.items():
variables[v] = k
pos_variables = ['']*len(pos_vocab)
for k,v in pos_vocab.items():
pos_variables[v] = k
feature_names = variables+pos_variables+other_features_names
X = pd.DataFrame(M)
y = df['class'].astype(int)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.1)
from sklearn.model_selection import StratifiedKFold, GridSearchCV
from sklearn.pipeline import Pipeline
pipe = Pipeline(
[('select', SelectFromModel(LogisticRegression(class_weight='balanced',
penalty="l1", C=0.01))),
('model', LogisticRegression(class_weight='balanced', penalty='l2'))])
param_grid = [{}] # Optionally add parameters here
grid_search = GridSearchCV(pipe,
param_grid,
cv=StratifiedKFold(n_splits=5,
shuffle=True,
random_state=42).split(X_train, y_train),
verbose=2)
model = grid_search.fit(X_train, y_train)
y_preds = model.predict(X_test)
report = classification_report(y_test, y_preds)
print(report)
from sklearn.metrics import confusion_matrix
# avoid shadowing the confusion_matrix function with the result array
cm = confusion_matrix(y_test, y_preds)
matrix_proportions = np.zeros((3,3))
for i in range(0,3):
matrix_proportions[i,:] = cm[i,:]/float(cm[i,:].sum())
names=['Hate','Offensive','Neither']
confusion_df = pd.DataFrame(matrix_proportions, index=names,columns=names)
plt.figure(figsize=(5, 5))
seaborn.heatmap(confusion_df,annot=True,annot_kws={"size": 12},cmap='gist_gray_r',cbar=False, square=True,fmt='.2f')
plt.ylabel(r'True categories',fontsize=14)
plt.xlabel(r'Predicted categories',fontsize=14)
plt.tick_params(labelsize=12)
#True distribution
y.hist()
pd.Series(y_preds).hist()
```
```
import numpy as np
import sympy as sp
import pandas as pd
import math
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
data=pd.read_csv('C:\\Users\\Utsav\\Desktop\\pulsar_prediction\\pulsar_stars.csv', sep=',',header=0)
pulsarData=data.values
pulsarData=np.array(pulsarData)
#split the data
train, test = train_test_split(data, test_size=0.2)
pulsar=np.array(train)
train=np.array(train)
test=np.array(test)
print(len(test))
print(len(train))
#print(train)
#copy labels of test & train data elsewhere
#replace all test and train labels by 1 (for intercept term)
y_train=np.zeros(len(train))
y_test=np.zeros(len(test))
for i in range(len(test)):
y_test[i]=test[i][8]
test[i][8]=1
for i in range(len(train)):
y_train[i]=train[i][8]
train[i][8]=1
%%time
#logistic regression using sklearn
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
logreg = LogisticRegression()
logreg.fit(train, y_train)
y_pred = logreg.predict(test)
# calculate score
score=0
for i in range(len(test)):
if y_pred[i]==y_test[i]:
score=score+1
score=score*100/len(test)
print("Accuracy = " + "%.2f" % score + "%")
# calculate number of false negatives
score=0
for i in range(len(test)):
if y_pred[i]==0 and y_test[i]==1:
score=score+1
#score=score*100/len(test)
percent = score/len(test)
print("No of false negatives = %d" % score)
print("No of positives in test data = %d" % sum(y_test))
print("No of positives predicted : %d" % sum(y_pred))
# scale features with MinMaxScaler
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
train_scaled=scaler.fit_transform(train)
# transform (not fit_transform) the test set so its scaling comes from the training data
test_scaled=scaler.transform(test)
logreg = LogisticRegression()
logreg.fit(train_scaled, y_train)
y_pred = logreg.predict(test_scaled)
#calculate score
score=0
for i in range(len(test)):
if y_pred[i]==y_test[i]:
score=score+1
score=score*100/len(test)
print("Accuracy = " + "%.2f" % score + "%")
# calculate number of false negatives
score=0
for i in range(len(test)):
if y_pred[i]==0 and y_test[i]==1:
score=score+1
#score=score*100/len(test)
percent = score/len(test)
print("No of false negatives = %d" % score)
print("No of positives in test data = %d" % sum(y_test))
print("No of positives predicted : %d" % sum(y_pred))
```
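As an aside (a sketch that uses scikit-learn directly, not code from the notebook above): the false-negative count computed with the explicit loops can also be read straight off a confusion matrix.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# toy labels for illustration, not the pulsar data
y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 0, 1, 0, 1])

# for binary labels [0, 1], ravel() yields (tn, fp, fn, tp)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(fn)  # → 1
```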
Own Implementation
```
from scipy.optimize import fmin_tnc
# one parameter per column of train (the 8 features plus the intercept column)
theta = np.zeros((train.shape[1], 1))
def sigmoid(z):
sigma = 1 / (1 + np.exp(-z))
return sigma
def probability(theta, x):
return sigmoid(np.dot(x, theta))
def cost_function(theta, x, y):
cost = -np.sum(y*np.log(probability(theta, x)) + (1-y)*np.log(1 - probability(theta, x)))
return cost
def gradient (theta, x, y):
return np.dot(x.T, probability(theta , x) - y)
def fit (x, y, theta):
param = fmin_tnc(func=cost_function, x0=theta, fprime=gradient,args=(x, y.flatten()))
return param[0]
parameters = fit(train, y_train, theta)
print (parameters)
def predict(test, y_test, theta):
for i in range(len(test)):
# use the theta argument rather than the global `parameters`
p = probability(theta, test[i,:])
if p>0.5:
y_test[i] = 1
else:
y_test[i] = 0
#predict
predict(test, y_pred, parameters)
# calculate score:
score=0
for i in range(len(test)):
if y_pred[i]==y_test[i]:
score=score+1
score=score*100/len(test)
print("Accuracy = " + "%.2f" % score + "%")
# calculate number of false negatives
score=0
for i in range(len(test)):
if y_pred[i]==0 and y_test[i]==1:
score=score+1
#score=score*100/len(test)
percent = score/len(test)
print("No of false negatives = %d" % score)
print("No of positives in test data = %d" % sum(y_test))
print("No of positives predicted : %d" % sum(y_pred))
```
However we can decrease the number of false negatives if we predict 1 at p>0.4 instead of 0.5
```
from scipy.optimize import fmin_tnc
# one parameter per column of train (the 8 features plus the intercept column)
theta = np.zeros((train.shape[1], 1))
def sigmoid(z):
sigma = 1 / (1 + np.exp(-z))
return sigma
def probability(theta, x):
return sigmoid(np.dot(x, theta))
def cost_function(theta, x, y):
cost = -np.sum(y*np.log(probability(theta, x)) + (1-y)*np.log(1 - probability(theta, x)))
return cost
def gradient (theta, x, y):
return np.dot(x.T, probability(theta , x) - y)
def fit (x, y, theta):
param = fmin_tnc(func=cost_function, x0=theta, fprime=gradient,args=(x, y.flatten()))
return param[0]
parameters = fit(train, y_train, theta)
print (parameters)
def predict(test, y_test, theta):
for i in range(len(test)):
# use the theta argument rather than the global `parameters`
p = probability(theta, test[i,:])
if p>0.4:
y_test[i] = 1
else:
y_test[i] = 0
#predict
predict(test, y_pred, parameters)
# calculate score:
score=0
for i in range(len(test)):
if y_pred[i]==y_test[i]:
score=score+1
score=score*100/len(test)
print("Accuracy = " + "%.2f" % score + "%")
# calculate number of false negatives
score=0
for i in range(len(test)):
if y_pred[i]==0 and y_test[i]==1:
score=score+1
#score=score*100/len(test)
percent = score/len(test)
print("No of false negatives = %d" % score)
print("No of positives in test data = %d" % sum(y_test))
print("No of positives predicted : %d" % sum(y_pred))
```
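Rather than re-running the whole pipeline for each cutoff, the threshold choice can be sketched with a small helper (`false_negatives` is a hypothetical name, and the probabilities below are toy values, not the pulsar data):

```python
import numpy as np

def false_negatives(y_true, p, threshold):
    """Count positives that the model scores below the threshold."""
    y_pred = (p >= threshold).astype(int)
    return int(np.sum((y_pred == 0) & (y_true == 1)))

# toy predicted probabilities and true labels for illustration
p = np.array([0.9, 0.45, 0.55, 0.2, 0.41])
y_true = np.array([1, 1, 1, 0, 1])

for t in [0.3, 0.4, 0.5]:
    print(t, false_negatives(y_true, p, t))
```

Lowering the threshold trades false negatives for false positives, so in practice both counts should be watched together.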
```
import numpy as np
np.__version__
```
Notes on reading this document:
- 🐧 marks a tip
- ⚠️ marks a caution
## Mathematical Functions
NumPy ships with many built-in mathematical functions, including:
- Trigonometric/hyperbolic functions
- Rounding
- Sums, products, differences
- Derivatives and calculus
- Exponents and logarithms
- Arithmetic operations
- Miscellaneous
In this part we focus on the less obvious methods and skip the simpler ones; for the full list see:
- [Mathematical functions — NumPy v1.23.dev0 Manual](https://numpy.org/devdocs/reference/routines.math.html)
### Trigonometric/Hyperbolic Functions
Most of the trigonometric and hyperbolic functions are easy to understand, and all of them are ufuncs. Here we focus on one that is less obvious: `unwrap`, whose purpose is to correct phase values by complementing jumps larger than the period's half. Its parameters:
- p: the input array
- discont: the maximum allowed discontinuity between values, default `period/2`; values below this are treated as this value
- axis: the axis to operate along, default the last axis
- period: the period over which values wrap, default `2*pi`
```
phase = np.linspace(0, np.pi, num=5)
phase[3:] += np.pi
phase
# values exceeding pi get a period subtracted
np.unwrap(phase)
# one period must be subtracted
np.unwrap([1, 5]), 5 - 2*np.pi
# three periods must be subtracted
np.unwrap([1, 20]), 20 - 3*2*np.pi
# jumps larger than pi are corrected
np.unwrap([1, 1.1+np.pi]), 1.1+np.pi-2*np.pi
```
A few more examples:
```
# jumps above 4/2 get 4 added
np.unwrap([0, 1, 2, -1, 0], period=4)
# adding (rather than subtracting) is what satisfies the constraint here
np.unwrap([1, -2, -1, 0], period=4)
# likewise, every number after the 5 gets 4 added
np.unwrap([2, 3, 4, 5, 2, 3, 4, 5], period=4)
```
Also note that `deg2rad` is equivalent to `radians` and `rad2deg` to `degrees`; the former pair states the conversion more clearly.
For more details see:
- https://numpy.org/devdocs/reference/routines.math.html
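A quick check of the equivalence (a minimal sketch):

```python
import numpy as np

# deg2rad pairs with radians, rad2deg with degrees
assert np.deg2rad(90.0) == np.radians(90.0)
assert np.rad2deg(np.pi) == np.degrees(np.pi)
assert np.isclose(np.deg2rad(180.0), np.pi)
```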
### Exponents and Logarithms
Most of these APIs are easy to understand: the natural exponential `np.exp`, the base-2 exponential `np.exp2`, and the corresponding logarithms `np.log`, `np.log2`, `np.log10`, and so on; all of them are ufuncs.
There is also `np.expm1`, which computes exp minus 1, and its counterpart `np.log1p`, the log of 1 plus the input:
```
np.log(np.exp(2)), np.log1p(np.expm1(2))
```
There are also two log-sum helpers, `np.logaddexp` and the base-2 `np.logaddexp2`, computed as `log(exp(x1) + exp(x2))`:
```
np.logaddexp([1], [2]), np.log(np.exp(1) + np.exp(2))
np.logaddexp2([1], [2]), np.log2(np.exp2(1) + np.exp2(2))
```
`np.frexp` and `np.ldexp` are a complementary pair: the latter computes `x1 * 2**x2`, while the former decomposes an array into a mantissa and an exponent such that `x = mantissa * 2**exponent`, corresponding to the x1 and x2 above.
```
np.ldexp(2, np.arange(5)), 2 * 2**np.arange(5)
np.frexp(np.arange(2, 5))
np.array([0.5, 0.75, 0.5]) * 2 ** np.array([2, 2, 3]), np.arange(2, 5)
```
### Arithmetic Operations
These are mainly the addition, subtraction, multiplication, division, powers, roots, remainders, reciprocals, and absolute values familiar from school math, together with a few related special methods; all are ufuncs. As usual, we focus on the special ones.
Division is similar to Python's:
```
# floor division, equivalent to Python's //
np.floor_divide(5, 2)
np.floor_divide([7, 8], [3, 5])
```
For absolute value, `np.abs` handles complex input, while `np.fabs` does not accept complex arguments.
```
np.abs(-1-1j)
np.fabs(-1-1j)
```
There are also several methods for taking remainders:
```
# equivalent to Python's x1 % x2
np.remainder(np.arange(3), 2)
# same as np.remainder
np.mod([12, 13], [4, 5])
# the sign of mod's result follows x2
np.mod([-3, -2, -1, 1, 2, 3], 2)
np.mod([2, 3], -2)
# whereas the sign of fmod's result follows x1
np.fmod([-3, -2, -1, 1, 2, 3], 2)
np.fmod([2, 3], -2)
```
The next two are a little different.
```
# return the fractional and integer parts of each element
np.modf([0, 3.5, 2.0])
np.modf(-1)
# return (x // y, x % y) at the same time
np.divmod([12, 13, 15], 2)
np.divmod([-3, -2, -1, 1, 2, 3], 2)
```
There are also two operations from grade-school math: the least common multiple and the greatest common divisor.
```
# least common multiple
np.lcm(12, 20)
# use reduce for more than two values
np.lcm.reduce([2, 3, 5, 8])
# element-wise against a scalar
np.lcm([2, 3, 5, 8], 3)
# greatest common divisor
np.gcd(12, 20)
np.gcd.reduce([15, 20, 30])
np.gcd([2, 3, 5, 8], 20)
```
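The two are tied together by the identity lcm(a, b) * gcd(a, b) == a * b, which makes for a quick sanity check:

```python
import numpy as np

a, b = 12, 20
# 60 * 4 == 240 == 12 * 20
assert int(np.lcm(a, b)) * int(np.gcd(a, b)) == a * b
```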
### Automatic Domain
`np.emath` can be used when, for certain domains of the input, a function's output data type differs from the input data type (for example, real input producing a complex result).
The following APIs are supported:
- `sqrt`, `power`
- `log`, `log2`, `log10`, `logn`
- `arccos`, `arcsin`, `arctan`
```
np.emath.sqrt(-1)
np.sqrt(-1)
import math
np.emath.log(-math.exp(1)) == 1+1j*math.pi
np.power([2, 4], -2)
np.emath.power([2, 4], -2)
# converted to complex if x contains negative values
np.emath.power([-2, 4], 1)
```
## Numerical Computation
### Rounding
Next is rounding; `round` and `around` behave identically.
```
rng = np.random.default_rng(42)
arr = rng.random((2,3))
arr
np.around(arr, 2)
np.array(arr).round(2)
```
The remaining interfaces are all similar; except for `fix`, they are all ufuncs. They are listed below without further commentary:
```
lst = [2.1, -1.5, 3.2, 4.9]
np.fix(lst)
np.trunc(lst)
np.rint(lst)
np.floor(lst)
np.ceil(lst)
```
### Sums, Products, Differences
This covers the basic and cumulative sums and products, together with their NaN-aware (`nan*`) counterparts. Those APIs are simple, so we skip them here and introduce the few less familiar ones.
First is `diff`, with these parameters:
- the array
- the number of times the difference is applied
- the axis
- prepend/append: values placed before/after the original array along the axis before differencing
```
rng = np.random.default_rng(42)
a = rng.integers(0, 10, (3, 4))
a
np.diff(a)
# note: the difference is applied twice in succession
np.diff(a, 2)
np.diff(a, 2, append=[[0], [0], [0]])
# equivalent to
pad = np.full((3, 1), 0)
cct = np.concatenate((a, pad), axis=1)
np.diff(cct, 2)
# prepend works the same way
np.diff(a, 2, prepend=[[0], [0], [0]])
```
It is worth mentioning that datetime values can be differenced as well:
```
dts = np.arange("2022-01-02", "2022-01-05", dtype=np.datetime64)
np.diff(dts, 1)
```
`np.ediff1d` flattens the array first and then takes the differences, so it always returns a 1-D result.
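A minimal sketch of that behavior:

```python
import numpy as np

a = np.array([[1, 2], [4, 7]])
# flattens to [1, 2, 4, 7] first, then differences → [1, 2, 3]
np.ediff1d(a)
```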
### Sign Functions
```
# -1 for negative, 0 for zero, 1 for positive
np.sign([-5, 0, 5])
# 0 where x1 < 0, x2 where x1 == 0, 1 where x1 > 0
np.heaviside([-5, 0, 5], 0.5)
```
### Clipping
```
# clip values into the range [a_min, a_max]
np.clip(np.arange(10).reshape(2,5), a_min=3, a_max=7)
```
### Interpolation
`np.interp` is a one-dimensional linear interpolation routine (it also supports periodic/angular data via its period parameter).
```
def func(x):
return 2 * x + 3
x = np.arange(1, 5)
y = func(x)
x, y
np.interp([2.5, 8], x, y), func(2.5), func(8)
# specify left/right boundary values
np.interp([0.5, 2, 5.5], x, y)
np.interp([0.5, 2, 5.5], x, y, left=1, right=60)
```
## Derivatives and Calculus
### Gradient
The gradient is computed using second-order accurate central differences at the interior points and first-order accurate one-sided (forward or backward) differences at the boundaries.
The idea is to compute the derivative from a second-order Taylor expansion:
$$
f(x) \approx f\left(x_{0}\right)+f^{\prime}\left(x_{0}\right)\left(x-x_{0}\right)+\frac{f^{\prime \prime}\left(x_{0}\right)}{2 !}\left(x-x_{0}\right)^{2}+\frac{f^{\prime \prime \prime}\left(x_{0}\right)}{3 !}\left(x-x_{0}\right)^{3}+\cdots .
$$
which is equivalent to:
$$
f(x_0 + h) \approx f\left(x_{0}\right)+f^{\prime}\left(x_{0}\right) (h)+\frac{f^{\prime \prime}\left(x_{0}\right)}{2 !} h^{2}+ O(h^3)
$$
and similarly:
$$
f(x_0 - h) \approx f\left(x_{0}\right)
-f^{\prime}\left(x_{0}\right) (h)+\frac{f^{\prime \prime}\left(x_{0}\right)}{2 !} h^{2}
+ O(h^3)
$$
Subtracting the two expressions gives:
$$
f(x_0 + h) - f(x_0 - h) = 2 f^{\prime}\left(x_{0}\right) h + O(h^3)
$$
that is:
$$
f^{\prime} (x_0) = \frac{f(x_0 + h) - f(x_0 - h)}{2h} + O(h^2)
$$
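Before handing things to `np.gradient`, the central-difference formula above can be sanity-checked numerically (a small sketch using sin, whose derivative is cos):

```python
import numpy as np

f = np.sin
x0, h = 1.0, 1e-5
# central difference: (f(x0+h) - f(x0-h)) / (2h)
central = (f(x0 + h) - f(x0 - h)) / (2 * h)
# the truncation error is O(h^2), so this matches cos(1) very closely
print(abs(central - np.cos(x0)))
```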
The expression above is what `np.gradient` uses to compute the gradient (derivative).
Its parameters:
- f: the array of f(x) values
- varargs: the spacing of the f values; several forms are accepted
- edge_order: use order-1 or order-2 differences at the boundaries
- axis: the axis
```
# represents f(0)=1, f(1)=2 ...
fx = np.array([1, 2, 4, 7, 11, 16])
# default h=1; the first and last (boundary) points are special
# f'(0) = (f(0+1) - f(0)) / 1 = (2-1)/1 = 1
# f'(1) = (f(1+1) - f(1-1)) / 2 = (4-1)/2*1 = 1.5
# f'(2) = (f(2+1) - f(2-1)) / 2 = (7-2)/2*1 = 2.5
# ...
# f'(5) = (f(5) - f(5-1)) / 1 = (16-11)/1 = 5
np.gradient(fx)
# h can be any other number, e.g. 0.5, in which case f(0)=1, f(0.5)=2 ...
# f'(0.0) = (f(0+0.5) - f(0)) / 0.5 = (2-1)/0.5 = 2
# f'(0.5) = (f(0.5+0.5) - f(0.5-0.5)) / 2*0.5 = (4-1)/1.0 = 3
# f'(1.0) = (f(1.0+0.5) - f(1.0-0.5)) / 2*0.5 = (7-2)/1.0 = 5
# ...
# f'(2.5) = (f(2.5) - f(2.5-0.5)) / 2*0.5 = (16-11)/0.5 = 10
np.gradient(fx, 0.5)
```
Given a 2-D array, the gradient is computed along each axis and returned separately; a specific axis can also be selected.
```
rng = np.random.default_rng(42)
arr = rng.integers(1, 10, (2, 3))
arr
np.gradient(arr)
np.gradient(arr[:, 0]), np.gradient(arr[:, 1]), np.gradient(arr[:, 2])
np.gradient(arr[0,:]), np.gradient(arr[1,:])
# specify the axis
np.gradient(arr, axis=0)
```
The second parameter, `varargs`, controls the spacing between the fx values and can take several forms:
- 1. A single scalar, specifying the sample distance for all dimensions
- 2. N scalars, specifying a constant sample distance for each dimension, i.e. dx, dy, dz, ...
- 3. N arrays, specifying the coordinates of the values along each dimension of F; each array's length must match the size of the corresponding dimension
- 4. Any combination of N scalars/arrays, with the meanings of 2 and 3
The example above is the simplest single-scalar case; next, the spacing is given as a coordinate array. For unevenly spaced samples, the interior points are combined with the weights:
$$
a = \frac{\frac{-dx_2}{dx_1}}{ dx_1+dx_2} \\
b = \frac{1}{dx_1} - \frac{1}{dx_2} \\
c = \frac{\frac{dx_1}{dx_2}}{ dx_1+dx_2} \\
a + b + c = 0
$$
```
# spacing given as a coordinate array
x = np.array([0, 1, 1.5, 3.5, 4, 6])
fx = np.array([ 1, 2, 4, 7, 11, 16])
np.gradient(fx, x)
ax_dx = np.diff(x)
dx1 = ax_dx[0:-1]
dx2 = ax_dx[1:]
a = -dx2 / (dx1 * (dx1 + dx2))
b = (dx2 - dx1) / (dx1 * dx2)
c = dx1 / (dx2 * (dx1 + dx2))
N = fx.ndim
slice1 = [slice(None)]*N
slice2 = [slice(None)]*N
slice3 = [slice(None)]*N
slice4 = [slice(None)]*N
axis = 0
slice1[axis] = slice(1, -1)
slice2[axis] = slice(None, -2)
slice3[axis] = slice(1, -1)
slice4[axis] = slice(2, None)
a, b, c
fx[tuple(slice2)], fx[tuple(slice3)], fx[tuple(slice4)]
# out[1:-1] = a * f[:-2] + b * f[1:-1] + c * f[2:]
out = np.empty_like(fx, dtype=np.float16)
out[tuple(slice1)] = a * fx[tuple(slice2)] + b * fx[tuple(slice3)] + c * fx[tuple(slice4)]
out
```
With multiple arrays, or a mix of arrays and scalars, each entry is matched to its own axis:
```
arr
np.gradient(arr, [3,5], [1,2,3])
np.gradient(arr, [3,5], axis=0)
np.gradient(arr, [1,2,3], axis=1)
```
One more parameter, `edge_order`, controls how the gradient is computed at the boundary points.
```
x = np.array([1, 2, 4, 7])
y = x ** 2 + 2 * x + 1
y
np.gradient(y, x), 2*x + 2., np.gradient(y, x, edge_order=2)
(9 - 4)/(2-1), (64-25)/(7-4)
```
For `edge_order=1`, the boundary formulas are:
$$
f'({x_l}) = \frac{f(x_l + h) - f(x_l)}{h} \\
f'({x_r}) = \frac{f(x_r) - f(x_r - h)}{h} \\
$$
### Trapezoidal Rule
Another API implements the trapezoidal rule, which approximates an integral by slicing the integrand into many small trapezoids.
$$
\int_{a}^{b} f(x) d x \approx \frac{\Delta x}{2}\left(f\left(x_{0}\right)+2 f\left(x_{1}\right)+2 f\left(x_{2}\right)+2 f\left(x_{3}\right)+2 f\left(x_{4}\right)+\cdots+2 f\left(x_{N-1}\right)+f\left(x_{N}\right)\right)
$$
Parameters:
- y: the values f(x)
- x: default None; if given, dx is computed from the elements of x
- dx: default 1.0; used only when x is not given
- axis: the axis
For more, see:
- [Trapezoidal rule - Wikipedia](https://en.wikipedia.org/wiki/Trapezoidal_rule)
```
y = np.array([1, 2, 3])
x = np.array([4, 6, 8])
np.trapz(y), 1/2 * (1 + 2*2 + 3)
diff = np.diff(x)
np.trapz(y, x), 1/2*((1+2)*diff[0] + (2+3)*diff[1])
z = np.array([1, 2, 4])
diff = np.diff(z)
np.trapz(y, z), 1/2*((1+2)*diff[0] + (2+3)*diff[1])
y = np.array([[1, 2, 3], [4, 5, 6]])
y
# along rows
np.trapz(y, axis=1)
# along columns
np.trapz(y, axis=0)
```
## Polynomials
Recent NumPy versions have a dedicated library for polynomials: `polynomial`. It covers:
- Power series
- Chebyshev polynomials
- Hermite polynomials (physicists')
- Hermite polynomials (probabilists')
- Laguerre polynomials
- Legendre polynomials
For the underlying concepts, see:
- [Power series - Wikipedia](https://zh.m.wikipedia.org/zh-hans/%E5%B9%82%E7%BA%A7%E6%95%B0)
- [Chebyshev polynomials - Wikipedia](https://zh.wikipedia.org/zh-hans/%E5%88%87%E6%AF%94%E9%9B%AA%E5%A4%AB%E5%A4%9A%E9%A1%B9%E5%BC%8F)
- [Hermite polynomials - Wikipedia](https://zh.m.wikipedia.org/zh-hans/%E5%9F%83%E5%B0%94%E7%B1%B3%E7%89%B9%E5%A4%9A%E9%A1%B9%E5%BC%8F)
- [Laguerre polynomials - Wikipedia](https://zh.wikipedia.org/wiki/%E6%8B%89%E7%9B%96%E5%B0%94%E5%A4%9A%E9%A1%B9%E5%BC%8F)
- [Legendre polynomials - Wikipedia](https://zh.m.wikipedia.org/zh/%E5%8B%92%E8%AE%A9%E5%BE%B7%E5%A4%9A%E9%A1%B9%E5%BC%8F)
```
from numpy.polynomial import Polynomial as P
```
### Introduction
```
# power series
p = np.polynomial.Polynomial([3, 2, 1])
p
p(3)
rng = np.random.default_rng(42)
x = np.arange(10)
y = x + rng.standard_normal(10)
# fitting
fitted = np.polynomial.Polynomial.fit(x, y, deg=1)
fitted
fitted.convert()
# build the expression from its roots
p = P.fromroots([1, 2])
p
p.convert()
p.convert(domain=[0,1])
```
Casting between polynomial types is also possible, though not recommended: as the degree grows, the precision loss becomes severe.
```
from numpy.polynomial import Chebyshev as T
T.cast(p)
from numpy.polynomial import polynomial as Poly
c1 = (1, 2, 3)
c2 = (3, 2, 1)
# polyadd/polyval are module-level functions, not methods of the Polynomial class
s = Poly.polyadd(c1, c2)
Poly.polyval(2, s)
```
### Convenience Classes
NumPy provides convenient classes for the different polynomial types, with a unified interface for creation, manipulation, and fitting. The introduction below uses the power series as the example; see the documentation for more.
- [Power Series (numpy.polynomial.polynomial) — NumPy v1.23.dev0 Manual](https://numpy.org/devdocs/reference/routines.polynomials.polynomial.html)
- [Chebyshev Series (numpy.polynomial.chebyshev) — NumPy v1.23.dev0 Manual](https://numpy.org/devdocs/reference/routines.polynomials.chebyshev.html)
- [Hermite Series, “Physicists” (numpy.polynomial.hermite) — NumPy v1.23.dev0 Manual](https://numpy.org/devdocs/reference/routines.polynomials.hermite.html)
- [HermiteE Series, “Probabilists” (numpy.polynomial.hermite_e) — NumPy v1.23.dev0 Manual](https://numpy.org/devdocs/reference/routines.polynomials.hermite_e.html)
- [Laguerre Series (numpy.polynomial.laguerre) — NumPy v1.23.dev0 Manual](https://numpy.org/devdocs/reference/routines.polynomials.laguerre.html)
- [Legendre Series (numpy.polynomial.legendre) — NumPy v1.23.dev0 Manual](https://numpy.org/devdocs/reference/routines.polynomials.legendre.html)
```
# construct an instance
p = P([1, 2, 3])
p
p.coef, p.domain, p.window
# x_new = -1+x
p1 = P([1, 2, 3], domain=[0, 1], window=[-1, 0])
p1
# x_new = -1+x + x
p2 = P([1, 2, 3], domain=[0, 1], window=[-1, 1])
p2
p3 = P([1, 2, 3], domain=[2, 5], window=[-1, 1])
p3
from numpy.polynomial import polyutils as pu
# map the domain onto the window
pu.mapparms([2, 5], [-1, 1])
(5*-1 - 2*1)/(5-2), (1--1)/(5-2)
print(p)
```
Different printing styles can be selected:
```
np.polynomial.set_default_printstyle("ascii")
print(p)
# or
print(f"{p:unicode}")
```
Basic polynomial arithmetic:
```
p + p
p - p
p * p
p ** 2
p // P([-1, 1])
p
P([-1, 1]) * P([5, 3])
# exact division (factorization)
P([2, 3, 1]) == P([1, 1]) * P([2, 1])
# remainder
p % P([-1, 1])
# quotient and remainder together
divmod(p, P([-1, 1]))
# evaluation
x = np.arange(5)
p(x)
3*x**2 + 2*x + 1.
# composition
p(p)
# roots
p.roots()
# this one has rational roots
P([2, -3, 1]).roots()
p
p + [1, 2, 3]
p + [1,2]
p / 2
```
Note: the operations above cannot be mixed across different domains, windows, or types.
```
# different domain (this raises an error)
p + P([1], domain=[0, 1])
```
Computing integrals:
```
p = P([3, 2, 1])
p
# antiderivative
p.integ()
# specify the number of integrations
p.integ(m=2)
# set the lower bound (default 0) to -1; the constant term changes
p.integ(lbnd=-1)
p.integ(k=[1], lbnd=-1)
```
Computing derivatives:
```
p.deriv()
p.deriv(2)
p.deriv()(1)
p.deriv(2)(10)
```
## Logical Operations
Logical operations in NumPy are generally used to test whether an array satisfies a condition, returning a boolean array. Most of these APIs are ufuncs.
### Truth Testing
Commonly used to test whether all elements, or any element, satisfy a condition.
```
a = np.array([
[1, 0, 2],
[2, 3, 0]
])
np.all(a)
np.all(a, axis=0)
np.any(a)
np.any(a, axis=0)
np.any([0, 0, 0])
np.alltrue(a)
np.alltrue([1, 2, 3])
```
### Values and Types
Testing whether array values satisfy a condition:
- `isfinite`: neither infinity nor NaN
- `isnan`: not a number
- `isnat`: not a time
- `isinf/isneginf/isposinf`: (positive/negative) infinity
Type checks:
- `iscomplex`: complex values
- `iscomplexobj`: complex type
- `isfortran`: Fortran-contiguous (column-major)
- `isreal`: real values
- `isrealobj`: not a complex type
- `isscalar`: scalar
```
np.isfinite([np.nan, 0, np.inf, 1])
np.isnan([np.nan, 2, np.inf])
np.isnat([np.datetime64("2016-01-01")])
# only time types are supported; this call raises a TypeError
np.isnat([2])
np.isinf([np.nan, np.inf])
np.isneginf([np.inf, -np.inf, np.NINF])
a = np.array([[2, 3], [1+1j, 0]])
a
np.iscomplex(a)
np.iscomplexobj(a)
np.isreal(a)
np.isrealobj(a)
np.isfortran(a)
np.isscalar(np.array([2, 3]))
np.isscalar(2)
np.isscalar("fdf")
```
### Logical Operations
These include and, or, xor, and not; here we use "and" as the example.
```
np.logical_and(True, False)
np.logical_and([2, 3], [4, False])
a = np.arange(5)
np.logical_and(a>1, a<4)
# & is equivalent
np.array([1, 0]) & np.array([0, 1])
```
### Comparison
Closeness testing:
`allclose/isclose` test whether all values are within tolerance:
$$
|a - b| <= (\text{atol} + \text{rtol} * |b|)
$$
- atol defaults to 1e-08
- rtol defaults to 1e-05
These are commonly used for validating numerical precision.
```
np.allclose(1.0000089, 1.000009)
np.allclose([1e10, 1e-9], [1.000001e10, 1e-8])
np.allclose([1, np.nan], [1, np.nan])
np.allclose([1, np.nan], [1, np.nan], equal_nan=True)
np.isclose(1.0000089, 1.000009)
np.isclose([1e10, 1e-9], [1.000001e10, 1e-8])
np.isclose([1, np.nan], [1, np.nan])
np.isclose([1, np.nan], [1, np.nan], equal_nan=True)
```
Equality testing:
```
a = np.array([[1, 2], [3, 4]])
b = np.array([[1, 2], [3, 4]])
c = np.array([[1, np.nan], [3, np.nan]])
d = np.array([[1, np.nan], [3, np.nan]])
np.array_equal(a, b)
np.array_equal(a, c)
np.array_equal(c, d)
np.array_equal(c, d, equal_nan=True)
a = np.array([1, 2])
b = np.array([[1, 2], [1, 2]])
c = np.array([[1, 2]])
np.array_equal(a, b)
# shapes consistent and values equal
np.array_equiv(a, b)
np.array_equiv(a, c)
np.array_equiv(b, c)
```
Note that "consistent shape" here means that one array can be broadcast onto the other.
Finally, the comparison operators: >, >=, <, <=, ==, !=.
```
a = np.array([[1, 2], [4, 2]])
b = np.array([[1, 3], [2, 2]])
np.greater(a, b)
np.greater_equal(a, b)
np.less(a, b)
np.less_equal(a, b)
np.equal(a, b)
np.not_equal(a, b)
```
## Binary Operations
These are mainly the bit-manipulation APIs.
First, the element-wise bit operations, all of them ufuncs: and, or, xor, not, left shift, right shift, etc.
### Bit Operations
```
int("000011", base=2), int("001100", base=2)
# bitwise and
np.bitwise_and(3, 12), 3 & 12
# bitwise not
x = np.invert(np.array(13, dtype=np.uint8))
x, 2**7+2**6+2**5+2**4+2
np.binary_repr(x, width=8), int("00001101", base=2)
```
### Shifts
Or and xor work much like and, so we skip them. Next come the bit shifts, with left shift as the example.
```
# left shift
2 << 1
np.left_shift(2, 1)
np.left_shift(2, [1, 2, 3])
np.left_shift([1, 2, 3], 2)
```
### Packing and Unpacking
Finally, packing and unpacking, which convert binary arrays to uint8 arrays and back.
```
arr = np.array([[1, 1, 0], [1, 0, 1]])
# pack
np.packbits(arr)
np.packbits(np.ravel(arr))
# 110101
int("11010100", base=2)
# 2**7+2**6 2**7 2**6
np.packbits(arr, axis=0)
# unpack; note that dtype must be uint8
b = np.array([2], dtype=np.uint8)
np.unpackbits(b)
```
## Strings
Strings are also well supported in NumPy. All of the APIs live under `np.char` and target the following two data types:
```
np.str_, np.unicode_, np.str0
np.bytes_, np.string_
```
### Basic Operations
First, some common string operations, similar to Python's built-in ones.
```
a = np.array(["1", "2"], dtype=np.str_)
b = np.array(["a", "b"], dtype=np.str_)
```
Addition is string concatenation:
```
# addition
np.char.add(a, b)
np.array([1, 2], dtype=np.bytes_) + np.array([1, 2], dtype=np.bytes_)
np.char.add(np.array([1, 2], dtype=np.bytes_), np.array([1, 2], dtype=np.bytes_))
np.char.add(np.array([1, 2], dtype=np.bytes_), np.array([1, 2], dtype=np.str_))
```
Multiplication is string repetition:
```
np.char.multiply(np.array([1],dtype=np.str_), 3)
# a repeat count below 0 behaves as 0
np.char.multiply(np.array([1],dtype=np.str_), -3)
```
The other APIs mirror str's own methods:
- `capitalize`: capitalize the first letter
- `title`: title-case
- `center`: center within a given width, padded
- `ljust/rjust`: left/right-pad to a given width
- `zfill`: zero-pad on the left
- `decode/encode`: decoding/encoding
- `expandtabs`: replace tabs with one or more spaces
- `join`: concatenation
- `lower/upper`: case conversion
- `swapcase`: swap case
- `lstrip/rstrip/strip`: stripping
- `replace`: replacement
- `translate`: translation
- `partition/rpartition`: split into a 3-tuple (from the left/right)
- `split/splitlines`: splitting
```
# capitalize the first letter
np.char.capitalize("ab b c")
# title case
np.char.title("ab b c")
# center within a given width
(np.char.center("ab", 5, "~"),
np.char.ljust("a", 5, "~"),
np.char.rjust("a", 5, "~"),
np.char.zfill("a", 6)
)
# encode/decode
np.char.encode("abc", encoding="utf8")
# replace tabs
val = np.char.expandtabs("\ta", tabsize=1)
val
val.tolist(), val.tolist()[0] == " "
# join
np.char.join("a", "12345")
# case conversion
np.char.lower(np.array(["A"],dtype=np.str_)), np.char.upper("a")
# swap case
np.char.swapcase("aBc")
# strip
np.char.strip("abc "), np.char.strip("abc", "c")
# replace
np.char.replace("aaabc", "a", "A", count=2)
# translate
np.char.translate(["abc", "a"], "1"*255, deletechars=None)
# deletion happens only for non-unicode input
np.char.translate(
np.array(["abc", "a"], dtype=np.bytes_), b"1"*256, deletechars=b"a")
# partition
(
np.char.partition("abc", "b"),
np.char.rpartition("abc", "b"),
np.char.partition("abca", "a"),
np.char.rpartition("abca", "a")
)
# split
np.char.split("a b c", " "), np.char.splitlines("a\nb\nc")
```
### Comparison
Mainly comparing strings for equality and ordering.
```
np.char.equal(["abc", "ab"], ["abd", "ab"])
np.char.not_equal(["abc", "ab"], ["abd", "ab"])
# >=
np.char.greater_equal(["abc", "ab"], ["abd", "ab"]), "abc">"abd"
np.char.greater(["abc", "ab"], ["abd", "ab"])
# <=
np.char.less_equal(["abc", "ab"], ["abd", "ab"])
np.char.less(["abc", "ab"], ["abd", "ab"])
# comparison
# cmp can be any of < <= == >= > !=
np.char.compare_chararrays(
["abc", "ab", "a"],
["ab", "ad", "ae"],
cmp="<",
rstrip=True
)
```
### Basic Information
Basic predicates and counts.
```
np.char.count("abcab", "a", start=0, end=None)
np.char.str_len(["abcab", "a"])
(
np.char.find("abcab", "a", start=0, end=None),
np.char.rfind("abcab", "a", start=0, end=None)
)
# returns -1 when not found
np.char.find("abcab", "d", start=2, end=None)
(
np.char.index("abcab", "a", start=0),
np.char.rindex("abcab", "a", start=0)
)
# raises an exception when not found
np.char.index("abcab", "d", start=2)
# starts/ends
(
np.char.endswith(["a", "ba"], "a", start=0, end=1),
np.char.startswith(["a", "ba"], "a", start=1, end=3)
)
# whitespace only
np.char.isspace([" \t\n", "a"])
# all characters lower/upper case; title case
(
np.char.islower(["a", "Ab"]),
np.char.isupper(["a", "Ab", "AB"]),
np.char.istitle(["Aa", "aB", "AB"])
)
# predicates
lst = ["a", "1", "01", "03", "⒊⒏", "a1", "1.1", ""]
(
# every character of each element is a letter, with at least one character
np.char.isalpha(lst),
# as above, but letters or digits
np.char.isalnum(lst),
"",
# decimal characters only (a decimal point does not count)
np.char.isdecimal(lst),
# digit characters only
np.char.isdigit(lst),
# numeric characters only
np.char.isnumeric(lst)
)
```
For the differences between decimal, digit, and numeric, see:
- [string - What's the difference between str.isdigit, isnumeric and isdecimal in python? - Stack Overflow](https://stackoverflow.com/questions/44891070/whats-the-difference-between-str-isdigit-isnumeric-and-isdecimal-in-python)
The main difference lies in how they handle unicode.
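A quick sketch using plain Python `str` methods (which `np.char` wraps) shows the difference on a few unicode characters:

```python
# "5": an ASCII digit — all three predicates are True
assert "5".isdecimal() and "5".isdigit() and "5".isnumeric()
# "²" (superscript two): a digit and numeric, but not decimal
assert not "²".isdecimal() and "²".isdigit() and "²".isnumeric()
# "½" (vulgar fraction): numeric only
assert not "½".isdecimal() and not "½".isdigit() and "½".isnumeric()
```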
## Summary
## References
- [NumPy documentation — NumPy v1.23.dev0 Manual](https://numpy.org/devdocs/index.html)
- [python - Memory growth with broadcast operations in NumPy - Stack Overflow](https://stackoverflow.com/questions/31536504/memory-growth-with-broadcast-operations-in-numpy)
# Identifying safe loans with decision trees
The [LendingClub](https://www.lendingclub.com/) is a peer-to-peer lending company that directly connects borrowers and potential lenders/investors. In this notebook, you will build a classification model to predict whether or not a loan provided by LendingClub is likely to [default](https://en.wikipedia.org/wiki/Default_(finance)).
In this notebook you will use data from the LendingClub to predict whether a loan will be paid off in full or the loan will be [charged off](https://en.wikipedia.org/wiki/Charge-off) and possibly go into default. In this assignment you will:
* Use SFrames to do some feature engineering.
* Train a decision-tree on the LendingClub dataset.
* Visualize the tree.
* Predict whether a loan will default along with prediction probabilities (on a validation set).
* Train a complex tree model and compare it to simple tree model.
Let's get started!
## Fire up Graphlab Create
Make sure you have the latest version of GraphLab Create. If you don't find the decision tree module, then you would need to upgrade GraphLab Create using
```
pip install graphlab-create --upgrade
```
```
import graphlab
graphlab.canvas.set_target('ipynb')
```
# Load LendingClub dataset
We will be using a dataset from the [LendingClub](https://www.lendingclub.com/). A parsed and cleaned form of the dataset is available [here](https://github.com/learnml/machine-learning-specialization-private). Make sure you **download the dataset** before running the following command.
```
loans = graphlab.SFrame('lending-club-data.gl/')
```
## Exploring some features
Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset.
```
loans.column_names()
```
Here, we see that we have some feature columns that have to do with grade of the loan, annual income, home ownership status, etc. Let's take a look at the distribution of loan grades in the dataset.
```
loans['grade'].show()
```
We can see that over half of the loan grades are assigned values `B` or `C`. Each loan is assigned one of these grades, along with a more finely discretized feature called `sub_grade` (feel free to explore that feature column as well!). These values depend on the loan application and credit report, and determine the interest rate of the loan. More information can be found [here](https://www.lendingclub.com/public/rates-and-fees.action).
Now, let's look at a different feature.
```
loans['home_ownership'].show()
```
This feature describes whether the loanee is mortgaging, renting, or owns a home. We can see that a small percentage of the loanees own a home.
## Exploring the target column
The target column (label column) of the dataset that we are interested in is called `bad_loans`. In this column **1** means a risky (bad) loan **0** means a safe loan.
In order to make this more intuitive and consistent with the lectures, we reassign the target to be:
* **+1** as a safe loan,
* **-1** as a risky (bad) loan.
We put this in a new column called `safe_loans`.
```
# safe_loans = 1 => safe
# safe_loans = -1 => risky
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
```
Now, let us explore the distribution of the column `safe_loans`. This gives us a sense of how many safe and risky loans are present in the dataset.
```
loans['safe_loans'].show(view = 'Categorical')
```
You should have:
* Around 81% safe loans
* Around 19% risky loans
It looks like most of these loans are safe loans (thankfully). But this does make our problem of identifying risky loans challenging.
## Features for the classification algorithm
In this assignment, we will be using a subset of features (categorical and numeric). The features we will be using are **described in the code comments** below. If you are a finance geek, the [LendingClub](https://www.lendingclub.com/) website has a lot more details about these features.
```
features = ['grade', # grade of the loan
'sub_grade', # sub-grade of the loan
'short_emp', # one year or less of employment
'emp_length_num', # number of years of employment
'home_ownership', # home_ownership status: own, mortgage or rent
'dti', # debt to income ratio
'purpose', # the purpose of the loan
'term', # the term of the loan
'last_delinq_none', # has borrower had a delinquency
'last_major_derog_none', # has borrower had 90 day or worse rating
'revol_util', # percent of available credit being used
'total_rec_late_fee', # total late fees received to date
]
target = 'safe_loans' # prediction target (y) (+1 means safe, -1 is risky)
# Extract the feature columns and target column
loans = loans[features + [target]]
```
What remains now is a **subset of features** and the **target** that we will use for the rest of this notebook.
## Sample data to balance classes
As we explored above, our data is disproportionately full of safe loans. Let's create two datasets: one with just the safe loans (`safe_loans_raw`) and one with just the risky loans (`risky_loans_raw`).
```
safe_loans_raw = loans[loans[target] == +1]
risky_loans_raw = loans[loans[target] == -1]
print "Number of safe loans : %s" % len(safe_loans_raw)
print "Number of risky loans : %s" % len(risky_loans_raw)
```
Now, write some code below to compute the percentage of safe and risky loans in the dataset, and validate these numbers against what was given using `.show` earlier in the assignment:
```
print "Percentage of safe loans  :", len(safe_loans_raw) * 100.0 / len(loans)
print "Percentage of risky loans :", len(risky_loans_raw) * 100.0 / len(loans)
```
One way to combat class imbalance is to undersample the larger class until the class distribution is approximately half and half. Here, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used `seed=1` so everyone gets the same results.
```
# Since there are fewer risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
risky_loans = risky_loans_raw
safe_loans = safe_loans_raw.sample(percentage, seed=1)
# Append the risky_loans with the downsampled version of safe_loans
loans_data = risky_loans.append(safe_loans)
```
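The same undersampling idea can be sketched in pandas (a toy, hypothetical imbalanced dataset; the assignment's SFrame `sample(percentage, seed=1)` becomes `sample(frac=..., random_state=1)`):

```python
import pandas as pd

# Hypothetical imbalanced toy data: 8 safe (+1) and 2 risky (-1) loans.
toy = pd.DataFrame({'safe_loans': [1] * 8 + [-1] * 2})

safe = toy[toy['safe_loans'] == 1]
risky = toy[toy['safe_loans'] == -1]

# Downsample the safe loans to the size of the risky class.
frac = len(risky) / float(len(safe))
balanced = pd.concat([risky, safe.sample(frac=frac, random_state=1)])

print(len(balanced))  # 4 rows, two of each class
```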
Now, let's verify that the resulting percentage of safe and risky loans are each nearly 50%.
```
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
```
**Note:** There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this [paper](http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5128907&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F69%2F5173046%2F05128907.pdf%3Farnumber%3D5128907 ). For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
## Split data into training and validation sets
We split the data into training and validation sets using an 80/20 split and specifying `seed=1` so everyone gets the same results.
**Note**: In previous assignments, we have called this a **train-test split**. However, the portion of data that we don't train on will be used to help **select model parameters** (this is known as model selection). Thus, this portion of data should be called a **validation set**. Recall that examining performance of various potential models (i.e. models with different parameters) should be on validation set, while evaluation of the final selected model should always be on test data. Typically, we would also save a portion of the data (a real test set) to test our final model on or use cross-validation on the training set to select our final model. But for the learning purposes of this assignment, we won't do that.
```
train_data, validation_data = loans_data.random_split(.8, seed=1)
```
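The SFrame `random_split` call can be mimicked with a random mask; a sketch on toy data (hypothetical column name, NumPy/pandas instead of GraphLab) of an 80/20 split with a fixed seed:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': range(100)})  # toy stand-in for loans_data

rng = np.random.RandomState(1)
mask = rng.rand(len(df)) < 0.8  # each row goes to train with probability 0.8

train_df, validation_df = df[mask], df[~mask]
print(len(train_df), len(validation_df))
```

Note that, as with `random_split`, the resulting sizes are only approximately 80/20 because each row is assigned independently at random.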
# Use decision tree to build a classifier
Now, let's use the built-in GraphLab Create decision tree learner to create a loan prediction model on the training data. (In the next assignment, you will implement your own decision tree learning algorithm.) Our feature columns and target column have already been decided above. Use `validation_set=None` to get the same results as everyone else.
```
decision_tree_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features)
```
## Visualizing a learned model
As noted in the [documentation](https://dato.com/products/create/docs/generated/graphlab.boosted_trees_classifier.create.html#graphlab.boosted_trees_classifier.create), typically the max depth of the tree is capped at 6. However, such a tree can be hard to visualize graphically. Here, we instead learn a smaller model with **max depth of 2** to gain some intuition by visualizing the learned tree.
```
small_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features, max_depth = 2)
```
In the view that is provided by GraphLab Create, you can see each node, and each split at each node. This visualization is great for considering what happens when this model predicts the target of a new data point.
**Note:** To better understand this visual:
* The root node is represented using pink.
* Intermediate nodes are in green.
* Leaf nodes in blue and orange.
```
small_model.show(view="Tree")
```
# Making predictions
Let's consider two positive and two negative examples **from the validation set** and see what the model predicts. We will do the following:
* Predict whether or not a loan is safe.
* Predict the probability that a loan is safe.
```
validation_safe_loans = validation_data[validation_data[target] == 1]
validation_risky_loans = validation_data[validation_data[target] == -1]
sample_validation_data_risky = validation_risky_loans[0:2]
sample_validation_data_safe = validation_safe_loans[0:2]
sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky)
sample_validation_data
```
## Explore label predictions
Now, we will use our model to predict whether or not a loan is likely to default. For each row in the **sample_validation_data**, use the **decision_tree_model** to predict whether or not the loan is classified as a **safe loan**.
**Hint:** Be sure to use the `.predict()` method.
```
print decision_tree_model.predict(sample_validation_data)
```
**Quiz Question:** What percentage of the predictions on `sample_validation_data` did `decision_tree_model` get correct?
## Explore probability predictions
For each row in the **sample_validation_data**, what is the probability (according to **decision_tree_model**) of a loan being classified as **safe**?
**Hint:** Set `output_type='probability'` to make **probability** predictions using **decision_tree_model** on `sample_validation_data`:
```
print decision_tree_model.predict(sample_validation_data,output_type='probability')
```
**Quiz Question:** Which loan has the highest probability of being classified as a **safe loan**?
**Checkpoint:** Can you verify that for all the predictions with `probability >= 0.5`, the model predicted the label **+1**?
### Tricky predictions!
Now, we will explore something pretty interesting. For each row in the **sample_validation_data**, what is the probability (according to **small_model**) of a loan being classified as **safe**?
**Hint:** Set `output_type='probability'` to make **probability** predictions using **small_model** on `sample_validation_data`:
```
small_model.predict(sample_validation_data,output_type='probability')
```
**Quiz Question:** Notice that the probability predictions are the **exact same** for the 2nd and 3rd loans. Why would this happen?
## Visualize the prediction on a tree
Note that you should be able to look at the small tree, traverse it yourself, and visualize the prediction being made. Consider the following point in the **sample_validation_data**
```
sample_validation_data[1]
```
Let's visualize the small tree here to do the traversing for this data point.
```
small_model.show(view="Tree")
```
**Note:** In the tree visualization above, the values at the leaf nodes are not class predictions but scores (a slightly advanced concept that is out of the scope of this course). You can read more about this [here](https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf). If the score is $\geq$ 0, the class +1 is predicted. Otherwise, if the score < 0, we predict class -1.
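In other words, the score is simply thresholded at zero (toy scores here, purely for illustration):

```python
# Hypothetical leaf scores; >= 0 maps to class +1, < 0 maps to class -1.
scores = [0.25, -0.1, 0.0, -3.2]
labels = [+1 if s >= 0 else -1 for s in scores]
print(labels)  # [1, -1, 1, -1]
```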
**Quiz Question:** Based on the visualized tree, what prediction would you make for this data point?
Now, let's verify your prediction by examining the prediction made using GraphLab Create. Use the `.predict` function on `small_model`.
```
small_model.predict(sample_validation_data[1])
```
# Evaluating accuracy of the decision tree model
Recall that the accuracy is defined as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}}
$$
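On plain Python lists this definition is a one-line count (hypothetical toy labels, just to make the formula concrete):

```python
# Toy predicted and true labels (+1 safe / -1 risky).
predictions = [+1, -1, +1, +1, -1]
truth       = [+1, -1, -1, +1, +1]

correct = sum(1 for p, t in zip(predictions, truth) if p == t)
accuracy = correct / float(len(truth))
print(accuracy)  # 3 of 5 correct -> 0.6
```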
Let us start by evaluating the accuracy of the `small_model` and `decision_tree_model` on the training data
```
print small_model.evaluate(train_data)['accuracy']
print decision_tree_model.evaluate(train_data)['accuracy']
```
**Checkpoint:** You should see that the **small_model** performs worse than the **decision_tree_model** on the training data.
Now, let us evaluate the accuracy of the **small_model** and **decision_tree_model** on the entire **validation_data**, not just the subsample considered above.
```
print small_model.evaluate(validation_data)['accuracy']
print decision_tree_model.evaluate(validation_data)['accuracy']
```
**Quiz Question:** What is the accuracy of `decision_tree_model` on the validation set, rounded to the nearest .01?
## Evaluating accuracy of a complex decision tree model
Here, we will train a large decision tree with `max_depth=10`. This will allow the learned tree to become very deep, and result in a very complex model. Recall that in lecture, we prefer simpler models with similar predictive power. This will be an example of a more complicated model which has similar predictive power, i.e. something we don't want.
```
big_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features, max_depth = 10)
```
Now, let us evaluate **big_model** on the training set and validation set.
```
print big_model.evaluate(train_data)['accuracy']
print big_model.evaluate(validation_data)['accuracy']
```
**Checkpoint:** We should see that **big_model** has even better performance on the training set than **decision_tree_model** did on the training set.
**Quiz Question:** How does the performance of **big_model** on the validation set compare to **decision_tree_model** on the validation set? Is this a sign of overfitting?
### Quantifying the cost of mistakes
Every mistake the model makes costs money. In this section, we will try and quantify the cost of each mistake made by the model.
Assume the following:
* **False negatives**: Loans that were actually safe but were predicted to be risky. This results in an opportunity cost of losing a loan that would have otherwise been accepted.
* **False positives**: Loans that were actually risky but were predicted to be safe. These are much more expensive because it results in a risky loan being given.
* **Correct predictions**: All correct predictions don't typically incur any cost.
Let's write code that can compute the cost of mistakes made by the model. Complete the following 4 steps:
1. First, let us compute the predictions made by the model.
2. Second, compute the number of false positives.
3. Third, compute the number of false negatives.
4. Finally, compute the cost of mistakes made by the model by adding up the costs of false negatives and false positives.
First, let us make predictions on `validation_data` using the `decision_tree_model`:
```
predictions = decision_tree_model.predict(validation_data)
```
**False positives** are predictions where the model predicts +1 but the true label is -1. Complete the following code block for the number of false positives:
```
false_positives = 0
for i in range(len(predictions)):
    if predictions[i] == 1 and validation_data['safe_loans'][i] == -1:
        false_positives += 1
print false_positives
```
**False negatives** are predictions where the model predicts -1 but the true label is +1. Complete the following code block for the number of false negatives:
```
false_negatives = 0
for i in range(len(predictions)):
    if predictions[i] == -1 and validation_data['safe_loans'][i] == 1:
        false_negatives += 1
print false_negatives
```
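With NumPy arrays (or GraphLab SArrays) the same counts can be computed without an explicit loop; a sketch on small hypothetical label arrays:

```python
import numpy as np

# Hypothetical predicted and true labels (+1 safe / -1 risky).
preds = np.array([+1, +1, -1, -1, +1])
truth = np.array([+1, -1, -1, +1, +1])

fp = int(np.sum((preds == 1) & (truth == -1)))   # predicted safe, actually risky
fn = int(np.sum((preds == -1) & (truth == 1)))   # predicted risky, actually safe
print(fp, fn)  # 1 1

# Combining the counts with per-mistake costs:
cost = 20000 * fp + 10000 * fn
```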
**Quiz Question:** Let us assume that each mistake costs money:
* Assume a cost of \$10,000 per false negative.
* Assume a cost of \$20,000 per false positive.
What is the total cost of mistakes made by `decision_tree_model` on `validation_data`?
```
print 10000*false_negatives+20000*false_positives
```
# Testing Code with pytest
In this lesson we will be going over some of the things we've learned so far about testing and demonstrate how to use pytest to expand your tests. We'll start by looking at some functions which have been provided for you, and then move on to testing them.
In your repo you should find a Python script called `fibonacci.py`, which contains a couple of functions providing slightly different implementations of the [Fibonacci sequence](https://en.wikipedia.org/wiki/Fibonacci_number). Each of these should take an integer input `n` and return the first `n` Fibonacci numbers.
```
%load fibonacci.py
import fibonacci as f
print(f.fib(15))
print(f.fib_numpy(15))
```
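The contents of `fibonacci.py` are not reproduced here, but a plausible iterative implementation of `fib` might look like the following sketch (the repo's version may differ in detail):

```python
def fib(n):
    """Return a list of the first n Fibonacci numbers (illustrative sketch)."""
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fib(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```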
Once you've had a look at these functions and are happy with using them, let's move on to testing them.
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Testing functions</h2>
</div>
<div class="panel-body">
<ol>
<li>
<p>Create a new script called <code>test_fibonacci.py</code>, or similar. In this script, write a test function for each of the Fibonacci implementations. Consider the following questions when writing your tests:</p>
<ul>
<li>How many different inputs do you need to test to be confident that the function is working as expected?</li>
<li>For a given input, is there a known, well-defined answer against which you can check the output?</li>
<li>Does the function output have any other qualities which might be wrong, and which should be tested?</li>
</ul>
<p>Remember that in order for your tests to call your functions, that script will need to import them.</p>
</li>
</ol>
</div>
</section>
```
%load test_fibonacci1.py
from test_fibonacci1 import test_fib_10, test_fib_numpy_10
test_fib_10()
test_fib_numpy_10()
```
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Generalising the tests</h2>
</div>
<div class="panel-body">
<p>The approach we've used above, with one test for each function, is fine. But it's very specific to this particular scenario - if we introduced another implementation, we would have to write a new test function for it, which is not the point of modularity. Since our functions are supposed to give the same output, a better approach would be to have one generalised test function which could test any function we pass it.</p>
<ol>
<li>Combine your tests into one test function which takes a function as input and uses that as the function to be tested. Run your Fibonacci implementations through this new test and make sure they still pass.</li>
<li>The above solutions testing a specific input are fine in theory, but the point of tests is to find unexpected behaviour. Generalise your test function to test correct behaviour for a Fibonacci sequence of random length. You will probably want to look at the <code>numpy.random</code> module.</li>
</ol>
</div>
</section>
```
%load test_fibonacci2.py
from test_fibonacci2 import test_fib_10
test_fib_10(f.fib)
test_fib_10(f.fib_numpy)
```
Next, let's add a third implementation of the Fibonacci sequence.
```
def fib_recursive(n):
    if n == 1 or n == 2:
        return 2
    return fib_recursive(n-1) + fib_recursive(n-1)

def fib_3(n):
    return [fib_recursive(i) for i in range(1, n)]
fib_3(10)
```
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Testing a Third Implementation</h2>
</div>
<div class="panel-body">
<p>Copy the functions above (exactly as shown here) into your <code>fibonacci.py</code> script. Use your tests to find the bugs and compare its output to the previous implementations.</p>
</div>
</section>
<section class="solution panel panel-primary">
<div class="panel-heading">
<h2><span class="fa fa-eye"></span> Solution</h2>
</div>
</section>
The actual `fib_recursive` function should read:
```
def fib_recursive(n):
    if n == 1 or n == 2:
        return 1
    return fib_recursive(n-1) + fib_recursive(n-2)
```
and should pass the tests.
## Introducing `pytest`
`pytest` is a Python module which contains a lot of tools for automating tests, rather than running the test for each function one at a time as we've done so far. We won't go into much detail with this, but you should know that it exists and to look into it if you need to write a large number of tests.
The most basic way to use `pytest` is with the command-line tool it provides. This command takes a filename as input, runs the functions defined there and reports whether they pass or fail.
```
!pytest test_fibonacci1.py
```
This works in this example because I've used a file containing only our first versions of the tests, which took no input. Using the new combined test, `pytest` doesn't know what input to provide, so it reports the test as having failed. However, there is a commonly-used feature in `pytest` which addresses this, which is the `parametrize` decorator. This allows you to specify inputs for the input parameters of your test functions. What makes it particularly useful though, is that you can specify several for each parameter and `pytest` will automatically run the test with all of those inputs. In this way you can automate testing your functions with a wide range of inputs without having to type out many different function calls yourself.
For our example, we can use this decorator to pass in the functions we wish to test, like this:
```
# %load test_fibonacci3.py
import pytest
import numpy as np
from fibonacci import fib, fib_numpy

@pytest.mark.parametrize("f_fib", (fib, fib_numpy))
def test_random_fib(f_fib):
    n = np.random.randint(1, 1000)
    a = f_fib(n)
    n2 = np.random.randint(3, n)
    assert a[n2] == a[n2-1] + a[n2-2]
```
Now when we run this script with `pytest`, you'll notice that even though we have only defined one function, it still runs two tests, one with each of our Fibonacci functions as input.
```
!pytest test_fibonacci3.py
```
This should also pass all the previous tests. You may also want to add a test that detects the `RecursionError` raised when $n = 0$.
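Such a test could be written with `pytest.raises`; a sketch, using a local copy of the corrected recursive implementation since `fibonacci.py` is not reproduced here:

```python
import pytest

def fib_recursive(n):
    # Corrected recursive Fibonacci (as in the solution above).
    if n == 1 or n == 2:
        return 1
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def test_fib_recursive_zero_raises():
    # n == 0 never reaches a base case, so the recursion runs away.
    with pytest.raises(RecursionError):
        fib_recursive(0)
```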
# The H-R diagram
This notebook will explore the HR diagram, perhaps the most important figure in astronomy, and a classic example of the power of data visualization.
When we look at the sky, some stars are bright and some faint. This is due not only to their intrinsic properties, but also to their distance from us. If we want to study the intrinsic properties of stars, we first need to determine their distances.
```
from astropy.io import ascii
from astropy.table import Table
from astropy import units as u
```
We'll be using a catalog of stars known as the [HYG database](http://www.astronexus.com/hyg). It is the combination of three surveys which have measured distances. In the file I've provided in this repository I've filtered that catalog to only include the columns we need, and to remove entries without measured distances and colors.
```
starTable = ascii.read('HGY_dist.dat')
starTable
# plotting imports
import matplotlib.pyplot as plt
%matplotlib inline
#import seaborn
import numpy as np
from astropy.coordinates import Angle
import seaborn
```
## Plotting the Skymap
Let's see where those stars are on the sky
```
#seaborn.set_style(("darkgrid"))
fig = plt.figure (figsize=(13,6))
ax = fig.add_subplot(111,projection="mollweide")
plt.scatter(Angle(starTable[1:]['ra'],u.hr).wrap_at(180.*u.deg).radian,Angle(starTable[1:]['dec'],u.deg).radian,s=1,edgecolors='none')
```
OK, we can see our Galaxy in that plot, but not much else. Let's make another map, this time only showing stars brighter than 4th magnitude, and also scaling the stars by their brightness.
```
fig = plt.figure (figsize=(13,6))
ax = fig.add_subplot(111,projection="mollweide")
plt.scatter(Angle(starTable[1:]['ra'],u.hr).wrap_at(180.*u.deg).radian,Angle(starTable[1:]['dec'],u.deg).radian,s=(2.0*(4.0-starTable[1:]['mag'])))
```
### Variable distributions
To start let's see how the quantities we'll be working with are distributed.
```
fig = plt.figure (figsize=(12,9))
ax = fig.add_subplot(311)
ax.hist(starTable['dist'],bins=100)
ax.set_xlabel("Distance(pc)")
a2 = fig.add_subplot(312)
a2.hist(starTable['absmag'],bins=100)
a2.set_xlabel("M")
a3 = fig.add_subplot(313)
a3.hist(starTable['ci'],bins=100)
a3.set_xlabel("B-V")
```
## The HR Diagram
Simply plotting color vs. absolute magnitude gives us the HR diagram.
```
#seaborn.set_style(("darkgrid"))
fig = plt.figure (figsize=(12.5, 7.5))
plt.gca().invert_yaxis()
plt.xlim(-0.5,2.5)
plt.ylim(17,-7)
plt.title('H-R Diagram',fontsize=26)
plt.ylabel('Absolute Magnitude (V)')
plt.xlabel('Color (B-V)')
plt.scatter(starTable['ci'] ,starTable['absmag'],s=.5,edgecolors='none')
#seaborn.set_style(("darkgrid"), {"axes.facecolor": ".2"})
fig = plt.figure (figsize=(12.5, 7.5))
plt.gca().invert_yaxis()
plt.xlim(-0.5,2.5)
plt.ylim(17,-7)
plt.title('H-R Diagram',fontsize=26)
plt.ylabel('Absolute Magnitude (V)')
plt.xlabel('Color (B-V)')
plt.scatter(starTable['ci'] ,starTable['absmag'],s=.5,cmap='coolwarm',c=starTable['ci'],edgecolors='none',vmin=0.0,vmax=1.5)
```
## HR Diagram and Stellar Evolution
The grouping in the HR diagram led to the recognition that the different regions in the diagram corresponded to an evolutionary sequence. We now have detailed models for how a star of a given mass evolves over its lifetime. [Here](http://www.epantaleo.com/wp-content/uploads/2015/10/HR_diagram_d3.html) is a visualization of the evolution of a 100-star cluster over 5 billion years. This was produced by Ester Pantaleo, Aaron Geller and myself, based on simulations run by Aaron Geller using the NBODY6 code, which includes both gravitational interactions and stellar evolution.
```
from IPython.display import YouTubeVideo
YouTubeVideo("qJMom80Qdc8")
```
## Color Magnitude Diagram of a Globular Cluster
The color-magnitude diagram is very similar to the HR diagram, except it is plotted using apparent rather than absolute magnitudes. If we were to construct one for a random sample of stars it would be a mess; however, the stars in a cluster are all at approximately the same distance, so the result is just the HR diagram shifted by a constant that depends on the distance to the cluster.
##### We can connect to Worldwide Telescope to see the imagery of the cluster we are working with, Palomar 5
```
from pywwt.windows import WWTWindowsClient
my_wwt = WWTWindowsClient()
# Tell WWT to Fly to the position of the cluster
my_wwt.change_mode('Sky',fly_to=[-0.1082,229.0128/15,1,0.,0.])
```
The catalog pal5.csv was created by doing a radial search in the Sloan Digital Sky Survey [SDSS](http://sdss.org) for all objects within three arcminutes of the center of the cluster.
```
pal5 = ascii.read('pal5.csv')
pal5
```
Oops, that contains everything SDSS identified in the direction of the cluster, but we only want stars (not galaxies). We keep only objects with type=6 (in this catalog, 6 means star and 3 means galaxy).
```
mask = pal5['type']==6
pal5_stars = pal5[mask]
pal5_stars
```
##### OK, now let's see the catalog in WWT
```
#Set up WWT layer
new_layer = my_wwt.new_layer("Sky", "Palomar 5", pal5_stars.colnames)
#Set visualization parameters in WWT
props_dict = {"CoordinatesType":"Spherical",\
"MarkerScale":"Screen",\
"PlotType":"Circle",\
"PointScaleType":"Constant",\
"ScaleFactor":"16",\
"RaUnits":"Degrees",\
"TimeSeries":"False"}
new_layer.set_properties(props_dict)
#Send data to WWT client
new_layer.update(data=pal5_stars, purge_all=True, no_purge=False, show=True)
```
##### And now let's plot the color-magnitude diagram
```
#seaborn.set_style(("darkgrid"))
fig = plt.figure (figsize=(7.5, 5))
plt.gca().invert_yaxis()
plt.xlim(-0.5,2.0)
plt.ylim(25,16)
plt.title('Color-Magnitude Diagram for Pal5',fontsize=26)
plt.ylabel('Apparent Magnitude (g)')
plt.xlabel('color (g-r)')
plt.scatter(pal5_stars['g']-pal5_stars['r'] ,pal5_stars['g'],s=2,cmap='coolwarm',edgecolors='none',vmin=0.0,vmax=1.0)
```
## Things for you to try:
Now we can figure out a lot of things with HR color-magnitude diagrams. I'll get you started with the following exercises, but you'll have to do most of the work yourself.
#### Finding the Distance to Pal 5
The color-magnitude diagram uses apparent magnitudes instead of absolute magnitudes. The difference between the apparent and absolute magnitude is known as the distance modulus, (m - M). Estimate the distance modulus from the shift between the two diagrams to estimate the distance to Pal 5. Things are complicated a little by the fact that we are using different magnitude systems (B and V vs. SDSS g and r), but the shifts between the two systems are small and can be ignored for this rough estimate.
```
# This is the code for converting the distance Modulus to a distance
from astropy.coordinates import Distance
Distance(distmod=10.0)
```
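The relation behind `Distance(distmod=...)` is m − M = 5 log₁₀(d / 10 pc), which inverts to d = 10^((m−M)/5 + 1) pc. A quick sanity check in plain Python (the value 10.0 is the same illustrative distance modulus used above):

```python
def distmod_to_pc(distmod):
    # d [pc] = 10 ** (distmod / 5 + 1)
    return 10 ** (distmod / 5.0 + 1.0)

print(distmod_to_pc(10.0))  # 1000.0, matching Distance(distmod=10.0)
```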
#### Which is older pal 5 or pal 3
Construct the color-magnitude diagram of Pal 3 (RA = 151.3801 deg, dec = 0.072 deg). By comparing the location of their main-sequence turnoffs, can you figure out which cluster is older?
I've already downloaded the data for you into the repository; you can read it like this:
```
# Load Pal 3 data
pal3objects = ascii.read('pal3.csv')
pal3objects
```
#### Make an HR Diagram from Gaia Data Release 1 (2 million stars!)
The [Gaia](http://sci.esa.int/gaia/) mission has recently released a sample of 2 million star distances, a number that dwarfs what we have been using. But that is nothing: in April 2018 they will release a sample containing over one billion!
That will be the ultimate HR diagram. I'll provide you with the code to download the catalog to get you started. Good luck!
```
# We will download the catalog from the VizieR catalog service
from astroquery.vizier import Vizier
v = Vizier()
v.ROW_LIMIT = -1 # Without this there is a limit
catalogs = v.get_catalogs('I/337/tgas')
GaiaStars = catalogs[0]
```
```
try:
    from openmdao.utils.notebook_utils import notebook_mode
except ImportError:
    !python -m pip install openmdao[notebooks]
```
# ScipyOptimizeDriver
ScipyOptimizeDriver wraps the optimizers in *scipy.optimize.minimize*. In this example, we use the SLSQP optimizer to find the minimum of the Paraboloid problem.
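Under the hood this is a call to `scipy.optimize.minimize`. A standalone sketch of the same minimization, using the paraboloid f(x, y) = (x − 3)² + xy + (y + 4)² − 3 that OpenMDAO's `Paraboloid` test component implements:

```python
from scipy.optimize import minimize

def paraboloid(v):
    x, y = v
    # Same function as OpenMDAO's Paraboloid test component.
    return (x - 3.0) ** 2 + x * y + (y + 4.0) ** 2 - 3.0

result = minimize(paraboloid, x0=[50.0, 50.0], method='SLSQP',
                  bounds=[(-50, 50), (-50, 50)], tol=1e-9)
print(result.x)  # approximately [6.6667, -7.3333]
```

The driver adds value on top of this raw call by connecting design variables, objectives, constraints, and OpenMDAO-computed gradients automatically.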
```
from openmdao.utils.notebook_utils import get_code
from myst_nb import glue
glue("code_src019", get_code("openmdao.test_suite.components.paraboloid.Paraboloid"), display=False)
```
:::{Admonition} `Paraboloid` class definition
:class: dropdown
{glue:}`code_src019`
:::
```
import openmdao.api as om
from openmdao.test_suite.components.paraboloid import Paraboloid
prob = om.Problem()
model = prob.model
model.add_subsystem('comp', Paraboloid(), promotes=['*'])
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.driver.options['tol'] = 1e-9
prob.driver.options['disp'] = True
model.add_design_var('x', lower=-50.0, upper=50.0)
model.add_design_var('y', lower=-50.0, upper=50.0)
model.add_objective('f_xy')
prob.setup()
prob.set_val('x', 50.0)
prob.set_val('y', 50.0)
prob.run_driver()
print(prob.get_val('x'))
print(prob.get_val('y'))
from openmdao.utils.assert_utils import assert_near_equal
assert_near_equal(prob.get_val('x'), 6.66666667, 1e-6)
assert_near_equal(prob.get_val('y'), -7.3333333, 1e-6)
```
## ScipyOptimizeDriver Options
```
om.show_options_table("openmdao.drivers.scipy_optimizer.ScipyOptimizeDriver")
```
## ScipyOptimizeDriver Constructor
The call signature for the *ScipyOptimizeDriver* constructor is:
```{eval-rst}
.. automethod:: openmdao.drivers.scipy_optimizer.ScipyOptimizeDriver.__init__
:noindex:
```
## ScipyOptimizeDriver Option Examples
**optimizer**
The “optimizer” option lets you choose which optimizer to use. ScipyOptimizeDriver supports all of the optimizers in scipy.optimize except for ‘dogleg’ and ‘trust-ncg’. Generally, the optimizers you are most likely to use are “COBYLA” and “SLSQP”, as these are the only ones that support constraints. SLSQP supports both equality and inequality constraints and uses gradients provided by OpenMDAO, while COBYLA is gradient-free and supports only inequality constraints.
Here we pass the optimizer option as a keyword argument.
```
import openmdao.api as om
from openmdao.test_suite.components.paraboloid import Paraboloid
prob = om.Problem()
model = prob.model
model.add_subsystem('comp', Paraboloid(), promotes=['*'])
prob.driver = om.ScipyOptimizeDriver(optimizer='COBYLA')
model.add_design_var('x', lower=-50.0, upper=50.0)
model.add_design_var('y', lower=-50.0, upper=50.0)
model.add_objective('f_xy')
prob.setup()
prob.set_val('x', 50.0)
prob.set_val('y', 50.0)
prob.run_driver()
print(prob.get_val('x'))
print(prob.get_val('y'))
assert_near_equal(prob.get_val('x'), 6.66666667, 1e-6)
assert_near_equal(prob.get_val('y'), -7.3333333, 1e-6)
```
**maxiter**
The “maxiter” option is used to specify the maximum number of major iterations before termination. It is a valid option across nearly all of the available optimizers.
```
import openmdao.api as om
from openmdao.test_suite.components.paraboloid import Paraboloid
prob = om.Problem()
model = prob.model
model.add_subsystem('comp', Paraboloid(), promotes=['*'])
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['maxiter'] = 20
model.add_design_var('x', lower=-50.0, upper=50.0)
model.add_design_var('y', lower=-50.0, upper=50.0)
model.add_objective('f_xy')
prob.setup()
prob.set_val('x', 50.0)
prob.set_val('y', 50.0)
prob.run_driver()
print(prob.get_val('x'))
print(prob.get_val('y'))
assert_near_equal(prob.get_val('x'), 6.66666667, 1e-6)
assert_near_equal(prob.get_val('y'), -7.3333333, 1e-6)
```
**tol**
The “tol” option allows you to specify the tolerance for termination.
```
import openmdao.api as om
from openmdao.test_suite.components.paraboloid import Paraboloid
prob = om.Problem()
model = prob.model
model.add_subsystem('comp', Paraboloid(), promotes=['*'])
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['tol'] = 1.0e-9
model.add_design_var('x', lower=-50.0, upper=50.0)
model.add_design_var('y', lower=-50.0, upper=50.0)
model.add_objective('f_xy')
prob.setup()
prob.set_val('x', 50.0)
prob.set_val('y', 50.0)
prob.run_driver()
print(prob.get_val('x'))
print(prob.get_val('y'))
assert_near_equal(prob.get_val('x'), 6.66666667, 1e-6)
assert_near_equal(prob.get_val('y'), -7.3333333, 1e-6)
```
## ScipyOptimizeDriver Driver Specific Options
Optimizers in *scipy.optimize.minimize* have optimizer specific options. To let the user specify values for these options, OpenMDAO provides an option in the form of a dictionary named *opt_settings*. See the *scipy.optimize.minimize* documentation for more information about the driver specific options that are available.
As an example, here is code using some *opt_settings* for the shgo optimizer:
```
# Source of example: https://stefan-endres.github.io/shgo/
import numpy as np
import openmdao.api as om
size = 3 # size of the design variable
def rastrigin(x):
    a = 10  # constant
    return np.sum(np.square(x) - a * np.cos(2 * np.pi * x)) + a * np.size(x)

class Rastrigin(om.ExplicitComponent):

    def setup(self):
        self.add_input('x', np.ones(size))
        self.add_output('f', 0.0)
        self.declare_partials(of='f', wrt='x', method='cs')

    def compute(self, inputs, outputs, discrete_inputs=None, discrete_outputs=None):
        x = inputs['x']
        outputs['f'] = rastrigin(x)
prob = om.Problem()
model = prob.model
model.add_subsystem('rastrigin', Rastrigin(), promotes=['*'])
prob.driver = driver = om.ScipyOptimizeDriver()
driver.options['optimizer'] = 'shgo'
driver.options['disp'] = False
driver.opt_settings['maxtime'] = 10 # seconds
driver.opt_settings['iters'] = 3
driver.opt_settings['maxiter'] = None
model.add_design_var('x', lower=-5.12*np.ones(size), upper=5.12*np.ones(size))
model.add_objective('f')
prob.setup()
prob.set_val('x', 2*np.ones(size))
prob.run_driver()
print(prob.get_val('x'))
print(prob.get_val('f'))
assert_near_equal(prob.get_val('x'), np.zeros(size), 1e-6)
assert_near_equal(prob.get_val('f'), 0.0, 1e-6)
```
Notice that when using the shgo optimizer, setting *opt_settings['maxiter']* to None overrides *ScipyOptimizeDriver*'s *options['maxiter']* value. Since *options['maxiter']* only accepts integers, *opt_settings['maxiter']* provides a way to set the maxiter value for the shgo optimizer to None.
```
import scipy.io, os
import numpy as np
import matplotlib.pyplot as plt
from netCDF4 import Dataset
from fastjmd95 import rho
from matplotlib.colors import ListedColormap
import seaborn as sns; sns.set()
import seawater as sw
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import matplotlib as mpl
colours=sns.color_palette('colorblind', 10)
my_cmap = ListedColormap(colours)
color_list=colours
```
## Code to plot the meridional overturning and density structure for the Pacific from Sonnewald and Lguensat (2021).
Data used are from the ECCOv4 state estimate, available at: https://ecco-group.org/products-ECCO-V4r4.html
Note: data are generated for the North Atlantic, also including the Southern Ocean and Arctic basin. Data for the Pacific and Indian Oceans are also generated, and the code below can be adjusted to plot these as well.
```
gridInfo=np.load('/home/jovyan/DNN4Cli/figures/latLonDepthLevelECCOv4.npz')
zLev=gridInfo['depthLevel'][:]
depthPlot=zLev.cumsum()
lat=gridInfo['lat'][:]
lon=gridInfo['lon'][:]
lat.shape,lon.shape,zLev.shape
zMat=np.repeat(zLev,720*360).reshape((50,360,720))
dvx=np.rot90(0.5*111000*np.cos(lat*(np.pi/180)),1)
masks=np.load('regimeMasks.npz')
maskMD=masks['maskMD']
maskSSV=masks['maskSSV']
maskNSV=masks['maskNSV']
maskTR=masks['maskTR']
maskSO=masks['maskSO']
maskNL=masks['maskNL']
```
## Calculating the streamfunction
The overall meridional overturning ($\Psi_{z\theta}$) from Fig. 3 in Sonnewald and Lguensat (2021) is defined as:
$$\Psi_{z\theta}(\theta,z)=- \int^z_{-H} \int_{\phi_2}^{\phi_1} v(\phi,\theta,z')d\phi dz',$$
where $z$ is the relative level depth and $v$ is the meridional (north-south) component of velocity. For the regimes, the relevant velocity fields were then used. A positive $\Psi_{z\theta}$ signifies a clockwise circulation, while a negative $\Psi_{z\theta}$ signifies an anticlockwise circulation.
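The integral above can be discretized as a zonal sum followed by a cumulative sum over depth. The sketch below is illustrative only: the names and shapes (`v` as `(nz, nlat, nlon)`, a scalar `dx`, level thicknesses `dz`) are assumptions for a toy example, not the actual ECCOv4 arrays loaded below.

```python
import numpy as np

def streamfunction(v, dx, dz):
    # Psi(theta, z) = -cumsum_z( sum_lon( v * dx ) * dz )
    zonal_transport = np.nansum(v * dx, axis=2)               # integrate over longitude -> (nz, nlat)
    psi = -np.cumsum(zonal_transport * dz[:, None], axis=0)   # integrate over depth
    return psi

# toy example: uniform northward flow gives a linearly growing |Psi| with depth
v = np.ones((5, 4, 3))    # nz=5, nlat=4, nlon=3
dx = 1000.0               # zonal cell width in metres
dz = np.full(5, 10.0)     # level thickness in metres
psi = streamfunction(v, dx, dz)
print(psi.shape)          # (5, 4)
```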
```
dens = np.load("/home/jovyan/DNN4Cli/figures/density20yr.npy")
PSI_pacific = np.load("/home/jovyan/DNN4Cli/figures/PSI_pacific.npz")
for i in PSI_pacific:
    print(i)
PSI_all = PSI_pacific["PSI_all_P"]
PSI_NL= PSI_pacific["PSI_NL_P"]
PSI_SO = PSI_pacific["PSI_SO_P"]
PSI_SSV = PSI_pacific["PSI_SSV_P"]
PSI_NSV = PSI_pacific["PSI_NSV_P"]
PSI_MD = PSI_pacific["PSI_MD_P"]
PSI_TR = PSI_pacific["PSI_TR_P"]
```
# Finally, we plot the data.
The plot is a composite of different subplots.
```
levs=[32,33,34, 34.5, 35, 35.5,36,36.5,37,37.25,37.5,37.75,38]
#levs=[-20,-19,34, 34.5, 35, 35.5,36,36.5,37,37.25,37.5,37.75,38]
len(levs)
cols=plt.cm.viridis([300,250, 200,150, 125, 100, 50,30, 10,15,10,9,1])
Land=np.ones(np.nansum(PSI_all, axis=0).shape)*np.nan
Land[np.nansum(PSI_all, axis=0)==0.0]=0
land3D=np.ones(dens.shape)
land3D[dens==0]=np.nan
def zPlotSurf(ax, data,zMin, zMax,label,mm,latMin,latMax,RGB,Ticks,saveName='test'):
    land=np.ones(np.nanmean(data, axis=0).shape)*np.nan
    land[np.nansum(data, axis=0)==0.0]=0
    n=50
    levels = np.linspace(-20, 20, n+1)
    ax.contourf(lat[0,latMin:latMax],-depthPlot[zMin:zMax],-np.nanmean(data, axis=0)[zMin:zMax,latMin:latMax], levels=np.linspace(-20, 20, n+1),cmap=plt.cm.seismic, extend='both')
    n2=30
    densityPlot=np.nanmean((dens*land3D*mm), axis=2)
    assert(len(levs)==len(cols))
    CS=ax.contour(lat[0,latMin:latMax],-depthPlot[zMin:zMax],densityPlot[zMin:zMax,latMin:latMax],
                  levels=levs,
                  linewidths=3,colors=cols, extend='both')
    ax.tick_params(axis='y', labelsize=20)
    if Ticks == 0:
        ax.set_xticklabels( () )
    elif Ticks == 1:
        ax.set_xticklabels( () )
        ax.set_yticklabels( () )
    ax.contourf(lat[0,latMin:latMax],-depthPlot[zMin:zMax],land[zMin:zMax,latMin:latMax], 1,cmap=plt.cm.Set2)
    ax.contourf(lat[0,latMin:latMax],-depthPlot[zMin:zMax],Land[zMin:zMax,latMin:latMax], 50,cmap=plt.cm.bone)
    yL=ax.get_ylim()
    xL=ax.get_xlim()
    plt.text(xL[0]+0.02*np.ptp(xL), yL[0]+0.4*np.ptp(yL), label, fontsize=20, size=30,
             weight='bold', bbox={'facecolor':'white', 'alpha':0.7}, va='bottom')
def zPlotDepth(ax, data,zMin, zMax,label,mm,latMin,latMax,RGB,Ticks,saveName='test'):
    land=np.ones(np.nanmean(data, axis=0).shape)*np.nan
    land[np.nansum(data, axis=0)==0.0]=0
    n=50
    levels = np.linspace(-20, 20, n+1)
    ax.contourf(lat[0,latMin:latMax],-depthPlot[zMin:zMax],-np.nanmean(data, axis=0)[zMin:zMax,latMin:latMax], levels=np.linspace(-20, 20, n+1),cmap=plt.cm.seismic, extend='both')
    n2=30
    densityPlot=np.nanmean((dens*land3D*mm), axis=2)
    ax.contour(lat[0,latMin:latMax],-depthPlot[zMin:zMax],densityPlot[zMin:zMax,latMin:latMax], colors=cols,
               levels=levs,
               linewidths=3, extend='both')
    if Ticks == 0:
        ax.tick_params(axis='y', labelsize=20)
        #ax.set_xticklabels( () )
    elif Ticks== 1:
        #ax.set_xticklabels( () )
        ax.set_yticklabels( () )
    plt.tick_params(axis='both', labelsize=20)
    #plt.clim(cmin, cmax)
    ax.contourf(lat[0,latMin:latMax],-depthPlot[zMin:zMax],land[zMin:zMax,latMin:latMax], 1,cmap=plt.cm.Set2)
    ax.contourf(lat[0,latMin:latMax],-depthPlot[zMin:zMax],Land[zMin:zMax,latMin:latMax], 50,cmap=plt.cm.bone)
    yL=ax.get_ylim()
    xL=ax.get_xlim()
    plt.text(xL[0]+0.03*np.ptp(xL), yL[0]+0.03*np.ptp(yL), label, fontsize=20, size=30,
             weight='bold', bbox={'facecolor':RGB, 'alpha':1}, va='bottom')
LatMin = 120
LatMax = 240
# Set general figure options
# figure layout
xs = 15.5 # figure width in inches
nx = 2 # number of axes in x dimension
ny = 3 # number of sub-figures in y dimension (each sub-figure has two axes)
nya = 2 # number of axes per sub-figure
idy = [2.0, 1.0] # size of the figures in the y dimension
xm = [0.07, 0.07,0.9, 0.07] # x margins of the figure (left to right)
ym = [1.5] + ny*[0.07, 0.1] + [0.3] # y margins of the figure (bottom to top)
# pre-calculate some things
xcm = np.cumsum(xm) # cumulative margins
ycm = np.cumsum(ym) # cumulative margins
idx = (xs - np.sum(xm))/nx
idy_off = [0] + idy
ys = np.sum(idy)*ny + np.sum(ym) # size of figure in y dimension
# make the figure!
fig = plt.figure(figsize=(xs, ys))
# loop through sub-figures
ix,iy=0,0
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
#ax = plt.axes(loc)
for iys in range(nya):
    # (bottom left corner x, bottom left corner y, width, height)
    loc = ((xcm[ix] + (ix*idx))/xs,
           (ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
           idx/xs,
           idy[iys]/ys)
    #print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
    # create the axis
    ax = plt.axes(loc)
    # split between your two figure types
    if iys == 0:
        zPlotDepth(ax, PSI_TR,1,50,'TR', maskTR,LatMin, LatMax, color_list[1],'')
        # if not the bottom figure remove x ticks
        if iy > 0:
            ax.set_xticks([])
        else:
            xticks = ax.get_xticks()
            ax.set_xticklabels(['{:0.0f}$^\circ$N'.format(xtick) for xtick in xticks])
    elif iys == 1:
        zPlotSurf(ax, PSI_TR,0,10,'', maskTR,LatMin, LatMax, color_list[1],'')
        # remove x ticks
        ax.set_xticks([])
ix,iy=0,1
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
#ax = plt.axes(loc)
for iys in range(nya):
    # (bottom left corner x, bottom left corner y, width, height)
    loc = ((xcm[ix] + (ix*idx))/xs,
           (ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
           idx/xs,
           idy[iys]/ys)
    #print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
    # create the axis
    ax = plt.axes(loc)
    # split between your two figure types
    if iys == 0:
        zPlotDepth(ax, PSI_NL,1,50,'NL', maskNL,LatMin, LatMax, color_list[-1],'')
        # if not the bottom figure remove x ticks
        if iy > 0:
            ax.set_xticks([])
    elif iys == 1:
        zPlotSurf(ax, PSI_NL,0,10,'', maskNL,LatMin, LatMax, color_list[4],'')
        # remove x ticks
        ax.set_xticks([])
############### n-SV
ix,iy=0,2
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
#ax = plt.axes(loc)
for iys in range(nya):
    # (bottom left corner x, bottom left corner y, width, height)
    loc = ((xcm[ix] + (ix*idx))/xs,
           (ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
           idx/xs,
           idy[iys]/ys)
    #print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
    # create the axis
    ax = plt.axes(loc)
    # split between your two figure types
    if iys == 0:
        zPlotDepth(ax, PSI_NSV,1,50,'N-SV', maskNSV,LatMin, LatMax, color_list[4],'')
        # if not the bottom figure remove x ticks
        if iy > 0:
            ax.set_xticks([])
    elif iys == 1:
        zPlotSurf(ax, PSI_NSV,0,10,'', maskNSV,LatMin, LatMax, color_list[-1],'')
        # remove x ticks
        ax.set_xticks([])
#
#_______________________________________________________________________
# S-SV
ix,iy=1,2
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
# ax = plt.axes(loc)
for iys in range(nya):
    # (bottom left corner x, bottom left corner y, width, height)
    loc = ((xcm[ix] + (ix*idx))/xs,
           (ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
           idx/xs,
           idy[iys]/ys)
    #print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
    # create the axis
    ax = plt.axes(loc)
    # split between your two figure types
    if iys == 0:
        zPlotDepth(ax, PSI_SSV,1,50,'S-SV', maskSSV,LatMin, LatMax, color_list[2],1,'')
        # if not the bottom figure remove x ticks
        if iy > 0:
            ax.set_xticks([])
    elif iys == 1:
        zPlotSurf(ax, PSI_SSV,0,10,'', maskSSV,LatMin, LatMax, color_list[-3],1,'')
        # remove x ticks
        ax.set_xticks([])
#%%%%%%%%%%%%%%%%%%%%%%%%% SO
ix,iy=1,1
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
#ax = plt.axes(loc)
for iys in range(nya):
    # (bottom left corner x, bottom left corner y, width, height)
    loc = ((xcm[ix] + (ix*idx))/xs,
           (ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
           idx/xs,
           idy[iys]/ys)
    #print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
    # create the axis
    ax = plt.axes(loc)
    # split between your two figure types
    if iys == 0:
        zPlotDepth(ax, PSI_SO,1,50,'SO', maskSO,LatMin, LatMax, color_list[-3],1,'')
        # if not the bottom figure remove x ticks
        if iy > 0:
            ax.set_xticks([])
    elif iys == 1:
        zPlotSurf(ax, PSI_SO,0,10,'', maskSO,LatMin, LatMax, color_list[-3],1,'')
        # remove x ticks
        ax.set_xticks([])
#%%%%%%%MD
ix,iy=1,0
loc = ((xcm[ix] + (ix*idx))/xs,
(ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
idx/xs,
idy[iys]/ys)
#print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
# create the axis
#ax = plt.axes(loc)
for iys in range(nya):
    # (bottom left corner x, bottom left corner y, width, height)
    loc = ((xcm[ix] + (ix*idx))/xs,
           (ycm[nya*iy + iys] + np.sum(idy)*iy+ idy_off[iys])/ys,
           idx/xs,
           idy[iys]/ys)
    #print(loc[0], loc[1], loc[0] + loc[2], loc[1] + loc[3])
    # create the axis
    ax = plt.axes(loc)
    # split between your two figure types
    if iys == 0:
        zPlotDepth(ax, PSI_MD,1,50,'MD', maskMD,LatMin, LatMax, color_list[0],1,'')
        # if not the bottom figure remove x ticks
        if iy > 0:
            ax.set_xticks([])
        else:
            xticks = ax.get_xticks()
            ax.set_xticklabels(['{:0.0f}$^\circ$N'.format(xtick) for xtick in xticks])
    elif iys == 1:
        zPlotSurf(ax, PSI_MD,0,10,'', maskMD,LatMin, LatMax, color_list[-3],1,'')
        # remove x ticks
        ax.set_xticks([])
cmap = plt.get_cmap('viridis')
cmap = mpl.colors.ListedColormap(cols)
ncol = len(levs)
axes = plt.axes([(xcm[0])/(xs), (ym[0]-0.6)/ys, (2*idx + xm[1])/(xs*2), (0.2)/ys])
cb = fig.colorbar(plt.cm.ScalarMappable(norm=mpl.colors.Normalize(-0.5, ncol - 0.5), cmap=cmap),
cax=axes, orientation='horizontal')
cb.ax.set_xticks(np.arange(ncol))
cb.ax.set_xticklabels(['{:0.2f}'.format(lev) for lev in levs])
cb.ax.tick_params(labelsize=20)
cb.set_label(label=r'Density, $\sigma_2$',weight='bold', fontsize=20)
cmap = plt.get_cmap('seismic')
ncol = len(cols)
axes = plt.axes([(xcm[2]+2*idx)/(xs*2), (ym[0]-0.6)/ys, (2*idx+xm[3])/(xs*2), (0.2)/ys])
cb = fig.colorbar(plt.cm.ScalarMappable(norm=mpl.colors.Normalize(-20,20), cmap=cmap),
cax=axes, label='title', orientation='horizontal', extend='both',format='%.0f',
boundaries=np.linspace(-20, 20, 41))
cb.ax.tick_params(labelsize=20)
cb.set_label(label=r'Sv ($10^{6}\,m^{3}s^{-1}$)',weight='bold', fontsize=20)
# save as a png (this section plots the Pacific)
fig.savefig('psiRho_Pacific_sigma2.png', dpi=200, bbox_inches='tight')
```
# Evaluation of AD algorithm performance
```
import pandas as pd
import numpy as np
import sys
try:
    import tsad
except ImportError:
    sys.path.insert(1, '../')
from tsad.evaluating.evaluating import evaluating
import warnings
warnings.filterwarnings('ignore')
```
## Simple example
Initialize true and predicted labelled data
```
true = pd.Series(0,pd.date_range('2020-01-01','2020-01-20',freq='D'))
true.iloc[[6,14]]=1
prediction = pd.Series(0,pd.date_range('2020-01-01','2020-01-20',freq='D'))
prediction.iloc[[4,10]]=1
pd.concat([true, prediction], axis=1).reset_index()
```
Evaluating with the default `NAB` metric
```
results = evaluating(true=true,prediction=prediction)
print(results)
```
## Approaches for evaluating anomaly detection algorithms for time series data

## Metrics for evaluating AD algorithms for time series data in tsad

NAB metric: [link](https://ieeexplore.ieee.org/abstract/document/7424283/?casa_token=QrawzPwH7AkAAAAA:vzRggk5TMUviU2JOxxzG76ZlACc3paQhP7KtoUq8jmx7-DkrSWAUp4wZldlTjcqPpap6WPHCeu095g)
## Changepoints metrics
### Variants of input variables
A crucial element of the changepoint detection problem is the detection window:
* Predicted anomalies inside the detection window are counted as a single true positive
* The absence of predicted anomalies inside the detection window is counted as a single false negative
* Predicted points outside the detection windows are counted as false positives.
Thus we must assign left and right boundaries of a window for any true changepoint if it is available for a dataset. In TSAD, we have three ways to do this:
1. ```true``` variable as pd.Series and ```numenta_time``` variable (or ```portion```*)
2. ```true``` variable as a list of true changepoints in pd.Timestamp format and ```numenta_time``` variable (or ```portion```*)
3. ```true``` variable as the boundaries themselves: a list (a dataset can have more than one changepoint) of two-element lists holding the left and right pd.Timestamp boundary of each window.
\* The ```portion``` is needed if ```numenta_time = None```. In that case, the width of the detection window equals a ```portion``` of the length of ```prediction``` divided by the number of real CPs in this dataset. The default is 0.1.
```prediction``` is always pd.Series for one dataset

The picture above shows the predicted label values for the changepoint problem. The variable ```numenta_time``` is effectively the width of the window.
```prediction``` always has the same format (pd.Series) for one dataset:
```
prediction = pd.Series(0,pd.date_range('2020-01-01','2020-01-07',freq='D'))
prediction.iloc[3]=1
prediction
```
How would the ```true``` input variable look in each variant:
#### Variant 1. True as pd.Series
```
true = pd.Series(0,pd.date_range('2020-01-01','2020-01-07',freq='D'))
true.iloc[5]=1
numenta_time='3D'
true
results = evaluating(true=true,prediction=prediction,numenta_time=numenta_time,metric='average_time')
```
From here we can see that we really **correctly detect** the one true changepoint given our detection window (which extends 3 days before the changepoint).
**If there are no true CPs for a specific dataset**:
```
true = pd.Series(0,pd.date_range('2020-01-01','2020-01-07',freq='D'))
results = evaluating(true=true,prediction=prediction,numenta_time=numenta_time,metric='average_time')
```
#### Variant 2. True as list of pd.Timestamp
```
true = [pd.Timestamp('2020-01-06')]
numenta_time='3D'
true
results = evaluating(true=true,prediction=prediction,numenta_time=numenta_time,metric='average_time')
```
The same result
**If there are no true CPs for a specific dataset**:
```
true = []
results = evaluating(true=true,prediction=prediction,numenta_time=numenta_time,metric='average_time')
```
#### Variant 3. True as a list of lists with the left and right pd.Timestamp boundaries of the window
```
true = [[pd.Timestamp('2020-01-03'),pd.Timestamp('2020-01-06')]]
numenta_time='3D'
true
results = evaluating(true=true,prediction=prediction,numenta_time=numenta_time,metric='average_time')
```
The same result
**If there are no true CPs for a specific dataset**:
```
true = [[]]
results = evaluating(true=true,prediction=prediction,numenta_time=numenta_time,metric='average_time')
```
#### Variants 4, 5, 6. Many datasets
```
# if we have 2 the same datasets
prediction = [prediction,prediction]
numenta_time='3D'
true = pd.Series(0,pd.date_range('2020-01-01','2020-01-07',freq='D'))
true.iloc[5]=1
true = [true,true]
results = evaluating(true=true,prediction=prediction,numenta_time=numenta_time,metric='average_time')
true = [pd.Timestamp('2020-01-06')]
true = [true,true]
results = evaluating(true=true,prediction=prediction,numenta_time=numenta_time,metric='average_time')
true = [[pd.Timestamp('2020-01-03'),pd.Timestamp('2020-01-06')]]
true = [true,true]
results = evaluating(true=true,prediction=prediction,numenta_time=numenta_time,metric='average_time')
```
### Different situations with changepoint detection problem
Assigning the characteristics of a window, as well as selecting one point within it, must differ depending on the business task, and TSAD tries to foresee every case. Examples of cases from technical diagnostics:
* We have clear anomalies that lead to failure. From history, we have objective information about the times when anomalies arose (true changepoints), and we understand that any predicted anomaly earlier than the true changepoint is a false positive.
* We have a failure of a system. From history, we have objective information about the time of failure, but we do not have any information about the anomaly itself. Thus a predicted anomaly earlier than the true changepoint should count as a true positive.
* We know the time of the anomaly only approximately.
* Many other cases.
To meet these business objectives, ```evaluating``` makes it possible to **adjust the following parameters**:
#### ```anomaly_window_destenation``` for input variants 1 and 2 of the true variable

#### ```clear_anomalies_mode```

#### ```intersection_mode``` for solving a problem of intersection of detection windows

```
#IMPORT ALL LIBRARIES
#PANDAS
import pandas as pd
#POSTGRESQL
from sqlalchemy import create_engine
import psycopg2
#CHARTING
from matplotlib import pyplot as plt
from matplotlib import style
#BASE PATH
import os
import io
#PDF
from fpdf import FPDF
#CHART TO BASE64
import base64
#EXCEL
import xlsxwriter
#FUNCTION TO UPLOAD DATA FROM CSV TO POSTGRESQL
def uploadToPSQL(columns, table, filePath, engine):
    #READ THE CSV
    df = pd.read_csv(
        os.path.abspath(filePath),
        names=columns,
        keep_default_na=False
    )
    #FILL ANY EMPTY FIELDS HERE
    df = df.fillna('')
    #DROP THE COLUMNS THAT ARE NOT USED
    del df['kategori']
    del df['jenis']
    del df['pengiriman']
    del df['satuan']
    #MOVE THE DATA FROM THE CSV INTO POSTGRESQL
    df.to_sql(
        table,
        engine,
        if_exists='replace'
    )
    #IF THE UPLOADED DATA IS NOT EMPTY, RETURN TRUE; OTHERWISE FALSE
    if len(df) == 0:
        return False
    else:
        return True
#FUNCTION TO BUILD THE CHARTS; DATA IS FETCHED FROM THE DATABASE ORDERED BY DATE, WITH A LIMIT
#THIS FUNCTION ALSO CALLS MAKEEXCEL AND MAKEPDF
def makeChart(host, username, password, db, port, table, judul, columns, filePath, name, subjudul, limit, negara, basePath):
    #TEST THE DATABASE CONNECTION
    try:
        #CONNECT TO THE DATABASE
        connection = psycopg2.connect(user=username,password=password,host=host,port=port,database=db)
        cursor = connection.cursor()
        #FETCH THE DATA FROM THE TABLE DEFINED BELOW, ORDERED BY DATE
        #A LIMIT CAN BE ADDED SO THE QUERY DOES NOT FETCH TOO MUCH DATA
        postgreSQL_select_Query = "SELECT * FROM "+table+" ORDER BY tanggal ASC LIMIT " + str(limit)
        cursor.execute(postgreSQL_select_Query)
        mobile_records = cursor.fetchall()
        uid = []
        lengthx = []
        lengthy = []
        #LOOP OVER THE FETCHED ROWS
        #AND APPEND THEIR VALUES TO THE VARIABLES ABOVE
        for row in mobile_records:
            uid.append(row[0])
            lengthx.append(row[1])
            if row[2] == "":
                lengthy.append(float(0))
            else:
                lengthy.append(float(row[2]))
        #BUILD THE CHARTS
        #bar
        style.use('ggplot')
        fig, ax = plt.subplots()
        #PLOT THE ROW IDS FROM THE DATABASE AGAINST THE VALUES
        ax.bar(uid, lengthy, align='center')
        #CHART TITLE
        ax.set_title(judul)
        ax.set_ylabel('Total')
        ax.set_xlabel('Tanggal')
        ax.set_xticks(uid)
        #USE THE DATES FETCHED FROM THE DATABASE AS TICK LABELS
        ax.set_xticklabels((lengthx))
        b = io.BytesIO()
        #SAVE THE CHART AS PNG
        plt.savefig(b, format='png', bbox_inches="tight")
        #CONVERT THE PNG CHART TO BASE64
        barChart = base64.b64encode(b.getvalue()).decode("utf-8").replace("\n", "")
        #DISPLAY THE CHART
        plt.show()
        #line
        #PLOT THE DATA FROM THE DATABASE
        plt.plot(lengthx, lengthy)
        plt.xlabel('Tanggal')
        plt.ylabel('Total')
        #CHART TITLE
        plt.title(judul)
        plt.grid(True)
        l = io.BytesIO()
        #SAVE THE CHART AS PNG
        plt.savefig(l, format='png', bbox_inches="tight")
        #CONVERT THE PNG CHART TO BASE64
        lineChart = base64.b64encode(l.getvalue()).decode("utf-8").replace("\n", "")
        #DISPLAY THE CHART
        plt.show()
        #pie
        #CHART TITLE
        plt.title(judul)
        #PLOT THE DATA FROM THE DATABASE
        plt.pie(lengthy, labels=lengthx, autopct='%1.1f%%',
                shadow=True, startangle=180)
        plt.axis('equal')
        p = io.BytesIO()
        #SAVE THE CHART AS PNG
        plt.savefig(p, format='png', bbox_inches="tight")
        #CONVERT THE PNG CHART TO BASE64
        pieChart = base64.b64encode(p.getvalue()).decode("utf-8").replace("\n", "")
        #DISPLAY THE CHART
        plt.show()
        #READ THE CSV AGAIN; IT IS USED AS THE TABLE HEADER FOR THE EXCEL AND PDF FILES
        header = pd.read_csv(
            os.path.abspath(filePath),
            names=columns,
            keep_default_na=False
        )
        #DROP THE COLUMNS THAT ARE NOT USED
        header = header.fillna('')
        del header['tanggal']
        del header['total']
        #CALL THE EXCEL FUNCTION
        makeExcel(mobile_records, header, name, limit, basePath)
        #CALL THE PDF FUNCTION
        makePDF(mobile_records, header, judul, barChart, lineChart, pieChart, name, subjudul, limit, basePath)
    #IF THE DATABASE CONNECTION FAILS, PRINT THE ERROR HERE
    except (Exception, psycopg2.Error) as error :
        print (error)
    #CLOSE THE CONNECTION
    finally:
        if(connection):
            cursor.close()
            connection.close()
#THE MAKEEXCEL FUNCTION TURNS DATA FROM THE DATABASE INTO AN EXCEL TABLE (FORMAT F2)
#THE PLUGIN USED IS XLSXWRITER
def makeExcel(datarow, dataheader, name, limit, basePath):
    #CREATE THE EXCEL FILE
    workbook = xlsxwriter.Workbook(basePath+'jupyter/BLOOMBERG/SektorEksternal/excel/'+name+'.xlsx')
    #ADD A WORKSHEET TO THE FILE
    worksheet = workbook.add_worksheet('sheet1')
    #FORMATS: ADD BORDERS AND MAKE THE FONT BOLD
    row1 = workbook.add_format({'border': 2, 'bold': 1})
    row2 = workbook.add_format({'border': 2})
    #TURN THE DATA INTO LISTS
    data=list(datarow)
    isihead=list(dataheader.values)
    header = []
    body = []
    #LOOP OVER THE ROWS AND COLLECT THE VALUES INTO THE VARIABLES ABOVE
    for rowhead in dataheader:
        header.append(str(rowhead))
    for rowhead2 in datarow:
        header.append(str(rowhead2[1]))
    for rowbody in isihead[1]:
        body.append(str(rowbody))
    for rowbody2 in data:
        body.append(str(rowbody2[2]))
    #WRITE THE COLLECTED VALUES INTO THE EXCEL ROWS AND COLUMNS
    for col_num, data in enumerate(header):
        worksheet.write(0, col_num, data, row1)
    for col_num, data in enumerate(body):
        worksheet.write(1, col_num, data, row2)
    #CLOSE THE EXCEL FILE
    workbook.close()
#FUNCTION TO BUILD A PDF FROM THE DATABASE DATA (TABLE FORMAT F2)
#THE PLUGIN USED IS FPDF
def makePDF(datarow, dataheader, judul, bar, line, pie, name, subjudul, lengthPDF, basePath):
    #SET THE PAPER SIZE; HERE A4 IN LANDSCAPE ORIENTATION
    pdf = FPDF('L', 'mm', [210,297])
    #ADD A PAGE TO THE PDF
    pdf.add_page()
    #SET PADDING AND FONT SIZE
    pdf.set_font('helvetica', 'B', 20.0)
    pdf.set_xy(145.0, 15.0)
    #WRITE THE TITLE INTO THE PDF
    pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=judul, border=0)
    #SET FONT SIZE AND PADDING
    pdf.set_font('arial', '', 14.0)
    pdf.set_xy(145.0, 25.0)
    #WRITE THE SUBTITLE INTO THE PDF
    pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=subjudul, border=0)
    #DRAW A LINE UNDER THE SUBTITLE
    pdf.line(10.0, 30.0, 287.0, 30.0)
    pdf.set_font('times', '', 10.0)
    pdf.set_xy(17.0, 37.0)
    #SET FONT SIZE AND PADDING
    pdf.set_font('Times','',10.0)
    #GET THE PDF HEADER DATA DEFINED EARLIER
    datahead=list(dataheader.values)
    pdf.set_font('Times','B',12.0)
    pdf.ln(0.5)
    th1 = pdf.font_size
    #BUILD A TABLE IN THE PDF AND SHOW THE DATA PASSED IN
    pdf.cell(100, 2*th1, "Kategori", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][0], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Jenis", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][1], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Pengiriman", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][2], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Satuan", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][3], border=1, align='C')
    pdf.ln(2*th1)
    #SET PADDING
    pdf.set_xy(17.0, 75.0)
    #SET FONT SIZE AND PADDING
    pdf.set_font('Times','B',11.0)
    data=list(datarow)
    epw = pdf.w - 2*pdf.l_margin
    col_width = epw/(lengthPDF+1)
    #SET PADDING
    pdf.ln(0.5)
    th = pdf.font_size
    #WRITE THE HEADER DATA PASSED IN ABOVE INTO THE PDF
    pdf.cell(50, 2*th, str("Negara"), border=1, align='C')
    for row in data:
        pdf.cell(40, 2*th, str(row[1]), border=1, align='C')
    pdf.ln(2*th)
    #WRITE THE BODY DATA PASSED IN ABOVE INTO THE PDF
    pdf.set_font('Times','B',10.0)
    pdf.set_font('Arial','',9)
    #NOTE: negara IS TAKEN FROM THE MODULE-LEVEL VARIABLE DEFINED BELOW
    pdf.cell(50, 2*th, negara, border=1, align='C')
    for row in data:
        pdf.cell(40, 2*th, str(row[2]), border=1, align='C')
    pdf.ln(2*th)
    #DECODE THE CHART DATA TO PNG AND SAVE IT IN THE DIRECTORIES BELOW
    #BAR CHART
    bardata = base64.b64decode(bar)
    barname = basePath+'jupyter/BLOOMBERG/SektorEksternal/img/'+name+'-bar.png'
    with open(barname, 'wb') as f:
        f.write(bardata)
    #LINE CHART
    linedata = base64.b64decode(line)
    linename = basePath+'jupyter/BLOOMBERG/SektorEksternal/img/'+name+'-line.png'
    with open(linename, 'wb') as f:
        f.write(linedata)
    #PIE CHART
    piedata = base64.b64decode(pie)
    piename = basePath+'jupyter/BLOOMBERG/SektorEksternal/img/'+name+'-pie.png'
    with open(piename, 'wb') as f:
        f.write(piedata)
    #SET FONT SIZE AND PADDING
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    widthcol = col/3
    #PLACE THE IMAGES SAVED IN THE DIRECTORIES ABOVE
    pdf.image(barname, link='', type='',x=8, y=100, w=widthcol)
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    pdf.image(linename, link='', type='',x=103, y=100, w=widthcol)
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    pdf.image(piename, link='', type='',x=195, y=100, w=widthcol)
    pdf.ln(2*th)
    #WRITE THE PDF FILE
    pdf.output(basePath+'jupyter/BLOOMBERG/SektorEksternal/pdf/'+name+'.pdf', 'F')
#THIS IS WHERE THE VARIABLES ARE DEFINED BEFORE BEING PASSED TO THE FUNCTIONS
#FIRST CALL UPLOADTOPSQL; IF IT SUCCEEDS, CALL MAKECHART,
#WHICH IN TURN CALLS MAKEEXCEL AND MAKEPDF
#DEFINE THE COLUMNS BASED ON THE CSV FIELDS
columns = [
    "kategori",
    "jenis",
    "tanggal",
    "total",
    "pengiriman",
    "satuan",
]
#FILE NAME
name = "SektorEksternal2_2"
#DATABASE CONNECTION VARIABLES
host = "localhost"
username = "postgres"
password = "1234567890"
port = "5432"
database = "bloomberg_sektoreksternal"
table = name.lower()
#TITLE FOR THE PDF AND EXCEL
judul = "Data Sektor Eksternal"
subjudul = "Badan Perencanaan Pembangunan Nasional"
#ROW LIMIT FOR THE DATABASE SELECT
limitdata = int(8)
#COUNTRY NAME SHOWN IN THE EXCEL AND PDF
negara = "Indonesia"
#BASE PATH DIRECTORY
basePath = 'C:/Users/ASUS/Documents/bappenas/'
#CSV FILE
filePath = basePath+ 'data mentah/BLOOMBERG/SektorEksternal/' +name+'.csv'
#DATABASE CONNECTION
engine = create_engine('postgresql://'+username+':'+password+'@'+host+':'+port+'/'+database)
#CALL THE UPLOAD-TO-PSQL FUNCTION
checkUpload = uploadToPSQL(columns, table, filePath, engine)
#CHECK THE UPLOAD RESULT; IF IT SUCCEEDED BUILD THE CHARTS, OTHERWISE PRINT AN ERROR
if checkUpload == True:
    makeChart(host, username, password, database, port, table, judul, columns, filePath, name, subjudul, limitdata, negara, basePath)
else:
    print("Error When Upload CSV")
```
# Functional Programming
Python provides several functions which enable a functional approach to programming.
These functions are all convenience features in that they can be written in Python fairly easily.
Functional programming is all about expressions.
We may say that functional programming is expression-oriented programming.
The expression-oriented functions Python provides are:
1. lambda,
2. map(aFunction, aSequence),
3. filter(aFunction, aSequence),
4. reduce(aFunction, aSequence),
5. list comprehension.
## Lambda
A lambda function is a small anonymous function.
A lambda function can take any number of arguments, but can only have one expression.
### Syntax
```
lambda arguments : expression
```
The expression is executed and the result is returned:
```
x = lambda a : a + 10
print(x(5))
x = lambda a, b : a * b
print(x(5, 6))
x = lambda a, b, c : a + b + c
print(x(5, 6, 2))
```
### Why Use Lambda Functions?
The power of lambda is better shown when you use them as an anonymous function inside another function.
Say you have a function definition that takes one argument, and that argument will be multiplied with an unknown number:
```
def myfunc(n):
    return lambda a : a * n
mydoubler = myfunc(2)
print(mydoubler(11))
```
Or, use the same function definition to make a function that always triples the number you send in:
```
def myfunc(n):
    return lambda a : a * n
mytripler = myfunc(3)
print(mytripler(11))
```
## Map
One of the common things we do with list and other sequences is applying an operation to each item and collect the result.
For example, updating all the items in a list can be done easily with a for loop:
```
items = [1, 2, 3, 4, 5]
squared = []
for x in items:
    squared.append(x ** 2)
print(squared)
```
Since this is such a common operation, Python has a built-in feature that does most of the work for us.
The map(aFunction, aSequence) function applies a passed-in function to each item in an iterable object and returns a list containing all the function call results.
```
def sqr(x): return x ** 2
squared = list(map(sqr, items))
print(squared)
```
We passed in a user-defined function applied to each item in the list. map calls sqr on each list item and collects all the return values into a new list.
Because map expects a function to be passed in, it also happens to be one of the places where lambda routinely appears:
```
squared = list(map((lambda x: x **2), items))
print(squared)
```
Since it's a built-in, map is always available and always works the same way.
It also has some performance benefit because it is usually faster than a manually coded for loop.
## Filter
As the name suggests, filter extracts each element in the sequence for which the function returns True.
Like map, it applies a function across a sequence or other iterable: filter keeps items based on a test function, while reduce (covered next) applies a function to pairs of an item and a running result.
Because they return iterators, map and filter both require list calls to display all their results in Python 3.
As an example, the following filter call picks out items in a sequence that are less than zero:
```
l = list(range(-5,5))
print(l)
f = list(filter((lambda x: x < 0), l))
print(f)
result = []
for x in l:
    if x < 0:
        result.append(x)
print(result)
```
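As noted above, in Python 3 map and filter return lazy iterators rather than lists; a small sketch of what that implies:

```python
items = [1, 2, 3, 4, 5]

# map returns an iterator, not a list, in Python 3
m = map(lambda x: x ** 2, items)
print(type(m))   # <class 'map'>

# materialize it once...
squares = list(m)
print(squares)   # [1, 4, 9, 16, 25]

# ...and the iterator is now exhausted
print(list(m))   # []
```

The same holds for filter: once you have iterated over it, a second `list()` call yields an empty list, so convert it once and keep the result.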
Here is another use case for **filter**: finding the intersection of two lists:
```
a = [1,2,3,5,7,9]
b = [2,3,5,6,7,8]
f = list(filter(lambda x: x in a, b)) # prints out [2, 3, 5, 7]
print(f)
```
## Reduce
The reduce function is a little less obvious in its intent.
It reduces a list to a single value by combining elements via a supplied function.
In Python 3, reduce lives in the **functools** module.
While map is the simplest of Python's functional built-ins, reduce is more complex: it accepts an iterable to process, but it is not an iterator itself.
It returns a single result:
```
from functools import reduce
r = reduce( (lambda x, y: x + y), [1, 2, 3, 4])
print(r)
```
At each step, **reduce** passes the current running result, along with the next item from the list, to the passed-in **lambda** function.
By default, the first item in the sequence initializes the starting value.
Here's the for loop version of the same pattern, with the combining operation (here, multiplication) hardcoded inside the loop:
```
L = [1, 2, 3, 4]
result = L[0]
for x in L[1:]:
    result = result * x
print(result)
```
We can concatenate a list of strings to make a sentence, using Dijkstra's famous quote on bugs (the optional third argument to reduce supplies an initial value, here 'banana '):
```
L = ['Testing ', 'shows ', 'the ', 'presence', ', ','not ', 'the ', 'absence ', 'of ', 'bugs']
r = reduce( (lambda x,y:x+y), L, 'banana ')
print(r)
```
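List comprehensions (the fifth tool in the list at the top of this section) can express most map and filter calls directly; a quick comparison:

```python
items = [1, 2, 3, 4, 5]

# map equivalent: apply an expression to every item
squared = [x ** 2 for x in items]
print(squared)        # [1, 4, 9, 16, 25]

# filter equivalent: keep only items passing a test
negatives = [x for x in range(-5, 5) if x < 0]
print(negatives)      # [-5, -4, -3, -2, -1]

# map + filter combined in a single comprehension
squared_evens = [x ** 2 for x in items if x % 2 == 0]
print(squared_evens)  # [4, 16]
```

Comprehensions produce a list directly, with no `list()` wrapper and no lambda needed, which is why many Python programmers reach for them first.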
## Further resources
1. [W3 Python Lambda](https://www.w3schools.com/python/python_lambda.asp)
2. [Lambda Expressions](https://docs.python.org/3.7/tutorial/controlflow.html#lambda-expressions)
3. [Map](https://docs.python.org/3.7/library/functions.html#map)
4. [Filter](https://docs.python.org/3.7/library/functions.html#filter)
5. [Functional Programming HOWTO](https://docs.python.org/3.7/howto/functional.html)
# Styling
The default styling for plots works pretty well; however, sometimes you may need to change things. The following shows how to change the style of your plots and use different types of lines and markers.
This is the default plot we will start with:
```
from astropy.time import Time
import matplotlib.pyplot as plt
plt.ion()
import poliastro.plotting as plotting
from poliastro.bodies import Earth, Mars, Jupiter, Sun
from poliastro.twobody import Orbit
epoch = Time("2018-08-17 12:05:50", scale="tdb")
plotter = plotting.OrbitPlotter()
plotter.plot(Orbit.from_body_ephem(Earth, epoch), label='Earth')
plotter.plot(Orbit.from_body_ephem(Mars, epoch), label='Mars')
plotter.plot(Orbit.from_body_ephem(Jupiter, epoch), label='Jupiter');
epoch = Time("2018-08-17 12:05:50", scale="tdb")
plotter = plotting.OrbitPlotter()
earthPlots = plotter.plot(Orbit.from_body_ephem(Earth, epoch), label='Earth')
earthPlots[0].set_linestyle('-') # solid line
earthPlots[0].set_linewidth(0.5)
earthPlots[1].set_marker('H') # Hexagon
earthPlots[1].set_markersize(15)
marsPlots = plotter.plot(Orbit.from_body_ephem(Mars, epoch), label='Mars')
jupiterPlots = plotter.plot(Orbit.from_body_ephem(Jupiter, epoch), label='Jupiter')
```
Here we get hold of the list of lines returned by the `OrbitPlotter.plot` method. The first element is the orbit line; the second is the current-position marker. With these matplotlib line objects we can start changing the style: first we make the orbit line solid but thin, then we change the current-position marker to a large hexagon.
More details of the style options for the markers can be found here: https://matplotlib.org/2.0.2/api/markers_api.html#module-matplotlib.markers
More details of the style options for lines can be found here: https://matplotlib.org/2.0.2/api/lines_api.html However, make sure that you use the set methods rather than just changing the attributes, as the methods will force a re-draw of the plot.
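The set methods work on any matplotlib `Line2D`, not just the lines returned by poliastro; a minimal standalone sketch (matplotlib only, using the non-interactive Agg backend so it runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([0, 1, 2], [0, 1, 4])

# use the set_* methods: they mark the artist stale so the figure re-draws
line.set_linestyle('--')
line.set_linewidth(0.5)
line.set_marker('H')       # hexagon marker
line.set_markersize(15)

print(line.get_linestyle(), line.get_marker())  # -- H
```

The same `set_linestyle`, `set_marker`, etc. calls apply unchanged to the line objects in the poliastro examples above.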
Next we will make some changes to the other two orbits.
```
epoch = Time("2018-08-17 12:05:50", scale="tdb")
plotter = plotting.OrbitPlotter()
earthPlots = plotter.plot(Orbit.from_body_ephem(Earth, epoch), label='Earth')
earthPlots[0].set_linestyle('-') # solid line
earthPlots[0].set_linewidth(0.5)
earthPlots[1].set_marker('H') # Hexagon
earthPlots[1].set_markersize(15)
marsPlots = plotter.plot(Orbit.from_body_ephem(Mars, epoch), label='Mars')
marsPlots[0].set_dashes([0,1,0,1,1,0])
marsPlots[0].set_linewidth(2)
marsPlots[1].set_marker('D') # Diamond
marsPlots[1].set_markersize(15)
marsPlots[1].set_fillstyle('none')
marsPlots[1].set_markeredgewidth(1) # make sure this is set if you use fillstyle 'none'
jupiterPlots = plotter.plot(Orbit.from_body_ephem(Jupiter, epoch), label='Jupiter')
jupiterPlots[0].set_linestyle('') # No line
jupiterPlots[1].set_marker('*') # star
jupiterPlots[1].set_markersize(15)
```
You can also change the style of the plot through the underlying matplotlib axes. Here we create the axes ourselves and pass them to `OrbitPlotter`.
See the following example, which adds a grid and a title and makes the background transparent. To make the changes clearer, it goes back to the initial example.
```
epoch = Time("2018-08-17 12:05:50", scale="tdb")
fig, ax = plt.subplots()
ax.grid(True)
ax.set_title("Earth, Mars, and Jupiter")
ax.set_facecolor('None')
plotter = plotting.OrbitPlotter(ax)
plotter.plot(Orbit.from_body_ephem(Earth, epoch), label='Earth')
plotter.plot(Orbit.from_body_ephem(Mars, epoch), label='Mars')
plotter.plot(Orbit.from_body_ephem(Jupiter, epoch), label='Jupiter');
```
Let's say we are planning to move to Ames, Iowa, with a \$120 000 budget to buy a house. We have no idea about the real estate market in the city. However, the City Hall owns a precious piece of information : the House Price Dataset. It contains about 1500 lines of data about houses in the city, with attributes like Sale Price, Living Area, Garage Type, etc. The bad news is we cannot access the entire database; it is too expensive. The good news is the City Hall proposes some samples : free for up to 25 observations, with a small fee for up to 100 observations. So we'll make use of this great offer to learn a bit more about the real estate market, and understand what we can get for our money. And guess what? This will be a good way to go through statistical tests. Note that we won't go too much into the theory here. This notebook's main goal is to give an overview of which statistical test to use depending on the situation faced, and how to use it.
```
import pandas as pd
pd.set_option('max_colwidth', 200)
pd.set_option('display.float_format', lambda x: '%.3f' % x)
from statsmodels.stats.weightstats import *
from scipy import stats  # gives us stats.ttest_1samp, stats.f_oneway, stats.chi2_contingency below
#this is the entire dataset, but we'll only be able to use to extract samples from it.
FILE_PATH = 'data/house-prices-advanced-regression-techniques/train.csv'
city_hall_dataset = pd.read_csv(FILE_PATH)
```
# Introduction
# Theory
What we will be trying to do in this tutorial is make assumptions on the whole population of houses based only on the samples at our disposal.<br>
This is what statistical tests do, but one must know a few principles before using them.
## The process
The basic process of statistical tests is the following :
- Stating a Null Hypothesis (most often : "the two values are not different")
- Stating an Alternative Hypothesis (most often : "the two values are different")
- Defining an alpha value, which is a confidence level (most often : 95%). The higher it is, the harder it will be to validate the Alternative Hypothesis, but the more confident we will be if we do validate it.
- Depending on data at disposal, we choose the relevant test (Z-test, T-test, etc... More on that later)
- The test computes a score, which corresponds to a p-value.
- If p-value is below 1-alpha (0.05 if alpha is 95%), we can accept the Alternative Hypothesis (or "reject the Null Hypothesis"). If it is over, we'll have to stick with the Null Hypothesis (or "fail to reject the Null Hypothesis").
There's a built-in function for most statistical tests out there.<br>
Let's also build our own function to summarize all the information.<br>
All tests we will conduct from now on are based on alpha = 95%.
```
def results(p):
    if(p['p_value']<0.05): p['hypothesis_accepted'] = 'alternative'
    if(p['p_value']>=0.05): p['hypothesis_accepted'] = 'null'
    df = pd.DataFrame(p, index=[''])
    cols = ['value1', 'value2', 'score', 'p_value', 'hypothesis_accepted']
    return df[cols]
```
## Two-tailed and One-tailed
Two-tailed tests are used to show two values are just "different".<br>
One-tailed tests are used to show one value is either "larger" or "smaller" than another one.<br><br>
This has an influence on the p-value : in case of one-tail tests, p-value has to be divided by 2.<br>
<br>
Most of the functions we'll use (those from the statsmodels weightstats module) do that by themselves if we input the right information in the parameters.<br>
We'll have to do it on our own with functions from the scipy module.
## Types of tests
There are different types of tests, here are the ones we will cover :
- T-tests. Used for small sample sizes (n < 30), and when the population's standard deviation is unknown.
- Z-tests. Used for large sample sizes (n >= 30), and when the population's standard deviation is known.
- F-tests. Used for comparing values of more than two variables.
- Chi-square. Used for comparing categorical data.
## Normal distribution
Also, most tests - parametric tests - require a population that is normally distributed.<br>
It is not the case for SalePrice - which we'll use for most tests - but we can fix this by log-transforming the variable.<br>
Note that to go back to our original scale and understand values vs. our \$120 000, we'll need to exponentiate values back.
```
import numpy as np
city_hall_dataset['SalePrice'] = np.log1p(city_hall_dataset['SalePrice'])
logged_budget = np.log1p(120000) #logged $120 000 is 11.695
logged_budget
```
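Because the transform uses `np.log1p`, the inverse is `np.expm1`; a quick round-trip sanity check:

```python
import numpy as np

budget = 120000
logged_budget = np.log1p(budget)     # log(1 + 120000), about 11.695
recovered = np.expm1(logged_budget)  # exp(x) - 1 undoes log1p exactly

print(round(logged_budget, 3))  # 11.695
print(round(recovered, 2))      # 120000.0
```

Whenever we report a price on the log scale below, applying `np.expm1` brings it back to dollars.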
# Practice
So let's say we are ready to dive into the data, but not ready to pay the small fee for the large sample size.<br>
We'll be starting with the free samples of 25 observations.
```
sample = city_hall_dataset.sample(n=25)
p = {} #dictionary we'll use to store information and results
```
## One sample T-test | Two-tailed | Means
So the first question we want to ask is : how are our \$120 000 situated vs. the average Ames house SalePrice? <br>
In other words, is 120 000 (11.695 logged) any different from the mean SalePrice of the population?<br>
To know that from a 25 observations sample, we need to use a One Sample T-Test.
<b>Null Hypothesis</b> : Mean SalePrice = 11.695 <br>
<b>Alternative Hypothesis</b> : Mean SalePrice ≠ 11.695 <br>
```
p['value1'], p['value2'] = sample['SalePrice'].mean(), logged_budget
p['score'], p['p_value'] = stats.ttest_1samp(sample['SalePrice'], popmean=logged_budget)
results(p)
```
So we know our initial budget is significantly different from the mean SalePrice.<br>
From the table above, it unfortunately seems lower.<br>
## One sample T-test | One-tailed | Means
Let's make sure our budget is lower by running a one-tailed test.<br>
Question now is : is 120 000 (11.695 logged) lower than the mean SalePrice of the population?<br>
<b>Null Hypothesis</b> : Mean SalePrice <= 11.695 <br>
<b>Alternative Hypothesis</b> : Mean SalePrice > 11.695 <br>
```
p['value1'], p['value2'] = sample['SalePrice'].mean(), logged_budget
p['score'], p['p_value'] = stats.ttest_1samp(sample['SalePrice'], popmean=logged_budget)
p['p_value'] = p['p_value']/2 #one-tailed test (with scipy function), we need to divide p-value by 2 ourselves
results(p)
```
Unfortunately it is!<br>
At the 95% confidence level, we conclude that our starting budget won't let us buy a house at the average Ames price.
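For intuition, the one-sample t statistic used above can be computed by hand: t = (sample mean - mu0) / (s / sqrt(n)), with s the sample standard deviation. A sketch on a made-up toy sample (not the housing data):

```python
import math

sample = [10, 12, 9, 11, 13]  # toy data, not the housing sample
mu0 = 10                      # hypothesized population mean

n = len(sample)
mean = sum(sample) / n
# sample standard deviation (ddof = 1)
s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
t = (mean - mu0) / (s / math.sqrt(n))

print(round(t, 4))  # 1.4142, i.e. sqrt(2) for this toy sample
```

`stats.ttest_1samp` computes exactly this statistic and then looks up the p-value in the t distribution with n - 1 degrees of freedom.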
## Two sample T-test | Two-tailed | Means
Now that our expectations are lowered, we realize something important :<br>
The entire dataset probably contains some big houses fitted for entire families as well as small houses for fewer inhabitants.<br>
Prices are probably really different in-between the two types.<br>
And we are moving in alone, so we probably don't need that big of a house.<br><br>
What if we could ask the City Hall to give us a sample for big houses, and a sample for smaller houses?<br>
We first could see if there is a significant difference in prices.<br>
And then see how our \$120 000 are doing against the small houses average SalePrice.<br><br>
We do ask the City Hall, and because they understand it is also for the sake of this tutorial, they accept.<br>
They say they'll split the dataset in two, based on the surface area of the houses.<br>
They will give us a sample from the top 50\% houses in terms of surface, and another sample from the bottom 50\%.
```
smaller_houses = city_hall_dataset.sort_values('GrLivArea')[:730].sample(n=25)
larger_houses = city_hall_dataset.sort_values('GrLivArea')[730:].sample(n=25)
```
Now we first want to know if the two samples, extracted from two different populations, have significant differences in their average SalePrice.
<b>Null Hypothesis</b> : SalePrice of smaller houses = SalePrice of larger houses <br>
<b>Alternative Hypothesis</b> : SalePrice of smaller houses ≠ SalePrice of larger houses <br>
```
p['value1'], p['value2'] = smaller_houses['SalePrice'].mean(), larger_houses['SalePrice'].mean()
p['score'], p['p_value'], p['df'] = ttest_ind(smaller_houses['SalePrice'], larger_houses['SalePrice'])
results(p)
```
As expected, the two samples show some significant differences in SalePrice.
## Two sample T-test | One-tailed | Means
Obviously, larger houses have a higher SalePrice.<br>
Let's prove this with a one-tailed test.
<b>Null Hypothesis</b> : SalePrice of smaller houses >= SalePrice of larger houses <br>
<b>Alternative Hypothesis</b> : SalePrice of smaller houses < SalePrice of larger houses <br>
```
p['value1'], p['value2'] = smaller_houses['SalePrice'].mean(), larger_houses['SalePrice'].mean()
p['score'], p['p_value'], p['df'] = ttest_ind(smaller_houses['SalePrice'], larger_houses['SalePrice'], alternative='smaller')
results(p)
```
Still as expected, SalePrice is significantly higher for larger houses.
## Two sample Z-test | One-tailed | Means
Now that the City Hall has already split the population in two, why not ask them for larger samples?<br>
We'll pay a fee but that's all right, this is fake money.
```
smaller_houses = city_hall_dataset.sort_values('GrLivArea')[:730].sample(n=100, random_state=1)
larger_houses = city_hall_dataset.sort_values('GrLivArea')[730:].sample(n=100, random_state=1)
```
<b>Null Hypothesis</b> : SalePrice of smaller houses >= SalePrice of larger houses <br>
<b>Alternative Hypothesis</b> : SalePrice of smaller houses < SalePrice of larger houses <br>
```
p['value1'], p['value2'] = smaller_houses['SalePrice'].mean(), larger_houses['SalePrice'].mean()
p['score'], p['p_value'] = ztest(smaller_houses['SalePrice'], larger_houses['SalePrice'], alternative='smaller')
results(p)
```
Higher sample sizes show the same result : SalePrice is significantly higher for larger houses.
## Two sample Z-test | One-tailed | Proportions
Instead of means, we can also run tests on proportions.<br>
Is the proportion of houses over \$120 000 higher in the larger-houses population than in the smaller-houses population?
<b>Null Hypothesis</b> : Proportion of smaller houses with SalePrice over 11.695 >= Proportion of larger houses with SalePrice over 11.695 <br>
<b>Alternative Hypothesis</b> : Proportion of smaller houses with SalePrice over 11.695 < Proportion of larger houses with SalePrice over 11.695 <br>
```
from statsmodels.stats.proportion import *
A1 = len(smaller_houses[smaller_houses.SalePrice>logged_budget])
B1 = len(smaller_houses)
A2 = len(larger_houses[larger_houses.SalePrice>logged_budget])
B2 = len(larger_houses)
p['value1'], p['value2'] = A1/B1, A2/B2
p['score'], p['p_value'] = proportions_ztest([A1, A2], [B1, B2], alternative='smaller')
results(p)
```
Logically, the test shows that the larger houses population has a higher ratio of houses sold over \\$120 000 vs. the smaller houses population.
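For intuition, the two-proportion z statistic can be sketched by hand: with pooled proportion p = (x1 + x2)/(n1 + n2), z = (p1 - p2) / sqrt(p(1 - p)(1/n1 + 1/n2)). A hand computation with made-up counts (not the samples above):

```python
import math

# hypothetical counts: successes and sample sizes for two groups
x1, n1 = 20, 100   # e.g. smaller houses over budget
x2, n2 = 40, 100   # e.g. larger houses over budget

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)   # pooled proportion under the null hypothesis
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(round(z, 3))  # -3.086
```

A strongly negative z like this one is what drives the small p-value `proportions_ztest` reports when the first proportion is well below the second.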
## One sample Z-test | One-tailed | Means
So now let's see how our \$120 000 (11.695 logged) are doing against smaller houses only, based on the 100-observation sample.
<b>Null Hypothesis</b> : Mean SalePrice of smaller houses <= 11.695 <br>
<b>Alternative Hypothesis</b> : Mean SalePrice of smaller houses > 11.695 <br>
```
p['value1'], p['value2'] = smaller_houses['SalePrice'].mean(), logged_budget
p['score'], p['p_value'] = ztest(smaller_houses['SalePrice'], value=logged_budget, alternative='larger')
results(p)
```
That's quite depressing : our \\$120 000 do not even beat the average price of smaller houses.
## One sample Z-test | One-tailed | Proportions
Our \$120 000 do not seem too far from the average SalePrice of small houses though.<br>
Let's see if at least 25\% of houses have a SalePrice in our budget.
<b>Null Hypothesis</b> : Proportion of smaller houses with SalePrice under 11.695 <= 25% <br>
<b>Alternative Hypothesis</b> : Proportion of smaller houses with SalePrice under 11.695 > 25% <br>
```
from statsmodels.stats.proportion import *
A = len(smaller_houses[smaller_houses.SalePrice<logged_budget])
B = len(smaller_houses)
p['value1'], p['value2'] = A/B, 0.25
p['score'], p['p_value'] = proportions_ztest(A, B, alternative='larger', value=0.25)
results(p)
```
So at least, now we know we can buy a house among at least 25% of the smaller houses.<br>
## F-test (ANOVA)
The House Price Dataset has a MSZoning variable, which identifies the general zoning classification of the house.<br>
For instance, it lets you know if the house is situated in a residential or a commercial zone.<br><br>
We'll therefore try to know if there is a significant difference in SalePrice based on the zoning.<br>
And then know where we will be more likely to live with our budget.<br>
Based on the 100-observation sample of smaller houses, let's first have an overview of mean SalePrice by zone.
```
replacement = {'FV': "Floating Village Residential", 'C (all)': "Commercial", 'RH': "Residential High Density",
'RL': "Residential Low Density", 'RM': "Residential Medium Density"}
smaller_houses['MSZoning_FullName'] = smaller_houses['MSZoning'].replace(replacement)
mean_price_by_zone = smaller_houses.groupby('MSZoning_FullName')['SalePrice'].mean().to_frame()
mean_price_by_zone
```
To know if there is a significant difference between these values, we run an ANOVA test (because there are more than 2 values to compare).<br>
The test won't be able to tell us which attributes are different from the others, but at least we'll know if there is a difference or not.
<b>Null Hypothesis</b> : No difference between SalePrice means <br>
<b>Alternative Hypothesis</b> : Difference between SalePrice means <br>
```
sh = smaller_houses.copy()
p['score'], p['p_value'] = stats.f_oneway(sh.loc[sh.MSZoning=='FV', 'SalePrice'],
sh.loc[sh.MSZoning=='C (all)', 'SalePrice'],
sh.loc[sh.MSZoning=='RH', 'SalePrice'],
sh.loc[sh.MSZoning=='RL', 'SalePrice'],
sh.loc[sh.MSZoning=='RM', 'SalePrice'],)
results(p)[['score', 'p_value', 'hypothesis_accepted']]
```
There is a difference between SalePrices based on where the house is located.<br>
Looking at the average SalePrice by zone, Commercial zones and Residential High Density zones seem to be the most affordable for our budget.
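The F statistic that `f_oneway` reports is just the ratio of between-group to within-group variance; a hand computation on a tiny made-up example (not the housing data) shows what's inside:

```python
groups = [[1, 2, 3], [2, 3, 4]]  # two toy groups

all_vals = [x for g in groups for x in g]
grand_mean = sum(all_vals) / len(all_vals)

# between-group sum of squares: each group mean vs the grand mean
ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# within-group sum of squares: each value vs its own group mean
ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

df_between = len(groups) - 1
df_within = len(all_vals) - len(groups)
F = (ssb / df_between) / (ssw / df_within)

print(F)  # 1.5 for this toy data
```

A large F means group means differ by much more than the scatter inside each group would explain, which is when the ANOVA p-value becomes small.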
## Chi-square test
One last question we'll address : can we get a garage? If yes, what type of garage?<br>
If not, then we won't bother saving up for a car, and we'll try to get a house next to public transportation.<br>
The dataset contains a categorical variable, GarageType, that will help us answer the question.<br>
<br>
```
smaller_houses['GarageType'].fillna('No Garage', inplace=True)
smaller_houses['GarageType'].value_counts().to_frame()
```
We know we can get a house in at least the bottom 25% of smaller houses.<br>
We would ideally like to know if the distribution of garage types among these 25% is different than in the three other quarters.<br>
We are now friends with the City Hall, so we can ask them one last favor :<br>
split the smaller-houses population into 4 based on surface, and give us a sample from each quarter.<br>
Because we are working here with categorical data, we'll run a Chi-Square test.
```
city_hall_dataset['GarageType'].fillna('No Garage', inplace=True)
sample1 = city_hall_dataset.sort_values('GrLivArea')[:183].sample(n=100)
sample2 = city_hall_dataset.sort_values('GrLivArea')[183:366].sample(n=100)
sample3 = city_hall_dataset.sort_values('GrLivArea')[366:549].sample(n=100)
sample4 = city_hall_dataset.sort_values('GrLivArea')[549:730].sample(n=100)
dff = pd.concat([
sample1['GarageType'].value_counts().to_frame(),
sample2['GarageType'].value_counts().to_frame(),
sample3['GarageType'].value_counts().to_frame(),
sample4['GarageType'].value_counts().to_frame()],
axis=1, sort=False)
dff.columns = ['Sample1 (smallest houses)', 'Sample2', 'Sample3', 'Sample4 (largest houses)']
dff = dff[:3] # chi-square tests do not work when the table contains zeros; we keep only the most frequent garage types
dff
```
<b>Null Hypothesis</b> : No difference between GarageType distribution <br>
<b>Alternative Hypothesis</b> : Difference between GarageType distribution <br>
```
p['score'], p['p_value'], p['ddf'], p['contingency'] = stats.chi2_contingency(dff)
p.pop('contingency')
results(p)[['score', 'p_value', 'hypothesis_accepted']]
```
Clearly there's a difference in GarageType distribution according to house size.<br>
The sample that concerns us, Sample1, has the highest proportion of "No Garage" and "Detached Garage".<br>
We'll probably have to stick with public transportation.
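Under the hood, `chi2_contingency` builds expected counts from the row and column totals, E = (row total x column total) / grand total, and sums (observed - expected)^2 / expected; a hand computation on a small made-up table:

```python
observed = [[30, 20],
            [10, 40]]  # made-up 2x2 contingency table

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

print(round(chi2, 4))  # 16.6667
```

A large chi-square statistic means the observed cell counts are far from what independent rows and columns would produce, which is what rejecting the null hypothesis captures.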
# Conclusion
We probably won't have a great house, but at least, we learned about statistical tests.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Image
%matplotlib inline
from google.colab import files
uploaded=files.upload()
Image('wheel_encoders.png')
```
In a differential-drive model, we discussed the relation between wheel velocities and the robot's rotational and translational velocity, i.e., $v, \omega$.
If we need to know the current $v, \omega$, how do we measure wheel velocities? We use wheel encoders.
Encoder resolution = $N$
Every $\Delta T$ ms,
+ Encoder reports # ticks wheel moved = $n$
+ Angle moved = $2\pi \frac{n}{N}$
Usage
+ Smallest movement that can be measured is $\frac{2\pi}{N}$
+ Largest speed that can be measured is $\frac{2\pi}{\Delta T}$
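Putting the formulas above in code, a sketch of recovering the wheel angle and angular velocity from a tick count (using example values for $N$ and $\Delta T$):

```python
import math

N = 100      # encoder resolution (ticks per revolution)
dT = 0.1     # sampling period in seconds
n = 10       # ticks counted in the last interval

angle = 2 * math.pi * n / N   # wheel angle moved this interval [rad]
omega = angle / dT            # wheel angular velocity [rad/s]

print(round(angle, 4), round(omega, 4))  # 0.6283 6.2832
```

With $N = 100$ the smallest detectable movement is $2\pi/100 \approx 0.063$ rad, so speeds below one tick per sampling interval are reported as zero.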
Pros:
+ Fairly accurate estimates of linear/ angular velocity
+ Distances and rotations are accurate in short-term
Cons:
+ Vehicle position “drifts” when $v,\omega$ is integrated over longer periods
Say your robot's initial pose is (0.0, 0.0, 0.0). Write a program to accept left and right ticks as input and output the next pose.
Assume a unicycle model and the following vehicle parameters:
+ Wheel radius ($r$) = 2.0m
+ Track-width ($L$) = 4.0m
+ Encoder ticks ($N$) = 100
+ Encoder sampling period ($\Delta T$) = 0.1 s
Test your code with $n_{right} = 10$ and $n_{left} = 6$. Do you get the next pose as $(1.0053, 0.0, 0.1257)$?
```
def pose_from_encoder(left_tick, right_tick):
    r = 2    # wheel radius [m]
    L = 4    # track width [m]
    N = 100  # encoder ticks per revolution
    pose = (0, 0, 0)
    theta_left = (2*np.pi*left_tick)/N
    w_left = theta_left/0.1
    theta_right = (2*np.pi*right_tick)/N
    w_right = theta_right/0.1
    v = (r/2)*(w_left+w_right)
    w = (r/L)*(w_right-w_left)
    print(w)
    # note: unicycle_model is defined in a later cell and must be run first
    pose = unicycle_model(pose, v, w, 0.1)
    return pose

pose = pose_from_encoder(6, 10)
print(pose)
```
### Effect of angular error in pose estimate
```
def unicycle_model(curr_pose, v, w, dt=1):
    # kinematic equations of a unicycle model
    x, y, theta = curr_pose
    x += v*np.cos(theta)*dt
    y += v*np.sin(theta)*dt
    theta += w*dt
    # keep theta bounded between [-pi, pi]
    theta = np.arctan2(np.sin(theta), np.cos(theta))
    return x, y, theta

# simulate straight-line motion:
# the robot moves at a constant speed of 2 m/s for 30 seconds
initial_error = np.linspace(-2, 2, 9)   # initial heading errors in degrees
initial_error = np.deg2rad(initial_error)
print(initial_error)
vc = 2  # m/s
steps = 30
final_x = []
all_w = np.zeros(steps)
all_v = vc*np.ones(steps)
for err in initial_error:
    pose = (0, 0, np.pi/2 + err)
    trajectory = [pose]
    for v, w in zip(all_v, all_w):
        pose = unicycle_model(pose, v, w)
        trajectory.append(pose)
    trajectory = np.array(trajectory)
    final_x += [trajectory[-1, 0]]
print(final_x)
```
If there were no initial angle error, the robot would end up at $x$ = 0.0.
Let's plot a graph of initial_error against the final $x$ position:
```
plt.figure()
#plt.axes().set_aspect('equal','datalim')
plt.plot(initial_error,final_x)
```
### Demonstrate localization drift in wheel encoders
### Synthetic dataset
```
all_v = np.ones(100)
all_w = np.zeros(100)
robot_traj, robot_traj_noisy = [], []
ideal_pose = np.array([0, 0, np.pi/2])
noisy_pose = np.array([0, 0, np.pi/2])
robot_traj.append(ideal_pose)
robot_traj_noisy.append(noisy_pose)
for v, w in zip(all_v, all_w):
    ideal_pose = unicycle_model(ideal_pose, v, w)
    robot_traj.append(ideal_pose)
    # add gaussian noise with std dev of 0.01 to omega
    noise = np.random.normal(0.0, 0.01)
    w = w + noise
    noisy_pose = unicycle_model(noisy_pose, v, w)
    robot_traj_noisy.append(noisy_pose)
robot_traj = np.array(robot_traj)
robot_traj_noisy = np.array(robot_traj_noisy)
plt.figure()
plt.grid()
#plt.axes().set_aspect("equal","datalim")
plt.plot(robot_traj[:,0], robot_traj[:,1], 'k-', label='ideal')
plt.plot(robot_traj_noisy[:,0], robot_traj_noisy[:,1], 'b-', label='noisy')
plt.plot(0, 0, 'r+', ms=10, label='start')
plt.legend()
```
### Real dataset
```
!ls
data_dir = "."
ground_truth = pd.read_csv(data_dir + "/ground_truth.csv")
gt_traj = np.array(ground_truth[['x','y']])
wheel_enc = np.array(pd.read_csv(data_dir + "/wheel_control.csv")[['v','w']])
def unicycle_model1(pose, v, w, dt=0.01):
    x, y, theta = pose
    x += v*np.cos(theta)*dt
    y += v*np.sin(theta)*dt
    theta += w*dt
    theta = np.arctan2(np.sin(theta), np.cos(theta))
    return x, y, theta
pose = np.array(ground_truth[['x','y','theta']])[0] #initial pose
robot_traj = []
robot_traj.append(pose)
for v, w in wheel_enc:
    pose = unicycle_model1(pose, v, w)
    robot_traj.append(pose)
robot_traj = np.array(robot_traj)
end = 10000
plt.figure()
plt.grid()
plt.plot(gt_traj[:,0], gt_traj[:,1], label='Vehicle path - Actual')
plt.plot(robot_traj[:end,0], robot_traj[:end,1], '.', label='Vehicle path - wheel encoder')
plt.legend()
```
# Example 6: Full suite for North America
In the previous example we mapped `Te` and `F` using the default values of crustal density and thickness. Let's examine how we can do better by loading grids of crustal density and thickness and using those values in the model. You will notice that this requires only a few extra steps.
<div class="alert alert-block alert-warning">
<b>Warning:</b> Be careful to ensure that all the data sets have the same grid specifications (i.e., registration, shape, sampling distance, etc.)!!
</div>
```
import numpy as np
import pandas as pd
from plateflex import TopoGrid, BougGrid, RhocGrid, ZcGrid, Project
# Read header (first line) of data set using pandas to get grid parameters
xmin, xmax, ymin, ymax, zmin, zmax, dx, dy, nx, ny = \
pd.read_csv('../data/Topo_NA.xyz', sep='\t', nrows=0).columns[1:].values.astype(float)
# Change type of nx and ny from float to integers
nx = int(nx); ny = int(ny)
# Read topography and Bouguer anomaly data
topodata = pd.read_csv('../data/Topo_NA.xyz', sep='\t', \
skiprows=1, names=['x', 'y', 'z'])['z'].values.reshape(ny,nx)[::-1]
bougdata = pd.read_csv('../data/Bouguer_NA.xyz', sep='\t', \
skiprows=1, names=['x', 'y', 'z'])['z'].values.reshape(ny,nx)[::-1]
# Read crustal density and thickness data
densdata = pd.read_csv('../data/crustal_density_NA.xyz', sep='\t', \
skiprows=1, names=['x', 'y', 'z'])['z'].values.reshape(ny,nx)[::-1]
thickdata = pd.read_csv('../data/crustal_thickness_NA.xyz', sep='\t', \
skiprows=1, names=['x', 'y', 'z'])['z'].values.reshape(ny,nx)[::-1]
```
All those data sets can be imported into their corresponding `Grid` objects:
```
# Load the data as `TopoGrid` and `BougGrid` objects
topo = TopoGrid(topodata, dx, dy)
boug = BougGrid(bougdata, dx, dy)
# Create contours
contours = topo.make_contours(0.)
# Make mask
mask = (topo.data < -500.)
# Load the crustal density and thickness data as `RhocGrid` and `ZcGrid` objects
dens = RhocGrid(densdata, dx, dy)
thick = ZcGrid(thickdata, dx, dy)
# Plot the two new data sets
dens.plot(mask=mask, contours=contours, cmap='Spectral')
thick.plot(mask=mask, contours=contours, cmap='Spectral')
```
Define the project with new `Grid` objects, initialize it and execute!
```
# Define new project with 4 `Grid` objects
project = Project(grids=[topo, boug, dens, thick])
# Initialize project
project.init()
# Calculate wavelet admittance and coherence
project.wlet_admit_coh()
# Make sure we are using 'L2'
project.inverse = 'L2'
# Insert mask
project.mask = mask
# Estimate flexural parameters at every 10 points of the initial grid
project.estimate_grid(10, atype='joint')
```
Now we should have slightly better estimates of `Te` and `F` - let's examine the maps!
```
project.plot_results(mean_Te=True, mask=True, contours=contours, cmap='Spectral')
project.plot_results(std_Te=True, mask=True, contours=contours, cmap='magma_r')
project.plot_results(mean_F=True, mask=True, contours=contours, cmap='Spectral')
project.plot_results(std_F=True, mask=True, contours=contours, cmap='magma_r')
project.plot_results(chi2=True, mask=True, contours=contours, cmap='cividis', vmin=0, vmax=40)
```
Now try again with `atype='coh'` to test the effect of using a single function. After that try one more time with `atype='admit'`.
As an additional exercise, you could try using the Free-air anomaly instead of the Bouguer anomaly data. You would do this by loading the corresponding data into a `FairGrid` object. The rest is identical.
```
project.mean_Te_grid
```
# Lecture 13: Agent Based Models
[Download on GitHub](https://github.com/NumEconCopenhagen/lectures-2022)
[<img src="https://mybinder.org/badge_logo.svg">](https://mybinder.org/v2/gh/NumEconCopenhagen/lectures-2022/master?urlpath=lab/tree/13/ABMs_SMD.ipynb)
1. [The Schelling model](#The-Schelling-model)
2. [Structural estimation and the consumption savings model](#Structural-estimation-and-the-consumption-savings-model)
<a id="The-Schelling-model"></a>
# 1. The Schelling model
The Schelling model demonstrates how racial segregation may occur in equilibrium, even though no citizen has particularly strong racist feelings. The model is from the seminal paper Schelling, T. (1971), "Dynamic Models of Segregation", *Journal of Mathematical Sociology*. It is very intuitive **and** widely cited. It also lends itself nicely to demonstrating Object Oriented Programming, which we will talk more about in the first part today.
* [Thomas Schelling's paper](https://www.uzh.ch/cmsssl/suz/dam/jcr:00000000-68cb-72db-ffff-ffffff8071db/04.02%7B_%7Dschelling%7B_%7D71.pdf)
```
import schelling
import ConsumptionSaving as cs
%load_ext autoreload
%autoreload 2
from types import SimpleNamespace
from copy import copy
from scipy import optimize
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
```
## 1.1 Model background
* A lot of research since the 70s has gone into understanding the forces that drive segregation between ethnic groups.
* This is especially true in the US, where heavy segregation mostly entails poor outcomes for ethnic minorities.
* "White flight" out of urban centres was a common observation in the 70s and 80s, both in the US and Europe.
* The question then quickly becomes, how much does racism factor into this trend?
* The Schelling model shows that even **weak preferences** for having similar type neighbors are enough to drive segregation.
## 1.2 Model outline
**Agent-based** model
* The model is a simple **agent-based** model.
* "Agent-based" means that the model is entirely defined by the behavior of individual agents.
* There are no market clearing/equilibrium conditions imposed.
* The equilibrium must therefore emerge endogenously through a simulation.
**The city**
* The model represents a city or a landscape with $L$ locations and $N<L$ citizens.
* Locations are similarly sized squares, making the landscape like a chess board.
* Each "location square" therefore has 8 neighbouring squares. (Unless it is on the edge of the landscape)
<img src="square.png" alt="Drawing" style="width: 200px;"/>
**Citizens**
* A citizen can only occupy 1 square at a time.
* There are 2 groups of citizens: A and B.
* Both groups of citizens prefer, *to some degree*, having neighbours of the same group.
* A citizen may become **discontent and move** from its current location.
* This happens if the share of same-group neighbours falls below a cut-off threshold $\sigma$
* Consider the example below:
* Citizens of group A are green and group B is blue.
* Citizen $i$ in the middle square has 2 other A neighbours, 4 group B neighbours and 2 empty squares.
* The share of same type neighbours is thus $\frac{2}{2+4} = \frac{1}{3}$
* If $\sigma > \frac{1}{3}$, then citizen $i$ decides to move.
* When moving, citizen $i$ will choose a random location that was observed to be satisfactory in the previous period. (static expectations)
<img src="square_citizens.png" alt="Drawing" style="width: 200px;"/>
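The move rule just described can be sketched in a few lines (a hypothetical helper, not part of the `schelling` module):

```python
def wants_to_move(n_same, n_other, sigma):
    """A citizen moves when the share of same-type neighbours,
    among occupied neighbouring squares, falls below sigma."""
    occupied = n_same + n_other
    if occupied == 0:          # no neighbours at all: content by convention
        return False
    return n_same / occupied < sigma

# Citizen i from the example: 2 same-group, 4 other-group, 2 empty squares
print(wants_to_move(2, 4, sigma=0.4))   # share = 1/3 < 0.4 -> True
```

Note that empty squares drop out of the denominator, exactly as in the $\frac{2}{2+4}$ calculation above.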
**Model question**
* At what level of the preference threshold $\sigma$ will we get segregation?
**Quiz:** What do you think?
## 1.3 Running the model
* The simulation is implemented as an object that can be called.
* The module `schelling` has 3 classes: `Simulation`, `Square` and `Citizen`
* Only need to create a `Simulation` object.
```
# Create simulation object and get default settings
sim = schelling.Simulation()
mp = sim.get_model_params()
display(mp) # The settings displayed here can be modified
```
* `n_dim` is the number of squares along each dimension of the landscape (n_dim$^2$ squares in total).
* `cutoff` is the threshold $\sigma$.
* `T` is the number of simulation iterations.
* `share_pop` is the ratio of citizens to squares.
* `share_A` and `share_B` are the shares of each population group.
Set up the simulation for default parameter values
```
sim.setup_sim(mp)
sim.plot_state()
```
In the initial state of the model, citizens of both groups are randomly scattered over the landscape
* Type A citizens are azure blue
* Type B citizens are marine blue
* Empty squares are beige
Maybe we want a larger landscape with more citizens:
```
# Update parameters and simulation object
mp.n_dim = 100
sim.setup_sim(mp)
sim.plot_state()
```
**Quiz:** [link](https://forms.office.com/Pages/ResponsePage.aspx?id=kX-So6HNlkaviYyfHO_6kckJrnVYqJlJgGf8Jm3FvY9UQ0lKNFo5UTBUN0ZIQVk3TkxEOTVCWjZMViQlQCN0PWcu)
Run the simulation $T$ periods and plot the state of the simulation again
```
sim.run_sim()
sim.plot_state()
```
Obviously, the segregation happened quickly!
```
# See state at various iterations
sim.plot_state(2)
```
We can make a GIF of the development. Needs a stored image for each iteration.
```
sim.make_gif()  # See GIF in folder
```
Let's **experiment** a bit with the cutoff:
```
mp.cutoff = 0.5
sim.setup_sim(mp)
sim.run_sim()
sim.plot_state()
```
## 1.4 Code structure
The code is organized around object oriented programming principles.
* The simulation is an object
* Each square is an object
* Each citizen is an object
**Key attributes of the model objects**
* *The simulation object:*
* A list of references to all squares in the landscape
* A list of references to all citizens
* *A square object:*
* A list of references to all neighbouring squares
* A reference to the citizen occupying the square (if any)
* Number of group A and B neighbours
* *A citizen object:*
* A reference to its current square
### A quick refresher on objects and references
* Everything in Python is an object
* Lists and assignments only contain **references** to the underlying objects.
Let's say we create the objects `a` and `b`.
If we put those into `list1` and `list2`, then these lists only contain a **reference** to `a` and `b`.
So that if we update an element in `list1`, we **also** update `list2`, because their elements refer to the same underlying objects.
See the diagram below:
<img src="references_diagram.png" alt="Drawing" style="width: 400px;"/>
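A tiny, self-contained demonstration of that referencing behaviour (illustrative only):

```python
class Obj:
    def __init__(self, name):
        self.name = name

a, b = Obj('a'), Obj('b')
list1 = [a, b]
list2 = [a, b]

list1[0].name = 'a (updated)'   # mutate the object through list1 ...
print(list2[0].name)            # ... and list2 sees it: a (updated)
print(list1[0] is list2[0])     # True: both lists refer to the same object
```

No copy is ever made: both lists hold references to the same two underlying objects.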
The code shows how to harness references for making a robust implementation!
### The model "algorithm"
Create simulation object, model parameter object and initialize simulation.
**for** t = 1:T
 fetch all discontent citizens
 fetch all free squares that would make citizens of each group content
 **for** `dc` in discontent_citizens:
  Fetch a new available, satisfactory square
  **for** all neighbors to `dc`'s current square
   **-1** to counter of `dc.type` neighbours (A or B)
  **delete** references between current square and `dc` (moving away)
  **create** references between new square and `dc` (moving in)
  **for** all neighbors to new square
   **+1** to counter of `dc.type` neighbours
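The move step in the pseudocode above might look roughly like this in Python (simplified stand-ins for the real `Square` and `Citizen` classes; the attribute names are assumptions, not the module's actual API):

```python
class Square:
    def __init__(self):
        self.neighbours = []   # references to neighbouring squares
        self.citizen = None    # reference to the occupying citizen, if any
        self.n_A = 0           # count of group-A neighbours
        self.n_B = 0           # count of group-B neighbours

class Citizen:
    def __init__(self, ctype):
        self.ctype = ctype     # 'A' or 'B'
        self.square = None     # reference to the current square

def move(dc, new_square):
    attr = 'n_' + dc.ctype
    old = dc.square
    for nb in old.neighbours:                 # -1 to dc.type counters
        setattr(nb, attr, getattr(nb, attr) - 1)
    old.citizen = None                        # delete old references
    dc.square = new_square                    # create new references
    new_square.citizen = dc
    for nb in new_square.neighbours:          # +1 to dc.type counters
        setattr(nb, attr, getattr(nb, attr) + 1)
```

Because every square keeps counters of its neighbours' types, the "is this square satisfactory?" question never requires scanning the board.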
**How does OOP help the implementation?**
* When a citizen considers moving into a new square, all necessary information is stored in that square.
* That is, the square has a reference to each neighbour, which in turn has a reference to the residing citizen.
* You can say that the objects form a *chain* through referencing.
* Thereby, through referencing each square "knows" its type of neighbours.
* Pro: references are very lightweight, so it does not cost a lot of memory that each square is referenced 8 times.
* Pro: code becomes easy to interpret and easy to change. E.g. a square may get 2 surrounding rows of neighbours instead of 1 (let's try that).
* Con: the simulation is running in pure Python. Not terribly fast.
We'll have a look at the code if time permits.
<a id="Structural-estimation-and-the-consumption-savings-model"></a>
# 2. Structural estimation and the consumption savings model
We have already encountered the canonical consumption-savings model back in L11 and in the problem set.
* The great **benefit** of such a model is that it can be used to run **counter factual scenarios** based on economic policy.
* For example, we may want to know how people react to stimulus packages during covid. [An example of such an exercise](carroll_2020.pdf).
* But then we need the model to be **empirically relevant**!
* When we solved it, we just plugged in some parameters that seemed reasonable.
* That is not good enough for proper policy guidance.
* We need to estimate the core parameters from data.
* This is called **structural estimation**.
* Structural estimation means that you impose behavioral structure (given by the model) on your data to get statistical predictions.
* In our current example, we impose that people are forward looking, optimizing and derive utility from choices in a certain way.
* For more material, go to Jeppe's repository [ConsumptionSavingNotebooks](https://github.com/NumEconCopenhagen/ConsumptionSavingNotebooks) and check out the links on [numeconcopenhagen](https://numeconcopenhagen.netlify.app/).
<font size="4">Quick refresher on the consumption savings model</font>
A household lives for 2 periods and makes decisions on consumption and saving in each period.
The problem of the household is **solved backwards**, since choices today affects the household's state tomorrow.
**Second period:**
household gets utility from **consuming** and **leaving a bequest** (warm glow),
$$
\begin{aligned}
v_{2}(m_{2})&= \max_{c_{2}}\frac{c_{2}^{1-\rho}}{1-\rho}+\nu\frac{(a_2+\kappa)^{1-\rho}}{1-\rho}\\
\text{s.t.} \\
a_2 &= m_2-c_2 \\
a_2 &\geq 0
\end{aligned}
$$
where
* $m_t$ is cash-on-hand
* $c_t$ is consumption
* $a_t$ is end-of-period assets
* $\rho > 1$ is the risk aversion coefficient
* $\nu > 0 $ is the strength of the bequest motive
* $\kappa > 0$ is the degree of luxuriousness in the bequest motive
* $a_2\geq0$ ensures the household *cannot* die in debt
**First period:**
the household gets utility from immediate consumption. Household takes into account that next period income is stochastic.
$$
\begin{aligned}
v_1(m_1)&=\max_{c_1}\frac{c_{1}^{1-\rho}}{1-\rho}+\beta\mathbb{E}_{1}\left[v_2(m_2)\right]\\
\text{s.t.} \\
a_1&=m_1-c_1\\
m_2&= (1+r)(m_1-c_1)+y_2 \\
y_{2}&= \begin{cases}
1-\Delta & \text{with prob. }0.5\\
1+\Delta & \text{with prob. }0.5
\end{cases}\\
a_1&\geq0
\end{aligned}
$$
where
* $\beta > 0$ is the discount factor
* $\mathbb{E}_1$ is the expectation operator conditional on information in period 1
* $y_2$ is income in period 2
* $\Delta \in (0,1)$ is the level of income risk (mean-preserving)
* $r$ is the interest rate
* $a_1\geq0$ ensures the household *cannot* borrow
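For intuition, the backward induction can be sketched with a simple numerical solver. This is a hedged, self-contained sketch, not the lecture's actual implementation (which lives in the `ConsumptionSaving` module); the parameter values are the "true" ones used later in the estimation:

```python
import numpy as np
from scipy import optimize

rho, nu, kappa, beta, r, Delta = 8.0, 0.1, 0.5, 0.94, 0.04, 0.5

def u(c):
    return c**(1 - rho) / (1 - rho)

def v2(c2, m2):
    # period-2 value: consumption utility plus warm-glow bequest
    a2 = m2 - c2
    return u(c2) + nu * (a2 + kappa)**(1 - rho) / (1 - rho)

def solve_v2(m2):
    # maximize over c2 in (0, m2], which enforces a2 >= 0
    res = optimize.minimize_scalar(lambda c: -v2(c, m2),
                                   bounds=(1e-8, m2), method='bounded')
    return -res.fun, res.x

def v1(c1, m1):
    # expectation over the two equally likely income states
    a1 = m1 - c1
    Ev2 = 0.5 * solve_v2((1 + r) * a1 + 1 - Delta)[0] \
        + 0.5 * solve_v2((1 + r) * a1 + 1 + Delta)[0]
    return u(c1) + beta * Ev2

def solve_v1(m1):
    res = optimize.minimize_scalar(lambda c: -v1(c, m1),
                                   bounds=(1e-8, m1), method='bounded')
    return -res.fun, res.x

_, c1_star = solve_v1(1.5)
print(f'optimal c1 at m1=1.5: {c1_star:.3f}')
```

The key structural point is visible in the code: `v1` calls `solve_v2`, so the second-period problem must be solved before the first-period choice can be evaluated, which is exactly what "solved backwards" means.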
<font size="5">How are the parameters of such a model estimated?</font>
* We can use **Simulated Minimum Distance** (SMD), aka Simulated Method of Moments, aka matching on moments.
* Yes, it is closely related to GMM.
* It is the Swiss-army knife of structural estimation: generally applicable.
* Word on the street: *"if you can simulate it, you can estimate it"*
* Other structural models may be estimated by maximum likelihood (preferable when possible).
* Today, we will only look at parameter estimates to get the intuition right.
* Standard errors are for another day, see [here](https://github.com/NumEconCopenhagen/ConsumptionSavingNotebooks/blob/master/00.%20DynamicProgramming/04.%20Structural%20Estimation.ipynb) if you are interested.
## 2.1 Simulated Minimum Distance
**Outline**
1. Define the set of parameters to be estimated, denoted $\theta$. We set $\theta = \rho$, the risk aversion.
2. Define a set of moments from data that can identify $\theta$ (the tricky part). We will use 3 moments: mean consumption in period 1 and 2, and the mean variance in consumption across periods. These moments are calculated from an empirical data set.
3. We then simulate the model with trial values of $\rho$ until the moments from the simulated data are close to the empirical moments.
**Definitions**
* We have individual observations on $N^d$ individuals over $T^d$ periods, denoted $w_i$.
* We assume that the empirical data is generated by our model which is parameterized by $\theta_{dgp}$
* We define a moment generating function:
* $\Lambda = \frac{1}{N}\sum_{i=1}^N m(\theta|w_i)$
* As noted, $\Lambda$ holds the mean of $c_1$, the mean of $c_2$ and the mean of $\text{var}(c_1,c_2)$
* Thus, the moments from data are given by
* $\Lambda_{data} = \frac{1}{N^d}\sum_{i=1}^{N^d} m(\theta_{dgp}|w_i)$
* Given the *guess* $\theta$ on the data generating parameter $\theta_{dgp}$, we can simulate the same set of moments from the model.
* Therefore, we simulate $N^s$ individuals over $T^s$ periods, and the outcome observation is denoted $w_s$
* The simulated set of moments are given by
* $\Lambda_{sim}(\theta) = \frac{1}{N^s}\sum_{s=1}^{N^s} m(\theta|w_s)$
* Finally, we define the function $g(\theta)$, which is the difference between data moments and simulation moments:
* $g(\theta)=\Lambda_{data}-\Lambda_{sim}(\theta)$
**Simulated Minimum Distance (SMD)** estimator is then given by
$$
\hat{\theta} = \arg\min_{\theta} g(\theta)'Wg(\theta)
$$
where $W$ is a **weighting matrix**. $W$ is $J \times J$, where $J$ is the number of moments. The relative size of elements in $W$ determines the importance of the corresponding moments.
One can derive an optimal $W$, but in practice, the Identity matrix often works well. So in our case:
$$
\begin{aligned}
W =
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
\end{aligned}
$$
**Quick quiz** on the SMD. Go [here](https://forms.office.com/Pages/ResponsePage.aspx?id=kX-So6HNlkaviYyfHO_6kckJrnVYqJlJgGf8Jm3FvY9UQVBKWlg3RVpJV1ZPRURVRVo0Q0dWTEVBRiQlQCN0PWcu) for a link.
### Estimating our model
Firstly, the consumption savings model of Lecture 11 has been moved into the class `ConsumptionSavingModel` in the module `ConsumptionSaving`.
Based on a set of "true" parameters we simulate the model for $N^d$ individuals. The outcome is our "empirical" data set.
We therefore know exactly what our estimation should lead to. This is an **important exercise** whenever you do structural estimation: test if you can estimate on synthetic data.
The "true" data generating parameters.
```
par_dgp = SimpleNamespace()
par_dgp.rho = 8
par_dgp.kappa = 0.5
par_dgp.nu = 0.1
par_dgp.r = 0.04
par_dgp.beta = 0.94
par_dgp.Delta = 0.5
```
Create a model object based on true parameters and solve it:
```
true_model = cs.ConsumptionSavingModel(par_dgp)
m1,c1,m2,c2 = true_model.solve()
```
Visualize the solution just to be sure that it looks right
```
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(m1,c1, label=f'Period {1}')
ax.plot(m2,c2, label=f'Period {2}')
ax.legend(loc='lower right',facecolor='white',frameon=True)
ax.set_xlabel('$m_t$')
ax.set_ylabel('$c_t$')
ax.set_title('Policy functions')
ax.set_xlim([0,2])
ax.set_ylim([0,1.5]);
```
Based on the solution, we can create a distribution of initial cash-on-hand, $m_1$, and simulate the following consumption savings paths
```
# Simulate a data set based on the true model
simN = 100_000
true_model.sim_m1 = np.fmax(np.random.normal(1,0.1,size=simN), 0) #np.fmax: One cannot have negative m
data_c1, data_c2 = true_model.simulate() # Simulate choices based on initial m
```
We also need to set up a model for estimation.
We want to estimate $\rho$. This info is provided as an attribute to the model.
```
# Create model object for estimation
par = copy(par_dgp)
est_model = cs.ConsumptionSavingModel(par)
est_model.theta_name = 'rho'
est_model.sim_m1 = np.fmax(np.random.normal(1,0.1,size=simN),0)
```
The function $\Lambda = \frac{1}{N}\sum_{i=1}^N m(\theta|w_i)$ is called `moment_func()`
```
def moment_func(c1, c2):
mom1 = c1.mean()
mom2 = c2.mean()
mom3 = np.var(np.stack((c1, c2)), axis=0).mean() # Averaging the variance of [c_1, c_2] over individuals
return np.array([mom1, mom2, mom3])
```
The function $g(\theta)=\Lambda_{data}-\Lambda_{sim}(\theta)$ is called `moments_diff()`
```
def moments_diff(model, data_moms):
sim_c1, sim_c2 = model.simulate() # sim_c1 and sim_c2 are arrays
sim_moms = moment_func(sim_c1, sim_c2)
return sim_moms - data_moms
```
Our objective $g(\theta)'Wg(\theta)$ is in the function `obj_func()`
```
def obj_func(theta, model, data_moms, W):
setattr(model.par, model.theta_name, theta)
diff = moments_diff(model, data_moms)
obj = diff @ W @ diff
return obj
```
We can now calculate data moments, $\Lambda_{data}$ and define $W$
```
data_moms = moment_func(data_c1, data_c2)
W = np.eye(len(data_moms))
print('Data moments\n', data_moms)
print('Weighting matrix\n',W)
```
We are now ready to estimate!
**The estimation algorithm is as follows:**
1. Calculate data moments, define $W$ and initial guess at estimated parameter $\theta = \theta^{guess}_0$. Set stopping threshold $\epsilon > 0$.
2. Solve the model.
3. Simulate moments from the solution.
4. Calculate the objective based on simulated moments.
5. Make a new guess $\theta^{guess}_1$
6. Perform 2.-4. based on $\theta^{guess}_1$
7. If the **change** in objective value between two consecutive iterations is below $\epsilon$, then stop.
Otherwise reiterate 5.-7.
**Warning:** Estimation by simulation can be very time consuming.
Here we use **Nelder-Mead** as the objective function can be rugged, which it handles well.
```
# Estimation of rho
rho_guess = 6
res = optimize.minimize(obj_func, rho_guess,
args=(est_model, data_moms, W), method='nelder-mead')
display(res)
print(f'rho_hat = {res.x[0]:1.4f}')
```
**Profile of the objective function**
```
npoints = 20
rhos = np.linspace(6.5, 9.5, npoints)
obj_vals = np.empty((npoints,))
for i,rho in enumerate(rhos):
obj_vals[i] = obj_func(rho, est_model, data_moms, W)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(rhos,obj_vals)
ax.set_xlabel(r'$\rho_{guess}$')
ax.set_ylabel('Objective')
ax.set_title(r'Profile of objective function. True $\rho = 8.0$')
plt.show()
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import altair as alt
train = pd.read_csv('data/sales_train.csv.gz')
items = pd.read_csv('data/items.csv')
categories = pd.read_csv('data/item_categories.csv')
shops = pd.read_csv('data/shops.csv')
test = pd.read_csv('data/test.csv.gz')
submission = pd.read_csv('data/sample_submission.csv.gz')
train.head()
train.shape
test.head()
test.shape
submission.head()
```
As we can see, the test set is different in size and structure when compared to the training set. We have the features 'shop_id' and 'item_id' in the test set, which are present in the training set as well. Each observation in the test set has an ID associated with it. If we look at our submission file, we need to submit the monthly count (item_cnt_month) for that particular ID. This means we need to predict a number for the monthly sale quantity of a particular item at a particular shop.
Let's look at the distribution of the training and test set to understand our dataset better and check for overlap, or lack thereof.
```
fig = plt.figure(figsize=(18,9))
plt.subplots_adjust(hspace=.5)
plt.subplot2grid((3,3), (0,0), colspan = 3)
train['shop_id'].value_counts(normalize=True).plot(kind='bar', color = 'red', alpha=0.7)
plt.title('Shop ID Values in the Training Set (Normalized)')
plt.subplot2grid((3,3), (1,0))
train['item_id'].plot(kind='hist', alpha=0.7)
plt.title('Item ID Histogram')
plt.subplot2grid((3,3), (1,1))
train['item_price'].plot(kind='hist', alpha=0.7, color='orange')
plt.title('Item Price Histogram')
plt.subplot2grid((3,3), (1,2))
train['item_cnt_day'].plot(kind='hist', alpha=0.7, color='green')
plt.title('Item Count Day Histogram')
plt.subplot2grid((3,3), (2,0), colspan = 3)
train['date_block_num'].value_counts(normalize=True).plot(kind='bar', color = 'red', alpha=0.7)
plt.title('Month (date_block_num) Values in the Training Set (Normalized)')
plt.show()
```
The above graphs are a nice way to look at the raw distribution of the training dataset. Here are some observations:
1. We have 60 'shop_id's but there is an uneven distribution of these in the dataset. Four (<7%) of these shops make up ~25 percent of this dataset. These are shops (31, 25, 54, 28).
2. The Item IDs seem to have variations in frequency. We can't attribute a reason to this yet but we can inspect this further. Certain categories are bound to sell better, and maybe items under the same category are closer to each other as far as their ID distributions are concerned.
3. From the vast empty spaces in the histograms of 'item_price' and 'item_cnt_day', we can infer that there are outliers in their distribution. **Let's write some simple code below to put a value to these outliers.**
4. Plotting the individual months from January 2013 to October 2015, it is interesting to see that the block month 12, corresponding to December 2013, had the highest number of sales. Month 23, which corresponds to December 2014, had the second highest number of sales. Shortly, we will use some better graphs to observe the monthly sale trends.
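One simple way to put a number to "outlier" is an interquartile-range fence. The sketch below uses a synthetic stand-in for `train['item_price']` (so it runs without the data files); the rule applies unchanged to the real column:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for train['item_price'] with two planted outliers
rng = np.random.default_rng(0)
prices = pd.Series(np.concatenate([rng.normal(1000, 300, 10_000),
                                   [50_000, 300_000]]))

q1, q3 = prices.quantile([0.25, 0.75])
iqr = q3 - q1
upper_fence = q3 + 1.5 * iqr
n_above = int((prices > upper_fence).sum())
print(f'upper fence: {upper_fence:.1f}, values above it: {n_above}')
```

The 1.5 multiplier is the conventional Tukey choice; a larger multiplier gives a more conservative fence.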
```
train['item_id'].value_counts(ascending=False)[:5]
train['item_cnt_day'].sort_values(ascending=False)[:5]
train['item_price'].sort_values(ascending=False)[:5]
```
Having a look at the item id 20949 that has been sold the most number of times, it is a plastic bag!
```
items.loc[items['item_id']==20949]
categories.loc[categories['item_category_id']==71]
```
And just to satisfy my curiosity: there is only one item under item category 71
```
fig = plt.figure(figsize=(18,8))
plt.subplots_adjust(hspace=.5)
plt.subplot2grid((3,3), (0,0), colspan = 3)
test['shop_id'].value_counts(normalize=True).plot(kind='bar', color = 'red', alpha=0.7)
plt.title('Shop ID Values in the Test Set (Normalized)')
plt.subplot2grid((3,3), (1,0))
test['item_id'].plot(kind='hist', alpha=0.7)
plt.title('Item ID Histogram - Test Set')
plt.show()
```
1. The Shop Id's are evenly spread out, unlike the training set. The font size of labels quickly tells me that there are certain Shop Id's missing in the test set as the bars in the training set 'shop_id' plot were more tightly packed.
2. While item id's in the histogram are binned, the spikes are smaller in the test set. The test set is much smaller than the training set, and naturally, the frequency values are significantly lower. It is tough to draw more insights from this histogram.
It seems there might be some values of shop_id and item_id completely missing in the test set. Let's have a closer look and put some numbers or percentages to these missing values.
```
shops_train = train['shop_id'].nunique()
shops_test = test['shop_id'].nunique()
print('Shops in Training Set: ', shops_train)
print('Shops in Test Set: ', shops_test)
```
However, this doesn't mean that the training set contains all of the shops present in the test set. For that, we need to see if every element of the test set is present in the training set. Let's write some simple code to see if the test set list is a subset of the training set list.
```
shops_train_list = list(train['shop_id'].unique())
shops_test_list = list(test['shop_id'].unique())
flag = 0
if(set(shops_test_list).issubset(set(shops_train_list))):
flag = 1
if (flag) :
print ("Yes, list is subset of other.")
else :
print ("No, list is not subset of other.")
```
Great, so all shop id's in the test set are also present in the training set. Let's see if this is true for the item id's.
```
items_train = train['item_id'].nunique()
items_test = test['item_id'].nunique()
print('Items in Training Set: ', items_train)
print('Items in Test Set: ', items_test)
```
There are many more items in the training set than there are in the test set. However, this doesn't mean that the training set contains all of the items in the test set. For that, we need to see if every element of the test set is present in the training set. Let's write some simple code to achieve this.
```
items_train_list = list(train['item_id'].unique())
items_test_list = list(test['item_id'].unique())
flag = 0
if(set(items_test_list).issubset(set(items_train_list))):
flag = 1
if (flag) :
print ("Yes, list is subset of other.")
else :
print ("No, list is not subset of other.")
```
Well then, this means there are certain items that are present in the test set but completely absent in the training set! Can we put a number to this to get an intuition?
```
len(set(items_test_list).difference(items_train_list))
```
There are 363 items that are present in the test set but completely absent in the training set. This doesn't mean that the sales prediction against those items must be zero, as new items can be added to the market or we simply didn't possess the data for those items before. The fascinating question pops up though: how do you go about predicting them?
Before we do that, let's find out more about the 5100 items in the test set. What categories do they belong to? Which categories do we not have to make predictions for in the test set?
```
categories_in_test = items.loc[items['item_id'].isin(sorted(test['item_id'].unique()))].item_category_id.unique()
categories.loc[categories['item_category_id'].isin(categories_in_test)]
categories.loc[~categories['item_category_id'].isin(categories_in_test)]
```
Now, let's generate sales data for each shop and item present in the training set. We should do this for each month, as the final prediction is for the monthly count of sales for a particular shop and item
```
train['date'] = pd.to_datetime(train['date'], format='%d.%m.%Y')
from itertools import product
# Testing generation of cartesian product for the month of January in 2013
shops_in_jan = train.loc[train['date_block_num']==0, 'shop_id'].unique()
items_in_jan = train.loc[train['date_block_num']==0, 'item_id'].unique()
jan = list(product(*[shops_in_jan, items_in_jan, [0]]))
print(len(jan))
```
As we can see, January 2013 contains 365,175 intersections of shops and items. Most of these will have no sales and we can verify this once we check our training set, which has been grouped by month, to see which 'items' x 'shops' combinations have a sale count associated with them.
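That count is simply the product of the set sizes. A small sketch with hypothetical sizes chosen to match the printed number (e.g. 45 shops and 8,115 items active in January 2013):

```python
from itertools import product

# Hypothetical set sizes whose product matches the count above
shops = range(45)
items = range(8115)
jan = list(product(shops, items, [0]))
print(len(jan))  # 365175 == 45 * 8115
```

So the size of each month's grid is |active shops| x |active items|, which is why most of the generated rows will end up with zero sales.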
But before that, we need to generate this cartesian product for all 34 months in the training set. But wait, before generating it for all months, I will generate it for February 2013, concatenate it with January 2013 and produce a dataframe.
```
# Testing generation of cartesian product for the month of February in 2013
shops_in_feb = train.loc[train['date_block_num']==1, 'shop_id'].unique()
items_in_feb = train.loc[train['date_block_num']==1, 'item_id'].unique()
feb = list(product(*[shops_in_feb, items_in_feb, [1]]))
print(len(feb))
cartesian_test = []
cartesian_test.append(np.array(jan))
cartesian_test.append(np.array(feb))
cartesian_test
```
Ran out of memory when trying to create a dataframe from the January and February lists. The trick was to convert the lists to a numpy array. The handy numpy method 'vstack' will shape the cartesian_test object in the right manner, so we can convert it into a long form dataframe.
```
cartesian_test = np.vstack(cartesian_test)
cartesian_test_df = pd.DataFrame(cartesian_test, columns = ['shop_id', 'item_id', 'date_block_num'])
cartesian_test_df.head()
cartesian_test_df.shape
```
Voila! That worked. Time to extend this to all months using some neat code
```
months = train['date_block_num'].unique()
cartesian = []
for month in months:
shops_in_month = train.loc[train['date_block_num']==month, 'shop_id'].unique()
items_in_month = train.loc[train['date_block_num']==month, 'item_id'].unique()
cartesian.append(np.array(list(product(*[shops_in_month, items_in_month, [month]]))))
cartesian_df = pd.DataFrame(np.vstack(cartesian), columns = ['shop_id', 'item_id', 'date_block_num'])
cartesian_df.shape
x = train.groupby(['shop_id', 'item_id', 'date_block_num'])['item_cnt_day'].sum().rename('item_cnt_month').reset_index()
x.head()
x.shape
```
Now we need to merge our two dataframes. For the intersecting rows, we will simply use the values that exist in the dataframe `x`. For the remaining rows, we will substitute zero. Remember, the columns you want to merge on are the intersection of shop_id, item_id, and date_block_num
```
new_train = pd.merge(cartesian_df, x, on=['shop_id', 'item_id', 'date_block_num'], how='left').fillna(0)
```
By default, pandas fills the dataframes with NaN. That's why we use fillna to replace all NaN's with zero.
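A tiny illustration of the left-merge plus `fillna` pattern on synthetic frames (made-up ids, not the competition data):

```python
import pandas as pd

# All (shop, item) combinations we want rows for
left = pd.DataFrame({'shop_id': [0, 0, 1], 'item_id': [10, 11, 10]})
# Observed sales exist only for one combination
sales = pd.DataFrame({'shop_id': [0], 'item_id': [10], 'item_cnt_month': [3.0]})

merged = pd.merge(left, sales, on=['shop_id', 'item_id'], how='left').fillna(0)
print(merged['item_cnt_month'].tolist())  # [3.0, 0.0, 0.0]
```

`how='left'` keeps every row of the cartesian frame, and the combinations with no observed sales come back as NaN before `fillna(0)` turns them into zero-sale rows.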
```
del x
del cartesian_df
del cartesian
del cartesian_test
del cartesian_test_df
del feb
del jan
new_train.head()
new_train['item_cnt_month'] = np.clip(new_train['item_cnt_month'], 0, 20)
new_train['shop_mean'] = new_train.groupby('shop_id')['item_cnt_month'].transform('mean')
new_train.head()
new_train['item_mean'] = new_train.groupby('item_id')['item_cnt_month'].transform('mean')
new_train.head()
```
Okay so what we need to do is fill the shop mean for all shops in the test set. To do this, we will get the unique shop ids present in the test set, iterate over them, locate all rows belonging to a shop, and fill them with a new column titled 'shop_mean' that contains the mean of the items sold in that shop
```
for i in test['shop_id'].unique():
test.loc[test['shop_id'] == i, 'shop_mean'] = new_train.groupby('shop_id')['item_cnt_month'].mean()[i]
test.head()
```
Now we need to do the same for item id means in the test set, but remember there are some items present in the test set that are not present in the training set. How do we deal with these values?
```
for i in test['item_id'].unique():
try:
test.loc[test['item_id'] == i, 'item_mean'] = new_train.groupby('item_id')['item_cnt_month'].mean()[i]
except:
test.loc[test['item_id'] == i, 'item_mean'] = new_train.groupby('item_id')['item_cnt_month'].mean().median()
test.head()
from xgboost import XGBClassifier
from matplotlib import pyplot
def xgtrain():
classifier = XGBClassifier(n_estimators = 50, max_depth = 7, n_jobs = -1)
classifier_ = classifier.fit(new_train.drop(['date_block_num', 'item_cnt_month'], axis=1), new_train['item_cnt_month'])
classifier_.score(new_train.drop(['date_block_num', 'item_cnt_month'], axis = 1), new_train['item_cnt_month'])
return classifier_
%%time
classifier_ = xgtrain()
predictions = classifier_.predict(test.drop('ID', axis = 1))
plt.bar(test.drop('ID', axis = 1).columns, classifier_.feature_importances_)
plt.show()
submission['item_cnt_month'] = predictions
submission.to_csv('data/sub.csv', index=False)
```
```
import torch
from torch import nn, optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import numpy as np
from sklearn.metrics import accuracy_score
class Classifier(nn.Module):
"""
Our simple classifier
"""
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize([0.5], [0.5]),
])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# define model, loss function and optimizer
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)
# hyperparameters
epochs = 5
# training loop
model.train()
for e in range(epochs):
train_loss = 0
for images, labels in trainloader:
images = images.view(images.shape[0], -1)
optimizer.zero_grad()
output = model.forward(images)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
train_loss += loss.item()
else:
print(f"Training loss: {train_loss/len(trainloader)}")
# load validation data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
# get validation prediction
model.eval()
y_preds = np.empty(0)
y_test = np.empty(0)
for images, labels in testloader:
with torch.no_grad():
images = images.view(images.shape[0], -1)
output = model.forward(images)
y_test = np.concatenate([y_test,labels.numpy()], axis=0)
_, top_class = output.topk(1, dim=1)
y_preds = np.concatenate([y_preds,np.squeeze(top_class.numpy())], axis=0)
# get validation score of our model
y_test = y_test.astype(int)
y_preds = y_preds.astype(int)
accuracy_score(y_test, y_preds)
# save the model state
torch.save(model.state_dict(), 'linear_mnist.pth')
# run a tracing module to convert to Torch Script
example_image, example_label = next(iter(trainloader))
traced_script_module = torch.jit.trace(model, example_image)
traced_script_module.save("traced_linear_mnist.pt")
```
# LAB 02: Advanced Feature Engineering in BQML
**Learning Objectives**
1. Create SQL statements to evaluate the model
2. Extract temporal features
3. Perform a feature cross on temporal features
4. Apply ML.FEATURE_CROSS to categorical features
5. Create a Euclidean feature column
6. Feature cross coordinate features
7. Apply the BUCKETIZE function
8. Apply the TRANSFORM clause and L2 Regularization
9. Evaluate the model using ML.PREDICT
## Introduction
In this lab, we utilize feature engineering to improve the prediction of the fare amount for a taxi ride in New York City. We will use BigQuery ML to build a taxifare prediction model, applying feature engineering step by step to improve on a baseline and create a final model with a lower RMSE.
In this Notebook, we perform a feature cross using BigQuery's ML.FEATURE_CROSS, derive coordinate features, feature cross coordinate features, clean up the code, apply the BUCKETIZE function, the TRANSFORM clause, L2 Regularization, and evaluate model performance throughout the process.
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](courses/machine_learning/deepdive2/feature_engineering/labs/2_bqml_adv_feat_eng_bqml-lab.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
### Set up environment variables and load necessary libraries
```
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
```
## The source dataset
Our dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The taxi fare data is a publicly available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=nyc-tlc&d=yellow&t=trips&page=table) to access the dataset.
The Taxi Fare dataset is relatively large at 55 million training rows, but simple to understand, with only six features. The fare_amount is the target, the continuous value we’ll train a model to predict.
## Create a BigQuery Dataset
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __feat_eng__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
```
%%bash
# Create a BigQuery dataset for feat_eng if it doesn't exist
datasetexists=$(bq ls -d | grep -w feat_eng)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: feat_eng"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:feat_eng
   echo -e "\nHere are your current datasets:"
bq ls
fi
```
## Create the training data table
Since there is already a publicly available dataset, we can simply create the training data table using this raw input data. Note the WHERE clause in the below query: this clause allows us to TRAIN on a portion of the data (e.g. one hundred thousand rows versus one million rows), which keeps your query costs down. If you need a refresher on using MOD() for repeatable splits see this [post](https://www.oreilly.com/learning/repeatable-sampling-of-data-sets-in-bigquery-for-machine-learning).
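The repeatable-sampling idea can be sketched in a few lines of Python, using `hashlib` as a stand-in for FARM_FINGERPRINT (it is not the same hash; the point is only that any stable hash of the timestamp string yields the same split every run):

```python
import hashlib

def in_sample(key: str, modulus: int = 10000, remainder: int = 1) -> bool:
    # Stand-in for MOD(ABS(FARM_FINGERPRINT(key)), modulus) = remainder:
    # hash the key deterministically and keep ~1/modulus of the rows.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % modulus == remainder

# The same key always lands in the same split, run after run.
keep = [k for k in ('2014-07-08 10:12:00 UTC', '2015-01-01 00:00:00 UTC')
        if in_sample(k)]
```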
* Note: The dataset in the create table code below is the one created previously, e.g. "feat_eng". The table name is "feateng_training_data". Run the query to create the table.
```
%%bigquery
CREATE OR REPLACE TABLE
feat_eng.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
passenger_count*1.0 AS passengers,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat
FROM
`nyc-tlc.yellow.trips`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 10000) = 1
AND fare_amount >= 2.5
AND passenger_count > 0
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
```
## Verify table creation
Verify that you created the dataset.
```
%%bigquery
# LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT
*
FROM
feat_eng.feateng_training_data
LIMIT
0
```
### Baseline Model: Create the baseline model
Next, you create a linear regression baseline model with no feature engineering. Recall that a model in BigQuery ML represents what an ML system has learned from the training data. A baseline model is a solution to a problem without applying any machine learning techniques.
When creating a BQML model, you must specify the model type (in our case linear regression) and the input label (fare_amount). Note also that we are using the training data table as the data source.
Now we create the SQL statement to create the baseline model.
```
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.baseline_model OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
pickup_datetime,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
feat_eng.feateng_training_data
```
Note, the query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results.
You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes.
Once the training is done, visit the [BigQuery Cloud Console](https://console.cloud.google.com/bigquery) and look at the model that has been trained. Then, come back to this notebook.
### Evaluate the baseline model
Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data.
NOTE: The results are also displayed in the [BigQuery Cloud Console](https://console.cloud.google.com/bigquery) under the **Evaluation** tab.
Review the learning and eval statistics for the baseline_model.
```
%%bigquery
# Eval statistics on the held out data.
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.baseline_model)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.baseline_model)
```
**NOTE:** Because you performed a linear regression, the results include the following columns:
* mean_absolute_error
* mean_squared_error
* mean_squared_log_error
* median_absolute_error
* r2_score
* explained_variance
**Resource** for an explanation of the [Regression Metrics](https://towardsdatascience.com/metrics-to-evaluate-your-machine-learning-algorithm-f10ba6e38234).
**Mean squared error** (MSE) - Measures the difference between the values our model predicted using the test set and the actual values. You can also think of it as the distance between your regression (best fit) line and the predicted values.
**Root mean squared error** (RMSE) - The primary evaluation metric for this ML problem is the root mean-squared error. RMSE measures the difference between the predictions of a model, and the observed values. A large RMSE is equivalent to a large average error, so smaller values of RMSE are better. One nice property of RMSE is that the error is given in the units being measured, so you can tell very directly how incorrect the model might be on unseen data.
**R2**: An important metric in the evaluation results is the R2 score. The R2 score is a statistical measure that determines if the linear regression predictions approximate the actual data. Zero (0) indicates that the model explains none of the variability of the response data around the mean. One (1) indicates that the model explains all the variability of the response data around the mean.
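These metrics can be sketched in a few lines of numpy (toy fares and predictions, not the model's actual output):

```python
import numpy as np

# Hypothetical fares and predictions just to show the arithmetic.
y_true = np.array([9.0, 12.5, 7.0, 20.0])
y_pred = np.array([10.0, 11.0, 8.5, 18.0])

mse = np.mean((y_true - y_pred) ** 2)
rmse = np.sqrt(mse)  # same units as the label, i.e. dollars of fare
r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
```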
Next, we write a SQL query that takes the SQRT() of the mean squared error as our loss metric for evaluating the baseline_model.
```
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.baseline_model)
```
#### Model 1: EXTRACT dayofweek from the pickup_datetime feature.
* As you recall, dayofweek is an enum representing the 7 days of the week.
* If you extract the dayofweek from pickup_datetime using BigQuery SQL, the datatype returned is an integer in the range 1 (Sunday) to 7 (Saturday).
Next, we create a model titled "model_1" from the baseline model and extract dayofweek from pickup_datetime.
```
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_1 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
pickup_datetime,
EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS dayofweek,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
feat_eng.feateng_training_data
```
Once the training is done, visit the [BigQuery Cloud Console](https://console.cloud.google.com/bigquery) and look at the model that has been trained. Then, come back to this notebook.
Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.
```
%%bigquery
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.model_1)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_1)
```
Here we run a SQL query that takes the SQRT() of the mean squared error as our loss metric for evaluating model_1.
```
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_1)
```
### Model 2: EXTRACT hourofday from the pickup_datetime feature
As you recall, **pickup_datetime** is stored as a TIMESTAMP, where the Timestamp format is retrieved in the standard output format – year-month-day hour:minute:second (e.g. 2016-01-01 23:59:59). Hourofday returns the integer number representing the hour number of the given date.
Hourofday is best thought of as a discrete ordinal variable (and not a categorical feature), as the hours can be ranked (e.g. there is a natural ordering of the values). Hourofday has an added characteristic of being cyclic, since 12am follows 11pm and precedes 1am.
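As an aside (not used in this lab, where hourofday is kept as a categorical string), the cyclic property can also be captured numerically with a sine/cosine encoding, sketched here:

```python
import math

def cyclic_encode(hour: int):
    # Map hour 0-23 onto the unit circle so 23:00 and 00:00 end up adjacent,
    # which the raw integers 23 and 0 fail to express.
    angle = 2 * math.pi * hour / 24
    return math.sin(angle), math.cos(angle)

midnight = cyclic_encode(0)
eleven_pm = cyclic_encode(23)
```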
Next, we create a model titled "model_2" and EXTRACT the hourofday from the pickup_datetime feature to improve our model's rmse.
```
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_2 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS dayofweek,
EXTRACT(HOUR
FROM
pickup_datetime) AS hourofday,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
`feat_eng.feateng_training_data`
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_2)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_2)
```
### Model 3: Feature cross dayofweek and hourofday using CONCAT
First, let’s allow the model to learn traffic patterns by creating a new feature that combines the time of day and day of week (this is called a [feature cross](https://developers.google.com/machine-learning/crash-course/feature-crosses/video-lecture)).
Note: BQML by default treats numbers as numeric features and strings as categorical features. We need to convert both the dayofweek and hourofday features to strings because the model would otherwise treat any integer as a numerical value rather than a categorical one. If not cast as strings, dayofweek would be interpreted as the numeric values 1 through 7 and hourofday as the numeric values 0 through 23, and there would be no way to distinguish the "feature cross" of hourofday and dayofweek numerically. Casting dayofweek and hourofday as strings ensures that each combination is treated like a label and gets its own coefficient associated with it.
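A minimal sketch of why the string cast matters: crossing categorical day and hour yields 7 * 24 distinct labels, each of which can receive its own learned coefficient, whereas raw integers would collapse distinct combinations into one number:

```python
# Every (day, hour) pair becomes its own string label; a linear model can
# then learn a separate coefficient for each of the 168 combinations.
crossed = {f"{day}_{hour}" for day in range(1, 8) for hour in range(24)}
print(len(crossed))  # 168
```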
Create the SQL statement to feature cross the dayofweek and hourofday using the CONCAT function. Name the model "model_3"
```
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_3 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
CONCAT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING), CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING)) AS hourofday,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
`feat_eng.feateng_training_data`
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_3)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_3)
```
### Model 4: Apply the ML.FEATURE_CROSS clause to categorical features
BigQuery ML now has ML.FEATURE_CROSS, a pre-processing clause that performs a feature cross.
* ML.FEATURE_CROSS generates a [STRUCT](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#struct-type) feature with all combinations of crossed categorical features, except for 1-degree items (the original features) and self-crossing items.
* Syntax: ML.FEATURE_CROSS(STRUCT(features), degree)
* The features parameter is a comma-separated list of the categorical features to be crossed. The maximum number of input features is 10. Unnamed features are not allowed, and duplicate features are not allowed.
* Degree (optional): The highest degree of all combinations, which should be in the range [1, 4]. Defaults to 2.
Output: The function outputs a STRUCT of all combinations except for 1-degree items (the original features) and self-crossing items, with field names as concatenation of original feature names and values as the concatenation of the column string values.
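The output shape can be sketched with `itertools.combinations` (a rough Python analogue of the degree-2 case, with hypothetical feature values, not BigQuery's implementation):

```python
from itertools import combinations

# Degree-2 sketch of ML.FEATURE_CROSS's output STRUCT: every unordered
# pair of distinct features, skipping 1-degree items and self-crosses.
features = {'dayofweek': '5', 'hourofday': '17'}
crossed = {f"{a}_{b}": f"{features[a]}_{features[b]}"
           for a, b in combinations(features, 2)}
print(crossed)  # {'dayofweek_hourofday': '5_17'}
```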
We continue with the feature engineering begun in Lab 01. Here, we examine the components of ML.FEATURE_CROSS.
```
%%bigquery
CREATE OR REPLACE MODEL feat_eng.model_4
OPTIONS
(model_type='linear_reg',
input_label_cols=['fare_amount'])
AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
#CONCAT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING),
#CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING)) AS hourofday,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM `feat_eng.feateng_training_data`
```
Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_4.
```
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_4)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_4)
```
### Sliding down the slope toward a loss minimum (reduced taxi fare)!
* Our fourth model above gives us an RMSE of 9.65 for estimating fares. Recall our heuristic benchmark was 8.29. This may be the result of feature crossing. Let's apply more feature engineering techniques to see if we can't get this loss metric lower!
### Model 5: Feature cross coordinate features to create a Euclidean feature
Pickup coordinate:
* pickup_longitude AS pickuplon
* pickup_latitude AS pickuplat
Dropoff coordinate:
* dropoff_longitude AS dropofflon
* dropoff_latitude AS dropofflat
**Coordinate Features**:
* The pick-up and drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points.
* Recall that latitude and longitude allow us to specify any location on Earth using a pair of coordinates. In our training dataset, we restricted our data points to pickups and drop-offs within NYC. New York City has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85.
* The dataset contains information regarding the pickup and drop off coordinates. However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.
* We need to convert those coordinates into a single column of a spatial data type. We will use the ST_DISTANCE and the ST_GEOGPOINT functions.
* ST_DISTANCE: ST_DISTANCE(geography_1, geography_2). Returns the shortest distance in meters between two non-empty GEOGRAPHYs (e.g. between two spatial objects).
* ST_GEOGPOINT: ST_GEOGPOINT(longitude, latitude). Creates a GEOGRAPHY with a single point. ST_GEOGPOINT creates a point from the specified FLOAT64 longitude and latitude parameters and returns that point in a GEOGRAPHY value.
Next we convert the feature coordinates into a single column of a spatial data type, using the ST_DISTANCE and ST_GEOGPOINT functions.
SAMPLE CODE:
ST_Distance(ST_GeogPoint(value1,value2), ST_GeogPoint(value3, value4)) AS euclidean
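A rough Python analogue of the distance in meters that ST_DISTANCE returns (an equirectangular approximation, adequate at city scale, not BigQuery's geodesic math):

```python
import math

def approx_distance_m(lon1, lat1, lon2, lat2):
    # Flat-earth (equirectangular) approximation of the geodesic distance
    # in meters; fine for short hops like a Manhattan taxi ride.
    m_per_deg = 111_195  # mean meters per degree of latitude
    dlat = (lat2 - lat1) * m_per_deg
    dlon = (lon2 - lon1) * m_per_deg * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(dlon, dlat)

# A short midtown hop comes out around 1.5 km.
d = approx_distance_m(-73.982683, 40.742104, -73.983766, 40.755174)
```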
```
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_5 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
#CONCAT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING),
#CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING)) AS hourofday,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
#pickuplon,
#pickuplat,
#dropofflon,
#dropofflat,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean
FROM
`feat_eng.feateng_training_data`
```
Next, two distinct SQL statements show metrics for model_5.
```
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_5)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_5)
```
### Model 6: Feature cross pick-up and drop-off locations features
In this section, we feature cross the pick-up and drop-off locations so that the model can learn pick-up-drop-off pairs that will require tolls.
This step takes the geographic point corresponding to the pickup point and grids to a 0.1-degree-latitude/longitude grid (approximately 8km x 11km in New York—we should experiment with finer resolution grids as well). Then, it concatenates the pickup and dropoff grid points to learn “corrections” beyond the Euclidean distance associated with pairs of pickup and dropoff locations.
Because latitude and longitude have meaning only in conjunction, it can be useful to treat the fields as a pair instead of as independent numeric values. However, lat and lon are continuous numbers, so we have to discretize them first. That's what ST_SNAPTOGRID does.
* ST_SNAPTOGRID: ST_SNAPTOGRID(geography_expression, grid_size). Returns the input GEOGRAPHY, where each vertex has been snapped to a longitude/latitude grid. The grid size is determined by the grid_size parameter which is given in degrees.
**REMINDER**: ST_GEOGPOINT creates a GEOGRAPHY with a single point from the specified FLOAT64 longitude and latitude parameters. ST_DISTANCE returns the minimum distance between two spatial objects: meters for geographies and SRID units for geometries.
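Snapping to a grid can be sketched in plain Python (a simplified analogue of ST_SNAPTOGRID on a single point, not the GEOGRAPHY implementation):

```python
def snap_to_grid(lon, lat, grid_size=0.01):
    # Round each coordinate to the nearest grid vertex so nearby points
    # collapse onto the same cell, turning continuous coordinates into
    # a discrete, crossable categorical pair.
    return (round(lon / grid_size) * grid_size,
            round(lat / grid_size) * grid_size)

snapped = snap_to_grid(-73.982683, 40.742104)
```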
The following SQL statement feature crosses the pick-up and drop-off location features.
```
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_6 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
#pickuplon,
#pickuplat,
#dropofflon,
#dropofflat,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean,
CONCAT(ST_AsText(ST_SnapToGrid(ST_GeogPoint(pickuplon,
pickuplat),
0.01)), ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropofflon,
dropofflat),
0.01))) AS pickup_and_dropoff
FROM
`feat_eng.feateng_training_data`
```
Next, we evaluate model_6.
```
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_6)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_6)
```
### Code Clean Up
#### Exercise: Clean up the code to see where we are
Remove all the commented statements in the SQL statement. We should now have the label (fare_amount) plus four input features for our model:
1. fare_amount
2. passengers
3. day_hr
4. euclidean
5. pickup_and_dropoff
```
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_6 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean,
CONCAT(ST_AsText(ST_SnapToGrid(ST_GeogPoint(pickuplon,
pickuplat),
0.01)), ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropofflon,
dropofflat),
0.01))) AS pickup_and_dropoff
FROM
`feat_eng.feateng_training_data`
```
## BQML's Pre-processing functions:
Here are some of the preprocessing functions in BigQuery ML:
* ML.FEATURE_CROSS(STRUCT(features)) does a feature cross of all the combinations
* ML.POLYNOMIAL_EXPAND(STRUCT(features), degree) creates x, x<sup>2</sup>, x<sup>3</sup>, etc.
* ML.BUCKETIZE(f, split_points) where split_points is an array
### Model 7: Apply the BUCKETIZE Function
##### BUCKETIZE
Bucketize is a pre-processing function that creates buckets (i.e. bins): it converts a continuous numerical feature into a string feature whose values are bucket names.
* ML.BUCKETIZE(feature, split_points)
* feature: A numerical column.
* split_points: Array of numerical points to split the continuous values in feature into buckets. With n split points (s1, s2 … sn), there will be n+1 buckets generated.
* Output: The function outputs a STRING for each row, which is the bucket name. bucket_name is in the format of bin_<bucket_number>, where bucket_number starts from 1.
* Currently, our model uses the ST_GeogPoint function to derive the pickup and dropoff feature. In this lab, we use the BUCKETIZE function to create the pickup and dropoff feature.
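The bucket-naming behavior can be sketched with `bisect` (a simplified analogue of ML.BUCKETIZE, assuming values below the first split land in bin_1):

```python
from bisect import bisect_right

def bucketize(value, split_points):
    # With n split points there are n + 1 buckets; the bucket index is
    # one more than the number of splits at or below the value,
    # mirroring ML.BUCKETIZE's bin_<number> naming.
    return f"bin_{bisect_right(split_points, value) + 1}"

print(bucketize(5, [0, 10, 20]))  # bin_2
```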
Next, apply the BUCKETIZE function to model_7 and run the query.
```
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_7 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
CONCAT( ML.BUCKETIZE(pickuplon,
GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(pickuplat,
GENERATE_ARRAY(37, 45, 0.01)), ML.BUCKETIZE(dropofflon,
GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(dropofflat,
GENERATE_ARRAY(37, 45, 0.01)) ) AS pickup_and_dropoff
FROM
`feat_eng.feateng_training_data`
```
Next, we evaluate model_7.
```
%%bigquery
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.model_7)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_7)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_7)
```
### Final Model: Apply the TRANSFORM clause and L2 Regularization
Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause. BigQuery ML now supports defining data transformations during model creation, which will be automatically applied during prediction and evaluation. This is done through the TRANSFORM clause in the existing CREATE MODEL statement. By using the TRANSFORM clause, user specified transforms during training will be automatically applied during model serving (prediction, evaluation, etc.)
In our case, we use the TRANSFORM clause to separate the raw input data from the TRANSFORMED features. The input columns of the TRANSFORM clause are those of the query_expr (the AS SELECT part). The output columns of TRANSFORM's select_list are used in training. These transformed columns are post-processed with standardization for numerics and one-hot encoding for categorical variables by default.
The advantage of encapsulating features in the TRANSFORM clause is the client code doing the PREDICT doesn't change, e.g. our model improvement is transparent to client code. Note that the TRANSFORM clause MUST be placed after the CREATE statement.
##### [L2 Regularization](https://developers.google.com/machine-learning/glossary/#L2_regularization)
Sometimes the training RMSE is quite reasonable, but the evaluation RMSE shows considerably more error. Given the severity of the delta between the EVALUATION RMSE and the TRAINING RMSE, this may be an indication of overfitting. When we do feature crosses, we run the risk of overfitting (for example, when a particular day-hour combo doesn't have enough taxi rides).
Overfitting is a phenomenon that occurs when a machine learning or statistics model is tailored to a particular dataset and is unable to generalize to other datasets. This usually happens in complex models, like deep neural networks. Regularization is a process of introducing additional information in order to prevent overfitting.
Therefore, we will apply L2 Regularization to the final model. As a reminder, a regression model that uses the L1 regularization technique is called Lasso Regression while a regression model that uses the L2 Regularization technique is called Ridge Regression. The key difference between these two is the penalty term. Lasso shrinks the less important feature’s coefficient to zero, thus removing some features altogether. Ridge regression adds “squared magnitude” of coefficient as a penalty term to the loss function.
In other words, L1 limits the size of the coefficients. L1 can yield sparse models (i.e. models with few coefficients); Some coefficients can become zero and eliminated.
L2 regularization adds an L2 penalty equal to the square of the magnitude of coefficients. L2 will not yield sparse models and all coefficients are shrunk by the same factor (none are eliminated).
The regularization terms are ‘constraints’ by which an optimization algorithm must ‘adhere to’ when minimizing the loss function, apart from having to minimize the error between the true y and the predicted ŷ. This in turn reduces model complexity, making our model simpler. A simpler model can reduce the chances of overfitting.
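The shrinkage effect of the L2 penalty can be seen in a minimal numpy sketch (synthetic data and a closed-form solve, not BQML's optimizer):

```python
import numpy as np

# Synthetic regression problem with known coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, -2.0, 1.5, 0.5, -1.0]) + rng.normal(scale=0.1, size=100)

def fit_linear(X, y, l2_reg=0.0):
    # Closed-form ridge solution: (X'X + lambda * I)^-1 X'y.
    return np.linalg.solve(X.T @ X + l2_reg * np.eye(X.shape[1]), X.T @ y)

w_ols = fit_linear(X, y)                 # no penalty
w_ridge = fit_linear(X, y, l2_reg=50.0)  # coefficients shrunk toward zero
```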
Here we apply the TRANSFORM clause and L2 Regularization to the final model.
```
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.final_model
TRANSFORM(fare_amount,
#SQRT( (pickuplon-dropofflon)*(pickuplon-dropofflon) + (pickuplat-dropofflat)*(pickuplat-dropofflat) ) AS euclidean,
ST_Distance(ST_GeogPoint(pickuplon,
pickuplat),
ST_GeogPoint(dropofflon,
dropofflat)) AS euclidean,
ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING) AS hourofday)) AS day_hr,
CONCAT( ML.BUCKETIZE(pickuplon,
GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(pickuplat,
GENERATE_ARRAY(37, 45, 0.01)), ML.BUCKETIZE(dropofflon,
GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(dropofflat,
GENERATE_ARRAY(37, 45, 0.01)) ) AS pickup_and_dropoff ) OPTIONS(input_label_cols=['fare_amount'],
model_type='linear_reg',
l2_reg=0.1) AS
SELECT
*
FROM
feat_eng.feateng_training_data
```
Next, we evaluate the final model.
```
%%bigquery
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.final_model)
```
```
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.final_model)
```
```
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.final_model)
```
### Predictive Model
Now that you have evaluated your model, the next step is to use it to predict an outcome: here, the taxi fare amount.
The ML.PREDICT function generates predictions from your model, feat_eng.final_model.
Since this is a regression model (it predicts a continuous numerical value), the best way to judge performance is to compare the value predicted by the model against the actual fare or a benchmark; an ML.PREDICT query returns the predictions needed for that comparison.
Now, apply the ML.PREDICT function. (In the lab notebook you replace the blanks with the correct syntax and delete "TODO 6" when you are done; the completed cell is shown below.)
```
%%bigquery
SELECT
*
FROM
ML.PREDICT(MODEL feat_eng.final_model,
(
SELECT
-73.982683 AS pickuplon,
40.742104 AS pickuplat,
-73.983766 AS dropofflon,
40.755174 AS dropofflat,
3.0 AS passengers,
TIMESTAMP('2019-06-03 04:21:29.769443 UTC') AS pickup_datetime ))
```
### Lab Summary
Our ML problem: develop a model to predict taxi fare based on distance from one point to another in New York City.
#### OPTIONAL Exercise: Create an RMSE summary table
**Solution Table**
| Model       | RMSE | Description                          |
|-------------|------|--------------------------------------|
| model_4     | 9.65 | Feature cross categorical features   |
| model_5     | 5.58 | Create a Euclidean feature column    |
| model_6     | 5.90 | Feature cross geo-location features  |
| model_7     | 6.23 | Apply the TRANSFORM clause           |
| final_model | 5.39 | Apply L2 regularization              |
**RUN** the cell to visualize an RMSE bar chart.
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
x = ['m4', 'm5', 'm6','m7', 'final']
RMSE = [9.65,5.58,5.90,6.23,5.39]
x_pos = [i for i, _ in enumerate(x)]
plt.bar(x_pos, RMSE, color='green')
plt.xlabel("Model")
plt.ylabel("RMSE")
plt.title("RMSE Model Summary")
plt.xticks(x_pos, x)
plt.show()
```
Copyright 2020 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
An example of using the EfficientNet model in PyTorch.
```
import numpy as np
import pandas as pd
import os
import matplotlib.image as mpimg
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torch.utils.data import DataLoader, Dataset
import torch.utils.data as utils
from torchvision import transforms
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
data_dir = '../input'
train_dir = data_dir + '/train/train/'
test_dir = data_dir + '/test/test/'
labels = pd.read_csv("../input/train.csv")
labels.head()
class ImageData(Dataset):
    def __init__(self, df, data_dir, transform):
        super().__init__()
        self.df = df
        self.data_dir = data_dir
        self.transform = transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, index):
        img_name = self.df.id[index]
        label = self.df.has_cactus[index]
        img_path = os.path.join(self.data_dir, img_name)
        image = mpimg.imread(img_path)
        image = self.transform(image)
        return image, label
data_transf = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor()])
train_data = ImageData(df = labels, data_dir = train_dir, transform = data_transf)
train_loader = DataLoader(dataset = train_data, batch_size = 64)
!pip install efficientnet_pytorch
from efficientnet_pytorch import EfficientNet
model = EfficientNet.from_name('efficientnet-b1')
# Unfreeze model weights
for param in model.parameters():
param.requires_grad = True
num_ftrs = model._fc.in_features
model._fc = nn.Linear(num_ftrs, 1)
model = model.to('cuda')
optimizer = optim.Adam(model.parameters())
loss_func = nn.BCELoss()
%%time
# Train model
loss_log = []
for epoch in range(5):
    model.train()
    for ii, (data, target) in enumerate(train_loader):
        data, target = data.cuda(), target.cuda()
        target = target.float().unsqueeze(1)  # match the model's [batch, 1] output shape
        optimizer.zero_grad()
        output = model(data)
        m = nn.Sigmoid()
        loss = loss_func(m(output), target)
        loss.backward()
        optimizer.step()
        if ii % 1000 == 0:
            loss_log.append(loss.item())
    print('Epoch: {} - Loss: {:.6f}'.format(epoch + 1, loss.item()))
# plt.figure(figsize=(10,8))
# plt.plot(loss_log)
submit = pd.read_csv('../input/sample_submission.csv')
test_data = ImageData(df = submit, data_dir = test_dir, transform = data_transf)
test_loader = DataLoader(dataset = test_data, shuffle=False)  # default batch_size=1: one image per step
%%time
predict = []
model.eval()
for i, (data, _) in enumerate(test_loader):
    data = data.cuda()
    output = model(data)
    pred = torch.sigmoid(output)
    predicted_vals = pred > 0.5
    predict.append(int(predicted_vals))

submit['has_cactus'] = predict
submit.to_csv('submission.csv', index=False)
submit.head()
```
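The training loop above applies an explicit `nn.Sigmoid` and then `nn.BCELoss`. PyTorch also offers `nn.BCEWithLogitsLoss`, which fuses the two and is numerically stabler for large logits. To see why, here is a framework-independent numpy sketch of the same mathematics (our own helper functions, not the notebook's code):

```python
import numpy as np

# Naive: sigmoid then log -- the log term underflows for large |logit|.
def bce_naive(logit, target):
    p = 1.0 / (1.0 + np.exp(-logit))
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

# Stable "with logits" form: max(x, 0) - x*t + log(1 + exp(-|x|)).
def bce_with_logits(logit, target):
    return np.maximum(logit, 0) - logit * target + np.log1p(np.exp(-np.abs(logit)))

# The two agree for moderate logits...
assert np.isclose(bce_naive(2.0, 1.0), bce_with_logits(2.0, 1.0))
# ...but only the stable form stays finite for extreme ones:
print(bce_with_logits(40.0, 0.0))  # ~40.0; bce_naive would return inf here
```

In the notebook this would amount to dropping the `m = nn.Sigmoid()` line and using `nn.BCEWithLogitsLoss()` on the raw model output; the `torch.sigmoid` in the prediction loop stays as-is.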
[GradMax: Growing Neural Networks using Gradient Information](https://arxiv.org/abs/2201.05125).
This colab includes the Student-Teacher experiments; for the image-classification experiments, follow the instructions at
[github.com/google-research/growneuron](https://github.com/google-research/growneuron).
Copyright 2021 Authors. SPDX-License-Identifier: Apache-2.0
```
!pip3 install git+https://github.com/google-research/growneuron.git
#@title LICENSE
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Import libraries {display-mode: "form"}
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
from collections import Counter
from collections import defaultdict
from tqdm import tqdm
from copy import deepcopy
from absl import logging
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import shutil
import time
import math
import os
tf.enable_v2_behavior()
import growneuron.layers as glayers
from growneuron import growers
from growneuron import updaters
#@title Model and Experiment Definition
class DenseModel(tf.keras.Model):
    """Example model built from a sequence of fully connected layers."""

    def __init__(self, n_neurons, activation='relu1', use_bn=False, use_bias=False):
        super(DenseModel, self).__init__()
        self.n_neurons = n_neurons
        self.use_bn = use_bn
        self.layers_seq = []
        for i, n_neuron in enumerate(n_neurons):
            if i == (len(n_neurons) - 1):
                layer = tf.keras.layers.Dense(n_neuron, use_bias=use_bias, name='last_layer')
                self.layers_seq.append(glayers.GrowLayer(layer))
            else:
                layer = tf.keras.layers.Dense(n_neuron, use_bias=use_bias, name=f'dense_{i}')
                self.layers_seq.append(glayers.GrowLayer(layer, activation=activation))
                if use_bn:
                    self.layers_seq.append(glayers.GrowLayer(tf.keras.layers.BatchNormalization()))

    def call(self, x, is_debug=False):
        """Regular forward pass on the layer."""
        for layer in self.layers_seq:
            if is_debug: print(x[0])
            x = layer(x)
        return x
def flatten_list_of_vars(var_list):
    flat_vars = [tf.reshape(v, -1) for v in var_list]
    return tf.concat(flat_vars, axis=-1)
def train_w_growth(growth_type=None, n_neurons=(5, 5), n_growth_steps=5, growth_interval=200,
data=None, actv_fn='relu1', layer_id=0, lr=1e-2, ckpt_prefix='tf_ckpts', is_print=False,
num_to_add=1, total_steps=1200, scale=1.0, optim_type='sgd', use_bias=False,
start_iter=None):
print('Training with growth type: ', growth_type)
save_step = 100 # save the model every 100 steps
start_iter = growth_interval if start_iter is None else start_iter
total_steps += start_iter
print('total steps:', total_steps)
stats = defaultdict(list)
for i, (x_train, y_train) in enumerate(data):
start = time.time()
if optim_type == 'sgd':
optimizer = tf.keras.optimizers.SGD(learning_rate=lr)
carry_optimizer = False
elif optim_type == 'momentum_true':
optimizer = tf.keras.optimizers.SGD(learning_rate=lr, momentum=0.9)
carry_optimizer = True
elif optim_type == 'momentum_false':
optimizer = tf.keras.optimizers.SGD(learning_rate=lr, momentum=0.9)
carry_optimizer = False
elif optim_type == 'adam_true':
optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
carry_optimizer = True
elif optim_type == 'adam_false':
optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
carry_optimizer = False
student_model = DenseModel(n_neurons, activation=actv_fn, use_bn=False, use_bias=use_bias)
def get_loss(unused_arg=None):
del unused_arg
pred = student_model(x_train)
loss_fn = tf.keras.losses.MeanSquaredError()
return loss_fn(y_train, pred)
# init the model
get_loss()
if growth_type == 'gradmax':
grower = growers.AddGradmax()
elif growth_type == 'random':
grower = growers.AddRandom()
elif growth_type == 'random_zeros':
grower = growers.AddRandom()
grower.is_all_zero = True
elif growth_type == 'random_inverse':
grower = growers.AddRandom()
grower.is_outgoing_zero = True
elif growth_type == 'gradmax_opt_inverse':
grower = growers.AddGradmaxOptimInverse()
elif growth_type == 'gradmax_opt':
grower = growers.AddGradmaxOptim()
grower.optim_fn = lambda: tf.keras.optimizers.Adam()
else:
print('No growing')
grower = None
layers_to_grow = ((student_model.layers_seq[layer_id], student_model.layers_seq[layer_id+1]),)
updater = updaters.RoundRobin(grower, layers_to_grow, loss_fn=get_loss,
compile_fn=get_loss,
update_frequency=growth_interval,
start_iteration=start_iter,
n_growth_steps=n_growth_steps,
n_grow=num_to_add,
carry_optimizer=carry_optimizer,
scale=scale)
ckpt = tf.train.Checkpoint(net=student_model)
ckpt_path = f'./{ckpt_prefix}_{i}'
manager = tf.train.CheckpointManager(ckpt, ckpt_path, max_to_keep=None)
if os.path.exists(ckpt_path) and os.path.isdir(ckpt_path):
shutil.rmtree(ckpt_path)
stats['loss_curves'].append([])
stats['gnorm_curves'].append([])
stats['gnorm_l1_curves'].append([])
stats['gnorm_l2_curves'].append([])
for j in range(total_steps):
with tf.GradientTape() as tape:
loss = get_loss()
variables = student_model.trainable_variables
all_grads = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(all_grads, variables))
stats['gnorm_curves'][-1].append(tf.norm(flatten_list_of_vars(all_grads)).numpy())
stats['gnorm_l1_curves'][-1].append(tf.norm(all_grads[0]).numpy())
stats['gnorm_l2_curves'][-1].append(tf.norm(all_grads[1]).numpy())
stats['loss_curves'][-1].append(loss.numpy())
if updater.is_update_iteration(j):
save_path = manager.save()
stats['ckpt_path'].append(save_path)
# Data is fixed, therefore None
updater.update_network(None, optimizer=optimizer)
if is_print:
print('n_neurons', student_model.layers_seq[layer_id].units)
end = time.time()
stats['runtime'].append(end - start)
return student_model, stats
#@title Run Experiments
def run_experiments(values, key, width=10, actv_fn = 'relu1', n_data=1000, num_to_add=1, final_steps=500, scale=1.0, use_bias=False,
start_iter = None, n_growth_multiplier = 1, student_width=None,
lr=0.01, n_repeat=1, growth_interval=200, growth_type='baseline_small', weights_range=1., optim_type='momentum_true'):
all_data = []
if (width % 2) != 0:
    raise ValueError(f'width: {width} must be divisible by 2.')
if student_width is None:
student_width = width // 2
in_dim, out_dim = width * 2, 10
tf.random.set_seed(0)
teacher_model = DenseModel([width, out_dim], activation=actv_fn, use_bias=use_bias)
# Init variables
teacher_model(tf.random.normal(shape=[1, out_dim]))
for layer in teacher_model.layers_seq:
if layer.weights:
layer.weights[0].assign(
tf.random.uniform(layer.weights[0].shape,
minval=-weights_range, maxval=weights_range))
x_train = tf.random.normal(shape=[n_data, out_dim])
y_train = teacher_model(x_train)
for i in range(n_repeat):
all_data.append((x_train, y_train))
if ((width - student_width) % num_to_add) != 0:
    raise ValueError(f'width - student_width: {width - student_width} must be divisible by {num_to_add}.')
n_growth = (width - student_width) // num_to_add
if n_growth_multiplier != 1:
n_growth = int(n_growth * n_growth_multiplier)
total_steps = (n_growth-1) * growth_interval + final_steps
print(total_steps)
baseline_kwargs = {
'n_neurons': (student_width, out_dim),
'n_growth_steps': n_growth,
'growth_type': growth_type,
'total_steps': total_steps,
'num_to_add': num_to_add,
'use_bias': use_bias,
'start_iter': start_iter,
'actv_fn': actv_fn,
'data': all_data,
'optim_type': optim_type,
'growth_interval': growth_interval,
'lr': lr,
'scale': scale
}
stats = {}
for val in values:
print(f'{key}: {val}')
new_kwargs = baseline_kwargs.copy()
new_kwargs[key] = val
if new_kwargs['growth_type'] == 'baseline_big':
new_kwargs['n_neurons'] = (width, out_dim)
print_kwargs = new_kwargs.copy()
print_kwargs['data'] = None
print(print_kwargs)
_, stats[val] = train_w_growth(ckpt_prefix=f'{width}_{key}_{val}', **new_kwargs)
return stats, all_data
#@title Plotting functions
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
import matplotlib as mpl
from matplotlib import style
style.use('default')
def plot_data(stats, methods, data_key='loss_curves', f_name='mlp_loss',
              x_fn=dict.get, n_boot=100, ci=80, label_dict=None, figsize=(10, 8), **kwargs):
    plt.figure(figsize=figsize)
    for key in methods:
        x = []
        y = []
        for loss_curve in x_fn(stats[key], data_key):
            yy = loss_curve
            xx = list(range(len(yy)))
            x.extend(xx)
            y.extend(yy)
        x, y = np.array(x), np.array(y)
        label_key = key
        if label_dict:
            label_key = label_dict[key]
        sns.lineplot(x, y, lw=3, label=label_key, ci=ci, n_boot=n_boot)
    if kwargs:
        plt.setp(plt.gca(), **kwargs)
    plt.yscale('log')
    plt.legend()
    plt.grid()
    plt.axis('tight')
    f_path = '/tmp/%s.pdf' % f_name
    plt.savefig(f_path, dpi=300, bbox_inches='tight')
    plt.show()
```
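The `flatten_list_of_vars` helper defined in the cell above exists so one norm can be taken over an entire parameter (or gradient) list at once. A minimal numpy analogue, useful for checking the idea outside TensorFlow (our own function name):

```python
import numpy as np

def flatten_list_of_arrays(arrays):
    # numpy analogue of flatten_list_of_vars: reshape each array
    # to 1-D and concatenate, so a single norm covers all parameters.
    return np.concatenate([np.ravel(a) for a in arrays])

grads = [np.ones((2, 3)), np.full((4,), 2.0)]  # mock per-layer gradients
flat = flatten_list_of_arrays(grads)
print(flat.shape)            # (10,)
print(np.linalg.norm(flat))  # sqrt(6*1 + 4*4) = sqrt(22)
```

This is exactly the quantity recorded in `stats['gnorm_curves']` during training, except computed with `tf.norm` on tensors there.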
# Figure 3a and 2b
The following command runs the experiments for all growing baselines needed for the figures (3 repeats here instead of 10), then plots the loss and the gradient norm over the course of training.
```
ALL_METHODS = ['random', 'gradmax', 'baseline_small', 'baseline_big', 'gradmax_opt']
stats, all_data = run_experiments(ALL_METHODS, 'growth_type', lr=0.1, n_repeat=3, growth_interval=200, final_steps=500, optim_type='sgd')
label_tranform_dict = {
'random': 'Random',
'gradmax': 'GradMax',
'gradmax_opt': 'GradMaxOpt',
'baseline_small': 'Baseline (Small)',
'baseline_big': 'Baseline (Big)'
}
plot_data(stats, ALL_METHODS, data_key='loss_curves', f_name='mlp_loss',
ylabel='Training Loss', label_dict=label_tranform_dict,
xlabel='Training Steps',figsize=(13,8))
def normalize_byloss_fn(d, k):
    new_curves = []
    for gns, losses in zip(d[k], d['loss_curves']):
        new_curves.append([gn/loss for gn, loss in zip(gns, losses)])
    return new_curves
C_METHODS = ['random', 'gradmax', 'gradmax_opt']
plot_data(stats, C_METHODS, data_key='gnorm_curves', f_name='mlp_gnorm_normalized',
x_fn=normalize_byloss_fn, ylabel='Adjusted Gradient Norm', xlabel='Training Steps',
label_dict=label_tranform_dict)
```
# Figure 2a and 2c
Here we load the checkpoints saved during training, compute gradients at each growth point, and then continue training.
```
from functools import partial
width=10
growth_ids = range(5)
growth_types = ['gradmax', 'random', 'gradmax_opt']
teacher_model = DenseModel([width, width], activation='relu1')
actv_fn='relu1'
n_repeat=5
layer_id=0
n_iter=500
is_print=True
all_stats = defaultdict(lambda : defaultdict(list))
x_train, y_train = all_data[0]
def loss_fn(model, *args):
    loss_fn = tf.keras.losses.MeanSquaredError()
    pred = model(x_train)
    return loss_fn(y_train, pred)

def compute_grads(model):
    with tf.GradientTape() as tape:
        loss = loss_fn(model)
    variables = model.trainable_variables
    all_grads = tape.gradient(loss, variables)
    return all_grads
for j in growth_ids:
ckpt_name = stats['random']['ckpt_path'][j]
n_neurons=[5+j, 10]
print(f'Grow {j}', n_neurons)
for growth_type in growth_types:
if growth_type == 'gradmax':
grower = growers.AddGradmax()
elif growth_type == 'random':
grower = growers.AddRandom()
elif growth_type == 'gradmax_opt':
grower = growers.AddGradmaxOptim()
grower.optim_fn = lambda: tf.keras.optimizers.Adam(100)
else:
print('No growing')
grower = None
grower.strategy = tf.distribute.get_strategy()
dict_key = f'{growth_type}_{j}'
init_grad_diffs = []
loss_curves = []
repeat = n_repeat if growth_type != 'gradmax' else 1
for i in range(repeat):
optimizer = tf.keras.optimizers.SGD(learning_rate=0.05)
model = DenseModel(n_neurons, activation=actv_fn)
ckpt = tf.train.Checkpoint(net=model)
ckpt.restore(ckpt_name)
model(x_train[:2])
grow_layer_tuple = (model.layers_seq[layer_id], model.layers_seq[layer_id+1])
grad_norm_before = tf.norm(compute_grads(model)[layer_id]).numpy()
c_loss_fn = partial(loss_fn, model)
grower.loss_fn = c_loss_fn
grower.compile_fn = c_loss_fn
grower.grow_neurons(grow_layer_tuple, None, n_new=1)
c_loss_fn()
grad_norm_after = tf.norm(compute_grads(model)[layer_id]).numpy()
init_grad_diffs.append(grad_norm_after-grad_norm_before)
losses = [[], []]
for epoch in range(n_iter - 1):
with tf.GradientTape() as tape:
loss = c_loss_fn()
variables = model.trainable_variables
all_grads = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(all_grads, variables))
losses[0].append(epoch)
losses[1].append(loss.numpy())
loss_curves.append(losses)
if is_print:
print(growth_type, np.mean(loss_curves[-1][1]), np.max(init_grad_diffs))
all_stats[dict_key]['grad_diff'] = init_grad_diffs
all_stats[dict_key]['loss_curves'] = loss_curves
from matplotlib.ticker import MaxNLocator
plt.figure(figsize=(16,10))
ax = plt.figure().gca()
for key in ['random', 'gradmax', 'gradmax_opt']:
    y = []
    x = []
    for j in growth_ids:
        dict_key = f'{key}_{j}'
        yy = all_stats[dict_key]['grad_diff']
        x.extend([j+1] * len(yy))
        y.extend(yy)
    print(y[-5:])
    plt.scatter(x, y, label=label_tranform_dict[key], linewidths=5)
# plt.title('Gradient Gain after Growth')
plt.legend()
plt.xlabel('Growth Number')
plt.ylabel('$||\\nabla W_{\\ell}^{new}||$')
plt.yscale('log')
# plt.ylim(5e-3, 5)
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
f_path = '/tmp/mlp_graddiff.pdf'
plt.savefig(f_path, dpi=300, bbox_inches = 'tight')
plt.show()
plt.figure()
for j in growth_ids:
    grad_maxyy = np.array(all_stats[f'gradmax_{j}']['loss_curves'][0][1])
    x = []
    y = []
    for loss_curve in all_stats[f'random_{j}']['loss_curves']:
        xx, yy = loss_curve
        yy = np.array(yy) - grad_maxyy
        x.extend(xx)
        y.extend(yy)
    sns.lineplot(x, y, lw=5, label=f'Growth {j+1}', ci=80)
# plt.title('$\Delta$ Loss between GradMax and Random')
plt.axhline(y=0, color='r', linestyle='-')
plt.legend()
plt.xlabel('Step Number')
plt.ylabel('$\Delta$ Loss')
f_path = '/tmp/mlp_lossdiff.pdf'
plt.savefig(f_path, dpi=300, bbox_inches = 'tight')
plt.show()
```
```
from __future__ import print_function
import pandas as pd
import time
import od_python
from od_python.rest import ApiException
from pprint import pprint
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import datetime as dt
import seaborn as sns
import matplotlib.pyplot as plt
import math
from statistics import mean, stdev
import re
def get_matches(match_list):
# TEAMFIGHTS AND PLAYER DATA NOT COLLECTED HERE
m_id = []
d_bar_stat = []
r_bar_stat = []
cluster = []
cosmet = []
d_score = []
draft_time = []
duration = []
eng = []
f_blood = []
g_mode = []
h_players = []
l_id = []
lob_type = []
match_seq_num = []
objectives = []
picks_bans = []
r_gold_adv = []
r_score = []
r_win = []
r_xp_adv = []
s_time = []
d_tow_stat = []
r_tow_stat = []
ver = []
s_id = []
s_type = []
r_team = []
d_team = []
league = []
skill = []
patch = []
region = []
throw = []
loss = []
match_api = od_python.MatchesApi()
counter = 0
for match in match_list:
try:
if counter > 40:
time.sleep(90)
counter = 0
print(match)
api_response = match_api.matches_match_id_get(match)
counter = counter + 1
m_id.append(api_response.match_id)
d_bar_stat.append(api_response.barracks_status_dire)
r_bar_stat.append(api_response.barracks_status_radiant)
cluster.append(api_response.cluster)
cosmet.append(api_response.cosmetics)
d_score.append(api_response.dire_score)
draft_time.append(api_response.draft_timings)
duration.append(api_response.duration)
eng.append(api_response.engine)
f_blood.append(api_response.first_blood_time)
g_mode.append(api_response.game_mode)
h_players.append(api_response.human_players)
l_id.append(api_response.leagueid)
lob_type.append(api_response.lobby_type)
match_seq_num.append(api_response.match_seq_num)
objectives.append(api_response.objectives)
picks_bans.append(api_response.picks_bans)
r_gold_adv.append(api_response.radiant_gold_adv)
r_score.append(api_response.radiant_score)
r_win.append(api_response.radiant_win)
r_xp_adv.append(api_response.radiant_xp_adv)
s_time.append(api_response.start_time)
d_tow_stat.append(api_response.tower_status_dire)
r_tow_stat.append(api_response.tower_status_radiant)
ver.append(api_response.version)
s_id.append(api_response.series_id)
s_type.append(api_response.series_type)
r_team.append(api_response.radiant_team)
d_team.append(api_response.dire_team)
league.append(api_response.league)
skill.append(api_response.skill)
patch.append(api_response.patch)
region.append(api_response.region)
throw.append(api_response.throw)
loss.append(api_response.loss)
except ApiException as e:
print("Exception when calling MatchesAPI->matches_get: %s\n" % e)
match_id = 0
data_dict = {
"match_id": m_id, "barracks_status_dire":d_bar_stat, "barracks_status_radiant": r_bar_stat,
"cluster": cluster, "cosmetics": cosmet, "dire_score":d_score, "draft_timings": draft_time,
"duration": duration, "engine": eng, "first_blood_time": f_blood, "game_mode": g_mode,
"human_players": h_players, "leagueid": l_id, "lobby_type": lob_type, "match_seq_num": match_seq_num,
"objectives": objectives, "picks_bans": picks_bans, "radiant_gold_adv": r_gold_adv, "radiant_score": r_score,
"radiant_win": r_win, "radiant_xp_adv": r_xp_adv, "start_time": s_time, "tower_status_dire": d_tow_stat,
"tower_status_radiant": r_tow_stat, "version": ver, "series_id": s_id, "series_type": s_type,
"radiant_team": r_team, "dire_team": d_team, "league": league, "skill": skill, "patch": patch, "region": region,
"throw": throw, "loss": loss
}
return data_dict
```
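The `get_matches` function above maintains one Python list per API-response attribute and assembles them into a dict at the end. The same pattern can be written far more compactly by iterating over a field list; the sketch below uses a hypothetical three-field subset and a namedtuple as a stand-in for the real `od_python` response object:

```python
from collections import defaultdict, namedtuple

FIELDS = ["match_id", "duration", "radiant_win"]  # illustrative subset of the ~35 attributes

def collect(responses, fields=FIELDS):
    # Append each attribute of each response into one list per field,
    # replacing the many parallel lists used in get_matches above.
    data = defaultdict(list)
    for resp in responses:
        for field in fields:
            data[field].append(getattr(resp, field, None))
    return dict(data)

Match = namedtuple("Match", FIELDS)  # stand-in for the API response object
rows = collect([Match(1, 2400, True), Match(2, 1800, False)])
print(rows["duration"])  # [2400, 1800]
```

The resulting dict-of-lists feeds `pd.DataFrame(data=...)` exactly like `data_dict` does, and adding a new field becomes a one-line change to `FIELDS`.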
# Fix Promatch Metadata
### International Promatches
```
# Read in promatch data file
int_pro_match = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/promatch_csvs/international_promatch_data.csv")
# Read in promatch match_ids file
int_pro_match_ids = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/match_ids/international_matches.txt",
names=["match_id"])
print(len(int_pro_match["match_id"]))
unique_ids = int_pro_match["match_id"]
print(len(unique_ids.unique()))
print(len(int_pro_match_ids))
unique_match_ids = int_pro_match_ids["match_id"]
print(len(unique_match_ids))
print(list(set(unique_match_ids) - set(unique_ids)))
## NO MISSING PROMATCH DATA
```
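The repeated "missing IDs" check in these cells boils down to a set difference, which is order-insensitive and ignores duplicates — that is why the notebook also compares `len(unique_ids.unique())` against the raw length first. A minimal sketch with made-up IDs:

```python
# Hypothetical ids: three distinct matches collected out of four expected.
collected = [101, 102, 104, 104]  # duplicates can occur in a raw dump
expected = [101, 102, 103, 104]

missing = sorted(set(expected) - set(collected))
print(missing)  # [103]
```

The `missing` list is what gets passed to `get_matches` to backfill the dataset.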
### Premier Promatches
```
# Read in promatch data file
prem_pro_match = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/promatch_csvs/premier_promatch_data.csv")
# Read in promatch match_ids file
prem_pro_match_ids = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/match_ids/premier_matches.txt",
names=["match_id"])
print(len(prem_pro_match["match_id"]))
unique_ids = prem_pro_match["match_id"]
print(len(unique_ids.unique()))
print(len(prem_pro_match_ids))
unique_match_ids = prem_pro_match_ids["match_id"]
print(len(unique_match_ids))
print(list(set(unique_match_ids) - set(unique_ids)))
## NO MISSING PROMATCH DATA
```
### Professional Promatches
```
# Read in promatch data file
prof_pro_match = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/promatch_csvs/professional_promatch_data.csv")
# Read in promatch match_ids file
prof_pro_match_ids = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/match_ids/pro_matches.txt",
names=["match_id"])
print(len(prof_pro_match["match_id"]))
unique_ids = prof_pro_match["match_id"]
print(len(unique_ids.unique()))
print(len(prof_pro_match_ids))
unique_match_ids = prof_pro_match_ids["match_id"]
print(len(unique_match_ids))
print(list(set(unique_match_ids) - set(unique_ids)))
## NO MISSING PROMATCH DATA
```
# Fix Raw Match Data
### International Match Data
```
int_raw_match = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/raw_match_csvs/international_raw_match_full.csv")
int_pro_match_ids = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/match_ids/international_matches.txt",
names=["match_id"])
print(len(int_raw_match["match_id"]))
unique_ids = int_raw_match["match_id"]
print(len(unique_ids.unique()))
print(len(int_pro_match_ids))
unique_match_ids = int_pro_match_ids["match_id"]
print(len(unique_match_ids))
print(list(set(unique_match_ids) - set(unique_ids)))
missing_entries = list(set(unique_match_ids) - set(unique_ids))
## SINGLE ID MISSING FROM INTERNATIONAL MATCH DATA COLLECTION - 1673865688
```
#### FIX DATASET BY APPENDING MISSING ENTRIES
```
int_raw_missing = get_matches(missing_entries)
df_int_missing = pd.DataFrame(data = int_raw_missing)
int_raw_match = int_raw_match.append(df_int_missing)
int_raw_match.to_csv("international_raw_match_full.csv")
int_raw_match_1000 = int_raw_match.sample(1000)
int_raw_match_1000.to_csv("international_raw_match_1000.csv")
int_raw_match_100 = int_raw_match.sample(100)
int_raw_match_100.to_csv("international_raw_match_100.csv")
```
### Premier Match Data
```
# Read in promatch data file
prem_raw_match = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/raw_match_csvs/premier_raw_match_full.csv")
# Read in promatch match_ids file
prem_raw_match_ids = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/match_ids/premier_matches.txt",
names=["match_id"])
print(len(prem_raw_match["match_id"]))
unique_ids = prem_raw_match["match_id"]
print(len(unique_ids.unique()))
print(len(prem_raw_match_ids))
unique_match_ids = prem_raw_match_ids["match_id"]
print(len(unique_match_ids))
print(list(set(unique_match_ids) - set(unique_ids)))
missing_entries = list(set(unique_match_ids) - set(unique_ids))
```
#### FIX DATASET BY APPENDING MISSING ENTRIES
```
prem_raw_missing = get_matches(missing_entries)
df_prem_missing = pd.DataFrame(data = prem_raw_missing)
prem_raw_match = prem_raw_match.append(df_prem_missing)
prem_raw_match.to_csv("premier_raw_match_full.csv")
prem_raw_match_1000 = prem_raw_match.sample(1000)
prem_raw_match_1000.to_csv("premier_raw_match_1000.csv")
prem_raw_match_100 = prem_raw_match.sample(100)
prem_raw_match_100.to_csv("premier_raw_match_100.csv")
```
### Professional Match Data
```
# Read in promatch data file
prof_raw_match = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/raw_match_csvs/professional_raw_match_full.csv")
# Read in promatch match_ids file
#prof_raw_match_ids = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/match_ids/professional_matches.txt",
# names=["match_id"])
print(prof_raw_match.shape)
# Read in promatch data file
prof_raw_match = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/raw_match_csvs/professional_raw_match_full.csv")
# Read in promatch match_ids file
prof_raw_match_ids = pd.read_csv("https://dota-match-ids.s3.amazonaws.com/match_ids/pro_matches.txt",
names=["match_id"])
print(len(prof_raw_match["match_id"]))
unique_ids = prof_raw_match["match_id"]
print(len(unique_ids.unique()))
print(len(prof_raw_match_ids))
unique_match_ids = prof_raw_match_ids["match_id"]
print(len(unique_match_ids))
print(list(set(unique_match_ids) - set(unique_ids)))
missing_entries = list(set(unique_match_ids) - set(unique_ids))
## ____ IDs MISSING FROM PREMIER MATCH DATA COLLECTION
```
#### FIX DATASET BY APPENDING MISSING ENTRIES
```
prof_raw_missing = get_matches(missing_entries)
df_prof_missing = pd.DataFrame(data = prof_raw_missing)
prof_raw_match = prof_raw_match.append(df_prof_missing)
prof_raw_match.to_csv("professional_raw_match_full.csv")
prof_raw_match_1000 = prof_raw_match.sample(1000)
prof_raw_match_1000.to_csv("professional_raw_match_1000.csv")
prof_raw_match_100 = prof_raw_match.sample(100)
prof_raw_match_100.to_csv("professional_raw_match_100.csv")
```
## 7.1 Data Processing
First, load a few required packages:
```
import zipfile  # handle zip archives
import os
import pandas as pd  # handle csv files
import numpy as np
from matplotlib import pyplot as plt
# ----- custom module
from utils.zipimage import ImageZ
%matplotlib inline
```
See https://www.kaggle.com/c/dogs-vs-cats/data for the basic information about the data. From that page we learn that the training set `train.zip` contains $25\,000$ samples, half cats and half dogs, and that the task is to predict the labels of `test1.zip` ($1 = dog$, $0 = cat$). For convenience, download the data locally and then do the following:
```
# the unzip function is also packaged in kaggle/helper.py
def unzip(root, NAME):
    source_path = os.path.join(root, 'all.zip')
    dataDir = os.path.join(root, NAME)
    with zipfile.ZipFile(source_path) as fp:
        fp.extractall(dataDir)  # extract the all.zip dataset into dataDir
    os.remove(source_path)      # delete all.zip
    return dataDir              # return the data directory

root = 'data/'
NAME = 'dog_cat'
dataDir = unzip(root, NAME)
```
To make it easier to handle other Kaggle datasets later, the `unzip` function is packaged into the helper module of the kaggle package. `unzip` extracts the `'all.zip'` file under `root`, returns the directory the data was extracted into, and deletes `'all.zip'` (it is no longer needed).
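As a quick self-contained check of the helper's behavior, the sketch below exercises the same logic against a temporary directory with a made-up archive member name (so it does not touch the real `data/` directory):

```python
import os
import tempfile
import zipfile

def unzip(root, name):
    # same logic as the helper: extract root/all.zip into root/name,
    # delete the archive, and return the extraction directory
    source_path = os.path.join(root, 'all.zip')
    data_dir = os.path.join(root, name)
    with zipfile.ZipFile(source_path) as fp:
        fp.extractall(data_dir)
    os.remove(source_path)
    return data_dir

tmp_root = tempfile.mkdtemp()
with zipfile.ZipFile(os.path.join(tmp_root, 'all.zip'), 'w') as zf:
    zf.writestr('train/cat.0.jpg', b'fake image bytes')  # hypothetical member name
tmp_data_dir = unzip(tmp_root, 'dog_cat')
print(os.listdir(os.path.join(tmp_data_dir, 'train')))  # ['cat.0.jpg']
```

After the call, the extracted file exists under the returned directory and the original `all.zip` is gone.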
First, what files are in the `dataDir` directory?
```
dataDir = 'data/dog_cat'
os.listdir(dataDir)
```
The file `'sampleSubmission.csv'` is the template for submitting predictions to the Kaggle competition:
```
submit = pd.read_csv(os.path.join(dataDir, 'sampleSubmission.csv'))
submit.head()
```
Here `id` is the file name of an image in `test1.zip`; for example, `id = 1` corresponds to the image file `1.jpg`. `label` is the image's label (1 = dog, 0 = cat). Both the training and test sets are `.zip` archives, so we read them directly with the ImageZ class:
```
testset = ImageZ(dataDir, 'test1')  # test data
trainZ = ImageZ(dataDir, 'train')   # training data
```
You may wonder where the training labels are. If you inspect `train.zip` at https://www.kaggle.com/c/dogs-vs-cats/data, you will find that the class information is hidden in the file names; we can see this directly from the `names` attribute of trainZ:
```
trainZ.names[:5]  # look at 5 of the file names
```
So for the training data, each image's class can be recovered from its file name. To match the `1 = dog, 0 = cat` convention, define:
```
class_names = ('cat', 'dog')
class_names[1], class_names[0]
```
For convenience in later processing, define a DataSet class:
```
class DataSet(ImageZ):
    def __init__(self, dataDir, dataType):
        super().__init__(dataDir, dataType)
        self.class_names = ('cat', 'dog')  # class names of the dataset
        self._get_name_class_dict()

    def _get_name_class_dict(self):
        self.name_class_dict = {}  # map a file name to the image's class
        class_dict = {
            class_name: i
            for i, class_name in enumerate(self.class_names)
        }
        for name in self.names:
            class_name = name.split('.')[0].split('/')[-1]
            self.name_class_dict[name] = class_dict[class_name]

    def __iter__(self):
        for name in self.names:
            # yield data in (data, label) form
            yield self.buffer2array(name), self.name_class_dict[name]
```
Here is how the DataSet class is used:
```
dataset = DataSet(dataDir, 'train')
for img, label in dataset:
    print("size:", img.shape, 'label:', label)  # look at one image
    plt.imshow(img)
    plt.show()
    break
```
To check the model's generalization, we need to split dataset into trainset and valset, which means splitting the names attribute of dataset. Given how dataset is built, it is easier to work with trainZ directly:
```
cat_rec = []
dog_rec = []
for name in trainZ.names:
    if name.startswith('train/cat'):
        cat_rec.append((name, 0))
    elif name.startswith('train/dog'):
        dog_rec.append((name, 1))
```
To keep the model from depending on the order of the dataset, we shuffle the original order:
```
import random

random.shuffle(cat_rec)  # shuffle the cat records
random.shuffle(dog_rec)  # shuffle the dog records
train_rec = cat_rec[:10000] + dog_rec[:10000]  # take 10000 samples of each class for training
val_rec = cat_rec[10000:] + dog_rec[10000:]    # the rest for validation
random.shuffle(train_rec)  # mix the class distribution to improve generalization
random.shuffle(val_rec)
len(train_rec), len(val_rec)
```
From the code above, the training and validation sets contain 20000 and 5000 samples, respectively. To let a model consume this data, we next define a **generator**:
```
class Loader:
    def __init__(self, imgZ, rec, shuffle=False, target_size=None):
        if shuffle:
            random.shuffle(rec)  # the training set needs shuffling
        self.shuffle = shuffle
        self.imgZ = imgZ
        self.rec = rec
        self.__target_size = target_size

    def name2array(self, name):
        # convert a file name into an array
        import cv2
        img = self.imgZ.buffer2array(name)
        if self.__target_size:  # resize the image to self.__target_size
            return cv2.resize(img, self.__target_size)
        else:
            return img

    def __getitem__(self, item):
        rec = self.rec[item]
        if isinstance(item, slice):
            return [(self.name2array(name), label) for (name, label) in rec]
        else:
            return self.name2array(rec[0]), rec[1]

    def __iter__(self):
        for name, label in self.rec:
            yield self.name2array(name), label  # yield (data, label)

    def __len__(self):
        return len(self.rec)  # number of samples
```
With this, we obtain the randomly split `trainset` and `valset`.
```
trainset = Loader(trainZ, train_rec, True)  # training set
valset = Loader(trainZ, val_rec)  # validation set
for img, label in trainset:
    plt.imshow(img)  # display the image
    plt.title(str(class_names[label]))  # use the class name as the title
    plt.show()
    break
```
With the data prepared, let us inspect the image sizes:
```
name2size = {}  # collect the size of each image
for name in trainZ.names:
    name2size[name] = trainZ.buffer2array(name).shape[:-1]
min({w for h, w in set(name2size.values())}), min({h for h, w in set(name2size.values())})
```
The code above shows that the minimum image height and width are 32 and 42, respectively. Based on this, we resize all images to `(150, 150)`.
## 7.2 Training and Prediction with Gluon
For learning Gluon, see Mu Li's [*Dive into Deep Learning*](http://zh.d2l.ai/): http://zh.d2l.ai/ . Data augmentation is essential for improving a model's generalization, and Gluon provides a very convenient `transforms` module for it. Let us see how to use it:
```
from mxnet.gluon.data.vision import transforms as gtf
transform_train = gtf.Compose([
    # Randomly crop a region whose area is 0.08 to 1 times the original area
    # and whose aspect ratio is between 3/4 and 4/3, then resize it to 150 x 150
    gtf.RandomResizedCrop(
        150, scale=(0.08, 1.0), ratio=(3.0 / 4.0, 4.0 / 3.0)),
    gtf.RandomFlipLeftRight(),
    # Randomly jitter brightness, contrast, and saturation
    gtf.RandomColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    # Add random noise
    gtf.RandomLighting(0.1),
    gtf.ToTensor(),
    # Standardize each channel of the image
    gtf.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
transform_test = gtf.Compose([
    gtf.Resize(256),
    # Crop out the central 150 x 150 square region
    gtf.CenterCrop(150),
    gtf.ToTensor(),
    gtf.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
```
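As a sanity check on what `ToTensor` plus `Normalize` produce, here is a NumPy-only sketch of the same per-channel standardization. The mean/std values are the ImageNet statistics used above; the function name and the toy image are illustrative, not part of the tutorial's code.

```python
import numpy as np

def normalize_chw(img, mean, std):
    """Standardize a float image of shape (C, H, W) channel by channel."""
    mean = np.asarray(mean).reshape(-1, 1, 1)
    std = np.asarray(std).reshape(-1, 1, 1)
    return (img - mean) / std

# a toy 3 x 2 x 2 "image" scaled to [0, 1], as ToTensor would produce
img = np.full((3, 2, 2), 0.5)
out = normalize_chw(img, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
print(out[0, 0, 0])  # (0.5 - 0.485) / 0.229
```

After this step each channel has roughly zero mean and unit variance over a large natural-image dataset, which is what the pretrained-style statistics assume.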
Since Gluon expects input data of type `mxnet.ndarray.ndarray.NDArray`, while the Loader class outputs `numpy.ndarray` data, the `Loader` class needs to be adapted.
```
from mxnet import nd
from mxnet.gluon import data as gdata


class GluonLoader(Loader, gdata.Dataset):  # gdata.Dataset is one of Gluon's base classes for data handling
    def __init__(self, imgZ, rec):
        super().__init__(imgZ, rec)

    def name2array(self, name):
        return nd.array(self.imgZ.buffer2array(name))  # convert a file name into an NDArray
```
Let us see what the GluonLoader class provides:
```
train_ds = GluonLoader(trainZ, train_rec)
valid_ds = GluonLoader(trainZ, val_rec)
for img, label in train_ds:
print(type(img), img.shape, label)
break
```
At this point, `img` satisfies the input requirements of models built with Gluon. For an image classification task, a convolutional neural network is a good choice, and such networks are generally trained with the mini-batch stochastic gradient descent strategy introduced in Chapter 2. We therefore need to batch the data, which `gluon.data.DataLoader` makes easy:
```
from mxnet.gluon import data as gdata

batch_size = 8  # batch size
train_iter = gdata.DataLoader(  # read a mini-batch of batch_size samples at a time
    train_ds.transform_first(transform_train),  # apply the transform to the first element of each sample, i.e. the image
    batch_size,
    shuffle=True,  # the training data needs to be shuffled
    last_batch='keep')  # keep the last (possibly smaller) batch
valid_iter = gdata.DataLoader(
    valid_ds.transform_first(transform_test),
    batch_size,
    shuffle=True,  # shuffle the validation data as well
    last_batch='keep')
```
`train_ds.transform_first(transform_train)` and `valid_ds.transform_first(transform_test)` apply the augmentation pipelines to the datasets and convert them into a form suitable as input to a Gluon convolutional neural network.
### 7.2.1 Building and Training the Model
Before building the model for this chapter, let us review some essentials of convolutional neural networks. Simply put, a convolutional neural network is a neural network that contains convolutional layers. A neural network is a machine learning model made of layered neurons that extracts multi-level features from data; each neuron consists of an affine transformation (a matrix operation) followed by a nonlinear transformation (called the activation function). In Gluon, the convolution operation is implemented by `nn.Conv2D` (for details, see http://zh.d2l.ai/chapter_convolutional-neural-networks/conv-layer.html ).
The array output by a two-dimensional convolutional layer can be viewed as a feature representation of the data at some level along the width and height dimensions (also called a **feature map** or representation), or as the response to the input (called a **response map**).
```
# Load the required packages
from mxnet import autograd, init, gluon
from mxnet.gluon import model_zoo
import d2lzh as d2l
from mxnet.gluon import loss as gloss, nn
from gluoncv.utils import TrainingHistory  # visualization
from mxnet import metric  # metrics module
```
Commonly used `nn.Conv2D` parameters:
- `channels`: the number of **channels** of the output feature map, i.e. the number of convolution kernels
- `kernel_size`: the size of the convolution kernel
- `activation`: the **activation function**
One more thing to note: the output size of a convolutional layer can be computed with the following formula:
$$
\begin{cases}
h_{out} = \lfloor \frac{h_{in} - f_h + 2p}{s} \rfloor + 1 \\
w_{out} = \lfloor \frac{w_{in} - f_w + 2p}{s} \rfloor + 1
\end{cases}
$$
- $h_{out}, w_{out}$ are the height and width of the layer's output feature map
- $h_{in}, w_{in}$ are the height and width of the layer's input feature map
- $f_h, f_w$ are the kernel height and width, i.e. the `kernel_size` parameter of `nn.Conv2D`
- $s$ is the stride of the kernel, i.e. the `strides` parameter of `nn.Conv2D`
- $p$ is the padding applied to the layer's input, i.e. the `padding` parameter of `nn.Conv2D`, which reduces boundary effects
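The formula above is easy to check numerically; a small helper (illustrative, not part of the book's code) computes the output size for a given kernel, stride, and padding:

```python
def conv_out_size(h_in, w_in, f_h, f_w, s=1, p=0):
    """Output height/width of a conv layer: floor((n - f + 2p) / s) + 1."""
    h_out = (h_in - f_h + 2 * p) // s + 1
    w_out = (w_in - f_w + 2 * p) // s + 1
    return h_out, w_out

# a 3x3 kernel with padding 1 and stride 1 preserves the spatial size
print(conv_out_size(150, 150, 3, 3, s=1, p=1))  # (150, 150)
# stride 2 roughly halves it
print(conv_out_size(150, 150, 3, 3, s=2, p=1))  # (75, 75)
```

This is why the 3x3/padding-1 convolutions below leave the feature map size unchanged, and only the pooling layers shrink it.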
Although strided convolutions (strides greater than 1) can downsample an image, they are rarely used for that purpose. Downsampling is usually done with a **pooling** operation, implemented in Gluon by `nn.MaxPool2D` and `nn.AvgPool2D`.
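To make the pooling idea concrete, here is a NumPy-only sketch of 2x2 max pooling with stride 2, which is what `nn.MaxPool2D(pool_size=(2, 2), strides=(2, 2))` computes per channel (the helper name is ours):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a 2-D array (H and W must be even)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 8, 1, 0],
              [7, 6, 2, 3]])
print(max_pool_2x2(x))
# [[4 8]
#  [9 3]]
```

Each output element is the maximum of one non-overlapping 2x2 window, so the spatial size is halved while the strongest responses survive.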
Now create the model and train it:
```
class SimpleNet(nn.HybridBlock):
def __init__(self, no4x1pooling=False, **kwargs):
super().__init__(**kwargs)
self.no4x1pooling = no4x1pooling
layer_size = [min(32 * 2 ** (i + 1), 512) for i in range(6)]
def convRelu(layer, i, bn=True):
layer.add(nn.Conv2D(
channels=layer_size[i], kernel_size=(3, 3), padding=(1, 1)))
if bn:
layer.add(nn.BatchNorm())
layer.add(nn.LeakyReLU(alpha=0.25))
return layer
with self.name_scope():
self.conv = nn.HybridSequential(prefix='')
with self.conv.name_scope():
self.conv = convRelu(self.conv, 0) # bz x 64 x 32 x 280
self.max_pool = nn.HybridSequential(prefix='')
# bz x 128 x 16 x 140
self.max_pool.add(nn.MaxPool2D(
pool_size=(2, 2), strides=(2, 2)))
self.avg_pool = nn.HybridSequential(prefix='')
# bz x 128 x 16 x 140
self.avg_pool.add(nn.AvgPool2D(
pool_size=(2, 2), strides=(2, 2)))
self.net = nn.HybridSequential(prefix='')
self.net = convRelu(self.net, 1)
# bz x 256 x 8 x 70
self.net.add(nn.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
self.net = convRelu(self.net, 2, True)
self.net = convRelu(self.net, 3)
# bz x 512 x 4 x 35
self.net.add(nn.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
self.net = convRelu(self.net, 4, True)
self.net = convRelu(self.net, 5)
self.c = 512
if not self.no4x1pooling:
self.cols_pool = nn.HybridSequential(prefix='')
# bz x 512 x 1 x 35
self.cols_pool.add(nn.AvgPool2D(pool_size=(4, 1)))
self.cols_pool.add(nn.Dropout(rate=0.5))
else:
self.c = self.c * 4
self.no_cols_pool = nn.HybridSequential(prefix='')
self.no_cols_pool.add(nn.Dropout(rate=0.5))
def hybrid_forward(self, F, x):
x = self.conv(x)
max = self.max_pool(x)
avg = self.avg_pool(x)
x = max - avg
x = self.net(x)
if not self.no4x1pooling:
x = self.cols_pool(x)
else:
x = self.no_cols_pool(x)
x = F.reshape(data=x, shape=(0, -3, 1, -2))
return x
def evaluate_loss(data_iter, net, ctx):
    l_sum, n = 0.0, 0
    for X, y in data_iter:
        y = y.as_in_context(ctx).astype('float32')  # the model outputs float32 data
        outputs = net(X.as_in_context(ctx))  # model output
        l_sum += loss(outputs, y).sum().asscalar()  # accumulate the total loss
        n += y.size  # count the samples
    return l_sum / n  # mean loss
def test(valid_iter, net, ctx):
    val_metric = metric.Accuracy()
    for X, y in valid_iter:
        X = X.as_in_context(ctx)
        y = y.as_in_context(ctx).astype('float32')  # the model outputs float32 data
        outputs = net(X)
        val_metric.update(y, outputs)
    return val_metric.get()
def train(net, train_iter, valid_iter, num_epochs, lr, wd, ctx, model_name):
    import time
    trainer = gluon.Trainer(net.collect_params(), 'rmsprop', {
        'learning_rate': lr,
        'wd': wd
    })  # optimization strategy
    train_metric = metric.Accuracy()
    train_history = TrainingHistory(['training-error', 'validation-error'])
    best_val_score = 0
    for epoch in range(num_epochs):
        train_l_sum, n, start = 0.0, 0, time.time()  # start timing
        train_acc_sum = 0
        train_metric.reset()
        for X, y in train_iter:
            X = X.as_in_context(ctx)
            y = y.as_in_context(ctx).astype('float32')  # the model outputs float32 data
            with autograd.record():  # record gradient information
                outputs = net(X)  # model output
                l = loss(outputs, y).sum()  # total loss
            l.backward()  # backpropagation
            trainer.step(batch_size)
            train_l_sum += l.asscalar()  # accumulate the batch loss
            train_metric.update(y, outputs)  # update the training accuracy
            n += y.size
        _, train_acc = train_metric.get()
        time_s = "time {:.2f} sec".format(time.time() - start)  # stop timing
        valid_loss = evaluate_loss(valid_iter, net, ctx)  # mean loss on the validation set
        _, val_acc = test(valid_iter, net, ctx)  # accuracy on the validation set
        epoch_s = (
            "epoch {:d}, train loss {:.5f}, valid loss {:.5f}, train acc {:.5f}, valid acc {:.5f}, ".
            format(epoch, train_l_sum / n, valid_loss, train_acc, val_acc))
        print(epoch_s + time_s)
        train_history.update([1 - train_acc, 1 - val_acc])  # update the plotted error rates
        train_history.plot(save_path='{}/{}_history.png'.format(
            'images', model_name))  # refresh the plot
        if val_acc > best_val_score:  # save the best model so far
            best_val_score = val_acc
            net.save_parameters('{}/{:.4f}-{}-{:d}-best.params'.format(
                'models', best_val_score, model_name, epoch))
loss = gloss.SoftmaxCrossEntropyLoss()  # cross-entropy loss
ctx, num_epochs, lr, wd = d2l.try_gpu(), 100, 1e-4, 1e-5
net = SimpleNet(no4x1pooling=True)
net.initialize(init.Xavier(), ctx=ctx)  # initialize the model on ctx
net.hybridize()  # switch to symbolic programming
# train(net, train_iter, valid_iter, num_epochs, lr, wd, ctx, 'dog_cat')
for xs, ys in train_iter:
    xs = xs.as_in_context(ctx)
    ys = ys.as_in_context(ctx).astype('float32')  # the model outputs float32 data
    break
xs.shape
net(xs).shape
```
The model above is adapted from the VGG architecture; to speed up training it also uses `nn.BatchNorm`, i.e. **batch normalization**.
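Batch normalization itself is simple to state: each feature is standardized over the mini-batch, then scaled and shifted by learnable parameters. A NumPy-only sketch of the training-time computation (a simplification of `nn.BatchNorm`, with `gamma`/`beta` fixed as scalars for illustration):

```python
import numpy as np

def batch_norm_train(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of activations (axis 0 = batch), then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.RandomState(0).randn(64, 8) * 3.0 + 5.0
y = batch_norm_train(x)
print(y.mean(axis=0).round(6))  # approximately 0 per feature
print(y.std(axis=0).round(3))   # approximately 1 per feature
```

Keeping activations standardized in this way stabilizes the scale of gradients across layers, which is why it speeds up training.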
## Prediction
Reload the trained network model:
```
ctx = d2l.try_gpu()
net = SimpleNet(no4x1pooling=True)  # rebuild the network on ctx
net.load_parameters(filename='models/0.9112-dog_cat-99-best.params', ctx=ctx)
net.hybridize()  # switch to symbolic programming


class Testset(ImageZ):
    def __init__(self, root, dataType, ctx):
        super().__init__(root, dataType)
        self.ctx = ctx

    def name2label(self, name):
        img = self.buffer2array(name)
        X = transform_test(nd.array(img)).expand_dims(
            axis=0).as_in_context(self.ctx)
        return int(net(X).argmax(axis=1).asscalar())

    def __getitem__(self, item):
        names = self.names[item]
        if isinstance(item, slice):
            return [(name, self.name2label(name)) for name in names]
        else:
            return names, self.name2label(names)

    def __iter__(self):
        for name in self.names:
            yield name, self.name2label(name)  # yield (name, label)


import pandas as pd

_testset = Testset(dataDir, 'test1', ctx)
df = pd.DataFrame.from_records(
    (name.split('/')[-1].split('.')[0], label) for name, label in _testset)
df.columns = ['id', 'label']
df.to_csv('data/dog_cat/results.csv', index=False)  # save the predictions locally
```
## List
- append()
- clear()
- copy() --> makes a shallow copy (a new list object)
- count()
- extend() --> in-place operation
- list1 + list2 --> returns a new list
- list1.__add__(list2)
- index()
- insert
- pop
- remove
- reverse
- sort
```
print(len(dir(list)))
#dir(list) #uncomment to see all available methods
#help(list) #uncomment to observe all list methods
name = 'Qasim'
name
name1 = 'Ali'
name2 = 'Raza'
name3 = 'Qasim'
print(name1,name2,name3)
# 0 1 2
names = ['Ali', 'Raza', 'Qasim']
# -3 -2 -1
print(names[1])
print(names[-1])
print(type(name1))
print(type(names))
l1 = []
l1.append('A')
l1.append('B')
l1.append('C')
print(l1)
print(l1)
l1.clear()
print(l1)
l1 = ['A','B','C']
print(l1)
del l1
print(l1)  # raises NameError: l1 was deleted by the del statement
l1 = ["Qasim",'Hassan','Ali']
l2 = l1 # shalloow Copy address same
print(id(l1))
print(id(l2))
l3 = l1.copy() # deep copy address change
print(id(l1))
print(id(l3))
l3[0] = 'Pakistan'
print(l3)
print(l1)
l1 = ['A','B','C','D','A','A']
l1.count('A')
l1 = ['A','B','C']
l2 = ['X','Y','Z']
print(l1 + l2)  # not in-place: returns a new list, l1 and l2 are unchanged
print(l1,l2)
l1 = ['a','b','c']
l2 = ['x','y','z']
print(l1,l2)
print(l1.__add__(l2))  # same as l1 + l2: returns a new list
print(l1,l2)
l1 = ['a','b','c']
l2 = ['x','y','z']
l1.extend(l2)
print(l1)
print(l2)
l1 = ['a','b','c']
print(l1.index('c'))
l1 = ['A','B', 'C', 'D','E','B','C']
l1.index('C',3)
l1 = []
l1.insert(0,'A') # ['A']
l1.insert(0,'B') # ['B','A']
l1.insert(0,'C') # ['C','B','A']
print(l1)
l1.insert(1,'Pakistan')
print(l1)
l1 = ['A','B','C']
del l1[-1] # ['A','B']
del l1[-1] # ['A']
l1
l2 = []
l1 = ['A','B','C','D']
a = l1.pop() # D
l2.append(a) # ['D']
a = l1.pop() # C
l2.append(a) # ['D','C']
a = l1.pop() # B
l2.append(a) #['D','C','B']
print(l1)
print(l2)
l2 = []
l1 = ['A','B','C','D']
a = l1.pop(1) # B
l2.append(a) # ['B']
a = l1.pop(1) # C
l2.append(a) # ['B','C']
a = l1.pop(1) # B
l2.append(a) #['B','C','D']
print(l1)
print(l2)
l1 = ['A','B','B']
l1.remove('B')
l1
l1 = ['A','B','C','D']
l1.reverse()
l1
l1 = ['X','Y','Z','A','B']
l1.sort()
l1
l1 = ['X','Y','Z','A','B']
l1.sort(reverse=True)
l1
```
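One caveat about `copy()` worth demonstrating: it is a *shallow* copy, so nested lists are still shared between the two objects. A short illustration (not from the original notebook):

```python
l1 = ['A', ['nested']]
l2 = l1.copy()          # new outer list, but the inner list is shared
l2[0] = 'Pakistan'      # rebinding a top-level slot does not affect l1
l2[1].append('!')       # mutating the shared inner list DOES affect l1
print(l1)  # ['A', ['nested', '!']]
print(l2)  # ['Pakistan', ['nested', '!']]

import copy
l3 = copy.deepcopy(l1)  # a true deep copy duplicates nested objects too
l3[1].append('?')
print(l1[1])  # ['nested', '!'] -- unchanged
```

For flat lists of immutable items, `copy()` behaves like a full copy; the difference only shows up with nested mutable elements.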
### Magic or Dunder Methods
```
print(l1)
l1.__class__
l1 = ['X','Y','Z','A','B']
print(l1)
l1.__contains__('A')
l1.__doc__
print(l1)
l1.__delitem__(1)
l1
[1,2,3].__eq__([3,2,1])  # equality test --> False
[1,2,3].__le__([3,2,1])  # less than or equal (lexicographic) --> True
[1,2,3].__sizeof__()
[1,2,3].__sizeof__
```
## Tuples
```
l1 = ['A','B','C']
print(l1)
l1[0] = 'Pakistan'
print(l1)
l1 = ('A','B','C')
print(l1)
l1[0] = 'Pakistan'  # raises TypeError: tuples do not support item assignment
print(l1)
l1 = ('A',5,'B','A','A','A')
l1
l1 = ('A',5,'B','A','A','A')
l1.count('A')
l1 = ('A',5,'B','A','A','A')
l1.index('A',1)
help(tuple)
```
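Because tuples are immutable (and hashable when their elements are), they can be used as dictionary keys where lists cannot; a quick illustration to round off the section:

```python
locations = {('A', 1): 'first', ('B', 2): 'second'}  # tuple keys are fine
print(locations[('A', 1)])  # first

try:
    {['A', 1]: 'oops'}       # a list key raises TypeError: unhashable type
except TypeError as e:
    print('TypeError:', e)
```

This immutability is the practical reason to reach for a tuple: it guarantees the key (or record) cannot change after creation.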
## Dependencies
```
import os
import sys
import cv2
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import multiprocessing as mp
import matplotlib.pyplot as plt
from tensorflow import set_random_seed
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
set_random_seed(0)
seed = 0
seed_everything(seed)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
sys.path.append(os.path.abspath('../input/efficientnet/efficientnet-master/efficientnet-master/'))
from efficientnet import *
```
## Load data
```
hold_out_set = pd.read_csv('../input/aptos-split-oldnew/hold-out_5.csv')
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
test["id_code"] = test["id_code"].apply(lambda x: x + ".png")
print('Number of train samples: ', X_train.shape[0])
print('Number of validation samples: ', X_val.shape[0])
print('Number of test samples: ', test.shape[0])
display(X_train.head())
```
# Model parameters
```
# Model parameters
FACTOR = 4
BATCH_SIZE = 8 * FACTOR
EPOCHS = 20
WARMUP_EPOCHS = 5
LEARNING_RATE = 1e-4 * FACTOR
WARMUP_LEARNING_RATE = 1e-3 * FACTOR
HEIGHT = 224
WIDTH = 224
CHANNELS = 3
TTA_STEPS = 5
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
LR_WARMUP_EPOCHS_1st = 2
LR_WARMUP_EPOCHS_2nd = 5
STEP_SIZE = len(X_train) // BATCH_SIZE
TOTAL_STEPS_1st = WARMUP_EPOCHS * STEP_SIZE
TOTAL_STEPS_2nd = EPOCHS * STEP_SIZE
WARMUP_STEPS_1st = LR_WARMUP_EPOCHS_1st * STEP_SIZE
WARMUP_STEPS_2nd = LR_WARMUP_EPOCHS_2nd * STEP_SIZE
```
# Pre-process images
```
old_data_base_path = '../input/diabetic-retinopathy-resized/resized_train/resized_train/'
new_data_base_path = '../input/aptos2019-blindness-detection/train_images/'
test_base_path = '../input/aptos2019-blindness-detection/test_images/'
train_dest_path = 'base_dir/train_images/'
validation_dest_path = 'base_dir/validation_images/'
test_dest_path = 'base_dir/test_images/'
# Making sure directories don't exist
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
# Creating train, validation and test directories
os.makedirs(train_dest_path)
os.makedirs(validation_dest_path)
os.makedirs(test_dest_path)
def crop_image(img, tol=7):
if img.ndim ==2:
mask = img>tol
return img[np.ix_(mask.any(1),mask.any(0))]
elif img.ndim==3:
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = gray_img>tol
check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0]
if (check_shape == 0): # image is too dark so that we crop out everything,
return img # return original image
else:
img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))]
img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))]
img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))]
img = np.stack([img1,img2,img3],axis=-1)
return img
def circle_crop(img):
img = crop_image(img)
height, width, depth = img.shape
largest_side = np.max((height, width))
img = cv2.resize(img, (largest_side, largest_side))
height, width, depth = img.shape
x = width//2
y = height//2
r = np.amin((x, y))
circle_img = np.zeros((height, width), np.uint8)
cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1)
img = cv2.bitwise_and(img, img, mask=circle_img)
img = crop_image(img)
return img
def preprocess_image(image_id, base_path, save_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
image = cv2.imread(base_path + image_id)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = crop_image(image)
image = cv2.resize(image, (HEIGHT, WIDTH))
# image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128)
cv2.imwrite(save_path + image_id, image)
def preprocess_data(df, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
item = df.iloc[i]
image_id = item['id_code']
item_set = item['set']
item_data = item['data']
if item_set == 'train':
if item_data == 'new':
preprocess_image(image_id, new_data_base_path, train_dest_path)
if item_data == 'old':
preprocess_image(image_id, old_data_base_path, train_dest_path)
if item_set == 'validation':
if item_data == 'new':
preprocess_image(image_id, new_data_base_path, validation_dest_path)
if item_data == 'old':
preprocess_image(image_id, old_data_base_path, validation_dest_path)
def preprocess_test(df, base_path=test_base_path, save_path=test_dest_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
image_id = df.iloc[i]['id_code']
preprocess_image(image_id, base_path, save_path)
n_cpu = mp.cpu_count()
train_n_cnt = X_train.shape[0] // n_cpu
val_n_cnt = X_val.shape[0] // n_cpu
test_n_cnt = test.shape[0] // n_cpu
# Pre-process the train set
pool = mp.Pool(n_cpu)
dfs = [X_train.iloc[train_n_cnt*i:train_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_train.iloc[train_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process the validation set
pool = mp.Pool(n_cpu)
dfs = [X_val.iloc[val_n_cnt*i:val_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_val.iloc[val_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process the test set
pool = mp.Pool(n_cpu)
dfs = [test.iloc[test_n_cnt*i:test_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = test.iloc[test_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_test, [x_df for x_df in dfs])
pool.close()
```
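The `circle_crop` helper above relies on `cv2.circle` to rasterize the retinal mask; for intuition, the same binary disk can be built with NumPy alone (a sketch under our own function name, not the kernel's actual code):

```python
import numpy as np

def disk_mask(height, width):
    """Binary mask that is 1 inside the largest centered disk, 0 outside."""
    y, x = np.ogrid[:height, :width]
    cy, cx = height // 2, width // 2
    r = min(cy, cx)
    return ((y - cy) ** 2 + (x - cx) ** 2 <= r ** 2).astype(np.uint8)

mask = disk_mask(9, 9)
print(mask[4, 4])  # 1: the center lies inside the disk
print(mask[0, 0])  # 0: the corner lies outside
```

Multiplying an image by such a mask zeroes out the corners, which is exactly what the `cv2.bitwise_and` call achieves.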
# Data generator
```
datagen=ImageDataGenerator(rescale=1./255,
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
valid_generator=datagen.flow_from_dataframe(
dataframe=X_val,
directory=validation_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
test_generator=datagen.flow_from_dataframe(
dataframe=test,
directory=test_dest_path,
x_col="id_code",
batch_size=1,
class_mode=None,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
def cosine_decay_with_warmup(global_step,
learning_rate_base,
total_steps,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0):
"""
Cosine decay schedule with warm up period.
In this schedule, the learning rate grows linearly from warmup_learning_rate
to learning_rate_base for warmup_steps, then transitions to a cosine decay
schedule.
:param global_step {int}: global step.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
:Returns : a float representing learning rate.
:Raises ValueError: if warmup_learning_rate is larger than learning_rate_base, or if warmup_steps is larger than total_steps.
"""
if total_steps < warmup_steps:
raise ValueError('total_steps must be larger or equal to warmup_steps.')
learning_rate = 0.5 * learning_rate_base * (1 + np.cos(
np.pi *
(global_step - warmup_steps - hold_base_rate_steps
) / float(total_steps - warmup_steps - hold_base_rate_steps)))
if hold_base_rate_steps > 0:
learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps,
learning_rate, learning_rate_base)
if warmup_steps > 0:
if learning_rate_base < warmup_learning_rate:
raise ValueError('learning_rate_base must be larger or equal to warmup_learning_rate.')
slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
warmup_rate = slope * global_step + warmup_learning_rate
learning_rate = np.where(global_step < warmup_steps, warmup_rate,
learning_rate)
return np.where(global_step > total_steps, 0.0, learning_rate)
class WarmUpCosineDecayScheduler(Callback):
"""Cosine decay with warmup learning rate scheduler"""
def __init__(self,
learning_rate_base,
total_steps,
global_step_init=0,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0,
verbose=0):
"""
Constructor for cosine decay with warmup learning rate scheduler.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param global_step_init {int}: initial global step, e.g. from previous checkpoint.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
:param verbose {int}: 0: quiet, 1: update messages. (default: {0}).
"""
super(WarmUpCosineDecayScheduler, self).__init__()
self.learning_rate_base = learning_rate_base
self.total_steps = total_steps
self.global_step = global_step_init
self.warmup_learning_rate = warmup_learning_rate
self.warmup_steps = warmup_steps
self.hold_base_rate_steps = hold_base_rate_steps
self.verbose = verbose
self.learning_rates = []
def on_batch_end(self, batch, logs=None):
self.global_step = self.global_step + 1
lr = K.get_value(self.model.optimizer.lr)
self.learning_rates.append(lr)
def on_batch_begin(self, batch, logs=None):
lr = cosine_decay_with_warmup(global_step=self.global_step,
learning_rate_base=self.learning_rate_base,
total_steps=self.total_steps,
warmup_learning_rate=self.warmup_learning_rate,
warmup_steps=self.warmup_steps,
hold_base_rate_steps=self.hold_base_rate_steps)
K.set_value(self.model.optimizer.lr, lr)
if self.verbose > 0:
print('\nBatch %02d: setting learning rate to %s.' % (self.global_step + 1, lr))
```
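To see the shape of this schedule without Keras, here is a self-contained re-statement of the warmup-then-cosine-decay formula in plain Python/NumPy (our own simplified helper, without the `hold_base_rate_steps` option):

```python
import numpy as np

def lr_at(step, base_lr, total_steps, warmup_lr=0.0, warmup_steps=0):
    """Linear warmup from warmup_lr to base_lr, then cosine decay to 0."""
    if step < warmup_steps:
        slope = (base_lr - warmup_lr) / warmup_steps
        return slope * step + warmup_lr
    progress = (step - warmup_steps) / float(total_steps - warmup_steps)
    return 0.5 * base_lr * (1 + np.cos(np.pi * progress))

base, total, warm = 1e-3, 100, 10
print(lr_at(0, base, total, warmup_steps=warm))    # warmup starts at 0
print(lr_at(10, base, total, warmup_steps=warm))   # warmup peak: base_lr
print(lr_at(100, base, total, warmup_steps=warm))  # ~0: fully decayed
```

The learning rate climbs linearly during warmup, peaks at `base_lr`, then follows half a cosine wave down to zero, which matches the two plotted schedules later in the kernel.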
# Model
```
def create_model(input_shape):
input_tensor = Input(shape=input_shape)
base_model = EfficientNetB5(weights=None,
include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/efficientnet-keras-weights-b0b5/efficientnet-b5_imagenet_1000_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
final_output = Dense(1, activation='linear', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
```
# Train top layers
```
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS))
for layer in model.layers:
layer.trainable = False
for i in range(-2, 0):
model.layers[i].trainable = True
cosine_lr_1st = WarmUpCosineDecayScheduler(learning_rate_base=WARMUP_LEARNING_RATE,
total_steps=TOTAL_STEPS_1st,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS_1st,
hold_base_rate_steps=(2 * STEP_SIZE))
metric_list = ["accuracy"]
callback_list = [cosine_lr_1st]
optimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
callbacks=callback_list,
verbose=2).history
```
# Fine-tune the complete model
```
for layer in model.layers:
layer.trainable = True
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
cosine_lr_2nd = WarmUpCosineDecayScheduler(learning_rate_base=LEARNING_RATE,
total_steps=TOTAL_STEPS_2nd,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS_2nd,
hold_base_rate_steps=(3 * STEP_SIZE))
callback_list = [es, cosine_lr_2nd]
optimizer = optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=2).history
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 6))
ax1.plot(cosine_lr_1st.learning_rates)
ax1.set_title('Warm up learning rates')
ax2.plot(cosine_lr_2nd.learning_rates)
ax2.set_title('Fine-tune learning rates')
plt.xlabel('Steps')
plt.ylabel('Learning rate')
sns.despine()
plt.show()
```
# Model loss graph
```
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 14))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
# Create an empty dataframe to keep the predictions and labels
df_preds = pd.DataFrame(columns=['label', 'pred', 'set'])
train_generator.reset()
valid_generator.reset()
# Add train predictions and labels
for i in range(STEP_SIZE_TRAIN + 1):
im, lbl = next(train_generator)
preds = model.predict(im, batch_size=train_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'train']
# Add validation predictions and labels
for i in range(STEP_SIZE_VALID + 1):
im, lbl = next(valid_generator)
preds = model.predict(im, batch_size=valid_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'validation']
df_preds['label'] = df_preds['label'].astype('int')
def classify(x):
if x < 0.5:
return 0
elif x < 1.5:
return 1
elif x < 2.5:
return 2
elif x < 3.5:
return 3
return 4
# Classify predictions
df_preds['predictions'] = df_preds['pred'].apply(lambda x: classify(x))
train_preds = df_preds[df_preds['set'] == 'train']
validation_preds = df_preds[df_preds['set'] == 'validation']
```
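The `classify` thresholds above amount to rounding the regression output to the nearest class and clipping to [0, 4]. An equivalent vectorized form is sketched below (our reading of the intent, not code from the kernel; note that `np.round` uses banker's rounding, so it differs from the if-chain exactly at half-integer inputs such as 2.5):

```python
import numpy as np

def classify_vec(preds):
    """Round regression outputs to the nearest class in {0, 1, 2, 3, 4}."""
    return np.clip(np.round(preds), 0, 4).astype(int)

print(classify_vec(np.array([-0.3, 0.4, 1.2, 2.6, 3.9, 7.0])))
# [0 0 1 3 4 4]
```

Vectorizing the thresholding like this is handy when post-processing whole prediction arrays at once.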
# Model Evaluation
## Confusion Matrix
### Original thresholds
```
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
def plot_confusion_matrix(train, validation, labels=labels):
train_labels, train_preds = train
validation_labels, validation_preds = validation
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation')
plt.show()
plot_confusion_matrix((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
## Quadratic Weighted Kappa
```
def evaluate_model(train, validation):
train_labels, train_preds = train
validation_labels, validation_preds = validation
print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))
print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))
print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(np.append(train_preds, validation_preds), np.append(train_labels, validation_labels), weights='quadratic'))
evaluate_model((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
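For reference, quadratic weighted kappa can also be computed from scratch with NumPy. This is a sketch of the standard definition (observed vs. expected weighted disagreement), not the sklearn internals:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """1 - (weighted observed disagreement) / (weighted expected disagreement)."""
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1                     # observed confusion matrix
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2  # quadratic weights
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()  # expected under independence
    return 1.0 - (W * O).sum() / (W * E).sum()

print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]))  # 1.0: perfect agreement
```

Because the weights grow quadratically with the distance between classes, predicting grade 4 for a grade-0 eye is penalized far more than an off-by-one error, which is why this metric suits ordinal labels like DR severity.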
## Apply model to test set and output predictions
```
def apply_tta(model, generator, steps=10):
step_size = generator.n//generator.batch_size
preds_tta = []
for i in range(steps):
generator.reset()
preds = model.predict_generator(generator, steps=step_size)
preds_tta.append(preds)
return np.mean(preds_tta, axis=0)
preds = apply_tta(model, test_generator, TTA_STEPS)
predictions = [classify(x) for x in preds]
results = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions})
results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])
# Cleaning created directories
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
```
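Stripped of Keras, the essence of `apply_tta` is just averaging predictions over several stochastic augmented passes. A minimal sketch with a hypothetical noisy predictor standing in for `model.predict_generator`:

```python
import numpy as np

rng = np.random.RandomState(0)

def noisy_predict(x):
    """Stand-in for a model evaluated under random augmentation."""
    return x + rng.normal(scale=0.1, size=x.shape)

def apply_tta_np(x, steps=5):
    preds_tta = [noisy_predict(x) for _ in range(steps)]
    return np.mean(preds_tta, axis=0)  # average over augmented passes

x = np.array([1.0, 2.0, 3.0])
avg = apply_tta_np(x, steps=50)
print(np.round(avg, 3))  # close to x: averaging shrinks the augmentation noise
```

Averaging N independent noisy passes reduces the noise standard deviation by roughly a factor of sqrt(N), which is the whole point of test-time augmentation.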
# Predictions class distribution
```
fig, ax = plt.subplots(figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d").set_title('Test')
sns.despine()
plt.show()
results.to_csv('submission.csv', index=False)
display(results.head())
```
## Save model
```
model.save_weights('../working/effNetB5_img224.h5')
```
```
# Read Instructions carefully before attempting this assignment
# 1) don't rename any function name
# 2) don't rename any variable name
# 3) don't remove any #comment
# 4) don't remove the """ triple-quoted values """
# 5) you have to write code where you found "write your code here"
# 6) after download rename this file with this format "PIAICCompletRollNumber_AssignmentNo.py"
# Example piaic17896_Assignment1.py
# 7) After complete this assignment please push on your own GitHub repository.
# 8) you can submit this assignment through the google form
# 9) copy this file absolute URL then paste in the google form
# The example above: https://github.com/EnggQasim/Batch04_to_35/blob/main/Sunday/1_30%20to%203_30/Assignments/assignment1.txt
# * Because all assignment we will be checked through software if you missed any above points
# * then we can't assign your scores in our database.
import numpy as np
# Task no 1
def function1():
    # create a 2d array from the range 1 to 12
    # its shape should be 6 rows and 2 columns
    # assign the array to the variable x
    # Hint: you can use the arange and reshape numpy methods
x = np.arange(1,13).reshape((6,2))
return x
"""
expected output:
[[ 1 2]
[ 3 4]
[ 5 6]
[ 7 8]
[ 9 10]
[11 12]]
"""
function1()
# Task2
def function2():
    # create a 3D array of shape (3,3,3)
    # its data type must be float64
    # the array values should start at 10 and end at 36 (both included)
    # Hint: dtype, reshape
    x = np.arange(10, 37, dtype=np.float64).reshape((3, 3, 3))  # write your code here
return x
"""
Expected: out put
array([[[10., 11., 12.],
[13., 14., 15.],
[16., 17., 18.]],
[[19., 20., 21.],
[22., 23., 24.],
[25., 26., 27.]],
[[28., 29., 30.],
[31., 32., 33.],
[34., 35., 36.]]])
"""
function2()
#task3
def function3():
    # extract from the given array the numbers that appear in both the 5 and 7 times tables
    # example: [35, 70, 105, ...]
    a = np.arange(1, 100*10+1).reshape((100, 10))
    x = a[(a % 5 == 0) & (a % 7 == 0)]  # write your code here
return x
"""
Expected Output:
[35, 70, 105, 140, 175, 210, 245, 280, 315, 350, 385, 420, 455,
490, 525, 560, 595, 630, 665, 700, 735, 770, 805, 840, 875, 910,
945, 980]
"""
function3()
#task4
def function4():
#Swap columns 1 and 2 in the array arr.
arr = np.arange(9).reshape(3,3)
return arr[:, [1, 0, 2]]
"""
Expected Output:
array([[1, 0, 2],
[4, 3, 5],
[7, 6, 8]])
"""
function4()
#task5
def function5():
#Create a null vector of size 20 with 4 rows and 5 columns using a numpy function
z = np.zeros((4, 5), dtype=int)
return z
"""
Expected Output:
array([[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]])
"""
function5()
#task6
def function6():
# Create a null vector of size 10, but set the fifth and eighth values to 10 and 20 respectively
arr = np.zeros(10)
arr[4] = 10
arr[7] = 20
return arr
function6()
#task7
def function7():
# Create an array of zeros with the same shape and type as x. Don't use the reshape method
x = np.arange(4, dtype=np.int64)
return np.zeros_like(x)
"""
Expected Output:
array([0, 0, 0, 0], dtype=int64)
"""
function7()
#task8
def function8():
# Create a new array of 2x5 uints, filled with 6.
x = np.full((2,5), 6, dtype=np.uint32)
return x
"""
Expected Output:
array([[6, 6, 6, 6, 6],
[6, 6, 6, 6, 6]], dtype=uint32)
"""
function8()
#task9
def function9():
# Create an array of 2, 4, 6, 8, ..., 100.
a = np.arange(2,101,2)# write your code here
return a
"""
Expected Output:
array([ 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26,
28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52,
54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78,
80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100])
"""
function9()
#task10
def function10():
# Subtract the 1d array brr from the 2d array arr, so that each item of brr subtracts from the respective row of arr.
arr = np.array([[3,3,3],[4,4,4],[5,5,5]])
brr = np.array([1,2,3])
subt = arr - brr[:, None]
return subt
"""
Expected Output:
array([[2 2 2]
[2 2 2]
[2 2 2]])
"""
function10()
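# Standalone check of the broadcasting trick used for task 10 (a small demo,
# not part of the assignment): reshaping brr into a (3, 1) column makes NumPy
# subtract one brr value per row of arr.
demo_arr = np.array([[3, 3, 3], [4, 4, 4], [5, 5, 5]])
demo_brr = np.array([1, 2, 3])
demo_result = demo_arr - demo_brr[:, None]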
#task11
def function11():
# Replace all odd numbers in arr with -1 without changing arr.
arr = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
ans = np.where(arr % 2 == 1, -1, arr)
return ans
"""
Expected Output:
array([ 0, -1, 2, -1, 4, -1, 6, -1, 8, -1])
"""
#task12
def function12():
# Create the following pattern without hardcoding. Use only numpy functions and the below input array arr.
# HINT: use stacking concept
arr = np.array([1,2,3])
ans = np.concatenate([np.repeat(arr, 3), np.tile(arr, 3)])
return ans
"""
Expected Output:
array([1, 1, 1, 2, 2, 2, 3, 3, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3])
"""
#task13
def function13():
# Set a condition which gets all items between 5 and 10 from arr.
arr = np.array([2, 6, 1, 9, 10, 3, 27])
ans = arr[(arr > 5) & (arr < 10)]
return ans
"""
Expected Output:
array([6, 9])
"""
#task14
def function14():
# Create an 8x3 integer array from the range 10 to 34 with a step of 1, then split the array into four equal-sized sub-arrays.
# Hint: use the split method
arr = np.arange(10, 34, 1).reshape(8, 3)
ans = np.split(arr, 4)
return ans
"""
Expected Output:
[array([[10, 11, 12],[13, 14, 15]]),
array([[16, 17, 18],[19, 20, 21]]),
array([[22, 23, 24],[25, 26, 27]]),
array([[28, 29, 30],[31, 32, 33]])]
"""
#task15
def function15():
#Sort the following NumPy array by the second column
arr = np.array([[ 8, 2, -2],[-4, 1, 7],[ 6, 3, 9]])
ans = arr[arr[:, 1].argsort()]
return ans
"""
Expected Output:
array([[-4, 1, 7],
[ 8, 2, -2],
[ 6, 3, 9]])
"""
#task16
def function16():
#Write a NumPy program to join a sequence of arrays along depth.
x = np.array([[1], [2], [3]])
y = np.array([[2], [3], [4]])
ans = np.dstack((x, y))
return ans
"""
Expected Output:
[[[1 2]]
[[2 3]]
[[3 4]]]
"""
#Task17
def function17():
# replace a number with "YES" if it is divisible by both 3 and 5,
# otherwise replace it with "NO"
# Hint: np.where
arr = np.arange(1,10*10+1).reshape((10,10))
return np.where((arr % 3 == 0) & (arr % 5 == 0), "YES", "NO")
#Expected output
"""
array([['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO'],
['NO', 'NO', 'NO', 'NO', 'YES', 'NO', 'NO', 'NO', 'NO', 'NO'],
['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'YES'],
['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO'],
['NO', 'NO', 'NO', 'NO', 'YES', 'NO', 'NO', 'NO', 'NO', 'NO'],
['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'YES'],
['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO'],
['NO', 'NO', 'NO', 'NO', 'YES', 'NO', 'NO', 'NO', 'NO', 'NO'],
['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'YES'],
['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO']],
dtype='<U3')
"""
#Task18
def function18():
# count how many values of "students" exist in "piaic"
piaic = np.arange(100)
students = np.array([5,20,50,200,301,7001])
x = np.isin(students, piaic).sum()
return x
#Expected output: 3
# Task19
def function19():
#Create variable "X" from the range 1 to 25 (both included)
#Reshape "X" into 5 rows and 5 columns
#Create one more variable "W" as a copy of "X"
#Swap the row and column axes of "W" (transpose)
#then create variable "b" with value equal to 5
#Now return the output as (X*W)+b
X = np.arange(1, 26).reshape(5, 5)
W = X.copy().T
b = 5
output = (X * W) + b
return output
#expected output
"""
array([[ 6, 17, 38, 69, 110],
[ 17, 54, 101, 158, 225],
[ 38, 101, 174, 257, 350],
[ 69, 158, 257, 366, 485],
[110, 225, 350, 485, 630]])
"""
#Task20
def function20():
#apply the function "abc" to each value of array "x"
x = np.arange(1,11)
def abc(x):
return x*2+3-2
return abc(x)
#Expected Output: array([ 3, 5, 7, 9, 11, 13, 15, 17, 19, 21])
```
## All Questions
```
# Task no 1
def function1():
# create a 2D array from the range 1 to 12
# dimensions should be 6 rows and 2 columns
# and assign the array to the variable x
# Hint: you can use the arange and reshape numpy methods
x = np.arange(1,13).reshape((6,2))
return x
"""
expected output:
[[ 1 2]
[ 3 4]
[ 5 6]
[ 7 8]
[ 9 10]
[11 12]]
"""
# Task2
def function2():
#create a 3D array (3,3,3)
#the data type must be float64
#array values should start at 10 and end at 36 (both included)
# Hint: dtype, reshape
x = #write your code here
return x
"""
Expected output:
array([[[10., 11., 12.],
[13., 14., 15.],
[16., 17., 18.]],
[[19., 20., 21.],
[22., 23., 24.],
[25., 26., 27.]],
[[28., 29., 30.],
[31., 32., 33.],
[34., 35., 36.]]])
"""
#task3
def function3():
#extract the numbers from the given array that appear in both the 5 and 7 times tables
#example [35,70,105,..]
a = np.arange(1, 100*10+1).reshape((100,10))
x = a[] #write your code here
return x
"""
Expected Output:
[35, 70, 105, 140, 175, 210, 245, 280, 315, 350, 385, 420, 455,
490, 525, 560, 595, 630, 665, 700, 735, 770, 805, 840, 875, 910,
945, 980]
"""
#task4
def function4():
#Swap columns 1 and 2 in the array arr.
arr = np.arange(9).reshape(3,3)
return #write your code here
"""
Expected Output:
array([[1, 0, 2],
[4, 3, 5],
[7, 6, 8]])
"""
#task5
def function5():
#Create a null vector of size 20 with 4 rows and 5 columns using a numpy function
z = #write your code here
return z
"""
Expected Output:
array([[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]])
"""
#task6
def function6():
# Create a null vector of size 10, but set the fifth and eighth values to 10 and 20 respectively
arr = #wrtie your code here
return arr
#task7
def function7():
# Create an array of zeros with the same shape and type as x. Don't use the reshape method
x = np.arange(4, dtype=np.int64)
return #write your code here
"""
Expected Output:
array([0, 0, 0, 0], dtype=int64)
"""
#task8
def function8():
# Create a new array of 2x5 uints, filled with 6.
x = #write your code here
return x
"""
Expected Output:
array([[6, 6, 6, 6, 6],
[6, 6, 6, 6, 6]], dtype=uint32)
"""
#task9
def function9():
# Create an array of 2, 4, 6, 8, ..., 100.
a = # write your code here
return a
"""
Expected Output:
array([ 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26,
28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52,
54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78,
80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100])
"""
#task10
def function10():
# Subtract the 1d array brr from the 2d array arr, so that each item of brr subtracts from the respective row of arr.
arr = np.array([[3,3,3],[4,4,4],[5,5,5]])
brr = np.array([1,2,3])
subt = # write your code here
return subt
"""
Expected Output:
array([[2 2 2]
[2 2 2]
[2 2 2]])
"""
#task11
def function11():
# Replace all odd numbers in arr with -1 without changing arr.
arr = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
ans = #write your code here
return ans
"""
Expected Output:
array([ 0, -1, 2, -1, 4, -1, 6, -1, 8, -1])
"""
#task12
def function12():
# Create the following pattern without hardcoding. Use only numpy functions and the below input array arr.
# HINT: use stacking concept
arr = np.array([1,2,3])
ans = #write your code here
return ans
"""
Expected Output:
array([1, 1, 1, 2, 2, 2, 3, 3, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3])
"""
#task13
def function13():
# Set a condition which gets all items between 5 and 10 from arr.
arr = np.array([2, 6, 1, 9, 10, 3, 27])
ans = #write your code here
return ans
"""
Expected Output:
array([6, 9])
"""
#task14
def function14():
# Create an 8x3 integer array from the range 10 to 34 with a step of 1, then split the array into four equal-sized sub-arrays.
# Hint: use the split method
arr = np.arange(10, 34, 1) #write reshape code
ans = #write your code here
return ans
"""
Expected Output:
[array([[10, 11, 12],[13, 14, 15]]),
array([[16, 17, 18],[19, 20, 21]]),
array([[22, 23, 24],[25, 26, 27]]),
array([[28, 29, 30],[31, 32, 33]])]
"""
#task15
def function15():
#Sort the following NumPy array by the second column
arr = np.array([[ 8, 2, -2],[-4, 1, 7],[ 6, 3, 9]])
ans = #write your code here
return ans
"""
Expected Output:
array([[-4, 1, 7],
[ 8, 2, -2],
[ 6, 3, 9]])
"""
#task16
def function16():
#Write a NumPy program to join a sequence of arrays along depth.
x = np.array([[1], [2], [3]])
y = np.array([[2], [3], [4]])
ans = #write your code here
return ans
"""
Expected Output:
[[[1 2]]
[[2 3]]
[[3 4]]]
"""
#Task17
def function17():
# replace a number with "YES" if it is divisible by both 3 and 5,
# otherwise replace it with "NO"
# Hint: np.where
arr = np.arange(1,10*10+1).reshape((10,10))
return # Write Your Code HERE
#Expected output
"""
array([['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO'],
['NO', 'NO', 'NO', 'NO', 'YES', 'NO', 'NO', 'NO', 'NO', 'NO'],
['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'YES'],
['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO'],
['NO', 'NO', 'NO', 'NO', 'YES', 'NO', 'NO', 'NO', 'NO', 'NO'],
['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'YES'],
['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO'],
['NO', 'NO', 'NO', 'NO', 'YES', 'NO', 'NO', 'NO', 'NO', 'NO'],
['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'YES'],
['NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO']],
dtype='<U3')
"""
#Task18
def function18():
# count how many values of "students" exist in "piaic"
piaic = np.arange(100)
students = np.array([5,20,50,200,301,7001])
x = # Write your code here
return x
#Expected output: 3
# Task19
def function19():
#Create variable "X" from the range 1 to 25 (both included)
#Reshape "X" into 5 rows and 5 columns
#Create one more variable "W" as a copy of "X"
#Swap the row and column axes of "W" (transpose)
#then create variable "b" with value equal to 5
#Now return the output as (X*W)+b
X = # Write your code here
W = # Write your code here
b = # Write your code here
output = # Write your code here
#expected output
"""
array([[ 6, 17, 38, 69, 110],
[ 17, 54, 101, 158, 225],
[ 38, 101, 174, 257, 350],
[ 69, 158, 257, 366, 485],
[110, 225, 350, 485, 630]])
"""
#Task20
def function20():
#apply the function "abc" to each value of array "x"
x = np.arange(1,11)
def abc(x):
return x*2+3-2
return #Write your Code here
#Expected Output: array([ 3, 5, 7, 9, 11, 13, 15, 17, 19, 21])
#--------------------------X-----------------------------X-----------------------------X----------------------------X---------------------
```
# Plagiarism Detection Model
Now that you've created training and test data, you are ready to define and train a model. Your goal in this notebook will be to train a binary classification model that learns to label an answer file as either plagiarized or not, based on the features you provide the model.
This task will be broken down into a few discrete steps:
* Upload your data to S3.
* Define a binary classification model and a training script.
* Train your model and deploy it.
* Evaluate your deployed classifier and answer some questions about your approach.
To complete this notebook, you'll have to complete all given exercises and answer all the questions in this notebook.
> All your tasks will be clearly labeled **EXERCISE** and questions as **QUESTION**.
It will be up to you to explore different classification models and decide on a model that gives you the best performance for this dataset.
---
## Load Data to S3
In the last notebook, you should have created two files: a `training.csv` and `test.csv` file with the features and class labels for the given corpus of plagiarized/non-plagiarized text data.
>The below cells load in some AWS SageMaker libraries and creates a default bucket. After creating this bucket, you can upload your locally stored data to S3.
Save your train and test `.csv` feature files locally. To do this, you can run the second notebook "2_Plagiarism_Feature_Engineering" in SageMaker or you can manually upload your files to this notebook using the upload icon in Jupyter Lab. Then you can upload local files to S3 by using `sagemaker_session.upload_data` and pointing directly to where the training data is saved.
```
import pandas as pd
import boto3
import sagemaker
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# session and role
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
# create an S3 bucket
bucket = sagemaker_session.default_bucket()
print(bucket)
```
## EXERCISE: Upload your training data to S3
Specify the `data_dir` where you've saved your `train.csv` file. Decide on a descriptive `prefix` that defines where your data will be uploaded in the default S3 bucket. Finally, create a pointer to your training data by calling `sagemaker_session.upload_data` and passing in the required parameters. It may help to look at the [Session documentation](https://sagemaker.readthedocs.io/en/stable/session.html#sagemaker.session.Session.upload_data) or previous SageMaker code examples.
You are expected to upload your entire directory. Later, the training script will only access the `train.csv` file.
```
# should be the name of directory you created to save your features data
data_dir = 'plagiarism_data'
# set prefix, a descriptive name for a directory
prefix = 'plagiarism-data'
# upload all data to S3
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
print(input_data)
```
### Test cell
Test that your data has been successfully uploaded. The below cell prints out the items in your S3 bucket and will throw an error if it is empty. You should see the contents of your `data_dir` and perhaps some checkpoints. If you see any other files listed, then you may have some old model files that you can delete via the S3 console (though additional files shouldn't affect the performance of the model developed in this notebook).
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# confirm that data is in S3 bucket
empty_check = []
for obj in boto3.resource('s3').Bucket(bucket).objects.all():
empty_check.append(obj.key)
print(obj.key)
assert len(empty_check) !=0, 'S3 bucket is empty.'
print('Test passed!')
```
---
# Modeling
Now that you've uploaded your training data, it's time to define and train a model!
The type of model you create is up to you. For a binary classification task, you can choose to go one of three routes:
* Use a built-in classification algorithm, like LinearLearner.
* Define a custom Scikit-learn classifier; a comparison of models can be found [here](https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html).
* Define a custom PyTorch neural network classifier.
It will be up to you to test out a variety of models and choose the best one. Your project will be graded on the accuracy of your final model.
---
## EXERCISE: Complete a training script
To implement a custom classifier, you'll need to complete a `train.py` script. You've been given the folders `source_sklearn` and `source_pytorch` which hold starting code for a custom Scikit-learn model and a PyTorch model, respectively. Each directory has a `train.py` training script. To complete this project **you only need to complete one of these scripts**; the script that is responsible for training your final model.
A typical training script:
* Loads training data from a specified directory
* Parses any training & model hyperparameters (ex. nodes in a neural network, training epochs, etc.)
* Instantiates a model of your design, with any specified hyperparams
* Trains that model
* Finally, saves the model so that it can be hosted/deployed, later
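The steps above can be sketched in miniature. This is a hedged illustration, not the project's actual starter code: the `SM_*` environment variables follow SageMaker's container conventions, and `LogisticRegression` is just one reasonable choice of classifier.

```python
# Hypothetical minimal train.py sketch for the Scikit-learn route.
import argparse
import os

import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression


def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    # SageMaker injects these environment variables inside the training container
    parser.add_argument('--model-dir', type=str,
                        default=os.environ.get('SM_MODEL_DIR', '.'))
    parser.add_argument('--data-dir', type=str,
                        default=os.environ.get('SM_CHANNEL_TRAIN', '.'))
    # an example of an extra model hyperparameter
    parser.add_argument('--C', type=float, default=1.0)
    return parser.parse_args(argv)


def fit_model(train_x, train_y, C=1.0):
    model = LogisticRegression(C=C)
    model.fit(train_x, train_y)
    return model


if __name__ == '__main__' and 'SM_MODEL_DIR' in os.environ:
    # Only runs inside the container, where SageMaker sets SM_MODEL_DIR
    args = parse_args()
    # first column = label, remaining columns = features
    train_data = pd.read_csv(os.path.join(args.data_dir, 'train.csv'),
                             header=None, names=None)
    model = fit_model(train_data.iloc[:, 1:], train_data.iloc[:, 0], C=args.C)
    joblib.dump(model, os.path.join(args.model_dir, 'model.joblib'))
```

In the real script the hyperparameter names and the classifier are up to you; the saved `model.joblib` is what the provided `model_fn` would later load.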
### Defining and training a model
Much of the training script code is provided for you. Almost all of your work will be done in the `if __name__ == '__main__':` section. To complete a `train.py` file, you will:
1. Import any extra libraries you need
2. Define any additional model training hyperparameters using `parser.add_argument`
3. Define a model in the `if __name__ == '__main__':` section
4. Train the model in that same section
Below, you can use `!pygmentize` to display an existing `train.py` file. Read through the code; all of your tasks are marked with `TODO` comments.
**Note: If you choose to create a custom PyTorch model, you will be responsible for defining the model in the `model.py` file,** and a `predict.py` file is provided. If you choose to use Scikit-learn, you only need a `train.py` file; you may import a classifier from the `sklearn` library.
```
# directory can be changed to: source_sklearn or source_pytorch
!pygmentize source_sklearn/train.py
```
### Provided code
If you read the code above, you can see that the starter code includes a few things:
* Model loading (`model_fn`) and saving code
* Getting SageMaker's default hyperparameters
* Loading the training data by name, `train.csv` and extracting the features and labels, `train_x`, and `train_y`
If you'd like to read more about model saving with [joblib for sklearn](https://scikit-learn.org/stable/modules/model_persistence.html) or with [torch.save](https://pytorch.org/tutorials/beginner/saving_loading_models.html), click on the provided links.
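As a standalone illustration of the joblib round-trip described in those links (the toy data and the `model.joblib` file name are just for demonstration):

```python
import os
import tempfile

import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fit a toy classifier, save it the way train.py would, then load it back
# the way model_fn would at endpoint start-up.
X = np.array([[0.0], [0.2], [0.8], [1.0]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X, y)

with tempfile.TemporaryDirectory() as model_dir:
    path = os.path.join(model_dir, 'model.joblib')
    joblib.dump(model, path)
    restored = joblib.load(path)

# The restored model predicts identically to the original
same_predictions = (restored.predict(X) == model.predict(X)).all()
```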
---
# Create an Estimator
When a custom model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained: the `train.py` script you completed above. To run a custom training script in SageMaker, construct an estimator, and fill in the appropriate constructor arguments:
* **entry_point**: The path to the Python script SageMaker runs for training and prediction.
* **source_dir**: The path to the training script directory `source_sklearn` OR `source_pytorch`.
* **role**: Role ARN, which was specified, above.
* **train_instance_count**: The number of training instances (should be left at 1).
* **train_instance_type**: The type of SageMaker instance for training. Note: Because Scikit-learn does not natively support GPU training, Sagemaker Scikit-learn does not currently support training on GPU instance types.
* **sagemaker_session**: The session used to train on Sagemaker.
* **hyperparameters** (optional): A dictionary `{'name':value, ..}` passed to the train function as hyperparameters.
Note: For a PyTorch model, there is another optional argument **framework_version**, which you can set to the latest version of PyTorch, `1.0`.
## EXERCISE: Define a Scikit-learn or PyTorch estimator
To import your desired estimator, use one of the following lines:
```
from sagemaker.sklearn.estimator import SKLearn
```
```
from sagemaker.pytorch import PyTorch
```
```
# your import and estimator code, here
from sagemaker.sklearn.estimator import SKLearn
estimator = SKLearn(entry_point="train.py",
source_dir="source_sklearn",
role=role,
train_instance_count=1,
train_instance_type='ml.c4.xlarge')
```
## EXERCISE: Train the estimator
Train your estimator on the training data stored in S3. This should create a training job that you can monitor in your SageMaker console.
```
%%time
# Train your estimator on S3 training data
estimator.fit({'train': input_data})
```
## EXERCISE: Deploy the trained model
After training, deploy your model to create a `predictor`. If you're using a PyTorch model, you'll need to create a trained `PyTorchModel` that accepts the trained `<model>.model_data` as an input parameter and points to the provided `source_pytorch/predict.py` file as an entry point.
To deploy a trained model, you'll use `<model>.deploy`, which takes in two arguments:
* **initial_instance_count**: The number of deployed instances (1).
* **instance_type**: The type of SageMaker instance for deployment.
Note: If you run into an instance error, it may be because you chose the wrong training or deployment instance_type. It may help to refer to your previous exercise code to see which types of instances we used.
```
%%time
# uncomment, if needed
# from sagemaker.pytorch import PyTorchModel
# deploy your model to create a predictor
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.t2.medium')
```
---
# Evaluating Your Model
Once your model is deployed, you can see how it performs when applied to our test data.
The provided cell below reads in the test data, assuming it is stored locally in `data_dir` and named `test.csv`. The labels and features are extracted from the `.csv` file.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import os
# read in test data, assuming it is stored locally
test_data = pd.read_csv(os.path.join(data_dir, "test.csv"), header=None, names=None)
# labels are in the first column
test_y = test_data.iloc[:,0]
test_x = test_data.iloc[:,1:]
```
## EXERCISE: Determine the accuracy of your model
Use your deployed `predictor` to generate predicted class labels for the test data. Compare those to the *true* labels, `test_y`, and calculate the accuracy as a value between 0 and 1.0 that indicates the fraction of test data that your model classified correctly. You may use [sklearn.metrics](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics) for this calculation.
**To pass this project, your model should get at least 90% test accuracy.**
```
# First: generate predicted class labels
test_y_preds = predictor.predict(test_x)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# test that your model generates the correct number of labels
assert len(test_y_preds)==len(test_y), 'Unexpected number of predictions.'
print('Test passed!')
# Second: calculate the test accuracy
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(test_y, test_y_preds)
print(accuracy)
## print out the array of predicted and true labels, if you want
print('\nPredicted class labels: ')
print(test_y_preds)
print('\nTrue class labels: ')
print(test_y.values)
```
### Question 1: How many false positives and false negatives did your model produce, if any? And why do you think this is?
```
# Create confusion matrix of true labels and predicted labels
from sklearn.metrics import confusion_matrix
confusion_matrix(test_y, test_y_preds)
```
**In the binary case, we can extract true negatives, false positives, false negatives & true positives as follows:**
```
# Calculate tn, fp, fn, tp from confusion matrix
tn, fp, fn, tp = confusion_matrix(test_y, test_y_preds).ravel()
print('False Positives: {}'.format(fp))
print('False Negatives: {}'.format(fn))
# Create dataframe of test data with predicted labels
test_pred_df = pd.concat([test_data,pd.DataFrame(test_y_preds)], axis=1)
test_pred_df.columns = ['true_label', 'c_1', 'c_5', 'lcs_word', 'pred_label']
test_pred_df
```
**Answer**:
**``The Logistic Regression classifier achieved an accuracy of 0.96, with 1 false positive and 0 false negatives. The accuracy is quite high and the number of false positives and false negatives is very low; note, however, that this is a very small dataset of only 95 observations with just 3 features.``**
### Question 2: How did you decide on the type of model to use?
**Answer**:
**``Since the dataset is very small, with only 3 features, and the task is binary classification, we can opt for simpler classifiers such as Logistic Regression, Naive Bayes, and SVM.``**
----
## EXERCISE: Clean up Resources
After you're done evaluating your model, **delete your model endpoint**. You can do this with a call to `.delete_endpoint()`. You need to show, in this notebook, that the endpoint was deleted. Any other resources, you may delete from the AWS console, and you will find more instructions on cleaning up all your resources, below.
```
# uncomment and fill in the line below!
# <name_of_deployed_predictor>.delete_endpoint()
predictor.delete_endpoint()
```
### Deleting S3 bucket
When you are *completely* done with training and testing models, you can also delete your entire S3 bucket. If you do this before you are done training your model, you'll have to recreate your S3 bucket and upload your training data again.
```
# deleting bucket, uncomment lines below
bucket_to_delete = boto3.resource('s3').Bucket(bucket)
bucket_to_delete.objects.all().delete()
```
### Deleting all your models and instances
When you are _completely_ done with this project and do **not** ever want to revisit this notebook, you can choose to delete all of your SageMaker notebook instances and models by following [these instructions](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-cleanup.html). Before you delete this notebook instance, I recommend at least downloading a copy and saving it, locally.
---
## Further Directions
There are many ways to improve or add on to this project to expand your learning or make this more of a unique project for you. A few ideas are listed below:
* Train a classifier to predict the *category* (1-3) of plagiarism and not just plagiarized (1) or not (0).
* Utilize a different and larger dataset to see if this model can be extended to other types of plagiarism.
* Use language or character-level analysis to find different (and more) similarity features.
* Write a complete pipeline function that accepts a source text and submitted text file, and classifies the submitted text as plagiarized or not.
* Use API Gateway and a lambda function to deploy your model to a web application.
These are all just options for extending your work. If you've completed all the exercises in this notebook, you've completed a real-world application, and can proceed to submit your project. Great job!
### Presented at <a href="http://qvik.fi/"><img style="height:100px" src="https://qvik.com/wp-content/themes/qvik/images/qvik-logo-dark.png"/></a>
### Session-based recommender Systems: Hands-on GRU4Rec
#### Frederick Ayala Gómez, PhD Student in Computer Science at ELTE University. Visiting Researcher at Aalto's Data Mining Group
##### Let's keep in touch!
Twitter: https://twitter.com/fredayala <br/>
LinkedIn: https://linkedin.com/in/frederickayala <br/>
GitHub: https://github.com/frederickayala
<hr/>
- A few notes:
- This notebook was tested on Windows and presents how to use GRU4Rec
- The paper of GRU4Rec is: B. Hidasi, et al. 2015 “Session-based recommendations with recurrent neural networks”. CoRR
- The poster of this paper can be found in http://www.hidasi.eu/content/gru4rec_iclr16_poster.pdf
- For OSx and Linux, CUDA, Theano and Anaconda 'might' need some extra steps
- On Linux Desktop (e.g. Ubuntu Desktop ), **be careful** with installing CUDA and NVIDIA drivers. It 'might' break lightdm 🙈🙉🙊
- An NVIDIA GEFORCE GTX 980M was used
- The starting point of this notebook is the original python demo file from Balázs Hidasi's GRU4REC repository.
- It's recommended to use Anaconda to install stuff easier
- Installation steps:
- Install *CUDA 8.0* from https://developer.nvidia.com/cuda-downloads
- Optional: Install cuDNN https://developer.nvidia.com/cudnn
- Install *Anaconda 4.3.1* for *Python 3.6* from https://www.continuum.io/downloads
- Open Anaconda Navigator
- Go to Environments / Create / Python Version 3.6 and give it a name
- In Channels, add: conda-forge then click on Update index...
- Click your environment's Play arrow and choose Open Terminal
- Install the libraries that we need:
- conda install numpy scipy pandas mkl-service libpython m2w64-toolchain nose nose-parameterized sphinx pydot-ng
- conda install theano pygpu
- conda install matplotlib seaborn statsmodels
- Create a .theanorc file in your home directory and add the following: <br/>
[global] <br/>
device = cuda <br/>
\# Only if you want to use cuDNN <br/>
[dnn]<br/>
include_path=/path/to/cuDNN/include <br/>
library_path=/path/to/cuDNN/lib/x64
- Get the GRU4Rec code and the dataset
- GRU4Rec:
- git clone https://github.com/hidasib/GRU4Rec.git
- YOOCHOOSE Dataset:
- http://2015.recsyschallenge.com/challenge.html
- To get the training and testing files we have to preprocess the original dataset.
- Go to the terminal that is running your Anaconda environment
- Navigate to the GRU4Rec folder
- Edit the file GRU4Rec/examples/rsc15/preprocess.py and modify the following variables:
- PATH_TO_ORIGINAL_DATA *The path to the input raw dataset*
- PATH_TO_PROCESSED_DATA *The path to where you want the output*
- Run the command: python preprocess.py
- This will take some time, when the process ends you will have the files *rsc15_train_full.txt* and *rsc15_test.txt* in your *PATH_TO_PROCESSED_DATA* path
- Place this notebook in the folder GRU4Rec/examples/rsc15/
- That's it! we are ready to run GRU4Rec
```
# -*- coding: utf-8 -*-
import theano
import pickle
import sys
import os
sys.path.append('../..')
import numpy as np
import pandas as pd
import gru4rec #If this shows an error probably the notebook is not in GRU4Rec/examples/rsc15/
import evaluation
# Validate that the following assert makes sense in your platform
# This works on Windows with a NVIDIA GPU
# In other platforms theano.config.device gives other things than 'cuda' when using the GPU
assert 'cuda' in theano.config.device,("Theano is not configured to use the GPU. Please check .theanorc. "
"Check http://deeplearning.net/software/theano/tutorial/using_gpu.html")
```
#### Update PATH_TO_TRAIN and PATH_TO_TEST to the path for rsc15_train_full.txt and rsc15_test.txt respectively
```
PATH_TO_TRAIN = 'C:/Users/frede/datasets/recsys2015/rsc15_train_full.txt'
PATH_TO_TEST = 'C:/Users/frede/datasets/recsys2015/rsc15_test.txt'
data = pd.read_csv(PATH_TO_TRAIN, sep='\t', dtype={'ItemId':np.int64})
valid = pd.read_csv(PATH_TO_TEST, sep='\t', dtype={'ItemId':np.int64})
```
#### Let's take a look to the datasets
```
%matplotlib inline
import numpy as np
import pandas as pd
from scipy import stats, integrate
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
```
##### Sneak peek at the dataset
```
data.head()
valid.head()
sessions_training = set(data.SessionId)
print("There are %i sessions in the training dataset" % len(sessions_training))
sessions_testing = set(valid.SessionId)
print("There are %i sessions in the testing dataset" % len(sessions_testing))
assert len(sessions_testing.intersection(sessions_training)) == 0, ("Huhu! "
                                "there are sessions from the testing set in "
                                "the training set")
print("Sessions in the testing set don't exist in the training set")
items_training = set(data.ItemId)
print("There are %i items in the training dataset" % len(items_training))
items_testing = set(valid.ItemId)
print("There are %i items in the testing dataset" % len(items_testing))
assert items_testing.issubset(items_training), ("Huhu! "
"there are items from the testing set "
"that are not in the training set")
print("Items in the testing set exist in the training set")
df_visualization = data.copy()
df_visualization["value"] = 1
df_item_count = df_visualization[["ItemId","value"]].groupby("ItemId").sum()
# Most of the items are infrequent
df_item_count.describe().transpose()
fig = plt.figure(figsize=[15,8])
ax = fig.add_subplot(111)
ax = sns.kdeplot(df_item_count["value"], ax=ax)
ax.set(xlabel='Item Frequency', ylabel='Kernel Density Estimation')
plt.show()
fig = plt.figure(figsize=[15,8])
ax = fig.add_subplot(111)
ax = sns.distplot(df_item_count["value"],
hist_kws=dict(cumulative=True),
kde_kws=dict(cumulative=True))
ax.set(xlabel='Item Frequency', ylabel='Cumulative Probability')
plt.show()
# Let's analyze the co-occurrence
df_cooccurrence = data.copy()
df_cooccurrence["next_SessionId"] = df_cooccurrence["SessionId"].shift(-1)
df_cooccurrence["next_ItemId"] = df_cooccurrence["ItemId"].shift(-1)
df_cooccurrence["next_Time"] = df_cooccurrence["Time"].shift(-1)
df_cooccurrence = df_cooccurrence.query("SessionId == next_SessionId").dropna()
df_cooccurrence["next_ItemId"] = df_cooccurrence["next_ItemId"].astype(int)
df_cooccurrence["next_SessionId"] = df_cooccurrence["next_SessionId"].astype(int)
df_cooccurrence.head()
df_cooccurrence["time_difference_minutes"] = np.round((df_cooccurrence["next_Time"] - df_cooccurrence["Time"]) / 60, 2)
df_cooccurrence[["time_difference_minutes"]].describe().transpose()
df_cooccurrence["value"] = 1
df_cooccurrence_sum = df_cooccurrence[["ItemId","next_ItemId","value"]].groupby(["ItemId","next_ItemId"]).sum().reset_index()
df_cooccurrence_sum[["value"]].describe().transpose()
```
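The `shift(-1)` trick in the co-occurrence cell above pairs each click with the next click in the log; rows where the session changes (or where there is no next click) are dropped by the `SessionId == next_SessionId` filter. On a toy click log:

```python
import pandas as pd

toy = pd.DataFrame({
    "SessionId": [1, 1, 1, 2, 2],
    "ItemId":    [10, 11, 10, 20, 21],
    "Time":      [0.0, 60.0, 180.0, 300.0, 330.0],
})

toy["next_SessionId"] = toy["SessionId"].shift(-1)
toy["next_ItemId"] = toy["ItemId"].shift(-1)

# Keep only consecutive clicks inside the same session; the last row of
# each session has either a NaN or a different next_SessionId and is dropped.
pairs = toy.query("SessionId == next_SessionId").dropna()
pairs["next_ItemId"] = pairs["next_ItemId"].astype(int)

# Session 1 contributes (10, 11) and (11, 10); session 2 contributes (20, 21).
print(pairs[["ItemId", "next_ItemId"]].values.tolist())
```

Grouping these pairs and summing, as the cell above does, then counts how often each item is followed by each other item.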
### Training GRU
```
n_layers = 100
save_to = os.path.join(os.path.dirname(PATH_TO_TEST), "gru_" + str(n_layers) +".pickle")
if not os.path.exists(save_to):
print('Training GRU4Rec with ' + str(n_layers) + ' hidden units')
gru = gru4rec.GRU4Rec(layers=[n_layers], loss='top1', batch_size=50,
dropout_p_hidden=0.5, learning_rate=0.01, momentum=0.0)
gru.fit(data)
pickle.dump(gru, open(save_to, "wb"))
else:
print('Loading existing GRU4Rec model with ' + str(n_layers) + ' hidden units')
gru = pickle.load(open(save_to, "rb"))
```
### Evaluating GRU
```
res = evaluation.evaluate_sessions_batch(gru, valid, None,cut_off=20)
print('The proportion of cases having the desired item within the top 20 (i.e Recall@20): {}'.format(res[0]))
batch_size = 500
print("Now let's try to predict over the first %i items of our testing dataset" % batch_size)
df_valid = valid.head(batch_size).copy()  # .copy() avoids pandas SettingWithCopyWarning
df_valid["next_ItemId"] = df_valid["ItemId"].shift(-1)
df_valid["next_SessionId"] = df_valid["SessionId"].shift(-1)
session_ids = valid.head(batch_size)["SessionId"].values
input_item_ids = valid.head(batch_size)["ItemId"].values
predict_for_item_ids=None
%timeit gru.predict_next_batch(session_ids=session_ids, input_item_ids=input_item_ids, predict_for_item_ids=None, batch=batch_size)
df_preds = gru.predict_next_batch(session_ids=session_ids,
input_item_ids=input_item_ids,
predict_for_item_ids=None,
batch=batch_size)
df_valid.shape
df_preds.shape
df_preds.columns = df_valid.index.values
len(items_training)
df_preds
for c in df_preds:
df_preds[c] = df_preds[c].rank(ascending=False)
df_valid_preds = df_valid.join(df_preds.transpose())
df_valid_preds = df_valid_preds.query("SessionId == next_SessionId").dropna()
df_valid_preds["next_ItemId"] = df_valid_preds["next_ItemId"].astype(int)
df_valid_preds["next_SessionId"] = df_valid_preds["next_SessionId"].astype(int)
df_valid_preds["next_ItemId_at"] = df_valid_preds.apply(lambda x: x[int(x["next_ItemId"])], axis=1)
df_valid_preds_summary = df_valid_preds[["SessionId","ItemId","Time","next_ItemId","next_ItemId_at"]]
df_valid_preds_summary.head(20)
cutoff = 20
df_valid_preds_summary_ok = df_valid_preds_summary.query("next_ItemId_at <= @cutoff")
df_valid_preds_summary_ok.head(20)
recall_at_k = df_valid_preds_summary_ok.shape[0] / df_valid_preds_summary.shape[0]
print("The recall@%i for this batch is %f"%(cutoff,recall_at_k))
fig = plt.figure(figsize=[15,8])
ax = fig.add_subplot(111)
ax = sns.kdeplot(df_valid_preds_summary["next_ItemId_at"], ax=ax)
ax.set(xlabel='Next Desired Item @K', ylabel='Kernel Density Estimation')
plt.show()
fig = plt.figure(figsize=[15,8])
ax = fig.add_subplot(111)
ax = sns.distplot(df_valid_preds_summary["next_ItemId_at"],
hist_kws=dict(cumulative=True),
kde_kws=dict(cumulative=True))
ax.set(xlabel='Next Desired Item @K', ylabel='Cumulative Probability')
plt.show()
print("Statistics for the rank of the next desired item (lower is better)")
df_valid_preds_summary[["next_ItemId_at"]].describe()
```
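The manual Recall@20 computed in the evaluation cell above is just the fraction of test events whose next desired item lands within the top K predicted ranks. A minimal standalone version of that computation:

```python
import numpy as np

def recall_at_k(ranks, k):
    """Fraction of events whose desired next item was ranked in the top k."""
    ranks = np.asarray(ranks)
    return float((ranks <= k).sum()) / len(ranks)

# Hypothetical rank of the next desired item for 8 test events
ranks = [1, 3, 25, 7, 120, 2, 19, 54]
print(recall_at_k(ranks, 20))  # 5 of 8 events rank within the top 20 -> 0.625
```

This mirrors the `query("next_ItemId_at <= @cutoff")` step: counting qualifying rows and dividing by the total.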
```
# KNN approach
import numpy as np
import pandas as pd
import os
for dirname, _, filenames in os.walk('./'):
for filename in filenames:
print(os.path.join(dirname, filename))
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import math
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
%matplotlib inline
# Preprocessing
data = pd.read_csv("./Website Phishing.csv")
data.head(5)
a=len(data[data.Result==0])
b=len(data[data.Result==-1])
c=len(data[data.Result==1])
print("Count of Legitimate Websites = ", b)
print("Count of Suspicious Websites = ", a)
print("Count of Phishy Websites = ", c)
data.plot.hist(subplots=True, layout=(5,5), figsize=(15, 15), bins=20)
x=data.drop("Result", axis=1)  # features: the 9 site attributes
y=data["Result"]               # target: the -1 / 0 / 1 class label
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.2, random_state=60)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 6)
knn.fit(x_train, y_train)
y_pred = knn.predict(x_test)
y_pred
from sklearn.metrics import confusion_matrix
import seaborn as sns
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True)
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
predic = knn.predict(x_test)
print("F1 score:",f1_score(y_test, predic,average='weighted'))
print("Accuracy:", accuracy_score(y_test, predic) * 100, "%")
print(knn.predict(sc.transform([[0,1,1,1,1,1,-1,0,1]])))
# Decision Tree approach
x = data.drop('Result',axis=1).values
y = data['Result'].values
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.20,random_state=10)
print("Training set has {} samples.".format(x_train.shape[0]))
print("Testing set has {} samples.".format(x_test.shape[0]))
from sklearn import tree
model_tree = tree.DecisionTreeClassifier()
model = model_tree.fit(x_train, y_train)
features = ('SFH', 'popUpWidnow', 'SSLfinal_State', 'Request_URL', 'URL_of_Anchor', 'web_traffic', 'URL_Length', 'age_of_domain', 'having_IP_Address')
name = ('Phishing', 'Suspicious', 'Legitimate')
import graphviz
dot_data = tree.export_graphviz(model, out_file=None,
feature_names = features, class_names = name,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
from sklearn.metrics import matthews_corrcoef
predictions = model.predict(x_test)
from sklearn.metrics import confusion_matrix,classification_report
c=confusion_matrix(y_test,predictions)
sns.heatmap(c, annot=True)
print("F1 score",f1_score(y_test,predictions,average='weighted'))
print("Accuracy:",100 *accuracy_score(y_test,predictions))
#Naive Bayes
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
y_pred = gnb.fit(x_train, y_train).predict(x_test)
print("Number of mislabeled points out of a total",x_test.shape[0],"points:", (y_test != y_pred).sum())
from sklearn.metrics import matthews_corrcoef
predictions = model.predict(x_test)
from sklearn.metrics import confusion_matrix,classification_report
c=confusion_matrix(y_test,y_pred)
sns.heatmap(c, annot=True)
print("F1 score",f1_score(y_test,y_pred,average='weighted'))
print("Accuracy:",100 *accuracy_score(y_test,y_pred))
```
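Under the hood, `KNeighborsClassifier(n_neighbors=6)` takes a majority vote over the 6 closest training points in the (scaled) feature space. A bare-bones NumPy sketch of that vote on synthetic two-cluster data — illustrative only, not a replacement for scikit-learn:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=6):
    """Majority vote over the k nearest training points (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

rng = np.random.default_rng(0)
# Two well-separated synthetic clusters standing in for the scaled features
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)   # -1 / 1 mirroring the Result labels

print(knn_predict(X, y, np.array([2.1, 1.9])))    # near the second cluster -> 1
print(knn_predict(X, y, np.array([-2.0, -2.0])))  # near the first cluster -> -1
```

This is also why the `StandardScaler` step above matters: without scaling, features with larger numeric ranges would dominate the Euclidean distance.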
## Hydrograph Development Notebooks
# Methodology Overview:
### Reproduce the [Grand Rapids, Michigan Test Case](documentation/ProofofConceptHydrologyStudies.pdf)
```
import os
from glob import glob
import numpy as np
import pandas as pd
from scipy.integrate import trapz, cumtrapz, simps
from importlib import reload
import utils; reload(utils)
from utils import *
%matplotlib inline
import ny_clean_nb; reload(ny_clean_nb)
from ny_clean_nb import *
df = initialize_testcase()
testcase = df[['04119000_00060_iv']].interpolate().copy()
storm1 = testcase['1997-02-17 18:00':'1997-03-10'].values
storm2 = testcase['2000-05-12':'2000-06-09'].values
storm3 = testcase['2004-05-17':'2004-06-11'].values
f, ax = plt.subplots()
ax.plot(np.arange(0, len(storm1)),storm1 ,color = 'blue', label = 'Feb 1997')
ax.plot(np.arange(0, len(storm2)),storm2 ,color = 'black',label = 'May 2000')
ax.plot(np.arange(0, len(storm3)),storm3 ,color = 'green',label = 'May 2004')
ax.grid()
ax.legend()
ax.set_xlim(-250, 2000)
f.set_size_inches(10,4)
testcase = df[['04119000_00060_iv']].interpolate().copy()
storm3 = testcase['2004-05-15 00:00 ':'2004-06-10'].copy()
storm3['2004-05-15 00:00':'2004-05-22 00:00'] = np.nan
len(storm3['2004-05-15 00:00':'2004-05-22 00:00'])
idx = storm3.index[-337:]
storm3['2004-05-15 00:00':'2004-05-15 00:00'] = 5000
storm3['2004-05-16 00:00':'2004-05-16 00:00'] = 6000
storm3['2004-05-17 00:00':'2004-05-17 00:00'] = 7000
storm3['2004-05-18 00:00':'2004-05-18 00:00'] = 8000
storm3['2004-05-19 00:00':'2004-05-19 00:00'] = 9500
smooth_storm = storm3.interpolate(method='spline', order=3)
f, ax = plt.subplots()
ax.plot(smooth_storm,color = 'green',label = 'May 2004')
ax.grid()
ax.legend()
f.set_size_inches(6,4)
f.autofmt_xdate()
peak = float(storm3.max())
pct_1 = 50000
pct_02 = 65200
stretch_1pct = pct_1/peak
stretch_02pct = pct_02/peak
smooth_storm_resample = smooth_storm.resample('30T').mean()
smooth_storm_1pct = smooth_storm_resample*stretch_1pct
smooth_storm_02pct = smooth_storm_resample*stretch_02pct
print('Factors: \n','\t1 Percent\t{}'.format(stretch_1pct), '\n\t0.2 Percent\t{}'.format(stretch_02pct))
storm01 = smooth_storm_1pct.values
storm02 = smooth_storm_02pct.values
idx = np.arange(0, len(storm01))
f, ax = plt.subplots()
ax.plot(idx,storm01 ,color = 'blue', label = '1-percent chance hydrograph')
ax.plot(idx,storm02 ,color = 'green', label = '0.2-percent chance hydrograph')
ax.grid()
ax.legend(loc=0)
my_xticks = np.multiply(idx,100)
ax.set_xticklabels(my_xticks)
ax.set_xlim(0,1350)
f.set_size_inches(8,6)
#--Calculate Volume to Verify Procedure
printbold('Volume Check')
may_volume = IntegrateHydrograph(smooth_storm, 4900.)
print('May Event\t',may_volume, 'inches' )
pct_1_volume = IntegrateHydrograph(smooth_storm_1pct, 4900.)
print('1 Percent\t', pct_1_volume, 'inches')
pct_02_volume = IntegrateHydrograph(smooth_storm_02pct, 4900.)
print('0.2 Percent\t',pct_02_volume, 'inches')
```
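`IntegrateHydrograph` lives in the project's `utils` module; assuming it does what the volume check implies — trapezoidal integration of discharge (cfs) over time, converted to a runoff depth in inches over the drainage area (here 4,900 sq mi) — a self-contained sketch under that assumption would be:

```python
import numpy as np

def hydrograph_depth_inches(q_cfs, dt_seconds, area_sq_mi):
    """Trapezoidal volume of a discharge series, expressed as basin-average depth.

    q_cfs      : evenly spaced discharge samples, cubic feet per second
    dt_seconds : sample spacing in seconds
    area_sq_mi : drainage area in square miles
    """
    # Trapezoid rule: average adjacent samples, multiply by the time step
    volume_cf = np.sum((q_cfs[:-1] + q_cfs[1:]) / 2.0) * dt_seconds  # cubic feet
    area_sf = area_sq_mi * 5280.0 ** 2                               # square feet
    return volume_cf / area_sf * 12.0                                # feet -> inches

# Constant 1000 cfs for 24 h over a 100 sq mi basin
q = np.full(49, 1000.0)  # 49 samples at 30-minute spacing
print(hydrograph_depth_inches(q, 1800.0, 100.0))  # ~0.372 inches
```

Because both stretched hydrographs are the base storm multiplied by a constant, their depths should scale by exactly the stretch factors, which is what the volume check verifies.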
# Concept Checker Complete, Procedure Verified
### Next Steps: Compare stretching via factor to peak only interpolation
```
# Method #1
pct_1_peak, inst_peak = Stretched_Daily_100yr(df, plot = False)
peak = 14209.0
pct_1 = 20960.0
stretch_1pct = pct_1/peak
smooth_storm_resample = inst_peak.resample('30T').mean()
smooth_storm_1pct = smooth_storm_resample*stretch_1pct
print('Factor: \n','\t1 Percent\t{}'.format(stretch_1pct))
# APPLY TO NY DATA
storm01 = smooth_storm_resample.values
storm02 = smooth_storm_1pct.values
idx = np.arange(0, len(storm01))
f, ax = plt.subplots()
ax.plot(idx,storm01 ,color = 'blue', label = 'Base Storm hydrograph')
ax.plot(idx,storm02 ,color = 'green', label = '1-percent chance hydrograph')
ax.grid()
ax.legend(loc=0)
ax.set_xlim(0,175)
f.set_size_inches(8,6)
pct_1_peak, inst_peak = Stretched_Daily_100yr(df, plot = False)
storm0 = pd.DataFrame(pct_1_peak).resample('30T').mean().values
storm01 = smooth_storm_resample.values
storm02 = smooth_storm_1pct.values
idx = np.arange(0, len(storm01))
f, ax = plt.subplots()
ax.plot(idx,storm0 ,color = 'black', label = 'Method 1: Stretch Daily Means')
ax.plot(idx,storm02 ,color = 'green', label = 'Method 2: Apply Factor')
ax.plot(idx,storm01 ,color = 'blue', label = 'Base Storm hydrograph')
ax.grid()
ax.legend(loc=0)
ax.set_xlim(-20,175)
f.set_size_inches(8,6)
#--Calculate Volume to Verify Procedure
printbold('Volume Check')
raw = IntegrateHydrograph(smooth_storm_resample, 490.)
print('Base Hydrograph\t', raw, 'inches')
stretched = IntegrateHydrograph(pct_1_peak, 490.)
print('Stretch Method\t', stretched, 'inches')
factored = IntegrateHydrograph(smooth_storm_1pct, 490.)
print('Factor Method\t', factored, 'inches')
```
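The factor method compared above reduces to a single multiplication: scale the whole observed hydrograph by `target_peak / observed_peak`, which preserves the shape and scales the volume by the same factor. A minimal sketch with made-up numbers:

```python
import numpy as np

observed = np.array([100.0, 500.0, 1400.0, 900.0, 400.0, 150.0])  # cfs
target_peak = 20960.0  # e.g. the 1-percent-chance peak discharge
factor = target_peak / observed.max()

stretched = observed * factor
print(round(factor, 3))    # ~14.971
print(stretched.max())     # the target peak, up to floating-point rounding
```

The peak-only interpolation method, by contrast, reshapes the rising limb toward the target peak, so its volume does not scale by a single constant — which is why the two methods are compared by volume above.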