# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import pandas as pd
import numpy as np
# ## Merge zipcode-to-cbsa.csv with MSA Data
# ### Load MSA data into DataFrame
filename = '../../data/external/msa_data.tab'
msa = pd.read_csv(filename, sep='\t', lineterminator='\n')
msa.info()
# ### Load zipcode-to-cbsa.csv into DataFrame
filename = '../../data/interim/zipcode-to-cbsa.csv'
z2c = pd.read_csv(filename)
z2c.info()
# ### Merge DataFrames
merged = pd.merge(z2c, msa, on='CBSA')
merged.info()
# ##### Drop duplicate zipcodes
merged = merged.drop_duplicates(['zipcode', 'year'])
merged.info()
merged.zipcode.nunique()
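Before writing the output, it can help to check how many zipcodes failed to match any MSA row. A minimal sketch using `pd.merge` with `indicator=True` (the column names `zipcode` and `CBSA` follow the data above; the toy frames here are purely illustrative):

```python
import pandas as pd

# Toy stand-ins for z2c and msa; real columns come from the files loaded above.
z2c = pd.DataFrame({'zipcode': ['10001', '10002', '99999'],
                    'CBSA': [35620, 35620, 11111]})
msa = pd.DataFrame({'CBSA': [35620], 'year': [2010], 'pop': [19500000]})

# indicator=True adds a _merge column showing which side each row came from.
check = pd.merge(z2c, msa, on='CBSA', how='left', indicator=True)
unmatched = check[check['_merge'] == 'left_only']
print(unmatched['zipcode'].tolist())  # ['99999'] -- zipcodes with no MSA row
```

An inner merge (the default used above) silently drops such rows, so the `left_only` count is a quick data-quality check.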
# ### Output merged DataFrame to file
filename = '../../data/interim/msa-data-with-zipcode.csv'
merged.to_csv(filename, index=False)
# (source: notebooks/data-cleaning/1.1-tjc-zip-match-to-msa-merge.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''CV_env'': venv)'
# language: python
# name: python37664bitcvenvvenv03c93bf3526e47759c313a461ad3422b
# ---
# # Self-Driving Car Engineer Nanodegree
#
#
# ## Project: **Finding Lane Lines on the Road**
# ***
# In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
#
# Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
#
# In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [writeup template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
#
# ---
# Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
#
# **Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
#
# ---
# **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
#
# ---
#
# <figure>
# <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
# </figcaption>
# </figure>
# <p></p>
# <figure>
# <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
# </figcaption>
# </figure>
# **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
# ## Import Packages
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
# %matplotlib inline
# ## Read in an Image
# +
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
# -
# ## Ideas for Lane Detection Pipeline
# **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
#
# `cv2.inRange()` for color selection
# `cv2.fillPoly()` for regions selection
# `cv2.line()` to draw lines on an image given endpoints
# `cv2.addWeighted()` to coadd / overlay two images
# `cv2.cvtColor()` to grayscale or change color
# `cv2.imwrite()` to output images to file
# `cv2.bitwise_and()` to apply a mask to an image
#
# **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
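The pipeline below fits each detected Hough segment with `np.polyfit` to obtain a slope and intercept. A quick sketch of that call on a single segment (the endpoint values are made up for illustration):

```python
import numpy as np

# Fit y = slope * x + intercept through a segment's two endpoints.
x1, y1, x2, y2 = 100, 540, 300, 340
slope, intercept = np.polyfit((x1, x2), (y1, y2), 1)
print(slope)      # -1.0  (line falls as x grows -> a left-lane candidate)
print(intercept)  # 640.0
```

The sign of the slope is what the helper class later uses to split segments into left-lane and right-lane candidates.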
# ## Helper Functions
# Below are some helper functions to help get you started. They should look familiar from the lesson!
# +
class line_finder:
def __init__(self):
self.fit_left_avg_old = np.array([])
self.fit_right_avg_old = np.array([])
def fit_coordinate(self, image, fit_parameters):
slope, intercept = fit_parameters
y1 = image.shape[0]
y2 = int(y1*0.6)
x1 = int((y1-intercept)//slope)
x2 = int((y2-intercept)//slope)
return np.array([x1, y1, x2, y2])
def average_slope_intercept(self, image, lines):
fit_right = []
fit_left = []
fit_left_avg = np.array([])
fit_right_avg = np.array([])
# use np.polyfit to find slope and intercept
for line in lines:
for x1, y1, x2, y2 in line:
slope, intercept = np.polyfit((x1, x2), (y1, y2), 1)
# slope filter
slope_threshold = 0.4
if slope > slope_threshold:
fit_right.append((slope, intercept))
elif slope < -slope_threshold:
fit_left.append((slope, intercept))
if fit_left:
fit_left_avg = np.average(fit_left, axis=0)
if fit_right:
fit_right_avg = np.average(fit_right, axis=0)
# smooth the move with fit_xx_avg_old
""" if there's new and old, return avg(new,old) line
if there's only new, return new line
if there's only old, return old line
if there's nothing, return default line
"""
if (fit_left_avg.size == 0) and self.fit_left_avg_old.size != 0:
fit_left_avg = self.fit_left_avg_old.copy()
left_line = self.fit_coordinate(image, fit_left_avg)
elif fit_left_avg.size != 0:
if self.fit_left_avg_old.size != 0:
fit_left_avg = np.average(
(fit_left_avg, self.fit_left_avg_old), axis=0, weights=[2, 8])
self.fit_left_avg_old = fit_left_avg.copy()
left_line = self.fit_coordinate(image, fit_left_avg)
else:
left_line = self.fit_coordinate(image, np.array([-1.0, 700]))
if fit_right_avg.size == 0 and self.fit_right_avg_old.size != 0:
fit_right_avg = self.fit_right_avg_old.copy()
right_line = self.fit_coordinate(image, fit_right_avg)
elif fit_right_avg.size != 0:
if self.fit_right_avg_old.size != 0:
fit_right_avg = np.average(
(fit_right_avg, self.fit_right_avg_old), axis=0, weights=[2, 8])
self.fit_right_avg_old = fit_right_avg.copy()
right_line = self.fit_coordinate(image, fit_right_avg)
else:
right_line = self.fit_coordinate(image, np.array([1.0, 0]))
return np.array([left_line, right_line])
def draw_lines(self, lines, image, color, weight):
for line in lines:
x1, y1, x2, y2 = line
cv2.line(image, (x1, y1), (x2, y2), color, weight)
return image
def traffic_line(self, image_input):
# Read in and grayscale the image
image = image_input
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# Define a kernel size and apply Gaussian smoothing
kernel_size = 3
blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)
# Define our parameters for Canny and apply
low_threshold = 50 # 50
high_threshold = 180 # 180
edges = cv2.Canny(blur_gray, low_threshold,
high_threshold, L2gradient=True)
mask = np.zeros_like(edges)
ignore_color = 255
imshape = image.shape
y_high = int(imshape[0]*0.6) + 10
vertices = np.array([[(0+int(imshape[1]*0.05), int(imshape[0])),
(imshape[1]/2 - imshape[1]/16, y_high),
(imshape[1]/2 + imshape[1]/16, y_high),
(imshape[1]-int(imshape[1]*0.05), int(imshape[0]))]], dtype=np.int32)
cv2.fillPoly(mask, vertices, ignore_color)
masked_edges = cv2.bitwise_and(edges, mask)
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 1
theta = (np.pi/180)
threshold = 20
min_line_length = 5
max_line_gap = 3
line_image = np.copy(image)*0 # creating a blank to draw lines on
# Run Hough on edge detected image
lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array(
[]), min_line_length, max_line_gap)
# polyfit() -- slope, intercept -- fit_right(+), fit_left(-) -- find points in vertices -- cv2.line
fit_lines = self.average_slope_intercept(image, lines)
line_image = self.draw_lines(fit_lines, line_image, (255,0,0), 10) # show traffic line
# line_image = self.draw_lines([x[0] for x in lines], line_image,(0,255,0), 3) # show Houghline
# line_image = self.draw_lines((vertices.reshape(2,4)[1],vertices.reshape(2,4)[0]),line_image,(255,255,0), 3) # show vertices
# Create a "color" binary image to combine with line image
color_edges = np.dstack((edges, edges, edges))
# Draw the lines on the edge image
color_image = True
if color_image:
image_output = cv2.addWeighted(
image, 0.8, line_image, 1, 0) # color image
else:
image_output = cv2.addWeighted(
color_edges, 0.8, line_image, 1, 0) # no color image
return image_output
# -
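The temporal smoothing in `average_slope_intercept` blends each new fit with the previous frame's fit via `np.average` with weights `[2, 8]`, i.e. 20% new / 80% old. A standalone sketch of that update step (the fit values are illustrative):

```python
import numpy as np

new_fit = np.array([-1.0, 700.0])   # (slope, intercept) from the current frame
old_fit = np.array([-0.8, 660.0])   # smoothed fit carried over from earlier frames

# weights=[2, 8]: the old estimate dominates, damping frame-to-frame jitter.
smoothed = np.average((new_fit, old_fit), axis=0, weights=[2, 8])
print(smoothed)  # -> slope about -0.84, intercept about 668.0
```

This is a simple exponential-style filter: heavier weight on the history trades responsiveness for stability of the drawn lane line.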
# ## Test Images
#
# Build your pipeline to work on the images in the directory "test_images"
# **You should make sure your pipeline works well on these images before you try the videos.**
import os
test_images = os.listdir("test_images/")
for i, x in enumerate(test_images):
image_input = cv2.cvtColor(cv2.imread("test_images/"+x), cv2.COLOR_BGR2RGB)
image_processing = line_finder()
image_output = image_processing.traffic_line(image_input)
plt.subplot(3, 3, i+1)
plt.imshow(image_output, aspect='auto')
plt.show()
# ## Build a Lane Finding Pipeline
#
#
# Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
#
# Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
import os
test_images = os.listdir("test_images/")
for i, x in enumerate(test_images):
image_input = cv2.cvtColor(cv2.imread("test_images/"+x), cv2.COLOR_BGR2RGB)
image_processing = line_finder()
image_output = image_processing.traffic_line(image_input)
cv2.imwrite("test_images_output/"+x, cv2.cvtColor(image_output, cv2.COLOR_RGB2BGR))  # x already includes the .jpg extension
# ## Test on Videos
#
# You know what's cooler than drawing lanes over images? Drawing lanes over video!
#
# We can test our solution on two provided videos:
#
# `solidWhiteRight.mp4`
#
# `solidYellowLeft.mp4`
#
# **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
#
# **If you get an error that looks like this:**
# ```
# NeedDownloadError: Need ffmpeg exe.
# You can download it by calling:
# imageio.plugins.ffmpeg.download()
# ```
# **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# Keep a single line_finder so the frame-to-frame smoothing state persists.
finder = line_finder()
def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    # i.e. the final output is the image with lane lines drawn on the lanes
    return finder.traffic_line(image)
# Let's try the one with the solid white lane on the right first ...
for x in [i for i in os.listdir("test_videos/") if i.endswith('.mp4')]:
white_output = 'test_videos_output/' + x
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
finder = line_finder()  # one finder per clip so smoothing state carries across frames
clip1 = VideoFileClip("test_videos/" + x)
white_clip = clip1.fl_image(finder.traffic_line) #NOTE: this function expects color images!!
white_clip.write_videofile(white_output, audio=False)
# Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
# +
os.system("firefox test_videos_output/*.mp4")
# -
# ## Writeup and Submission
#
# If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
#
# (source: P1.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="BwHR1frLWbAp"
# # WebsocketClientWorker Tutorial
#
# This is a two-notebook tutorial. The partner notebook, entitled `WebsocketServerWorker.ipynb`, is in the same folder as this notebook. You should execute this notebook AFTER you have executed the other one.
#
# In that tutorial, we'll demonstrate how to launch a WebsocketWorker server which will listen for PyTorch commands over a socket connection. In this tutorial, the two workers are connected via a socket connection on the localhost network.
#
# If you'd prefer to download this notebook and run it locally, you can do so via the `Download .ipynb` in the `File` dropdown field in this Google colab environment.
#
# Setup instructions: https://research.google.com/colaboratory/local-runtimes.html
# + [markdown] colab_type="text" id="LkAOFaq3WdCn"
#
# # Step -1: Copy This Notebook
#
# Go up to File -> Save A Copy in Drive
#
# This will let you execute the notebook (it won't let you execute this one by default)
#
# # Step 0: Install Dependencies
# + colab={"base_uri": "https://localhost:8080/", "height": 5154} colab_type="code" id="G58GV77oWgO-" outputId="b6080406-3e39-47b6-e39c-95709cd8d932"
# If not using local runtimes, uncomment out the following snippets where necessary.
# #! rm -rf /content/PySyft
# #! git clone https://github.com/OpenMined/PySyft.git
# http://pytorch.org/
#from os import path
#from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
#platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
# #cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
#accelerator = cuda_output[0] if path.exists('/opt/bin/nvidia-smi') else 'cpu'
# #!pip3 install https://download.pytorch.org/whl/cu100/torch-1.1.0-cp36-cp36m-linux_x86_64.whl
# #!pip3 install https://download.pytorch.org/whl/cu100/torchvision-0.3.0-cp36-cp36m-linux_x86_64.whl
import torch
# #!cd PySyft; pip3 install -r requirements.txt; pip3 install -r requirements_dev.txt; python3 setup.py install
import os
import sys
#module_path = '/content/PySyft' # You want './PySyft' to be on your sys.path
#if module_path not in sys.path:
# sys.path.append(module_path)
# + colab={} colab_type="code" id="VnXF1hWxjtKr"
# Check sys.path
#sys.path
# + [markdown] colab_type="text" id="qjuWR5W1WbAq"
# # Step 1: Hook Torch and Create Local Worker
#
# In this step, we hook PyTorch and initialize within the hook a client SocketWorker.
# + colab={"base_uri": "https://localhost:8080/", "height": 334} colab_type="code" id="3Uiboh5tWbAs" outputId="32f16cbc-d65f-44cc-c4e6-611f98ee516b"
import syft
from syft.workers.websocket_server import WebsocketServerWorker
hook = syft.TorchHook(torch)
local_worker = WebsocketServerWorker(
host='localhost',
hook=hook,
id=0,
port=8182,
log_msgs=True,
verbose=True)
# + [markdown] colab_type="text" id="9xbDlCZEWbAz"
# # Step 2: Create Pointer to Remote Socket Worker
#
# In order to interact with a foreign worker over a socket connection, we need to create a pointer to it containing information on how to contact it. We set the is_pointer=True to signify that this Python object is not in fact a worker in and of itself but that it is merely a pointer to one over the network. We then inform our local worker about this pointer.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="pakbrhCuWbAz" outputId="ecab6f57-0eb0-4a06-ccb5-b581f4a65945"
hook = syft.TorchHook(torch, local_worker=local_worker)
from syft.workers.websocket_client import WebsocketClientWorker
remote_client = WebsocketClientWorker(
host = 'localhost',
hook=hook,
id=2,
port=8181)
hook.local_worker.add_worker(remote_client)
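PySyft handles the transport here, but the underlying client/server pattern on localhost can be sketched with the standard library alone. This echo example is purely illustrative and independent of PySyft's actual wire protocol:

```python
import socket
import threading

def echo_server(sock):
    # Accept one connection and echo a single message back.
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Bind to an OS-assigned free port on localhost (the workers above use fixed ports).
server = socket.socket()
server.bind(('localhost', 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(('localhost', port))
client.sendall(b'serialized command bytes would travel like this')
print(client.recv(1024))
client.close()
server.close()
```

The Websocket workers layer serialization of PyTorch commands and tensors on top of exactly this kind of connection.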
# + [markdown] colab_type="text" id="yaap8ep0WbA3"
# # Step 3: Create Tensors & Send To The Worker
# + colab={} colab_type="code" id="pQ9iQViMWbA5"
x = syft.FixedPrecisionTensor([1,3,5,7,9]).share(remote_client)
# + colab={} colab_type="code" id="Ao5OL3JsWbA7"
x2 = syft.FixedPrecisionTensor([2,4,6,8,10]).share(remote_client)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="zfVc5_h5WbA-" outputId="4a6cbff9-1129-4dfa-bb5f-5260b51a22f1"
x
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="uBUfHGk5WbBB" outputId="97d6dc33-4a95-4828-cf87-2bed16d09b8d"
x2
# + [markdown] colab_type="text" id="pFQoIbF9WbBF"
# # Step 4: Execute Operations Like Normal
# + colab={} colab_type="code" id="i7xOloWOWbBG"
y = x + x2 + x
# + [markdown] colab_type="text" id="hv6Ci9EpWbBK"
# # Step 5: Get Results
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="JFXHDVvYWbBL" outputId="40a82592-f409-4f3a-9363-d02fa6d0bdb8"
y
# + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="AIZYgjn3WbBO" outputId="4e3a27e5-98d8-4760-cf97-1206a8cd52b3"
y.get()
# + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="VEBZFX53WbBR" outputId="25a12c43-16e0-4c52-ec06-3ed487634740"
y
# + colab={} colab_type="code" id="k61CHx_sWbBV"
# (source: examples/tutorials/websocket/WebsocketClientWorker.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import skimage
from skimage.util.shape import view_as_blocks
import os
import shutil
import json
# +
image_path = "/home/gauthamar11/pyLattice2/src/tensorflow/quickUnet/dataset/psnr100_5000/test__X_nonIsotropic_production_5000_1_psnr=730.95935741513900.tiff"
mask_path= "/home/gauthamar11/pyLattice2/src/tensorflow/quickUnet/dataset/psnr100_5000/test__X_nonIsotropic_production_5000_1_psnr=730.959357415139_mask_00.tiff"
split_directory="/home/gauthamar11/pyLattice2/src/tensorflow/quickUnet/dataset/genPSNR100_5000_48Data/"
patch_size = 48
train_split = 1 #Trying to get coverage of whole large dataset frame. Can change once we use more frames of our large data
if "train" not in os.listdir(split_directory):
os.mkdir(split_directory+"train/")
if "test" not in os.listdir(split_directory):
os.mkdir(split_directory+"test/")
# +
latticeMovieImage = skimage.external.tifffile.imread(image_path)
latticeMovieMask = skimage.external.tifffile.imread(mask_path)
offset=np.asarray([0,0,0])
x_extra = latticeMovieImage.shape[0]%patch_size
x_size = latticeMovieImage.shape[0] - x_extra
if offset[0] > x_extra:
print("1st dim offset exceeds image dim")
offset[0] = 0
y_extra = latticeMovieImage.shape[1]%patch_size
y_size = latticeMovieImage.shape[1] - y_extra
if offset[1] > y_extra:
print("2nd dim offset exceeds image dim")
offset[1] = 0
z_extra = latticeMovieImage.shape[2]%patch_size
z_size = latticeMovieImage.shape[2] - z_extra
if offset[2] > z_extra:
print("3rd dim offset exceeds image dim")
offset[2] = 0
latticeMovieImage = latticeMovieImage[offset[0]:x_size+offset[0], offset[1]:y_size+offset[1], offset[2]:z_size+offset[2]]
latticeMovieMask = latticeMovieMask[offset[0]:x_size+offset[0], offset[1]:y_size+offset[1], offset[2]:z_size+offset[2]]
print("Image cropped to: " + str(x_size) + ", " + str(y_size) + ", " + str(z_size))
print(latticeMovieImage.shape)
print(latticeMovieMask.shape)
print(np.amax(latticeMovieMask))
# -
def filter_patches(lattice_patches, mask_patches, percent_covered=1e-10):
zero_mask_ids = []
for patch_index in range (0, mask_patches.shape[0]):
patch = mask_patches[patch_index]
if np.count_nonzero(patch == 256.0) / (mask_patches.shape[1]**3) < percent_covered:  # mask is (almost) all zeros
zero_mask_ids.append(patch_index)
lattice_patches = np.delete(lattice_patches, zero_mask_ids, axis=0)
mask_patches = np.delete(mask_patches, zero_mask_ids, axis=0)
return lattice_patches, mask_patches
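The cropping above trims each axis to a multiple of `patch_size` so `view_as_blocks` divides the volume evenly. The same crop-and-split can be checked in pure NumPy; this sketch uses `reshape`/`transpose`, which is equivalent to `view_as_blocks` followed by the flattening reshape used below (the toy volume shape is illustrative):

```python
import numpy as np

patch_size = 2
vol = np.arange(4 * 4 * 6).reshape(4, 4, 6)

# Trim each axis to a multiple of patch_size (here the shape already divides evenly).
x, y, z = (s - s % patch_size for s in vol.shape)
vol = vol[:x, :y, :z]

# Split into non-overlapping cubes: one (blocks, within-block) axis pair per
# dimension, group the block axes together, then flatten them into one axis.
blocks = (vol.reshape(x // patch_size, patch_size,
                      y // patch_size, patch_size,
                      z // patch_size, patch_size)
             .transpose(0, 2, 4, 1, 3, 5)
             .reshape(-1, patch_size, patch_size, patch_size))
print(blocks.shape)  # (12, 2, 2, 2) -- 2 * 2 * 3 cubes of side 2
```

The first block corresponds to `vol[:patch_size, :patch_size, :patch_size]`, matching the ordering `view_as_blocks` produces.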
# +
lattice_patches = view_as_blocks(latticeMovieImage, block_shape=(patch_size, patch_size, patch_size))
lattice_patches = lattice_patches.reshape(int(x_size/patch_size)*int(y_size/patch_size)*int(z_size/patch_size), patch_size, patch_size, patch_size)
print(lattice_patches.shape)
mask_patches = view_as_blocks(latticeMovieMask, block_shape=(patch_size, patch_size, patch_size))
mask_patches = mask_patches.reshape(int(x_size/patch_size)*int(y_size/patch_size)*int(z_size/patch_size), patch_size, patch_size, patch_size)
lattice_patches, mask_patches = filter_patches(lattice_patches, mask_patches)
print(lattice_patches.shape)
# +
num_patches = lattice_patches.shape[0]
for k in range(0, num_patches):
x_file = lattice_patches[k].astype('uint16')
y_file = mask_patches[k].astype('uint16')
metadata_x = dict(microscope='joh', shape=x_file.shape, dtype=x_file.dtype.str)
metadata_x = json.dumps(metadata_x)
metadata_y = dict(microscope='joh', shape=y_file.shape, dtype=y_file.dtype.str)
metadata_y = json.dumps(metadata_y)
os.mkdir(split_directory+"train/Region0_"+str(k)+"/")
skimage.external.tifffile.imsave(split_directory+"train/Region0_"+str(k)+"/"+"lattice_light_sheet.tif", x_file, description=metadata_x)
skimage.external.tifffile.imsave(split_directory+"train/Region0_"+str(k)+"/"+"truth.tif", y_file, description=metadata_y)
# -
# (source: src/1GenPSNR100Data.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="hxrhkGBVY4UN" colab_type="text"
# ### Deep Kung-Fu with advantage actor-critic
#
# In this notebook you'll build a deep reinforcement learning agent for Atari [Kung-Fu Master](https://gym.openai.com/envs/KungFuMaster-v0/) that uses a recurrent neural net.
#
# 
# + id="zRUw7nQHY4UO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="c4daabd9-fdac-43c4-fe63-4ad59a2837fc"
import sys, os
if 'google.colab' in sys.modules:
# https://github.com/yandexdataschool/Practical_RL/issues/256
# !pip uninstall tensorflow --yes
# !pip uninstall keras --yes
# !pip install tensorflow-gpu==1.13.1
# !pip install keras==2.2.4
if not os.path.exists('.setup_complete'):
# !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
# !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/week08_pomdp/atari_util.py
# !touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
# !bash ../xvfb start
os.environ['DISPLAY'] = ':1'
# + id="RhkyrISI5lA5" colab_type="code" colab={}
"""
A thin wrapper for openAI gym environments that maintains a set of parallel games and has a method to generate
interaction sessions given agent one-step applier function.
"""
import numpy as np
# A whole lot of space invaders
class EnvPool(object):
def __init__(self, actor, critic, make_env, n_parallel_games=1):
"""
A special class that handles training on multiple parallel sessions
and is capable of some auxiliary actions like evaluating the agent on one game session (see .evaluate()).
:param actor: Policy network that chooses actions in the environment.
:param critic: Value network evaluated alongside the actor.
:param make_env: Factory that produces environments OR a name of the gym environment.
:param n_parallel_games: Number of parallel games. One game by default.
"""
# Create atari games.
self.actor = actor
self.critic = critic
self.make_env = make_env
self.envs = [self.make_env() for _ in range(n_parallel_games)]
# Initial observations.
self.prev_observations = [env.reset() for env in self.envs]
# Agent memory variables (if you use recurrent networks).
self.prev_memory_states_actor = actor.get_initial_state(n_parallel_games)
self.prev_memory_states_critic = critic.get_initial_state(n_parallel_games)
# Whether particular session has just been terminated and needs
# restarting.
self.just_ended = [False] * len(self.envs)
def interact(self, n_steps=100, verbose=False):
"""Generate interaction sessions with the Atari environments (OpenAI Gym).
Sessions will have length n_steps. Each time one of the games finishes, it is immediately reset,
and this is recorded in is_alive_seq (see returned values).
:param n_steps: Length of an interaction.
:returns: observation_seq, action_seq, reward_seq, is_alive_seq
:rtype: a bunch of tensors [batch, tick, ...]
"""
def env_step(i, action):
if not self.just_ended[i]:
new_observation, cur_reward, is_done, info = self.envs[i].step(action)
if is_done:
# Game ends now, will finalize on next tick.
self.just_ended[i] = True
# note: is_alive=True in any case because environment is still
# alive (last tick alive) in our notation.
return new_observation, cur_reward, True, info
else:
# Reset environment, get new observation to be used on next
# tick.
new_observation = self.envs[i].reset()
# Reset memory for new episode.
initial_memory_state_actor = self.actor.get_initial_state(batch_size=1)
initial_memory_state_critic = self.critic.get_initial_state(batch_size=1)
for m_i in range(len(new_memory_states_actor)):
new_memory_states_actor[m_i][i] = initial_memory_state_actor[m_i][0]
for m_i in range(len(new_memory_states_critic)):
new_memory_states_critic[m_i][i] = initial_memory_state_critic[m_i][0]
if verbose:
print("env %i reloaded" % i)
self.just_ended[i] = False
return new_observation, 0, False, {'end': True}
history_log = []
last_prev_mem_state_actor = self.prev_memory_states_actor
last_prev_mem_state_critic = self.prev_memory_states_critic
for i in range(n_steps):
new_memory_states_actor, logits = self.actor.step(self.prev_memory_states_actor,
self.prev_observations)
sampled_actions = self.actor.sample_actions(logits)
new_memory_states_critic, state_values = self.critic.step(self.prev_memory_states_critic,
self.prev_observations)
new_observations, cur_rewards, is_alive, infos = zip(
*map(env_step, range(len(self.envs)), sampled_actions))
# Append data tuple for this tick.
history_log.append(
(self.prev_observations, sampled_actions, cur_rewards, is_alive))
self.prev_observations = new_observations
last_prev_mem_state_actor = self.prev_memory_states_actor
last_prev_mem_state_critic = self.prev_memory_states_critic
self.prev_memory_states_actor = new_memory_states_actor
self.prev_memory_states_critic = new_memory_states_critic
# add last observation
#dummy_actions = [0] * len(self.envs)
#dummy_rewards = [0] * len(self.envs)
#dummy_mask = [1] * len(self.envs)
#history_log.append(
# (self.prev_observations,
# dummy_actions,
# dummy_rewards,
# dummy_mask))
# cast to numpy arrays,
# transpose from [time, batch, ...] to [batch, time, ...]
history_log = [
np.array(tensor).swapaxes(0, 1)
for tensor in zip(*history_log)
]
observation_seq, action_seq, reward_seq, is_alive_seq = history_log
return observation_seq, action_seq, reward_seq, is_alive_seq, last_prev_mem_state_critic
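`interact` appends one tuple per tick and then flips the leading axes from `[time, batch, ...]` to `[batch, time, ...]` with `swapaxes(0, 1)`. A minimal illustration of that transposition on a toy reward log:

```python
import numpy as np

n_steps, n_envs = 5, 3
# One entry per tick, each holding a reward per environment: shape [time, batch].
reward_log = [np.arange(n_envs) + t for t in range(n_steps)]

reward_seq = np.array(reward_log).swapaxes(0, 1)
print(reward_seq.shape)  # (3, 5) -> one row of 5 ticks per environment
print(reward_seq[1])     # rewards env 1 received over time: [1 2 3 4 5]
```

After the swap, indexing by the first axis gives each environment's full trajectory, which is the layout the training code expects.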
# + id="cm35vEADY4UT" colab_type="code" colab={}
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from IPython.display import display
# + [markdown] id="4pI8-QsiY4UW" colab_type="text"
# For starters, let's take a look at the game itself:
#
# * Image resized to 42x42 and converted to grayscale to run faster
# * Agent sees last 4 frames of game to account for object velocity
# + id="Qg8qz_iMY4UX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="c241c883-71d2-4d48-c4c0-af8d7caf0889"
import gym
from atari_util import PreprocessAtari
def make_env():
env = gym.make("KungFuMasterDeterministic-v0")
env = PreprocessAtari(
env, height=42, width=42,
crop=lambda img: img[60:-30, 5:],
dim_order='tensorflow',
color=False, n_frames=4)
return env
env = make_env()
obs_shape = env.observation_space.shape
n_actions = env.action_space.n
print("Observation shape:", obs_shape)
print("Num actions:", n_actions)
print("Action names:", env.env.env.get_action_meanings())
# + id="Z658owR2Y4Uc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 412} outputId="e037f7a4-23d1-47a3-9373-005a80da7c74"
s = env.reset()
for _ in range(100):
s, _, _, _ = env.step(env.action_space.sample())
plt.title('Game image')
plt.imshow(env.render('rgb_array'))
plt.show()
plt.title('Agent observation (4-frame buffer)')
plt.imshow(s.transpose([0, 2, 1]).reshape([42,-1]))
plt.show()
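The observation is a 42x42x4 stack of grayscale frames, and the `transpose`/`reshape` in the plot above lays the four frames out side by side. A small sketch of that layout trick (random data stands in for a real observation):

```python
import numpy as np

s = np.random.rand(42, 42, 4)  # height x width x n_frames, as dim_order='tensorflow'

# Move the frame axis next to width, then unroll: 4 frames tiled horizontally.
tiled = s.transpose([0, 2, 1]).reshape([42, -1])
print(tiled.shape)  # (42, 168)
```

Each 42-column slice of `tiled` is one frame of the buffer, oldest to newest, which is why the plotted observation looks like a film strip.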
# + [markdown] id="U-iqdJDMY4Uf" colab_type="text"
# ### Simple agent for fully-observable MDP
#
# Here's a code for an agent that only uses feedforward layers. Please read it carefully: you'll have to extend it later!
# + id="wiTI0SnkY4Ug" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="d3b2437c-2948-49d0-84ac-9af189cb7951"
import tensorflow as tf
from keras.layers import Conv2D, Dense, Flatten
from tensorflow.nn.rnn_cell import LSTMCell, LSTMStateTuple
tf.reset_default_graph()
sess = tf.InteractiveSession()
# + id="Ff74wi7RY4Uj" colab_type="code" colab={}
class SimpleRecurrentAgent_Q:
def __init__(self, name, obs_shape, n_actions, reuse=False):
"""A simple actor-critic agent"""
with tf.variable_scope(name, reuse=reuse):
# Note: number of units/filters is arbitrary, you can and should change it at your will
self.conv0 = Conv2D(32, (4, 4), strides=(2, 2), activation='relu')
self.conv1 = Conv2D(64, (3, 3), strides=(2, 2), activation='relu')
self.conv2 = Conv2D(64, (3, 3), strides=(1, 1), activation='relu')
self.flatten = Flatten()
self.hid = Dense(128, activation='relu')
# Actor: pi(a|s)
self.logits = Dense(n_actions)
# Recurrent Layer
self.hid_size = 128
self.rnn0 = LSTMCell(self.hid_size, state_is_tuple = True)
# prepare a graph for agent step
initial_state_c = tf.placeholder(dtype=tf.float32,
shape=[None, self.hid_size],
name="init_state_c")
initial_state_h = tf.placeholder(dtype=tf.float32,
shape=[None, self.hid_size],
name="init_state_h")
self.prev_state_placeholder = LSTMStateTuple(initial_state_c, initial_state_h)
self.obs_t = tf.placeholder(tf.float32, [None, ] + list(obs_shape))
self.next_state, self.agent_outputs = self.symbolic_step(self.prev_state_placeholder,
self.obs_t)
self.variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
scope = name)
print("\n" + "Model Summary (" + name + ")")
for t in self.variables:
print(t)
def symbolic_step(self, prev_state, obs_t):
"""Takes agent's previous step and observation, returns next state and whatever it needs to learn (tf tensors)"""
nn = self.conv0(obs_t)
nn = self.conv1(nn)
nn = self.conv2(nn)
nn = self.flatten(nn)
nn = self.hid(nn)
# Apply recurrent neural net for one step here.
# The recurrent cell should take the last feedforward dense layer as input.
batch_ones = tf.ones(tf.shape(obs_t)[0])
new_out, new_state_ch = tf.nn.dynamic_rnn(self.rnn0, nn[:,None],
initial_state = prev_state,
sequence_length = batch_ones)
logits = self.logits(new_out[:,0])
return new_state_ch, logits
def get_initial_state(self, batch_size):
        # LSTMStateTuple([batch_size x hid_size], [batch_size x hid_size])
a = np.zeros([batch_size, self.hid_size], dtype=np.float32)
return LSTMStateTuple(a, a)
# Instantiation
def step(self, prev_state, obs_t):
"""Same as symbolic state except it operates on numpy arrays"""
sess = tf.get_default_session()
feed_dict = {self.obs_t: obs_t,
self.prev_state_placeholder: prev_state}
return sess.run([self.next_state, self.agent_outputs], feed_dict)
def sample_actions(self, logits):
"""pick actions given numeric agent outputs (np arrays)"""
        logits = np.asarray(logits)
        logits = logits - logits.max(axis=-1, keepdims=True)  # subtract max for numerical stability
        policy = np.exp(logits) / np.sum(np.exp(logits), axis=-1, keepdims=True)
return [np.random.choice(len(p), p=p) for p in policy]
# + id="i5BXsxvgmDKX" colab_type="code" colab={}
class SimpleRecurrentAgent_V:
def __init__(self, name, obs_shape, n_actions, reuse=False):
"""A simple actor-critic agent"""
with tf.variable_scope(name, reuse=reuse):
            # Note: number of units/filters is arbitrary, you can and should change it at your will
self.conv0 = Conv2D(32, (4, 4), strides=(2, 2), activation='relu')
self.conv1 = Conv2D(64, (3, 3), strides=(2, 2), activation='relu')
self.conv2 = Conv2D(64, (3, 3), strides=(1, 1), activation='relu')
self.flatten = Flatten()
self.hid = Dense(128, activation='relu')
# Critic: State Values
self.state_value = Dense(1)
# Recurrent Layer
self.hid_size = 128
self.rnn0 = LSTMCell(self.hid_size, state_is_tuple = True)
# prepare a graph for agent step
initial_state_c = tf.placeholder(dtype=tf.float32,
shape=[None, self.hid_size],
name="init_state_c")
initial_state_h = tf.placeholder(dtype=tf.float32,
shape=[None, self.hid_size],
name="init_state_h")
self.prev_state_placeholder = LSTMStateTuple(initial_state_c, initial_state_h)
self.obs_t = tf.placeholder(tf.float32, [None, ] + list(obs_shape))
self.next_state, self.agent_outputs = self.symbolic_step(self.prev_state_placeholder,
self.obs_t)
self.variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
scope = name)
print("\n" + "Model Summary (" + name + ")")
for t in self.variables:
print(t)
def symbolic_step(self, prev_state, obs_t):
"""Takes agent's previous step and observation, returns next state and whatever it needs to learn (tf tensors)"""
nn = self.conv0(obs_t)
nn = self.conv1(nn)
nn = self.conv2(nn)
nn = self.flatten(nn)
nn = self.hid(nn)
# Apply recurrent neural net for one step here.
# The recurrent cell should take the last feedforward dense layer as input.
batch_ones = tf.ones(tf.shape(obs_t)[0])
new_out, new_state_ch = tf.nn.dynamic_rnn(self.rnn0, nn[:,None],
initial_state = prev_state,
sequence_length = batch_ones)
state_value = self.state_value(new_out[:,0])
return new_state_ch, state_value
def get_initial_state(self, batch_size):
        # LSTMStateTuple([batch_size x hid_size], [batch_size x hid_size])
a = np.zeros([batch_size, self.hid_size], dtype=np.float32)
return LSTMStateTuple(a, a)
# Instantiation
def step(self, prev_state, obs_t):
"""Same as symbolic state except it operates on numpy arrays"""
sess = tf.get_default_session()
feed_dict = {self.obs_t: obs_t,
self.prev_state_placeholder: prev_state}
return sess.run([self.next_state, self.agent_outputs], feed_dict)
def get_state_values(self, prev_state, obs_t):
sess = tf.get_default_session()
feed_dict = {self.obs_t: obs_t,
self.prev_state_placeholder: prev_state}
return sess.run(self.agent_outputs, feed_dict)
# + id="lKfAXDe1Y4Un" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 709} outputId="0cf0afb6-5cad-4b11-9c6a-2766d6d75133"
n_parallel_games = 10
rollout_length = 25
gamma = 0.99
actor = SimpleRecurrentAgent_Q('actor', obs_shape, n_actions)
critic = SimpleRecurrentAgent_V('critic', obs_shape, n_actions)
sess.run(tf.global_variables_initializer())
# + id="xjgYtYdwY4Uq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 118} outputId="39688903-8eeb-4df3-e6d4-1f1d1871410f"
state = [env.reset()]
_, logits = actor.step(actor.get_initial_state(1), state)
_, state_values = critic.step(critic.get_initial_state(1), state)
print("action logits:\n", logits)
print("state values:\n", state_values)
# + [markdown] id="klqvnKWdY4Us" colab_type="text"
# ### Let's play!
# Let's build a function that measures the agent's average reward.
# + id="ln4WuILsY4Ut" colab_type="code" colab={}
def evaluate(actor, env, n_games=1):
"""Plays an a game from start till done, returns per-game rewards """
game_rewards = []
for _ in range(n_games):
# initial observation and memory
observation = env.reset()
prev_memories = actor.get_initial_state(1)
total_reward = 0
while True:
new_memories, readouts = actor.step(prev_memories,
observation[None, ...])
action = actor.sample_actions(readouts)
observation, reward, done, info = env.step(action[0])
total_reward += reward
prev_memories = new_memories
if done:
break
game_rewards.append(total_reward)
return game_rewards
# + id="PArnjUsxI4Fq" colab_type="code" colab={}
#import gym.wrappers
#with gym.wrappers.Monitor(make_env(), directory="videos", force=True) as env_monitor:
# rewards = evaluate(actor, env_monitor, n_games=3)
#print(rewards)
# + id="FrmfSJAEY4Uy" colab_type="code" colab={}
# Show video. This may not work in some setups. If it doesn't
# work for you, you can download the videos and view them locally.
#from pathlib import Path
#from IPython.display import HTML
#video_names = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4'])
#HTML("""
#<video width="640" height="480" controls>
# <source src="{}" type="video/mp4">
#</video>
#""".format(video_names[-1])) # You can also try other indices
# + [markdown] id="Ff_83RwqY4U0" colab_type="text"
# ### Training on parallel games
#
# We introduce a class called EnvPool - it's a tool that handles multiple environments for you. Here's how it works:
# 
# + id="8Q77ZJKGY4U1" colab_type="code" colab={}
# TARGET AGENT is used to sample
pool = EnvPool(actor, critic, make_env, n_parallel_games)
# + id="WF9OpsgRY4U3" colab_type="code" colab={}
# for each of n_parallel_games, take "rollout_length" steps
rollout_obs, rollout_actions, rollout_rewards, rollout_mask, last_prev_mem_state_critic = pool.interact(rollout_length)
# + id="4HQ7gxu3Y4U8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="adf5c690-bac5-4a06-9193-f2e2422e8aa1"
print("Actions shape:", rollout_actions.shape)
print("Rewards shape:", rollout_rewards.shape)
print("Mask shape:", rollout_mask.shape)
print("Observations shape: ", rollout_obs.shape)
print("Last Previous Memory State: ", (last_prev_mem_state_critic[0].shape, last_prev_mem_state_critic[1].shape))
# + [markdown] id="mhyVE0AzY4VA" colab_type="text"
# # Actor-critic objective
#
# Here we define a loss function that uses rollout above to train advantage actor-critic agent.
#
#
# Our loss consists of three components:
#
# * __The policy "loss"__
# $$ \hat J = {1 \over T} \cdot \sum_t { \log \pi(a_t | s_t) } \cdot A_{const}(s,a) $$
# * This function has no meaning in and of itself, but it is built such that
# * $ \nabla \hat J = {1 \over T} \cdot \sum_t { \nabla \log \pi(a_t | s_t) } \cdot A(s,a) \approx \nabla E_{s, a \sim \pi} R(s,a) $
# * Therefore, if we __maximize__ $\hat J$ via gradient ascent, we maximize the expected reward
#
#
# * __The value "loss"__
# $$ L_{td} = {1 \over T} \cdot \sum_t { [r + \gamma \cdot V_{const}(s_{t+1}) - V(s_t)] ^ 2 }$$
# * Ye Olde TD loss from Q-learning and the like
# * If we minimize this loss, V(s) will converge to $V_\pi(s) = E_{a \sim \pi(a | s)} R(s,a) $
#
#
# * __Entropy Regularizer__
# $$ H = - {1 \over T} \sum_t \sum_a {\pi(a|s_t) \cdot \log \pi (a|s_t)}$$
# * If we __maximize__ entropy, we discourage the agent from prematurely assigning zero probability to actions (a.k.a. exploration)
#
#
# So we minimize a linear combination $L_{td} - \hat J - H$
#
#
# __One more thing:__ since we train on T-step rollouts, we can use N-step formula for advantage for free:
# * At the last step, $A(s_t,a_t) = r(s_t, a_t) + \gamma \cdot V(s_{t+1}) - V(s) $
# * One step earlier, $A(s_t,a_t) = r(s_t, a_t) + \gamma \cdot r(s_{t+1}, a_{t+1}) + \gamma ^ 2 \cdot V(s_{t+2}) - V(s) $
# * Et cetera, et cetera. This way the agent starts training much faster, since its estimate of A(s,a) depends less on its (imperfect) value function and more on actual rewards. There's also a [nice generalization](https://arxiv.org/abs/1506.02438) of this.
#
#
# __Note:__ it's also a good idea to scale rollout_len up to learn longer sequences. You may wish to set it to >=20, or start at 10 and scale it up as training progresses.
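# The backward recursion above, G_t = r_t + gamma * G_{t+1} with the rollout tail bootstrapped by V(s_T), can be sketched in plain numpy. This is an illustrative helper, not code from the notebook; the function name and the toy numbers are assumptions.

```python
import numpy as np

def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    # Walk the rollout backwards: G_t = r_t + gamma * G_{t+1},
    # seeding the recursion with the critic's estimate V(s_T).
    returns = np.zeros(len(rewards), dtype=np.float64)
    running = bootstrap_value
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: gamma=0.5, rewards [1, 1, 1], V(s_T)=0 gives [1.75, 1.5, 1.0]
print(n_step_returns([1.0, 1.0, 1.0], 0.0, gamma=0.5))
```

# The acc_rewards function defined later in this notebook implements the same idea, with the extra reward scaling and truncation used by the training loop.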
# + id="d_98omkZY4VA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 171} outputId="5e83d0bc-6453-47e6-d98a-f98ffc303136"
# Updatable agent
# [batch, time, h, w, c]
observations_ph = tf.placeholder('float32', [None, None,] + list(obs_shape))
sampled_actions_ph = tf.placeholder('int32', (None, None,))
mask_ph = tf.placeholder('float32', (None, None,))
rewards_ph = tf.placeholder('float32', (None, None,))
cumulative_rewards_ph = tf.placeholder('float32', (None, None,))
init_state_actor_ph = actor.prev_state_placeholder
init_state_critic_ph = critic.prev_state_placeholder
# get new_state, (actor->logits, critic->state_value)
next_state_actor, logits = actor.symbolic_step(init_state_actor_ph,
observations_ph[:, 0])
next_state_critic, state_values = critic.symbolic_step(init_state_critic_ph,
observations_ph[:, 0])
def f(stack, obs_t):
init_state_actor, init_state_critic = stack[0], stack[1]
next_state_actor, logits = actor.symbolic_step(init_state_actor, obs_t)
next_state_critic, state_values = critic.symbolic_step(init_state_critic, obs_t)
return [next_state_actor, next_state_critic, logits, state_values]
[next_state_seq_actor, next_state_seq_critic, logits_seq, state_values_seq] = tf.scan(
f,
initializer = [init_state_actor_ph, init_state_critic_ph, logits, state_values],
elems = tf.transpose(observations_ph, [1, 0, 2, 3, 4])
# elem.shape = [time, batch, h, w, c]
)
print(next_state_seq_actor)
print(next_state_seq_critic)
print(logits_seq)
print(state_values_seq)
# from [time, batch] back to [batch, time]
logits_seq = tf.transpose(logits_seq, [1, 0, 2])
state_values_seq = tf.transpose(state_values_seq, [1, 0, 2])
next_state_seq_actor = [tf.transpose(tensor, [1, 0] + list(range(2, tensor.shape.ndims)))
for tensor in next_state_seq_actor]
next_state_seq_critic = [tf.transpose(tensor, [1, 0] + list(range(2, tensor.shape.ndims)))
for tensor in next_state_seq_critic]
print(logits_seq)
print(state_values_seq)
print(next_state_seq_actor)
print(next_state_seq_critic)
# + id="Yf8tHYyRY4VE" colab_type="code" colab={}
# Updatable agent
# actor-critic losses
# actor -> logits, with shape: [batch, time, n_actions]
# critic -> states, with shape: [batch, time, 1]
r = 0
logprobs_seq = tf.nn.log_softmax(logits_seq)
logp_actions = tf.reduce_sum(logprobs_seq * tf.one_hot(sampled_actions_ph, n_actions),
axis=-1)[:, r:-1]
current_rewards = rewards_ph[:, r:-1] / 100.
current_state_values = state_values_seq[:, r:-1, 0]
next_state_values = state_values_seq[:, r+1:, 0] * mask_ph[:, r:-1]
# policy gradient
# compute 1-step advantage using current_rewards, current_state_values and next_state_values
# note: the width of cumulative_rewards_ph must be adjusted manually in the code to match "r"
advantage = cumulative_rewards_ph - current_state_values
assert advantage.shape.ndims == 2
# compute policy entropy given logits_seq. Mind the sign!
policy = tf.nn.softmax(logits_seq, axis=-1)
entropy = - tf.reduce_sum(policy * logprobs_seq, axis=-1)
assert entropy.shape.ndims == 2
actor_loss = - tf.reduce_mean(logp_actions * tf.stop_gradient(advantage))
actor_loss -= 0.001 * tf.reduce_mean(entropy)
# Prepare Temporal Difference error (States)
target_state_values = (current_rewards + gamma * next_state_values + cumulative_rewards_ph) / 2
critic_loss = tf.reduce_mean(
(current_state_values - tf.stop_gradient(target_state_values))**2)
train_step = tf.train.AdamOptimizer(1e-5).minimize(actor_loss + critic_loss)
# + id="cm6ND-61Y4VH" colab_type="code" colab={}
sess.run(tf.global_variables_initializer())
# + [markdown] id="aNl_YI7JY4VL" colab_type="text"
# # Train
#
# just run train step and see if agent learns any better
# + id="FengAe_yPtZg" colab_type="code" colab={}
def acc_rewards(rewards, last_state_values, r=10, gamma=0.99):
# rewards at each step [batch, time]
# in a phase, last previous memory state [batch, state_dim]
# discount for reward
"""
Take a list of immediate rewards r(s,a) for the whole session
and compute cumulative returns (a.k.a. G(s,a) in Sutton '16).
G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
A simple way to compute cumulative rewards is to iterate from the last
to the first timestep and compute G_t = r_t + gamma*G_{t+1} recurrently
You must return an array/list of cumulative rewards with as many elements as in the initial rewards.
"""
curr_rewards = rewards / 100.
b_size, time = rewards.shape
acc_reward = np.zeros((b_size, time-r-1), dtype='float32')
acc_reward[:,time-r-2] = last_state_values
for i in reversed(np.arange(time-r-2)):
acc_reward[:,i] = curr_rewards[:,i] + gamma * acc_reward[:,i+1]
return acc_reward
# + id="SAkqEf_BY4VM" colab_type="code" colab={}
def sample_batch(rollout_length=rollout_length):
rollout_obs, rollout_actions, rollout_rewards, rollout_mask, prev_state_critic = pool.interact(rollout_length)
last_state_values = critic.get_state_values(prev_state_critic, rollout_obs[:,-1])
rollout_cumulative_rewards = acc_rewards(rollout_rewards, last_state_values[:,0], r)
feed_dict = {
init_state_actor_ph: actor.get_initial_state(n_parallel_games),
init_state_critic_ph: critic.get_initial_state(n_parallel_games),
observations_ph: rollout_obs,
sampled_actions_ph: rollout_actions,
mask_ph: rollout_mask,
rewards_ph: rollout_rewards,
cumulative_rewards_ph: rollout_cumulative_rewards
}
return feed_dict
# + id="uoSvv_ewY4VO" colab_type="code" colab={}
from IPython.display import clear_output
from tqdm import trange
from pandas import DataFrame
moving_average = lambda x, **kw: DataFrame(
{'x': np.asarray(x)}).x.ewm(**kw).mean().values
rewards_history = []
# + id="S_3kOalqY4VR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="380c64c6-832d-4c85-a1de-a9098263f1d4"
iters = 2001
for i in range(iters):
train_step.run(sample_batch())
if i % 100 == 0:
rewards_history.append(np.mean(evaluate(actor, env, n_games=1)))
clear_output(True)
plt.plot(rewards_history, label='rewards')
plt.plot(moving_average(np.array(rewards_history),
span=rollout_length), label='rewards ewma@'+str(rollout_length))
plt.legend()
plt.show()
if rewards_history[-1] >= 20000:
print("Your trainable_agent has just passed the minimum homework threshold")
break
# + [markdown] id="Il8-jD91Y4VV" colab_type="text"
# ### "Final" evaluation
# + id="1rPp_T9yB91q" colab_type="code" colab={}
import gym.wrappers
with gym.wrappers.Monitor(make_env(), directory="videos", force=True) as env_monitor:
    final_rewards = evaluate(actor, env_monitor, n_games=n_parallel_games)
print("Final mean reward", np.mean(final_rewards))
# + id="QBeg64nWY4VY" colab_type="code" colab={}
# Show video. This may not work in some setups. If it doesn't
# work for you, you can download the videos and view them locally.
from pathlib import Path
from IPython.display import HTML
video_names = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4'])
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format(video_names[-1])) # You can also try other indices
# + [markdown] id="6t_y0TcOY4Va" colab_type="text"
# ### POMDP setting
#
# The Atari game we're working with is actually a POMDP: your agent needs to know the timing at which enemies spawn and move, but cannot do so unless it has some memory.
#
# Let's design another agent that has a recurrent neural net memory to solve this.
#
# __Note:__ it's also a good idea to scale rollout_len up to learn longer sequences. You may wish to set it to >=20, or start at 10 and scale it up as training progresses.
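# As a hedged illustration of what a recurrent memory cell contributes here, below is one GRU update written in plain numpy (original Cho et al. gating convention; the weight dictionaries, shapes, and zero initialization are assumptions for the demo, not trained parameters):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U):
    # One GRU memory update (Cho et al. convention, biases omitted):
    z = sigmoid(x @ W['z'] + h @ U['z'])               # update gate
    r = sigmoid(x @ W['r'] + h @ U['r'])               # reset gate
    h_tilde = np.tanh(x @ W['h'] + (r * h) @ U['h'])   # candidate memory
    return (1.0 - z) * h + z * h_tilde                 # blended new memory

# Demo with all-zero weights: both gates sit at 0.5 and the candidate
# state is 0, so the memory simply halves at each step.
W = {k: np.zeros((8, 16)) for k in 'zrh'}
U = {k: np.zeros((16, 16)) for k in 'zrh'}
h0 = np.ones((1, 16))
h1 = gru_step(np.ones((1, 8)), h0, W, U)
```

# With trained (nonzero) weights, the gates let the cell decide per-unit what to keep and what to overwrite, which is exactly the memory the POMDP agent needs.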
# + [markdown] id="fJgo_LgQY4Vg" colab_type="text"
# ### Now let's train it!
# + id="cyzuw5WcY4Vh" colab_type="code" colab={}
# A whole lot of your code here: train the new agent with GRU memory.
# - create pool
# - write loss functions and training op
# - train
# You can reuse most of the code with zero to few changes
| week08_pomdp/w8_practice_lstm_distinct_tf_v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="jQvQLZH99PFw"
# **Text Classification with ArabicTransformer and TPU**
#
# * First, you need to activate TPU by going to Runtime-> Change RunTime Type -> TPU .
#
# * This example was tested with HuggingFace Transformers v4.11.2. If you experience any issues, roll back to this version.
#
# * This example uses PyTorchXLA, a library that lets PyTorch code run on TPUs. Pre-processing the dataset may be slow the first time you run the code, but only the first time. If you change the batch size, pre-processing will be slow again, so try to keep the batch size fixed while grid-searching for the best hyperparameters.
#
# * In our paper, we used the original (PyTorch) implementation of the Funnel Transformer (https://github.com/laiguokun/Funnel-Transformer) and a V100 GPU, which is no longer provided to Google Colab Pro users. We will update you later on our modified version of the Funnel Transformer library. In the meantime, find the best hyperparameters yourself rather than relying on the settings in this notebook, since the implementation differs from our paper. That said, the hyperparameters used in this example are still close to what we reported, and you may even get better results with our model than what we reported if you extend the grid search (:
#
# * You can easily run this code on a GPU with O2 mixed precision by changing the runtime to GPU and replacing this line in the fine-tuning code
#
# ```
# # # !python /content/transformers/examples/pytorch/xla_spawn.py --num_cores=8 transformers/examples/pytorch/text-classification/run_glue.py
# ```
#
# with
#
# ```
# # # !python transformers/examples/pytorch/text-classification/run_glue.py
# ```
#
#
# * PyTorch >=1.6 allows you to use Automatic Mixed Precision (AMP) without APEX, since AMP is part of the native PyTorch library.
#
#
# * This example is based on the GLUE fine-tuning example from the Hugging Face team, but it works with any text-classification task and can fine-tune any Arabic language model uploaded to the HuggingFace Hub: https://huggingface.co/models . A text-classification task is one where we have a sentence and a label, as in sentiment analysis. Name the column holding the sentence you want to classify sentence1 and the label column "label". If you want to classify two sentences, name the first sentence1 and the second sentence2.
#
# * When you use PyTorchXLA, be aware that the effective batch size is batch_size*8, since the TPU has 8 cores. In this example, we choose a per-core batch size of 4 to get an effective batch size of 32.
#
# * We did not include language models that use pre-segmentation (FARASA), such as AraBERTv2, in the list of models below. You can do the pre-segmentation on your own side using the code that AUB MIND published here: https://github.com/aub-mind/arabert . Then use our code to fine-tune AraBERTv2 or similar models.
#
# * If the model scale changes (small, base, large) or the architecture differs (Funnel, BERT, ELECTRA, ALBERT), you need to change your hyperparameters. Evaluating all models with the same hyperparameters across different scales and architectures is bad practice when reporting results.
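# As a sketch of the column-naming convention described above, an arbitrary labelled-text table can be reshaped with pandas; the source column names "text" and "sentiment" are assumptions about a hypothetical input file, not from this notebook.

```python
import pandas as pd

# Toy frame standing in for an arbitrary labelled-text CSV.
df = pd.DataFrame({"text": ["great service", "bad service"],
                   "sentiment": ["POS", "NEG"]})

# Rename to the layout the fine-tuning script expects:
# the text column becomes sentence1 and the target becomes label.
df = df.rename(columns={"text": "sentence1", "sentiment": "label"})
```

# Writing the result with df.to_csv('train.csv', index=False) would then produce a file in the format the run_glue.py script consumes.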
# + colab={"base_uri": "https://localhost:8080/"} id="uaihe27TrDHX" outputId="8bf45495-b101-4241-826b-be743326e7c9"
# !git clone https://github.com/huggingface/transformers
# !pip3 install -e transformers
# !pip3 install -r transformers/examples/pytorch/text-classification/requirements.txt
# !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl
# + id="aJsmWd35r8Xx" colab={"base_uri": "https://localhost:8080/"} outputId="516ae82d-0e17-4e5b-91ab-b96f723d9be2"
import pandas as pd
# !rm -r /content/data
# !mkdir -p data/raw/scarcasmv2
# !mkdir -p data/scarcasmv2
# !wget -O data/raw/scarcasmv2/dev.csv https://raw.githubusercontent.com/iabufarha/ArSarcasm-v2/main/ArSarcasm-v2/testing_data.csv
# !wget -O data/raw/scarcasmv2/train.csv https://raw.githubusercontent.com/iabufarha/ArSarcasm-v2/main/ArSarcasm-v2/training_data.csv
df = pd.read_csv(r'data/raw/scarcasmv2/train.csv', header=0,escapechar='\n',usecols = [0,2],names=["sentence1", "label"])
df.to_csv('data/scarcasmv2/train.csv',index=False)
df.to_csv('data/scarcasmv2/train.tsv',sep='\t',index=False)
df = pd.read_csv(r'data/raw/scarcasmv2/dev.csv', header=0, escapechar='\n',usecols = [0,2],names=["sentence1", "label"])
df.to_csv('data/scarcasmv2/dev.csv',index=False)
df.to_csv('data/scarcasmv2/dev.tsv',sep='\t',index=False)
# + id="C4zvQh9IjlOz"
import pandas as pd
from sklearn.metrics import f1_score,classification_report,accuracy_score
def calc_scarcasm(y_pred,y_true):
y_pred=pd.read_csv(y_pred, sep='\t',header=None,usecols=[1] )
y_true=pd.read_csv(y_true,usecols=[1],header=None)
print("Accur Score:",accuracy_score(y_true, y_pred)*100)
print("F1 PN Score:",f1_score(y_true, y_pred,labels=['NEG','POS'],average="macro")*100)
print("########################### Full Report ###########################")
print(classification_report(y_true, y_pred,digits=4,labels=['NEG','POS'] ))
# + [markdown] id="h3cJQVJNKfUA"
# # **ArabicTransformer Small (B4-4-4)**
# + colab={"base_uri": "https://localhost:8080/"} id="4Km3xMPqraxO" outputId="378420d6-ed1f-44e6-e189-10585319c7c3"
import os
model= "sultan/ArabicTransformer-small" #@param ["sultan/ArabicTransformer-small","sultan/ArabicTransformer-intermediate","sultan/ArabicTransformer-large","aubmindlab/araelectra-base-discriminator","asafaya/bert-base-arabic","aubmindlab/bert-base-arabertv02","aubmindlab/bert-base-arabert", "aubmindlab/bert-base-arabertv01","kuisailab/albert-base-arabic","aubmindlab/bert-large-arabertv02"]
task= "scarcasmv2" #@param ["scarcasmv2"]
seed= "42" #@param ["42", "123", "1234","12345","666"]
batch_size = 4 #@param {type:"slider", min:4, max:128, step:4}
learning_rate = "3e-5"#@param ["1e-4", "3e-4", "1e-5","3e-5","5e-5","7e-5"]
epochs_num = 2 #@param {type:"slider", min:1, max:50, step:1}
max_seq_length= "256" #@param ["128", "256", "384","512"]
os.environ['batch_size'] = str(batch_size)
os.environ['learning_rate'] = str(learning_rate)
os.environ['epochs_num'] = str(epochs_num)
os.environ['task'] = str(task)
os.environ['model'] = str(model)
os.environ['max_seq_length'] = str(max_seq_length)
os.environ['seed'] = str(seed)
# !python /content/transformers/examples/pytorch/xla_spawn.py --num_cores=8 transformers/examples/pytorch/text-classification/run_glue.py --model_name_or_path $model \
# --train_file data/$task/train.csv \
# --validation_file data/$task/dev.csv \
# --test_file data/$task/dev.csv \
# --output_dir output_dir/$task \
# --overwrite_cache \
# --seed $seed \
# --overwrite_output_dir \
# --logging_steps 1000000 \
# --max_seq_length $max_seq_length \
# --per_device_train_batch_size $batch_size \
# --learning_rate $learning_rate \
# --warmup_ratio 0.1 \
# --num_train_epochs $epochs_num \
# --save_steps 50000 \
# --do_train \
# --do_predict
# + colab={"base_uri": "https://localhost:8080/"} id="fcub9L6FjjVA" outputId="1d31a5df-4488-4284-bca9-fa2542f0bc8a"
calc_scarcasm('/content/output_dir/scarcasmv2/predict_results_None.txt','/content/data/scarcasmv2/dev.csv')
| Examples/Text_Classification_with_ArabicTransformer_with_PyTorchXLA_on_TPU_or_with_PyTorch_on_GPU.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: CoSpar_envs
# language: python
# name: cospar_envs
# ---
# # Synthetic bifurcation
# We simulated a differentiation process over a bifurcation fork. In this simulation,
# cells are barcoded at the beginning, and the barcodes remain unchanged. In the simulation, we resample clones over time. The first sample is obtained 5 cell cycles after labeling. The dataset has two time points. See Wang et al. (2021) for more details.
import cospar as cs
cs.logging.print_version()
cs.settings.verbosity=2
cs.settings.set_figure_params(format='png',dpi=75,fontsize=14) # use png to reduce file size.
cs.settings.data_path='test' # A relative path for saving data; created if it does not already exist.
cs.settings.figure_path='test' # A relative path for saving figures; created if it does not already exist.
# ## Loading data
adata_orig=cs.datasets.synthetic_bifurcation()
adata_orig.obsm['X_clone']
cs.pl.embedding(adata_orig,color='state_info')
cs.pl.embedding(adata_orig,color='time_info')
# ## Basic clonal analysis
cs.pl.clones_on_manifold(adata_orig,selected_clone_list=[1])
coupling=cs.pl.fate_coupling_from_clones(adata_orig,selected_times='2', color_bar=True)
cs.pl.barcode_heatmap(adata_orig,selected_times='2', color_bar=True)
results=cs.pl.clonal_fate_bias(adata_orig,selected_fate='Fate_A')
results[:5]
# ## Transition map inference
# ### Transition map from multiple clonal time points.
adata=cs.tmap.infer_Tmap_from_multitime_clones(adata_orig,clonal_time_points=['1','2'],smooth_array=[10,10,10],
CoSpar_KNN=20,sparsity_threshold=0.2)
cs.pl.fate_bias(adata,selected_fates=['Fate_A','Fate_B'],used_Tmap='transition_map',
plot_target_state=False,map_backward=True,sum_fate_prob_thresh=0.01)
# ### Transition map from a single clonal time point
adata=cs.tmap.infer_Tmap_from_one_time_clones(adata_orig,initial_time_points=['1'],later_time_point='2',
initialize_method='OT',smooth_array=[10,10,10],
sparsity_threshold=0.2,compute_new=False)
cs.pl.fate_bias(adata,selected_fates=['Fate_A','Fate_B'],used_Tmap='transition_map',
plot_target_state=False,map_backward=True,sum_fate_prob_thresh=0.005)
# ### Transition map from clonal information alone
# +
adata=cs.tmap.infer_Tmap_from_clonal_info_alone(adata_orig)
cs.pl.fate_bias(adata,selected_fates=['Fate_A','Fate_B'],used_Tmap='clonal_transition_map',
plot_target_state=False,map_backward=True,sum_fate_prob_thresh=0.01)
| docs/source/20210120_bifurcation_model_static_barcoding.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise 1 - Two-layer Earth
#
# *Don't forget to hit **Shift+Enter** to run the code in any cell.*
# **PLEASE ENTER YOUR NAME HERE**
# +
# First, import some additional libraries
import numpy as np
import matplotlib.pylab as plt
# %matplotlib notebook
from eq_tools import *
# -
# In the cell below, we define two variables: *radii* and *velocities*.
#
# These variables are *lists* and can hold multiple numbers.
#
# The values in *radii* represent the outer radius of each layer of this hypothetical planet (think crust, mantle, core, etc.). They are given in kilometers (km).
#
# The values in *velocities* represent the velocity that P-waves travel through that layer. They are given in kilometers per second (km/s).
#
# Right now, these values are all set to 0. Edit this so that the values are reflective of your answers for the 2-layer Earth.
#
# *Note that the radius is determined by the distance from the center of the Earth.*
# 2-layer Earth
radii = [0.0, 0.0] # The first number should be the outermost radius (the radius of the Earth)
velocities = [0.0, 0.0]
# You can see a plot of radius versus velocity, for the values you've entered, if you run the following cell.
plot_velocities(radii,velocities)
# Run the cell below to produce an image of where the earthquake waves (drawn as rays) go.
#
# You will also see a plot of arrival time versus angular distance from the spot of the earthquake.
#
# You can edit the code below to change the number of earthquake rays drawn (*nrays*).
#
# If you want to see what the arrival times of a more realistic Earth model look like, after *nrays=50* (or whatever number you change it to) add a comma and then *real_earth=True*.
make_earthquake(radii,velocities,nrays=50)
# ## Improve your model
#
# Go back to the cell where you defined *radii* and *velocities* and start changing the radii (keep the outside radius that of the Earth!) and the velocities of each layer to try to get a better fit to arrival times as measured in the real Earth.
#
# Once you edit that cell, you'll have to rerun the next few cells to get the plots to update.
#
# Keep track of your different models in the table below.
# **Model 1**
#
# | Radius of layer (km) | velocity of layer (km/s) | Notes and comments (did it agree or disagree with the real Earth?) |
# |----------------------|---------------------------|--------------------------------------------------------------------|
# | 0.0 | 0.0 | Comment? |
# | 0.0 | 0.0 | |
#
# **Model 2**
#
# | Radius of layer (km) | velocity of layer (km/s) | Notes and comments (did it agree or disagree with the real Earth?) |
# |----------------------|---------------------------|--------------------------------------------------------------------|
# | 0.0 | 0.0 | Comment? |
# | 0.0 | 0.0 | |
#
# **Model 3**
#
# | Radius of layer (km) | velocity of layer (km/s) | Notes and comments (did it agree or disagree with the real Earth?) |
# |----------------------|---------------------------|--------------------------------------------------------------------|
# | 0.0 | 0.0 | Comment? |
# | 0.0 | 0.0 | |
#
# **Model 4**
#
# | Radius of layer (km) | velocity of layer (km/s) | Notes and comments (did it agree or disagree with the real Earth?) |
# |----------------------|---------------------------|--------------------------------------------------------------------|
# | 0.0 | 0.0 | Comment? |
# | 0.0 | 0.0 | |
#
# **Model 5**
#
# | Radius of layer (km) | velocity of layer (km/s) | Notes and comments (did it agree or disagree with the real Earth?) |
# |----------------------|---------------------------|--------------------------------------------------------------------|
# | 0.0 | 0.0 | Comment? |
# | 0.0 | 0.0 | |
#
#
# ## Which model worked best?
#
# Which model worked best for you? Write your results and comments in the cell below.
# ***Your answer here***
# ## Next steps...
#
# If you have completed this exercise, you are ready to move on to Exercise 2.
| Exercise_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Weekend Homework (2021/07/10-11)
# Scrape the content of all featured news articles from NKUST and store it in a database. Then pull the content back out of the database, tokenize it with jieba, and feed the result to WordCloud to build a word cloud. The word cloud must use a mask image of your own design.
# + tags=[]
# Retrieve each article's title, date, link and content via
# https://www.nkust.edu.tw/p/403-1000-12-1.php
# and store them in a database.
# %matplotlib inline
import requests
import time
import sqlite3
from bs4 import BeautifulSoup as bs
# generate url thru given page.
def urlGen(page :int =1):
return 'https://www.nkust.edu.tw/p/403-1000-12-{}.php'.format(page)
# retrieve article content through the given url.
# wait 500 ms between requests as good practice to avoid getting banned
def getContext(url):
cHtml =requests.get(url).text
cSoup =bs(cHtml,"html.parser")
cSel ='#Dyn_2_3 > div.module.module-detail.md_style1 > div > section > div.mcont > div.mpgdetail'
r =cSoup.select(cSel)[0].text.strip().replace('\n','')
time.sleep(0.5)
return r
# log the status of retrieving article
def logAdd(srcUrl:str, contextUrl:str, status:str ='ok'):
log.append('{} {} {}'.format(srcUrl,contextUrl,status))
print(log[-1])
# get total page urls, put into srcUrls
html =requests.get(urlGen()).text
soup =bs(html,"html.parser")
totalPageSel ='#Dyn_2_3 > div > section > div.mpgbar > nav > span'
soup.select(totalPageSel)
pages =int(soup.select(totalPageSel)[0].text.strip()[1:-1])
srcUrls =[] # collection of source urls that contain article
for p in range(1,pages+1):
srcUrls.append(urlGen(page =p))
# get article title, date, link and context
article =list()
log =list()
pageListSel ='#pageptlist > div > div > div > div > div'
for su in srcUrls:
curSrcHtml =requests.get(su).text
curSrcSoup =bs(curSrcHtml,"html.parser")
pageList =curSrcSoup.select(pageListSel)
try:
for pageItem in pageList:
temp =dict()
temp['title'] =pageItem.a.text.strip()
temp['context'] =getContext(url =pageItem.a['href'])
temp['link'] =pageItem.a['href']
temp['date'] =pageItem.i.text
article.append(temp)
logAdd(srcUrl=su.split('/')[-1],contextUrl=pageItem.a['href'].split('/')[-1])
time.sleep(0.5)
except Exception as e:
logAdd(srcUrl=su.split('/')[-1],contextUrl=pageItem.a['href'],status ='failed. {}'.format(e))
pass
print('done')
# write logfile
with open('nkust_news.log','w') as f:
f.writelines([(lg+'\n') for lg in log])
# store into database
conn =sqlite3.connect("nkust_news.db")
for a in article:
try:
        sqlStr = "INSERT INTO news('date','title','link','context') VALUES(?,?,?,?)"
        # parameterized query: titles containing quotes no longer break the INSERT
        conn.execute(sqlStr, (a['date'], a['title'], a['link'], a['context']))
conn.commit()
except:
pass
conn.close()
# +
# Generate WordCloud thru context in db
import sqlite3
from PIL import Image
import matplotlib.pyplot as plt
from wordcloud import WordCloud,ImageColorGenerator
import numpy as np
import os
import jieba
conn =sqlite3.connect("nkust_news.db")
db =conn.execute('SELECT context FROM news')
contexts =db.fetchall()
db.close()
data =list()
for c in contexts:
data.append(c[0])
data =','.join(data)
stopwords = list()
with open('stopWords.txt', 'r', encoding='utf-8') as fp:
stopwords = [word.strip() for word in fp.readlines()]
jieba.load_userdict("userDict.txt")
keyterms = [keyterm for keyterm in jieba.cut(data) if keyterm not in stopwords]
keyterms = ','.join(keyterms)
mask = np.array(Image.open('flutter.jpg'))
wordcloud = WordCloud(background_color='white',
width=1000, height=860,
margin=2, font_path="simhei.ttf",
mask=mask).generate(keyterms)
sampled_colors = ImageColorGenerator(mask,default_color='blue') # image color sampling
plt.figure(figsize=(10,10))
plt.imshow(wordcloud.recolor(color_func=sampled_colors))
plt.axis("off")
plt.show()
| 0709_wk_homework/0709_week_homework.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:gdg_denver] *
# language: python
# name: conda-env-gdg_denver-py
# ---
# + [markdown] id="jn94QT_nxOAC"
# # Deep Q-Network
# + [markdown] id="lHUCRea_xOAF"
# In 2015, Google DeepMind ([Link](https://deepmind.com/research/dqn/)) published a paper in Nature that combined a deep convolutional neural network with reinforcement learning for the first time in order to master a range of Atari 2600 games. They used only the raw pixels and score as inputs, relying on convolution layers to interpret the pixels.
#
# The very simple description is that they replaced the Q table in a Q-Learner with a neural network. This allowed them to take advantage of neural networks but still use reinforcement learning.
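# Concretely, replacing the Q table means each transition's Q-table update collapses to computing a Bellman target that the network is then fit toward. A minimal sketch (plain Python; `td_target` is an illustrative helper, not part of the notebook's class below):

```python
def td_target(reward, next_q_values, gamma, done):
    """Bellman target for one transition: r if terminal, else r + gamma * max_a' Q(s', a')."""
    if done:
        return reward
    return reward + gamma * max(next_q_values)

# fake transition: reward 1.0, network's action-value estimates for the next state
print(td_target(1.0, [0.5, 2.0], gamma=0.95, done=False))  # 1.0 + 0.95 * 2.0
print(td_target(1.0, [0.5, 2.0], gamma=0.95, done=True))   # terminal: just the reward
```

# This mirrors the `target = reward + self.gamma * np.amax(...)` line inside `experience_replay` later on.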
# + id="Xb_yU93xxOAG" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1627367050419, "user_tz": -120, "elapsed": 763, "user": {"displayName": "\u0110or\u0111<NAME>\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}} outputId="7028bfdd-4576-4c4b-e95e-73c234ed5901"
#Imports
import gym
import numpy as np
import matplotlib.pyplot as plt
from collections import deque
import tensorflow as tf
from tensorflow import keras
#from keras.models import Sequential
#from keras.layers import Dense
#from keras.optimizers import Adam
import random
#Create Gym
from gym import wrappers
envCartPole = gym.make('CartPole-v1')
envCartPole.seed(50) #Set the seed to keep the environment consistent across runs
# + [markdown] id="fiSmyoR1xOAI"
# **Experience Replay**
# Definition: A mechanism inspired by biology that randomizes over the data, removing the correlations in the observation sequence and smoothing over changes in the data distribution.
#
# To perform an experience replay, the algorithm stores each of the agent's experiences {$s_t,a_t,r_t,s_{t+1}$} at every time step in a data set. Normally in a Q-learner, we would run the update rule on them immediately; with experience replay, we just store them.
#
# Later during training, these experiences are drawn uniformly from the memory queue and run through the update rule. There are two ways to handle this, and I have coded both in the past: run the replay on every loop, or run it after every X steps. In the code below, I run it each time.
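# The storage side of experience replay can be sketched with just the standard library (illustrative names; the notebook's `DeepQNetwork` class below does the same thing with its `memory` deque and `store` method):

```python
import random
from collections import deque

memory = deque(maxlen=2500)  # bounded buffer: the oldest experiences fall off automatically

def store(state, action, reward, nstate, done):
    memory.append((state, action, reward, nstate, done))

# store some fake transitions
for t in range(100):
    store(t, t % 2, 1.0, t + 1, False)

# later: draw a uniform random minibatch, breaking the temporal correlation
minibatch = random.sample(memory, 24)
```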
# + [markdown] id="XMCLovO6xOAL"
# **CartPole Example**
# Again we will use the [CartPole](https://gym.openai.com/envs/CartPole-v1/) environment from OpenAI.
#
# The actions are 0 to push the cart to the left and 1 to push the cart to the right.
#
# The continuous state space is an X coordinate for location, the velocity of the cart, the angle of the pole, and the velocity at the tip of the pole. The X coordinate goes from -4.8 to +4.8, velocity is -Inf to +Inf, angle of the pole goes from -24 degrees to +24 degrees, tip velocity is -Inf to +Inf. With all of the possible combinations you can see why we can't create a Q table for each one.
#
# To "solve" this puzzle you have to have an average reward of > 195 over 100 consecutive episodes. One thing to note: I am hard-capping episodes at 210 steps, so this number can't average above that, and the cap could also drive the average down.
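# A hedged sketch of that stopping criterion (assuming a plain list of per-episode rewards; `is_solved` is an illustrative helper, and the training loop below actually uses the last 5 episodes instead of the full 100 to save time):

```python
def is_solved(rewards, window=100, threshold=195):
    """True once the mean reward over the last `window` episodes exceeds `threshold`."""
    if len(rewards) < window:
        return False
    return sum(rewards[-window:]) / window > threshold

print(is_solved([200] * 99))    # False: fewer than 100 episodes so far
print(is_solved([200] * 100))   # True: an average of 200 clears 195
```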
# + id="q7EjamzgxOAM" executionInfo={"status": "ok", "timestamp": 1627367054555, "user_tz": -120, "elapsed": 4, "user": {"displayName": "\u0110or\u0111e Grbi\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}}
#Global Variables
EPISODES = 500
TRAIN_END = 0
# + id="PK_RNOAmxOAM" executionInfo={"status": "ok", "timestamp": 1627367056716, "user_tz": -120, "elapsed": 1, "user": {"displayName": "\u0110or\u0111e Grbi\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}}
#Hyper Parameters
discount_rate = 0.95 ## Gamma parameter
learning_rate = 0.001 ##Alpha
batch_size = 24 ##Size of the batch used in the experience replay
# + [markdown] id="8LnXacK4xOAN"
# **Deep Q-Network Class**
# The following class is the deep Q-network that is built using the neural network code from Keras.
# **init**:
# * This creates the class and sets the local parameters.
# * We use a *deque* for the local memory to hold the experiences and a keras model for the NN.
#
# **build_model(self)**:
# * This builds the NN. I am using a sequential model. Each of the layers is *Dense*, even though the paper talks about using *Convolution*; they only need convolution to process raw pixels, while our inputs are already numbers.
# * I am using an input layer(4), 24 neuron layer, 24 neuron layer, and an output layer(2).
# * For calculating the loss I am using mean squared error.
# * For an optimizer I am using [Adam](https://arxiv.org/abs/1412.6980v8). It is a variant of gradient descent and you can read the technical document at the link. If you want a slightly lighter explaining you can check out [Machine Learning Mastery](https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/). You could also use SGD (Stochastic Gradient Descent) but Adam gives me better results and seems to be the standard in most examples.
#
# **action(self,state)**:
# * This generates the action.
# * Explore: I am using the epsilon like previous lessons.
# * Exploit: I use the NN to grab the 2 possible actions and then grab the argmax to find the better one
#
# **test_action(self,state)**:
# * This generates the action when I am testing. I want to 100% exploit
#
# **store(self, state, action, reward, nstate, done)**:
# * This places the observables in memory
#
# **experience_replay(self, batch_size)**:
# * This is where the training occurs. We grab the sample batches and then use the NN to predict the optimal action.
# + id="LQDdbH2VxOAN" executionInfo={"status": "ok", "timestamp": 1627366839430, "user_tz": -120, "elapsed": 337, "user": {"displayName": "\u0110or\u0111e Grbi\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}}
class DeepQNetwork():
def __init__(self, states, actions, alpha, gamma, epsilon,epsilon_min, epsilon_decay):
self.nS = states
self.nA = actions
self.memory = deque([], maxlen=2500)
self.alpha = alpha
self.gamma = gamma
#Explore/Exploit
self.epsilon = epsilon
self.epsilon_min = epsilon_min
self.epsilon_decay = epsilon_decay
self.model = self.build_model()
self.loss = []
def build_model(self):
model = keras.Sequential() #linear stack of layers https://keras.io/models/sequential/
model.add(keras.layers.Dense(24, input_dim=self.nS, activation='relu')) #[Input] -> Layer 1
# Dense: Densely connected layer https://keras.io/layers/core/
# 24: Number of neurons
# input_dim: Number of input variables
# activation: Rectified Linear Unit (relu) ranges >= 0
model.add(keras.layers.Dense(24, activation='relu')) #Layer 2 -> 3
model.add(keras.layers.Dense(self.nA, activation='linear')) #Layer 3 -> [output]
# Size has to match the output (different actions)
# Linear activation on the last layer
model.compile(loss='mean_squared_error', #Loss function: Mean Squared Error
                      optimizer=keras.optimizers.Adam(lr=self.alpha)) #Optimizer: Adam (feel free to check other options)
return model
def action(self, state):
if np.random.rand() <= self.epsilon:
return random.randrange(self.nA) #Explore
action_vals = self.model.predict(state) #Exploit: Use the NN to predict the correct action from this state
return np.argmax(action_vals[0])
def test_action(self, state): #Exploit
action_vals = self.model.predict(state)
return np.argmax(action_vals[0])
def store(self, state, action, reward, nstate, done):
#Store the experience in memory
self.memory.append( (state, action, reward, nstate, done) )
def experience_replay(self, batch_size):
#Execute the experience replay
minibatch = random.sample( self.memory, batch_size ) #Randomly sample from memory
#Convert to numpy for speed by vectorization
x = []
y = []
np_array = np.array(minibatch)
st = np.zeros((0,self.nS)) #States
nst = np.zeros( (0,self.nS) )#Next States
for i in range(len(np_array)): #Creating the state and next state np arrays
st = np.append( st, np_array[i,0], axis=0)
nst = np.append( nst, np_array[i,3], axis=0)
st_predict = self.model.predict(st) #Here is the speedup! I can predict on the ENTIRE batch
nst_predict = self.model.predict(nst)
index = 0
for state, action, reward, nstate, done in minibatch:
x.append(state)
#Predict from state
nst_action_predict_model = nst_predict[index]
if done == True: #Terminal: Just assign reward much like {* (not done) - QB[state][action]}
target = reward
else: #Non terminal
target = reward + self.gamma * np.amax(nst_action_predict_model)
target_f = st_predict[index]
target_f[action] = target
y.append(target_f)
index += 1
#Reshape for Keras Fit
x_reshape = np.array(x).reshape(batch_size,self.nS)
y_reshape = np.array(y)
        epoch_count = 1 #Epochs is the number of iterations
hist = self.model.fit(x_reshape, y_reshape, epochs=epoch_count, verbose=0)
#Graph Losses
for i in range(epoch_count):
self.loss.append( hist.history['loss'][i] )
#Decay Epsilon
if self.epsilon > self.epsilon_min:
self.epsilon *= self.epsilon_decay
# + id="bNmq6UVMxOAO" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1627367111679, "user_tz": -120, "elapsed": 410, "user": {"displayName": "\u0110or\u0111<NAME>\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}} outputId="8c24e0c7-d51f-40ab-e452-fe063a8b9631"
#Create the agent
nS = envCartPole.observation_space.shape[0] #This is only 4
nA = envCartPole.action_space.n #Actions
dqn = DeepQNetwork(nS, nA, learning_rate, discount_rate, 1, 0.001, 0.995 )
# + id="_G-J85OcxOAO" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1627368126986, "user_tz": -120, "elapsed": 266065, "user": {"displayName": "\u0110or\u0111e Grbi\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}} outputId="bcdb6ea9-297b-471b-d960-103e75666744"
#Training
rewards = [] #Store rewards for graphing
epsilons = [] # Store the Explore/Exploit
TEST_Episodes = 0
for e in range(EPISODES):
state = envCartPole.reset()
state = np.reshape(state, [1, nS]) # Resize to store in memory to pass to .predict
tot_rewards = 0
for time in range(210): #200 is when you "solve" the game. This can continue forever as far as I know
action = dqn.action(state)
nstate, reward, done, _ = envCartPole.step(action)
nstate = np.reshape(nstate, [1, nS])
tot_rewards += reward
dqn.store(state, action, reward, nstate, done) # Resize to store in memory to pass to .predict
state = nstate
#done: CartPole fell.
#time == 209: CartPole stayed upright
if done or time == 209:
rewards.append(tot_rewards)
epsilons.append(dqn.epsilon)
print("episode: {}/{}, score: {}, e: {}"
.format(e, EPISODES, tot_rewards, dqn.epsilon))
break
#Experience Replay
if len(dqn.memory) > batch_size:
dqn.experience_replay(batch_size)
#If our current NN passes we are done
#I am going to use the last 5 runs
if len(rewards) > 5 and np.average(rewards[-5:]) > 195:
#Set the rest of the EPISODES for testing
TEST_Episodes = EPISODES - e
TRAIN_END = e
break
# + id="w_dBYWXExOAP" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1627335185445, "user_tz": -120, "elapsed": 2443226, "user": {"displayName": "", "photoUrl": "", "userId": ""}} outputId="f7f4f3c4-e09d-4351-cc3a-f69658e53d91"
#Test the agent that was trained
# In this section we ALWAYS use exploit don't train any more
for e_test in range(TEST_Episodes):
state = envCartPole.reset()
state = np.reshape(state, [1, nS])
tot_rewards = 0
for t_test in range(210):
action = dqn.test_action(state)
nstate, reward, done, _ = envCartPole.step(action)
nstate = np.reshape( nstate, [1, nS])
tot_rewards += reward
#DON'T STORE ANYTHING DURING TESTING
state = nstate
#done: CartPole fell.
#t_test == 209: CartPole stayed upright
if done or t_test == 209:
rewards.append(tot_rewards)
epsilons.append(0) #We are doing full exploit
print("episode: {}/{}, score: {}, e: {}"
.format(e_test, TEST_Episodes, tot_rewards, 0))
break;
# + [markdown] id="mnbp8mrOxOAP"
# **Results**
# Here is a graph of the results. If everything was done correctly you should see the rewards over the red line.
#
# * Black: the 100-episode rolling average
# * Red: the "solved" line at 195
# * Blue: the reward for each episode
# * Green: the value of epsilon scaled by 200
# * Yellow: where the tests started
# + id="9nyLB7t9xOAP" outputId="dd970bd9-de18-45ad-a75b-f954f7219390"
rolling_average = np.convolve(rewards, np.ones(100)/100)
plt.plot(rewards)
plt.plot(rolling_average, color='black')
plt.axhline(y=195, color='r', linestyle='-') #Solved Line
#Scale Epsilon (0.001 - 1.0) to match reward (0 - 200) range
eps_graph = [200*x for x in epsilons]
plt.plot(eps_graph, color='g', linestyle='-')
#Plot the line where TESTING begins
plt.axvline(x=TRAIN_END, color='y', linestyle='-')
plt.xlim( (0,EPISODES) )
plt.ylim( (0,220) )
plt.show()
envCartPole.close()
# + [markdown] id="HgvuW5A9xOAQ"
# **Reference**
# <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., ... & <NAME>. (2015). *Human-level control through deep reinforcement learning*. Nature, 518(7540), 529
| day08/IN_CLASS-dqn_cartpole.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # SciPy Optimize Basin Hopping
# <hr />
#
# Basin hopping is a physics-inspired Monte Carlo algorithm. A <b>Monte Carlo</b> algorithm is one that uses random numbers. The standard version of the algorithm is a <b>Markov chain</b>: a random walk composed of a series of locations, each of which depends only on the previous step.
#
# The routine has a three step process:
# 1. [Randomly create a new coordinate](#new)
# 2. [Perform a local minimization](#local)
# 3. [Decide whether or not to accept the new minimum](#accept)
#
# 
#
# This method of global optimization works in arbitrarily high-dimensional problem spaces, can overcome large energy barriers between local minima, and can store all of the local minima it finds. But the algorithm can still get stuck in one area if the domain is large and filled with many local minima. The routine also requires customization and tuning, which makes it harder to use straight out of the box but much more favorable when you want that control.
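# The three-step loop above can be sketched in a few lines of plain Python. This is a toy 1D version with a crude finite-difference gradient descent standing in for `scipy.optimize.minimize`; all names and tuning constants here are illustrative:

```python
import math
import random

def local_min(f, x, lr=0.1, steps=200, h=1e-5):
    # crude finite-difference gradient descent, standing in for scipy.optimize.minimize
    for _ in range(steps):
        g = (f(x + h) - f(x - h)) / (2 * h)
        x -= lr * g
    return x

def basin_hop_1d(f, x0, stepsize=3.0, niter=50, T=1.0, seed=0):
    rng = random.Random(seed)
    x_cur = local_min(f, x0)  # start from a local minimum
    x_best = x_cur
    for _ in range(niter):
        # 1. randomly create a new coordinate
        x_trial = x_cur + rng.uniform(-stepsize, stepsize)
        # 2. perform a local minimization
        x_new = local_min(f, x_trial)
        # 3. Metropolis accept/reject
        df = f(x_new) - f(x_cur)
        if df < 0 or rng.random() < math.exp(-df / T):
            x_cur = x_new
        if f(x_cur) < f(x_best):
            x_best = x_cur
    return x_best

f_1d = lambda x: x * x / 50 - math.cos(x)  # 1D analogue of the tutorial function below
x_global = basin_hop_1d(f_1d, 10.0)
```

# Tracking `x_best` separately from the Metropolis walk means the routine can wander uphill to escape a basin without ever losing the lowest minimum it has seen.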
#
# So let's get started with the tutorial by importing packages:
# Importing the packages for the example
from scipy import optimize
import numpy as np
import matplotlib.pyplot as plt
# And defining a simple test function:
# +
f = lambda x : 1/50*(x[0]**2 + x[1]**2) - np.cos(x[0])*np.cos(x[1])
df = lambda x : np.array([1/25*x[0]+np.sin(x[0])*np.cos(x[1]),
1/25*x[1]+np.cos(x[0])*np.sin(x[1])])
f_parameter = lambda x, a : 1/50*(x[0]**2 + x[1]**2) - np.cos(x[0]-a)*np.cos(x[1])
# -
# Now that we've defined this function, what does it look like?
# +
x0=np.arange(-3*np.pi,3*np.pi,.05)
x0_a, x1_a = np.meshgrid(x0,x0)
# plotting our test functions
fig, ax = plt.subplots()
pos = ax.pcolormesh(x0_a,x1_a,f([x0_a,x1_a]) )
# labeling and measuring necessities
fig.legend()
fig.colorbar(pos,ax=ax)
ax.set_title('g(x)')
ax.set_xlabel('x[0]')
ax.set_ylabel('x[1]')
# -
# <div id="basic" />
#
# ## Basic Call
# <hr>
#
# Basin hopping is highly configurable. Each of the steps can be tuned by parameters or completely overridden by new functions. The un-tuned minimization has a low chance of working. Even with the simple tutorial function, this routine will fail to find the global minimum without help:
starting_point=[3*np.pi,3*np.pi]
result_bad=optimize.basinhopping(f,starting_point)
result_bad.x
# <div id="new" />
#
# ## 1. Choosing a new coordinate
# <hr>
#
# In step 1, the routine has to generate new coordinates. The default settings pick a displacement from a uniform probability distribution function ranging from -stepsize to +stepsize:
#
# $$
# x_{i+1} = x_i + \delta \qquad \delta \in [-\text{stepsize},\text{stepsize}]
# $$
#
# The greatest efficiency comes when the stepsize is approximately the distance between adjacent minima. Since the local minima in our function are generated by $\cos(x)$, the period is $2 \pi$. Simply setting our stepsize to that number allows the routine to converge to the global minimum:
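# A stdlib sketch of that default displacement rule, one uniform draw per coordinate (illustrative only, not SciPy's actual implementation):

```python
import random

def default_take_step(x, stepsize, rng=random):
    """Propose x + delta with each delta_i drawn uniformly from [-stepsize, +stepsize]."""
    return [xi + rng.uniform(-stepsize, stepsize) for xi in x]

rng = random.Random(0)
x_new = default_take_step([9.42, 9.42], stepsize=6.28, rng=rng)
# every proposed coordinate stays within one stepsize of the original
assert all(abs(a - b) <= 6.28 for a, b in zip(x_new, [9.42, 9.42]))
```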
result_ss=optimize.basinhopping(f,starting_point,stepsize=2*np.pi)
result_ss.x
# The routine can also tune the stepsize automatically. If we didn't know the periodicity of our function ahead of time, or if the periodicity changed throughout space, we can set the `interval` parameter so the stepsize is re-adjusted every `interval` iterations:
result_updatess=optimize.basinhopping(f,starting_point,interval=5)
result_updatess.x
# <div id="step-function" />
#
# #### Custom step taking function
#
# `stepsize` and `interval` just tune parameters for the provided function. We can provide an entirely new protocol for generating the new coordinates as well.
#
# We just need a function that takes in and returns a set of coordinates. We can generate the function the standard way, or we can create the function as an instance of a class. If it's an instance of a class, it can also have the `stepsize` attribute. Only this way can `basinhopping` adapt the stepsize over time:
#
# So how does this work? Here I define a class that instead pulls displacements from a Gaussian (also known as normal) distribution. We define `__init__` and `__call__` methods and set up the stepsize.
class Take_Step_Class(object):
def __init__(self, stepsize=1):
self.stepsize = stepsize
def __call__(self,x):
x += self.stepsize * np.random.standard_normal(x.shape)
return x
# Now we can initialize this class:
take_step_object = Take_Step_Class(stepsize=2*np.pi)
# And now verify that it does indeed have the stepsize component:
take_step_object.stepsize
# And we can call it as a function to generate the next set of coordinates:
# +
xk = np.array([0.,0.]) # current coordinates
take_step_object(xk)
# -
# After all that work, we pass the object/function to `basinhopping` via the `take_step` flag:
result_takestep = optimize.basinhopping(f,starting_point,take_step=take_step_object)
result_takestep.x
# <div id="local" />
#
# ## 2. Local Minimization
# <hr>
#
# After generating a new set of coordinates, the routine performs a local minimization starting at that point. This step is less critical to the overall success of the global minimization process, but does affect the overall speed and efficiency of the process.
#
# This process uses `scipy.optimize.minimize`, so check [that tutorial](./Optimization_ND.ipynb) for more information. We can send information to `minimize` in a dictionary:
minimizer_kwargs_dict = {"method":"BFGS",
"jac":df}
result_local = optimize.basinhopping(f,starting_point,stepsize=2*np.pi,
minimizer_kwargs=minimizer_kwargs_dict)
result_local.x
# <div id="accept" />
#
# ## 3. Accepting a New Coordinate
# <hr>
#
# The standard algorithm accepts a new minimum according to the <b>Metropolis-Hastings</b> criterion.
#
# In the Metropolis-Hastings criterion, the probability of accepting a new value $f_{i+1}$ given an old value $f_{i}$ is
#
# $$
# P(\text{accept}) = \text{min} \big\{ e^{-\frac{f_{i+1}-f_i}{T}} , 1 \big\}
# $$
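# That acceptance rule, always take downhill moves and take uphill moves with probability $e^{-\Delta f / T}$, can be written as a tiny helper (stdlib sketch; the name is illustrative, and it matches the `np.exp((f_old - f_new)/T)` test in the acceptance class later in this notebook):

```python
import math

def metropolis_accept_prob(f_new, f_old, T):
    """min{ exp(-(f_new - f_old)/T), 1 }: always accept downhill, sometimes accept uphill."""
    return min(1.0, math.exp(-(f_new - f_old) / T))

print(metropolis_accept_prob(1.0, 2.0, T=1.0))  # downhill move: probability 1.0
print(metropolis_accept_prob(2.0, 1.0, T=1.0))  # uphill move: exp(-1), roughly 0.368
```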
# Just like we created a [custom step taking function](#step-function), we can create a custom function to accept a new minimum.
#
# The function receives `x_new`, `x_old`, `f_new`, and `f_old` in dictionary form. It does not receive the temperature `T` or any other parameters. By writing it as a class, we can incorporate the temperature or store any other data needed for the evaluation.
#
# Here, I will create a custom acceptance test to build a <b>Simulated Annealing</b> simulation. In Simulated Annealing, the temperature decreases over time. In the beginning of the simulation, the algorithm is free to bounce around and explore even unfavorable areas; only later does the simulation settle down toward the lowest possible energies.
# +
class MyAcceptTest(object):
    def __init__(self, T0=10):
        self.T = T0
    def __call__(self, **kwargs):
        # this is the Simulated Annealing part: cool the temperature each call
        self.T = .8 * self.T
        if kwargs["f_new"] < kwargs["f_old"]:
            # if the new minimum is lower, we accept it
            return True
        elif np.random.rand() < np.exp((kwargs["f_old"] - kwargs["f_new"]) / self.T):
            # otherwise we Metropolis-Hastings test it against a random number
            return True
        else:
            return False
# -
my_accept_test = MyAcceptTest()
result_accept = optimize.basinhopping(f, starting_point, stepsize=2*np.pi, niter=5,
                                      accept_test=my_accept_test)
print(result_accept)
# ## Callback on Iterations
# <hr>
#
# We can also pass a `callback` that runs after every candidate minimum is found. It receives the coordinates, the function value, and whether the minimum was accepted; returning `True` stops the routine early.
def callback(x,f,accept):
print(x,"\t",f,"\t",accept)
return False
optimize.basinhopping(f,starting_point,stepsize=2*np.pi,niter=5,
callback=callback)
| Optimization_Global_BasinHopping.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as stats
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
# -
higgs = pd.read_pickle('../Lab5/higgs_100000_pt_1000_1200.pkl')
qcd = pd.read_pickle('../Lab5/qcd_100000_pt_1000_1200.pkl')
# +
normalization_higgs = 50/len(higgs)
normalization_qcd = 2000/len(qcd)
print(normalization_higgs, normalization_qcd)
# +
def theory(n_qcd, n_higgs):
prob = stats.poisson.sf(n_qcd + n_higgs, n_qcd)
sigma = stats.norm.isf(prob)
return sigma
def approximation(n_qcd, n_higgs):
sigma = n_higgs/np.sqrt(n_qcd)
return sigma
# +
fig, ax = plt.subplots(1, figsize = (10,10))
hist_qcd = ax.hist(qcd['mass'], weights = np.ones(len(qcd))*normalization_qcd, bins = 50, histtype = 'step', label = 'QCD')
hist_higgs = ax.hist(higgs['mass'], weights = np.ones(len(higgs))*normalization_higgs, bins = hist_qcd[1], histtype = 'step', label = 'Higgs')
ax.set_title('Mass Histogram')
ax.set_ylabel('Normalized Counts')
ax.set_xlabel('Mass')
plt.legend()
plt.show()
# +
n_qcd = 2000
n_higgs = 50
prob = stats.poisson.sf(n_qcd + n_higgs, n_qcd)
sigma = stats.norm.isf(prob)
approx_sig = n_higgs/np.sqrt(n_qcd)
print(sigma, approx_sig)
# -
# They are not the same, which means the approximation doesn't match the model I used. But it is very close, since a Poisson with a high mean is approximately Gaussian, and the approximation assumes a Gaussian.
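# That comparison can be cross-checked with the standard library alone: sum the Poisson pmf with `math.lgamma` for the exact tail probability, then convert it to a z-value with `statistics.NormalDist`. The helper names here are illustrative, and this mirrors rather than replaces the `scipy.stats` calls above:

```python
import math
from statistics import NormalDist

def poisson_sf(k, mu):
    """P(X > k) for X ~ Poisson(mu), via the summed pmf up to k."""
    cdf = sum(math.exp(i * math.log(mu) - mu - math.lgamma(i + 1)) for i in range(k + 1))
    return 1.0 - cdf

def significance(n_bkg, n_sig):
    p = poisson_sf(n_bkg + n_sig, n_bkg)   # tail prob of seeing bkg+sig under the bkg-only model
    return NormalDist().inv_cdf(1.0 - p)   # convert to a one-sided z-value

print(significance(2000, 50))      # exact-model sigma, close to...
print(50 / math.sqrt(2000))        # ...the Gaussian approximation, about 1.118
```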
# +
mass_cut = [180, 150, 140, 135, 130]
for i in mass_cut:
print(f'mass cut: {i}')
cut_qcd = qcd[qcd['mass'] < i]
cut_higgs = higgs[higgs['mass'] < i]
n_qcd = 2000/len(qcd)*len(cut_qcd)
n_higgs = 50/len(higgs)*len(cut_higgs)
print(f'N_qcd: {n_qcd:0.3f} N_higgs: {n_higgs:0.3f}')
theory_sigma = theory(n_qcd, n_higgs)
approx_sigma = approximation(n_qcd, n_higgs)
print(f'theory sigma: {theory_sigma:.3f} approximate sigma: {approx_sigma:.3f}\n')
# +
keys = ['pt', 'eta', 'phi', 'mass', 'ee2', 'ee3', 'd2', 'angularity', 't1',
't2', 't3', 't21', 't32', 'KtDeltaR']
title = ['No Cut', 'Mass Cut']
normalization_higgs = 50/len(higgs)
normalization_qcd = 2000/len(qcd)
cut_qcd = qcd[qcd['mass']<140]
cut_higgs = higgs[higgs['mass']<140]
def get_ylims(y1, y2, y3, y4):
all_y = np.hstack((y1, y2, y3, y4))
ymax = all_y.max()+10
ymin = all_y.min()
#print(all_y)
return ymax, ymin
fig, ax = plt.subplots(14, 2, figsize = (20,140))
for i in range(len(keys)):
#for i in range(1):
hist1 = ax[i,0].hist(qcd[keys[i]], weights = np.ones(len(qcd))*normalization_qcd, bins = 50, histtype = 'step' ,label = 'QCD');
hist2 = ax[i,0].hist(higgs[keys[i]], weights = np.ones(len(higgs))*normalization_higgs, bins = hist1[1], histtype = 'step' ,label = 'Higgs');
hist3 = ax[i,1].hist(cut_qcd[keys[i]], weights = np.ones(len(cut_qcd))*normalization_qcd, bins = hist1[1], histtype = 'step' , label = 'QCD');
hist4 = ax[i,1].hist(cut_higgs[keys[i]], weights = np.ones(len(cut_higgs))*normalization_higgs, bins = hist1[1], histtype = 'step', label = 'Higgs');
#print(hist1[0], hist2[0], hist3[0], hist4[0])
ymax, ymin = get_ylims(hist1[0], hist2[0], hist3[0], hist4[0])
#print(ymin, ymax)
for k in range(len(title)):
ax[i,k].set_ylim(ymin, ymax)
ax[i,k].set_title(title[k])
ax[i,k].set_ylabel('Normalized Counts')
ax[i,k].set_xlabel(keys[i])
ax[i,k].legend()
plt.show()
# +
t21_cut = [0.6, 0.5, 0.4, 0.3]
for i in t21_cut:
    print(f't21 cut: {i}')
cut2_qcd = cut_qcd[cut_qcd['t21'] < i]
cut2_higgs = cut_higgs[cut_higgs['t21'] < i]
n_qcd = 2000/len(qcd)*len(cut2_qcd)
n_higgs = 50/len(higgs)*len(cut2_higgs)
print(f'N_qcd: {n_qcd:0.3f} N_higgs: {n_higgs:0.3f}')
theory_sigma = theory(n_qcd, n_higgs)
approx_sigma = approximation(n_qcd, n_higgs)
print(f'theory sigma: {theory_sigma:.3f} approximate sigma: {approx_sigma:.3f}\n')
# +
keys = ['pt', 'eta', 'phi', 'mass', 'ee2', 'ee3', 'd2', 'angularity', 't1',
't2', 't3', 't21', 't32', 'KtDeltaR']
#title = ['No Cut', 'Mass Cut', 't21 Cut']
title = ['Mass Cut', 't21 Cut']
normalization_higgs = 50/len(higgs)
normalization_qcd = 2000/len(qcd)
cut_qcd = qcd[qcd['mass']<140]
cut_higgs = higgs[higgs['mass']<140]
cut2_qcd = cut_qcd[cut_qcd['t21'] < 0.6]
cut2_higgs = cut_higgs[cut_higgs['t21'] < 0.6]
def get_ylims(y3, y4, y5, y6):
all_y = np.hstack((y3, y4, y5, y6))
ymax = all_y.max()+5
ymin = all_y.min()
#print(all_y)
return ymax, ymin
fig, ax = plt.subplots(14, 2, figsize = (20,140))
for i in range(len(keys)):
#hist1 = ax[i,0].hist(qcd[keys[i]], weights = np.ones(len(qcd))*normalization_qcd, bins = 50, histtype = 'step', label = 'QCD');
#hist2 = ax[i,0].hist(higgs[keys[i]], weights = np.ones(len(higgs))*normalization_higgs, bins = hist1[1], histtype = 'step', label = 'Higgs');
hist3 = ax[i,0].hist(cut_qcd[keys[i]], weights = np.ones(len(cut_qcd))*normalization_qcd, bins = 50, histtype = 'step', label = 'QCD');
hist4 = ax[i,0].hist(cut_higgs[keys[i]], weights = np.ones(len(cut_higgs))*normalization_higgs, bins = hist3[1], histtype = 'step', label = 'Higgs');
hist5 = ax[i,1].hist(cut2_qcd[keys[i]], weights = np.ones(len(cut2_qcd))*normalization_qcd, bins = hist3[1], histtype = 'step', label = 'QCD');
hist6 = ax[i,1].hist(cut2_higgs[keys[i]], weights = np.ones(len(cut2_higgs))*normalization_higgs, bins = hist3[1], histtype = 'step', label = 'Higgs');
#ymax, ymin = get_ylims(hist1[0], hist2[0], hist3[0], hist4[0], hist5[0], hist6[0])
ymax, ymin = get_ylims(hist3[0], hist4[0], hist5[0], hist6[0])
for k in range(len(title)):
ax[i,k].set_ylim(ymin, ymax)
ax[i,k].set_title(title[k])
ax[i,k].set_ylabel('Normalized Counts')
ax[i,k].set_xlabel(keys[i])
ax[i,k].legend()
plt.show()
# +
ktdeltar_cut = [0.1, 0.2]
for i in ktdeltar_cut:
print(f'ktdeltar cut: {i}')
cut3_qcd = cut2_qcd[cut2_qcd['KtDeltaR'] > i]
cut3_higgs = cut2_higgs[cut2_higgs['KtDeltaR'] > i]
n_qcd = 2000/len(qcd)*len(cut3_qcd)
n_higgs = 50/len(higgs)*len(cut3_higgs)
print(f'N_qcd: {n_qcd:0.3f} N_higgs: {n_higgs:0.3f}')
theory_sigma = theory(n_qcd, n_higgs)
approx_sigma = approximation(n_qcd, n_higgs)
print(f'theory sigma: {theory_sigma:.3f} approximate sigma: {approx_sigma:.3f}\n')
# +
keys = ['pt', 'eta', 'phi', 'mass', 'ee2', 'ee3', 'd2', 'angularity', 't1',
't2', 't3', 't21', 't32', 'KtDeltaR']
title = ['Mass and t21 Cut', '+ KtDeltaR Cut']
normalization_higgs = 50/len(higgs)
normalization_qcd = 2000/len(qcd)
cut_qcd = qcd[qcd['mass']<140]
cut_higgs = higgs[higgs['mass']<140]
cut2_qcd = cut_qcd[cut_qcd['t21'] < 0.6]
cut2_higgs = cut_higgs[cut_higgs['t21'] < 0.6]
cut3_qcd = cut2_qcd[cut2_qcd['KtDeltaR'] > 0.2]
cut3_higgs = cut2_higgs[cut2_higgs['KtDeltaR'] > 0.2]
def get_ylims(y1, y2, y3, y4):
all_y = np.hstack((y1, y2, y3, y4))
ymax = all_y.max()+1
ymin = all_y.min()
#print(all_y)
return ymax, ymin
fig, ax = plt.subplots(14, 2, figsize = (20,140))
for i in range(len(keys)):
hist1 = ax[i,0].hist(cut2_qcd[keys[i]], weights = np.ones(len(cut2_qcd))*normalization_qcd, bins = 50, histtype = 'step', label = 'QCD');
hist2 = ax[i,0].hist(cut2_higgs[keys[i]], weights = np.ones(len(cut2_higgs))*normalization_higgs, bins = hist1[1], histtype = 'step', label = 'Higgs');
hist3 = ax[i,1].hist(cut3_qcd[keys[i]], weights = np.ones(len(cut3_qcd))*normalization_qcd, bins = hist1[1], histtype = 'step', label = 'QCD');
hist4 = ax[i,1].hist(cut3_higgs[keys[i]], weights = np.ones(len(cut3_higgs))*normalization_higgs, bins = hist1[1], histtype = 'step', label = 'Higgs');
ymax, ymin = get_ylims(hist1[0], hist2[0], hist3[0], hist4[0])
for k in range(len(title)):
ax[i,k].set_ylim(ymin, ymax)
ax[i,k].set_title(title[k])
ax[i,k].set_ylabel('Normalized Counts')
ax[i,k].set_xlabel(keys[i])
ax[i,k].legend()
plt.show()
# -
# Overall, I chose the cuts mass < 140, t21 < 0.6, and KtDeltaR > 0.2. Together these give a significance of roughly 5σ.
# ## Testing out some supervised learning:
# +
keys = ['pt', 'eta', 'phi', 'mass', 'ee2', 'ee3', 'd2', 'angularity', 't1',
't2', 't3', 't21', 't32', 'KtDeltaR']
X = pd.concat([higgs, qcd], ignore_index = True)
Y = np.hstack((np.ones(len(higgs)), np.zeros(len(qcd))))
print(X.shape, Y.shape)
# +
clf1 = RandomForestClassifier(n_estimators = 10)
clf1 = clf1.fit(X,Y)
# np.vstack would cast the importances to strings, and sort(axis=1) sorts
# names and values independently; argsort keeps each name paired with its value.
order = np.argsort(clf1.feature_importances_)[::-1]
for i in order:
    print(f'{keys[i]}: {clf1.feature_importances_[i]:.3f}')
# +
X = pd.concat([higgs, qcd], ignore_index = True)
Y = np.hstack((np.ones(len(higgs)), np.zeros(len(qcd))))
fig, ax = plt.subplots(figsize = (10,10))
ax.hist2d(X['t3'], X['t21'], bins = 50)
ax.set_xlabel('t3')
ax.set_ylabel('t21')
plt.show()
# +
from matplotlib.colors import ListedColormap
X = pd.concat([higgs.loc[:, ['t3', 't21']], qcd.loc[:,['t3', 't21']]]).to_numpy()
Y = np.hstack((np.ones(len(higgs)), np.zeros(len(qcd))))
cmap = plt.cm.RdBu
clf2 = RandomForestClassifier(n_estimators = 10)
clf2 = clf2.fit(X,Y)
#take bounds
xmin, xmax = X[:, 0].min()-1, X[:, 0].max()+1
ymin, ymax = X[:, 1].min()-1, X[:, 1].max()+1
xgrid = np.arange(xmin, xmax, 0.1)
ygrid = np.arange(ymin, ymax, 0.1)
xx, yy = np.meshgrid(xgrid, ygrid)
# make predictions for the grid
Z = clf2.predict(np.c_[xx.ravel(), yy.ravel()])
# reshape the predictions back into a grid
zz = Z.reshape(xx.shape)
# plot the grid of x, y and z values as a surface
fig, ax = plt.subplots(figsize = (10,10))
ax.contourf(xx, yy, zz, cmap = cmap)
ax.scatter(
X[:, 0],
X[:, 1],
c=Y,
cmap=ListedColormap(["r", "b"]),
edgecolor="k",
s=20,
)
ax.set_xlabel('t3')
ax.set_ylabel('t21')
plt.show()
# -
# This doesn't make any sense
| Lab7/Lab7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
data_df = pd.read_csv('data/ex1data2.txt',names=['size','rooms','price'])
data_df.head()
data_df.describe()
data = (data_df-data_df.mean())/data_df.std()
data.head()
data.insert(2,'ones',1)
data.head()
cols=data.shape[1]
X = data.iloc[:,0:cols-1]
y = data.iloc[:,cols-1:cols]
X.head()
y.head()
theta = np.matrix(np.array([0,0,0]))
theta
def cost(X,y,theta):
squared_error = np.power(((X*theta.T)-y),2)
return np.sum(squared_error) / (2 * len(X))  # parenthesise: a / 2*len(X) multiplies by len(X) instead of dividing
X = np.matrix(X.values)
y = np.matrix(y.values)
cost(X,y,theta)
def gradient_descent(X,y,theta,iters,learning_rate):
temp = np.matrix(np.zeros(theta.shape))
no_of_params = int(theta.ravel().shape[1])
iter_cost = np.zeros(iters)
for i in range(iters):
error = (X*theta.T)-y
for j in range(no_of_params):
term = np.multiply(error,X[:,j])
temp[0,j] = theta[0,j] - (learning_rate/len(X))*np.sum(term)
theta = temp.copy()  # copy, otherwise theta aliases temp and the next update is no longer simultaneous
iter_cost[i] = cost(X,y,theta)
return theta,iter_cost
learning_rate = 0.01
iters = 1000
optimal_theta,optimal_cost = gradient_descent(X,y,theta,iters,learning_rate)
optimal_theta
cost(X,y,optimal_theta)
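# As a cross-check on gradient descent, ordinary least squares has the closed
# form theta = (X^T X)^{-1} X^T y. The sketch below uses a small synthetic
# design matrix (the notebook's X comes from ex1data2.txt, which may not be
# available here), so the variable names are illustrative only:

```python
import numpy as np

# Closed-form least squares on noiseless toy data: the recovered
# coefficients should match the generating ones exactly.
rng = np.random.default_rng(0)
X_toy = np.column_stack([rng.normal(size=50), rng.normal(size=50), np.ones(50)])
theta_true = np.array([2.0, -1.0, 0.5])
y_toy = X_toy @ theta_true
theta_hat = np.linalg.solve(X_toy.T @ X_toy, X_toy.T @ y_toy)
```

# With enough iterations and a suitable learning rate, the gradient-descent
# theta should converge toward this closed-form solution.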
fig, ax =plt.subplots(figsize=(10,8))
ax.plot(np.arange(iters),optimal_cost,'r')
ax.set_xlabel('iteration')
ax.set_ylabel('error')
ax.set_title('error vs iterations')
| multivariate_linear_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false}
# Load packages necessary for the computations below
# and define the rng and visualization function.
from numpy import ones_like, exp, array, pi, zeros
from numpy.random import uniform, gamma, randint, permutation
import matplotlib
import scipy.special as sps
font = {'weight' : 'bold',
'size' : 22}
matplotlib.rc('font', **font)
from numpy import sqrt
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.patches import Wedge
from matplotlib.collections import PatchCollection
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, mark_inset
# !pip install git+https://github.com/naught101/sobol_seq
# !pip install ghalton
import sobol_seq
import ghalton
def rng(n,name):
if name=="uniform":
x = uniform(0,1,n)
y = uniform(0,1,n)
return x,y
if name=='sobol':
# currently a bug in sobol that seems to give the same
# random numbers. this is a hack to avoid that
samples = sobol_seq.i4_sobol_generate(2, n)
x =samples[:n,0]
y =samples[:n,1]
return x,y
if name=='halton':
sequencer = ghalton.GeneralizedHalton(2,randint(10000)) # two dimensions
samples = array(sequencer.get(n))
x = samples[:,0]
y = samples[:,1]
return x,y
def visualize(x, y, name):
fig, ax = plt.subplots(figsize=(10, 10))
inside = x ** 2 + y ** 2 <= 1.0
ax.scatter(x[inside], y[inside], c='green', s=3, marker='^')
ax.scatter(x[~inside], y[~inside], c='red', s=3)
estimate = sum(inside) / len(inside) * 4
ax.set_title(
"Approximating $\pi$ with {} samples as {:f}".format(name, estimate),
y=1.08)
p = PatchCollection([Wedge((0, 0), 1, 0, 360)], alpha=0.1)
ax.add_collection(p)
axins = zoomed_inset_axes(ax, 2.5, loc=3) # zoom = 6
axins.axis([1.4, 1.1, 1.4, 1.1])
axins.scatter(x[inside], y[inside], c='green', s=50, marker='^')
axins.scatter(x[~inside], y[~inside], c='red', s=50)
p = PatchCollection([Wedge((0, 0), 1, 0, 360)], alpha=0.1)
axins.add_collection(p)
axins.set_xlim(1 / sqrt(2), 1 / sqrt(2) + 0.2) # Limit the region for zoom
axins.set_ylim(1 / sqrt(2) - 0.2, 1 / sqrt(2))
ax.set_xlim([0, 1])
ax.set_ylim([0, 1])
plt.xticks(visible=False) # Not present ticks
plt.yticks(visible=False)
#
## draw a bbox of the region of the inset axes in the parent axes and
## connecting lines between the bbox and the inset axes area
mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5", linewidth=3)
# + [markdown] pycharm={"metadata": false, "name": "#%% md\n"}
# ----
# ## 1) Uniformly distributed random variables
# Create some samples from a uniform distribution and compare
# with statistical quantities.
#
#
# + pycharm={"is_executing": false, "metadata": false, "name": "#%%\n"}
nsamples = 10000
nbins = 100
a, b = 1.0, 2.0
s = uniform(a,b,nsamples)
fig, ax = plt.subplots(figsize=(16,8))
count, bins, ignored = ax.hist(s, nbins, density=True)
ax.plot(bins, ones_like(bins)/(b-a), linewidth=2, color='r')
mean = (b+a)/2
var = (b-a)**2/12
ax.axvline(mean, linewidth=4, color='r', linestyle="--")
ax.axvline(mean-sqrt(var),0,0.5,linewidth=4, color='y', linestyle="--")
ax.axvline(mean+sqrt(var),0,0.5,linewidth=4, color='y', linestyle=":")
ax.legend(["pdf","mean","mean-std","mean+std","histogram"],bbox_to_anchor=(0.5, -0.05),shadow=True, ncol=3);
ax.set_title("Uniform distribution with a={} and b={} and {} samples".format(a,b,nsamples));
# + [markdown] pycharm={"metadata": false, "name": "#%% md\n"}
# -----
# ## 2) Gamma distributed random variables
# Create some samples from a Gamma distribution and compare
# with statistical quantities.
#
# + pycharm={"metadata": false, "name": "#%%\n", "is_executing": false}
shape, scale = 2., 2. # mean=4, std=2*sqrt(2)
s = gamma(shape, scale, nsamples)  # draw nsamples so the title below matches the actual sample count
fig, ax = plt.subplots(figsize=(16,8))
count, bins, ignored = ax.hist(s, 50, density=True)
y = bins**(shape-1)*(exp(-bins/scale) /
(sps.gamma(shape)*scale**shape))
ax.plot(bins, y, linewidth=2, color='r');
mean = shape*scale
var = shape*scale**2
ax.axvline(mean, linewidth=4, color='r', linestyle="--")
ax.axvline(mean-sqrt(var),0,0.5,linewidth=4, color='y', linestyle="--")
ax.axvline(mean+sqrt(var),0,0.5,linewidth=4, color='y', linestyle=":")
ax.legend(["pdf","mean","mean-std","mean+std","histogram"],bbox_to_anchor=(0.5, -0.05),shadow=True, ncol=3);
ax.set_title("Gamma distribution with shape={} and scale={} and {} samples".format(shape,scale, nsamples));
# + [markdown] pycharm={"metadata": false, "name": "#%% md\n"}
# ----
# ## 3) Approximating $\pi$ with Monte Carlo
#
# + pycharm={"metadata": false, "name": "#%%\n", "is_executing": false}
n = 10000
name = 'uniform'
x, y = rng(n,name)
visualize(x,y,name)
# + pycharm={"metadata": false, "name": "#%%\n", "is_executing": false}
name = 'sobol'
x, y = rng(n,name)
visualize(x,y,name)
# + pycharm={"metadata": false, "name": "#%%\n", "is_executing": false}
name = 'halton'
x, y = rng(n,name)
visualize(x,y,name)
# + [markdown] pycharm={"metadata": false}
# ## 4) Convergence study
#
# + pycharm={"metadata": false, "name": "#%%\n", "is_executing": false}
df = pd.DataFrame()
nruns = 10
nsamples = [10**k for k in range(1,5)]
types = ['uniform','halton','sobol']
for type in types:
print("type = {}".format(type))
for n in nsamples:
print("n = {}".format(n))
for run in range(nruns):
x,y = rng(n,type)
estimate = 4* sum(x**2+y**2 <=1.0) / n
err = abs(estimate - pi)
df = pd.concat([df, pd.DataFrame([{'Name': type, 'N': n, 'Error': err}])], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
print("..done")
# + pycharm={"metadata": false, "name": "#%%\n", "is_executing": false}
ax = sns.lineplot(x="N", y="Error",hue='Name', data=df)
ax.set(xscale="log", yscale="log")
ax.set_xlabel("Number of samples")
ax.set_ylabel("Error");
# + pycharm={"metadata": false, "name": "#%%\n"}
| ex1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="Es0HzH7yeniL" outputId="600e15a3-aaed-4775-9c96-f92cb075ab5f"
from numpy import hstack
from numpy import zeros
from numpy import ones
import numpy as np
import phik
import datetime, os
import seaborn as sns
from phik import report
from numpy.random import rand
from numpy.random import randn
from numpy.random import random
from keras.models import Sequential
from keras.layers import Dense
from keras.models import load_model
from matplotlib import pyplot
from tensorboard import notebook
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.preprocessing import normalize
from sklearn import preprocessing
# %matplotlib inline
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
import pandas as pd
# + colab={} colab_type="code" id="hDfkQNyQQ5k7"
### Adult dataset
colnames = ['age', 'workclass', 'fnlwgt', 'education', 'education.num', 'marital.status', 'occupation', 'relationship', 'race', 'sex', 'capital.gain', 'capital.loss', 'hours.per.week', 'native.country', 'income']
df_raw = pd.read_csv('adult.data', names=colnames)
col_dtype = df_raw.dtypes.to_dict()
#df = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", names=colnames)
features = colnames
cat_features = ['workclass', 'education', 'education.num','marital.status', 'occupation', 'relationship', 'race', 'sex', 'native.country', 'income']
num_features = list(set(colnames) - set(cat_features))
# + colab={} colab_type="code" id="84vJYSzZQ5lB"
df = df_raw[features]
#df[num_features] = (df_raw[num_features] - df_raw[num_features].mean()) / (df_raw[num_features].std())
df.loc[:,num_features] = (df_raw[num_features] - df_raw[num_features].mean()) / (df_raw[num_features].max() - df_raw[num_features].min())
# + colab={} colab_type="code" id="mQfC76IeQ5lH"
def cast_datatype(df, col_dtype):
for col in df.columns:
df[col] = df[col].astype(col_dtype[col])
return df
# + colab={} colab_type="code" id="dBBS3OEreniT"
def categorical_mapping(data,feature):
featureMap=dict()
data[feature] = data[feature].astype(str)
freq_dist = data[feature].value_counts(normalize=True, sort=True).cumsum()
unique_vals = freq_dist.index.tolist()
idx = 0
for i in unique_vals:
if idx == 0:
featureMap[i] = (freq_dist[idx]/2 , freq_dist[idx]/6, 0, freq_dist[idx]) #np.random.normal(freq_dist[idx], (freq_dist[idx]-0)/6)
else:
featureMap[i] = ((freq_dist[idx]+freq_dist[idx-1])/2, (freq_dist[idx]-freq_dist[idx-1])/6, freq_dist[idx-1], freq_dist[idx])
idx += 1
return featureMap
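# To make the encoding above concrete: each category gets a slice of [0, 1]
# proportional to its frequency, and samples are later drawn from a Gaussian
# centred on that slice with std = slice_width / 6 (so ~99.7% of draws stay
# inside the slice). A standalone toy re-implementation (hypothetical, not
# the notebook's exact function):

```python
import numpy as np
import pandas as pd

# 'red' has frequency 0.6 -> slice [0, 0.6]; 'blue' 0.3 -> [0.6, 0.9];
# 'green' 0.1 -> [0.9, 1.0].
s = pd.Series(['red'] * 6 + ['blue'] * 3 + ['green'])
hi = s.value_counts(normalize=True, sort=True).cumsum()
lo = hi.shift(fill_value=0.0)
centres = (lo + hi) / 2   # midpoint of each category's slice
stds = (hi - lo) / 6      # keeps draws inside the slice at ~3 sigma
```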
# + colab={} colab_type="code" id="8vtAyjPJeniQ"
catFeatureMap = dict()
for col in cat_features:
catFeatureMap[col] = categorical_mapping(df, col)
df.loc[:,col] = df[col].map(catFeatureMap[col])
df.loc[:,col] = df[col].apply(lambda x: np.random.normal(x[0], x[1]))
# + colab={} colab_type="code" id="IqOqS_Usn1Lb"
dataset = df.values
# + colab={"base_uri": "https://localhost:8080/", "height": 134} colab_type="code" id="m2FJZ4Y2cIYt" outputId="f2cddfee-4cdc-476c-e970-a53e0b7522f5"
dataset
# + [markdown] colab_type="text" id="6EAl8t8uQ5lq"
# # Latest Version
# + colab={} colab_type="code" id="D6feauzHQ5ls"
# define the standalone discriminator model
def define_discriminator(n_inputs=2):
model = Sequential()
model.add(Dense(256, activation='relu', kernel_initializer='he_uniform', input_dim=n_inputs))
model.add(Dense(128, activation='relu', kernel_initializer='he_uniform', input_dim=n_inputs))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
# define the standalone generator model
def define_generator(latent_dim, n_outputs=2):
model = Sequential()
model.add(Dense(128, activation='relu', kernel_initializer='he_uniform', input_dim=latent_dim))
model.add(Dense(256, activation='relu', kernel_initializer='he_uniform', input_dim=latent_dim))
model.add(Dense(n_outputs, activation='linear'))
return model
# define the combined generator and discriminator model, for updating the generator
def define_gan(generator, discriminator):
# make weights in the discriminator not trainable
discriminator.trainable = False
# connect them
model = Sequential()
# add generator
model.add(generator)
# add the discriminator
model.add(discriminator)
# compile model
model.compile(loss='binary_crossentropy', optimizer='adam')
return model
# generate n real samples with class labels
def generate_real_samples(n):
X_train = dataset
data_batch =X_train[np.random.randint(low=0,high=X_train.shape[0],size=n)]
# generate class labels
y = ones((n, 1))
return data_batch, y
# generate points in latent space as input for the generator
def generate_latent_points(latent_dim, n):
# generate points in the latent space
x_input = randn(latent_dim * n)
# reshape into a batch of inputs for the network
x_input = x_input.reshape(n, latent_dim)
return x_input
# use the generator to generate n fake examples, with class labels
def generate_fake_samples(generator, latent_dim, n):
# generate points in latent space
x_input = generate_latent_points(latent_dim, n)
# predict outputs
X = generator.predict(x_input)
# create class labels
y = zeros((n, 1))
return X, y
# evaluate the discriminator and plot real and fake points
def summarize_performance(epoch, generator, discriminator, latent_dim, n=100):
# prepare real samples
x_real, y_real = generate_real_samples(n)
# evaluate discriminator on real examples
_, acc_real = discriminator.evaluate(x_real, y_real, verbose=0)
# prepare fake examples
x_fake, y_fake = generate_fake_samples(generator, latent_dim, n)
# evaluate discriminator on fake examples
_, acc_fake = discriminator.evaluate(x_fake, y_fake, verbose=0)
# summarize discriminator performance
print(epoch, acc_real, acc_fake)
def generate_fake_table(generator, discriminator, latent_dim, n=100):
# prepare fake examples
x_fake, y_fake = generate_fake_samples(generator, latent_dim, n)
return x_fake
def plot_loss(loss_real_, loss_fake_, loss_gan_):
    pyplot.plot(loss_real_, label='loss disc real')
    pyplot.plot(loss_fake_, label='loss disc fake')
    pyplot.plot(loss_gan_, label='loss gan')  # generator loss
    pyplot.legend()
    pyplot.savefig('plot_loss.png')
    pyplot.close()
def plot_acc(acc_real_, acc_fake_):
pyplot.plot(acc_real_, label='acc real')
pyplot.plot(acc_fake_, label='acc fake')
pyplot.legend()
pyplot.savefig('plot_accuracy.png')
pyplot.close()
def label_noise(y, prob):
    n_noise = int(prob * y.shape[0])
    # sample without replacement: a duplicated index would flip a label twice, back to its original value
    noise_indices = np.random.choice(y.shape[0], n_noise, replace=False)
    y[noise_indices] = 1 - y[noise_indices]
    return y
def label_smoothing(y, y_class):
if y_class == 'real':
return y - 0.1 + (random(y.shape) * 0.5)
else:
return y + (random(y.shape) * 0.1)
# train the generator and discriminator
def train(g_model, d_model, gan_model, latent_dim, n_epochs=10000, n_batch=128, n_eval=2000):
acc_real_ = []
acc_fake_ = []
loss_real_ = []
loss_fake_ = []
loss_gan_ = []
# determine half the size of one batch, for updating the discriminator
half_batch = int(n_batch / 2)
# manually enumerate epochs
for i in range(n_epochs):
# prepare real samples
x_real, y_real = generate_real_samples(half_batch)
y_real = label_noise(y_real, 0.1)
# prepare fake examples
x_fake, y_fake = generate_fake_samples(g_model, latent_dim, half_batch)
# update discriminator
loss_real_i, acc_real_i = d_model.train_on_batch(x_real, y_real)
loss_fake_i, acc_fake_i = d_model.train_on_batch(x_fake, y_fake)
# prepare points in latent space as input for the generator
x_gan = generate_latent_points(latent_dim, n_batch)
# create inverted labels for the fake samples
y_gan = ones((n_batch, 1))
# update the generator via the discriminator's error
loss_gan_i = gan_model.train_on_batch(x_gan, y_gan)
acc_real_.append(acc_real_i)
loss_real_.append(loss_real_i)
acc_fake_.append(acc_fake_i)
loss_fake_.append(loss_fake_i)
loss_gan_.append(loss_gan_i)
# evaluate the model every n_eval epochs
if (i+1) % n_eval == 0:
summarize_performance(i, g_model, d_model, latent_dim)
plot_loss(loss_real_, loss_fake_, loss_gan_)
plot_acc(acc_real_, acc_fake_)
return acc_real_, acc_fake_
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="U7Kxdqtwenie" outputId="f97fec0d-ec43-4750-d1e3-9daed8b53325"
# size of the latent space
latent_dim = 100
# create the discriminator
discriminator = define_discriminator(dataset.shape[1])
# create the generator
generator = define_generator(latent_dim, dataset.shape[1])
# create the gan
gan_model = define_gan(generator, discriminator)
# train model
#train(generator, discriminator, gan_model, latent_dim)
acc_real_, acc_fake_ = train(generator, discriminator, gan_model, latent_dim, n_epochs=10000)
# + colab={} colab_type="code" id="zUVn1z1Denij"
fake_data = generate_fake_table(generator, discriminator, latent_dim, df.shape[0])
# + colab={} colab_type="code" id="iZFgMbKYQ5l9"
df_fake_norm = pd.DataFrame(fake_data, columns = features)
# + colab={} colab_type="code" id="GTKFMElMQ5mD" outputId="2872c506-25d1-4ed9-87ae-cdf835a93f90"
df_fake_norm.head(5)
# + colab={} colab_type="code" id="VprVlnAgzp2V"
df_fake = pd.DataFrame(fake_data, columns = features)
# + [markdown] colab_type="text" id="IaHCLJYjQ5mO"
# # Privacy
# + [markdown] colab_type="text" id="px_0owEZQ5mQ"
# ## Euclidean Distance
# + colab={} colab_type="code" id="JlXQlfXkQ5mR"
#df_fake_norm.to_csv('adult_fake_norm_100k_new.csv', header=True, index=False)
df_fake_norm.to_csv('adult_fake_norm_10k_new.csv', header=True, index=False)
#df.to_csv('adult_norm.csv', header=True, index=False)
# + colab={} colab_type="code" id="zChErrakQ5mX"
shortest_dist = np.zeros((df.shape[0], 2), dtype=np.float32)
#shortest_dist = np.zeros(df.shape[0], dtype=np.float32)
for index, row in df.iterrows():
row = row.values.reshape(1,df.shape[1])
dist_arr = euclidean_distances(row, df_fake_norm.values)
shortest_dist[index, 0] = np.min(dist_arr)
shortest_dist[index, 1] = np.argmin(dist_arr)
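# The row-by-row loop above calls euclidean_distances once per record. The
# same nearest-synthetic-neighbour search can be done in one vectorised pass.
# A sketch on toy arrays (the real df / df_fake_norm frames are assumed
# upstream, so shapes here are illustrative; for very large tables the
# distance matrix should be built in chunks to bound memory):

```python
import numpy as np

# Build the full (n_real, n_fake) distance matrix via broadcasting,
# then reduce along the synthetic axis.
rng = np.random.default_rng(0)
real = rng.normal(size=(100, 5))
fake = rng.normal(size=(80, 5))
d = np.sqrt(((real[:, None, :] - fake[None, :, :]) ** 2).sum(axis=-1))
nearest = d.min(axis=1)         # shortest real-to-synthetic distance
nearest_idx = d.argmin(axis=1)  # index of the closest synthetic row
```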
# + colab={} colab_type="code" id="DOfFT57vQ5md" outputId="f0bcbe14-f0ef-4f2d-e628-6ef65d5cf3af"
print(np.mean(shortest_dist[:,0]))
print(np.std(shortest_dist[:,0]))
# + colab={} colab_type="code" id="Bu0TFhfgQ5mk"
sdist = pd.DataFrame(shortest_dist)
# + colab={} colab_type="code" id="cpgMO8wNQ5mu"
closest5 = sdist.sort_values(0).head(5)
# + colab={} colab_type="code" id="VupKr0UpQ5m7" outputId="59a19897-b000-44b9-e1cc-d0a62239e55f"
closest5
# + colab={} colab_type="code" id="cAzQaNSwQ5nC"
sdist.to_csv('Adult_10k_Distance_new.csv', index=False)
# + colab={} colab_type="code" id="3FCUEfwhQ5nG"
idx_ori = closest5.index.values
idx_syn = closest5[1].values
df_raw.iloc[idx_ori, :].to_clipboard()
# + colab={} colab_type="code" id="rlz8JBASQ5nO"
df_fake[num_features] = df_fake[num_features] * (df_raw[num_features].max().values - df_raw[num_features].min().values ) + df_raw[num_features].mean().values
# + colab={} colab_type="code" id="jmg-f2Ovenim"
for feat,dist in catFeatureMap.items():
for key, val in dist.items():
lower_bound = val[2]
upper_bound = val[3]
if key == list(dist.keys())[-1]:
df_fake.loc[df_fake[feat] >= lower_bound, 'temp'] = key
else:
df_fake.loc[ df_fake[feat].abs().between(lower_bound, upper_bound), 'temp'] = key
df_fake.loc[:,feat] = df_fake['temp']
df_fake = df_fake.drop(columns='temp')
# + colab={} colab_type="code" id="HNBfOBhkQ5nY"
df_fake = cast_datatype(df_fake, col_dtype)
# + colab={} colab_type="code" id="cTcPh7EkQ5nc"
df_fake.iloc[idx_syn, :].to_clipboard()
# + colab={} colab_type="code" id="4MDWH-bUQ5nh" outputId="745d82a1-7690-475e-ef67-699564dc1d3a"
df_raw['income'].sort_values().hist()
# + colab={} colab_type="code" id="9LJffP1tQ5nl" outputId="b6e2522c-3100-490d-ab7d-6a2c1113d174"
df_fake['income'].sort_values().hist()
# + colab={} colab_type="code" id="V3lvOd5zQ5nq" outputId="9a82d3d5-2c1d-4400-f3c6-5e8e4dfcc4df"
# df_fake['App Date'].hist()  # 'App Date' is not a column of the adult data (leftover from another dataset); would raise a KeyError
# + colab={} colab_type="code" id="1wVeppGxQ5nv" outputId="9261a4c2-bad0-43e8-d37d-f8f27e5a745b"
# df_raw['App Date'].hist()  # same: 'App Date' does not exist in the adult data
# + colab={} colab_type="code" id="Ub7mU8XoQ5n1"
df_fake_100k = df_fake.copy()
# + colab={} colab_type="code" id="9Z9jvHYdQ5n6"
df_fake_10k = df_fake.copy()
# + colab={} colab_type="code" id="4ap8URy7Q5n-" outputId="61be6791-2911-4534-bba3-e688b33a09f7"
#df_raw.income.hist(alpha=1, label='Original', color='green', stacked=False)
#df_fake.income.hist(alpha=1, label='Synthetic', stacked=False)
inc_hist = np.array([df_raw.income, df_fake_10k.income,df_fake_100k.income]).T
category = ['Original', 'GAN10K','GAN100K',]
pyplot.hist(inc_hist, histtype='bar', label=category)
pyplot.legend(loc='upper right')
pyplot.title('Income')
# + colab={} colab_type="code" id="OXD9IoyeQ5oC" outputId="d00a8ac7-708e-4039-ce5e-665347036c8c"
column = 'hours.per.week'
hist_ = np.array([df_raw[column], df_fake_10k[column],df_fake_100k[column]]).T
category = ['Original', 'GAN10K','GAN100K',]
pyplot.hist(hist_, histtype='bar', label=category)
pyplot.legend(loc='upper right')
pyplot.title(column)
# + colab={} colab_type="code" id="aJhphm4AQ5oG" outputId="2df45d5b-25a6-401d-e3aa-5aa1fbce6a71"
df_raw.income.hist()
# + colab={"base_uri": "https://localhost:8080/", "height": 726} colab_type="code" id="dZBh_KsY0aTy" outputId="1171d910-858a-47dc-9cb4-56163b82802d"
sns.heatmap(df.corr(), xticklabels = df.columns, yticklabels = df.columns, center = 0,vmin=-1, vmax=1.0, cmap="BrBG")
# + colab={} colab_type="code" id="GoWXjoFKQ5oS" outputId="973798b9-b599-42d5-dbfa-dcecc1d47366"
sns.heatmap(df_fake_norm.corr(), xticklabels = df_fake_norm.columns, yticklabels = df_fake_norm.columns, center = 0, vmin = -1, vmax=1.0, cmap="BrBG")
# + colab={} colab_type="code" id="sNQzlAtuQ5oY" outputId="e2a3e888-dd5b-4537-8ac3-2da5b270b52c"
sns.heatmap(df.corr() - df_fake_norm.corr(), center = 0, cmap="BrBG")
# + colab={} colab_type="code" id="b_yO-OmGQ5oc"
df_fake_cd = df_fake_100k.copy()
df_ori_cd = df_raw.copy()
for col in cat_features:
labelEncoder = LabelEncoder()
df_fake_cd.loc[:, col] = labelEncoder.fit_transform(df_fake_cd.loc[:,col])
df_ori_cd.loc[:, col] = labelEncoder.fit_transform(df_ori_cd.loc[:,col])
# + colab={} colab_type="code" id="vfyTYgONQ5oo" outputId="a94e99c9-6afd-49f1-c2ee-417ca4e1f3cf"
corr = df_fake_cd.corr()
mask = np.zeros_like(corr, dtype=bool)  # np.bool was removed from NumPy; use the builtin bool
mask[np.triu_indices_from(mask)] = True
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr, mask=mask, square=True, linewidths=.5,center = 0,vmin=-1, vmax=1.0, cmap="RdBu_r")
# + colab={} colab_type="code" id="lO3bSO4ZQ5ot" outputId="5afb4409-508e-4b44-8e1b-10e33f47d288"
df_fake_100k[np.logical_and(df_fake_100k.age == 29, df_fake_100k['capital.gain'] == 14)]
# + colab={} colab_type="code" id="D3RmLRHGQ5ox" outputId="36dbf075-ecee-462d-bb53-a09d48dd9bec"
df_fake_cd[np.logical_and(df_fake_cd.age == 29, df_fake_cd['capital.gain'] == 14)]
# + colab={} colab_type="code" id="C93WMWb0Q5o2" outputId="b7f701c5-e1d7-45aa-d2fe-11f40eab8ea1"
df_raw[df_raw.age == 29]
# + colab={} colab_type="code" id="7VNXULcZQ5pA" outputId="2e0b66e5-4ac0-4d7e-f473-cb9d5177fa26"
#for col in cat_features:
# labelEncoder = LabelEncoder()
# df_ori_cd.loc[:, col] = labelEncoder.fit_transform(df_ori_cd.loc[:,col])
corr = df_ori_cd.corr()
mask = np.zeros_like(corr, dtype=bool)  # np.bool was removed from NumPy; use the builtin bool
mask[np.triu_indices_from(mask)] = True
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr, mask=mask, square=True, linewidths=.5,center = 0,vmin=-1, vmax=1.0, cmap="RdBu_r")
# + [markdown] colab_type="text" id="OPnwQQoJQ5pG"
# # Utility - XGBoost Classification
# + colab={} colab_type="code" id="7E8O3UncQ5pH"
X_real = df_raw.iloc[:,:-1]
y_real = df_raw.iloc[:,-1]
X_fake = df_fake.iloc[:,:-1]
y_fake = df_fake.iloc[:,-1]
# + colab={} colab_type="code" id="AtkQT90iQ5pO"
from sklearn.preprocessing import LabelEncoder
for col in cat_features:
labelEncoder = LabelEncoder()
if col != 'income':
X_fake.loc[:, col] = labelEncoder.fit_transform(X_fake.loc[:,col])
else:
y_fake = labelEncoder.fit_transform(y_fake)
# + colab={} colab_type="code" id="ReE84sbxQ5pS"
for col in cat_features:
labelEncoder = LabelEncoder()
if col != 'income':
X_real.loc[:, col] = labelEncoder.fit_transform(X_real.loc[:,col])
else:
y_real = labelEncoder.fit_transform(y_real)
# + colab={} colab_type="code" id="S31B_aACQ5pX"
seed = 7777
test_size = 0.2
X_train_real, X_test_real, y_train_real, y_test_real = train_test_split(X_real, y_real, test_size=test_size, random_state=seed)
X_train_fake, X_test_fake, y_train_fake, y_test_fake = train_test_split(X_fake, y_fake, test_size=test_size, random_state=seed)
# + [markdown] colab_type="text" id="fLYRrpCgQ5pa"
# ### Train and Test on Original
# + colab={} colab_type="code" id="Zc3ERB1FQ5pb" outputId="bdb089d0-220c-4a42-a637-818bb09a2a77"
# fit model on training data
from xgboost import XGBClassifier
xgb_model = XGBClassifier()
xgb_model.fit(X_train_real, y_train_real)
# + colab={} colab_type="code" id="DwxBMIlcQ5pf" outputId="92f0b740-da23-48fd-b84b-694e44976843"
# make predictions for test data
y_pred_real = xgb_model.predict(X_test_real)
predictions = [round(value) for value in y_pred_real]
# evaluate predictions
accuracy = accuracy_score(list(y_test_real), predictions)
print("Accuracy: ", accuracy * 100.0)
# + [markdown] colab_type="text" id="tlAxklinQ5pl"
# ### Train on Synthetic and Test on Original
# + colab={} colab_type="code" id="R-miDiAKQ5pm" outputId="eceaee37-8fd3-4ddf-9fed-1036899d8288"
# fit model on training data
xgb_model = XGBClassifier()
xgb_model.fit(X_train_fake, y_train_fake)
# + colab={} colab_type="code" id="wqjvrIOBQ5pp" outputId="c9c0ceb7-59fe-4792-a7bd-8f7c3801bf85"
# make predictions for test data
y_pred_real = xgb_model.predict(X_test_real)
predictions = [round(value) for value in y_pred_real]
# evaluate predictions
accuracy = accuracy_score(list(y_test_real), predictions)
print("Accuracy: ", accuracy * 100.0)
# + [markdown] colab_type="text" id="mjtAnl1jQ5ps"
# ### Train and test on Synthetic
# + colab={} colab_type="code" id="XZlcuvoWQ5px" outputId="8de833c9-ffd3-4893-99a3-949337d8a5e6"
# make predictions for test data
y_pred_fake = xgb_model.predict(X_test_fake)
predictions = [round(value) for value in y_pred_fake]
# evaluate predictions
accuracy = accuracy_score(list(y_test_fake), predictions)
print("Accuracy: ", accuracy * 100.0)
# + colab={} colab_type="code" id="1Hf59ltcQ5p1" outputId="66849696-c119-46aa-e790-b010275f2f8e"
# fit model on training data
xgb_model = XGBClassifier()
xgb_model.fit(X_train_fake, y_train_fake)
# make predictions for test data
y_pred_fake = xgb_model.predict(X_test_fake)
predictions = [round(value) for value in y_pred_fake]
# evaluate predictions
accuracy = accuracy_score(list(y_test_fake), predictions)
print("Accuracy: ", accuracy * 100.0)
# + colab={} colab_type="code" id="5FJfeYvIQ5p7" outputId="6f6a9c2d-3c6c-47c3-f960-d0f4346a10c7"
#phik.report.correlation_report(df)
sns.heatmap(df_raw.phik_matrix(), xticklabels = df_raw.columns, yticklabels = df_raw.columns, cmap=sns.cm.rocket_r)
# + colab={} colab_type="code" id="7D-WxPBDQ5p-" outputId="0354c8c1-d096-4c9b-9dae-60e6894383f6"
sns.heatmap(df_fake.phik_matrix(), xticklabels = df_fake.columns, yticklabels = df_fake.columns, cmap=sns.cm.rocket_r)
# + colab={} colab_type="code" id="3xaJlp-lQ5qB"
#df_fake_norm.to_csv('License_Fake_Norm.csv', index=False)
#df_fake.to_csv('License_Fake.csv', index=False)
#pd.DataFrame.from_dict(catFeatureMap).to_csv('Feature_Map.csv', index=False)
pd.DataFrame(shortest_dist).to_csv('Adult_10k_Distance.csv', index=False)
# + [markdown] colab_type="text" id="eMZgwggWQ5qE"
# ## Record Linkage
# + colab={} colab_type="code" id="nAyLETgvQ5qE"
# #!pip install recordlinkage
# + colab={} colab_type="code" id="xNpxAjNxQ5qI" outputId="89c83f15-db85-428b-a07a-26431d508ecd"
import recordlinkage
indexer = recordlinkage.Index()
indexer.full()
pairs = indexer.index(df_raw, df_fake)
# + colab={} colab_type="code" id="jku3kxxqQ5qK"
# Comparing Step
compare_cl = recordlinkage.Compare()
for col in colnames:
compare_cl.numeric(col, col)
rec_features = compare_cl.compute(pairs, df_raw, df_fake)
# + colab={} colab_type="code" id="7ux5n6gWQ5qN" outputId="2bc6f734-1ff3-477b-c793-4a67f49c54cc"
# Classification step
for i in np.arange(3, len(colnames)):
matches = rec_features[rec_features.sum(axis=1) >= i]
print("Pairs with >= %d matching fields: %d" % (i, len(matches)))
# + colab={} colab_type="code" id="ke0DKD_6Q5qS"
| Scripts/.ipynb_checkpoints/GAN_Synthetic_Dataset_Adult_20190901-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# import numpy
# %pylab inline
# # cd to the working directory (%cd persists in the kernel; !cd only changes a throwaway subshell)
# %cd /share/lamost/dr7/slam_tutorial/
# read parameters
from astropy.table import Table
params = Table.read("./params.fits")
# take a look at the parameters
plot(params["teff"], params["logg"],'.')
# check filepath
import os
for i in range(len(params)):
if not os.path.exists("./spectra/"+params[i]["fps"]):
print(i)
# an example of reading spectra
from laspec.mrs import MrsSpec
ms = MrsSpec.from_lrs("./spectra/"+params["fps"][149], deg=10)
# fig = figure(figsize=(15, 7))
ms.plot_norm()
# define wavelength grid
wave = np.arange(4000, 5500, 1.)
npix = len(wave)
nobs = len(params)
# +
# import joblib
# # read spectra
# ms_list = joblib.Parallel(n_jobs=-1, verbose=10)(
# joblib.delayed(MrsSpec.from_lrs)("./spectra/"+params["fps"][i], deg=10) for i in range(nobs))
# # save spectra
# joblib.dump(ms_list, "ms_list.dump")
# +
# load spectra
import joblib
ms_list = joblib.load("ms_list.dump")
flux_norm = np.array([np.interp(wave, ms.wave, ms.flux_norm) for ms in ms_list])
ivar_norm = np.array([np.interp(wave, ms.wave, ms.flux_norm_err**-2) for ms in ms_list])
# -
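The `np.interp` calls above put every spectrum onto the common wavelength grid. A minimal sketch of that resampling, with made-up sample points rather than real LAMOST spectra:

```python
import numpy as np

# Hypothetical irregular wavelength sampling and flux values
obs_wave = np.array([4000.0, 4500.0, 5000.0, 5500.0])
obs_flux = np.array([1.0, 0.8, 0.9, 1.0])

# Common grid, as in the tutorial
grid = np.arange(4000, 5500, 1.0)

# Linear interpolation onto the grid; points outside the observed
# range are clamped to the edge values by default.
flux_on_grid = np.interp(grid, obs_wave, obs_flux)
print(flux_on_grid.shape)  # (1500,)
```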
# take a look at spectra
plot(wave, flux_norm[::100].T)
# initiate slam
from slam import Slam
s = Slam(wave, # wavelength grid
tr_flux=flux_norm, # training flux
tr_ivar=ivar_norm, # training ivar
tr_labels=np.array(params["teff","logg", "feh"].to_pandas()), # training labels
scale=True, # if True, scale flux and labels. default is True
robust=False, # if True, use robust scaling. default is True --> this is to tackle the cosmic rays
mask_conv=(1, 2), # lower and upper limits of the kernel used in convolution
flux_bounds=(0.001, 100.0), # flux bounds.
ivar_eps=0, # slam will eliminate the pixels whose ivar<ivar_eps
)
print(s)
# train pixels
# 1. define the hyperparameter grid
pgrid = {"C":[-1,0,1],"gamma":[1/3,],"epsilon":[0.05,],}
# 2. train pixels
s.train_pixels(
profile=None, targets='all', temp_dir=None, sample_weight_scheme='bool', # usually you can leave these unchanged
model='svr', # model type: svr/nn
method='grid', # simple/grid. if simple, specify hyperparameter values; if grid, specify the grid
param_grid=pgrid, # the grid
cv=3, # cv fold
scoring='neg_mean_squared_error',
n_jobs=-1, verbose=5) # parallel
# get initial estimate of parameters by chi2 best match
Xinit = s.predict_labels_quick(s.tr_flux, s.tr_ivar)
# optimize parameters
Rpred = s.predict_labels_multi(Xinit[::100], s.tr_flux[::100], s.tr_ivar[::100])
Xpred = np.array([_["x"] for _ in Rpred])
# compare labels
from slam.diagnostic import compare_labels
fig = compare_labels(s.tr_labels[::100], Xpred, labelname1="Input", labelname2="Output")
# check MSE (goodness of training)
fig = figure(figsize=(15, 6))
plot(s.wave, np.median(s.tr_flux, axis=0), label="median spectrum")
plot(s.wave, -s.nmse, label="MSE")
xlim(4000, 5500)
ylim(0, 1.1)
legend()
# +
# have fun with slam!
# -
| doc/slam_tutorial_2020.08.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/usmandroid/ML_Comm_TUM/blob/main/tut01.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="D65Okar_b8-y"
# # Tutorial 1: Introduction to Python and Equalizer in Pytorch
# October 21st, 2021
#
# In this tutorial, we will introduce the basic principles of Python and Pytorch.
#
# ## Table of Contents
#
#
# * 1) Python Intro:
# * Arrays
# * Loops and Conditions
# * Print formatting
# * Plotting
# * 2) Pytorch Intro:
# * Tensors
# * 3) Equalizer Example
#
#
# + [markdown] id="d5dt4143kZLv"
# ## Jupyter Notebooks
#
# We will use Jupyter Notebooks during the whole semester.
#
# * Jupyter Notebooks are composed of cells that can contain text (like this cell), images, LaTeX, **code**,...
# * Connects to a Python runtime environment in the background to execute code
#
#
#
# + [markdown] id="Syh7AHl3oo7I"
# ## 1) Python Intro
# ### Numpy Arrays
# + id="E_Vk6_2boo7J"
import numpy as np
# + colab={"base_uri": "https://localhost:8080/"} id="wPAMorYioo7K" outputId="fea25674-7bee-4acc-ba85-652d2974f82d"
a = np.array([[1, 2, 3], [4, 5, 6]])
print('Visualize the array: ')
print(a)
print('Number of elements in the array: ', a.size)
print('Shape of the array: ', a.shape)
print('Data type of the array: ', a.dtype)
# + colab={"base_uri": "https://localhost:8080/"} id="GImA-Jkioo7L" outputId="09cf6b2c-d5b7-4b76-9235-e528140d5f59"
a = np.array([[1., 2., 3.], [4., 5., 6.]])
a.dtype
# + [markdown] id="Nnh5P7wboo7L"
# #### Indexing, Slicing, Reshaping
# + colab={"base_uri": "https://localhost:8080/"} id="pkrPQyxFoo7M" outputId="9d52a612-6f30-4d1b-e315-527c41ea31c9"
print('Access a single element: ', a[1,2])
print('Access a row: ', a[0,:])
print('Access a column: ', a[:,1])
# + colab={"base_uri": "https://localhost:8080/"} id="AF4l1s_Aoo7N" outputId="c001bc7a-26cd-40b0-8ab6-620a8cac79ed"
print(np.reshape(a, (3,2)))
print(np.reshape(a, (-1,6)))
# + [markdown] id="epaD0NZaoo7N"
# #### Passing by reference
# + colab={"base_uri": "https://localhost:8080/"} id="fsdnqGGEoo7N" outputId="25261851-9182-4347-e46d-6b52c58905ba"
print('Array a before reference: ', a)
b = a[:,0]
b[:] = 8
print('Array a after reference: ', a)
# + colab={"base_uri": "https://localhost:8080/"} id="MGVr5pYroo7O" outputId="e9a50e57-a968-47f8-e444-bfec33adc26d"
# if real copies are needed
a[:,0] = [1, 4]
b = a[:,0].copy()
b[:] = 8
print(a)
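When in doubt whether an operation returned a view or a copy, `np.shares_memory` gives a direct check. This small sketch is an aside, not part of the original tutorial:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
view = a[:, 0]         # basic slicing returns a view: same underlying memory
copy = a[:, 0].copy()  # .copy() returns an independent buffer

print(np.shares_memory(a, view))  # True
print(np.shares_memory(a, copy))  # False
```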
# + [markdown] id="ZtQXmW-Coo7O"
# ### Loops and conditions
# Be careful of the 0-indexing!
# + colab={"base_uri": "https://localhost:8080/"} id="T50V7R5Goo7O" outputId="6cc42208-e134-43e4-cb40-d0c91f21f26e"
for i in range(5):
print(i)
# + colab={"base_uri": "https://localhost:8080/"} id="Bl72xXRboo7P" outputId="41169a49-bfc7-43cf-94ca-9baeddf4a9c4"
if a[0,0]==0:
print('first element is zero')
elif a[0,0]==1:
print('first element is one')
else:
    print('first element is neither zero nor one')
# + [markdown] id="OLOlEPTWoo7P"
# ### Print formatting
# It is important to have feedback from the code.
# + colab={"base_uri": "https://localhost:8080/"} id="R_d3AE09oo7P" outputId="cf3d9db2-1cdd-4663-99c8-98e550dbf068"
n = 10
m = 3
print('The result of the division between {:d} and {:d} is {:.2f}.'.format(n,m,n/m))
print(f'The result of the division between {n :d} and {m :d} is {n/m :.2f}.')
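A couple of further format specifications that often come in handy (zero padding and percentages); the values are purely illustrative:

```python
n = 10
m = 3

# Fixed width with zero padding
s1 = '{:05d}'.format(n)
# Ratio rendered as a percentage with no decimals
s2 = f'{m / n:.0%}'

print(s1)  # 00010
print(s2)  # 30%
```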
# + [markdown] id="jzBGogXaoo7Q"
# ### Plotting
# + id="luVqieDwoo7Q"
import matplotlib.pyplot as plt
# + [markdown] id="Pwz-sKTzoo7Q"
# #### Signal time plot
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="mH8It1GMoo7Q" outputId="3c61cdcc-effc-4465-f3ae-225d92f892c4"
t = np.arange(0,10,0.1)
y = np.sin(t)
plt.plot(t,y, label='y=sin(t)')
plt.grid()
plt.xlabel('time t')
plt.legend()
# + [markdown] id="q_cD0bT4oo7R"
# #### Constellation scatterplot
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="4YDYLdwyoo7R" outputId="fcd23ba9-8756-4157-890a-3342f7eddcd8"
constellation = [1+1j, -1+1j, -1-1j, 1-1j]
x = np.random.choice(constellation, 1000)
noise = np.random.normal(0, 0.2, size=(1000,)) + 1j* np.random.normal(0, 0.2, size=(1000,))
y = x + noise
plt.scatter(np.real(y), np.imag(y))
plt.scatter(np.real(x), np.imag(x), color='r')
plt.grid()
# + [markdown] id="FiL3-dhuoo7R"
# #### Histogram
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="VMZQRiBRoo7R" outputId="965ffec9-310a-4022-c292-cef4a32f9797"
x = np.random.normal(0,1,100000)
plt.hist(x, bins=30);
plt.grid()
# + [markdown] id="uZILepF7oo7S"
# ## 2) Pytorch Intro
# Let's look at some basic manipulations of tensors.
# + id="bX4_trDHoo7S"
import torch
# + colab={"base_uri": "https://localhost:8080/"} id="lz0kJg30oo7S" outputId="ab24ca3c-848b-4fba-d7d3-c73c40f2333e"
a = torch.tensor([5., 3.])
print(a)
print(a.dtype)
# + [markdown] id="19qeKiFsoo7S"
# 32-bit floating point is the default data type in Pytorch. If desired, it can be overridden.
# + colab={"base_uri": "https://localhost:8080/"} id="N5EuCc_Ioo7T" outputId="4b32c470-e483-4895-9ac7-c3863ea34eea"
a = torch.tensor([5, 3], dtype=torch.int16)
print(a)
# + [markdown] id="EgePpfOloo7T"
# #### Pass by reference
# Numpy's ndarrays and Pytorch's tensors are highly compatible and it is easy to switch between them.
# torch.from_numpy() creates an object with the same underlying memory, which means that changes to the new tensor are reflected in the ndarray.
# + colab={"base_uri": "https://localhost:8080/"} id="YuTG9VaKoo7T" outputId="d8365a7f-58d1-46de-f28e-7b26ba1aab55"
a = np.array([[1,2],[3,4]])
print(a)
b = torch.from_numpy(a[:,0])
print(b)
b[:] = 8
print(a)
# + [markdown] id="0NPNEl-Goo7T"
# The conversion works also in the other direction, with the same rule.
#
# + colab={"base_uri": "https://localhost:8080/"} id="JXjeakPioo7T" outputId="4d04acb9-988f-4a83-c602-c9cade88aa7b"
b = torch.rand(2,3)
print(b)
a = b.numpy()
print(a)
# + [markdown] id="PpzMmh07oo7U"
# #### Copy from NumPy
# If we do not wish the two objects to use the same underlying memory, torch.tensor() creates a copy of the data.
# + colab={"base_uri": "https://localhost:8080/"} id="CLdTfHiLoo7U" outputId="e15e572e-6617-4f6b-bc7e-557fd0744161"
a = np.array([[1,2],[3,4]])
b = torch.tensor(a)
print(b)
b[0,0]=8
print(a)
# + [markdown] id="X9A7y-yAoo7U"
# #### Maths
# Tensors perform mathematical and arithmetic operations intuitively, very much like Numpy.
# + colab={"base_uri": "https://localhost:8080/"} id="aqFyd-tsoo7U" outputId="580d38c0-c638-430a-a2da-cbfe3b59dd56"
ones = torch.ones(2,3)
print(ones)
twos = ones * 2
print(twos)
threes = ones + twos
print(threes)
# + colab={"base_uri": "https://localhost:8080/"} id="B5uJCK2qoo7U" outputId="ceffc649-30f4-4903-dd7c-34b7d96b5059"
m = torch.rand((10,10)) # matrix of random numbers between 0 and 1
print('Max value of the matrix: ', torch.max(m))
print('Mean value of the matrix: ', torch.mean(m))
print('Determinant of the matrix: ', torch.det(m))
# + [markdown] id="fK9NdEiMTnAV"
# ## 3) Equalizer example
# + id="s7de147xoo7U"
from torch import nn, optim
import matplotlib.pyplot as plt
# + id="Tf6OcDDSoo7V"
def downsample_td(signal, down):
assert len(signal.shape)==2, 'signal format [number_dimensions][signal_length] expected'
return down * signal[:, ::down]
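As a quick sanity check, `downsample_td` is restated below so the sketch is self-contained; the slicing works identically for NumPy arrays and tensors. Every `down`-th sample is kept and the amplitude is rescaled by `down`:

```python
import numpy as np

def downsample_td(signal, down):
    # Keep every `down`-th sample and compensate the amplitude
    assert len(signal.shape) == 2, 'signal format [number_dimensions][signal_length] expected'
    return down * signal[:, ::down]

x = np.arange(8.0).reshape(1, -1)  # [[0, 1, 2, ..., 7]]
y = downsample_td(x, 2)
print(y)  # [[ 0.  4.  8. 12.]]
```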
# + id="GSs_8vP4oo7V"
# Initialize Filter
num_taps = 41
nn_filter = nn.Conv1d (in_channels=1 ,
out_channels=1 ,
kernel_size=num_taps,
padding='same' )
# + id="ibr1w-uDPlXO"
# Import data
# https://drive.google.com/file/d/18wBNg-3RH0waZ-PhJAsrS99PrqOODKYU/view?usp=sharing
# https://drive.google.com/file/d/1a3f16dFKTgr_K7zKCZfLIaZYfAz0__Cd/view?usp=sharing
# !wget -O x.txt "https://drive.google.com/uc?export=download&id=18wBNg-3RH0waZ-PhJAsrS99PrqOODKYU"
# !wget -O y.txt "https://drive.google.com/uc?export=download&id=1a3f16dFKTgr_K7zKCZfLIaZYfAz0__Cd"
# + id="Lw8kIEuaoo7V"
# Prepare data
x = np.loadtxt('x.txt')
y = np.loadtxt('y.txt')
y_t = torch.Tensor(y.reshape(1, 1, -1))
x_t = torch.Tensor(x.reshape(1, -1))
# + id="z-YJDuQtoo7V"
# Define loss function and optimizer
loss_fn = nn.MSELoss()
optimizer = optim.Adam(nn_filter.parameters())
# + colab={"base_uri": "https://localhost:8080/"} id="zOyVHAHMoo7V" outputId="da321238-235d-444d-bdad-b509767425b4"
[p for p in nn_filter.parameters()]
# + colab={"background_save": true, "base_uri": "https://localhost:8080/"} id="SpRt0vKSoo7V" outputId="d570172e-daaf-493e-dc72-9a80d9dd6dd9"
# Training loop
for j in range(1000):
x_hat = nn_filter(y_t).reshape(1, -1)
x_hat = downsample_td(x_hat, 2)
loss = loss_fn(x_hat, x_t)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if j % 50 == 0:
print(f'epoch {j}: Loss = {loss.detach().numpy() :.4f}')
# + colab={"background_save": true} id="yrDaMx9aoo7W" outputId="f4f7a0fa-345c-4f98-e865-0c43bf820b6b"
plt.hist(downsample_td(y.reshape(1,-1), 2).reshape(-1), bins=np.arange(-5, 5, 0.1))
plt.grid()
# + colab={"background_save": true} id="y_2tv8AIoo7W" outputId="31c9b164-f390-4f2b-b2dc-92c485376cc6"
plt.hist(x_hat.detach().numpy().reshape(-1), bins=np.arange(-5, 5, 0.1));
plt.grid()
# + colab={"background_save": true} id="V_Fg9Xm3oo7W"
# + colab={"background_save": true} id="2HB2bkhioo7W"
| tut01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/cwl286/ncode-crawler/blob/main/ncode.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Q5sz5PU03CWV"
# # Bulk-download from Kakuyomu and build epub/mobi
#
# - Create novel epub & mobi from https://ncode.syosetu.com/
# - Mount Google Drive
# - Load htmls from Google Drive (saved in the last time processing if exist)
# - (Starting from the last episode) or from the 1st episodes
# - Save novel's htmls and parsed htmls into "syosetu" folder in Google Drive
# - Save epub and mobi into "syosetu/epub" and "syosetu/mobi" folder in Google Drive
#
# e.g. https://ncode.syosetu.com/n4698cv/
# e.g. ncode: n4698cv
#
# + id="ArjDw68G3BSP" cellView="form"
##### INPUT AREA
#@title INPUT { run: "auto" }
NCODE="n8045dg" #@param {type:"string"}
# + id="dvXqj6RBHlph"
##############################
# Mount gdrive
##############################
from google.colab import drive
drive.mount("/content/gdrive/")
# + id="SImGG2-JAnax"
##############################
##### Initialize variables
##############################
from bs4 import BeautifulSoup
import glob
import os
import subprocess
import requests
import shutil
# Init vars
CHAPTER_BEG = 1
CHAPTER_NUM = CHAPTER_BEG + 1
TITLE = ""
CREATOR = ""
BASE_URL=f"https://ncode.syosetu.com/{NCODE}/"
ORG_DIR=f"{NCODE}_org"
DRIVE_DIR = "syosetu"
EPUB_NAME=f"{TITLE}.epub"
MOBI_NAME=f"{TITLE}.mobi"
# Make colab directories
# !mkdir -p $NCODE
# !mkdir -p $ORG_DIR
# + id="6Ck2cr-j-qQh"
##############################
# Create google drive directories
##############################
try:
os.mkdir(f"/content/gdrive/My Drive/{DRIVE_DIR}")
except Exception as e:
print(e)
# Create dir for parsed ncodes
try:
os.mkdir(f"/content/gdrive/My Drive/{DRIVE_DIR}/{NCODE}")
except Exception as e:
print(e)
# Create dir for org ncodes
try:
os.mkdir(f"/content/gdrive/My Drive/{DRIVE_DIR}/{ORG_DIR}")
except Exception as e:
print(e)
# Create dir for epub output
try:
os.mkdir(f"/content/gdrive/My Drive/{DRIVE_DIR}/epub")
except Exception as e:
print(e)
# Create dir for mobi output
try:
os.mkdir(f"/content/gdrive/My Drive/{DRIVE_DIR}/mobi")
except Exception as e:
print(e)
# + id="7NdcY8RR-33j"
##############################
# Clone google drive files to colab for CHAPTER_BEG
##############################
for html_path in glob.glob(f"/content/gdrive/My Drive/{DRIVE_DIR}/{NCODE}/*.html"):
shutil.copy(html_path, f"{NCODE}/{os.path.basename(html_path)}")
# + id="G0c6pd-qsZDk"
##############################
##### Download main.html to update TITLE, CREATOR, CHAPTER_NUM
##############################
# !curl $BASE_URL > main.html
with open("main.html") as f:
# query TITLE, CREATOR, CHAPTER_NUM
text1 = f.read()
soup1 = BeautifulSoup(text1, 'html.parser')
TITLE = str(soup1.title.string)
CREATOR = str(soup1.find("div", class_="novel_writername").text)
CHAPTER_NUM= len(soup1.find_all("dl", class_="novel_sublist2"))
os.remove("main.html")
# set variables
CHAPTER_BEG = len(glob.glob(f'{NCODE}/*.html')) + 1
EPUB_NAME=f"{TITLE}.epub"
MOBI_NAME=f"{TITLE}.mobi"
# + id="8Pj0dZhyDHZV"
##############################
##### Print parameters
##############################
print(EPUB_NAME)
print(CREATOR)
print([CHAPTER_BEG, CHAPTER_NUM])
# + id="3NadV3sTeCVQ"
for i in range(CHAPTER_BEG, CHAPTER_NUM + 1):
url = f"{BASE_URL}{i}/"
print(f"downloading {i}/{CHAPTER_NUM} : {url}")
    file_name = f"{ORG_DIR}/{i:05d}.html"  # download into the _org dir
# !curl $url > $file_name
# + id="dEdrIT2fgUnu"
##############################
##### Parse HTML
##############################
TEMPLATE = """
<html>
<head>
<meta charset="UTF-8">
{0}
</head>
<body>
<h1>{1}</h1>
{2}
<hr/>
{3}
</body>
</html>
"""
def extract_article(fname):
with open(f"{ORG_DIR}/{fname}") as f:
text = f.read()
with open(f"{NCODE}/{fname}", "w") as f:
soup = BeautifulSoup(text, 'html.parser')
f.write(TEMPLATE.format(str(soup.title),
str(soup.find("p", class_="novel_subtitle").string),
str(soup.find(id="novel_honbun")).replace("<br/>", ""),
str(soup.find(id="novel_attention"))
)
)
fnames = [os.path.basename(f) for f in glob.glob(f'{ORG_DIR}/*.html')]
[extract_article(f) for f in fnames]
# + id="PzwEHtSBDLWP"
##############################
##### Convert HTML to epub
##############################
meta1 = f'--metadata=title:"{TITLE}"'
meta2 = f'--metadata=author:"{CREATOR}"'
meta3 = f'--metadata=lang:"ja"'
html_paths = sorted(glob.glob(f'{NCODE}/*.html'))
cmd = ['pandoc', '-o', EPUB_NAME, meta1, meta2, meta3]
cmd.extend(html_paths)
subprocess.call(cmd)
# + id="jNZRSN-PcORY"
##############################
##### Install if needed https://calibre-ebook.com/download_linux
##############################
# !sudo -v && wget -nv -O- https://download.calibre-ebook.com/linux-installer.sh | sudo sh /dev/stdin
# + id="k0hPUPF0XhpQ"
##############################
##### Convert epub to mobi
##### Refresh "Files" when done
##############################
cmd = ["ebook-convert",EPUB_NAME,MOBI_NAME]
subprocess.call(cmd)
# + id="teykVzGxdD3r"
##############################
# Copy colab files to google drive
##############################
for html_path in glob.glob(f'{NCODE}/*.html'):
shutil.copy(html_path, f"/content/gdrive/My Drive/{DRIVE_DIR}/{html_path}")
for html_path in glob.glob(f'{ORG_DIR}/*.html'):
shutil.copy(html_path, f"/content/gdrive/My Drive/{DRIVE_DIR}/{html_path}")
# + id="XgnbPD0veCXL"
if os.path.exists(f"/content/gdrive/My Drive/{DRIVE_DIR}/epub/{EPUB_NAME}"):
os.remove(f"/content/gdrive/My Drive/{DRIVE_DIR}/epub/{EPUB_NAME}") # remove old epub
shutil.copy(EPUB_NAME, f"/content/gdrive/My Drive/{DRIVE_DIR}/epub")
# + id="tSxBDx3JeCop"
if os.path.exists(f"/content/gdrive/My Drive/{DRIVE_DIR}/mobi/{MOBI_NAME}"):
os.remove(f"/content/gdrive/My Drive/{DRIVE_DIR}/mobi/{MOBI_NAME}") # remove old mobi
shutil.copy(MOBI_NAME, f"/content/gdrive/My Drive/{DRIVE_DIR}/mobi")
| ncode.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework 1
# ## Due: January 22, 2018, 8 a.m.
#
# Please give a complete, justified solution to each question below. A single-term answer without explanation will receive no credit.
#
# Please complete each question on its own sheet of paper (or more if necessary), and upload to [Gradescope](https://gradescope.com/).
# $$
# \newcommand{\R}{\mathbb{R}}
# \newcommand{\dydx}{\frac{dy}{dx}}
# \newcommand{\proj}{\textrm{proj}}
# % For boldface vectors:
# \renewcommand{\vec}[1]{\mathbf{#1}}
# $$
# 1\. a\. Let $\mathcal{S}$ be the solid region in the first octant bounded by the coordinate planes and the planes $x = 3$, $y = 3$, and $z = 4$ (including points on the surface of the region). Sketch, or describe the shape of, the solid region $\mathcal{E}$ consisting of all points that are at most 1 unit of distance from some point in $\mathcal{S}$. Also find the volume of $\mathcal{E}$.
#
# b\. Write an equation that describes the set of all points that are equidistant from the origin and the point $(2, -1, -2)$. What does this set look like?
#
# 2\. Consider the points $A = (0, -3, -1)$ and $B = (1, 2, -2)$. Let $O$ denote the origin, $(0, 0, 0)$.
#
# a\. Let $M$ denote the midpoint of the line segment $\overline{AB}$. Find the vector $\overrightarrow{OM}$.
#
#
# b\. Let $N$ denote the point on the line segment $\overline{AB}$, whose distance from $A$ is a quarter of the distance between $A$ and $B$. Find the vector $\overrightarrow{ON}$.
# 3\. Consider the vectors $\mathbf a = \langle 3, 2\rangle$ and $\mathbf b = \langle 2, -1\rangle$.
#
# a\. Draw the vectors: (i) $0.5 \mathbf a + 0.5 \mathbf b$; (ii) $2 \mathbf a - \mathbf b$; and (iii) $1.5 \mathbf a - 0.5 \mathbf b$.
#
# b\. Choose any two scalars $s$ and $t$ that add up to 1. Then, draw the vector $s \mathbf a + t\mathbf b$. (Choose $s$ and $t$ so that the resulting vector is different from any of the vectors in part (a)).
#
# c\. Describe what you observe from parts (a) and (b). That is,
# describe the vectors obtained by adding $s$ times $\mathbf a$ and $t$ times $\mathbf b$, whenever $s + t = 1$.
#
# d\. Describe the vectors obtained by adding $s$ times $\mathbf a$ and $t$ times $\mathbf b$, whenever $s + t = 1$ and $s$ and $t$ are nonnegative.
# 4\. Find the vectors whose lengths and directions are given:
#
# a\. length = $\frac{1}{\sqrt{14}}$, direction = $-3 \mathbf i + 2 \mathbf j + \mathbf k$
#
# b\. length = $\frac{13}{12}$, direction = $\frac{3}{13} \mathbf i - \frac{12}{13} \mathbf j + \frac{4}{13} \mathbf k$
#
# 5\. Compute the scalar triple product $\mathbf u \cdot (\mathbf v \times \mathbf w)$, where $\mathbf u, \mathbf v, \mathbf w$ are as follows:
# \begin{eqnarray*}
# \mathbf u & = & 2\mathbf i - 2 \mathbf j + 4\mathbf k\\
# \mathbf v & = & 2\mathbf i +9 \mathbf j -\mathbf k \\
# \mathbf w & = & 4 \mathbf i + 7 \mathbf j + 3 \mathbf k.
# \end{eqnarray*}
# Then, explain how you can tell that all three vectors lie on the same plane from the value of the scalar triple product that you computed above. (_Hint:_ What is $\mathbf u$ perpendicular to?)
# 6\. Suppose that $\mathbf u$ and $\mathbf v$ are nonzero vectors in $\mathbb{R}^3$. Show that the vector $\mathbf u - \operatorname{proj}_{\mathbf v} \mathbf u$ is orthogonal to $\operatorname{proj}_{\mathbf v} \mathbf u$.
#
# **bonus** Use *this* result to find the point on the plane containing $(0,0,0)$, $(1,1,0)$, and $(0,1,1)$ that is closest to the point $(1,0,0)$.
# 7\. Consider a 100 Newton weight suspended by two wires as shown in the figure below. Find the magnitudes and the $\mathbf i$- and $\mathbf j$- components of the force vectors $\mathbf F_1$ and $\mathbf F_2$.
#
# 
#
| _teaching/2018Spring/APMA_E2000/Homework1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# Import modules
# %matplotlib inline
import os
import pylab
import random
import cPickle as pkl
import numpy as np
import pandas as pd
from scipy.misc import imread, imresize
from lasagne import layers, updates, nonlinearities
from nolearn.lasagne import NeuralNet, BatchIterator, visualize
data_dir = '../data/misc/mnist/'
model_root= '../models'
# -
# Load train and test set
train = pd.read_csv(data_dir + "train.csv")
test = pd.read_csv(data_dir + "test.csv")
# +
# Visualizing Training Dataset
i = random.randrange(0, train.shape[0])
img = np.asarray(train.ix[i, 1:])
img = img.reshape(28, 28)
pylab.imshow(img)
pylab.gray(); pylab.axis('off')
pylab.show()
#print "-----------------------------"
#print train.head(5)
#print "-----------------------------"
#print train.count()
# +
# Preprocessing step
# Normalizing image
train_labels = train.ix[:, 0].values.astype(np.int32)
train_images = train.ix[:, 1:].values.astype(np.float32)
train_images /= train_images.std(axis = None)
train_images -= train_images.mean()
test_images = test.values.astype(np.float32)
test_images /= test_images.std(axis = None)
test_images -= test_images.mean()
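The two in-place steps above amount to standardizing the pixel values (dividing by the std first does not change the outcome, since subtracting the mean afterwards leaves the std untouched). A small check on synthetic data, written in Python 3 syntax rather than the notebook's Python 2:

```python
import numpy as np

# Synthetic stand-in for the image matrix (random values, not MNIST)
rng = np.random.RandomState(0)
x = rng.rand(100, 784).astype(np.float32)

x /= x.std(axis=None)  # scale to unit standard deviation
x -= x.mean()          # shift to zero mean

print(float(x.mean()))  # approximately 0
print(float(x.std()))   # approximately 1
```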
# +
# Reshape dataset to fit to NN
X = train_images.reshape(-1, 1, 28, 28)
y = train_labels
test_x = test_images.reshape(-1, 1, 28, 28)
# -
# Setting architecture of NN
net = NeuralNet(
layers = [
('input', layers.InputLayer),
('conv1', layers.Conv2DLayer),
('pool1', layers.MaxPool2DLayer),
('dropout1', layers.DropoutLayer),
('conv2', layers.Conv2DLayer),
('pool2', layers.MaxPool2DLayer),
('dropout2', layers.DropoutLayer),
('conv3', layers.Conv2DLayer),
('output', layers.DenseLayer),
],
input_shape = (None, 1, 28, 28),
conv1_num_filters = 32, conv1_filter_size = (5, 5),
pool1_pool_size = (2, 2),
dropout1_p = 0.2,
conv2_num_filters = 32, conv2_filter_size = (5, 5),
pool2_pool_size = (2, 2),
conv3_num_filters = 32, conv3_filter_size = (4, 4),
output_num_units = 10, output_nonlinearity = nonlinearities.softmax,
batch_iterator_train = BatchIterator(batch_size = 150),
batch_iterator_test = BatchIterator(batch_size = 150),
update = updates.adam,
use_label_encoder = True,
regression = False,
max_epochs = 20,
verbose = 1,
)
# Train NN
net.fit(X, y);
# +
# Save model
with open(os.path.join(model_root, 'toy_classifier_model.pkl'), 'wb') as f:
pkl.dump(net, f, -1)
f.close()
# +
# load model
with open(os.path.join(model_root, 'toy_classifier_model.pkl'), 'rb') as f:
net = pkl.load(f)
f.close()
# -
pred = net.predict(test_x)
# Visualizing output
# %matplotlib inline
i = random.randrange(0, 28000)
img = np.asarray(test.ix[i])
img = img.reshape(28, 28)
pylab.imshow(img)
pylab.gray(); pylab.axis('off')
pylab.show()
print '--------------'
print 'PREDICTION: ', pred[i]
#visualize layer 1 weights
visualize.plot_conv_weights(net.layers_['conv1'])
visualize.plot_conv_activity(net.layers_['conv1'], test_x[i:i+1, :, :, :])
| project/toy_classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from datetime import date
import pandas as pd
import matplotlib.dates as mdates
from pvoutput import PVOutput
# +
# Set API key and System ID here, or in ~/.pvoutput.yml
API_KEY = None
SYSTEM_ID = None
pv = PVOutput(API_KEY, SYSTEM_ID)
# -
# ## Search
#
# Search for PV systems within 5km of a point in the UK.
#
# See [the PVOutput.org API docs for details of how to search](https://pvoutput.org/help.html#search).
pv_systems = pv.search(query='5km', lat=52.0668589, lon=-1.3484038)
pv_systems.head()
# ## Get PV systems for a country
#
# If you haven't paid for a commercial license, then one way to get all the PV systems for a specific country is to scrape the PVOutput.org/map.jsp website. This can be done using `pvoutput.mapscraper.get_pv_systems_for_country`:
from pvoutput.mapscraper import get_pv_systems_for_country
pv_systems_for_uk = get_pv_systems_for_country(
country='United Kingdom', # See pvoutput.consts.PV_OUTPUT_COUNTRY_CODES for all recognised strings
sort_by='capacity',
ascending=False,
max_pages=1 # For this demo, we'll just scrape the first page of the map
)
pv_systems_for_uk.head()
# ## Check how many API requests we have left this hour
pv.rate_limit_info()
# ## Get rid of systems with < 50 outputs
pv_systems = pv_systems.query('num_outputs >= 50')
pv_systems.head()
# ## Get metadata for one PV system
pv_system_id = pv_systems.index[0]
metadata = pv.get_metadata(pv_system_id)
metadata
# ## Get power generation statistics for one PV system
pv.get_statistic(pv_system_id)
# ## Get timeseries of power data for one day
#
# In one API request, PVOutput.org allows us to retrieve timeseries data for one day and one PV system. Note that you can only query by `system_id` if you have donated.
# +
# Get timeseries for 2019-07-01
DATE = date(2019, 7, 1)
status = pv.get_status(pv_system_id, date=DATE)
# The timestamps are localtime, local to the PV system
# and we know this PV system is from the United Kingdom.
status = status.tz_localize('Europe/London')
status.head()
# -
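`tz_localize` attaches a timezone to naive timestamps without shifting the clock times; a toy example on a dummy index, unrelated to the PVOutput data:

```python
import pandas as pd

# A naive (timezone-less) hourly index
idx = pd.date_range('2019-07-01 00:00', periods=3, freq='h')
s = pd.Series([0.0, 1.5, 3.0], index=idx)

# Interpret the naive timestamps as UK local time
s_uk = s.tz_localize('Europe/London')
print(s_uk.index.tz)  # Europe/London
```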
# ### Plot timeseries
# +
# Plot Solar PV power output for 2019-07-01
ax = status['instantaneous_power_gen_W'].plot(figsize=(15, 5))
ax.set_xlabel('hour (local to the PV system)')
ax.set_ylabel('watts')
ax.set_title('Solar PV power generation for system ID {} on {}'.format(pv_system_id, DATE))
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H', tz=status.index.tz))
# Plot the system size
system_size = metadata['system_AC_capacity_W']
ax.plot(ax.get_xlim(), (system_size, system_size), label='system AC capacity')
ax.legend();
# -
# ## Get batch status
#
# If you have paid for a 'PVOutput Data Services' subscription, and you've added `data_service_url` to `~/.pvoutput.yml`, then you can use the PVOutput.org download batch status API to download a year of data at a time:
batch_status = pv.get_batch_status(pv_system_id)
batch_status.head()
len(batch_status)
batch_status['instantaneous_power_gen_W'].plot(figsize=(15, 5), linewidth=0.5);
# ## Batch download multiple PV systems to disk
#
# `PVOutput.download_multiple_systems_to_disk` can download multiple PV systems, and save them to disk as an HDF5 file.
#
# See the `download_pv_timeseries.ipynb` notebook for a working example.
| examples/quick_start.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + deletable=true editable=true
import algolab
import sys
#sys.path.append('../exams/2017-01-26-lab/solutions')
sys.path.append('past-exams/2017-01-26/solutions')
algolab.init()
# + [markdown] deletable=true editable=true
# <center>
# <span class="algolab-title"> Algolab Exam</span><br/><br/>
# <span style="font-size:20px"> Scientific Programming Module 2</span> <br/>
# <span style="font-size:20px"> Algorithms and Data Structures </span> <br/><br/>
# <span> Thursday 26th, Jan 2017</span><br/><br/>
#
#
# </center>
# + [markdown] deletable=true editable=true
# # Introduction
#
# * Taking part in this exam erases any grade you had before, both lab and theory
# * If you don't ship or you don't pass this lab part, you also lose the theory part.
#
#
# * Log into your computer in _exam mode_, it should start Ubuntu
# * To edit the files, you can use any editor of your choice: _Editra_ seems easy to use, you can find it under _Applications->Programming->Editra_. Others could be _GEdit_ (simpler), or _PyCharm_ (more complex).
#
#
#
# ## Allowed material
#
# There won't be any internet access. You will only be able to access:
#
# * <a href="index.html" target="_blank">Sciprog Algolab worksheets</a>
# * <a href="../montresor/Montresor%20sciprog/cricca.disi.unitn.it/montresor/teaching/scientific-programming/slides/index.html">Alberto Montresor slides</a>
# * <a href="../teso/disi.unitn.it/_teso/courses/sciprog/index.html" target="_blank">Stefano Teso docs</a>
# * Python 2.7 documentation : <a href="../python-docs/html/index.html" target="_blank">html</a>
# <a href="../python-docs/pdf" target="_blank">pdf</a>
# * In particular, <a href="../python-docs/html/library/unittest.html" target="_blank">Unittest docs</a>
# * The course book _Problem Solving with Algorithms and Data Structures using Python_ <a href="../pythonds/index.html" target="_blank">html</a> <a href="../pythonds/ProblemSolvingwithAlgorithmsandDataStructures.pdf" target="_blank">pdf</a>
#
#
# ## Grading
#
#
# * The grade of this lab part will range from 0 to 30. Total grade for the module will be given by the average with the theory part of Alberto Montresor.
# * Correct implementations with the required complexity grant you the full grade.
# * Partial implementations _might_ still give you a few points. If you just can't solve an exercise, try to solve it at least for some subcase (i.e. array of fixed size 2) commenting why you did so.
# * One bonus point can be earned by writing stylish code. You got style if you:
#
# - do not infringe the [Commandments](../algolab/index.html#Commandments)
# - write [pythonic code](http://docs.python-guide.org/en/latest/writing/style)
# - avoid convoluted code like i.e.
#
# ```
# if x > 5:
# return True
# else:
# return False
# ```
#
# when you could write just
#
# ```
# return x > 5
# ```
#
# + deletable=true editable=true
# %%HTML
<p class="algolab-warn">
!!!!!!!!! WARNING !!!!!!!!!
<br/>
<br/>
!!!!!!!!! <b>ONLY</b> IMPLEMENTATIONS OF THE PROVIDED FUNCTION SIGNATURES WILL BE EVALUATED !!!!!!!!! <br/>
</p>
# + [markdown] deletable=true editable=true
#
# For example, if you are given to implement:
#
# ```python
# def cool_fun(x):
# raise Exception("TODO implement me")
# ```
#
# and you ship this code:
#
# ``` python
# def cool_fun_non_working_trial(x):
# # do some absurdity
#
# def cool_fun_a_perfectly_working_trial(x):
# # a super fast, correct and stylish implementation
#
# def cool_fun(x):
# raise Exception("TODO implement me")
# ```
#
# We will assess only the last one, `cool_fun(x)`, and conclude it doesn't work at all :P !!!!!!!
#
# Still, you are allowed to define any extra helper function you might need. If your `cool_fun(x)` implementation calls some other function you defined like `my_helper` here, it is ok:
#
# ```python
#
# def my_helper(y,z):
# # do something useful
#
# def cool_fun(x):
# my_helper(x,5)
#
# # this will get ignored:
# def some_trial(x):
# # do some absurdity
#
# ```
#
# + [markdown] deletable=true editable=true
# ## What to do
#
#
# 1) In <a href="/usr/local/esame" target="_blank">/usr/local/esame</a> you should find a file named `algolab-17-01-26.zip`. Download it and extract it on your desktop. The content should be like this:
#
# ```
# algolab-17-01-26
# |- FIRSTNAME-LASTNAME-ID
# |- exercise1.py
# |- exercise2.py
# |- exercise3.py
#
# ```
#
# 2) Check that this folder also shows up under `/var/exam`.
#
# 3) Rename the `FIRSTNAME-LASTNAME-ID` folder: use your first name, last name and ID number, like `john-doe-432432`
#
# From now on, you will be editing the files in that folder. At the end of the exam, that is what will be evaluated.
#
# 4) Edit the files following the instructions in this worksheet for each exercise.
#
#
# + deletable=true editable=true
# %%HTML
<p class="algolab-warn">
WARNING: <i>DON'T</i> modify function signatures! Just provide the implementation.
</p>
<p class="algolab-warn">
WARNING: <i>DON'T</i> change the existing test methods, just add new ones !!! You can add as many as you want.
</p>
<p class="algolab-warn">
WARNING: <i>DON'T</i> create other files. If you still do it, they won't be evaluated.
</p>
<p class="algolab-important">
IMPORTANT: Pay close attention to the comments of the functions.
</p>
<p class="algolab-important">
IMPORTANT: if you need to print some debugging information, you <i>are allowed</i> to put extra <code>print</code>
statements in the function bodies.
</p>
<p class="algolab-warn">
WARNING: even if <code>print</code> statements are allowed, be careful with prints that might
break your function, i.e. avoid stuff like this: <code> print 1/0 </code>
</p>
# + [markdown] deletable=true editable=true
# Every exercise should take max 25 mins. If it takes longer, leave it and try another exercise.
#
# + deletable=true editable=true
# %%HTML
<p class="algolab-warn">
WARNING: MAKE SURE ALL EXERCISE FILES AT LEAST COMPILE !!! <br/> 10 MINS BEFORE THE END
OF THE EXAM I WILL ASK YOU TO DO A FINAL CLEAN UP OF THE CODE
</p>
# + [markdown] deletable=true editable=true
# # Exercises
# + [markdown] deletable=true editable=true
# ## 1) SwapArray
#
#
# You are given a class `SwapArray` that models an array where the only modification you can do is to swap an element with the successive one.
# + deletable=true editable=true
from exercise1_solution import *
# + [markdown] deletable=true editable=true
# To create a `SwapArray`, just call it passing a python list:
# + deletable=true editable=true
sarr = SwapArray([7,8,6])
print sarr
# + [markdown] deletable=true editable=true
# Then you can query it in $O(1)$ by calling `get()` and `get_last()`
# + deletable=true editable=true
print sarr.get(0)
# + deletable=true editable=true
print sarr.get(1)
# + deletable=true editable=true
print sarr.get_last()
# + [markdown] deletable=true editable=true
# You can know the size in $O(1)$ with `size()` method:
# + deletable=true editable=true
print sarr.size()
# + [markdown] deletable=true editable=true
# As we said, the only modification you can do to the internal array is to call `swap_next` method:
#
# ```python
#
# def swap_next(self, i):
# """ Swaps the elements at indices i and i + 1
#
# If the index is negative, or greater than or equal to the last index, raises
# an IndexError
#
# """
# ```
#
# For example:
# + deletable=true editable=true
sarr = SwapArray([7,8,6,3])
print sarr
# + deletable=true editable=true
sarr.swap_next(2)
print sarr
# + deletable=true editable=true
sarr.swap_next(0)
print sarr
# + [markdown] deletable=true editable=true
# Now start editing the file `exercise1.py`:
#
#
# ### 1.0) test swap
#
# To check your environment is working fine, try to run the tests for the sole `swap` method. You don't need to implement it, the tests are in `SwapTest` class and should all pass:
#
# **Notice that _`exercise1`_** is followed by a dot and test class name: _`.SwapTest`_
#
# ```bash
#
# python -m unittest exercise1.SwapTest
#
# ```
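The command above selects and runs a single test class from a module. As a minimal, self-contained illustration of that pattern (the class below is invented for this sketch, not one of the exam's test classes), here is a `unittest.TestCase` executed programmatically, which is what `python -m unittest` does under the hood:

```python
import unittest

class SwapDemoTest(unittest.TestCase):
    # A made-up test class: `python -m unittest mymodule.SwapDemoTest`
    # would select and run exactly this class from `mymodule`.
    def test_swap_adjacent(self):
        arr = [7, 8, 6]
        arr[0], arr[1] = arr[1], arr[0]  # swap elements at indices 0 and 1
        self.assertEqual(arr, [8, 7, 6])

# Load and run just this class, as the CLI invocation would:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SwapDemoTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```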
#
# ### 1.1) is_sorted
#
# Implement the `is_sorted` function, which is a function *external* to the class `SwapArray`:
#
# ```python
#
# def is_sorted(sarr):
# """ Returns True if the provided SwapArray sarr is sorted, False otherwise
#
# NOTE: Here you are a user of SwapArray, so you *MUST NOT* access
# directly the field _arr.
# """
# raise Exception("TODO IMPLEMENT ME !")
# ```
#
#
# Once done, running this will run only the tests in `IsSortedTest` class and hopefully they will pass.
#
# **Notice that _`exercise1`_** is followed by a dot and test class name: _`.IsSortedTest`_
#
# ```bash
#
# python -m unittest exercise1.IsSortedTest
#
# ```
#
# **Example usage:**
#
# + deletable=true editable=true
print is_sorted(SwapArray([8,5,6]))
# + deletable=true editable=true
print is_sorted(SwapArray([5,6,6,8]))
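One possible shape for `is_sorted` (a sketch, not the official solution): a single left-to-right scan using only the public `get`/`size` interface, which is $O(n)$. Since the real `SwapArray` is not defined in this cell, a minimal stand-in with the same interface is included so the sketch is runnable:

```python
class SwapArrayStandIn:
    """Minimal stand-in exposing only the get/size part of SwapArray's interface."""
    def __init__(self, python_list):
        self._arr = list(python_list)

    def get(self, i):
        return self._arr[i]

    def size(self):
        return len(self._arr)

def is_sorted(sarr):
    # One left-to-right scan over the public interface: O(n) time.
    for i in range(sarr.size() - 1):
        if sarr.get(i) > sarr.get(i + 1):
            return False
    return True
```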
# + [markdown] deletable=true editable=true
#
#
# ### 1.2) max_to_right
#
# Implement the `max_to_right` function, which is a function *external* to the class `SwapArray`. There are two ways to implement it; try to minimize the number of reads from the SwapArray.
#
#
# ```python
# def max_to_right(sarr):
# """ Modifies the provided SwapArray sarr so that its biggest element is
# moved to the last index. The order in which the other elements will be
# after a call to this function is left unspecified (so it could be any).
#
# NOTE: Here you are a user of SwapArray, so you *MUST NOT* access
# directly the field _arr. To do changes, you can only use
# the method swap_next(self, i).
# """
# raise Exception("TODO IMPLEMENT ME !")
# ```
#
# **Testing:** `python -m unittest exercise1.MaxToRightTest`
#
# **Example usage:**
# + deletable=true editable=true
sarr = SwapArray([8, 7, 6])
print sarr
# + deletable=true editable=true
max_to_right(sarr)
print sarr
# + deletable=true editable=true
sarr = SwapArray([6,8,6])
print sarr
# + deletable=true editable=true
max_to_right(sarr)
print sarr
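One way to satisfy the contract is a single bubble pass, which carries the running maximum to the last index with at most n-1 swaps and about 2n reads. This is a sketch, not the official solution, and since the real `SwapArray` is not defined in this cell, a minimal stand-in with the same public interface is included:

```python
class SwapArrayStandIn:
    """Minimal stand-in exposing the public SwapArray interface used below."""
    def __init__(self, python_list):
        self._arr = list(python_list)

    def get(self, i):
        return self._arr[i]

    def get_last(self):
        return self._arr[-1]

    def size(self):
        return len(self._arr)

    def swap_next(self, i):
        if i < 0 or i >= len(self._arr) - 1:
            raise IndexError(i)
        self._arr[i], self._arr[i + 1] = self._arr[i + 1], self._arr[i]

def max_to_right(sarr):
    # Single bubble pass: whenever the current element exceeds the next one,
    # swap them, so the running maximum is carried to the last index.
    for i in range(sarr.size() - 1):
        if sarr.get(i) > sarr.get(i + 1):
            sarr.swap_next(i)
```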
# + [markdown] deletable=true editable=true
# ## 2) DiGraph
#
# Now you are going to build some `DiGraph`, by defining functions _external_ to class `DiGraph`.
# + deletable=true editable=true
# %%HTML
<p class="algolab-warn">
WARNING: To build the graphs, just use the methods you find inside <code>DiGraph</code> class, like <code>add_vertex</code>,
<code>add_edge</code>, etc.
</p>
# + [markdown] deletable=true editable=true
# Start editing file `exercise2.py`
# + deletable=true editable=true
from exercise2_solution import *
# + [markdown] deletable=true editable=true
# ### 2.1) odd_line
#
# Implement the function `odd_line`. Note the function is defined *outside* `DiGraph` class.
#
# ```python
#
# def odd_line(n):
# """ Returns a DiGraph with n vertices, arranged like a line of odd numbers
#
# Each vertex is an odd number i, for 1 <= i < 2n. For example, for
# n=4 vertices are arranged like this:
#
# 1 -> 3 -> 5 -> 7
#
# For n = 0, return the empty graph
#
# """
# raise Exception("TODO IMPLEMENT ME !")
# ```
#
# **Testing:** `python -m unittest exercise2.OddLineTest`
#
# **Example usage:**
#
# + deletable=true editable=true
print odd_line(0)
# + deletable=true editable=true
print odd_line(1)
# + deletable=true editable=true
print odd_line(2)
# + deletable=true editable=true
print odd_line(3)
# + deletable=true editable=true
print odd_line(4)
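Since the `DiGraph` class is not importable in this worksheet, here is a sketch of the same construction over a plain adjacency dict; with the real class, creating a key would become an `add_vertex` call and each append an `add_edge` call:

```python
def odd_line_sketch(n):
    """Adjacency-dict sketch of odd_line: vertices 1, 3, ..., 2n-1,
    with an edge from each odd number to the next one."""
    verts = [2 * i + 1 for i in range(n)]      # 1, 3, 5, ..., 2n-1
    graph = {v: [] for v in verts}             # one add_vertex per vertex
    for a, b in zip(verts, verts[1:]):
        graph[a].append(b)                     # add_edge(a, b)
    return graph
```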
# + [markdown] deletable=true editable=true
# ### 2.2) even_line
#
# Implement the function `even_line`. Note the function is defined *outside* `DiGraph` class.
#
#
# ```python
# def even_line(n):
# """ Returns a DiGraph with n vertices, arranged like a line of even numbers
#
# Each vertex is an even number i, for 2 <= i <= 2n. For example, for
# n=4 vertices are arranged like this:
#
# 2 <- 4 <- 6 <- 8
#
# For n = 0, return the empty graph
#
# """
#
# raise Exception("TODO IMPLEMENT ME !")
# ```
#
# **Testing:** `python -m unittest exercise2.EvenLineTest`
#
# **Example usage:**
# + deletable=true editable=true
print even_line(0)
# + deletable=true editable=true
print even_line(1)
# + deletable=true editable=true
print even_line(2)
# + deletable=true editable=true
print even_line(3)
# + deletable=true editable=true
print even_line(4)
# + [markdown] deletable=true editable=true
# ### 2.3) quads
#
# Implement the quads function. Note the function is defined *outside* `DiGraph` class.
#
# ```python
#
# def quads(n):
# """ Returns a DiGraph with 2n vertices, arranged like a strip of quads.
#
# Each vertex is a number i, 1 <= i <= 2n.
# For example, for n = 4, vertices are arranged like this:
#
# 1 -> 3 -> 5 -> 7
# ^ | ^ |
# | ; | ;
# 2 <- 4 <- 6 <- 8
#
# where
#
# ^ |
# | represents an upward arrow, while ; represents a downward arrow
#
# """
# raise Exception("TODO IMPLEMENT ME !")
#
# ```
#
# **Testing:** `python -m unittest exercise2.QuadsTest`
#
#
# **Example usage:**
# + deletable=true editable=true
print quads(0)
# + deletable=true editable=true
print quads(1)
# + deletable=true editable=true
print quads(2)
# + deletable=true editable=true
print quads(3)
# + deletable=true editable=true
print quads(4)
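A sketch of the edge pattern over a plain adjacency dict (the real solution would use `DiGraph`'s `add_vertex`/`add_edge` instead): the top row holds the odd numbers with rightward edges, the bottom row the even numbers with leftward edges, and the vertical edges alternate between upward and downward as in the diagram above:

```python
def quads_sketch(n):
    """Adjacency-dict sketch of quads: 2n vertices 1..2n."""
    graph = {v: [] for v in range(1, 2 * n + 1)}
    for k in range(n):
        top, bottom = 2 * k + 1, 2 * k + 2
        if k + 1 < n:                         # horizontal edges
            graph[top].append(top + 2)        # odd row: left to right
            graph[bottom + 2].append(bottom)  # even row: right to left
        if k % 2 == 0:                        # vertical edges alternate
            graph[bottom].append(top)         # upward:   2 -> 1, 6 -> 5, ...
        else:
            graph[top].append(bottom)         # downward: 3 -> 4, 7 -> 8, ...
    return graph
```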
# + [markdown] deletable=true editable=true
# ## 3) GenericTree
#
#
# In this exercise you will deal with family matters, using the `GenericTree` we saw during labs:
#
#
# <img src="img/generic-tree-labeled.png"/>
#
# Now start editing the file `exercise3.py`:
#
#
# ### 3.1) grandchildren
#
#
# Implement the `grandchildren` method:
#
# ```python
# def grandchildren(self):
# """ Returns a python list containing the data of all the grandchildren of this
# node.
#
# - Data must be in left-to-right order in the tree's horizontal representation
# (or top-to-bottom in the vertical representation).
# - If there are no grandchildren, returns an empty array.
#
# For example, for this tree:
#
# a
# |-b
# | |-c
# | \-d
# | \-g
# |-e
# \-f
# \-h
#
# Returns ['c','d','h']
# """
# raise Exception("TODO IMPLEMENT ME !")
# ```
#
# **Testing:** `python -m unittest exercise3.GrandChildrenTest`
#
# **Usage examples:**
#
# + deletable=true editable=true
from exercise3_solution import *
# + deletable=true editable=true
ta = gt('a', gt('b', gt('c')))
print ta
# + deletable=true editable=true
print ta.grandchildren()
# + deletable=true editable=true
ta = gt('a', gt('b'))
print ta
# + deletable=true editable=true
print ta.grandchildren()
# + deletable=true editable=true
ta = gt('a', gt('b', gt('c'), gt('d')), gt('e', gt('f')) )
print ta
# + deletable=true editable=true
print ta.grandchildren()
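The underlying idea is just "the children of my children, left to right". The course `GenericTree` is not importable here, so the sketch below assumes a hypothetical node type with `data()` and `children()` accessors (the real class stores links differently, but the traversal is the same):

```python
class NodeStandIn:
    """Hypothetical tree node with a data()/children() interface."""
    def __init__(self, data, *children):
        self._data = data
        self._children = list(children)

    def data(self):
        return self._data

    def children(self):
        return self._children

    def grandchildren(self):
        # Children of children, collected in left-to-right order.
        return [gc.data()
                for child in self.children()
                for gc in child.children()]
```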
# + [markdown] deletable=true editable=true
# ### 3.2) uncles
#
# Implement the `uncles` method:
#
#
# ```python
# def uncles(self):
# """ Returns a python list containing the data of all the uncles of this
# node (that is, *all* the siblings of its parent).
#
# NOTE: it also returns the father's siblings which come *BEFORE* the father !!
#
# - Data must be in left-to-right order in the tree's horizontal representation
# (or top-to-bottom in the vertical representation).
# - If there are no uncles, returns an empty array.
#
# For example, for this tree:
#
# a
# |-b
# | |-c
# | \-d
# | \-g
# |-e
# \-h
# \-f
#
#
# calling this method on 'f' returns ['b','e']
# """
# ```
#
# **Testing:** `python -m unittest exercise3.UnclesTest`
#
# **Example usages:**
# + deletable=true editable=true
td = gt('d')
tb = gt('b')
ta = gt('a', tb, gt('c', td), gt('e'))
print ta
# + deletable=true editable=true
print td.uncles()
# + deletable=true editable=true
print tb.uncles()
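The logic reduces to "all children of my grandparent except my parent, in left-to-right order". As above, this sketch uses a hypothetical stand-in with explicit parent links (the real `GenericTree` reaches the parent differently, but the rule is identical):

```python
class NodeStandIn:
    """Hypothetical tree node with parent links, for illustration only."""
    def __init__(self, data, *children):
        self._data = data
        self._parent = None
        self._children = list(children)
        for c in self._children:
            c._parent = self

    def data(self):
        return self._data

    def uncles(self):
        # All siblings of the parent, both before and after it,
        # in left-to-right order.
        parent = self._parent
        if parent is None or parent._parent is None:
            return []
        return [sib.data()
                for sib in parent._parent._children
                if sib is not parent]
```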
# + [markdown] deletable=true editable=true
#
#
| exam-2017-01-26.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Text 2: Latent semantic indexing
# **Internet Analytics - Lab 4**
#
# ---
#
# **Group:** *Your group letter.*
#
# **Names:**
#
# * *Name 1*
# * *Name 2*
# * *Name 3*
#
# ---
#
# #### Instructions
#
# *This is a template for part 2 of the lab. Clearly write your answers, comments and interpretations in Markdown cells. Don't forget that you can add $\LaTeX$ equations in these cells. Feel free to add or remove any cell.*
#
# *Please properly comment your code. Code readability will be considered for grading. To avoid long cells of codes in the notebook, you can also embed long python functions and classes in a separate module. Don’t forget to hand in your module if that is the case. In multiple exercises, you are required to come up with your own method to solve various problems. Be creative and clearly motivate and explain your methods. Creativity and clarity will be considered for grading.*
import pickle
import numpy as np
from scipy.sparse.linalg import svds
# ## Exercise 4.4: Latent semantic indexing
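As a runnable sketch of the LSI step (the toy matrix below is invented; the lab's real term-document matrix comes from the processed corpus): a term-document matrix is factored with an SVD and truncated to k concepts. `scipy.sparse.linalg.svds` computes only the top-k factors of a sparse matrix; on a tiny dense example, NumPy's full SVD truncated to k shows the same structure.

```python
import numpy as np

# Toy 4-term x 3-document count matrix (rows: terms, columns: documents).
X = np.array([[2., 0., 1.],
              [1., 0., 0.],
              [0., 3., 1.],
              [0., 2., 0.]])

k = 2  # number of latent concepts to keep
U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]  # svds would return only these

# Each document becomes a k-dimensional vector in concept space,
# and U_k @ diag(s_k) @ Vt_k is the best rank-k approximation of X.
doc_concepts = np.diag(s_k) @ Vt_k
X_approx = U_k @ np.diag(s_k) @ Vt_k
```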
# ## Exercise 4.5: Topic extraction
# ## Exercise 4.6: Document similarity search in concept-space
# ## Exercise 4.7: Document-document similarity
| ix-lab4/ix-lab4/.ipynb_checkpoints/2-lsi-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="EREOjdlq5pJX"
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.ticker import PercentFormatter
# + id="DJx9lYrR5wSp"
plt.style.use('bmh')
df = pd.read_csv('https://raw.githubusercontent.com/irvin-s/in-1166-smd/main/data/crx.data', sep=',')
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="AOHp8CC55-Cz" outputId="a624aeb1-2baf-4326-9ade-c16ce7731991"
# Histogram 1: y-axis shown as a percentage of the 690 rows
plt.hist(df["a16"])
plt.gca().yaxis.set_major_formatter(PercentFormatter(690))
# + colab={"base_uri": "https://localhost:8080/"} id="nKXcmtXA9aeZ" outputId="0518ce20-124f-4c68-e9a7-bbdff0dc20e8"
# Detailing the percentages
df.a16.value_counts(normalize=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 316} id="JqqHOIzp6F2F" outputId="ec5af677-5b68-4108-aa7b-679bb95104c3"
# Histogram 2
plt.hist(df["a16"])
| missao_2/src/smd_eda.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Requirements
#
# ```
# pip install seaborn jupyter cached-property
# ```
# +
# %matplotlib inline
import seaborn as sns
import matplotlib
import pandas as pd
import matplotlib.pyplot as plt
from cached_property import cached_property
from pathlib import Path
import os
import dateutil
import numpy as np
plt.rcParams['figure.figsize'] = [15, 15]
sns.set()
# Handle date time conversions between pandas and matplotlib
#from pandas.plotting import register_matplotlib_converters
#register_matplotlib_converters()
# +
class Universe:
def __init__(self, data):
self.data = data
self.excluded_columns = set([
'date',
'Recovered',
'Diamond Princess'
])
@staticmethod
def _date_columns(df):
return sorted(
[col for col in df.columns.unique() if col[0].isnumeric()],
key=lambda col: dateutil.parser.parse(col)
)
@staticmethod
def _date_columns_as_dates(df):
return [
dateutil.parser.parse(col) for col in Universe._date_columns(df)
]
@cached_property
def _date_formatter(self):
return matplotlib.dates.DateFormatter('%m.%d')
@cached_property
def states(self):
states_cases = sorted(self.cases_by_state.columns.unique())
states_deaths = sorted(self.deaths_by_state.columns.unique())
assert states_cases == states_deaths
return sorted(
list(
set(states_cases) - self.excluded_columns
)
)
@cached_property
def countries(self):
countries_cases = sorted(self.cases_by_country.columns.unique())
countries_deaths = sorted(self.deaths_by_country.columns.unique())
assert countries_cases == countries_deaths
return sorted(
list(
set(countries_cases) - self.excluded_columns
)
)
@staticmethod
def _pivot(df, index):
df = pd.pivot_table(
df,
index=index,
values=Universe._date_columns(df),
aggfunc=np.sum
)
df = df[Universe._date_columns(df)]
df = df.rename(columns={
str_date: date_date
for str_date, date_date
in zip(
Universe._date_columns(df), Universe._date_columns_as_dates(df)
)
})
df = df.transpose()
df = df.reset_index()
df = df.rename(columns={'index': 'date'})
return df
@staticmethod
def _plot_data_total(df, col):
return {
'x': df.date,
'y': df[col]
}
@staticmethod
def _plot_data_daily(df, col):
return {
'x': df.date[1:],
'y': np.diff(df[col])
}
@cached_property
def cases_by_state(self):
return self._pivot(self.data['cases_us'], 'Province_State')
@cached_property
def deaths_by_state(self):
return self._pivot(self.data['deaths_us'], 'Province_State')
@cached_property
def cases_by_country(self):
return self._pivot(self.data['cases_global'], 'Country/Region')
@cached_property
def deaths_by_country(self):
return self._pivot(self.data['deaths_global'], 'Country/Region')
def plot_total_deaths_by_state(self, state):
self._plot_total_deaths(self.deaths_by_state, state)
def plot_daily_deaths_by_state(self, state):
self._plot_daily_deaths(self.deaths_by_state, state)
def plot_total_cases_by_state(self, state):
self._plot_total_cases(self.cases_by_state, state)
def plot_daily_cases_by_state(self, state):
self._plot_daily_cases(self.cases_by_state, state)
def plot_total_deaths_by_country(self, country):
self._plot_total_deaths(self.deaths_by_country, country)
def plot_daily_deaths_by_country(self, country):
self._plot_daily_deaths(self.deaths_by_country, country)
def plot_total_cases_by_country(self, country):
self._plot_total_cases(self.cases_by_country, country)
def plot_daily_cases_by_country(self, country):
self._plot_daily_cases(self.cases_by_country, country)
def _plot_daily_deaths(self, df, col):
ax = sns.lineplot(**self._plot_data_daily(df, col))
ax.set_title(f'Daily Deaths - {col}')
ax.set_ylabel('Deaths')
ax.set_xlabel('')
ax.xaxis.set_major_formatter(self._date_formatter)
def _plot_total_deaths(self, df, col):
ax = sns.lineplot(**self._plot_data_total(df, col))
ax.set_title(f'Daily Deaths - {col}')
ax.set_ylabel('Deaths')
ax.set_xlabel('')
ax.xaxis.set_major_formatter(self._date_formatter)
def _plot_daily_cases(self, df, col):
ax = sns.lineplot(**self._plot_data_daily(df, col))
ax.set_title(f'Daily Cases - {col}')
ax.set_ylabel('Cases')
ax.set_xlabel('')
ax.xaxis.set_major_formatter(self._date_formatter)
def _plot_total_cases(self, df, col):
ax = sns.lineplot(**self._plot_data_total(df, col))
ax.set_title(f'Total Cases - {col}')
ax.set_ylabel('Cases')
ax.set_xlabel('')
ax.xaxis.set_major_formatter(self._date_formatter)
@classmethod
def from_repo(cls):
repo_dir = Path(os.getcwd(), 'COVID-19')
if not repo_dir.exists():
os.system('git clone https://github.com/CSSEGISandData/COVID-19.git')
else:
os.system('cd COVID-19;git pull origin master')
time_series_dir = Path('COVID-19', 'csse_covid_19_data', 'csse_covid_19_time_series')
deaths_us_csv = Path(time_series_dir, 'time_series_covid19_deaths_US.csv')
cases_us_csv = Path(time_series_dir, 'time_series_covid19_confirmed_US.csv')
deaths_global_csv = Path(time_series_dir, 'time_series_covid19_deaths_global.csv')
cases_global_csv = Path(time_series_dir, 'time_series_covid19_confirmed_global.csv')
return cls({
'cases_us': pd.read_csv(cases_us_csv),
'deaths_us': pd.read_csv(deaths_us_csv),
'cases_global': pd.read_csv(cases_global_csv),
'deaths_global': pd.read_csv(deaths_global_csv)
})
universe = Universe.from_repo()
# -
universe.plot_daily_deaths_by_state('California')
universe.plot_total_cases_by_country('Italy')
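The `_plot_data_daily` helper above converts the cumulative series in each column into daily counts with `np.diff`, dropping the first date to match. A tiny standalone example (the numbers are made up) of that conversion:

```python
import numpy as np

cumulative = np.array([0, 3, 8, 8, 15])   # running totals per day
daily = np.diff(cumulative)               # consecutive differences
# daily has one fewer entry than cumulative, hence df.date[1:] above
```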
| Corona.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import torch
import torch.nn as nn
from torch.functional import F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import PIL
# %config InlineBackend.figure_format = 'svg'
# -
use_cuda = True
device = torch.device('cuda:0' if use_cuda else 'cpu')
x = torch.Tensor([0]).cuda(device)
# # 1. Load classify-leaves
# +
class classifyLeaves(torch.utils.data.Dataset):
def __init__(self, path = 'data/classify-leaves'):
super(classifyLeaves, self).__init__()
self.train_csv = pd.read_csv(path + '/train.csv').to_numpy()
self.test_csv = pd.read_csv(path + '/test.csv').to_numpy()
self.cls = np.array(sorted(list(set(self.train_csv[:,1]))))
self.cls_num = len(self.cls)
self.cls2idx = {self.cls[i]:i for i in range(self.cls_num)}
self.idx2cls = {i:self.cls[i] for i in range(self.cls_num)}
self.path = path
import os
if 'data.pt' in os.listdir(path):
self.train_data, self.train_label, self.test_data, self.test_name = torch.load(path + '/data.pt')
else:
self.loadData()
def loadData(self):
self.train_data = []
self.train_label = []
self.test_data = []
self.test_name = []
i = 0
l = len(self.train_csv)
for name,clss in self.train_csv:
i += 1
print('\rLoading train data: [{i}/{l}] '.format(i=i,l=l),end='')
self.train_data.append(torchvision.io.read_image(self.path + '/' + name)/255)
self.train_label.append(clss)
print(' Success!')
i = 0
l = len(self.test_csv)
for name, in self.test_csv:
i += 1
print('\rLoading test data: [{i}/{l}] '.format(i=i,l=l),end='')
self.test_data.append(torchvision.io.read_image(self.path + '/' + name)/255)
self.test_name.append(name)
print(' Success!')
torch.save([self.train_data, self.train_label, self.test_data, self.test_name], self.path + '/data.pt')
def __getitem__(self, index):
return self.train_data[index], self.cls2idx[self.train_label[index]]
def __len__(self):
return len(self.train_csv)
# -
path = 'data/classify-leaves'
dset = classifyLeaves()
plt.imshow(dset.train_data[8].permute(1,2,0).numpy())
dataloader = torch.utils.data.DataLoader(dset, batch_size = 64, shuffle = True)
class BasicBlock(nn.Module):
def __init__(self, input_channels, num_channels, stride = 1, downsample = None):
super(BasicBlock, self).__init__()
self.conv1 = nn.Conv2d(input_channels, num_channels, 3, stride = 1, padding = 1)
self.conv2 = nn.Conv2d(num_channels, num_channels, 3, stride = stride, padding = 1)
self.conv3 = nn.Conv2d(input_channels, num_channels, 1, stride = stride)
self.bn1 = nn.BatchNorm2d(num_channels)
self.bn2 = nn.BatchNorm2d(num_channels)
self.relu = nn.ReLU(inplace = True)
def forward(self, x):
out = self.conv1(x)
out = self.bn1(out)
out = F.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = F.relu(out)
x = self.conv3(x)
Y = F.relu(out + x)
return Y
def Resnet_block(input_channels, num_channels, num_layers):
blk = []
for i in range(num_layers):
blk.append(BasicBlock(input_channels, num_channels, stride = 2))
input_channels = num_channels
return blk
class Resnet_leaves(nn.Module):
def __init__(self, input_channels, num_channels):
super(Resnet_leaves, self).__init__()
self.b1 = nn.Sequential(
nn.Conv2d(input_channels, 64, 3, 2, 1),
nn.BatchNorm2d(64),
nn.ReLU(inplace = True),
)
self.b2 = nn.Sequential(*Resnet_block(64,64,2))
self.b3 = nn.Sequential(*Resnet_block(64,128,2))
self.b4 = nn.Sequential(*Resnet_block(128,256,2))
self.b5 = nn.Sequential(*Resnet_block(256,512,2))
self.avgpool = nn.AdaptiveAvgPool2d(1)
self.flatten = nn.Flatten()
self.linear = nn.Linear(512, num_channels)
self.softmax = nn.Softmax(dim = 1)
def forward(self, x):
x = self.b1(x)
x = self.b2(x)
x = self.b3(x)
x = self.b4(x)
x = self.b5(x)
x = self.avgpool(x)
x = self.flatten(x)
x = self.linear(x)
x = self.softmax(x)
return x
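Note that `Resnet_block` above passes `stride=2` to every `BasicBlock`, not just the first block of each stage as in the standard ResNet, so the feature map is halved nine times in total (once in `b1`, then once per block in `b2`–`b5`). A quick size check in plain Python (assuming a hypothetical 224×224 input; no torch needed) shows what reaches the average pool:

```python
def conv_out(size, kernel, stride, padding):
    # Spatial output size of a convolution: floor((n + 2p - k) / s) + 1.
    return (size + 2 * padding - kernel) // stride + 1

size = conv_out(224, kernel=3, stride=2, padding=1)  # b1: 224 -> 112
for _ in range(8):  # b2..b5: two stride-2 BasicBlocks per stage
    size = conv_out(size, kernel=3, stride=2, padding=1)
# 112 -> 56 -> 28 -> 14 -> 7 -> 4 -> 2 -> 1 -> 1
```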
model = Resnet_leaves(3,176)
model.to(device)
x, y = next(iter(dataloader))
model(x.cuda(device)).shape
optimizer = optim.Adam(model.parameters(), lr = 0.0001)
loss_func = nn.CrossEntropyLoss()
model.train()
for epoch in range(0):
for step,(x,y) in enumerate(dataloader):
x = x.cuda(device)
y = y.cuda(device)
y_ = model(x)
loss = loss_func(y_,y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
acc = torch.sum(torch.argmax(y_, 1) == y)/y.shape[0]
print('\r epoch:{epoch:5}--step:{step:7}--loss:{loss:.10}--acc:{acc:.5}'.format(epoch=epoch, step=step, loss=loss,acc = acc),end = '')
# ## It's too hard to train from scratch
# # 2. What about a pre-trained model?
import timm
class leavesNet(nn.Module):
def __init__(self, out_channels):
super(leavesNet, self).__init__()
self.resnet50d = timm.models.resnet50d(pretrained=True)
in_features = self.resnet50d.fc.in_features
self.resnet50d.fc = nn.Linear(in_features, out_channels)
def forward(self, x):
x = self.resnet50d(x)
return x
# + tags=[]
model = leavesNet(176)
model.to(device)
model(x.cuda(device)).shape
# -
optimizer = optim.AdamW(model.parameters(), lr = 0.001)
loss_func = nn.CrossEntropyLoss().to(device)
model.train()
for epoch in range(5):
for step,(x,y) in enumerate(dataloader):
x = x.cuda(device)
y = y.cuda(device)
y_ = model(x)
loss = loss_func(y_,y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
acc = torch.sum(torch.argmax(y_, 1) == y)/y.shape[0]
print('\r epoch:{epoch:5}--step:{step:7}--loss:{loss:.10}--acc:{acc:.5}'.format(epoch=epoch, step=step, loss=loss,acc = acc),end = '')
# +
# torch.save(model, 'model/leavesNet_model.pt')
# -
test_dataloader = torch.utils.data.DataLoader(list(zip(dset.test_data,dset.test_name)),batch_size = 64)
model.eval()
result = []
for step,(x,y) in enumerate(test_dataloader):
x = x.cuda(device)
y_ = model(x)
for file, label in zip(y,y_.argmax(1)):
result.append([file, dset.idx2cls[int(label)]])
print('\r step:{step:7}'.format(step=step,),end = '')
result = np.array(result)
result
result_pd = pd.DataFrame(result, columns= ['image', 'label'])
result_pd.to_csv(path + '/submission.csv', index=False)
| Image Classification/leavesClassification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''DS'': pipenv)'
# name: python37664bitdspipenv4c679a2e85cb436f8d784862c1efd959
# ---
# + id="aEn5LADWFkaS" colab_type="code" colab={}
import pandas as pd
# + id="UCOteYvg3M--" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="36f0ef63-f960-4058-ca04-998c52104808"
leafly = pd.read_json('../data/leafly.json')
leafly.shape
# -
# # Data Wrangling
# + id="TRQ2EwHXtuQB" colab_type="code" colab={}
# Drop nans from leafly
leafly = leafly.dropna()
leafly.shape
# + id="q9zer66JQPOh" colab_type="code" colab={}
# Combine feelings into one column for one hot encoding
leafly['feelings'] = leafly['feeling_1'] + ',' + leafly['feeling_2'] + ',' + leafly['feeling_3'] + ',' + leafly['feeling_4'] + ',' + leafly['feeling_5']
# + id="_WLrGw91Q6Er" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="d64de616-bb3a-4794-cd1d-c79a37ffa1b0"
# Check feelings
leafly['feelings']
# + id="5N7MMpudR5bx" colab_type="code" colab={}
# Combine helps into one column for one hot encoding
leafly['helps'] = leafly['helps_1'] + ',' + leafly['helps_2'] + ',' + leafly['helps_3'] + ',' + leafly['helps_4'] + ',' + leafly['helps_5']
# + id="r_1pE_5ESa2e" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="90c61fb9-ccf6-454f-858f-667451af4783"
# Check helps
leafly['helps']
# + id="OA5NIAh4SkHi" colab_type="code" colab={}
# Combine feelings and helps columnbs
leafly["feelings_helps"] = leafly['feelings'] + ',' + leafly['helps']
# + id="KnD1S524S6VS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="a2eef28e-46ae-404d-cd42-14e6df9ff30c"
# Check feelings_helps
leafly['feelings_helps']
# + id="uZJDJwGJzSpK" colab_type="code" colab={}
# Remove commas and add spaces
leafly['feelings'] = leafly['feelings'].str.replace(',', ' ')
leafly['helps']= leafly['helps'].str.replace(',',' ')
# + id="2CKPhbmKzSxe" colab_type="code" colab={}
# Combine all text columns into one
leafly['feelings_helps_description'] = leafly["feelings"] + ' ' + leafly["helps"] + ' ' + leafly['description']
# + id="XFf3eouszS5S" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 581} outputId="a1b04bea-9174-4ec9-e75d-496cb1ba7004"
# Verify new column
leafly['feelings_helps_description']
# + id="tFjld4oX2ioV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="73ed4aa7-5d7d-4c1f-e1ce-9bcdc93d492a" tags=[]
import pickle
# Load nearest neighbors model
model_pkl = open('../pickles/nn_model.pkl', 'rb')
NN_model = pickle.load(model_pkl)
print ("Loaded model :: ", NN_model)
# load tfidf transformer
tfidf_pkl = open('../pickles/tfidf.pkl' , 'rb')
tfidf_model = pickle.load(tfidf_pkl)
print ("Loaded model :: ", tfidf_model) # print to verify
# + id="0q78RNlW4aXZ" colab_type="code" colab={}
# Demo Recommend Function from DS
import json
def recommend(user_input):
temp_df = NN_model.kneighbors(tfidf_model.transform([user_input]).todense())[1]
for i in range(4):
info = leafly.iloc[temp_df[0][i]]['strain']
info_aka = leafly.iloc[temp_df[0][i]]['aka']
info_type = leafly.iloc[temp_df[0][i]]['type']
info_rating = leafly.iloc[temp_df[0][i]]['rating']
info_num_reviews= leafly.iloc[temp_df[0][i]]['num_reviews']
info_feelings = leafly.iloc[temp_df[0][i]]['feelings']
info_helps = leafly.iloc[temp_df[0][i]]['helps']
info_description = leafly.iloc[temp_df[0][i]]['description']
print(json.dumps(info))
print(json.dumps(info_aka))
print(json.dumps(info_type))
print(json.dumps(info_rating))
print(json.dumps(info_num_reviews))
print(json.dumps(info_feelings))
print(json.dumps(info_helps))
print(json.dumps(info_description))
# + id="cQEwULSV4ahL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 581} outputId="6a67a316-4a57-4081-8607-e8d4f5a72442" tags=[]
recommend('Relaxed Happy Euphoric Uplifted Hungry')
# -
# # Prediction Accuracy Utility
# Accuracy of the predicted strain(s) is calculated as the number of expected 'feelings' and 'helps' terms accurately predicted, divided by the total number of expected 'feelings' and 'helps' terms specified.
def RecommendAccuracy(user_input):
temp_df = NN_model.kneighbors(tfidf_model.transform([user_input]).todense())[1]
for i in temp_df[0]:
expect_words = user_input.split(" ")
actual_words = leafly['feelings'].iloc[i].split(" ") + leafly['helps'].iloc[i].split(" ")
intersection = [word for word in expect_words if word in actual_words]
accuracy = len(intersection) / len(expect_words) * 100
print(f'Accuracy {i} = {accuracy}')
RecommendAccuracy('Relaxed Happy Euphoric Uplifted Hungry')
# # Pickle minimal data needed for output
# +
min_data = leafly[['strain', 'type', 'feeling_1', 'feeling_2', 'feeling_3', 'feeling_4','feeling_5', 'helps_1', 'helps_2', 'helps_3', 'helps_4', 'helps_5', 'description']].to_dict('records')
with open('../pickles/min_data.pkl', 'wb') as data_pkl:
pickle.dump(min_data, data_pkl)
# -
# # Code for use in api
#
# need to adjust paths
# +
import pickle
with open('../pickles/nn_model.pkl', 'rb') as nn_pkl:
model = pickle.load(nn_pkl)
with open('../pickles/tfidf.pkl', 'rb') as tfidf_pkl:
tfidf = pickle.load(tfidf_pkl)
with open('../pickles/min_data.pkl', 'rb') as data_pkl:
data = pickle.load(data_pkl)
def get_recommendations(user_input:str, num: int = 4):
neighbors = model.kneighbors(
tfidf.transform([user_input]).todense(),
n_neighbors=num, return_distance=False
)
results = []
for index in neighbors[0]:
results.append({
'strain': data[index]['strain'],
'strain_type': data[index]['type'],
'description': data[index]['description'],
'effects': [data[index][f'feeling_{i}'] for i in range(1,6)],
'helps': [data[index][f'helps_{i}'] for i in range(1,6)],
})
return results
# -
get_recommendations('Relaxed Happy Euphoric Uplifted Hungry')
| notebooks/pickle_to_app.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
# %config IPython.matplotlib.backend = "retina"
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams["figure.dpi"] = 150
rcParams["savefig.dpi"] = 150
import tensorflow as tf
from vaneska.models import Gaussian
from vaneska.photometry import PSFPhotometry
from tqdm import tqdm
from lightkurve import KeplerTargetPixelFile, LightCurve
tpf = KeplerTargetPixelFile.from_archive('kepler-10b', quarter=5)
tpf.plot(scale=None, bkg=True, cmap='coolwarm')
gaussian = Gaussian(shape=tpf.shape[1:], col_ref=tpf.column, row_ref=tpf.row)
xc, yc = tpf.centroids()
fluxes = [tf.Variable(initial_value=np.nansum(tpf.flux, axis=(1, 2))[i], dtype=tf.float64)
for i in range(10)]
cols = [tf.Variable(initial_value=xc[i], dtype=tf.float64,)
for i in range(10)]
rows = [tf.Variable(initial_value=yc[i], dtype=tf.float64,)
for i in range(10)]
a = [tf.Variable(initial_value=np.ones(tpf.shape[0])[i], dtype=tf.float64)
for i in range(10)]
b = [tf.Variable(initial_value=np.zeros(tpf.shape[0])[i], dtype=tf.float64)
for i in range(10)]
c = [tf.Variable(initial_value=np.ones(tpf.shape[0])[i], dtype=tf.float64)
for i in range(10)]
bkg = [tf.Variable(initial_value=np.nanmean(tpf.flux_bkg, axis=(1, 2))[i], dtype=tf.float64)
for i in range(10)]
mean = [gaussian(fluxes[i], cols[i], rows[i], a[i], b[i], c[i]) + bkg[i] for i in tqdm(range(10))]
flat_field = tf.Variable(initial_value=np.ones_like(tpf.flux[0]), dtype=tf.float64)
data = tf.placeholder(dtype=tf.float64)
# Poisson negative log-likelihood (constant terms dropped)
loss = tf.reduce_sum(tf.subtract(tf.multiply(mean, flat_field),
tf.multiply(data, tf.log(mean) + tf.log(flat_field))))
var_list=[flat_field] + sum([fluxes, cols, rows, a, b, c, bkg], [])
grad = tf.gradients(loss, var_list)
session = tf.Session()
session.run(fetches=tf.global_variables_initializer())
session.run(grad, feed_dict={data: tpf.flux[1000:1010] + tpf.flux_bkg[1000:1010]})
session.run(loss, feed_dict={data: tpf.flux[1000:1010] + tpf.flux_bkg[1000:1010]})
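# For reference, the loss defined above is the negative log-likelihood of
# independent Poisson-distributed pixels $d_i$ with rates $f_i m_i$ (flat field
# times model mean), with the data-only $\log d_i!$ terms dropped:

```latex
\mathcal{L} = \sum_i \left[ f_i m_i - d_i \log(f_i m_i) \right]
            = \sum_i \left[ f_i m_i - d_i \left( \log m_i + \log f_i \right) \right]
```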
# +
#session.run(grad, feed_dict={data: tpf.flux[1000:1010] + tpf.flux_bkg[1000:1010]})
# -
optimizer = tf.contrib.opt.ScipyOptimizerInterface(loss=loss, var_list=var_list, method='TNC')
psf_flux = []
for i in tqdm(range((tpf.shape[0] - 10) // 10)):
optimizer.minimize(session=session, feed_dict={data: tpf.flux[i*10:(i+1)*10] + tpf.flux_bkg[i*10:(i+1)*10]})
psf_flux.append(session.run([fluxes]))
plt.imshow(session.run(flat_field), origin='lower')
plt.colorbar()
psf_flux = np.asarray(psf_flux)
psf_flux = psf_flux.reshape(-1)
psf_lc = LightCurve(tpf.time[:len(psf_flux)], psf_flux).flatten().fold(0.837495)
aper_lc = LightCurve(tpf.time, np.nansum(tpf.flux, axis=(1, 2))).flatten().fold(0.837495)
plt.plot(psf_lc.time, psf_lc.flux, 'ro', markersize=1)
plt.plot(aper_lc.time, aper_lc.flux, 'ko', markersize=1)
plt.plot(tpf.time[:len(psf_flux)], psf_flux)
plt.plot(tpf.time, np.nansum(tpf.flux, axis=(1, 2)))
| examples/demo-flat-field.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tensorflow course
# ## Convolutional Neural Net Example
# Using tensorflow build a convolutional neural net to classify the mnist dataset.
# ## Convolutional net
# 
# ## MNIST Dataset
# This example is using MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flatten and converted to a 1-D numpy array of 784 features (28*28).
#
# 
# We import the modules needed.
import sys, os, datetime
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import utils.utils as utils
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=False)
# Reshape to image for visualization
train_data = mnist.train.images
train_label = mnist.train.labels
test_data = mnist.test.images
test_label = mnist.test.labels
# Show input images
show_data = np.reshape(train_data,(-1, 28, 28))
utils.show_images(show_data[0:10], 2)
train_dataSet = utils.DataSet(train_data, train_label)
test_dataSet = utils.DataSet(test_data, test_label)
# +
# Parameters for the neural net
# step size for gradient descent
learning_rate = 0.005
# iterations of forward and backward passes
training_iters = 2**20
# size of each batch used for the forward/backward pass
batch_size = 2**8
# how often we write to the console and to TensorBoard
display_step = 64
# Network Parameters
# size of the kernel: the local region that is slid along the input during convolution
kernel = 5
# number of output classes
n_classes = 10
# Dropout to regularize and ensure we don't overfit.
# It randomly disables some nodes on each forward/backward pass,
# forcing new paths to be discovered in the net. Not used in testing.
dropout = 0.50 # Dropout, probability to keep units
# tf Graph input
with tf.name_scope('input'):
x = tf.placeholder(tf.float32, [None, 28 * 28])
y = tf.placeholder(tf.int64, None)
keep_prob = tf.placeholder(tf.float32) #dropout (keep probability)
# -
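# To make the dropout idea above concrete, here is a toy NumPy illustration of (inverted) dropout, independent of the TensorFlow graph: surviving activations are scaled by 1/keep_prob so the expected activation is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_dropout(activations, keep_prob):
    # keep each unit with probability keep_prob, scale survivors by 1/keep_prob
    mask = rng.random(activations.shape) < keep_prob
    return np.where(mask, activations / keep_prob, 0.0)

a = np.ones((4, 4))
dropped = toy_dropout(a, 0.5)  # entries are either 0.0 or 2.0
```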
# Create layer weights & bias
with tf.name_scope('Weights'):
weights = {
# Weights for 1st conv
'wc1': tf.Variable(tf.truncated_normal([kernel, kernel, 1, 32])),
'wc2': tf.Variable(tf.truncated_normal([kernel, kernel, 32, 64])),
# Weights for Fully connected
# 28x28-> 7x7 after 2x maxpool(2,2) times the amount of filters
'wf1': tf.Variable(tf.truncated_normal([7 * 7 * 64, 1024])),
'out': tf.Variable(tf.truncated_normal([1024, n_classes]))
}
with tf.name_scope('Biases'):
biases = {
# Bias for 1st conv block
'bc1': tf.Variable(tf.truncated_normal([32])),
'bc2': tf.Variable(tf.truncated_normal([64])),
# Bias for Fully connected
'bf1': tf.Variable(tf.truncated_normal([1024])),
# Bias for Output
'out': tf.Variable(tf.truncated_normal([n_classes]))
}
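# The `7 * 7 * 64` fully-connected input size in `wf1` above can be checked with a little shape arithmetic, assuming SAME padding (so convolutions keep the spatial size and only the 2x2 max-pools shrink it):

```python
size = 28
for _ in range(2):      # two conv + pool blocks
    size //= 2          # each 2x2 max-pool halves width and height
flat_features = size * size * 64  # 64 filters after the second block
print(flat_features)  # 3136 == 7 * 7 * 64
```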
# Create the model using tf.layers
# (equivalent lower-level implementations appear at the end of the notebook)
def conv_net(x, weights, biases, dropout, reuse, is_training = False):
# Define a scope for reusing the variables
with tf.variable_scope('ConvNet/', reuse = reuse):
x = tf.reshape(x, shape=[-1, 28, 28, 1])
# 1st conv-layer block
conv_1 = tf.layers.conv2d(x, 32, 5, activation = tf.nn.relu)
pool_1 = tf.layers.max_pooling2d(conv_1, 2, 2)
# 2nd conv-layer block
conv_2 = tf.layers.conv2d(pool_1, 64, 5, activation = tf.nn.relu)
pool_2 = tf.layers.max_pooling2d(conv_2, 2, 2)
# Fully connected layer
fc_1 = tf.contrib.layers.flatten(pool_2)
fc_1 = tf.layers.dense(fc_1, 1024)
fc_1 = tf.layers.dropout(fc_1, rate = dropout, training = is_training)
# Output layer, class prediction
out = tf.layers.dense(fc_1, n_classes)
return out
# +
# Construct models
# Training graph
train_prediction = conv_net(x, weights, biases, keep_prob,
reuse=False, is_training = True)
# Testing graph reuses weights learned in train
test_prediction = conv_net(x, weights, biases, keep_prob,
reuse=True, is_training = False)
# +
# Define loss and optimizer
with tf.name_scope('loss'):
#calculate the crossentropy for each sample in batch
x_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits = train_prediction,
labels = y)
#find mean loss of batch
loss_op = tf.reduce_mean(x_entropy)
with tf.name_scope('optimizer'):
# define what optimizer to use SGD in this case
#optimizer = tf.train.AdagradOptimizer(learning_rate = learning_rate)
#optimizer = tf.train.RMSPropOptimizer(learning_rate = learning_rate)
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate)
# tell the optimizer to minimize the loss
minimize_op = optimizer.minimize(loss_op)
# Evaluate model with test graph
with tf.name_scope('Accuracy'):
predictions = tf.argmax(test_prediction, 1)
correct_pred = tf.equal(predictions, y)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.summary.scalar("Test_Accuracy", accuracy)
# add scalars to TensorBoard (merge_all must come after all scalars are defined)
tf.summary.scalar("Training_X_entropy", loss_op)
merged_summaries = tf.summary.merge_all()
# Initializing the variables
init = tf.global_variables_initializer()
# +
# Name for tensorboard.
name = 'tb_'+ str(datetime.datetime.now().strftime('%Y-%m-%d_%H%M_%S'))
cwd = os.getcwd()
tb_path = os.path.join(cwd, "Tensorboard")
tb_path = os.path.join(tb_path, name)
# Launch the graph
sess=tf.InteractiveSession()
# initialize graph variables
sess.run(init)
# create the summary file for tensorboard
writer = tf.summary.FileWriter(tb_path, sess.graph)
# define a list of the variables we want the graph to return
outputs = [accuracy, merged_summaries]
step = 1
# Keep training until max iterations is reached
while step * batch_size < training_iters:
# load first batch
batch_x, batch_y = train_dataSet.next_batch(batch_size)
# feed dict is what we are giving the graph these are placed in the
# placeholders we defined earlier
feed_dict = {x: batch_x, y: batch_y, keep_prob: dropout}
# this is the training call to backpropagate.
# If the minimize is added the network first does a forward pass
# and then a backwards pass attempting to minimize the loss
_, loss = sess.run([minimize_op, loss_op], feed_dict)
# Testing step see if data is converging
if step % display_step == 0:
        # Show loss and accuracy to the developer and save values to TensorBoard:
        # this runs a forward pass, compares the output with the labels,
        # and stores the accuracy and loss for TensorBoard.
        # No dropout in testing, so keep_prob = 1
feed_dict = {x: batch_x, y: batch_y, keep_prob: 1.}
# forward pass
acc, summary = sess.run(outputs, feed_dict)
# print Loss and accuracy
pvalue = "Iter {0}, Minibatch Loss= {1:.4f}, Training Accuracy= {2:.3f}"
print(pvalue.format(str(step * batch_size), loss, acc), end='\r')
# write to tensorboard
writer.add_summary(summary, step)
# update step
step += 1
print("\nOptimization Finished!, Training Accuracy= {:.3f}".format(acc))
# -
# Inference
# Test on the test dataset to see how good our network is
# append accuracies from each batch
acc_app=[]
for i in range(10):
batch_x, batch_y = test_dataSet.next_batch(1056)
feed_dict={x: batch_x, y: batch_y, keep_prob: 1.}
# Forward pass and return accuracy and predictions
acc, cp = sess.run([accuracy, predictions], feed_dict)
acc_app.append(acc)
mean = np.mean(np.array(acc_app))
print("Accuracy for the testing dataset is: {0:.4f}".format(mean))
# select a random sample in the test data
rnd = np.random.randint(len(test_dataSet.data)-1)
image = test_dataSet.data[rnd]
label = test_dataSet.label[rnd]
show = np.reshape(image,(28,28))
plt.imshow(show, cmap='gray')
plt.show()
# +
# expand dimensions of the image to get the right shape:
# effectively giving the input a batch size of 1
exp_img = np.expand_dims(image, 0)
feed_dict={x: exp_img, y: label, keep_prob: 1.}
acc, pred = sess.run([accuracy, predictions], feed_dict)
print("Prediction: {0} \nLabel: {1} \nis equal? {2}".format(pred[0], label, pred[0] == label))
# +
# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
# Conv2D wrapper, with bias and relu activation
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
def maxpool2d(x, k=2):
# MaxPool2D wrapper
return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
padding='SAME')
# -
# Create model using wrappers
# the model is identical
def conv_net_wrappers(x, weights, biases, dropout, name, reuse, is_training = False):
with tf.variable_scope(name+"_ConvNet", reuse=reuse):
x = tf.reshape(x, [-1, 28, 28, 1]) # bwhc
# 1st conv-layer block
conv_1 = conv2d(x, weights['wc1'], biases['bc1'])
# 1st pool layer
pool_1 = maxpool2d(conv_1, 2)
# 2nd conv-layer block
conv_2= conv2d(pool_1, weights['wc2'], biases['bc2'])
# 2nd pool layer
pool_2 = maxpool2d(conv_2, 2)
# Fully connected layer
fc_1 = tf.reshape(pool_2, [-1, weights['wf1'].get_shape().as_list()[0]])
        fc_1 = tf.matmul(fc_1, weights['wf1']) + biases['bf1']
fc_1 = tf.nn.relu(fc_1)
# Apply Dropout
drop = tf.nn.dropout(fc_1, dropout)
# Output, class prediction
out = tf.matmul(drop, weights['out']) + biases['out']
return out
# Create model using tf.layers
# the model is identical
def conv_net_tflayers(x, dropout, name, reuse, is_training = False):
    # Define a scope for reusing the variables
    with tf.variable_scope(name + '_ConvNet', reuse=reuse):
        x = tf.reshape(x, shape=[-1, 28, 28, 1])
        with tf.variable_scope(name + "_conv_block1"):
            # 1st conv-layer block
            conv_1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
            pool_1 = tf.layers.max_pooling2d(conv_1, 2, 2)
        with tf.variable_scope(name + "_conv_block2"):
            # 2nd conv-layer block
            conv_2 = tf.layers.conv2d(pool_1, 64, 5, activation=tf.nn.relu)
            pool_2 = tf.layers.max_pooling2d(conv_2, 2, 2)
        with tf.variable_scope(name + "_fully_connected"):
            # Fully connected layer
            fc_1 = tf.contrib.layers.flatten(pool_2)
            fc_1 = tf.layers.dense(fc_1, 1024)
            fc_1 = tf.layers.dropout(fc_1, rate=dropout, training=is_training)
            # Output layer, class prediction
            out = tf.layers.dense(fc_1, n_classes)
    return out
# Create model
def conv_net(x, weights, biases, dropout, name, reuse, is_training = False):
with tf.variable_scope(name + "_ConvNet", reuse=reuse):
# reshape input from vector to [batch, width, height, channels]
x = tf.reshape(x, [-1, 28, 28, 1]) # bwhc
# 1st conv-layer block
with tf.variable_scope(name+"_conv_block1"):
conv_1 = tf.nn.conv2d(x, weights['wc1'], strides=[1, 1, 1, 1], padding='SAME')
conv_1 = conv_1 + biases['bc1']
conv_1 = tf.nn.relu(conv_1)
# 1st pool layer
pool_1 = tf.nn.max_pool(conv_1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
padding='SAME')
# 2nd conv-layer block
with tf.variable_scope(name + "_conv_block2"):
conv_2 = tf.nn.conv2d(pool_1, weights['wc2'], strides=[1, 1, 1, 1], padding='SAME')
conv_2 = conv_2 + biases['bc2']
conv_2 = tf.nn.relu(conv_2)
# 2nd pool layer
pool_2 = tf.nn.max_pool(conv_2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
padding='SAME')
with tf.variable_scope(name + "_fcn_1"):
# Fully connected 1'
# flatten the convolutional output to a vector
fc_1 = tf.reshape(pool_2, [-1, weights['wf1'].get_shape().as_list()[0]])
# matrix multiplication with weights and add bias
fc_1 = tf.matmul(fc_1, weights['wf1']) + biases['bf1']
fc_1 = tf.nn.relu(fc_1)
# Apply Dropout
drop = tf.nn.dropout(fc_1, dropout)
# Output, class prediction
out = tf.matmul(drop, weights['out']) + biases['out']
return out
| utils/tst.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
def addfun(a,b):
    '''This function adds two integers together.'''
print(a+b)
addfun(1,2)
def hello():
print('Hello')
hello()
def greeting(name):
print ('Hello', name,'!')
greeting('Omar')
# +
'''Return statement: returns a result
that can be stored in a variable or used however the
coder sees fit.'''
'''Whatever is put after return is
what your function is going to output.'''
def addnum(c,d):
return c+d
# -
x = addnum(2,3)
x
print(addnum('st','ring')) # concatenates
"This is because we didn't declare a variable type"
def is_prime(num):
    '''This function checks for prime numbers.
    Input: A number
    Output: A print statement saying whether or not the number is
    prime.'''
    for n in range(2, num):
        if num % n == 0:
            print(num, 'is not a prime.')
            break
    else:
        # this for-else clause runs only if the loop finished without break
        print(num, 'is a prime.')
is_prime(13)
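# A variant that returns a boolean instead of printing can be more reusable; this rewrite (including the handling of numbers below 2) is an illustration, not part of the lesson.

```python
def is_prime_bool(num):
    '''Return True if num is prime, False otherwise.'''
    if num < 2:
        return False  # 0, 1 and negatives are not prime
    for n in range(2, num):
        if num % n == 0:
            return False
    return True

print(is_prime_bool(13), is_prime_bool(12))
```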
| Introduction to Python/Functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
from scse.main import miniSCOTnotebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sys
# Run a bunch of sims with the following settings:
# * demand is always 7 mil, no reason to change it
# * renewable varies from ~10% - ~70%
# * flexible varies from ~10% - ~50%
# * backup power to buy varies from 2 standard deviations to 25 standard deviations (in case model variance is also off)
# still need to make a decision on the pricing points for everything, then I'm ready to run a bunch of sims and be done with
# this
num_runs = 10
renewable_scales = np.linspace(.1, .7, 13)
flexible_scales = np.linspace(.1, .5, 9)
backup_power = np.linspace(0,25, 26)
# just add the desired key-value to these run params to run a sim
run_params = {
'time_horizon': 730 # run for 2 years
}
m = miniSCOTnotebook()
market_demand = 7000000
path = "./sims/renewable_scales/"
orig_stdout = sys.stdout
for s in renewable_scales:
    for i in range(num_runs):
        print(path + "renewable_{}_{}".format(s, i))
        # redirect stdout to a per-run log file (assigning a plain string
        # to sys.stdout would break every later print call)
        sys.stdout = open(path + "renewable_{}_{}".format(s, i), "w")
        run_params['renewable_scale'] = s * market_demand
        m.start(**run_params)
        m.run()
        sys.stdout.close()
        sys.stdout = orig_stdout
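# A safer pattern for this kind of redirection is `contextlib.redirect_stdout`, which restores `sys.stdout` automatically even if a run raises. Sketch below, with a hypothetical `run()` standing in for the simulation call.

```python
import contextlib
import io

def run():
    # stand-in for the simulation call, for illustration only
    print("simulation output")

buffer = io.StringIO()  # in practice this could be open(logfile, "w")
with contextlib.redirect_stdout(buffer):
    run()  # everything printed inside the block goes to the buffer
print("captured:", buffer.getvalue().strip())
```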
| .ipynb_checkpoints/Run Sims-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gym environment with scikit-decide tutorial: Continuous Mountain Car
#
# In this notebook we tackle the continuous mountain car problem taken from [OpenAI Gym](https://gym.openai.com/), a toolkit for developing environments, usually to be solved by Reinforcement Learning (RL) algorithms.
#
# Continuous Mountain Car, a standard testing domain in RL, is a problem in which an under-powered car must drive up a steep hill.
#
# <div align="middle">
# <video controls autoplay preload
# src="https://gym.openai.com/videos/2019-10-21--mqt8Qj1mwo/MountainCarContinuous-v0/original.mp4">
# </video>
# </div>
#
# Note that we use here the *continuous* version of the mountain car because
# it has a *shaped* or *dense* reward (i.e. not sparse) which solvers can exploit, as opposed to the other "Mountain Car" environments.
# For reminder, a sparse reward is a reward which is null almost everywhere, whereas a dense or shaped reward has more meaningful values for most transitions.
#
# This problem has been chosen for two reasons:
# - Show how scikit-decide can be used to solve Gym environments (the de-facto standard in the RL community),
# - Highlight that by doing so, you will be able to use not only solvers from the RL community (like the ones in [stable_baselines3](https://github.com/DLR-RM/stable-baselines3) for example), but also other solvers coming from other communities like genetic programming and planning/search (use of an underlying search graph) that can be very efficient.
#
# Therefore in this notebook we will go through the following steps:
# - Wrap a Gym environment in a scikit-decide domain;
# - Use a classical RL algorithm like PPO to solve our problem;
# - Give CGP (Cartesian Genetic Programming) a try on the same problem;
# - Finally use IW (Iterated Width) coming from the planning community on the same problem.
# +
import os
from time import sleep
from typing import Callable, Optional
import gym
import matplotlib.pyplot as plt
from IPython.display import clear_output
from stable_baselines3 import PPO
from skdecide import Solver
from skdecide.hub.domain.gym import (
GymDiscreteActionDomain,
GymDomain,
GymPlanningDomain,
GymWidthDomain,
)
from skdecide.hub.solver.cgp import CGP
from skdecide.hub.solver.iw import IW
from skdecide.hub.solver.stable_baselines import StableBaseline
# choose standard matplolib inline backend to render plots
# %matplotlib inline
# -
# When running this notebook on remote servers like with Colab or Binder, rendering of gym environment will fail as no actual display device exists. Thus we need to start a virtual display to make it work.
if "DISPLAY" not in os.environ:
import pyvirtualdisplay
_display = pyvirtualdisplay.Display(visible=False, size=(1400, 900))
_display.start()
# ## About Continuous Mountain Car problem
# In this problem, an under-powered car must drive up a steep hill.
# The agent (a car) is started at the bottom of a valley. For any given
# state the agent may choose to accelerate to the left, right or cease
# any acceleration.
#
# ### Observations
#
# - Car Position [-1.2, 0.6]
# - Car Velocity [-0.07, +0.07]
#
# ### Action
# - the power coefficient [-1.0, 1.0]
#
#
# ### Goal
# Reach a car position greater than 0.45.
#
# ### Reward
#
# A reward of 100 is awarded if the agent reaches the flag (position = 0.45) on top of the mountain.
# The reward is decreased by the amount of energy consumed at each step.
#
# ### Starting State
# The position of the car is assigned a uniform random value in [-0.6 , -0.4].
# The starting velocity of the car is always assigned to 0.
#
#
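# To build intuition for why the car is "under-powered", here is a rough sketch of the classic dynamics. The constants below (power factor, gravity term) are the commonly used ones for this environment and are an approximation, not a copy of the installed Gym code. With zero throttle the car just oscillates in the valley and never reaches position 0.45.

```python
import math

def mc_step(position, velocity, action):
    # action in [-1, 1]; velocity and position are clipped to the env bounds
    force = min(max(action, -1.0), 1.0)
    velocity += force * 0.0015 - 0.0025 * math.cos(3 * position)
    velocity = min(max(velocity, -0.07), 0.07)
    position = min(max(position + velocity, -1.2), 0.6)
    return position, velocity

pos, vel = -0.5, 0.0
max_pos = pos
for _ in range(999):
    pos, vel = mc_step(pos, vel, 0.0)  # do-nothing policy
    max_pos = max(max_pos, pos)
print(max_pos < 0.45)  # the car never climbs out without throttle
```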
# ## Wrap Gym environment in a scikit-decide domain
# We choose the gym environment we would like to use.
ENV_NAME = "MountainCarContinuous-v0"
# We define a domain factory using `GymDomain` proxy available in scikit-decide which will wrap the Gym environment.
domain_factory = lambda: GymDomain(gym.make(ENV_NAME))
# Here is a screenshot of such an environment.
#
# Note: We close the domain straight away to avoid leaving the OpenGL pop-up window open on local Jupyter sessions.
domain = domain_factory()
domain.reset()
plt.imshow(domain.render(mode="rgb_array"))
plt.axis("off")
domain.close()
# ## Solve with Reinforcement Learning (StableBaseline + PPO)
#
# We first try a solver coming from the Reinforcement Learning community that makes use of [stable_baselines3](https://github.com/DLR-RM/stable-baselines3), which gives access to a lot of RL algorithms.
#
# Here we choose [Proximal Policy Optimization (PPO)](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html) solver. It directly optimizes the weights of the policy network using stochastic gradient ascent. See more details in stable baselines [documentation](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html) and [original paper](https://arxiv.org/abs/1707.06347).
# ### Check compatibility
# We check the compatibility of the domain with the chosen solver.
domain = domain_factory()
assert StableBaseline.check_domain(domain)
domain.close()
# ### Solver instantiation
solver = StableBaseline(
PPO, "MlpPolicy", learn_config={"total_timesteps": 10000}, verbose=True
)
# ### Training solver on domain
GymDomain.solve_with(solver, domain_factory)
# ### Rolling out a solution
#
# We can use the trained solver to roll out an episode to see if this is actually solving the problem at hand.
#
# For educative purpose, we define here our own rollout (which will probably be needed if you want to actually use the solver in a real case). If you want to take a look at the (more complex) one already implemented in the library, see the `rollout()` function in [utils.py](https://github.com/airbus/scikit-decide/blob/master/skdecide/utils.py) module.
#
# By default we display the solution in a matplotlib figure. If you only need to check whether the goal is reached or not, you can specify `render=False`. In this case, the rollout is greatly sped up and a message is still printed at the end of the process specifying success or not, with the number of steps required.
def rollout(
domain: GymDomain,
solver: Solver,
max_steps: int,
pause_between_steps: Optional[float] = 0.01,
render: bool = True,
):
"""Roll out one episode in a domain according to the policy of a trained solver.
Args:
        domain: the domain to solve
solver: a trained solver
max_steps: maximum number of steps allowed to reach the goal
pause_between_steps: time (s) paused between agent movements.
No pause if None.
render: if True, the rollout is rendered in a matplotlib figure as an animation;
if False, speed up a lot the rollout.
"""
# Initialize episode
solver.reset()
observation = domain.reset()
# Initialize image
if render:
plt.ioff()
fig, ax = plt.subplots(1)
ax.axis("off")
plt.ion()
img = ax.imshow(domain.render(mode="rgb_array"))
display(fig)
# loop until max_steps or goal is reached
for i_step in range(1, max_steps + 1):
if pause_between_steps is not None:
sleep(pause_between_steps)
# choose action according to solver
action = solver.sample_action(observation)
        # apply the action and get the corresponding outcome
outcome = domain.step(action)
observation = outcome.observation
# update image
if render:
img.set_data(domain.render(mode="rgb_array"))
fig.canvas.draw()
clear_output(wait=True)
display(fig)
# final state reached?
if outcome.termination:
break
# close the figure to avoid jupyter duplicating the last image
if render:
plt.close(fig)
# goal reached?
is_goal_reached = observation[0] >= 0.45
if is_goal_reached:
print(f"Goal reached in {i_step} steps!")
else:
print(f"Goal not reached after {i_step} steps!")
return is_goal_reached, i_step
# We create a domain for the roll out and close it at the end. If not closing it, an OpenGL popup windows stays open, at least on local Jupyter sessions.
domain = domain_factory()
try:
rollout(
domain=domain,
solver=solver,
max_steps=999,
pause_between_steps=None,
render=True,
)
finally:
domain.close()
# We can see that PPO does not find a solution to the problem. This is mainly due to the way the reward is computed. Indeed negative reward accumulates as long as the goal is not reached, which encourages the agent to stop moving.
# Even if we increase the training time, it still occurs. (You can test that by increasing the parameter "total_timesteps" in the solver definition.)
#
# Actually, typical RL algorithms like PPO are a good fit for domains with "well-shaped" rewards (guiding towards the goal), but can struggle in sparse or "badly-shaped" reward environment like Mountain Car Continuous.
#
# We will see in the next sections that non-RL methods can overcome this issue.
# ### Cleaning up
# Some solvers need proper cleaning before being deleted.
solver._cleanup()
# Note that this is automatically done if you use the solver within a `with` statement. The syntax would look something like:
#
# ```python
# with solver_factory() as solver:
# MyDomain.solve_with(solver, domain_factory)
# rollout(domain=domain, solver=solver)
# ```
# ## Solve with Cartesian Genetic Programming (CGP)
#
# CGP (Cartesian Genetic Programming) is a form of genetic programming that uses a graph representation (2D grid of nodes) to encode computer programs.
# See [<NAME>. (2003). Cartesian Genetic Programming. 10.1007/978-3-642-17310-3.](https://www.researchgate.net/publication/2859242_Cartesian_Genetic_Programming) for more details.
#
# Pros:
# + ability to customize the set of atomic functions used by CPG (e.g. to inject some domain knowledge)
# + ability to inspect the final formula found by CGP (no black box)
#
# Cons:
# - the fitness function of CGP is defined by the rewards, so can be unable to solve in sparse reward scenarios
# ### Check compatibility
# We check the compatibility of the domain with the chosen solver.
domain = domain_factory()
assert CGP.check_domain(domain)
domain.close()
# ### Solver instantiation
solver = CGP("TEMP_CGP", n_it=25, verbose=True)
# ### Training solver on domain
GymDomain.solve_with(solver, domain_factory)
# ### Rolling out a solution
# We use the same roll out function as for PPO solver.
domain = domain_factory()
try:
rollout(
domain=domain,
solver=solver,
max_steps=999,
pause_between_steps=None,
render=True,
)
finally:
domain.close()
# CGP seems to do well on this problem. Indeed the presence of trigonometric functions ($asin$, $acos$, and $atan$) in its base set of atomic functions makes it suitable for modelling this kind of pendular motion.
# ***Warning***: In some cases, CGP does not actually find a solution. As there is randomness involved, this cannot be ruled out. Running multiple episodes can sometimes work around it; if you have bad luck, you may even have to retrain the solver.
for i_episode in range(10):
print(f"Episode #{i_episode}")
domain = domain_factory()
try:
rollout(
domain=domain,
solver=solver,
max_steps=999,
pause_between_steps=None,
render=False,
)
finally:
domain.close()
# ### Cleaning up
solver._cleanup()
# ## Solve with Classical Planning (IW)
#
# Iterated Width (IW) is a width based search algorithm that builds a graph on-demand, while pruning non-novel nodes.
#
# In order to handle continuous domains, a state encoding specific to continuous state variables dynamically and adaptively discretizes the continuous state variables in such a way to build a compact graph based on intervals (rather than a naive grid of discrete point values).
#
# The novelty measure discards intervals that are included in previously explored intervals, thus favoring the extension of the state variable intervals.
#
# See https://www.ijcai.org/proceedings/2020/578 for more details.
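# The width-1 novelty test at the heart of IW can be sketched in a few lines. This is a toy version over discrete feature tuples, not scikit-decide's interval-based implementation: a state is considered novel (and kept) if at least one of its individual feature values has never been seen before.

```python
def make_novelty_checker():
    seen = set()  # (feature_index, value) pairs seen so far

    def is_novel(features):
        # width-1 novelty: at least one single feature value is new
        new_pairs = {(i, v) for i, v in enumerate(features) if (i, v) not in seen}
        seen.update(new_pairs)
        return bool(new_pairs)

    return is_novel

is_novel = make_novelty_checker()
print(is_novel((0, 1)))  # True: everything is new
print(is_novel((0, 1)))  # False: pruned, nothing new
print(is_novel((0, 2)))  # True: the second feature takes a new value
```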
# ### Prepare the domain for IW
#
# We need to wrap the Gym environment in a domain with finer charateristics so that IW can be used on it. More precisely, it needs the methods inherited from `GymPlanningDomain`, `GymDiscreteActionDomain` and `GymWidthDomain`. In addition, we will need to provide to IW a state features function to dynamically increase state variable intervals. For Gym domains, we use Boundary Extension Encoding (BEE) features as explained in the [paper](https://www.ijcai.org/proceedings/2020/578) mentioned above. This is implemented as `bee2_features()` method in `GymWidthDomain` that our domain class will inherit.
# +
class D(GymPlanningDomain, GymWidthDomain, GymDiscreteActionDomain):
pass
class GymDomainForWidthSolvers(D):
def __init__(
self,
gym_env: gym.Env,
set_state: Callable[[gym.Env, D.T_memory[D.T_state]], None] = None,
get_state: Callable[[gym.Env], D.T_memory[D.T_state]] = None,
termination_is_goal: bool = True,
continuous_feature_fidelity: int = 5,
discretization_factor: int = 3,
branching_factor: int = None,
max_depth: int = 1000,
) -> None:
GymPlanningDomain.__init__(
self,
gym_env=gym_env,
set_state=set_state,
get_state=get_state,
termination_is_goal=termination_is_goal,
max_depth=max_depth,
)
GymDiscreteActionDomain.__init__(
self,
discretization_factor=discretization_factor,
branching_factor=branching_factor,
)
GymWidthDomain.__init__(
self, continuous_feature_fidelity=continuous_feature_fidelity
)
gym_env._max_episode_steps = max_depth
# -
# We redefine accordingly the domain factory.
domain4width_factory = lambda: GymDomainForWidthSolvers(gym.make(ENV_NAME))
# ### Check compatibility
# We check the compatibility of the domain with the chosen solver.
domain = domain4width_factory()
assert IW.check_domain(domain)
domain.close()
# ### Solver instantiation
# As explained earlier, we use the Boundary Extension Encoding state features `bee2_features` so that IW can dynamically increase state variable intervals. In other domains, other state features might be more suitable.
solver = IW(
state_features=lambda d, s: d.bee2_features(s),
node_ordering=lambda a_gscore, a_novelty, a_depth, b_gscore, b_novelty, b_depth: a_novelty
> b_novelty,
parallel=False,
debug_logs=False,
domain_factory=domain4width_factory,
)
# ### Training solver on domain
GymDomainForWidthSolvers.solve_with(solver, domain4width_factory)
# ### Rolling out a solution
#
# **Disclaimer:** This rollout can be a bit painful to watch in local Jupyter sessions. Indeed, IW creates copies of the environment at each step, which makes a new OpenGL window pop up and then close each time.
# We have to slightly modify the rollout function, as observations for the new domain are now wrapped in a `GymDomainProxyState` to make them serializable. To get access to the underlying numpy array, we need to look for `observation._state`.
def rollout_iw(
domain: GymDomain,
solver: Solver,
max_steps: int,
pause_between_steps: Optional[float] = 0.01,
render: bool = False,
):
"""Roll out one episode in a domain according to the policy of a trained solver.
Args:
domain: the maze domain to solve
solver: a trained solver
max_steps: maximum number of steps allowed to reach the goal
pause_between_steps: time (s) paused between agent movements.
No pause if None.
render: if True, the rollout is rendered in a matplotlib figure as an animation;
if False, speed up a lot the rollout.
"""
# Initialize episode
solver.reset()
observation = domain.reset()
# Initialize image
if render:
plt.ioff()
fig, ax = plt.subplots(1)
ax.axis("off")
plt.ion()
img = ax.imshow(domain.render(mode="rgb_array"))
display(fig)
# loop until max_steps or goal is reached
for i_step in range(1, max_steps + 1):
if pause_between_steps is not None:
sleep(pause_between_steps)
# choose action according to solver
action = solver.sample_action(observation)
# get corresponding action
outcome = domain.step(action)
observation = outcome.observation
# update image
if render:
img.set_data(domain.render(mode="rgb_array"))
fig.canvas.draw()
clear_output(wait=True)
display(fig)
# final state reached?
if outcome.termination:
break
# close the figure to avoid jupyter duplicating the last image
if render:
plt.close(fig)
# goal reached?
is_goal_reached = observation._state[0] >= 0.45
if is_goal_reached:
print(f"Goal reached in {i_step} steps!")
else:
print(f"Goal not reached after {i_step} steps!")
return is_goal_reached, i_step
domain = domain4width_factory()
try:
rollout_iw(
domain=domain,
solver=solver,
max_steps=999,
pause_between_steps=None,
render=True,
)
finally:
domain.close()
# IW works especially well on Mountain Car.
#
# Indeed, we need to increase the kinetic+potential energy to reach the goal, which amounts to increasing the values of the state variables (position and velocity) as much as possible. This is exactly what IW is designed to do (it tries to explore novel states, which here means states with higher position or velocity).
#
# As a consequence, IW can find an optimal strategy in a few seconds (whereas in most cases PPO and CGP cannot find optimal strategies in the same computation time).
# ### Cleaning up
solver._cleanup()
# ## Conclusion
# We saw that it is possible thanks to scikit-decide to apply solvers from different fields and communities (Reinforcement Learning, Genetic Programming, and Planning) on a OpenAI Gym Environment.
#
# Even though the domain used here is more classical for RL community, the solvers from other communities performed far better. In particular the IW algorithm was able to find an efficient solution in a very short time.
| notebooks/12_gym_tuto.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="AlGl1p_LMK6e"
# # **Neural Network for generating text based on a training text file**
# + colab={} colab_type="code" id="Wvqydb3MqCZJ"
import os
import sys
import keras
import numpy as np
import string
# + colab={} colab_type="code" id="VI-K9QzxqfrE"
#from google.colab import drive
# + colab={"base_uri": "https://localhost:8080/", "height": 191} colab_type="code" id="YZdNaMD6qmJm" outputId="f36a11cc-9f61-401c-9c2e-f2c0a83c27d8"
#drive.mount('/content/drive')
# + colab={} colab_type="code" id="-77YEV1Mq1xa"
#os.chdir("drive/My Drive/ML Practice/Text Generator")
# + [markdown] colab_type="text" id="69W4F47sMZdn"
# ## List of chars to be treated as separate tokens in the dictionary
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="XVAq7FzFqCZQ" outputId="2cdf7938-dd88-443d-8ecd-392059617903"
string.punctuation
# + colab={} colab_type="code" id="ReCryGqvqCZb"
chars = string.punctuation
# + colab={} colab_type="code" id="IGgK3c_6qCZg"
chars = chars.replace("`", "")
# + colab={} colab_type="code" id="LjjaAkGnqCZn"
chars = chars.replace("'", "")
# + colab={} colab_type="code" id="X6vc7q88qCZw"
f = open("/Users/elizabethlorelei/Downloads/ML/alice_in_wonderland.txt")
alice = f.read()
f.close()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="A7CSwxHBqCZ2" outputId="1df8bcb0-ca52-4ec7-9b7f-5764a2b44c3b"
len(alice)
# + [markdown] colab_type="text" id="Rqj-veQ4qCZ9"
# ## Replace ' and `
# + colab={} colab_type="code" id="5Yccu7xvqCZ-"
alice = alice.replace(" '", '"')
alice = alice.replace("' ", '"')
alice = alice.replace(" `", '"')
alice = alice.replace("` ", '"')
# + colab={"base_uri": "https://localhost:8080/", "height": 74} colab_type="code" id="LNA2eBWBqCaE" outputId="c005280a-29f1-480c-a0be-fd4027496647"
alice[:1500]
# + colab={} colab_type="code" id="u1pWxFYOqCaM"
import re
# + [markdown] colab_type="text" id="RgRRmhtzMkEO"
# ## The text comes line by line: lines are separated by a single \n and paragraphs by runs of \n symbols. We replace each run of \n with the unique sentinel word qwerty, turn the remaining \n tokens into spaces, then replace qwerty back with \n
# + colab={} colab_type="code" id="Dk9oCc8-qCaR"
s = "\n\n"
# + colab={} colab_type="code" id="jAK-qDgaqCaU"
s = re.sub("\n\n+", "a", s) #какое-то количество \n
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="0uMeKD75qCaX" outputId="ab6766ad-cfd7-492f-9301-1a9b3a08e6f9"
s
# + colab={} colab_type="code" id="KNYApz-RqCab"
alice = re.sub("\n\n+", "qwerty", alice)
# + colab={"base_uri": "https://localhost:8080/", "height": 74} colab_type="code" id="mqEOh0VVqCae" outputId="e01d4f3c-6a46-49af-a7fc-49bee3b792e9"
alice[:1500]
# + colab={} colab_type="code" id="8iArKAebqCaj"
alice = alice.replace("\n", " ")
# + colab={"base_uri": "https://localhost:8080/", "height": 74} colab_type="code" id="oJKJYAaUqCam" outputId="a07ea4c5-53a2-4098-d4b9-d9de89b99fe7"
alice[:1500]
# + colab={} colab_type="code" id="29lDMakXqCaq"
alice = alice.replace("qwety", "\n")
# + colab={"base_uri": "https://localhost:8080/", "height": 74} colab_type="code" id="1GgKvy-5qCat" outputId="b2dfda69-4210-443a-aaf1-ba0f3dfcf820"
print(alice[:1500])
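# The three-step paragraph handling can be sketched end to end on a toy excerpt (the sample text below is made up for illustration):

```python
import re

# Toy excerpt (assumed): a single \n wraps lines inside a paragraph,
# a run of blank lines separates paragraphs
text = "First line\nsecond line.\n\n\nNew paragraph\nlast line."

# 1. collapse runs of two or more \n into the sentinel word
text = re.sub("\n\n+", "qwerty", text)
# 2. turn the remaining single \n (line wraps) into spaces
text = text.replace("\n", " ")
# 3. restore the sentinel as a real paragraph break
text = text.replace("qwerty", "\n")
```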
# + colab={} colab_type="code" id="YrOEDDGTqCax"
alice = alice.replace(" '", '"')
alice = alice.replace("' ", '"')
alice = alice.replace(" `", '"')
alice = alice.replace("` ", '"')
# + [markdown] colab_type="text" id="nkLu_ER9MskR"
# ## Surrounding punctuation symbols with " " on both sides to make them separate words
# + colab={} colab_type="code" id="9_XUJnTZqCaz"
for c in chars:
alice = alice.replace(c," "+c+" ")
# + colab={} colab_type="code" id="7NEKs59BqCa1"
alice = alice.replace("\t", " ")
# + colab={} colab_type="code" id="V5B8UxHfqCa4"
alice = alice.replace("*", " ")
# + colab={} colab_type="code" id="1Z1McFJ7qCa6"
f = open("/Users/elizabethlorelei/Downloads/ML/alice_formatted.txt", "w")
f.write(alice)
f.close()
# + colab={"base_uri": "https://localhost:8080/", "height": 74} colab_type="code" id="HYouRmeSqCa8" outputId="09f8259c-e960-47e4-cb4d-139d4c2aa42a"
print(alice)
# + colab={} colab_type="code" id="4BdlXnRiqCbB"
alice = alice.lower()
# + colab={} colab_type="code" id="cRgtk9vhqCbD"
alice_words = alice.split()
# + colab={"base_uri": "https://localhost:8080/", "height": 1717} colab_type="code" id="f_KF34jfqCbF" outputId="c1cefd77-c3b6-47d0-c727-192d8c1683f9"
alice_words[:100]
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="mw2x2kg-qCbH" outputId="736effce-5959-4dc5-cbe7-bf782eff5d1a"
len(alice_words)
# + colab={} colab_type="code" id="mS1pNJMYqCbK"
vocab = set(alice_words)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="977ottKOqCbM" outputId="d25bb9fa-c4cd-4732-b0f0-ad4139c2c8c4"
len(vocab)
# + [markdown] colab_type="text" id="upeZp39tMyKj"
# ## Dictionaries for converting index to word and back
# + colab={} colab_type="code" id="EfA7HAoFqCbO"
index_to_word = {i: word for i, word in enumerate(vocab)}
# + colab={"base_uri": "https://localhost:8080/", "height": 17034} colab_type="code" id="-nUTXShdqCbQ" outputId="c2b94063-7c47-481a-9f1a-4876ae54334a"
index_to_word
# + colab={} colab_type="code" id="ZnWstIe_qCbT"
word_to_index = {word: i for i, word in index_to_word.items()}
# + colab={"base_uri": "https://localhost:8080/", "height": 17034} colab_type="code" id="b0Oe9HY7qCbW" outputId="cccd77bc-f382-4078-9580-f4791976d55c"
word_to_index
# + [markdown] colab_type="text" id="LGNoVxqiM49t"
# ## Function normalize converts input text into the form expected by the next preprocessing stage
# + colab={} colab_type="code" id="MGI719znqCbZ"
def normalize(s):
chars = string.punctuation
for c in chars:
s = s.replace(c," "+c+" ")
s = s.lower()
return s
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="O5HsRB8aqCbb" outputId="072371cd-eb5c-4b0d-f7b5-768cabbde6a9"
normalize(" irfvjj; dd:kDJJRRJ")
# + [markdown] colab_type="text" id="21BG4Y7nM8i6"
# ## Function accept checks whether every word in a given list is in the vocabulary
# + colab={} colab_type="code" id="mD4MftgnqCbe"
def accept(words):
for word in words:
if(not word in vocab):
return False
return True
# + [markdown] colab_type="text" id="R_I1QMdJNAwF"
# ## Converting input text to numpy array of indices
# + colab={} colab_type="code" id="DxivrF8RqCbg"
def text_to_index(s):
s = normalize(s)
s_words = s.split()
if (not accept(s_words)):
print("Error")
return ""
return np.array([word_to_index[word] for word in s_words])
# + [markdown] colab_type="text" id="TgkffCJDNFFI"
# ## Function converts list of words to numpy array of indices
# + colab={} colab_type="code" id="VfbZQEi9qCbi"
def word_list_to_index(s_words):
if (not accept(s_words)):
print("Error")
return ""
return np.array([word_to_index[word] for word in s_words])
# + [markdown] colab_type="text" id="Fwg0rudmNIrf"
# ## Function converts numpy array of indices back to text
# + colab={} colab_type="code" id="wQrn8D9aqCbk"
def index_to_text(inds):
s = ""
for i in range(inds.shape[0]):
s += index_to_word[inds[i]] + " "
return s
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="5vjF7AHoqCbm" outputId="b98c5757-a3c6-4e99-a352-85a78f03b78e"
index_to_text(text_to_index("Alice , ? land"))
# + colab={"base_uri": "https://localhost:8080/", "height": 74} colab_type="code" id="jo3bkuAQqCbo" outputId="b0785586-68bc-4b6e-b92f-cfe343aa1233"
alice
# + colab={} colab_type="code" id="GOO8NEAdqCbq"
alice_ind = word_list_to_index(alice_words)
# + [markdown] colab_type="text" id="b07wP89KNM-E"
# ## Creating training set: input - sequence of 20 words, output - the 21st word
# + colab={} colab_type="code" id="WGA1PaO3qCbr"
sequence_len = 20
train_x = []
train_y = []
for i in range(alice_ind.shape[0] - sequence_len - 1):
train_x.append(alice_ind[i:i+sequence_len])
train_y.append(alice_ind[i+sequence_len])
train_x = np.array(train_x)
train_y = np.array(train_y)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="U4QZbpkuPFuw" outputId="33f95186-b529-4bf6-aa5c-95a329fbb243"
train_x.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="htaTj6_uqCbs" outputId="b9b50e82-d361-4221-e44d-555f8983e61f"
train_y.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="8zfPZmGjPBF4" outputId="4b41f156-baef-42f8-841c-fed09cb62fea"
index_to_text(train_x[46])
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="NrZyyLDPPIgx" outputId="6b6eb5fc-cbe9-4426-e12c-84c3e831816a"
index_to_text(np.array([train_y[45]]))
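# The window-building loop above can also be written with NumPy's sliding_window_view (available in NumPy >= 1.20); note the vectorized form keeps one extra final window that the loop's range skips. The toy array and names below (toy_ind, toy_x, toy_y) are illustrative stand-ins, not the notebook's variables:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

toy_ind = np.arange(30)  # stand-in for alice_ind
seq_len = 20             # same as sequence_len above

# every length-(seq_len+1) window: the first seq_len columns are the
# input sequence, the last column is the target word
toy_windows = sliding_window_view(toy_ind, seq_len + 1)
toy_x = toy_windows[:, :-1]
toy_y = toy_windows[:, -1]
```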
# + [markdown] colab_type="text" id="4g-hlgOINTAI"
# ## Building the RNN: an embedding layer followed by two stacked LSTM layers, followed by a Dense layer with softmax predicting the next word in the sequence
# + colab={} colab_type="code" id="LbHsr9pNqCbt"
from keras.layers import LSTM, Dense, Embedding
# + colab={} colab_type="code" id="__LnKRStqCbv"
model = keras.Sequential()
model.add(Embedding(len(vocab), 30))
model.add(LSTM(64, return_sequences = True))
model.add(LSTM(128, return_sequences = False))
model.add(Dense(len(vocab), activation = "softmax"))
# + colab={"base_uri": "https://localhost:8080/", "height": 272} colab_type="code" id="ZSKhDLdHqCbx" outputId="4f14441c-239f-4701-e763-c610e28ebfa3"
model.summary()
# + colab={} colab_type="code" id="lV0s0ueqs0zs"
model.compile(optimizer=keras.optimizers.Adam(),
loss = keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/", "height": 714} colab_type="code" id="GswNMP2gqCbz" outputId="40cfcd9d-573b-4cd3-e7f9-45961dec97d9"
import time
start_time = time.time()
model.fit(train_x, train_y, batch_size = 32, epochs = 20)
print("--- %s seconds ---" % (time.time() - start_time))
# + [markdown] colab_type="text" id="JRhUyuuFNdL3"
# ## model_new has the same structure and weights as model, except it is stateful - so its hidden state is not reset after each call of the predict function. This allows making text predictions of arbitrary length, updating the hidden state word by word
# + colab={} colab_type="code" id="_pBsutK5u0gD"
model_new = keras.Sequential()
model_new.add(Embedding(len(vocab), 30, batch_input_shape = (1, 1)))
model_new.add(LSTM(64, return_sequences = True, stateful = True))
model_new.add(LSTM(128, return_sequences = False, stateful = True))
model_new.add(Dense(len(vocab), activation = "softmax"))
# + colab={} colab_type="code" id="4tBFqrfdvUdh"
model_new.set_weights(model.get_weights())
# + [markdown] colab_type="text" id="vS3863yANisa"
# ## Function predict takes an input string and the number of words to predict as parameters and produces the next words using the trained model. It chooses the most probable word at each step.
# + colab={} colab_type="code" id="97fgSC8Ds1qo"
def predict(s, numWords):
s_return = s
s_ind = text_to_index(s)
model_new.reset_states()
for i in range(s_ind.shape[0]):
pred = model_new.predict_on_batch(np.array(s_ind[i]).reshape(1,1))
for i in range(numWords):
next_word_ind = np.argmax(pred)
s_return += " " + index_to_word[next_word_ind]
pred = model_new.predict_on_batch(np.array(next_word_ind).reshape(1,1))
return s_return
# + [markdown] colab_type="text" id="I6fifa_ANmnm"
# ## Function predictRandom samples predicted words randomly according to the computed probabilities. The conf parameter controls the certainty of the predictions: a large conf turns the process into argmax, while a conf close to zero is equivalent to choosing words with uniform probability.
# + colab={} colab_type="code" id="eLi6HzyG8wq-"
def predictRandom(s, numWords, conf = 1):
s_return = s
s_ind = text_to_index(s)
model_new.reset_states()
for i in range(s_ind.shape[0]):
pred = model_new.predict_on_batch(np.array(s_ind[i]).reshape(1,1))
for i in range(numWords):
pred_new = np.power(pred[0], conf)
pred_new = pred_new / np.sum(pred_new)
next_word_ind = np.random.choice(np.arange(len(vocab)), p = pred_new)
s_return += " " + index_to_word[next_word_ind]
pred = model_new.predict_on_batch(np.array(next_word_ind).reshape(1,1))
return s_return
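# The effect of conf can be seen on a small made-up distribution (toy_pred below is not an output of the trained model):

```python
import numpy as np

toy_pred = np.array([0.1, 0.2, 0.6, 0.1])  # made-up next-word probabilities

def reweight(p, conf):
    # the same transform predictRandom applies before sampling:
    # raise to the power conf, then renormalize
    q = np.power(p, conf)
    return q / q.sum()

sharp = reweight(toy_pred, 10)    # large conf: mass concentrates on the argmax
flat = reweight(toy_pred, 0.01)   # conf close to zero: almost uniform
```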
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="gVlqGrO0xzbU" outputId="0153e855-1bb4-4e8d-b907-16cbb5db0936"
predictRandom("Sense of life be", 15, 2)
# + [markdown] colab_type="text" id="CBJGqQSINs9x"
# ## Here we visualize the learned 30-dimensional embeddings with the TSNE procedure, which fits a 2-dimensional manifold and gives us a 2-dimensional representation of the embeddings that can be plotted.
# + colab={} colab_type="code" id="pl7u9X6px3OM"
from sklearn.manifold import TSNE
# + colab={} colab_type="code" id="JIxk5pz4Awc7"
embeddings = model.get_weights()[0]
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="QR5EXApmBKSW" outputId="2cc04c8d-80c4-476c-aa9d-66d007d3ec0e"
embeddings.shape
# + colab={} colab_type="code" id="Bd8hzcVRBPqC"
tsne = TSNE()
# + colab={} colab_type="code" id="qm2JoAW3BTdd"
plane_embs = tsne.fit_transform(embeddings)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="hUWSKBNlB5yn" outputId="e4b4ab82-2790-44ec-d22e-5a082683b39b"
plane_embs.shape
# + [markdown] colab_type="text" id="BXWqAU8POA5s"
# ## Normalizing embeddings into the [0,1] interval
# + colab={} colab_type="code" id="ODo5Ra7_E2eC"
plane_embs[:,0] = (plane_embs[:, 0] + 60)/120
plane_embs[:,1] = (plane_embs[:, 1] + 60)/120
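# The hard-coded offset of 60 and divisor of 120 assume the range of this particular TSNE run; a data-driven min-max scaling works for any range. A sketch on a made-up stand-in array (pts, not the notebook's plane_embs):

```python
import numpy as np

pts = np.array([[-55.0, 10.0], [30.0, -40.0], [0.0, 58.0]])  # stand-in for raw TSNE output

# scale each coordinate into [0, 1] using its observed min and max
pts = (pts - pts.min(axis=0)) / (pts.max(axis=0) - pts.min(axis=0))
```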
# + colab={} colab_type="code" id="x8vlSUWnOGRz"
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/", "height": 364} colab_type="code" id="4duN33PnOIcE" outputId="2db6c4af-eb7c-403b-c1c7-2669748b8674"
plt.scatter(plane_embs[:,0], plane_embs[:,1])
# + [markdown] colab_type="text" id="aR8u0lKwONPJ"
# ## Plotting 2-dimensional representations of words on the plane
# + colab={} colab_type="code" id="y-m_giHVOKXV"
import pylab
# + colab={"base_uri": "https://localhost:8080/", "height": 266} colab_type="code" id="IvsWjn1xOQoV" outputId="66ae8dc6-668f-4b6b-851c-30f6c8676293"
pylab.figure(figsize = (100,100))
for i in range(plane_embs.shape[0]):
#print(i)
#print(plane_embs[i])
    pylab.annotate(index_to_word[i], (plane_embs[i][0], plane_embs[i][1]))
pylab.savefig("emb.jpg")
| Text Generator/Text_Generator (CPU).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_moead:
# -
# ## MOEA/D
# ### Example
# + pycharm={"is_executing": false, "name": "#%%\n"}
from jmetal.algorithm.multiobjective.moead import MOEAD
from jmetal.operator import PolynomialMutation, DifferentialEvolutionCrossover
from jmetal.problem import LZ09_F2
from jmetal.util.aggregative_function import Tschebycheff
from jmetal.util.termination_criterion import StoppingByEvaluations
problem = LZ09_F2()
max_evaluations = 150000
algorithm = MOEAD(
problem=problem,
population_size=300,
crossover=DifferentialEvolutionCrossover(CR=1.0, F=0.5, K=0.5),
mutation=PolynomialMutation(probability=1.0 / problem.number_of_variables, distribution_index=20),
aggregative_function=Tschebycheff(dimension=problem.number_of_objectives),
neighbor_size=20,
neighbourhood_selection_probability=0.9,
max_number_of_replaced_solutions=2,
weight_files_path='resources/MOEAD_weights',
termination_criterion=StoppingByEvaluations(max=max_evaluations)
)
algorithm.run()
solutions = algorithm.get_result()
# -
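# For intuition, the Tschebycheff aggregative function used above scalarizes an objective vector f against a weight vector lam and the ideal point z*. A minimal sketch with made-up numbers (not jMetalPy's actual implementation):

```python
def tchebycheff(f, lam, z_star):
    # g(x | lam, z*) = max_i lam_i * |f_i(x) - z*_i|
    return max(l * abs(fi - zi) for fi, l, zi in zip(f, lam, z_star))

value = tchebycheff([0.6, 0.2], [0.5, 0.5], [0.0, 0.0])
```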
# We can now visualize the Pareto front approximation:
# + pycharm={"is_executing": false, "name": "#%%\n"}
from jmetal.lab.visualization.plotting import Plot
from jmetal.util.solution import get_non_dominated_solutions
front = get_non_dominated_solutions(solutions)
plot_front = Plot(plot_title='Pareto front approximation', axis_labels=['x', 'y'])
plot_front.plot(front, label='MOEAD-LZ09_F2')
# -
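# get_non_dominated_solutions keeps only the solutions not dominated by any other. For intuition, here is a minimal pure-Python version over objective tuples (minimization assumed in every objective; not jMetalPy's implementation):

```python
def non_dominated(points):
    # a dominates b: no worse in every objective, strictly better in at least one
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = non_dominated([(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)])
```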
# ### API
# + pycharm={"is_executing": false, "name": "#%% raw\n"} raw_mimetype="text/restructuredtext" active=""
# .. autoclass:: jmetal.algorithm.multiobjective.moead.MOEAD
# :members:
# :undoc-members:
# :show-inheritance:
#
| docs/source/api/algorithm/multiobjective/eas/moead.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .fs
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: F#
// language: fsharp
// name: ifsharp
// ---
// (*** hide ***)
// +
#I "../../bin/net45"
#load "Deedle.fsx"
#I "../../packages/MathNet.Numerics/lib/net40"
#load "../../packages/FSharp.Charting/lib/net45/FSharp.Charting.fsx"
open System
open FSharp.Data
open Deedle
open FSharp.Charting
let root = __SOURCE_DIRECTORY__ + "/data/"
// -
// (**
// Working with series and time series data in F#
// ==============================================
//
// In this section, we look at F# data frame library features that are useful when working
// with time series data or, more generally, any ordered series. Although we mainly look at
// operations on the `Series` type, many of the operations can be applied to data frame `Frame`
// containing multiple series. Furthermore, data frame provides an elegant way for aligning and
// joining series.
//
// You can also get this page as an [F# script file](https://github.com/fslaborg/Deedle/blob/master/docs/content/series.fsx)
// from GitHub and run the samples interactively.
//
// Generating input data
// ---------------------
//
// For the purpose of this tutorial, we'll need some input data. For simplicitly, we use the
// following function which generates random prices using the geometric Brownian motion.
// The code is adapted from the [financial tutorial on Try F#](http://www.tryfsharp.org/Learn/financial-computing#simulating-and-analyzing).
//
// *)
// +
// Use Math.NET for probability distributions
#r "MathNet.Numerics.dll"
open MathNet.Numerics.Distributions
/// Generates price using geometric Brownian motion
/// - 'seed' specifies the seed for random number generator
/// - 'drift' and 'volatility' set properties of the price movement
/// - 'initial' and 'start' specify the initial price and date
/// - 'span' specifies time span between individual observations
/// - 'count' is the number of required values to generate
let randomPrice seed drift volatility initial start span count =
// +
(*[omit:(Implementation omitted)]*)
let dist = Normal(0.0, 1.0, RandomSource=Random(seed))
let dt = (span:TimeSpan).TotalDays / 250.0
let driftExp = (drift - 0.5 * pown volatility 2) * dt
let randExp = volatility * (sqrt dt)
((start:DateTimeOffset), initial) |> Seq.unfold (fun (dt, price) ->
let price = price * exp (driftExp + randExp * dist.Sample())
Some((dt, price), (dt + span, price))) |> Seq.take count
// 12:00 AM today, in current time zone
let today = DateTimeOffset(DateTime.Today)
let stock1 = randomPrice 1 0.1 3.0 20.0 today
let stock2 = randomPrice 2 0.2 1.5 22.0 today
(**
The implementation of the function is not particularly important for the purpose of this
page, but you can find it in the [script file with full source](https://github.com/fslaborg/Deedle/blob/master/docs/content/series.fsx).
Once we have the function, we define a date `today` (representing today's midnight) and
two helper functions that set basic properties for the `randomPrice` function.
To get random prices, we now only need to call `stock1` or `stock2` with `TimeSpan` and
the required number of prices:
*)
(*** define-output: stocks ***)
Chart.Combine
[ stock1 (TimeSpan(0, 1, 0)) 1000 |> Chart.FastLine
stock2 (TimeSpan(0, 1, 0)) 1000 |> Chart.FastLine ]
(**
The above snippet generates 1,000 prices at one-minute intervals and plots them using the
[F# Charting library](https://github.com/fsharp/FSharp.Charting). When you run the code
and tweak the chart look, you should see something like this:
*)
(*** include-it: stocks ***)
(**
<a name="alignment"></a>
Data alignment and zipping
--------------------------
One of the key features of the data frame library for working with time series data is
_automatic alignment_ based on the keys. When we have multiple time series with date
as the key (here, we use `DateTimeOffset`, but any type of date will do), we can combine
multiple series and align them automatically to specified date keys.
To demonstrate this feature, we generate random prices in 60 minute, 30 minute and
65 minute intervals:
*)
let s1 = stock1 (TimeSpan(1, 0, 0)) 6 |> series
// [fsi:val s1 : Series<DateTimeOffset,float> =]
// [fsi: series [ 12:00:00 AM => 20.76; 1:00:00 AM => 21.11; 2:00:00 AM => 22.51 ]
// [fsi: 3:00:00 AM => 23.88; 4:00:00 AM => 23.23; 5:00:00 AM => 22.68 ] ]
let s2 = stock2 (TimeSpan(0, 30, 0)) 12 |> series
// [fsi:val s2 : Series<DateTimeOffset,float> =]
// [fsi: series [ 12:00:00 AM => 21.61; 12:30:00 AM => 21.64; 1:00:00 AM => 21.86 ]
// [fsi: 1:30:00 AM => 22.22; 2:00:00 AM => 22.35; 2:30:00 AM => 22.76 ]
// [fsi: 3:00:00 AM => 22.68; 3:30:00 AM => 22.64; 4:00:00 AM => 22.90 ]
// [fsi: 4:30:00 AM => 23.40; 5:00:00 AM => 23.33; 5:30:00 AM => 23.43] ]
let s3 = stock1 (TimeSpan(1, 5, 0)) 6 |> series
// [fsi:val s3 : Series<DateTimeOffset,float> =]
// [fsi: series [ 12:00:00 AM => 21.37; 1:05:00 AM => 22.73; 2:10:00 AM => 22.08 ]
// [fsi: 3:15:00 AM => 23.92; 4:20:00 AM => 22.72; 5:25:00 AM => 22.79 ]
(**
### Zipping time series
Let's first look at operations that are available on the `Series<K, V>` type. A series
exposes `Zip` operation that can combine multiple series into a single series of pairs.
This is not as convenient as working with data frames (which we'll see later), but it
is useful if you only need to work with one or two columns without missing values:
*)
// Match values from right series to keys of the left one
// (this creates series with no missing values)
s1.Zip(s2, JoinKind.Left)
// [fsi:val it : Series<DateTimeOffset,float opt * float opt>]
// [fsi: 12:00:00 AM -> (21.32, 21.61) ]
// [fsi: 1:00:00 AM -> (22.62, 21.86) ]
// [fsi: 2:00:00 AM -> (22.00, 22.35) ]
// [fsi: (...)]
// Match values from the left series to keys of the right one
// (right has higher resolution, so half of left values are missing)
s1.Zip(s2, JoinKind.Right)
// [fsi:val it : Series<DateTimeOffset,float opt * float opt>]
// [fsi: 12:00:00 AM -> (21.32, 21.61) ]
// [fsi: 12:30:00 AM -> (<missing>, 21.64) ]
// [fsi: 1:00:00 AM -> (22.62, 21.86) ]
// [fsi: (...)]
// Use left series key and find the nearest previous
// (smaller) value from the right series
s1.Zip(s2, JoinKind.Left, Lookup.ExactOrSmaller)
// [fsi:val it : Series<DateTimeOffset,float opt * float opt>]
// [fsi: 12:00:00 AM -04:00 -> (21.32, 21.61) ]
// [fsi: 1:00:00 AM -04:00 -> (22.62, 21.86) ]
// [fsi: 2:00:00 AM -04:00 -> (22.00, 22.35) ]
// [fsi: (...)]
(**
Using `Zip` on series is somewhat complicated. The result is a series of tuples, but each
component of the tuple may be missing. To represent this, the library uses the `T opt` type
(a type alias for `OptionalValue<T>`). This is not necessary when we use data frame to
work with multiple columns.
### Joining data frames
When we store data in data frames, we do not need to use tuples to represent combined values.
Instead, we can simply use data frame with multiple columns. To see how this works, let's first
create three data frames containing the three series from the previous section:
*)
// Contains value for each hour
let f1 = Frame.ofColumns ["S1" => s1]
// Contains value every 30 minutes
let f2 = Frame.ofColumns ["S2" => s2]
// Contains values with 65 minute offsets
let f3 = Frame.ofColumns ["S3" => s3]
(**
Similarly to `Series<K, V>`, the type `Frame<R, C>` has an instance method `Join` that can be
used for joining (for unordered) or aligning (for ordered) data. The same operation is also
exposed as `Frame.join` and `Frame.joinAlign` functions, but it is usually more convenient to use
the member syntax in this case:
*)
// Union keys from both frames and align corresponding values
f1.Join(f2, JoinKind.Outer)
// [fsi:val it : Frame<DateTimeOffset,string> =]
// [fsi: S1 S2 ]
// [fsi: 12:00:00 AM -> 21.32 21.61 ]
// [fsi: 12:30:00 AM -> <missing> 21.64 ]
// [fsi: 1:00:00 AM -> 22.62 21.86 ]
// [fsi: (...)]
// Take only keys where both frames contain all values
// (We get only a single row, because 'f3' is off by 5 minutes)
f2.Join(f3, JoinKind.Inner)
// [fsi:val it : Frame<DateTimeOffset,string> =]
// [fsi: S2 S3 ]
// [fsi: 12:00:00 AM -> 21.61 21.37 ]
// Take keys from the left frame and find corresponding values
// from the right frame, or value for a nearest smaller date
// ($21.37 is repeated for all values between 12:00 and 1:05)
f2.Join(f3, JoinKind.Left, Lookup.ExactOrSmaller)
// [fsi:val it : Frame<DateTimeOffset,string> =]
// [fsi: S2 S3 ]
// [fsi: 12:00:00 AM -> 21.61 21.37 ]
// [fsi: 12:30:00 AM -> 21.64 21.37 ]
// [fsi: 1:00:00 AM -> 21.86 21.37 ]
// [fsi: 1:30:00 AM -> 22.22 22.73 ]
// [fsi: (...)]
// If we perform left join as previously, but specify exact
// matching, then most of the values are missing
f2.Join(f3, JoinKind.Left, Lookup.Exact)
// [fsi:val it : Frame<DateTimeOffset,string> =]
// [fsi: S2 S3 ]
// [fsi: 12:00:00 AM -> 21.61 21.37]
// [fsi: 12:30:00 AM -> 21.64 <missing> ]
// [fsi: 1:00:00 AM -> 21.86 <missing> ]
// [fsi: (...)]
// Equivalent to line 2, using function syntax
Frame.join JoinKind.Outer f1 f2
// Equivalent to line 20, using function syntax
Frame.joinAlign JoinKind.Left Lookup.ExactOrSmaller f1 f2
(**
The automatic alignment is extremely useful when you have multiple data series with different
offsets between individual observations. You can choose your set of keys (dates) and then easily
align other data to match the keys. Another alternative to using `Join` explicitly is to create
a new frame with just keys that you are interested in (using `Frame.ofRowKeys`) and then use
the `AddSeries` member (or the `df?New <- s` syntax) to add series. This will automatically left
join the new series to match the current row keys.
When aligning data, you may or may not want to create data frame with missing values. If your
observations do not happen at exact time, then using `Lookup.ExactOrSmaller` or `Lookup.ExactOrGreater`
is a great way to avoid mismatch.
If you have observations that happen e.g. at two times faster rate (one series is hourly and
another is half-hourly), then you can create data frame with missing values using `Lookup.Exact`
(the default value) and then handle missing values explicitly (as [discussed here](frame.html#missing)).
<a name="windowing"></a>
Windowing, chunking and pairwise
--------------------------------
Windowing and chunking are two operations on ordered series that allow aggregating
the values of series into groups. Both of these operations work on consecutive elements,
which contrast with [grouping](tutorial.html#grouping) that does not use order.
### Sliding windows
Sliding window creates windows of certain size (or certain condition). The window
"slides" over the input series and provides a view on a part of the series. The
key thing is that a single element will typically appear in multiple windows.
*)
// Create input series with 6 observations
let lf = stock1 (TimeSpan(0, 1, 0)) 6 |> series
// Create series of series representing individual windows
lf |> Series.window 4
// Aggregate each window using 'Stats.mean'
lf |> Series.windowInto 4 Stats.mean
// Get first value in each window
lf |> Series.windowInto 4 Series.firstValue
(**
The functions used above create window of size 4 that moves from the left to right.
Given input `[1,2,3,4,5,6]`, this produces the following three windows:
`[1,2,3,4]`, `[2,3,4,5]` and `[3,4,5,6]`. By default, the `Series.window` function
automatically chooses the key of the last element of the window as the key for
the whole window (we'll see how to change this soon):
*)
// Calculate means for sliding windows
let lfm1 = lf |> Series.windowInto 4 Stats.mean
// Construct dataframe to show aligned results
Frame.ofColumns [ "Orig" => lf; "Means" => lfm1 ]
// [fsi:val it : Frame<DateTimeOffset,string> =]
// [fsi: Means Orig ]
// [fsi: 12:00:00 AM -> <missing> 20.16]
// [fsi: 12:01:00 AM -> <missing> 20.32]
// [fsi: 12:02:00 AM -> <missing> 20.25]
// [fsi: 12:03:00 AM -> 20.30 20.45]
// [fsi: 12:04:00 AM -> 20.34 20.32]
// [fsi: 12:05:00 AM -> 20.34 20.33]
(**
What if we want to avoid creating `<missing>` values? One approach is to
specify that we want to generate windows of smaller sizes at the beginning
or at the end of the series. This way, we get _incomplete_ windows that look like
`[1]`, `[1,2]`, `[1,2,3]` followed by the three _complete_ windows shown above:
*)
let lfm2 =
// Create sliding windows with incomplete windows at the beginning
lf |> Series.windowSizeInto (4, Boundary.AtBeginning) (fun ds ->
Stats.mean ds.Data)
Frame.ofColumns [ "Orig" => lf; "Means" => lfm2 ]
// [fsi:val it : Frame<DateTimeOffset,string> =]
// [fsi: Means Orig ]
// [fsi: 12:00:00 AM -> 20.16 20.16]
// [fsi: 12:01:00 AM -> 20.24 20.32]
// [fsi: 12:02:00 AM -> 20.24 20.25]
// [fsi: 12:03:00 AM -> 20.30 20.45]
// [fsi: 12:04:00 AM -> 20.34 20.32]
// [fsi: 12:05:00 AM -> 20.34 20.33]
(**
As you can see, the two values in the first row are equal, because the first
`Means` value is just the mean of a singleton series.
When you specify `Boundary.AtBeginning` (this example) or `Boundary.Skip`
(default value used in the previous example), the function uses the last key
of the window as the key of the aggregated value. When you specify
`Boundary.AtEnding`, the first key is used, so the values can be nicely
aligned with the original values. When you want to specify a custom key selector,
you can use a more general function `Series.aggregate`.
In the previous sample, the code that performs aggregation is no longer
just a simple function like `Stats.mean`, but a lambda that takes `ds`,
which is of type `DataSegment<T>`. This type informs us whether the window
is complete or not. For example:
*)
// Simple series with characters
let st = Series.ofValues [ 'a' .. 'e' ]
st |> Series.windowSizeInto (3, Boundary.AtEnding) (function
| DataSegment.Complete(ser) ->
// Return complete windows as uppercase strings
String(ser |> Series.values |> Array.ofSeq).ToUpper()
| DataSegment.Incomplete(ser) ->
// Return incomplete windows as padded lowercase strings
String(ser |> Series.values |> Array.ofSeq).PadRight(3, '-') )
// [fsi:val it : Series<int,string> =]
// [fsi: 0 -> ABC ]
// [fsi: 1 -> BCD ]
// [fsi: 2 -> CDE ]
// [fsi: 3 -> de- ]
// [fsi: 4 -> e-- ]
(**
### Window size conditions
The previous examples generated windows of fixed size. However, there are two other
options for specifying when a window ends.
- The first option is to specify the maximal
_distance_ between the first and the last key
- The second option is to specify a function that is called with the first
and the last key; a window ends when the function returns false.
The two functions are `Series.windowDist` and `Series.windowWhile` (together
with versions suffixed with `Into` that call a provided function to aggregate
each window):
*)
// Generate prices for each hour over 30 days
let hourly = stock1 (TimeSpan(1, 0, 0)) (30*24) |> series
// Generate windows of size 1 day (if the source was
// irregular, windows would have varying size)
hourly |> Series.windowDist (TimeSpan(24, 0, 0))
// Generate windows such that date in each window is the same
// (windows start every hour and end at the end of the day)
hourly |> Series.windowWhile (fun d1 d2 -> d1.Date = d2.Date)
(**
### Chunking series
Chunking is similar to windowing, but it creates non-overlapping chunks,
rather than (overlapping) sliding windows. The size of chunk can be specified
in the same three ways as for sliding windows (fixed size, distance on keys
and condition):
*)
// Generate per-second observations over 10 minutes
let hf = stock1 (TimeSpan(0, 0, 1)) 600 |> series
// Create 10 second chunks with (possible) incomplete
// chunk of smaller size at the end.
hf |> Series.chunkSize (10, Boundary.AtEnding)
// Create 10 second chunks using time span and get
// the first observation for each chunk (downsample)
hf |> Series.chunkDistInto (TimeSpan(0, 0, 10)) Series.firstValue
// Create chunks where hh:mm component is the same
// (containing observations for all seconds in the minute)
hf |> Series.chunkWhile (fun k1 k2 ->
(k1.Hour, k1.Minute) = (k2.Hour, k2.Minute))
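(**
Chunking can be sketched in plain Python as well (an illustrative analogue of
`Series.chunkSize`, not Deedle itself). Unlike the sliding windows above, each
element lands in exactly one chunk:

```python
def chunk_size(xs, size):
    """Non-overlapping chunks; the final chunk may be incomplete (Boundary.AtEnding)."""
    return [xs[i:i + size] for i in range(0, len(xs), size)]

print(chunk_size([1, 2, 3, 4, 5, 6, 7], 3))
# [[1, 2, 3], [4, 5, 6], [7]]
```
*)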
(**
The above examples use various chunking functions in a very similar way, mainly
because the randomly generated input is very uniform. However, they all behave
differently for inputs with non-uniform keys.
Using `chunkSize` means that the chunks have the same size, but may correspond
to time series of different time spans. Using `chunkDist` guarantees that there
is a maximal time span over each chunk, but it does not guarantee when a chunk
starts. That is something which can be achieved using `chunkWhile`.
Finally, all of the aggregations discussed so far are just special cases of
`Series.aggregate` which takes a discriminated union that specifies the kind
of aggregation ([see API reference](reference/fsharp-dataframe-aggregation-1.html)).
However, in practice it is more convenient to use the helpers presented here -
in some rare cases, you might need to use `Series.aggregate` as it provides
a few other options.
### Pairwise
A special form of windowing is building a series of pairs containing a current
and previous value from the input series (in other words, the key for each pair
is the key of the later element). For example:
*)
// Create a series of pairs from earlier 'hf' input
hf |> Series.pairwise
// Calculate differences between the current and previous values
hf |> Series.pairwiseWith (fun k (v1, v2) -> v2 - v1)
(**
The `pairwise` operation always returns a series that has no value for
the first key in the input series. If you want more complex behavior, you
will usually need to replace `pairwise` with `window`. For example, you might
want to get a series that contains the first value as the first element,
followed by differences. This has the nice property that summing rows,
starting from the first one gives you the current price:
*)
// Sliding window with incomplete segment at the beginning
hf |> Series.windowSizeInto (2, Boundary.AtBeginning) (function
// Return the first value for the first segment
| DataSegment.Incomplete s -> s.GetAt(0)
// Calculate difference for all later segments
| DataSegment.Complete s -> s.GetAt(1) - s.GetAt(0))
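(**
The "first value followed by differences" construction has the summing property
described above. A plain-Python sketch (the numbers are made up for
illustration) shows that a running sum of the result reproduces the original
prices:

```python
from itertools import accumulate

prices = [20.0, 21.5, 21.0, 22.5]
# First value unchanged, then differences between consecutive prices
deltas = [prices[0]] + [b - a for a, b in zip(prices, prices[1:])]
# Summing rows, starting from the first one, gives back the current price
recovered = list(accumulate(deltas))
print(recovered)
# [20.0, 21.5, 21.0, 22.5]
```
*)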
(**
<a name="sampling"></a>
Sampling and resampling time series
-----------------------------------
Given a time series with high-frequency prices, sampling or resampling makes
it possible to get time series with representative values at lower frequency.
The library uses the following terminology:
- **Lookup** means that we find values at specified key; if a key is not
available, we can look for value associated with the nearest smaller or
the nearest greater key.
 - **Resampling** means that we aggregate values into chunks based
on a specified collection of keys (e.g. explicitly provided times), or
based on some relation between keys (e.g. date times having the same date).
- **Uniform resampling** is similar to resampling, but we specify keys by
providing functions that generate a uniform sequence of keys (e.g. days),
   the operation also fills in values for days that have no corresponding
observations in the input sequence.
Finally, the library also provides a few helper functions that are specifically
designed for series with keys of types `DateTime` and `DateTimeOffset`.
### Lookup
Given a series `hf`, you can get a value at a specified key using `hf.Get(key)`
or using `hf |> Series.get key`. However, it is also possible to find values
for a larger number of keys at once. The instance member for doing this
is `hf.GetItems(..)`. Moreover, both `Get` and `GetItems` take an optional
parameter that specifies the behavior when the exact key is not found.
Using the function syntax, you can use `Series.getAll` for exact key
lookup and `Series.lookupAll` when you want more flexible lookup:
*)
// Generate a bit less than 24 hours of data with 13.7sec offsets
let mf = stock1 (TimeSpan.FromSeconds(13.7)) 6300 |> series
// Generate keys for all minutes in 24 hours
let keys = [ for m in 0.0 .. 24.0*60.0-1.0 -> today.AddMinutes(m) ]
// Find value for a given key, or nearest greater key with value
mf |> Series.lookupAll keys Lookup.ExactOrGreater
// [fsi:val it : Series<DateTimeOffset,float> =]
// [fsi: 12:00:00 AM -> 20.07 ]
// [fsi: 12:01:00 AM -> 19.98 ]
// [fsi: ... -> ... ]
// [fsi: 11:58:00 PM -> 19.03 ]
// [fsi: 11:59:00 PM -> <missing> ]
// Find value for nearest smaller key
// (This returns value for 11:59:00 PM as well)
mf |> Series.lookupAll keys Lookup.ExactOrSmaller
// Find values for exact key
// (This only works for the first key)
mf |> Series.lookupAll keys Lookup.Exact
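(**
The `Lookup.ExactOrSmaller` behavior can be sketched in plain Python with a
binary search over sorted keys (an illustrative analogue; the function name is
ours, not Deedle's):

```python
import bisect

def lookup_exact_or_smaller(keys, values, k):
    """Value at key k, or at the nearest smaller key; None if no such key exists."""
    i = bisect.bisect_right(keys, k)
    return values[i - 1] if i > 0 else None

keys, vals = [0, 10, 20, 30], ['a', 'b', 'c', 'd']
print(lookup_exact_or_smaller(keys, vals, 25))  # 'c' (nearest smaller key is 20)
print(lookup_exact_or_smaller(keys, vals, 20))  # 'c' (exact match)
print(lookup_exact_or_smaller(keys, vals, -5))  # None (no smaller key)
```
*)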
(**
Lookup operations only return one value for each key, so they are useful for
quick sampling of large (or high-frequency) data. When we want to calculate
a new value based on multiple values, we need to use resampling.
### Resampling
Series supports two kinds of resamplings. The first kind is similar to lookup
in that we have to explicitly specify keys. The difference is that resampling
does not find just the nearest key, but all smaller or greater keys. For example:
*)
// For each key, collect values for greater keys until the
// next one (chunk for 11:59:00 PM is empty)
mf |> Series.resample keys Direction.Forward
// For each key, collect values for smaller keys until the
// previous one (the first chunk will be singleton series)
mf |> Series.resample keys Direction.Backward
// Aggregate each chunk of preceding values using mean
mf |> Series.resampleInto keys Direction.Backward
(fun k s -> Stats.mean s)
// Resampling is also available via the member syntax
mf.Resample(keys, Direction.Forward)
(**
The second kind of resampling is based on a projection from existing keys in
the series. The operation then collects chunks such that the projection returns
equal keys. This is very similar to `Series.groupBy`, but resampling assumes
that the projection preserves the ordering of the keys, and so it only aggregates
consecutive keys.
The typical scenario is when you have time series with date time information
(here `DateTimeOffset`) and want to get information for each day (we use
`DateTime` with empty time to represent dates):
*)
// Generate 2.5 months of data in 1.7 hour offsets
let ds = stock1 (TimeSpan.FromHours(1.7)) 1000 |> series
// Sample by day (of type 'DateTime')
ds |> Series.resampleEquiv (fun d -> d.Date)
// Sample by day (of type 'DateTime')
ds.ResampleEquivalence(fun d -> d.Date)
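(**
Resampling by a key projection can be sketched with Python's
`itertools.groupby`, which likewise groups only *consecutive* equal keys (an
illustrative analogue; the sample timestamps are made up):

```python
from itertools import groupby
from datetime import datetime, timedelta

# Ordered observation times, 17 hours apart (spanning several dates)
start = datetime(2013, 10, 3, 12, 0)
keys = [start + timedelta(hours=17 * i) for i in range(5)]
# Consecutive keys whose projection (the date) is equal end up in one chunk
chunks = {day: list(grp) for day, grp in groupby(keys, key=lambda k: k.date())}
print([len(v) for v in chunks.values()])
# [1, 2, 1, 1] - one chunk per date, two observations fall on 10/4
```
*)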
(**
The same operation can be easily implemented using `Series.chunkWhile`, but as
it is often used in the context of sampling, it is included in the library as a
primitive. Moreover, we'll see that it is closely related to uniform resampling.
Note that the resulting series has different type of keys than the source. The
source has keys `DateTimeOffset` (representing date with time) while the resulting
keys are of the type returned by the projection (here, `DateTime` representing just
dates).
### Uniform resampling
In the previous section, we looked at `resampleEquiv`, which is useful if you want
to sample time series by keys with "lower resolution" - for example, sample date time
observations by date. However, the function discussed in the previous section only
generates values for which there are keys in the input sequence - if there is no
observation for an entire day, then the day will not be included in the result.
If you want to create sampling that assigns value to each key in the range specified
by the input sequence, then you can use _uniform resampling_.
The idea is that uniform resampling applies the key projection to the smallest and
greatest key of the input (e.g. gets date of the first and last observation) and then
it generates all keys in the projected space (e.g. all dates). Then it picks the
best value for each of the generated keys.
*)
// Create input data with non-uniformly distributed keys
// (1 value for 10/3, three for 10/4 and two for 10/6)
let days =
[ "10/3/2013 12:00:00"; "10/4/2013 15:00:00"
"10/4/2013 18:00:00"; "10/4/2013 19:00:00"
"10/6/2013 15:00:00"; "10/6/2013 21:00:00" ]
let nu =
stock1 (TimeSpan(24,0,0)) 10 |> series
|> Series.indexWith days |> Series.mapKeys DateTimeOffset.Parse
// Generate uniform resampling based on dates. Fill
// missing chunks with nearest smaller observations.
let sampled =
nu |> Series.resampleUniform Lookup.ExactOrSmaller
(fun dt -> dt.Date) (fun dt -> dt.AddDays(1.0))
// Same thing using the C#-friendly member syntax
// (Lookup.ExactOrSmaller is the default value)
nu.ResampleUniform((fun dt -> dt.Date), (fun dt -> dt.AddDays(1.0)))
// Turn into frame with multiple columns for each day
// (to format the result in a readable way)
sampled
|> Series.mapValues Series.indexOrdinally
|> Frame.ofRows
// [fsi:val it : Frame<DateTime,int> =]
// [fsi: 0 1 2 ]
// [fsi:10/3/2013 -> 21.45 <missing> <missing> ]
// [fsi:10/4/2013 -> 21.63 19.83 17.51]
// [fsi:10/5/2013 -> 17.51 <missing> <missing> ]
// [fsi:10/6/2013 -> 18.80 20.93 <missing> ]
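(**
The gap-filling part of uniform resampling amounts to a forward fill over the
generated keys. A plain-Python sketch (values taken from the example above):

```python
from datetime import date, timedelta

# Non-uniform daily observations; there is no value for 10/5
obs = {date(2013, 10, 3): 21.45, date(2013, 10, 4): 17.51, date(2013, 10, 6): 20.93}
filled, last = {}, None
d, end = min(obs), max(obs)
while d <= end:
    last = obs.get(d, last)  # Lookup.ExactOrSmaller: carry the nearest earlier value
    filled[d] = last
    d += timedelta(days=1)
print(filled[date(2013, 10, 5)])  # 17.51, carried forward from 10/4
```
*)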
(**
To perform the uniform resampling, we need to specify how to project (resampled) keys
from original keys (we return the `Date`), how to calculate the next key (add 1 day)
and how to fill missing values.
After performing the resampling, we turn the data into a data frame, so that we can
nicely see the results. The individual chunks have the actual observation times as keys,
so we replace those with plain integers (using `Series.indexOrdinally`). The result contains
a simple ordered row of observations for each day.
The important thing is that there is an observation for each day - even for 10/5/2013
which does not have any corresponding observations in the input. We call the resampling
function with `Lookup.ExactOrSmaller`, so the value 17.51 is picked from the last observation
of the previous day (`Lookup.ExactOrGreater` would pick 18.80 and `Lookup.Exact` would give
us an empty series for that date).
### Sampling time series
Perhaps the most common sampling operation that you might want to do is to sample time series
by a specified `TimeSpan`. Although this can be easily done by using some of the functions above,
the library provides helper functions exactly for this purpose:
*)
// Generate 1k observations with 1.7 hour offsets
let pr = stock1 (TimeSpan.FromHours(1.7)) 1000 |> series
// Sample at 2 hour intervals; 'Backward' specifies that
// we collect all previous values into a chunk.
pr |> Series.sampleTime (TimeSpan(2, 0, 0)) Direction.Backward
// Same thing using member syntax - 'Backward' is the default
pr.Sample(TimeSpan(2, 0, 0))
// Get the most recent value, sampled at 2 hour intervals
pr |> Series.sampleTimeInto
(TimeSpan(2, 0, 0)) Direction.Backward Series.lastValue
(**
<a name="stats"></a>
Calculations and statistics
---------------------------
In the final section of this tutorial, we look at writing some calculations over time series. Many of the
functions demonstrated here can be also used on unordered data frames and series.
### Shifting and differences
First of all, let's look at the functions we need to compare subsequent values in
the series. We already demonstrated how to do this using `Series.pairwise`. In many cases,
the same thing can be done using an operation that operates over the entire series.
The two useful functions here are:
 - `Series.diff` calculates the difference between the current and the n-_th_ previous element
- `Series.shift` shifts the values of a series by a specified offset
The following snippet illustrates how both functions work:
*)
// Generate sample data with 1.7 hour offsets
let sample = stock1 (TimeSpan.FromHours(1.7)) 6 |> series
// Calculates: new[i] = s[i] - s[i-1]
let diff1 = sample |> Series.diff 1
// Diff in the opposite direction
let diffM1 = sample |> Series.diff -1
// Shift series values by 1
let shift1 = sample |> Series.shift 1
// Align all results in a frame to see the results
let df =
[ "Shift +1" => shift1
"Diff +1" => diff1
"Diff" => sample - shift1
"Orig" => sample ] |> Frame.ofColumns
// [fsi:val it : Frame<DateTimeOffset,string> =]
// [fsi: Diff Diff +1 Orig Shift +1 ]
// [fsi: 12:00:00 AM -> <missing> <missing> 21.73 <missing> ]
// [fsi: 1:42:00 AM -> 1.73 1.73 23.47 21.73 ]
// [fsi: 3:24:00 AM -> -0.83 -0.83 22.63 23.47 ]
// [fsi: 5:06:00 AM -> 2.37 2.37 25.01 22.63 ]
// [fsi: 6:48:00 AM -> -1.57 -1.57 23.43 25.01 ]
// [fsi: 8:30:00 AM -> 0.09 0.09 23.52 23.43 ]
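(**
The semantics of `diff` and `shift` can be sketched over plain lists (an
illustrative analogue; `None` plays the role of `<missing>`):

```python
def shift(xs, n=1):
    """Move values forward by n positions, padding the start with None."""
    return [None] * n + xs[:-n]

def diff(xs, n=1):
    """xs[i] - xs[i-n]; None where no n-th previous element exists."""
    return [None] * n + [b - a for a, b in zip(xs, xs[n:])]

sample = [21.73, 23.47, 22.63, 25.01]
print([round(x, 2) if x is not None else None for x in diff(sample)])
# [None, 1.74, -0.84, 2.38]
```

As in the frame above, `diff(xs)` equals the element-wise difference between
the series and its shifted copy.
*)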
(**
In the above snippet, we first calculate the difference using the `Series.diff` function.
Then we also show how to do that using `Series.shift` and binary operator applied
to two series (`sample - shift1`). The following section provides more details.
So far, we also used the functional notation (e.g. `sample |> Series.diff 1`), but
all operations can be called using the member syntax - very often, this gives you
a shorter syntax. This is also shown in the next few snippets.
### Operators and functions
Time series also supports a large number of standard F# functions such as `log` and `abs`.
You can also use standard numerical operators to apply some operation to all elements
of the series.
Because series are indexed, we can also apply binary operators to two series. This
automatically aligns the series and then applies the operation on corresponding elements.
*)
// Subtract previous value from the current value
sample - sample.Shift(1)
// Calculate logarithm of such differences
log (sample - sample.Shift(1))
// Calculate square of differences
sample.Diff(1) ** 2.0
// Calculate average of value and two immediate neighbors
(sample.Shift(-1) + sample + sample.Shift(1)) / 3.0
// Get absolute value of differences
abs (sample - sample.Shift(1))
// Get absolute value of distance from the mean
abs (sample - (Stats.mean sample))
(**
The time series library provides a large number of functions that can be applied in this
way. These include trigonometric functions (`sin`, `cos`, ...), rounding functions
(`round`, `floor`, `ceil`), exponentials and logarithms (`exp`, `log`, `log10`) and more.
In general, whenever there is a built-in numerical F# function that can be used on
standard types, the time series library should support it too.
However, what can you do when you write a custom function to do some calculation and
want to apply it to all series elements? Let's have a look:
*)
// Truncate value to the interval [-1.0, +1.0]
let adjust v = min 1.0 (max (-1.0) v)
// Apply the adjustment to all values in the series
adjust $ sample.Diff(1)
// The $ operator is a shorthand for
sample.Diff(1) |> Series.mapValues adjust
(**
In general, the best way to apply custom functions to all values in a series is to
align the series (using either `Series.join` or `Series.joinAlign`) into a single series
containing tuples and then apply `Series.mapValues`. The library also provides the `$` operator
that simplifies the last step - `f $ s` applies the function `f` to all values of the series `s`.
### Data frame operations
Finally, many of the time series operations demonstrated above can be applied to entire
data frames as well. This is particularly useful if you have a data frame that contains multiple
aligned time series of similar structure (for example, if you have multiple stock prices or
open-high-low-close values for a given stock).
The following snippet is a quick overview of what you can do:
*)
/// Multiply all numeric columns by a given constant
df * 0.65
// Apply function to all columns in all series
let conv x = min x 20.0
df |> Frame.mapRowValues (fun os -> conv $ os.As<float>())
|> Frame.ofRows
// Sum each column and divide results by a constant
Stats.sum df / 6.0
// Divide sum by mean of each frame column
Stats.sum df / Stats.mean df
| notebooks/other/Deedle/series.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (ML)
# language: python
# name: ml
# ---
# # IMDb dataset
# +
# %matplotlib inline
import sys
import os
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
import torch
import pandas
import PIL.Image
sys.path.append("../../")
from experiments.datasets import IMDBLoader
# -
# ## Options
transform = False
# ## Preprocessing: Transform images and age info to numpy arrays (only need to do this once)
# +
test_fraction = 0.1
def process_images(filenames, ages, facescores1, facescores2, category="train", subfolders=True, basedir=f"../data/samples/imdb"):
filenames_out = []
ages_out = []
i = 0
for filename, age, fs1, fs2 in zip(filenames, ages, facescores1, facescores2):
        img = PIL.Image.open(f"{basedir}/raw/{filename}")
dims = np.array(img).shape
if len(dims) != 3 or dims[2] != 3 or dims[0] != dims[1] or dims[0] < 64: # Let's skip b/w and non-square images (the latter are often corrupted)
continue
if age < 18 or age > 80: # Let's limit it to this range
continue
if not np.isfinite(fs1):
continue
if np.isfinite(fs2):
continue
img = img.resize((64, 64), PIL.Image.ANTIALIAS)
folder = f"{category}/{(i // 1000):03d}" if subfolders else category
img_filename_out = f"{folder}/{category}_{i:05d}.png"
os.makedirs(f"{basedir}/{folder}", exist_ok=True)
img.save(f"{basedir}/{img_filename_out}")
filenames_out.append(img_filename_out)
ages_out.append(age)
i += 1
df_out = pandas.DataFrame({'age':ages_out, 'filename':filenames_out})
df_out.to_csv(f"{basedir}/{category}.csv")
return df_out
df = pandas.read_csv("../data/samples/imdb/raw.csv")
n = len(df)
n_test = int(round(n * test_fraction))
np.random.seed(81357)
idx_all = list(range(n))
np.random.shuffle(idx_all)
idx_train, idx_test = idx_all[n_test:], idx_all[:n_test]
if transform:
dfs = {}
for cat, idx in zip(["train", "test"], [idx_train, idx_test]):
paths = np.array(df["path"])[idx]
age = np.array(df["age"])[idx]
fs1 = np.array(df["facescore1"])[idx]
fs2 = np.array(df["facescore2"])[idx]
dfs[cat] = process_images(paths, age, fs1, fs2, cat, subfolders=(cat=="train"))
age_pdf, age_edges = np.histogram(df["age"], range=(17.5, 80.5), bins=63, density=True)
ages = (age_edges[1:] + age_edges[:-1])/2
# -
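# The shuffle-and-slice hold-out split used in the cell above can be sketched with
# only the standard library (an illustrative mirror of the numpy logic; the function
# name `split_indices` is ours, and the seed is reused from above):

```python
import random

def split_indices(n, test_fraction, seed=81357):
    """Shuffle indices 0..n-1 with a fixed seed and slice off the test set."""
    n_test = int(round(n * test_fraction))
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return idx[n_test:], idx[:n_test]

train_idx, test_idx = split_indices(100, 0.1)
print(len(train_idx), len(test_idx))  # 90 10
```

Every index lands in exactly one of the two sets, and the fixed seed makes the split reproducible across runs.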
# ## Age distribution
# +
age_pdf, _ = np.histogram(df["age"], range=(17.5, 80.5), bins=63, density=True)
print("age_vals = np.linspace(18, 80, 63)")
print("age_probs = np.array([" + ", ".join([str(x) for x in age_pdf]) + "])")
# -
np.std(df["age"]), np.mean(df["age"])
# +
fig = plt.figure(figsize=(5,5))
plt.hist(np.array(df["age"])[idx_train], range=(17.5, 80.5), bins=63, histtype="step", color="C0", label="Train", density=True)
plt.hist(np.array(df["age"])[idx_test], range=(17.5, 80.5), bins=63, histtype="step", color="C1", label="Test", density=True)
plt.xlim(16, 82)
plt.ylim(0., None)
plt.xlabel("Age")
plt.ylabel("Probability")
plt.tight_layout()
plt.show()
# -
# ## Test DataLoader
# +
loader = IMDBLoader()
data = loader.load_dataset(train=True, dataset_dir="../data/samples/imdb")
fig = plt.figure(figsize=(5*3., 4*3.2))
for i in range(20):
x, y = data[np.random.randint(len(data) - 1)]
y = loader.preprocess_params(y, inverse=True)[0]
x_ = np.transpose(np.array(x), [1,2,0]) / 256.
ax = plt.subplot(4, 5, i+1)
plt.imshow(x_)
plt.title(f"{y} years", fontsize=11.)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.tight_layout()
plt.show()
# -
loader.preprocess_params(loader.sample_from_prior(100), inverse=True)
loader.evaluate_log_prior(loader.preprocess_params([17, 18, 20, 30, 40, 50, 60, 70, 80, 81]))
| experiments/datasets/dataset_preparation/imdb_preparation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Beacon Home Study
import pandas as pd
from datetime import datetime
beacons_tested = [7, 20]
# +
beacon = pd.read_csv("../data/processed/beacon-wcwh_s20.csv", index_col="timestamp", parse_dates=True)
beacon = beacon[beacon["beacon"].isin(beacons_tested)]
beacon.drop(["beiwe","fitbit","redcap"],axis="columns",inplace=True)
beacon_restricted = pd.DataFrame()
for bb, loc in zip(beacons_tested,["bedroom","kitchen"]):
beacon_by_beacon = beacon[beacon["beacon"] == bb]
beacon_by_beacon["location"] = loc
beacon_by_beacon = beacon_by_beacon[datetime(2021,3,1):]
beacon_restricted = beacon_restricted.append(beacon_by_beacon)
beacon_restricted.head()
# -
beacon_restricted.to_csv("/Users/hagenfritz/Desktop/beacon-processed-home_study.csv",index=True)
| notebooks/5.0.3-hef-beacon_home_study.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PairGrids
#
# PairGrids are general-purpose grid plots that let you map different plot types to the rows and columns of a grid; this helps you create similar plots separated by categories.
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
iris = sns.load_dataset('iris')
iris.head()
# ## PairGrid
#
# PairGrid is a subplot grid for plotting pairwise relationships in a dataset.
# Just the Grid
sns.PairGrid(iris)
# Then you map to the grid
g = sns.PairGrid(iris)
g.map(plt.scatter)
# Change the plot types on the diagonal, upper and lower parts of the grid.
g = sns.PairGrid(iris)
g.map_diag(plt.hist)
g.map_upper(plt.scatter)
g.map_lower(sns.kdeplot)
# ## pairplot
#
# pairplot is a simpler version of PairGrid (you will use it quite often)
sns.pairplot(iris)
sns.pairplot(iris,hue='species',palette='rainbow')
# ## FacetGrid
#
# FacetGrid is the general way to create grids of plots based on a feature:
tips = sns.load_dataset('tips')
tips.head()
# Just the grid
g = sns.FacetGrid(tips, col="time", row="smoker")
g = sns.FacetGrid(tips, col="time", row="smoker")
g = g.map(plt.hist, "total_bill")
g = sns.FacetGrid(tips, col="time", row="smoker",hue='sex')
# Note how the arguments come after the call to plt.scatter
g = g.map(plt.scatter, "total_bill", "tip").add_legend()
# ## JointGrid
#
# JointGrid is the general version for jointplot()-style grids; for a quick example:
g = sns.JointGrid(x="total_bill", y="tip", data=tips)
g = sns.JointGrid(x="total_bill", y="tip", data=tips)
g = g.plot(sns.regplot, sns.distplot)
# Consult the documentation as needed for the grid types, but most of the time you will just use the simpler plots discussed earlier.
| Utils/Visu_dados/Seaborn/PairGrid.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# Import all necessary libraries.
import numpy as np
import pandas as pd
from IPython import display
import matplotlib.pyplot as plt
# %matplotlib inline
# Read all the datasets we've exported
# ## A look at the data
#
# **1.** Read all the data with the right separator using pandas.
calendar = pd.read_csv('calendar.csv', sep=',')
listings = pd.read_csv('listings.csv', sep=',')
reviews = pd.read_csv('reviews.csv', sep=',')
# **2.** Analyse the types of property in the AirBnB
(listings['property_type'].value_counts()/listings.shape[0]).plot(kind='bar')
# Most of the properties are apartments (70%) <br> Houses come second (15%), followed by the other property types
listings.columns
# **3.** adding month to reviews
# +
import datetime
def adding_month(value):
    """ adding the month
    to the dataframe
    """
    months = ['Jan', 'Feb',
              'Mar', 'Apr',
              'May', 'Jun',
              'Jul', 'Aug',
              'Sep', 'Oct',
              'Nov', 'Dec']
    value_converted = datetime.datetime.strptime(value, '%Y-%m-%d')
    return value_converted.month
reviews['months'] = reviews['date'].apply(adding_month)
def adding_season(value):
    """ adding season
    to the dataframe
    """
    if value >= 6 and value < 9:
        season = 'Summer'
    elif value >= 3 and value < 6:
        season = 'Spring'
    elif value == 12 or value < 3:
        season = 'Winter'
    else:
        season = 'Autumn'
    return season
# -
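# As a quick sanity check, the conventional northern-hemisphere month-to-season
# mapping can be restated self-contained (the function name `season_of` and the
# month ranges are the standard convention, an assumption rather than anything
# given by the data):

```python
def season_of(month):
    """Conventional mapping: 3-5 spring, 6-8 summer, 9-11 autumn, 12-2 winter."""
    if 6 <= month < 9:
        return 'Summer'
    elif 3 <= month < 6:
        return 'Spring'
    elif month == 12 or month < 3:
        return 'Winter'
    return 'Autumn'

print([season_of(m) for m in (1, 4, 7, 10, 12)])
# ['Winter', 'Spring', 'Summer', 'Autumn', 'Winter']
```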
# **4.** Let's see when people tend to book houses in Boston
#
# +
# getting price to compute mean
def compute_price(value):
    '''
    Convert the price value of a
    property to float
    INPUT
    - price in dollars
    OUTPUT
    - price as a float
    '''
    if isinstance(value, str):
        value_back = value.split('$')
        val = value_back[1]
        if ',' in val:  # avoid ','
            val = val.replace(',', '')
        return float(val)
    else:
        return float('NaN')
# -
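# A condensed, self-contained version of the dollar-price parsing logic above
# makes the conversion easy to check in isolation (the function name `parse_price`
# is ours, for illustration):

```python
def parse_price(value):
    """Condensed restatement of compute_price: '$1,250.00' -> 1250.0."""
    if isinstance(value, str):
        return float(value.split('$')[1].replace(',', ''))
    return float('nan')

print(parse_price('$1,250.00'))  # 1250.0
```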
calendar_copy = calendar.copy()
calendar_copy['months'] = calendar['date'].apply(adding_month)
calendar_copy['season'] = calendar_copy['months'].apply(adding_season)
calendar_copy['real_price'] = calendar_copy['price'].apply(compute_price)
(calendar_copy['season'].value_counts()/calendar_copy.shape[0]).plot(kind = 'bar')
# **Conclusion:**<br> People tend to book property in or just before the summer, not in winter, perhaps because they prefer warm weather to cold
# **5.** Get the average price according to season
calendar_copy.groupby(['season'])['real_price'].mean().dropna()
# The average price is higher in summer and just before summer than in autumn
# **6.** Create a function for our purposes and avoid repetition
# +
def property_analysis(property_name, listings, calendar):
    '''
    add month and season of booking
    of the different property, then
    convert the price in dollars to
    float
    INPUT
    - property_name
    OUTPUT
    - calendar dataframe with
      additional columns
    '''
    property_type = listings[listings['property_type'] == property_name]
    property_type.reset_index(inplace=True)
    calendar_property_type = calendar.loc[calendar['listing_id'].isin(property_type['id'])]
    calendar_property_type_group = calendar_property_type.groupby('available')
    calendar_property_type_2 = calendar_property_type_group.get_group('t')
    calendar_property_type_2.index = np.arange(0, len(calendar_property_type_2))
    calendar_property_type_copy = calendar_property_type_2.copy()
    calendar_property_type_copy['months'] = calendar_property_type_copy['date'].apply(adding_month)
    calendar_property_type_copy['season'] = calendar_property_type_copy['months'].apply(adding_season)
    calendar_property_type_copy['real_price'] = calendar_property_type_copy['price'].apply(compute_price)
    return calendar_property_type_copy
# -
# **7.** Get more information about each property type's listings
# +
calendar_appartment = property_analysis('Apartment', listings, calendar)
calendar_house = property_analysis('House', listings, calendar)
#(calendar_appartment['season'].value_counts()/calendar_appartment.shape[0]).plot(kind = 'bar')
# -
# **8.** Average price per season for apartments
calendar_appartment.groupby(['season'])['real_price'].mean().dropna()
# **9.** Average price per season for houses
calendar_house.groupby(['season'])['real_price'].mean().dropna()
# **10.** Visualisation of apartment bookings per season
(calendar_appartment['season'].value_counts()/calendar_appartment.shape[0]).plot(kind='bar')
plt.title('Season of apartment booking')
# Apartments, as seen in **A look at the data**, are the most in-demand property type <br>
# People mostly book in spring and summer
# **11.** Visualisation of house bookings per season
(calendar_house['season'].value_counts()/calendar_house.shape[0]).plot(kind='bar')
plt.title('Season of house booking')
# The same conclusion holds for houses
# the other property types are not frequent enough to be relevant
| Where is the peak of demand in AirBnB at Boston.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="56dTmJneULGe" outputId="79b8c334-520e-4083-da91-02a9b6da3e96"
from google.colab import drive
drive.mount("/content/drive")
# + id="TZFZezxmd_gV"
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from keras.layers import Bidirectional
from random import random
from numpy import array
from numpy import cumsum
from keras.models import Sequential
from keras.layers import LSTM,Activation,Dropout
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.layers import Bidirectional
import pickle
from sklearn.ensemble import RandomForestClassifier
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier
from sklearn.svm import SVC
from numpy import dstack  # used below when inspecting stacked arrays
import glob
# + id="74pnWaiELfNV"
header=[]
for i in range(0,43):
header.append(i)
# + colab={"base_uri": "https://localhost:8080/"} id="21Pw42MTeBD-" outputId="f6edb180-3593-4dd8-e16c-574f54e70d8c"
df=pd.read_csv('/content/drive/MyDrive/asl/easter.csv',names=header)
#df.columns=header
#df=df.fillna(0,axis=1)
#df['y']='book'
#df=df.drop('Unnamed: 0',axis=1)
df.head(100)
df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="wL5l2nbdL-Sh" outputId="f877485c-0504-4143-bdc9-e05f65570905"
df_like=pd.read_csv('/content/drive/MyDrive/asl/like.csv',names=header)
#df.columns=header
#df=df.fillna(0,axis=1)
#df['y']='book'
#df=df.drop('Unnamed: 0',axis=1)
df_like.head(100)
df_like.shape
# + colab={"base_uri": "https://localhost:8080/"} id="Y_akx-PYMI8X" outputId="846106b9-615b-47fe-ccba-76bb2576343d"
df_phone=pd.read_csv('/content/drive/MyDrive/asl/phone.csv',names=header)
#df.columns=header
#df=df.fillna(0,axis=1)
#df['y']='book'
#df=df.drop('Unnamed: 0',axis=1)
df_phone.head(100)
df_phone.shape
# + colab={"base_uri": "https://localhost:8080/"} id="_XZySWzcMRjb" outputId="92c0ccf5-348e-4905-f5a9-243ed6f8df12"
df_rainbow=pd.read_csv('/content/drive/MyDrive/asl/rainbow.csv',names=header)
#df.columns=header
#df=df.fillna(0,axis=1)
#df['y']='book'
#df=df.drop('Unnamed: 0',axis=1)
df_rainbow.head(100)
df_rainbow.shape
# + colab={"base_uri": "https://localhost:8080/"} id="g6oaODo_5LpU" outputId="7af494e5-532c-4643-bf5b-393bfbea9ddf"
df_some=pd.read_csv('/content/drive/MyDrive/asl/some.csv',names=header)
df_some.head()
df_some.shape
# + colab={"base_uri": "https://localhost:8080/"} id="6zDjvA695l0j" outputId="36ad4b8a-d23e-4edc-e486-4f816f27751a"
df_boring=pd.read_csv('/content/drive/MyDrive/asl/boring.csv',names=header)
df_boring.head()
df_boring.shape
# + colab={"base_uri": "https://localhost:8080/"} id="y7kUN8xd6yz8" outputId="f01c4aa5-64d6-422b-87cc-e29ce0d2ab52"
df_lib=pd.read_csv('/content/drive/MyDrive/asl/library.csv',names=header)
df_lib.head()
df_lib.shape
# + colab={"base_uri": "https://localhost:8080/"} id="RRyfTAfJSplY" outputId="a437e1ec-ad1c-416b-ee8d-d9d617ee7c73"
df_sorry=pd.read_csv('/content/drive/MyDrive/asl/sorry.csv',names=header)
df_sorry.shape
# + colab={"base_uri": "https://localhost:8080/"} id="OQ1jFrDuSr1B" outputId="786a1774-7cda-4e0a-887a-fa2e99f9f4d1"
df_thankyou=pd.read_csv('/content/drive/MyDrive/asl/thank you.csv',names=header)
df_thankyou.shape
# + colab={"base_uri": "https://localhost:8080/"} id="UEhpXBcISzYE" outputId="1c4698aa-20a5-4b76-91c9-b8c6d533013b"
df_white=pd.read_csv('/content/drive/MyDrive/asl/white.csv',names=header)
df_white.shape
# + colab={"base_uri": "https://localhost:8080/"} id="1sJMWHWaS82j" outputId="dd3fa93d-11da-48c6-d39a-1ae75db2e77c"
df_yesterday=pd.read_csv('/content/drive/MyDrive/asl/yesterday.csv',names=header)
df_yesterday.shape
# + colab={"base_uri": "https://localhost:8080/"} id="hklHTloeTZpu" outputId="72cf2844-1ea8-4d5c-b274-7960231ad13b"
df_bed=pd.read_csv('/content/drive/MyDrive/asl/bed.csv',names=header)
df_bed.shape
# + colab={"base_uri": "https://localhost:8080/"} id="QsL1pEJnMRf2" outputId="dc231897-440f-4a93-88ec-c338de6c3df9"
df=pd.concat([df_phone,df,df_rainbow,df_boring,df_yesterday,df_lib,df_some,df_white,df_thankyou,df_bed,df_like,df_sorry],ignore_index=True)
df.shape
# + id="4jVzJ-OhQ909"
df.to_csv('/content/drive/MyDrive/asl/12_classes.csv',index=False)
# + colab={"base_uri": "https://localhost:8080/"} id="8L2LrM3fLcSQ" outputId="42fe4677-f86d-4cb0-8874-605a839d2ae4"
df.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 422} id="WJCA3lHE7dNE" outputId="828d203c-855b-4de4-c440-61f629c41664"
df_comb=pd.read_csv('/content/drive/MyDrive/asl/12_classes.csv')
df_comb.head(100)
# + id="_nhC6FwMkobR" colab={"base_uri": "https://localhost:8080/"} outputId="38454280-718f-4545-dcf6-6e8554804ae2"
le = LabelEncoder()
df_comb['42']= le.fit_transform(df_comb['42'])
df_comb['42']
# + colab={"base_uri": "https://localhost:8080/"} id="K8BGNuO_wx2H" outputId="eec57f4f-027b-4789-839d-d5f8bf625326"
le.classes_
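# A quick round-trip sketch of what `LabelEncoder` does to the labels in column
# '42' (toy labels here; the real classes are the twelve signs loaded above):

```python
from sklearn.preprocessing import LabelEncoder

labels = ['phone', 'like', 'phone', 'bed']
le = LabelEncoder()
codes = le.fit_transform(labels)           # sorted classes mapped to 0..n-1
print(list(le.classes_))                   # ['bed', 'like', 'phone']
print(list(le.inverse_transform(codes)))   # original strings recovered
```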
# + colab={"base_uri": "https://localhost:8080/"} id="V7JmNS3I774o" outputId="3d099855-64be-4f88-d814-9da247e1371b"
df[42][16]
# + id="myM6Cvi9gypw"
X=df_comb.loc[:, df_comb.columns != '42']  # columns are strings after the CSV round-trip
y=df_comb.loc[:,'42']
# + colab={"base_uri": "https://localhost:8080/", "height": 422} id="60JW0Gtfo14S" outputId="4effe1bc-c6b9-44af-ec50-baffabf5594d"
X
# + id="0zhBHpxRo5Mz" colab={"base_uri": "https://localhost:8080/"} outputId="1b17ef82-abaa-4258-d63e-f55824d3faa2"
y
# + id="9tEDdOFzjS-q"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20,random_state=42)
# + id="cMIPDcgumF5s"
from sklearn.ensemble import RandomForestClassifier
clf=RandomForestClassifier(n_estimators=50, random_state=0)
#clf.fit(X_train,y_train)
# + id="CCC9UlsMo_Wr"
from sklearn.model_selection import GridSearchCV
param_grid = {
'n_estimators': [200, 300,400,500],
    'max_features': ['sqrt', 'log2'],  # 'auto' was removed in newer scikit-learn
'criterion':['gini','entropy'],
'max_depth':[2,4,6,8],
'min_samples_leaf':[2,4,6]
}
CV_rfc = GridSearchCV(estimator=clf, param_grid=param_grid, cv= 5)
#CV_rfc.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="YUT_LjpZqg10" outputId="71857ce2-dd76-45de-9c0f-6271b9f419ce"
clf.fit(X_train, y_train)
# + id="31CZfClPmPrp"
y_pred=clf.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="V8RGjkN7m6o_" outputId="86bbca11-24d7-4fd8-c0f8-e8526093490a"
y_pred
# + colab={"base_uri": "https://localhost:8080/"} id="QAAgS4yKnjh2" outputId="a0bef80d-d03c-486a-f619-33c60718b0f3"
y_test.shape
# + colab={"base_uri": "https://localhost:8080/"} id="9Foq77penEIE" outputId="174ac839-c17d-4c8f-f2ce-791f480efcfb"
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_pred)
# + colab={"base_uri": "https://localhost:8080/"} id="R-cLxP0BnW5R" outputId="cfff8da0-65d6-42f5-8aee-914dd4cada6b"
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
# + id="TgRBZ2dHl7UX"
filename = '/content/drive/MyDrive/asl/randomforestc_model_12_classes.sav'
pickle.dump(clf, open(filename, 'wb'))
# + id="eWLCgF6rmLZV"
loaded_model = pickle.load(open(filename, 'rb'))
# + id="0OXT_7hWmSBx"
clf1=RandomForestClassifier(n_estimators=50, random_state=0)
clf2=XGBClassifier(random_state =0)
clf3=LGBMClassifier(random_state=0)
clf4=SVC(gamma='auto')
# + id="Vmp4nnL36v5Q"
clf_lst=[]
clf_lst.append(('RandomForest',clf1))
clf_lst.append(('XGB',clf2))
#clf_lst.append(('LGBM',clf3))
clf_lst.append(('SVC',clf4))
# + id="xZ2od-H86yqj"
from sklearn import model_selection
from sklearn.ensemble import VotingClassifier
ensemble = VotingClassifier(clf_lst)
ensemble.fit(X_train,y_train)
y_pred=ensemble.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="EUJaZJfW63Rr" outputId="58e74448-5843-4d62-e5a6-25c27e2d93c2"
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
# + colab={"base_uri": "https://localhost:8080/"} id="IxLZ2obZ9iix" outputId="cea9a849-3f71-43bf-b425-641b8e143189"
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
# + [markdown] id="Vhzo_Drv6oaW"
# **LSTM MODEL**
# + colab={"base_uri": "https://localhost:8080/"} id="vJEwf7M86zir" outputId="5ccc3259-56ef-4273-d7a1-7810fa1b0e2a"
stack=dstack(X_train)
stack.shape
# + id="sNtKrVzF7aWg"
stacky=dstack(y_train)
# + id="6OtPdY0CwQRB"
X_train=X_train.to_numpy()
# + id="4iAXC74B0CRI"
X_test=X_test.to_numpy()
#y_train=y_train.to_numpy()
# + id="Aczdo_5i9h9_"
y_test=y_test.to_numpy()
# + id="TEuUR651rH7k"
X_train=X_train.reshape(X_train.shape[0],1,X_train.shape[1])
# + id="nde6pyyHz9AX"
X_test=X_test.reshape(X_test.shape[0], 1,X_test.shape[1])
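# The reshapes above turn the flat feature table into the 3-D
# `(samples, timesteps, features)` layout Keras LSTMs expect; each row becomes a
# length-1 sequence. A toy check:

```python
import numpy as np

X = np.arange(12.0).reshape(4, 3)             # 4 samples, 3 features
X_seq = X.reshape(X.shape[0], 1, X.shape[1])  # -> (4, 1, 3)
print(X_seq.shape)
```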
# + colab={"base_uri": "https://localhost:8080/"} id="RMHx4HVOj-a1" outputId="94046069-6541-4943-f4d2-04832f183a2c"
X_train.shape
# + id="fjhvdkYNkEoy"
model = keras.Sequential()
# LSTM layer with 64 internal units (the sequences here have a single timestep,
# so the final hidden state is enough for the Dense layers).
model.add(layers.LSTM(64))
# Add a LSTM layer with 128 internal units.
#model.add(layers.LSTM(128))
model.add(keras.layers.Dense(units=64, activation='relu'))
model.add(keras.layers.Dense(units=64, activation='relu'))
# Output layer: 12 units, one per sign class; labels are integer-encoded,
# hence sparse_categorical_crossentropy.
model.add(layers.Dense(12, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
# + id="0dtxd8SwrQYU"
model = keras.Sequential()
model.add(layers.LSTM(64, return_sequences=True))
model.add(layers.LSTM(32, return_sequences=True))
model.add(layers.LSTM(32))
model.add(layers.Dense(12, activation='softmax'))  # one unit per class
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
# + id="epVuFgEDRXPk"
model = Sequential()
model.add(LSTM(64, return_sequences=True))
model.add(Activation('tanh'))
model.add(LSTM(64, return_sequences=True))
model.add(Activation('tanh'))
model.add(LSTM(64))
model.add(Dropout(0.25))
model.add(Dense(20))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(layers.Dense(12, activation='softmax'))  # one unit per class
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
#model.fit(datasetTrain, Y_train, batch_size=64, epochs=25)
# + id="A175HpSKDD0w"
from sklearn.utils import compute_class_weight
classWeight = compute_class_weight(class_weight='balanced', classes=np.unique(y_train), y=y_train)
classWeight = dict(enumerate(classWeight))
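# Balanced weights follow n_samples / (n_classes * count(class)); a toy check
# with an imbalanced label vector (a stand-in for `y_train` above):

```python
import numpy as np
from sklearn.utils import compute_class_weight

y = np.array([0, 0, 0, 1])
w = compute_class_weight(class_weight='balanced', classes=np.unique(y), y=y)
weights = dict(enumerate(w))
print(weights)  # the rarer class 1 gets the larger weight (2.0 vs ~0.67)
```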
# + id="aXVagXM5BGYY"
model = Sequential()
model.add(Bidirectional(LSTM(200, return_sequences=True)))
model.add(Bidirectional(LSTM(200)))  # final recurrent layer returns a single vector
model.add(keras.layers.Dense(units=64, activation='tanh'))
model.add(Dense(12, activation='softmax'))  # one unit per class
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="BhluWFcc3cbr" outputId="03605db3-690e-4602-abbe-bea0663ec8ad"
X_train.shape
# + id="xE3jtP9J3Znw"
y_train.shape  # 1-D: one integer class label per sample
# + id="lyQLz2cGlUDL" colab={"base_uri": "https://localhost:8080/"} outputId="942a51c2-17ec-46e0-aa9c-9c9a72f5fd12"
#model.compile(optimizer="nadam",metrics=['accuracy'],loss='mae')
model.fit(X_train,y_train,epochs=10,batch_size=64)
# + colab={"base_uri": "https://localhost:8080/"} id="3z5xZZ4f1Zp-" outputId="dc706356-ee7e-4017-d8b4-bfef6c463e01"
model.summary()
# + id="fo3kXzUrzzma"
y_predict=model.predict(X_test)
y_predict=np.argmax(y_predict, axis=1)  # most probable class per sample
# + colab={"base_uri": "https://localhost:8080/"} id="zjKFs3G9kaed" outputId="565fc823-6b75-4052-ae2d-21307477b19e"
y_predict.shape
# + id="XDUQKNfMkZys"
y_predict=y_predict.reshape(y_predict.shape[0],1)
# + colab={"base_uri": "https://localhost:8080/", "height": 639} id="tpqH9Y2Qg7gN" outputId="8136d857-fe10-4b3b-d05b-70888526f5f1"
df3=pd.DataFrame((y_predict.astype('int32')))
df3
# + id="58vTcA1vl-df"
y_test=y_test.reshape(-1,1)  # one column, without hard-coding the sample count
# + id="Zci9XnfYg3Gg"
df4=pd.DataFrame(y_test)
df4
# + colab={"base_uri": "https://localhost:8080/"} id="Lke7-xsOPdNA" outputId="b1f3df82-86e9-497e-ad74-287e92d453d9"
accuracy_score(y_test, y_predict, normalize=False)  # number of correct predictions, not a rate
# + id="23wMELxdjeSZ"
f1_score(y_test, y_predict, average='macro', zero_division=1)  # multiclass needs an averaging mode
# + colab={"base_uri": "https://localhost:8080/"} id="KRqi_auDIeXe" outputId="7c5b4894-7507-42a1-8cb2-faa47c676047"
model.save("/content/drive/MyDrive/asl/first")
# + id="ZGKm9kzQIyiZ"
model=keras.models.load_model("/content/drive/MyDrive/asl/first")
# + colab={"base_uri": "https://localhost:8080/"} id="lpQfDK0Y9Xh5" outputId="d1a1cf76-8e86-4f1c-dbf8-4f18114faa93"
files=glob.glob('/content/drive/MyDrive/CSV/*')
for i in files:
df=pd.read_csv(i,names=header)
    df=df.loc[(df!=0).any(axis=1)]  # drop all-zero rows
print(df)
# + colab={"base_uri": "https://localhost:8080/", "height": 422} id="xBls3Wlu-ucl" outputId="ef22a826-6e4f-49ad-f6b6-dfd61658e8a3"
df1=pd.read_csv('/content/drive/MyDrive/CSV/blue.csv',names=header)
#df1=df1.drop(0,axis=1)
df1
| Detection/asl_sign.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" deletable=true editable=true id="kR-4eNdK6lYS"
# Feedforward Neural Network with Regularization
# =============
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" deletable=true editable=true id="JLpLa8Jt7Vu4"
from __future__ import print_function
import numpy as np
import random
import scipy.io as sio
import tensorflow as tf
from six.moves import cPickle as pickle
# + [markdown] colab_type="text" deletable=true editable=true id="1HrCK6e17WzV"
# First load the data dumped by MATLAB (*.mat file):
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" deletable=true editable=true executionInfo={"elapsed": 11777, "status": "ok", "timestamp": 1449849322348, "user": {"color": "", "displayName": "", "isAnonymous": false, "isMe": true, "permissionId": "", "photoUrl": "", "sessionId": "0", "userId": ""}, "user_tz": 480} id="y3-cj1bpmuxc" outputId="e03576f1-ebbe-4838-c388-f1777bcc9873"
# X_init_offset_cancelled = sio.loadmat('scraping/X_init_offset_cancelled_scraping.mat', struct_as_record=True)['X_init_offset_cancelled']
# X_init_offset_cancelled = sio.loadmat('scraping/Xioc_phasePSI_scraping.mat', struct_as_record=True)['Xioc_phasePSI']
#X_234_all
X_init_offset_cancelled_all= sio.loadmat('scraping/X_gauss_basis_func_scraping.mat', struct_as_record=True)['X_gauss_basis_func'].astype(np.float32)
# X_init_offset_cancelled = sio.loadmat('scraping/Xioc_PD_ratio_mean_3std_scraping.mat', struct_as_record=True)['Xioc_PD_ratio_mean_3std']
# Ct_target = sio.loadmat('scraping/Ct_target_scraping.mat', struct_as_record=True)['Ct_target']
#X_234
X_init_offset_cancelled = sio.loadmat('scraping/X_gauss_basis_func_scraping_elim_3_train.mat', struct_as_record=True)['X_gauss_basis_func_train'].astype(np.float32)
#Ct_target_234
Ct_target = sio.loadmat('scraping/Ct_target_filt_scraping_elim_3_train.mat', struct_as_record=True)['Ct_target_filt_train'].astype(np.float32)
# Dataset for Extrapolation Test
#X_5toend
X_extrapolate_test = sio.loadmat('scraping/X_gauss_basis_func_scraping_elim_3_test.mat', struct_as_record=True)['X_gauss_basis_func_test'].astype(np.float32)
#Ct_5toend
Ctt_extrapolate_test = sio.loadmat('scraping/Ct_target_filt_scraping_elim_3_test.mat', struct_as_record=True)['Ct_target_filt_test'].astype(np.float32)
# Dummy Data for learning simulation/verification:
# X_init_offset_cancelled = sio.loadmat('scraping/dummy_X.mat', struct_as_record=True)['X']
# Ct_target = sio.loadmat('scraping/dummy_Ct.mat', struct_as_record=True)['Ct']
# + [markdown] colab_type="text" deletable=true editable=true id="L7aHrm6nGDMB"
# Verify the dimensions are correct and shuffle the data (for Stochastic Gradient Descent (SGD)):
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" deletable=true editable=true executionInfo={"elapsed": 11728, "status": "ok", "timestamp": 1449849322356, "user": {"color": "", "displayName": "", "isAnonymous": false, "isMe": true, "permissionId": "", "photoUrl": "", "sessionId": "0", "userId": ""}, "user_tz": 480} id="IRSyYiIIGIzS" outputId="3f8996ee-3574-4f44-c953-5c8a04636582"
N_data_extrapolate_test = Ctt_extrapolate_test.shape[0]
permutation_extrapolate_test = np.random.permutation(N_data_extrapolate_test)
permutation_extrapolate_test_select = permutation_extrapolate_test[:1000]  # first 1000 shuffled indices
X_extrapt = X_extrapolate_test[permutation_extrapolate_test_select, :]
Ctt_extrapt = Ctt_extrapolate_test[permutation_extrapolate_test_select, :]
print('X_init_offset_cancelled.shape =', X_init_offset_cancelled.shape)
print('Ct_target.shape =', Ct_target.shape)
N_data = Ct_target.shape[0]
D_input = X_init_offset_cancelled.shape[1]
D_output = Ct_target.shape[1]
print('N_data =', N_data)
print('D_input =', D_input)
print('D_output =', D_output)
print('X_extrapolate_test.shape =', X_extrapolate_test.shape)
print('Ctt_extrapolate_test.shape =', Ctt_extrapolate_test.shape)
random.seed(38)
np.random.seed(38)
permutation = np.random.permutation(N_data)
X_shuffled = X_init_offset_cancelled[permutation,:]
Ct_target_shuffled = Ct_target[permutation,:]
fraction_train_dataset = 0.85
fraction_test_dataset = 0.075
N_train_dataset = np.round(fraction_train_dataset * N_data).astype(int)
N_test_dataset = np.round(fraction_test_dataset * N_data).astype(int)
N_valid_dataset = N_data - N_train_dataset - N_test_dataset
print('N_train_dataset =', N_train_dataset)
print('N_valid_dataset =', N_valid_dataset)
print('N_test_dataset =', N_test_dataset)
X_train_dataset = X_shuffled[0:N_train_dataset,:]
Ct_train = Ct_target_shuffled[0:N_train_dataset,:]
X_valid_dataset = X_shuffled[N_train_dataset:(N_train_dataset+N_valid_dataset),:]
Ct_valid = Ct_target_shuffled[N_train_dataset:(N_train_dataset+N_valid_dataset),:]
X_test_dataset = X_shuffled[(N_train_dataset+N_valid_dataset):N_data,:]
Ct_test = Ct_target_shuffled[(N_train_dataset+N_valid_dataset):N_data,:]
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" deletable=true editable=true id="RajPLaL_ZW6w"
def computeNMSE(predictions, labels):
    mse = np.mean(np.square(predictions - labels), axis=0)
    var_labels = np.var(labels, axis=0)
    nmse = np.divide(mse, var_labels)
    return nmse
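# Two sanity checks on the NMSE: perfect predictions score 0, and predicting the
# per-column mean scores exactly 1 (the function is restated so the snippet runs
# on its own):

```python
import numpy as np

def computeNMSE(predictions, labels):
    mse = np.mean(np.square(predictions - labels), axis=0)
    var_labels = np.var(labels, axis=0)
    return np.divide(mse, var_labels)

labels = np.array([[0.0], [1.0], [2.0], [3.0]])
perfect = computeNMSE(labels, labels)
baseline = computeNMSE(np.full_like(labels, labels.mean()), labels)
print(perfect, baseline)  # [0.] [1.]
```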
# + [markdown] colab_type="text" deletable=true editable=true id="-b1hTz3VWZjw"
# ---
# Feed-Forward Neural Network Model
# ---------
#
# Here it goes:
#
# ---
#
# + deletable=true editable=true
import os
batch_size = 64
num_steps = 700001
# Number of units in hidden layer
N_HIDDEN1_UNITS = 250
N_HIDDEN2_UNITS = 125
N_HIDDEN3_UNITS = 64
N_HIDDEN4_UNITS = 32
# L2 Regularizer constant
beta1 = 0.0000000001
logs_path = "/tmp/ffnn/"
def defineFeedForwardNeuralNetworkModel(input_size, num_hidden1_units, num_hidden2_units, num_hidden3_units, num_hidden4_units, output_size):
# Hidden 1 Layer
with tf.variable_scope('hidden1', reuse=False):
weights = tf.get_variable('weights', [input_size, num_hidden1_units], initializer=tf.random_normal_initializer(0.0, 1e-7))
biases = tf.get_variable('biases', [num_hidden1_units], initializer=tf.constant_initializer(0))
# Hidden 2 Layer
with tf.variable_scope('hidden2', reuse=False):
weights = tf.get_variable('weights', [num_hidden1_units, num_hidden2_units], initializer=tf.random_normal_initializer(0.0, 1e-7))
biases = tf.get_variable('biases', [num_hidden2_units], initializer=tf.constant_initializer(0))
# Hidden 3 Layer
with tf.variable_scope('hidden3', reuse=False):
weights = tf.get_variable('weights', [num_hidden2_units, num_hidden3_units], initializer=tf.random_normal_initializer(0.0, 1e-7))
biases = tf.get_variable('biases', [num_hidden3_units], initializer=tf.constant_initializer(0))
# Hidden 4 Layer
with tf.variable_scope('hidden4', reuse=False):
weights = tf.get_variable('weights', [num_hidden3_units, num_hidden4_units], initializer=tf.random_normal_initializer(0.0, 1e-7))
biases = tf.get_variable('biases', [num_hidden4_units], initializer=tf.constant_initializer(0))
# Linear (Output) Layer
with tf.variable_scope('linear', reuse=False):
weights = tf.get_variable('weights', [num_hidden4_units, output_size], initializer=tf.random_normal_initializer(0.0, 1e-7))
biases = tf.get_variable('biases', [output_size], initializer=tf.constant_initializer(0))
return None
# Build prediction graph.
def performFeedForwardNeuralNetworkPrediction(train_dataset, input_size, num_hidden1_units, num_hidden2_units, num_hidden3_units, num_hidden4_units, output_size, dropout_keep_prob):
"""Build the Feed-Forward Neural Network model for prediction.
Args:
train_dataset: training dataset's placeholder.
num_hidden1_units: Size of the 1st hidden layer.
Returns:
outputs: Output tensor with the computed logits.
"""
# Hidden 1
with tf.variable_scope('hidden1', reuse=True):
weights = tf.get_variable('weights', [input_size, num_hidden1_units])
biases = tf.get_variable('biases', [num_hidden1_units])
hidden1 = tf.nn.relu(tf.matmul(train_dataset, weights) + biases)
# hidden1 = tf.matmul(train_dataset, weights) + biases
hidden1_drop = tf.nn.dropout(hidden1, dropout_keep_prob)
# Hidden 2
with tf.variable_scope('hidden2', reuse=True):
weights = tf.get_variable('weights', [num_hidden1_units, num_hidden2_units])
biases = tf.get_variable('biases', [num_hidden2_units])
hidden2 = tf.nn.relu(tf.matmul(hidden1_drop, weights) + biases)
hidden2_drop = tf.nn.dropout(hidden2, dropout_keep_prob)
# Hidden 3
with tf.variable_scope('hidden3', reuse=True):
weights = tf.get_variable('weights', [num_hidden2_units, num_hidden3_units])
biases = tf.get_variable('biases', [num_hidden3_units])
hidden3 = tf.nn.relu(tf.matmul(hidden2_drop, weights) + biases)
hidden3_drop = tf.nn.dropout(hidden3, dropout_keep_prob)
# Hidden 4
with tf.variable_scope('hidden4', reuse=True):
weights = tf.get_variable('weights', [num_hidden3_units, num_hidden4_units])
biases = tf.get_variable('biases', [num_hidden4_units])
hidden4 = tf.nn.relu(tf.matmul(hidden3_drop, weights) + biases)
hidden4_drop = tf.nn.dropout(hidden4, dropout_keep_prob)
# Linear (Output)
with tf.variable_scope('linear', reuse=True):
weights = tf.get_variable('weights', [num_hidden4_units, output_size])
biases = tf.get_variable('biases', [output_size])
outputs = tf.matmul(hidden4_drop, weights) + biases
return outputs
# Build training graph.
def performFeedForwardNeuralNetworkTraining(outputs, labels, initial_learning_rate, input_size, num_hidden1_units, num_hidden2_units, num_hidden3_units, num_hidden4_units, output_size):
"""Build the training graph.
Args:
outputs: Output tensor, float - [BATCH_SIZE, output_size].
labels : Labels tensor, float - [BATCH_SIZE, output_size].
initial_learning_rate: The initial learning rate to use for gradient descent.
Returns:
train_op: The Op for training.
loss: The Op for calculating loss.
"""
# Create an operation that calculates L2 prediction loss.
pred_l2_loss = tf.nn.l2_loss(outputs - labels, name='my_pred_l2_loss')
# Create an operation that calculates L2 loss.
# Hidden 1
with tf.variable_scope('hidden1', reuse=True):
weights = tf.get_variable('weights', [input_size, num_hidden1_units])
biases = tf.get_variable('biases', [num_hidden1_units])
hidden1_layer_l2_loss = tf.nn.l2_loss(weights) + tf.nn.l2_loss(biases)
# Hidden 2
with tf.variable_scope('hidden2', reuse=True):
weights = tf.get_variable('weights', [num_hidden1_units, num_hidden2_units])
biases = tf.get_variable('biases', [num_hidden2_units])
hidden2_layer_l2_loss = tf.nn.l2_loss(weights) + tf.nn.l2_loss(biases)
# Hidden 3
with tf.variable_scope('hidden3', reuse=True):
weights = tf.get_variable('weights', [num_hidden2_units, num_hidden3_units])
biases = tf.get_variable('biases', [num_hidden3_units])
hidden3_layer_l2_loss = tf.nn.l2_loss(weights) + tf.nn.l2_loss(biases)
# Hidden 4
with tf.variable_scope('hidden4', reuse=True):
weights = tf.get_variable('weights', [num_hidden3_units, num_hidden4_units])
biases = tf.get_variable('biases', [num_hidden4_units])
hidden4_layer_l2_loss = tf.nn.l2_loss(weights) + tf.nn.l2_loss(biases)
# Linear (Output)
with tf.variable_scope('linear', reuse=True):
weights = tf.get_variable('weights', [num_hidden4_units, output_size])
biases = tf.get_variable('biases', [output_size])
output_layer_l2_loss = tf.nn.l2_loss(weights) + tf.nn.l2_loss(biases)
loss = tf.reduce_mean(pred_l2_loss, name='my_pred_l2_loss_mean') + (beta1 * (hidden1_layer_l2_loss + hidden2_layer_l2_loss + hidden3_layer_l2_loss + hidden4_layer_l2_loss + output_layer_l2_loss))
# Create a variable to track the global step.
global_step = tf.Variable(0, name='global_step', trainable=False)
# Exponentially-decaying learning rate:
learning_rate = tf.train.exponential_decay(initial_learning_rate, global_step, num_steps, 0.1)
# Create the gradient descent optimizer with the given learning rate.
# Use the optimizer to apply the gradients that minimize the loss
# (and also increment the global step counter) as a single training step.
# train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
# train_op = tf.train.MomentumOptimizer(learning_rate, momentum=learning_rate/4.0, use_nesterov=True).minimize(loss, global_step=global_step)
train_op = tf.train.AdagradOptimizer(initial_learning_rate).minimize(loss, global_step=global_step)
return train_op, loss, learning_rate
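# The (non-staircase) schedule above follows
# `initial_lr * decay_rate ** (step / decay_steps)`; with decay_rate 0.1 and
# decay_steps equal to num_steps, the rate falls from 0.1 to 0.01 over the run.
# (Note that the Adagrad optimizer actually used is fed the fixed initial rate,
# so the decayed value is only logged.) A plain-Python check:

```python
def exponential_decay(initial_lr, step, decay_steps, decay_rate=0.1):
    # Continuous form of tf.train.exponential_decay (staircase=False).
    return initial_lr * decay_rate ** (step / decay_steps)

print(exponential_decay(0.1, 0, 700001))        # 0.1 at step 0
print(exponential_decay(0.1, 700001, 700001))   # 0.01 at the final step
```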
# Save model.
def saveFeedForwardNeuralNetworkToMATLABMatFile(input_size, num_hidden1_units, num_hidden2_units, num_hidden3_units, num_hidden4_units, output_size):
model_params={}
# Hidden 1
with tf.variable_scope('hidden1', reuse=True):
weights = tf.get_variable('weights', [input_size, num_hidden1_units])
biases = tf.get_variable('biases', [num_hidden1_units])
model_params['weights_1']=weights.eval()
model_params['biases_1']=biases.eval()
# Hidden 2
with tf.variable_scope('hidden2', reuse=True):
weights = tf.get_variable('weights', [num_hidden1_units, num_hidden2_units])
biases = tf.get_variable('biases', [num_hidden2_units])
model_params['weights_2']=weights.eval()
model_params['biases_2']=biases.eval()
# Hidden 3
with tf.variable_scope('hidden3', reuse=True):
weights = tf.get_variable('weights', [num_hidden2_units, num_hidden3_units])
biases = tf.get_variable('biases', [num_hidden3_units])
model_params['weights_3']=weights.eval()
model_params['biases_3']=biases.eval()
# Hidden 4
with tf.variable_scope('hidden4', reuse=True):
weights = tf.get_variable('weights', [num_hidden3_units, num_hidden4_units])
biases = tf.get_variable('biases', [num_hidden4_units])
model_params['weights_4']=weights.eval()
model_params['biases_4']=biases.eval()
# Linear (Output)
with tf.variable_scope('linear', reuse=True):
weights = tf.get_variable('weights', [num_hidden4_units, output_size])
biases = tf.get_variable('biases', [output_size])
model_params['weights_out']=weights.eval()
model_params['biases_out']=biases.eval()
return model_params
# Build the complete graph for feeding inputs, training, and saving checkpoints.
ff_nn_graph = tf.Graph()
with ff_nn_graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32, shape=[batch_size, D_input], name="tf_train_dataset_placeholder")
tf_train_labels = tf.placeholder(tf.float32, shape=[batch_size, D_output], name="tf_train_labels_placeholder")
tf_train_all_dataset = tf.constant(X_train_dataset, name="tf_train_all_dataset_constant")
tf_valid_dataset = tf.constant(X_valid_dataset, name="tf_valid_dataset_constant")
tf_test_dataset = tf.constant(X_test_dataset, name="tf_test_dataset_constant")
tf_whole_dataset = tf.constant(X_init_offset_cancelled, name="tf_whole_dataset_constant")
tf_whole_all_dataset = tf.constant(X_init_offset_cancelled_all, name="tf_whole_all_dataset_constant")
tf_extrapolate_test_dataset = tf.constant(X_extrapt, name="tf_extrapolate_test_dataset_constant")
    # Dropout keep probability for training (1.0 would turn dropout off):
tf_train_dropout_keep_prob = 0.77
# Define the Neural Network model.
defineFeedForwardNeuralNetworkModel(D_input, N_HIDDEN1_UNITS, N_HIDDEN2_UNITS, N_HIDDEN3_UNITS, N_HIDDEN4_UNITS, D_output)
# Build the Prediction Graph (that computes predictions from the inference model).
tf_outputs = performFeedForwardNeuralNetworkPrediction(tf_train_dataset, D_input, N_HIDDEN1_UNITS, N_HIDDEN2_UNITS, N_HIDDEN3_UNITS, N_HIDDEN4_UNITS, D_output, tf_train_dropout_keep_prob)
# Build the Training Graph (that calculate and apply gradients).
train_op, loss, learning_rate = performFeedForwardNeuralNetworkTraining(tf_outputs, tf_train_labels, 0.1, D_input, N_HIDDEN1_UNITS, N_HIDDEN2_UNITS, N_HIDDEN3_UNITS, N_HIDDEN4_UNITS, D_output)
# train_op, loss, learning_rate = performFeedForwardNeuralNetworkTraining(tf_outputs, tf_train_labels, 0.00001, D_input, N_HIDDEN1_UNITS, N_HIDDEN2_UNITS, N_HIDDEN3_UNITS, N_HIDDEN4_UNITS, D_output)
# Create a summary:
tf.summary.scalar("loss", loss)
tf.summary.scalar("learning_rate", learning_rate)
# merge all summaries into a single "operation" which we can execute in a session
summary_op = tf.summary.merge_all()
# Predictions for the training, validation, and test data.
train_prediction = tf_outputs
train_all_prediction = performFeedForwardNeuralNetworkPrediction(tf_train_all_dataset, D_input, N_HIDDEN1_UNITS, N_HIDDEN2_UNITS, N_HIDDEN3_UNITS, N_HIDDEN4_UNITS, D_output, 1.0)
valid_prediction = performFeedForwardNeuralNetworkPrediction(tf_valid_dataset, D_input, N_HIDDEN1_UNITS, N_HIDDEN2_UNITS, N_HIDDEN3_UNITS, N_HIDDEN4_UNITS, D_output, 1.0)
test_prediction = performFeedForwardNeuralNetworkPrediction(tf_test_dataset, D_input, N_HIDDEN1_UNITS, N_HIDDEN2_UNITS, N_HIDDEN3_UNITS, N_HIDDEN4_UNITS, D_output, 1.0)
whole_prediction = performFeedForwardNeuralNetworkPrediction(tf_whole_dataset, D_input, N_HIDDEN1_UNITS, N_HIDDEN2_UNITS, N_HIDDEN3_UNITS, N_HIDDEN4_UNITS, D_output, 1.0)
whole_all_prediction = performFeedForwardNeuralNetworkPrediction(tf_whole_all_dataset, D_input, N_HIDDEN1_UNITS, N_HIDDEN2_UNITS, N_HIDDEN3_UNITS, N_HIDDEN4_UNITS, D_output, 1.0)
extrapolate_test_prediction = performFeedForwardNeuralNetworkPrediction(tf_extrapolate_test_dataset, D_input, N_HIDDEN1_UNITS, N_HIDDEN2_UNITS, N_HIDDEN3_UNITS, N_HIDDEN4_UNITS, D_output, 1.0)
# Run training for num_steps and save checkpoint at the end.
with tf.Session(graph=ff_nn_graph) as session:
# Run the Op to initialize the variables.
tf.global_variables_initializer().run()
print("Initialized")
# create log writer object
writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
# Start the training loop.
for step in range(num_steps):
# Read a batch of input dataset and labels.
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (Ct_train.shape[0] - batch_size)
# Generate a minibatch.
batch_data = X_train_dataset[offset:(offset + batch_size), :]
batch_labels = Ct_train[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
# Run one step of the model. The return values are the activations
# from the `train_op` (which is discarded) and the `loss` Op. To
# inspect the values of your Ops or variables, you may include them
# in the list passed to sess.run() and the value tensors will be
# returned in the tuple from the call.
_, loss_value, predictions, summary = session.run([train_op, loss, train_prediction, summary_op], feed_dict=feed_dict)
# write log
writer.add_summary(summary, step)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, loss_value))
print("Minibatch NMSE: ", computeNMSE(predictions, batch_labels))
print("Validation NMSE: ", computeNMSE(valid_prediction.eval(), Ct_valid))
print("Extrapolation NMSE: ", computeNMSE(extrapolate_test_prediction.eval(), Ctt_extrapt))
if (step % 5000 == 0):
model_params = saveFeedForwardNeuralNetworkToMATLABMatFile(D_input, N_HIDDEN1_UNITS, N_HIDDEN2_UNITS, N_HIDDEN3_UNITS, N_HIDDEN4_UNITS, D_output)
print("Logging model_params.mat ...")
sio.savemat('model_params/model_params.mat', model_params)
whole_prediction_result = whole_prediction.eval()
whole_prediction_result_dict={}
whole_prediction_result_dict['whole_prediction_result'] = whole_prediction_result
print("Logging Ct_fit_onset.mat ...")
sio.savemat('scraping/Ct_fit_onset.mat', whole_prediction_result_dict)
whole_all_prediction_result = whole_all_prediction.eval()
whole_all_prediction_result_dict={}
whole_all_prediction_result_dict['whole_all_prediction_result'] = whole_all_prediction_result
print("Logging Ct_fit_all.mat ...")
sio.savemat('scraping/Ct_fit_all.mat', whole_all_prediction_result_dict)
print("Final Training NMSE : ", computeNMSE(train_all_prediction.eval(), Ct_train))
print("Final Validation NMSE: ", computeNMSE(valid_prediction.eval(), Ct_valid))
print("Final Test NMSE : ", computeNMSE(test_prediction.eval(), Ct_test))
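The loop above reports `computeNMSE` at every checkpoint; the helper itself is defined elsewhere in the full notebook. A minimal sketch of a normalized-MSE helper consistent with how it is called here (predictions and targets as equally-shaped arrays) — the exact normalization the author used is an assumption:

```python
import numpy as np

def compute_nmse(predictions, targets):
    """Normalized MSE: mean squared error divided by the variance of the targets.

    Hypothetical stand-in for the notebook's computeNMSE helper; the real
    normalization may differ (e.g. per-output-dimension averaging).
    """
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    mse = np.mean((predictions - targets) ** 2)
    return mse / np.var(targets)

# A perfect prediction gives NMSE == 0; always predicting the target mean gives NMSE == 1.
```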
| python/dmp_coupling/learn_tactile_feedback/fitting_script_archives/feedforward_NN_w_regularization_onset_vs_all_4_hiddenlayers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="y4_oKcTi9cIr" colab_type="text"
# Library
# + id="U18-lLvKDFH5" colab_type="code" colab={}
# %tensorflow_version 1.x
import tensorflow as tf
import tensorflow_hub as hub
import pandas as pd
import numpy as np
from datetime import datetime
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import classification_report
from bs4 import BeautifulSoup
import torch
import re
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
# + colab_type="code" outputId="fa7e71a9-843a-4e0e-ef5b-78c6a0bffdb4" id="hM4hmL2I87sF" colab={"base_uri": "https://localhost:8080/", "height": 34}
OUTPUT_DIR = 'OUTPUT_MODEL'
tf.gfile.MakeDirs(OUTPUT_DIR)
# + [markdown] id="udbcSm3Q9gJh" colab_type="text"
# Data Preparation
# + id="0ao7es7ESDpx" colab_type="code" colab={}
datanews=pd.read_excel(r'path_to_newscontent.xlsx','Sheet1')
datacomment=pd.read_excel(r'path_to_comments.xlsx','Sheet1')
# + id="nnTIR6kWDOTw" colab_type="code" colab={}
def datapreparation(data,data_column):
data[data_column]=[str(i) for i in data[data_column]]
num_split=int(len(data.index)*0.8)
return data, data_column, num_split
# + id="1fqxXFsl5cyD" colab_type="code" colab={}
LABEL_COLUMN = 'label'
label_list = [0, 1, 2]
data, DATA_COLUMN, num_split= datapreparation(datacomment, "comment")
#data, DATA_COLUMN, num_split= datapreparation(datanews, "all_lower")
# + id="e2-XOMYiDQBs" colab_type="code" colab={}
train_InputExamples = data.iloc[:num_split].apply(lambda x: bert.run_classifier.InputExample(guid=None,
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
test_InputExamples = data.iloc[(num_split+1):].apply(lambda x: bert.run_classifier.InputExample(guid=None,
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
# + id="RKKTJyLgDRzB" colab_type="code" colab={}
BERT_MODEL_HUB = "https://tfhub.dev/google/bert_multi_cased_L-12_H-768_A-12/1"
def create_tokenizer_from_hub_module():
with tf.Graph().as_default():
bert_module = hub.Module(BERT_MODEL_HUB)
tokenization_info = bert_module(signature="tokenization_info", as_dict=True)
with tf.Session() as sess:
vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
tokenization_info["do_lower_case"]])
return bert.tokenization.FullTokenizer(
vocab_file=vocab_file, do_lower_case=do_lower_case)
tokenizer = create_tokenizer_from_hub_module()
# + id="4bXE6hxL7o08" colab_type="code" colab={}
MAX_SEQ_LENGTH = 128
train_features = bert.run_classifier.convert_examples_to_features(train_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
test_features = bert.run_classifier.convert_examples_to_features(test_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
# + [markdown] id="lnKDPOEO9joZ" colab_type="text"
# BERT Model
# + id="foz60worDVXZ" colab_type="code" colab={}
def create_model(is_predicting, input_ids, input_mask, segment_ids, labels,
num_labels):
bert_module = hub.Module(
BERT_MODEL_HUB,
trainable=True)
bert_inputs = dict(
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids)
bert_outputs = bert_module(
inputs=bert_inputs,
signature="tokens",
as_dict=True)
# Use "pooled_output" for classification tasks on an entire sentence.
output_layer = bert_outputs["pooled_output"]
hidden_size = output_layer.shape[-1].value
output_weights = tf.get_variable(
"output_weights", [num_labels, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"output_bias", [num_labels], initializer=tf.zeros_initializer())
with tf.variable_scope("loss"):
output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
log_probs = tf.nn.log_softmax(logits, axis=-1)
one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
predicted_labels = tf.squeeze(tf.argmax(log_probs, axis=-1, output_type=tf.int32))
if is_predicting:
return (predicted_labels, log_probs)
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
loss = tf.reduce_mean(per_example_loss)
return (loss, predicted_labels, log_probs)
# + id="F38uOuEQDaTv" colab_type="code" colab={}
def model_fn(features, labels, mode, params):
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
label_ids = features["label_ids"]
is_predicting = (mode == tf.estimator.ModeKeys.PREDICT)
if not is_predicting:
(loss, predicted_labels, log_probs) = create_model(
is_predicting, input_ids, input_mask, segment_ids, label_ids, 3)
train_op = bert.optimization.create_optimizer(
loss, params["learning_rate"],
params["num_train_steps"], params["num_warmup_steps"], use_tpu=False)
if mode == tf.estimator.ModeKeys.TRAIN:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
train_op=train_op)
else:
(predicted_labels, log_probs) = create_model(
is_predicting, input_ids, input_mask, segment_ids, label_ids, 3)
predictions = {
'probabilities': log_probs,
'labels': predicted_labels,
}
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
# + [markdown] id="KkaaWto-9nrx" colab_type="text"
# Train Model
# + id="OQcjnQYkDcd6" colab_type="code" colab={}
BATCH_SIZE = 16
LEARNING_RATE = 5e-5
NUM_TRAIN_EPOCHS = 100
WARMUP_PROPORTION = 0.1
SAVE_CHECKPOINTS_STEPS = 100
SAVE_SUMMARY_STEPS = 1
# + id="2fwsWqZ2FFeK" colab_type="code" colab={}
run_config = tf.estimator.RunConfig(
model_dir=OUTPUT_DIR,
save_summary_steps=SAVE_SUMMARY_STEPS,
save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS,
log_step_count_steps=10)
# + id="nK43V9urFGKC" colab_type="code" colab={}
num_train_steps = int(len(train_features) / BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)
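The step budget above is plain arithmetic: steps per epoch times epochs, with a warmup fraction carved off the front. A quick sanity check (the 800-example training-set size is made up for illustration):

```python
BATCH_SIZE = 16
NUM_TRAIN_EPOCHS = 100
WARMUP_PROPORTION = 0.1
n_train = 800  # hypothetical training-set size

num_train_steps = int(n_train / BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)
# 800 / 16 = 50 steps per epoch; 100 epochs -> 5000 steps, 500 of them warmup
```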
# + id="_IqrnWBx1Ces" colab_type="code" colab={}
def input_fn_builder(features, seq_length, is_training, drop_remainder):
all_input_ids = []
all_input_mask = []
all_segment_ids = []
all_label_ids = []
for feature in features:
all_input_ids.append(feature.input_ids)
all_input_mask.append(feature.input_mask)
all_segment_ids.append(feature.segment_ids)
all_label_ids.append(feature.label_id)
def input_fn(params):
batch_size = params["batch_size"]
num_examples = len(features)
d = tf.data.Dataset.from_tensor_slices({
"input_ids":
tf.constant(
all_input_ids, shape=[num_examples, seq_length],
dtype=tf.int32),
"input_mask":
tf.constant(
all_input_mask,
shape=[num_examples, seq_length],
dtype=tf.int32),
"segment_ids":
tf.constant(
all_segment_ids,
shape=[num_examples, seq_length],
dtype=tf.int32),
"label_ids":
tf.constant(all_label_ids, shape=[num_examples], dtype=tf.int32)
})
if is_training:
d = d.repeat()
d = d.shuffle(buffer_size=100)
d = d.batch(batch_size=batch_size, drop_remainder=drop_remainder)
return d
return input_fn
# + id="Qj6Ix3272b-f" colab_type="code" colab={}
train_input_fn = input_fn_builder(
features=train_features,
seq_length=MAX_SEQ_LENGTH,
is_training=True,
drop_remainder=False)
test_input_fn = input_fn_builder(
features=test_features,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=False)
# + id="d9W4djWZt4mL" colab_type="code" colab={}
print('Beginning Training!')
current_time = datetime.now()
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=run_config,
params={"batch_size": BATCH_SIZE,
"learning_rate": LEARNING_RATE,
"num_train_steps": num_train_steps,
"num_warmup_steps": num_warmup_steps,
"epoch":NUM_TRAIN_EPOCHS})
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print("Training took time ", datetime.now() - current_time)
# + [markdown] id="D2K6Amv-9q4a" colab_type="text"
# Evaluation
# + id="lW0qhSjIxhVx" colab_type="code" colab={}
predictions = estimator.predict(test_input_fn)
prelabel=[]
for pre in predictions:
prelabel.append(pre['labels'])
# classification_report expects (y_true, y_pred), in that order
print(classification_report(list(data.iloc[(num_split+1):]['label']), prelabel))
| models/BERT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Voice Call Data Analysis
# * Use voice call data to identify which kinds of food are ordered most often in a given area
#
# ### Problem definition
# Find out which foods are ordered most frequently in Gangnam, the center of Seoul
# ### Question
# Which categories of food do residents of Gangnam order most often?
#
# ### Approach
# Use the voice call data to count delivery calls by food category in the target area (Gangnam)
# ## Data dictionary
# * 일자 (YYYYMMDD) : call date
# * 연령 : age
# * 성별 : gender
# * 발신지(시도) : calling area (province/metropolitan city)
# * 발신지(시군구) : calling area (district)
# * 대분류 : business category (major)
# * 중분류 : business category (minor)
# * 통화비율(시군구내) : share of calls within the district
#
#
# > Source: SKT Data Hub, voice call usage data - September 2020
# https://www.bigdatahub.co.kr/product/view.do?pid=1002333
# ## Inspecting the structure of the call data
import numpy as np
import pandas as pd
call_data = pd.read_csv('call_data.csv')
call_data.shape
call_data.head(3)
call_data.info()
call_data.columns # column names
# ## Cleaning the call data
# * Rename the caller-location columns
call_data.rename(columns={'발신지(시도)':'시도', '발신지(시군구)':'시군구'},inplace=True)
# **Note: `inplace=True` renames the columns in place instead of returning a new DataFrame**
call_data.head(2)
call_data['대분류'].unique()
# * Keep only the restaurant-related call records
food_call = call_data[call_data['대분류'] == '음식점']
food_call.head(10)
# ## Analyzing the call data
# * Which kinds of restaurants does a given area in Seoul order from most often?
# > Extract the rows of `food_call` whose 시도 (province) column is 서울 (Seoul) and store them in `seoul`
food_call['시도'].unique()
seoul = food_call[food_call['시도']== '서울']
seoul.head(2)
seoul['시군구'].unique()
# > * Read the district name to analyze with `input()` and store it in the `name` variable
# > * Extract the rows of `seoul` whose 시군구 (district) column equals `name` into `s_location`, then print the value distribution of the 중분류 (minor category) column
name = input("Enter the district name to see order-call counts by food category: ")
s_location = seoul[seoul['시군구']== name]
food = s_location['중분류'].value_counts()
food
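The bar charts below rely on `value_counts()` returning counts sorted in descending order, so the first bar is always the most-ordered category. A tiny illustration with made-up data:

```python
import pandas as pd

# Hypothetical sample of minor-category values (한식 = Korean food, 치킨 = chicken, 중식 = Chinese)
sample = pd.Series(['한식', '치킨', '한식', '중식', '한식', '치킨'])
counts = sample.value_counts()
# counts is sorted descending: 한식 (3), 치킨 (2), 중식 (1)
print(counts.index[0])  # the most frequent category
```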
# ## Visualizing the call data
# Vertical bar chart with `bar()`
# * Figure size: (7,7)
# * X values: food.index
# * Y values: food.values
# * Title: "주문 그래프" (order chart), font size 20
# * X-axis label: "종류" (category), font size 15
# * Y-axis label: "주문 수" (number of orders), font size 15
import matplotlib.pyplot as plt
plt.rc('font', family='malgun gothic') # prevent broken Korean glyphs in the plots
plt.rcParams['axes.unicode_minus']=False # prevent broken minus signs
plt.figure(figsize=(7,7))
plt.title('주문 그래프', size = 20)
#plt.bar(food.index, food.values, color='blue', alpha=0.5)
ax = plt.bar(food.index, food.values, color='blue',alpha=0.5)
plt.legend(handles = ax,labels= ['한식','치킨','중식','분식','양식'])
plt.xlabel('종류', size = 15)
plt.ylabel('주문 수', size = 15)
kind = [food.index[i] for i in range(0,len(food.index))]
kind
plt.figure(figsize=(7,7))
plt.title('주문 그래프', size = 20)
#plt.bar(food.index, food.values, color='slateblue', alpha=0.8)
ax = plt.bar(food.index, food.values, color='slateblue',alpha=0.8)
plt.legend(handles = ax,labels= kind)
plt.xlabel('종류', size = 15)
plt.ylabel('주문 수', size = 15)
food.plot(kind='bar',rot=0,color='plum')
ax = plt.bar(food.index, food.values, color='plum')
plt.title('주문 그래프', size = 20)
plt.legend(handles = ax,labels= ['한식','치킨','중식','분식','양식'])
plt.xlabel('종류', size = 15)
plt.ylabel('주문 수', size = 15)
plt.show()
# ## Findings
# 1. In the selected area (Gangnam-gu, Seoul), the most frequently ordered food category is **한식 (Korean food)**
# 2. In the selected area (Gangnam-gu, Seoul), order counts fall in the order 한식 (Korean), 치킨 (chicken), 중식 (Chinese), 분식 (snack food), 양식 (Western)
| practice/call_data_vis.ipynb |
# Pipelining becomes powerful with GridSearchCV
# -----------------------------------------------
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
import numpy as np
# +
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
# -
# The wrong way to do GridSearchCV with preprocessing: fitting the scaler on all of `X_train` up front lets each CV fold's validation split leak into the preprocessing
# +
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
scaler = StandardScaler()
X_preprocessed = scaler.fit_transform(X_train)
param_grid = {'C': 10. ** np.arange(-3, 3), 'gamma': 10. ** np.arange(-3, 3)}
grid = GridSearchCV(SVC(), param_grid=param_grid, cv=5)
# -
# The right way to do GridSearchCV with preprocessing
# +
from sklearn.pipeline import make_pipeline
param_grid_pipeline = {'svc__C': 10. ** np.arange(-3, 3), 'svc__gamma': 10. ** np.arange(-3, 3)}
scaler_pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(scaler_pipe, param_grid=param_grid_pipeline, cv=5)
# -
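`make_pipeline` names each step by lower-casing the estimator's class name, which is why the grid keys above are `svc__C` and `svc__gamma` rather than plain `C` and `gamma`. A quick check of the naming convention (no fitting required):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipe = make_pipeline(StandardScaler(), SVC())
step_names = [name for name, _ in pipe.steps]
print(step_names)  # ['standardscaler', 'svc']
# Grid keys take the form '<step name>__<param name>', e.g. 'svc__C'
assert 'svc__C' in pipe.get_params()
```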
grid.fit(X_train, y_train)
print(grid.best_params_)
# +
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest
param_grid = {'selectkbest__k': [1, 2, 3, 4], 'svc__C': 10. ** np.arange(-3, 3), 'svc__gamma': 10. ** np.arange(-3, 3)}
scaler_pipe = make_pipeline(SelectKBest(), SVC())
grid = GridSearchCV(scaler_pipe, param_grid=param_grid, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_)
# +
text_pipe = make_pipeline(TfidfVectorizer(), LinearSVC())
param_grid = {'tfidfvectorizer__ngram_range': [(1, 1), (1, 2), (2, 2)], 'linearsvc__C': 10. ** np.arange(-3, 3)}
grid = GridSearchCV(text_pipe, param_grid=param_grid, cv=5)
| Combining Pipelines and GridSearchCV.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
# %reload_ext autoreload
# %autoreload 2
# show every expression's output in a cell, not just the last one
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from fastai.basics import *
from fastai.tabular import *
# import modin.pandas as pd
# # Rossmann data preprocessing
Config.data_path()
ROOT = Config.data_path()/'rossmann'
table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
tables = [pd.read_csv(ROOT/f'{fname}.csv', low_memory=False) for fname in table_names]
train, store, store_states, state_names, googletrend, weather, test = tables
len(train),len(test)
train.head()
test.head()
# - Convert StateHoliday to a bool flag
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday!='0'
train.head()
store.head()
store_states.head()
state_names.head()
googletrend.head()
weather.head()
# - Merge the tables
def join_df(left, right, left_on, right_on=None, suffix='_y'):
if right_on is None: right_on = left_on
return left.merge(right, how='left', left_on=left_on, right_on=right_on,
suffixes=("", suffix))
# ?pd.merge
weather.columns
state_names.columns
weather = join_df(weather, state_names, "file", "StateName")
weather.columns
state_names.columns
weather.tail().T
googletrend.head()
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
googletrend.head()
def add_datepart(df, fldname, drop=True, time=False):
"Helper function that adds columns relevant to a date."
fld = df[fldname]
fld_dtype = fld.dtype
if isinstance(fld_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):
fld_dtype = np.datetime64
if not np.issubdtype(fld_dtype, np.datetime64):
df[fldname] = fld = pd.to_datetime(fld, infer_datetime_format=True)
targ_pre = re.sub('[Dd]ate$', '', fldname)
attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
'Is_month_end', 'Is_month_start', 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
if time: attr = attr + ['Hour', 'Minute', 'Second']
for n in attr: df[targ_pre + n] = getattr(fld.dt, n.lower())
df[targ_pre + 'Elapsed'] = fld.astype(np.int64) // 10 ** 9
if drop: df.drop(fldname, axis=1, inplace=True)
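`add_datepart` works by looping `getattr(fld.dt, ...)` over the attribute list, plus an `Elapsed` column of Unix-epoch seconds. A small self-contained check of the same `.dt` accessors on toy dates (note that `Series.dt.week`, which the helper relies on, was removed in pandas 2.x in favor of `isocalendar().week`):

```python
import pandas as pd

df = pd.DataFrame({'Date': pd.to_datetime(['2015-07-31', '2016-01-01'])})
fld = df['Date']
# The same accessors add_datepart loops over:
df['Year'] = fld.dt.year
df['Week'] = fld.dt.isocalendar().week.astype(int)  # ISO week number
df['Dayofweek'] = fld.dt.dayofweek                  # Monday == 0
df['Elapsed'] = fld.astype('int64') // 10 ** 9      # seconds since the Unix epoch
# 2015-07-31 and 2016-01-01 are both Fridays (dayofweek == 4);
# 2016-01-01 falls in ISO week 53 of the previous year.
print(df[['Year', 'Week', 'Dayofweek']])
```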
add_datepart(weather, "Date", drop=False)
add_datepart(googletrend, "Date", drop=False)
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)
weather.head().T
googletrend.head()
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])
store.head()
joined = join_df(train, store, "Store")
joined_test = join_df(test, store, "Store")
len(joined[joined.StoreType.isnull()]),len(joined_test[joined_test.StoreType.isnull()])
joined.head().T
joined_test.head().T
joined = join_df(joined, googletrend, ["State","Year", "Week"])
joined_test = join_df(joined_test, googletrend, ["State","Year", "Week"])
len(joined[joined.trend.isnull()]),len(joined_test[joined_test.trend.isnull()])
joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
joined_test = joined_test.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]),len(joined_test[joined_test.trend_DE.isnull()])
joined = join_df(joined, weather, ["State","Date"])
joined_test = join_df(joined_test, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()]),len(joined_test[joined_test.Mean_TemperatureC.isnull()])
joined.head().T
joined_test.head().T
for df in (joined, joined_test):
for c in df.columns:
if c.endswith('_y'):
if c in df.columns: df.drop(c, inplace=True, axis=1)
joined.head().T
joined.head().T
for df in (joined,joined_test):
df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
df['CompetitionOpenSinceMonth'] = df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
for df in (joined,joined_test):
df["CompetitionOpenSince"] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear,
month=df.CompetitionOpenSinceMonth, day=15))
df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
for df in (joined,joined_test):
df.loc[df.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
df.loc[df.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
for df in (joined,joined_test):
df["CompetitionMonthsOpen"] = df["CompetitionDaysOpen"]//30
df.loc[df.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24
joined.CompetitionMonthsOpen.unique()
from isoweek import Week
for df in (joined,joined_test):
df["Promo2Since"] = pd.to_datetime(df.apply(lambda x: Week(
x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1))
df["Promo2Days"] = df.Date.subtract(df["Promo2Since"]).dt.days
for df in (joined,joined_test):
df.loc[df.Promo2Days<0, "Promo2Days"] = 0
df.loc[df.Promo2SinceYear<1990, "Promo2Days"] = 0
df["Promo2Weeks"] = df["Promo2Days"]//7
df.loc[df.Promo2Weeks<0, "Promo2Weeks"] = 0
df.loc[df.Promo2Weeks>25, "Promo2Weeks"] = 25
df.Promo2Weeks.unique()
joined.to_pickle(ROOT/'joined')
joined_test.to_pickle(ROOT/'joined_test')
joined.head().T
joined_test.head().T
def get_elapsed(fld, pre):
day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()
last_store = 0
res = []
for s,v,d in zip(df.Store.values,df[fld].values, df.Date.values):
if s != last_store:
last_date = np.datetime64()
last_store = s
if v: last_date = d
res.append(((d-last_date).astype('timedelta64[D]') / day1))
df[pre+fld] = res
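`get_elapsed` walks each store's rows in date order and records the days since the flag column was last set; before the first occurrence the sentinel `np.datetime64()` is NaT, so the elapsed value comes out as NaN. A self-contained miniature of the same logic on toy data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Store': [1, 1, 1, 2, 2],
    'Date': pd.to_datetime(['2015-01-01', '2015-01-02', '2015-01-05',
                            '2015-01-01', '2015-01-03']),
    'Promo': [1, 0, 0, 0, 1],
})

day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()  # NaT sentinel
last_store = 0
res = []
for s, v, d in zip(df.Store.values, df.Promo.values, df.Date.values):
    if s != last_store:            # new store: reset the running date
        last_date = np.datetime64()
        last_store = s
    if v: last_date = d            # flag set: elapsed restarts at 0
    res.append((d - last_date).astype('timedelta64[D]') / day1)
df['AfterPromo'] = res
print(df['AfterPromo'].tolist())  # [0.0, 1.0, 4.0, nan, 0.0]
```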
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
df = train[columns].append(test[columns])
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
df = df.set_index("Date")
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
for o in ['Before', 'After']:
for p in columns:
a = o+p
df[a] = df[a].fillna(0).astype(int)
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False
).groupby("Store").rolling(7, min_periods=1).sum()
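The two rolling objects above build, per store, a trailing and a leading 7-day sum of each flag — how many promo/holiday days fall in the week before and after each date. The same pattern on toy data (window of 3 here for brevity; the notebook uses 7):

```python
import pandas as pd

df = pd.DataFrame({
    'Store': [1, 1, 1, 1],
    'Promo': [1, 0, 1, 1],
}, index=pd.to_datetime(['2015-01-01', '2015-01-02', '2015-01-03', '2015-01-04']))
df.index.name = 'Date'

# Trailing window: promo days in the current row plus up to 2 rows back
bwd = df.groupby('Store').Promo.rolling(3, min_periods=1).sum()
print(bwd.tolist())  # [1.0, 1.0, 2.0, 2.0]
```

Sorting the index descending before the same call (as the notebook does for `fwd`) turns the trailing window into a leading one.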
bwd.drop('Store', axis=1, inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store', axis=1, inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns, axis=1, inplace=True)
df.head()
df.to_pickle(ROOT/'df')
df["Date"] = pd.to_datetime(df.Date)
df.columns
joined = pd.read_pickle(ROOT/'joined')
joined_test = pd.read_pickle(ROOT/f'joined_test')
joined = join_df(joined, df, ['Store', 'Date'])
joined_test = join_df(joined_test, df, ['Store', 'Date'])
joined = joined[joined.Sales!=0]
joined.reset_index(inplace=True)
joined_test.reset_index(inplace=True)
joined.to_pickle(ROOT/'train_clean')
joined_test.to_pickle(ROOT/'test_clean')
joined.head().T
joined_test.head().T
| fastai_DL_nbs/lesson06_rossmann_pro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
ACM = r"""
%%
%% This is file `sample-sigconf.tex',
%% generated with the docstrip utility.
%%
%% The original source files were:
%%
%% samples.dtx (with options: `sigconf')
%%
%% IMPORTANT NOTICE:
%%
%% For the copyright see the source file.
%%
%% Any modified versions of this file must be renamed
%% with new filenames distinct from sample-sigconf.tex.
%%
%% For distribution of the original source see the terms
%% for copying and modification in the file samples.dtx.
%%
%% This generated file may be distributed as long as the
%% original source files, as listed above, are part of the
%% same distribution. (The sources need not necessarily be
%% in the same archive or directory.)
%%
%% The first command in your LaTeX source must be the \documentclass command.
\documentclass[sigconf]{acmart}
%%
%% \BibTeX command to typeset BibTeX logo in the docs
\AtBeginDocument{%
\providecommand\BibTeX{{%
\normalfont B\kern-0.5em{\scshape i\kern-0.25em b}\kern-0.8em\TeX}}}
%% Rights management information. This information is sent to you
%% when you complete the rights form. These commands have SAMPLE
%% values in them; it is your responsibility as an author to replace
%% the commands and values with those provided to you when you
%% complete the rights form.
\setcopyright{acmcopyright}
\copyrightyear{2018}
\acmYear{2018}
\acmDOI{10.1145/1122445.1122456}
%% These commands are for a PROCEEDINGS abstract or paper.
\acmConference[Woodstock '18]{Woodstock '18: ACM Symposium on Neural
Gaze Detection}{June 03--05, 2018}{Woodstock, NY}
\acmBooktitle{Woodstock '18: ACM Symposium on Neural Gaze Detection,
June 03--05, 2018, Woodstock, NY}
\acmPrice{15.00}
\acmISBN{978-1-4503-XXXX-X/18/06}
%%
%% Submission ID.
%% Use this when submitting an article to a sponsored event. You'll
%% receive a unique submission ID from the organizers
%% of the event, and this ID should be used as the parameter to this command.
%%\acmSubmissionID{123-A56-BU3}
%%
%% The majority of ACM publications use numbered citations and
%% references. The command \citestyle{authoryear} switches to the
%% "author year" style.
%%
%% If you are preparing content for an event
%% sponsored by ACM SIGGRAPH, you must use the "author year" style of
%% citations and references.
%% Uncommenting
%% the next command will enable that style.
%%\citestyle{acmauthoryear}
%%
%% end of the preamble, start of the body of the document source.
\begin{document}
%%
%% The "title" command has an optional parameter,
%% allowing the author to define a "short title" to be used in page headers.
\title{The Name of the Title is Hope}
%%
%% The "author" command and its associated commands are used to define
%% the authors and their affiliations.
%% Of note is the shared affiliation of the first two authors, and the
%% "authornote" and "authornotemark" commands
%% used to denote shared contribution to the research.
\author{<NAME>}
\authornote{Both authors contributed equally to this research.}
\email{<EMAIL>}
\orcid{1234-5678-9012}
\author{<NAME>}
\authornotemark[1]
\email{<EMAIL>}
\affiliation{%
\institution{Institute for Clarity in Documentation}
\streetaddress{P.O. Box 1212}
\city{Dublin}
\state{Ohio}
\postcode{43017-6221}
}
\author{<NAME>{\o}rv{\"a}ld}
\affiliation{%
\institution{The Th{\o}rv{\"a}ld Group}
\streetaddress{1 Th{\o}rv{\"a}ld Circle}
\city{Hekla}
\country{Iceland}}
\email{<EMAIL>}
\author{<NAME>}
\affiliation{%
\institution{Inria Paris-Rocquencourt}
\city{Rocquencourt}
\country{France}
}
\author{<NAME>}
\affiliation{%
\institution{Rajiv Gandhi University}
\streetaddress{Rono-Hills}
\city{Doimukh}
\state{Arunachal Pradesh}
\country{India}}
\author{<NAME>}
\affiliation{%
\institution{Tsinghua University}
\streetaddress{30 Shuangqing Rd}
\city{Haidian Qu}
\state{Beijing Shi}
\country{China}}
\author{<NAME>}
\affiliation{%
\institution{Palmer Research Laboratories}
\streetaddress{8600 Datapoint Drive}
\city{San Antonio}
\state{Texas}
\postcode{78229}}
\email{<EMAIL>}
\author{<NAME>}
\affiliation{\institution{The Th{\o}rv{\"a}ld Group}}
\email{<EMAIL>}
\author{<NAME>}
\affiliation{\institution{The Kumquat Consortium}}
\email{<EMAIL>}
%%
%% By default, the full list of authors will be used in the page
%% headers. Often, this list is too long, and will overlap
%% other information printed in the page headers. This command allows
%% the author to define a more concise list
%% of authors' names for this purpose.
\renewcommand{\shortauthors}{Trovato and Tobin, et al.}
%%
%% The abstract is a short summary of the work to be presented in the
%% article.
\begin{abstract}
A clear and well-documented \LaTeX\ document is presented as an
article formatted for publication by ACM in a conference proceedings
or journal publication. Based on the ``acmart'' document class, this
article presents and explains many of the common variations, as well
as many of the formatting elements an author may use in the
preparation of the documentation of their work.
\end{abstract}
%%
%% The code below is generated by the tool at http://dl.acm.org/ccs.cfm.
%% Please copy and paste the code instead of the example below.
%%
\begin{CCSXML}
<ccs2012>
<concept>
<concept_id>10010520.10010553.10010562</concept_id>
<concept_desc>Computer systems organization~Embedded systems</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10010520.10010575.10010755</concept_id>
<concept_desc>Computer systems organization~Redundancy</concept_desc>
<concept_significance>300</concept_significance>
</concept>
<concept>
<concept_id>10010520.10010553.10010554</concept_id>
<concept_desc>Computer systems organization~Robotics</concept_desc>
<concept_significance>100</concept_significance>
</concept>
<concept>
<concept_id>10003033.10003083.10003095</concept_id>
<concept_desc>Networks~Network reliability</concept_desc>
<concept_significance>100</concept_significance>
</concept>
</ccs2012>
\end{CCSXML}
\ccsdesc[500]{Computer systems organization~Embedded systems}
\ccsdesc[300]{Computer systems organization~Redundancy}
\ccsdesc{Computer systems organization~Robotics}
\ccsdesc[100]{Networks~Network reliability}
%%
%% Keywords. The author(s) should pick words that accurately describe
%% the work being presented. Separate the keywords with commas.
\keywords{datasets, neural networks, gaze detection, text tagging}
%% A "teaser" image appears between the author and affiliation
%% information and the body of the document, and typically spans the
%% page.
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{Seattle Mariners at Spring Training, 2010.}
\Description{Enjoying the baseball game from the third-base
seats. Ichiro Suzuki preparing to bat.}
\label{fig:teaser}
\end{teaserfigure}
%%
%% This command processes the author and affiliation and title
%% information and builds the first part of the formatted document.
\maketitle
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{<NAME>, <NAME>}
\email{dave,judy,<EMAIL>}
\email{<EMAIL>}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what’s in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption – their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``<NAME>'') not initials
(``<NAME>'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim}
where ``\verb|bibfile|'' is the name, without the ``\verb|.bib|''
suffix, of the \BibTeX\ file.
Citations and references are numbered by default. A small number of
ACM publications have citations and references formatted in the
``author year'' style; for these exceptions, please include this
command in the {\bfseries preamble} (before the command
``\verb|\begin{document}|'') of your \LaTeX\ source:
\begin{verbatim}
\citestyle{acmauthoryear}
\end{verbatim}
Some examples. A paginated journal article \cite{Abril07}, an
enumerated journal article \cite{Cohen07}, a reference to an entire
issue \cite{JCohen96}, a monograph (whole book) \cite{Kosiur01}, a
monograph/whole book in a series (see 2a in spec. document)
\cite{Harel79}, a divisible-book such as an anthology or compilation
\cite{Editor00} followed by the same example, however we only output
the series if the volume number is given \cite{Editor00a} (so
Editor00a's series should NOT be present since it has no vol. no.),
a chapter in a divisible book \cite{Spector90}, a chapter in a
divisible book in a series \cite{Douglass98}, a multi-volume work as
book \cite{Knuth97}, a couple of articles in a proceedings (of a
conference, symposium, workshop for example) (paginated proceedings
article) \cite{Andler79, Hagerup1993}, a proceedings article with
all possible elements \cite{Smith10}, an example of an enumerated
proceedings article \cite{VanGundy07}, an informally published work
\cite{Harel78}, a couple of preprints \cite{Bornmann2019,
AnzarootPBM14}, a doctoral dissertation \cite{Clarkson85}, a
master's thesis: \cite{anisi03}, an online document / world wide web
resource \cite{Thornburg01, Ablamowicz07, Poker06}, a video game
(Case 1) \cite{Obama08} and (Case 2) \cite{Novak03} and \cite{Lee05}
and (Case 3) a patent \cite{JoeScientist001}, work accepted for
publication \cite{rous08}, 'YYYYb'-test for prolific author
\cite{SaeediMEJ10} and \cite{SaeediJETC10}. Other cites might
contain 'duplicate' DOI and URLs (some SIAM articles)
\cite{Kirschmer:2010:AEI:1958016.1958018}. Boris / <NAME>:
multi-volume works as books \cite{MR781536} and \cite{MR781537}. A
couple of citations with DOIs:
\cite{2004:ITE:1009386.1010128,Kirschmer:2010:AEI:1958016.1958018}. Online
citations: \cite{TUGInstmem, Thornburg01, CTANacmart}. Artifacts:
\cite{R} and \cite{UMassCitations}.
\section{Acknowledgments}
Identification of funding sources and other support, and thanks to
individuals and groups that assisted in the research and the
preparation of the work should be included in an acknowledgment
section, which is placed just before the reference section in your
document.
This section has a special environment:
\begin{verbatim}
\begin{acks}
...
\end{acks}
\end{verbatim}
so that the information contained therein can be more easily collected
during the article metadata extraction phase, and to ensure
consistency in the spelling of the section heading.
Authors should not prepare this section as a numbered or unnumbered {\verb|\section|}; please use the ``{\verb|acks|}'' environment.
\section{Appendices}
If your work needs an appendix, add it before the
``\verb|\end{document}|'' command at the conclusion of your source
document.
Start the appendix with the ``\verb|appendix|'' command:
\begin{verbatim}
\appendix
\end{verbatim}
and note that in the appendix, sections are lettered, not
numbered. This document has two appendices, demonstrating the section
and subsection identification method.
\section{SIGCHI Extended Abstracts}
The ``\verb|sigchi-a|'' template style (available only in \LaTeX\ and
not in Word) produces a landscape-orientation formatted article, with
a wide left margin. Three environments are available for use with the
``\verb|sigchi-a|'' template style, and produce formatted output in
the margin:
\begin{itemize}
\item {\verb|sidebar|}: Place formatted text in the margin.
\item {\verb|marginfigure|}: Place a figure in the margin.
\item {\verb|margintable|}: Place a table in the margin.
\end{itemize}
%%
%% The acknowledgments section is defined using the "acks" environment
%% (and NOT an unnumbered section). This ensures the proper
%% identification of the section in the article metadata, and the
%% consistent spelling of the heading.
\begin{acks}
To Robert, for the bagels and explaining CMYK and color spaces.
\end{acks}
%%
%% The next two lines define the bibliography style to be used, and
%% the bibliography file.
\bibliographystyle{ACM-Reference-Format}
\bibliography{sample-base}
%%
%% If your work has an appendix, this is the place to put it.
\appendix
\section{Research Methods}
\subsection{Part One}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi
malesuada, quam in pulvinar varius, metus nunc fermentum urna, id
sollicitudin purus odio sit amet enim. Aliquam ullamcorper eu ipsum
vel mollis. Curabitur quis dictum nisl. Phasellus vel semper risus, et
lacinia dolor. Integer ultricies commodo sem nec semper.
\subsection{Part Two}
Etiam commodo feugiat nisl pulvinar pellentesque. Etiam auctor sodales
ligula, non varius nibh pulvinar semper. Suspendisse nec lectus non
ipsum convallis congue hendrerit vitae sapien. Donec at laoreet
eros. Vivamus non purus placerat, scelerisque diam eu, cursus
ante. Etiam aliquam tortor auctor efficitur mattis.
\section{Online Resources}
Nam id fermentum dui. Suspendisse sagittis tortor a nulla mollis, in
pulvinar ex pretium. Sed interdum orci quis metus euismod, et sagittis
enim maximus. Vestibulum gravida massa ut felis suscipit
congue. Quisque mattis elit a risus ultrices commodo venenatis eget
dui. Etiam sagittis eleifend elementum.
Nam interdum magna at lectus dignissim, ac dignissim lorem
rhoncus. Maecenas eu arcu ac neque placerat aliquam. Nunc pulvinar
massa et mattis lacinia.
\end{document}
\endinput
%%
%% End of file `sample-sigconf.tex'."""
# %run reader.py
reader = Reader(ACM)
for token in reader:
    print(token)
| legacy/tex/Tests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Frequency
# To use the data transformation script `Frequency.pl`, we provide it with a single input file followed by what we want it to name the output file it creates and a channel number:
#
# `$ perl ./perl/Frequency.pl [inputFile1 inputFile2 ...] [outputFile1 outputFile2 ...] [column] [binType switch] [binValue]`
#
# The last two values have a peculiar usage compared to the other transformation scripts. Here, `binType` is a switch that can be either `0` or `1` to tell the script how you want to divide the data into bins; this choice then determines what the `binValue` parameter means. The choices are
#
# 0: Divide the data into a number of bins equal to `binValue`
# 1: Divide the data into bins of width `binValue` (in nanoseconds)
#
# It isn't immediately obvious what this means, though, or what the `column` parameter does. We'll try it out on the test data in the `test_data` directory. Use the UNIX shell command `$ ls test_data` to see what's there:
# !ls test_data
# Let's start simple, using a single input file and a single output file. We'll run
#
# `$ perl ./perl/Frequency.pl test_data/6148.2016.0109.0.test.thresh test_data/freqOut01 1 1 2`
#
# to see what happens. The `binType` switch is set to the e-Lab default of `1`, "bin by fixed width," and the value of that fixed width is set to the e-Lab default of `2` ns. Notice that we've named the output file `freqOut01`; we may have to do lots of experimentation to figure out what exactly the transformation `Frequency.pl` does, so we'll increment that number each time to keep a record of our progress. The `column` parameter is `1`.
#
# Before we begin, we'll make sure we know what the input file looks like. The UNIX `wc` (word count) utility tells us that `6148.2016.0109.0.test.thresh` has over a thousand lines:
# !wc -l test_data/6148.2016.0109.0.test.thresh
# (The `-l` flag tells `wc` to count lines instead of words. The first number in the output, before the filename, is the line count, in this case 1003.)
# The UNIX `head` utility will show us the beginning of the file:
# !head -25 test_data/6148.2016.0109.0.test.thresh
# Now, we'll execute
#
# `$ perl ./perl/Frequency.pl test_data/6148.2016.0109.0.test.thresh test_data/freqOut01 1 1 2`
#
# from the command line and see what changes. After doing so, we can see that `freqOut01` was created in the `test_data/` folder, so we must be on the right track:
# !ls test_data
# !wc -l test_data/freqOut01
# It only has one line, though! Better investigate further:
# !cat test_data/freqOut01
#
# It turns out that `SingleChannel` has a little bit more power, though. It can actually handle multiple single channels at a time, as odd as that might sound. We'll try specifying additional channels while adding additional respective output names for them:
#
# `$ perl ./perl/SingleChannel.pl test_data/6148.2016.0109.0.test.thresh "test_data/singleChannelOut1 test_data/singleChannelOut2 test_data/singleChannelOut3 test_data/singleChannelOut4" "1 2 3 4"`
#
# (for multiple channels/outputs, we have to add quotes `"` to make sure `SingleChannel` knows which arguments are the output filenames and which are the channel numbers)
#
# If we run this from the command line, we do in fact get four separate output files:
# !ls -1 test_data/
# Out of curiosity, let's line-count them using the UNIX `wc` utility:
# !wc -l test_data/singleChannelOut1
# !wc -l test_data/singleChannelOut2
# !wc -l test_data/singleChannelOut3
# !wc -l test_data/singleChannelOut4
# Recall that the original input threshold file `6148.2016.0109.0.test.thresh` had 1003 lines: three header lines and 1000 data lines.
# **Exercise 1**
#
# Add the line counts of the four output files above. Do you get what you expect?
# **Exercise 2**
#
# In a well-functioning cosmic ray muon detector using 4 channels, what percentage of the total number of counts do you expect each channel to record? Using the example above of a file with 1000 counts, how many counts would you expect each channel to have? If the actual results differ from what you would have expected, try to explain why.
# **Exercise 3**
#
# Find a file with a much larger number of counts (that is, lines) than `6148.2016.0109.0.test.thresh` has, perhaps in the `files/` directory. Repeat the above process of using `SingleChannel` to separate the file into individual-channel files, naming the outputs `test_data/singleChannelOut-Big1`, `test_data/singleChannelOut-Big2`, etc.
#
# Calculate what percentage of the total number of counts each output file has. How do these compare to your expectations? How do they compare to the 1000 counts of `6148.2016.0109.0.test.thresh`?
# **A Word of Warning**
#
# If you've been playing around with word counts for a bit, you may have noticed that `SingleChannel` has a quirk: if you specify an output file that already exists, `SingleChannel` will *add to* the existing file rather than replacing it with the new output. Most of the other e-Lab data transformations will replace the existing file, so this may represent a bug in this particular script.
#
# *Be aware of this when running similar commands multiple times!*
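# Given this quirk, one defensive habit before re-running the same command is to delete any stale outputs first (a sketch in shell; the wildcard assumes the `singleChannelOut` naming used above):

```shell
# Remove previous single-channel outputs so a re-run starts from scratch.
# The -f flag keeps rm quiet if no such files exist yet.
rm -f test_data/singleChannelOut*
```

# After this, re-running the `SingleChannel` command produces fresh files instead of silently appending to old ones.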
| Analysis/script_Frequency.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Data C102 Fall 2021 Final Project - Steven
# My contributions to the final project.
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
sns.set_style(style='darkgrid')
plt.style.use('ggplot')
# %matplotlib inline
# -
# ## Data cleaning
# Code ~~stolen~~ adapted from the main project notebook
# Load data into DataFrames
asthma = pd.read_csv('data/asthma.csv')
pm25 = pd.read_csv('data/pm25.csv')
states = pd.read_csv('data/states.csv')
fips = pd.read_csv('https://gist.githubusercontent.com/dantonnoriega/bf1acd2290e15b91e6710b6fd3be0a53/raw/11d15233327c8080c9646c7e1f23052659db251d/us-state-ansi-fips.csv')
state_pops = pd.read_csv('data/nst-est2019-alldata.csv')
# *For hypothesis testing*:
# + jupyter={"outputs_hidden": true}
# Add divisions to the asthma data
asthma_states = asthma.merge(states, left_on='LocationAbbr', right_on='State Code').drop(columns=['State', 'State Code'])
asthma_states.head()
# + jupyter={"outputs_hidden": true}
# Query for overall age-adjusted prevalence
asthma_aap = asthma_states.query(
'Question == "Current asthma prevalence among adults aged >= 18 years"' +
'& StratificationCategory1 == "Overall"' +
'& DataValueType == "Age-adjusted Prevalence"' # Asthma prevalence is expressed as a percentage of the overall population
)[['YearStart', 'LocationAbbr', 'LocationDesc', 'Division', 'DataValue']].rename(
columns={'YearStart': 'year',
'LocationAbbr': 'state',
'LocationDesc': 'stname',
'Division': 'div',
'DataValue': 'aap'}
).reset_index().drop(columns='index')
asthma_aap.head()
# + jupyter={"outputs_hidden": true}
# Fill the only NA value with the average age-adjusted prevalence in NJ
NJ_aap_mean = round(asthma_aap.query('state == "NJ"').mean()['aap'], 1)
asthma_aap = asthma_aap.fillna(value={'aap': NJ_aap_mean})
asthma_aap.query('state == "NJ"')
# +
# Calculate weighted average age-adjusted prevalence for each division
state_pop_means_df = pd.DataFrame( # Calculate mean population in each state over the years 2011-2019
list(
map(
lambda x: [x[0], round(np.mean(x[1:]), 0)],
state_pops.query('SUMLEV == 40')[['NAME'] + list(state_pops.columns[8:17])].to_numpy()
)
)
).rename(columns={1: 'pop_mean'})
asthma_aap_pop_means = asthma_aap.merge(# Merge mean population with AAP DataFrame
state_pop_means_df,
left_on='stname',
right_on=0
).drop(columns=0)
asthma_aap_pop_means['asthma_est'] = (asthma_aap_pop_means['aap'] * asthma_aap_pop_means['pop_mean'] / 100).apply( # Calculate estimated number of people with asthma
lambda x: round(x, 0)
)
asthma_div_agg = asthma_aap_pop_means.groupby(# Add up the components for calculating the weighted averages
['year', 'div']
)[['pop_mean', 'asthma_est']].sum()
asthma_aap_div = (100 * asthma_div_agg['asthma_est'] / asthma_div_agg['pop_mean']).apply(
lambda x: round(x, 1)
).unstack( # Calculate the weighted averages
level=0
)
asthma_aap_div
# + jupyter={"outputs_hidden": true}
asthma_aap_div_melt = asthma_aap_div.melt(
ignore_index=False
).reset_index().rename(
columns={'value': 'aap_w'}
).sort_values(['div', 'year'], ignore_index=True)
asthma_aap_div_melt.head(10)
# -
# *For causal inference*:
# + jupyter={"outputs_hidden": true}
# Add state names to the PM2.5 data
pm25_states = pm25.merge(
fips,
left_on='statefips',
right_on=' st'
).drop(
columns=['ds_pm_stdd', 'statefips', ' st']
).rename(
columns={' stusps': 'state'}
)[['year', 'state', 'stname', 'ds_pm_pred']]
pm25_states.head()
# + jupyter={"outputs_hidden": true}
# Merge AAP data with PM2.5 data
pm25_asthma = pm25_states.merge(
asthma_aap,
how='left',
on=['year', 'stname']
).drop(
columns='state_y'
).rename(
columns={'state_x': 'state'}
)[['year', 'state', 'div', 'ds_pm_pred', 'aap']]
pm25_asthma.head()
# -
# ## Multiple hypothesis testing
# First, we check visually that the assumption of normality is valid:
# + jupyter={"outputs_hidden": true}
# Adapted from Brighten's notebook
for div in asthma_aap_div_melt.value_counts('div').index:
plt.figure(div);
plt.hist(asthma_aap_div_melt.query('div == "' + div + '"')['aap_w'], density=1, color='c');
plt.xlabel('aap');
plt.title(div);
# + jupyter={"outputs_hidden": true}
# Also adapted from Brighten (not stealing, I promise!)
for div in asthma_aap_div_melt.value_counts('div').index:
plt.figure(div);
sm.qqplot(asthma_aap_div_melt.query('div == "' + div + '"')['aap_w'], line='45', fit=True);
plt.xlabel('aap');
plt.title(div);
# -
# Statistically, we assume that for a given state, each sampled proportion is independently and identically distributed according to some normal distribution particular to that state. The weighted mean for a given division is thus a linear combination of normally distributed random variables, so it should also be normally distributed.
#
# Using the weighted rates, we perform two-sided $t$-tests between every pair of divisions:
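# As a quick numerical check of the normality argument above (a simulation with made-up state means and population weights, not the real data), a weighted average of independent normal draws matches the normal distribution predicted for it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "states": each has its own normal prevalence distribution and a population share.
means = np.array([8.5, 9.1, 10.2])
sds = np.array([0.4, 0.6, 0.5])
weights = np.array([0.5, 0.3, 0.2])  # population shares, summing to 1

# Draw many simulated "years" and form the weighted division-level mean each time.
samples = rng.normal(means, sds, size=(100_000, 3))
weighted = samples @ weights

# A weighted sum of independent normals is normal with these parameters:
expected_mean = weights @ means
expected_sd = np.sqrt((weights ** 2 * sds ** 2).sum())

print(weighted.mean(), expected_mean)
print(weighted.std(), expected_sd)
```

# The simulated mean and standard deviation agree closely with the closed-form values, which is what the t-tests below rely on.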
asthma_aap_div
aap_samples = asthma_aap_div.values
aap_samples
divs = list(asthma_aap_div.index)
divs
# +
from scipy.stats import ttest_ind_from_stats
def ttest_ind_props(sample1, sample2):
n1 = len(sample1)
n2 = len(sample2)
phat1 = np.mean(sample1)
phat2 = np.mean(sample2)
s_phat1 = np.sqrt(phat1 * (1 - phat1) / (n1 - 1))
s_phat2 = np.sqrt(phat2 * (1 - phat2) / (n2 - 1))
return ttest_ind_from_stats(
mean1=phat1, std1=s_phat1, nobs1=n1,
mean2=phat2, std2=s_phat2, nobs2=n2,
alternative='two-sided'
)
# -
ttest_ind_props(aap_samples[8] / 100, aap_samples[4] / 100) # Why so large?
# +
from scipy.stats import ttest_rel, mannwhitneyu
mannwhitneyu(aap_samples[8] / 100, aap_samples[4] / 100)
# -
# We can get nice, rejectable p-values if we use `ttest_rel`, which:
#
# 'Calculate the t-test on TWO RELATED samples of scores, a and b.
#
# This is a two-sided test for the null hypothesis that 2 related or repeated samples have identical average (expected) values.'
#
# It would be nice if we could justify that the samples are related. I'm wondering if we can use the fact that our hypothesis assumes that all of these distributions are the same (i.e., geographic location does NOT affect asthma prevalence).
# +
p_vals = []
left_region = []
right_region = []
for i in np.arange(9):
for j in np.arange(9):
if i==j:
continue
elif divs[j] in left_region and divs[i] in right_region:
continue
else:
p_vals.append(ttest_rel(aap_samples[i] / 100, aap_samples[j] / 100)[1])
left_region.append(divs[i])
right_region.append(divs[j])
# +
from scipy.stats import mannwhitneyu
p_vals = []
left_region = []
right_region = []
for i in np.arange(9):
for j in np.arange(9):
if i==j:
continue
elif divs[j] in left_region and divs[i] in right_region:
continue
else:
p_vals.append(mannwhitneyu(aap_samples[i] / 100, aap_samples[j] / 100)[1])
left_region.append(divs[i])
right_region.append(divs[j])
p_vals
# +
# Borrowed from lab01. Note: B-H requires the null p-values to be independent.
alpha = 0.05
def benjamini_hochberg(p_values, alpha):
"""
Returns decisions on p-values using Benjamini-Hochberg.
Inputs:
p_values: array of p-values
alpha: desired FDR (FDR = E[# false positives / # positives])
Returns:
decisions: binary array of same length as p-values, where `decisions[i]` is 1
if `p_values[i]` is deemed significant, and 0 otherwise
"""
n = len(p_values)
K = np.arange(n)
p_values_copy = p_values.copy()
p_values_copy.sort()
opt_p = 0
for k in K:
        if p_values_copy[k] <= (k+1)*alpha/n:
opt_p = p_values_copy[k]
decisions = p_values <= opt_p
return decisions
#Bonferroni also from lab01
def bonferroni(p_values, alpha_total):
"""
Returns decisions on p-values using the Bonferroni correction.
Inputs:
p_values: array of p-values
alpha_total: desired family-wise error rate (FWER = P(at least one false discovery))
Returns:
decisions: binary array of same length as p-values, where `decisions[i]` is 1
if `p_values[i]` is deemed significant, and 0 otherwise
"""
m = len(p_values)
decisions = p_values <= (alpha_total/m)
return decisions
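# As a toy sanity check of the two corrections (self-contained, restating the same logic compactly rather than calling the functions above):

```python
import numpy as np

p = np.array([0.01, 0.02, 0.03, 0.5])
alpha = 0.05
n = len(p)

# Benjamini-Hochberg: find the largest k with p_(k) <= (k+1) * alpha / n,
# then reject every p-value at or below that threshold.
order = np.sort(p)
passing = order <= (np.arange(1, n + 1) * alpha / n)
bh_threshold = order[passing].max() if passing.any() else 0.0
bh_reject = p <= bh_threshold

# Bonferroni: compare every p-value to alpha / n.
bon_reject = p <= alpha / n

print(bh_reject)   # [ True  True  True False]
print(bon_reject)  # [ True False False False]
```

# BH rejects all three small p-values while the stricter Bonferroni bound keeps only the smallest, illustrating the trade-off between controlling FDR and controlling FWER.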
# +
BH_decisions = benjamini_hochberg(np.array(p_vals), alpha)
bon_decisions = bonferroni(np.array(p_vals), alpha)
#there is definitely a prettier way to do this, but I like for loops soooo
decisions = []
for i in np.arange(len(BH_decisions)):
decisions.append(int(BH_decisions[i] and bon_decisions[i]))
dec_df = pd.DataFrame({"left":left_region, "right":right_region, "reject_null":decisions})
dec_df
sum(dec_df['reject_null'])
# +
import seaborn as sns
plt.figure(figsize = (15,10))
sns.boxplot(data=asthma_aap_div.T);
# -
# ## Graveyard
# Code that didn't make the cut
# + jupyter={"outputs_hidden": true}
asthma_aap_list = asthma_aap.sort_values(
['stname', 'year']
).groupby(
'stname'
).agg(
{'aap': list}
).reset_index().rename(
columns={'aap': 'aaps'}
).merge(states, left_on='stname', right_on='State').drop(
columns=['State', 'State Code', 'Region']
).rename(
columns={'Division': 'div'}
)[['stname', 'div', 'aaps']]
asthma_aap_list
# + jupyter={"outputs_hidden": true}
NJ_means = asthma_aap_list.iloc[30, 1]
NJ_means[8] = round(np.mean(NJ_means[0:7]), 1)
asthma_aap_list
# + jupyter={"outputs_hidden": true}
asthma_aap.value_counts('year')
# + jupyter={"outputs_hidden": true}
state_pops.info(verbose=True)
# + jupyter={"outputs_hidden": true}
state_pops_list = state_pops.query('SUMLEV == 40').melt(
id_vars='NAME',
value_vars=state_pops.columns[8:17]
)[['NAME', 'value']].groupby('NAME').agg(list).reset_index().rename(
columns={'NAME': 'stname', 'value': 'pops'}
)
state_pops_list
# + jupyter={"outputs_hidden": true}
asthma_aap_pops = asthma_aap_list.merge(state_pops_list, on='stname')
asthma_aap_pops
# + jupyter={"outputs_hidden": true}
state_pop_means = asthma_aap_pops['pops'].apply(lambda x: int(round(np.mean(x), 0))).to_numpy()
asthma_aap_pops
# -
| data102-steven.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import keras
from keras.models import Model
from keras.layers import Dense, Dropout, Flatten
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
from hyperas.distributions import uniform
import tensorflow as tf
import random
import os
from tqdm import tqdm
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
import cv2
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
# Any results you write to the current directory are saved as output.
# +
######### VGG19 parameters ########
dontFreezeLast = 2
patience = 30
loadWeights = False
saveWeights = False
tensorboard_dir = '../tb/catsdogs/try_64_dense90_64_90x2_120_drop05'
if not os.path.exists(tensorboard_dir):
    os.makedirs(tensorboard_dir)
checkPointPath = tensorboard_dir + '/best_weights.hdf5'
####################################
gpuName = '/device:GPU:1'
tensorboardFlag = True
workers = 10
histogram_freq = 0
batchSize = 64
epochs = 100
validation_size = 0.3
# -
# First we will read in the csv's so we can see some more information on the filenames and breeds
# +
df_train = pd.read_csv('../input/labels.csv')
df_test = pd.read_csv('../input/sample_submission.csv')
print('Training images: ',df_train.shape[0])
print('Test images: ',df_test.shape[0])
# optionally subsample the data for quicker experimentation
#df_train = df_train.head(100)
#df_test = df_test.head(100)
# -
df_train.head(10)
# We can see that the breed needs to be one-hot encoded for the final submission, so we will now do this.
targets_series = pd.Series(df_train['breed'])
one_hot = pd.get_dummies(targets_series, sparse = True)
one_hot_labels = np.asarray(one_hot)
# Next we will read in all of the images for test and train, using a for loop through the values of the csv files. I have also set an im_size variable which sets the size for the image to be re-sized to (90x90 px); you should experiment with this number to see how it affects accuracy.
im_size = 90
x_train = []
y_train = []
x_test = []
i = 0
for f, breed in tqdm(df_train.values):
img = cv2.imread('../input/train/{}.jpg'.format(f))
label = one_hot_labels[i]
x_train.append(cv2.resize(img, (im_size, im_size)))
y_train.append(label)
i += 1
#for f in tqdm(df_test['id'].values):
# img = cv2.imread('../input/test/{}.jpg'.format(f))
# x_test.append(cv2.resize(img, (im_size, im_size)))
y_train_raw = np.array(y_train, np.uint8)
x_train_raw = np.array(x_train, np.float32) / 255.
x_test = np.array(x_test, np.float32) / 255.
# We check the shape of the outputs to make sure everything went as expected.
print(x_train_raw.shape)
print(y_train_raw.shape)
print(x_test.shape)
# We can see above that there are 120 different breeds. We can put this in a num_class variable below that can then be used when creating the CNN model.
num_class = y_train_raw.shape[1]
print('Number of classes: ', num_class)
# It is important to create a validation set so that you can gauge the performance of your model on independent data, unseen by the model during training. We do this by splitting the current training set (x_train_raw) and the corresponding labels (y_train_raw), setting aside 30% of the data at random as validation sets (X_valid and Y_valid).
#
# * This split should be improved so that it contains images from every class; with 120 separate classes, some may not be represented at all, making the validation score uninformative.
X_train, X_valid, Y_train, Y_valid = train_test_split(x_train_raw, y_train_raw, test_size=validation_size, random_state=1)
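One possible fix for the class-coverage issue mentioned above (an assumption, not part of the original notebook) is `train_test_split`'s `stratify` parameter, which splits each class proportionally. Sketched here on synthetic one-hot labels — the names and sizes are made up for illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 300 samples, 5 classes, one-hot labels like y_train_raw
rng = np.random.default_rng(1)
class_ids = rng.integers(0, 5, size=300)
y_demo = np.eye(5, dtype=np.uint8)[class_ids]
x_demo = rng.random((300, 8)).astype(np.float32)

# Stratifying on the integer class id keeps every class in both splits
xa, xb, ya, yb = train_test_split(
    x_demo, y_demo, test_size=0.3, random_state=1,
    stratify=y_demo.argmax(axis=1))
print(len(np.unique(yb.argmax(axis=1))))  # all 5 classes appear in validation
```

For the real data, the same idea would be `stratify=y_train_raw.argmax(axis=1)`, provided every breed has at least two images.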
# Now we build the CNN architecture. Here we are using a pre-trained model VGG19 which has already been trained to identify many different dog breeds (as well as a lot of other objects from the imagenet dataset, see here for more information: http://image-net.org/about-overview). Unfortunately it doesn't seem possible to download the weights from within this kernel, so make sure you set the weights argument to 'imagenet' and not None, as it currently is below.
#
# We then remove the final layer and instead replace it with a single dense layer with the number of nodes corresponding to the number of breed classes we have (120).
# Create the base pre-trained model
# Can't download weights in the kernel
with tf.device(gpuName):
dropout_rate = 0.5
if K.image_data_format() == 'channels_first':
input_shape = (3, im_size, im_size)
else:
input_shape = (im_size, im_size, 3)
model = Sequential()
model.add(Conv2D(64, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(Dense(90))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(rate = dropout_rate,noise_shape=None, seed=None))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(90, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(90))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(120))
model.add(Activation('softmax'))
##### Mattia's model #####
# Add a new top layer
#x = base_model.output
#x = Flatten()(x)
#x = Dense(1024,activation='relu')(x)
#x = Dense(512,activation='relu')(x)
#predictions = Dense(num_class, activation='softmax')(x)
# This is the model we will train
#model = Model(inputs=base_model.input, outputs=predictions)
# First: train only the top layers (which were randomly initialized)
#for i in range(len(base_model.layers)-dontFreezeLast):
#base_model.layers[i].trainable = False
#if loadWeights:
# model.load_weights(checkPointPath)
##### Mattia's model #####
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
callbacks_list = [];
callbacks_list.append(keras.callbacks.EarlyStopping(
monitor='val_acc',
patience=patience,
verbose=1));
if saveWeights:
callbacks_list.append(keras.callbacks.ModelCheckpoint(
checkPointPath,
monitor='val_acc',
verbose=1,
save_best_only=True,
mode='max',
save_weights_only=True))
if tensorboardFlag:
callbacks_list.append(keras.callbacks.TensorBoard(
log_dir=tensorboard_dir,
histogram_freq=histogram_freq,
write_graph=False,
write_images=False));
print('Tensorboard activated in directory: ',tensorboard_dir)
else:
print('Tensorboard NOT activated')
model.summary()
def generator(X, Y, batch_size):
    # Pre-allocate one reusable batch of features and labels
    batch_features = np.ndarray(shape=(batch_size,) + X.shape[1:], dtype=X.dtype)
    batch_labels = np.ndarray(shape=(batch_size,) + Y.shape[1:], dtype=Y.dtype)
    N = X.shape[0]
    while True:
        for i in range(batch_size):
            # choose a random index into the features
            index = np.random.choice(N, 1)
            batch_features[i] = X[index]
            batch_labels[i] = Y[index]
        yield batch_features, batch_labels
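To show how this kind of generator yields fixed-size random batches (it is currently unused, since the `fit_generator` call is commented out below), here is a tiny standalone demo on toy arrays; the sampling logic is restated so the cell runs on its own:

```python
import numpy as np

def demo_generator(X, Y, batch_size):
    # Same idea as generator() above, restated for a self-contained demo:
    # loop forever, emitting a randomly sampled batch each time.
    N = X.shape[0]
    while True:
        idx = np.random.choice(N, batch_size)
        yield X[idx], Y[idx]

x_toy = np.arange(20, dtype=np.float32).reshape(10, 2)
y_toy = np.eye(10, dtype=np.uint8)
bx, by = next(demo_generator(x_toy, y_toy, 4))
print(bx.shape, by.shape)  # (4, 2) (4, 10)
```

Keras pulls from such a generator indefinitely, so `steps_per_epoch` controls how many batches make up one epoch.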
# +
model.fit(X_train, Y_train,
epochs=epochs,
batch_size = batchSize,
validation_data=(X_valid, Y_valid),
verbose=1,
callbacks=callbacks_list)
# steps_per_epoch = round(X_train.shape[0]/batchSize)
# model.fit_generator(generator(X_train,Y_train,batchSize),
# steps_per_epoch=steps_per_epoch,
# epochs=epochs,
# verbose=1,
# callbacks=callbacks_list,
# validation_data=(X_valid,Y_valid),
# workers=workers,
# use_multiprocessing=True)
# -
# Remember, accuracy is low here because we are not taking advantage of the pre-trained weights as they cannot be downloaded in the kernel. This means we are training the weights from scratch, and we have only run 1 epoch due to the hardware constraints in the kernel.
#
# Next we will make our predictions.
#preds = model.predict(x_test, verbose=1)
#sub = pd.DataFrame(preds)
## Set column names to those generated by the one-hot encoding earlier
#col_names = one_hot.columns.values
#sub.columns = col_names
## Insert the column id from the sample_submission at the start of the data frame
#sub.insert(0, 'id', df_test['id'])
#sub.head(10)
| code/Keras Dogs and Cats Adapted.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="W_tvPdyfA-BL"
# ##### Copyright 2018 The TensorFlow Authors.
# + cellView="form" id="0O_LFhwSBCjm"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="PWUmcKKjtwXL"
# # Transfer learning with TensorFlow Hub
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tutorials/images/transfer_learning_with_hub"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning_with_hub.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning_with_hub.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/transfer_learning_with_hub.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# <td>
# <a href="https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
# </td>
# </table>
# + [markdown] id="crU-iluJIEzw"
# [TensorFlow Hub](https://tfhub.dev/) is a repository of pre-trained TensorFlow models.
#
# This tutorial demonstrates how to:
#
# 1. Use models from TensorFlow Hub with `tf.keras`
# 1. Use an image classification model from TensorFlow Hub
# 1. Do simple transfer learning to fine-tune a model for your own image classes
# + [markdown] id="CKFUvuEho9Th"
# ## Setup
# + id="OGNpmn43C0O6"
import numpy as np
import time
import PIL.Image as Image
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
# + [markdown] id="s4YuF5HvpM1W"
# ## An ImageNet classifier
#
# You'll start by using a pretrained classifier model to take an image and predict what it's an image of - no training required!
# + [markdown] id="xEY_Ow5loN6q"
# ### Download the classifier
#
# Use `hub.KerasLayer` to load a [MobileNetV2 model](https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/2) from TensorFlow Hub. Any [compatible image classifier model](https://tfhub.dev/s?q=tf2&module-type=image-classification) from TensorFlow Hub will work here.
# + cellView="form" id="feiXojVXAbI9"
classifier_model ="https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4" #@param {type:"string"}
# + id="y_6bGjoPtzau"
IMAGE_SHAPE = (224, 224)
classifier = tf.keras.Sequential([
hub.KerasLayer(classifier_model, input_shape=IMAGE_SHAPE+(3,))
])
# + [markdown] id="pwZXaoV0uXp2"
# ### Run it on a single image
# + [markdown] id="TQItP1i55-di"
# Download a single image to try the model on.
# + id="w5wDjXNjuXGD"
grace_hopper = tf.keras.utils.get_file('image.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
grace_hopper = Image.open(grace_hopper).resize(IMAGE_SHAPE)
grace_hopper
# + id="BEmmBnGbLxPp"
grace_hopper = np.array(grace_hopper)/255.0
grace_hopper.shape
# + [markdown] id="0Ic8OEEo2b73"
# Add a batch dimension, and pass the image to the model.
# + id="EMquyn29v8q3"
result = classifier.predict(grace_hopper[np.newaxis, ...])
result.shape
# + [markdown] id="NKzjqENF6jDF"
# The result is a 1001 element vector of logits, rating the probability of each class for the image.
#
# So the top class ID can be found with argmax:
# + id="rgXb44vt6goJ"
predicted_class = np.argmax(result[0], axis=-1)
predicted_class
# + [markdown] id="YrxLMajMoxkf"
# ### Decode the predictions
#
# Take the predicted class ID and fetch the `ImageNet` labels to decode the predictions
# + id="ij6SrDxcxzry"
labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
# + id="uzziRK3Z2VQo"
plt.imshow(grace_hopper)
plt.axis('off')
predicted_class_name = imagenet_labels[predicted_class]
_ = plt.title("Prediction: " + predicted_class_name.title())
# + [markdown] id="amfzqn1Oo7Om"
# ## Simple transfer learning
# + [markdown] id="K-nIpVJ94xrw"
# But what if you want to train a classifier for a dataset with different classes? You can also use a model from TFHub to train a custom image classifier by retraining the top layer of the model to recognize the classes in our dataset.
# + [markdown] id="Z93vvAdGxDMD"
# ### Dataset
#
# For this example you will use the TensorFlow flowers dataset:
# + id="DrIUV3V0xDL_"
data_root = tf.keras.utils.get_file(
'flower_photos','https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
# + [markdown] id="jFHdp18ccah7"
# Let's load this data into our model using images off disk with `image_dataset_from_directory`.
# + id="mqnsczfLgcwv"
batch_size = 32
img_height = 224
img_width = 224
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
str(data_root),
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
# + [markdown] id="cCrSRlomEIZ4"
# The flowers dataset has five classes.
# + id="AFgDHs6VEFRD"
class_names = np.array(train_ds.class_names)
print(class_names)
# + [markdown] id="L0Btd0V3C8h4"
# TensorFlow Hub's convention for image models is to expect float inputs in the `[0, 1]` range. Use the `Rescaling` layer to achieve this.
# + [markdown] id="Rs6gfO-ApTQW"
# Note: you could also include the `Rescaling` layer inside the model. See this [guide](https://www.tensorflow.org/guide/keras/preprocessing_layers) for a discussion of the tradeoffs.
# + id="8NzDDWEMCL20"
normalization_layer = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)
train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
# + [markdown] id="IW-BUJ-NC7y-"
# Let's make sure to use buffered prefetching so we can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data.
#
# Interested readers can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance#prefetching).
# + id="ZmJMKFw7C4ki"
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
# + id="m0JyiEZ0imgf"
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
# + [markdown] id="0gTN7M_GxDLx"
# ### Run the classifier on a batch of images
# + [markdown] id="O3fvrZR8xDLv"
# Now run the classifier on the image batch.
# + id="pcFeNcrehEue"
result_batch = classifier.predict(train_ds)
# + id="-wK2ky45hlyS"
predicted_class_names = imagenet_labels[np.argmax(result_batch, axis=-1)]
predicted_class_names
# + [markdown] id="QmvSWg9nxDLa"
# Now check how these predictions line up with the images:
# + id="IXTB22SpxDLP"
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(predicted_class_names[n])
plt.axis('off')
_ = plt.suptitle("ImageNet predictions")
# + [markdown] id="FUa3YkvhxDLM"
# See the `LICENSE.txt` file for image attributions.
#
# The results are far from perfect, but reasonable considering that these are not the classes the model was trained for (except "daisy").
# + [markdown] id="JzV457OXreQP"
# ### Download the headless model
#
# TensorFlow Hub also distributes models without the top classification layer. These can be used to easily do transfer learning.
#
# Any [compatible image feature vector model](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) from TensorFlow Hub will work here.
# + id="4bw8Jf94DSnP"
feature_extractor_model = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4" #@param {type:"string"}
# + [markdown] id="sgwmHugQF-PD"
# Create the feature extractor. Use `trainable=False` to freeze the variables in the feature extractor layer, so that the training only modifies the new classifier layer.
# + id="5wB030nezBwI"
feature_extractor_layer = hub.KerasLayer(
feature_extractor_model, input_shape=(224, 224, 3), trainable=False)
# + [markdown] id="0QzVdu4ZhcDE"
# It returns a 1280-length vector for each image:
# + id="Of7i-35F09ls"
feature_batch = feature_extractor_layer(image_batch)
print(feature_batch.shape)
# + [markdown] id="RPVeouTksO9q"
# ### Attach a classification head
#
# Now wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
# + id="vQq_kCWzlqSu"
num_classes = len(class_names)
model = tf.keras.Sequential([
feature_extractor_layer,
tf.keras.layers.Dense(num_classes)
])
model.summary()
# + id="IyhX4VCFmzVS"
predictions = model(image_batch)
# + id="FQdUaTkzm3jQ"
predictions.shape
# + [markdown] id="OHbXQqIquFxQ"
# ### Train the model
#
# Use compile to configure the training process:
# + id="4xRx8Rjzm67O"
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
# + [markdown] id="58-BLV7dupJA"
# Now use the `.fit` method to train the model.
#
# To keep this example short train just 2 epochs. To visualize the training progress, use a custom callback to log the loss and accuracy of each batch individually, instead of the epoch average.
# + id="JI0yAKd-nARd"
class CollectBatchStats(tf.keras.callbacks.Callback):
def __init__(self):
self.batch_losses = []
self.batch_acc = []
def on_train_batch_end(self, batch, logs=None):
self.batch_losses.append(logs['loss'])
self.batch_acc.append(logs['acc'])
self.model.reset_metrics()
batch_stats_callback = CollectBatchStats()
history = model.fit(train_ds, epochs=2,
callbacks=[batch_stats_callback])
# + [markdown] id="Kd0N272B9Q0b"
# Now, after even just a few training iterations, we can already see that the model is making progress on the task.
# + id="A5RfS1QIIP-P"
plt.figure()
plt.ylabel("Loss")
plt.xlabel("Training Steps")
plt.ylim([0,2])
plt.plot(batch_stats_callback.batch_losses)
# + id="3uvX11avTiDg"
plt.figure()
plt.ylabel("Accuracy")
plt.xlabel("Training Steps")
plt.ylim([0,1])
plt.plot(batch_stats_callback.batch_acc)
# + [markdown] id="kb__ZN8uFn-D"
# ### Check the predictions
#
# To redo the plot from before, first get the ordered list of class names:
# + id="JGbEf5l1I4jz"
predicted_batch = model.predict(image_batch)
predicted_id = np.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id]
# + [markdown] id="CkGbZxl9GZs-"
# Plot the result
# + id="hW3Ic_ZlwtrZ"
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(predicted_label_batch[n].title())
plt.axis('off')
_ = plt.suptitle("Model predictions")
# + [markdown] id="uRcJnAABr22x"
# ## Export your model
#
# Now that you've trained the model, export it as a SavedModel for use later on.
# + id="PLcqg-RmsLno"
t = time.time()
export_path = "/tmp/saved_models/{}".format(int(t))
model.save(export_path)
export_path
# + [markdown] id="AhQ9liIUsPsi"
# Now confirm that we can reload it, and it still gives the same results:
# + id="7nI5fvkAQvbS"
reloaded = tf.keras.models.load_model(export_path)
# + id="jor83-LqI8xW"
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
# + id="dnZO14taYPH6"
abs(reloaded_result_batch - result_batch).max()
# + [markdown] id="TYZd4MNiV3Rc"
# This SavedModel can be loaded for inference later, or converted to [TFLite](https://www.tensorflow.org/lite/convert/) or [TFjs](https://github.com/tensorflow/tfjs-converter).
#
# + [markdown] id="mSBRrW-MqBbk"
# ## Learn more
#
# Check out more [tutorials](https://www.tensorflow.org/hub/tutorials) for using image models from TensorFlow Hub.
| site/en/tutorials/images/transfer_learning_with_hub.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:pythonengineer_env]
# language: python
# name: conda-env-pythonengineer_env-py
# ---
# # Function arguments
# In this article we will talk about function parameters and function arguments in detail. We will learn:
#
# - The difference between arguments and parameters
# - Positional and keyword arguments
# - Default arguments
# - Variable-length arguments (`*args` and `**kwargs`)
# - Container unpacking into function arguments
# - Local vs. global arguments
# - Parameter passing (by value or by reference?)
# ## Arguments and parameters
# - Parameters are the variables that are defined or used inside parentheses while defining a function
# - Arguments are the value passed for these parameters while calling a function
# +
def print_name(name): # name is the parameter
print(name)
print_name('Alex') # 'Alex' is the argument
# -
# ## Positional and keyword arguments
# We can pass arguments as positional or keyword arguments. Some benefits of keyword arguments can be:
# - We can call arguments by their names to make it more clear what they represent
# - We can rearrange arguments in a way that makes them most readable
# +
def foo(a, b, c):
print(a, b, c)
# positional arguments
foo(1, 2, 3)
# keyword arguments
foo(a=1, b=2, c=3)
foo(c=3, b=2, a=1) # Note that the order is not important here
# mix of both
foo(1, b=2, c=3)
# This is not allowed:
# foo(1, b=2, 3) # positional argument after keyword argument
# foo(1, b=2, a=3) # multiple values for argument 'a'
# -
# ## Default arguments
# Functions can have default arguments with a predefined value. This argument can be left out and the default value is then passed to the function, or the argument can be used with a different value. Note that default arguments must be defined as the last parameters in a function.
# +
# default arguments
def foo(a, b, c, d=4):
print(a, b, c, d)
foo(1, 2, 3, 4)
foo(1, b=2, c=3, d=100)
# not allowed: default arguments must be at the end
# def foo(a, b=2, c, d=4):
# print(a, b, c, d)
# -
# ## Variable-length arguments (`*args` and `**kwargs`)
# - If you mark a parameter with one asterisk (`*`), you can pass any number of positional arguments to your function (Typically called `*args`)
# - If you mark a parameter with two asterisks (`**`), you can pass any number of keyword arguments to this function (Typically called `**kwargs`).
# +
def foo(a, b, *args, **kwargs):
print(a, b)
for arg in args:
print(arg)
for kwarg in kwargs:
print(kwarg, kwargs[kwarg])
# 3, 4, 5 are combined into args
# six and seven are combined into kwargs
foo(1, 2, 3, 4, 5, six=6, seven=7)
print()
# omitting of args or kwargs is also possible
foo(1, 2, three=3)
# -
# ## Forced keyword arguments
# Sometimes you want to have keyword-only arguments. You can enforce that with:
# - If you write '`*,`' in your function parameter list, all parameters after that must be passed as keyword arguments.
# - Arguments after variable-length arguments must be keyword arguments.
# +
def foo(a, b, *, c, d):
print(a, b, c, d)
foo(1, 2, c=3, d=4)
# not allowed:
# foo(1, 2, 3, 4)
# arguments after variable-length arguments must be keyword arguments
def foo(*args, last):
for arg in args:
print(arg)
print(last)
foo(8, 9, 10, last=50)
# -
# ## Unpacking into arguments
# - Lists or tuples can be unpacked into arguments with one asterisk (`*`) if the length of the container matches the number of function parameters.
# - Dictionaries can be unpacked into arguments with two asterisks (`**`) if the keys and the length match the function parameters.
# +
def foo(a, b, c):
print(a, b, c)
# list/tuple unpacking, length must match
my_list = [4, 5, 6] # or tuple
foo(*my_list)
# dict unpacking, keys and length must match
my_dict = {'a': 1, 'b': 2, 'c': 3}
foo(**my_dict)
# my_dict = {'a': 1, 'b': 2, 'd': 3} # not possible since wrong keyword
# -
# ## Local vs global variables
# Global variables can be accessed within a function body, but to modify them, we first must state `global var_name` in order to change the global variable.
# +
def foo1():
x = number # global variable can only be accessed here
print('number in function:', x)
number = 0
foo1()
# modifying the global variable
def foo2():
global number # global variable can now be accessed and modified
number = 3
print('number before foo2(): ', number)
foo2() # modifies the global variable
print('number after foo2(): ', number)
# -
# If we do not write `global var_name` and assign a new value to a variable with the same name as the global variable, this will create a local variable within the function. The global variable remains unchanged.
# +
number = 0
def foo3():
number = 3 # this is a local variable
print('number before foo3(): ', number)
foo3() # does not modify the global variable
print('number after foo3(): ', number)
# -
# ## Parameter passing
# Python uses a mechanism which is known as "Call-by-Object" or "Call-by-Object-Reference". The following rules must be considered:
# - The parameter passed in is actually a reference to an object (but the reference is passed by value)
# - Difference between mutable and immutable data types
#
# This means that:
#
# 1. Mutable objects (e.g. lists, dicts) can be changed within a method.
#    * But if you rebind the reference in the method, the outer reference will still point at the original object.
# 2. Immutable objects (e.g. int, string) cannot be changed within a method.
#    * But an immutable object CONTAINED WITHIN a mutable object can be re-assigned within a method.
# +
# immutable objects -> no change
def foo(x):
    x = 5  # x += 5 would also have no effect, since x is immutable and a new local variable is created
var = 10
print('var before foo():', var)
foo(var)
print('var after foo():', var)
# +
# mutable objects -> change
def foo(a_list):
a_list.append(4)
my_list = [1, 2, 3]
print('my_list before foo():', my_list)
foo(my_list)
print('my_list after foo():', my_list)
# +
# immutable objects within a mutable object -> change
def foo(a_list):
a_list[0] = -100
a_list[2] = "Paul"
my_list = [1, 2, "Max"]
print('my_list before foo():', my_list)
foo(my_list)
print('my_list after foo():', my_list)
# +
# Rebind a mutable reference -> no change
def foo(a_list):
a_list = [50, 60, 70] # a_list is now a new local variable within the function
a_list.append(50)
my_list = [1, 2, 3]
print('my_list before foo():', my_list)
foo(my_list)
print('my_list after foo():', my_list)
# -
# Be careful with `+=` and `=` operations for mutable types. The first operation has an effect on the passed argument while the latter does not:
# +
# another example with rebinding references:
def foo(a_list):
    a_list += [4, 5] # this changes the outer variable
def bar(a_list):
a_list = a_list + [4, 5] # this rebinds the reference to a new local variable
my_list = [1, 2, 3]
print('my_list before foo():', my_list)
foo(my_list)
print('my_list after foo():', my_list)
my_list = [1, 2, 3]
print('my_list before bar():', my_list)
bar(my_list)
print('my_list after bar():', my_list)
| Programming/python/advanced-python/18-Functions arguments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
import datetime as dt
df = pd.read_csv('ibm_classification/loan_train.csv')
df
df.shape
df.columns
df['loan_status'].unique()
df['loan_status'].value_counts()
df['due_date'] = pd.to_datetime(df['due_date'])
df['effective_date'] = pd.to_datetime(df['effective_date'])
df.head()
df.info()
df['education'].value_counts()
colors = ['royalblue', 'yellow', 'pink', 'lightgreen']
explode = (0.1, 0.1, 0.1,0.0)
plt.subplots(figsize = (12,8))
plt.pie(df['education'].value_counts(), autopct='%1.2f%%',labeldistance = 1.1, shadow=True, colors = colors, explode = explode)
plt.legend(labels = df['education'].value_counts().index, loc = "upper right")  # match the order used by value_counts()
plt.title("Chart depicting the data of loan taken by different educational sub-categories", fontsize = 15, fontweight = 1000)
plt.tight_layout()
bins = np.linspace(df.Principal.min(), df.Principal.max(), 10)
graph = sns.FacetGrid(df, col='Gender', hue='loan_status', col_wrap = 2)
graph.map(plt.hist, 'Principal', bins=bins)
graph.axes[1].legend()
bins = np.linspace(df.age.min(), df.age.max(), 10)
graph = sns.FacetGrid(df, col="Gender", hue="loan_status", col_wrap = 2)
graph.map(plt.hist, 'age', bins=bins, ec="k")
graph.axes[1].legend()
bins = np.linspace(df.age.min(), df.age.max(), 10)
graph = sns.FacetGrid(df, col="education", hue="Gender", col_wrap = 4)
graph.map(plt.hist, 'age', bins=bins, ec="k")
##### Legend for each graph ####
#Method 1
#graph.axes[0].legend()
#graph.axes[1].legend()
#graph.axes[2].legend()
#graph.axes[3].legend()
### Method 2
a = df['education'].unique()
for i in range(len(a)):
graph.axes[i].legend()
df['effective_date'].dt.weekday  # .weekday is an attribute, not a method
s = pd.date_range(df['effective_date'].min(), df['effective_date'].max(), freq='D').to_series()
s.dt.dayofweek
df['dayofweek'] = df['effective_date'].dt.dayofweek
bins = np.linspace(df.dayofweek.min(), df.dayofweek.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'dayofweek', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
df.shape
df.head()
df['Gender'].replace(to_replace=['male', 'female'], value=[0, 1], inplace = True)
df
df.groupby(['Gender'])['loan_status'].value_counts(normalize = True)
df.groupby('education')['loan_status'].value_counts(normalize = True)
df['weekend'] = df['dayofweek'].apply(lambda x : 1 if (x > 3) else 0)
df
df['education'].value_counts()
df.head()
# +
######## get_dummies #######
##get_dummies one-hot encodes the values present in the specified column ('education')##
##We drop the 'Master or Above' column because it has only 2 rows, which is too little data to draw conclusions from
Feature = df[['Principal', 'terms', 'age', 'Gender', 'weekend']]
Feature = pd.concat([Feature, pd.get_dummies(df['education'])], axis = 1)
Feature.head()
# -
Feature.drop(['Master or Above'], axis = 1, inplace = True)
Feature.head()
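# + [markdown]
# As a minimal, self-contained illustration of what `get_dummies` produces (a toy Series reusing the education categories from this dataset, not the full loan data):

```python
import pandas as pd

# Toy Series standing in for the 'education' column
s = pd.Series(['college', 'Bechalor', 'college', 'High School or Below'])
dummies = pd.get_dummies(s)

# One indicator column per distinct value, columns sorted
print(list(dummies.columns))
# Row 0 was 'college', so only its 'college' indicator is set
print(int(dummies.loc[0, 'college']), int(dummies.loc[0, 'Bechalor']))
```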
# Source notebook: ibm/loan_prediction.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="fr75-y9A4zcM"
# # Programming Assignment 1: Learning Distributed Word Representations
# **Version**: 1.0
#
# **Version Release Date**: 2022-01-22
#
# **Due Date**: Friday, Feb. 4, at 11:59pm
#
# Based on an assignment by <NAME>
#
# For CSC413/2516 in Winter 2022 with Professor <NAME> and Professor <NAME>
#
# **Submission:**
# You must submit two files through MarkUs:
# 1. [ ] A PDF file containing your writeup, titled *a1-writeup.pdf*, which will be the PDF export of this notebook (i.e., by printing this notebook webpage as PDF). Your writeup must be typed. There will be sections in the notebook for you to write your responses. Make sure that the relevant outputs (e.g. `print_gradients()` outputs, plots, etc.) are included and clearly visible.
# 2. [ ] This `a1-code.ipynb` iPython Notebook.
#
# The programming assignments are individual work. See the Course Syllabus for detailed policies.
#
# You should attempt all questions for this assignment. Most of them can be answered at least partially even if you were unable to finish earlier questions. If you think your computational results are incorrect, please say so; that may help you get partial credit.
#
# The teaching assistants for this assignment are <NAME> and <NAME>. Send your email with subject "*\[CSC413\] PA1*" to mailto:<EMAIL> or post on Piazza with the tag `pa1`.
#
# # Introduction
# In this assignment we will learn about word embeddings and make neural networks learn about words.
# We could try to match statistics about the words, or we could train a network that takes a sequence of words as input and learns to predict the word that comes next.
#
# This assignment will ask you to implement a linear embedding and then the backpropagation computations for a neural language model and then run some experiments to analyze the learned representation.
# The amount of code you have to write is very short but each line will require you to think very carefully.
# You will need to derive the updates mathematically, and then implement them using matrix and vector operations in NumPy.
# + [markdown] id="-UUSJPdr3Ge_"
# # Starter code and data
#
# First, perform the required imports for your code:
#
# + id="CRwuwhoJ3Knl"
import collections
import pickle
import numpy as np
import os
from tqdm import tqdm
import pylab
from six.moves.urllib.request import urlretrieve
from six.moves.urllib.error import URLError, HTTPError  # used in get_file's error handling
import tarfile
import sys
import itertools
TINY = 1e-30
EPS = 1e-4
nax = np.newaxis
# + [markdown] id="qNLvRXdy3NDO"
# If you're using colaboratory, this following script creates a folder - here we used 'CSC413/A1' - in order to download and store the data. If you're not using colaboratory, then set the path to wherever you want the contents to be stored at locally.
#
# You can also manually download and unzip the data from <http://www.cs.toronto.edu/~jba/a1_data.tar.gz> and put them in the same folder as where you store this notebook.
#
# Feel free to use a different way to access the files *data.pk* , *partially_trained.pk*, and *raw_sentences.txt*.
#
# The file *raw_sentences.txt* contains the sentences that we will be using for this assignment.
# These sentences are fairly simple ones and cover a vocabulary of only 250 words (+ 1 special `[MASK]` token word).
#
#
#
#
# + id="Gkug8am63SzY" outputId="b729fb43-d509-4b85-f45e-944479e7ab04" colab={"base_uri": "https://localhost:8080/"}
######################################################################
# Setup working directory
######################################################################
# Change this to a local path if running locally
# %mkdir -p /content/CSC413/A1/
# %cd /content/CSC413/A1
######################################################################
# Helper functions for loading data
######################################################################
# adapted from
# https://github.com/fchollet/keras/blob/master/keras/datasets/cifar10.py
def get_file(fname,
origin,
untar=False,
extract=False,
archive_format='auto',
cache_dir='data'):
datadir = os.path.join(cache_dir)
if not os.path.exists(datadir):
os.makedirs(datadir)
if untar:
untar_fpath = os.path.join(datadir, fname)
fpath = untar_fpath + '.tar.gz'
else:
fpath = os.path.join(datadir, fname)
print('File path: %s' % fpath)
if not os.path.exists(fpath):
print('Downloading data from', origin)
error_msg = 'URL fetch failure on {}: {} -- {}'
try:
try:
urlretrieve(origin, fpath)
except URLError as e:
raise Exception(error_msg.format(origin, e.errno, e.reason))
except HTTPError as e:
raise Exception(error_msg.format(origin, e.code, e.msg))
except (Exception, KeyboardInterrupt) as e:
if os.path.exists(fpath):
os.remove(fpath)
raise
if untar:
if not os.path.exists(untar_fpath):
print('Extracting file.')
with tarfile.open(fpath) as archive:
archive.extractall(datadir)
return untar_fpath
    if extract:
        # NOTE: _extract_archive is not defined in this notebook (the Keras
        # helper this code was adapted from provides it); extract defaults to
        # False, so this branch is never taken here.
        _extract_archive(fpath, datadir, archive_format)
return fpath
# + id="KUQjRpWqnkzk" outputId="ee026dbe-8482-42f9-a5ff-1a8ea7e83ffd" colab={"base_uri": "https://localhost:8080/"}
# Download the dataset and partially pre-trained model
get_file(fname='a1_data',
origin='http://www.cs.toronto.edu/~jba/a1_data.tar.gz',
untar=True)
drive_location = 'data'
PARTIALLY_TRAINED_MODEL = drive_location + '/' + 'partially_trained.pk'
data_location = drive_location + '/' + 'data.pk'
# + [markdown] id="Qna9z_wJ3U5e"
# We have already extracted the 4-grams from this dataset and divided them into training, validation, and test sets.
# To inspect this data, run the following:
# + id="RD1LN16d3a0u" outputId="032e7d51-63bb-4ff4-b950-8b02ed35a80b" colab={"base_uri": "https://localhost:8080/"}
data = pickle.load(open(data_location, 'rb'))
print(data['vocab'][0]) # First word in vocab is [MASK]
print(data['vocab'][1])
print(len(data['vocab'])) # Number of words in vocab
print(data['vocab']) # All the words in vocab
print(data['train_inputs'][:10]) # 10 example training instances
# + [markdown] id="lXd2Msqs3fPQ"
# Now `data` is a Python dict which contains the vocabulary, as well as the inputs and targets for all three splits of the data. `data['vocab']` is a list of the 251 words in the dictionary; `data['vocab'][0]` is the word with index 0, and so on. `data['train_inputs']` is a 372,500 x 4 matrix where each row gives the indices of the 4 consecutive context words for one of the 372,500 training cases.
# The validation and test sets are handled analogously.
#
# Even though you only have to modify two specific locations in the code, you may want to read through this code before starting the assignment.
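# + [markdown]
# To make the index representation concrete, here is a toy decoding sketch. The mini-vocabulary below is hypothetical, not the real 251-word one:

```python
vocab = ['[MASK]', 'all', 'the', 'cat', 'sat']  # toy vocabulary (made up)
train_row = [3, 4, 2, 1]                        # one 4-gram as word indices
decoded = [vocab[i] for i in train_row]
print(decoded)  # -> ['cat', 'sat', 'the', 'all']
```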
# + [markdown] id="pa9ggqxJPPs0"
# # Part 1: GLoVE Word Representations (3pts)
#
# In this section we will be implementing a simplified version of [GloVe](https://nlp.stanford.edu/pubs/glove.pdf).
# Given a corpus with $V$ distinct words, we define the co-occurrence matrix $X\in \mathbb{N}^{V\times V}$ with entries $X_{ij}$ representing the frequency of the $i$-th word and $j$-th word in the corpus appearing in the same *context* - in our case the adjacent words. The co-occurrence matrix can be *symmetric* (i.e., $X_{ij} = X_{ji}$) if the order of the words do not matter, or *asymmetric* (i.e., $X_{ij} \neq X_{ji}$) if we wish to distinguish the counts for when $i$-th word appears before $j$-th word.
# GloVe aims to find a $d$-dimensional embedding of the words that preserves properties of the co-occurrence matrix by representing the $i$-th word with two $d$-dimensional vectors $\mathbf{w}_i,\tilde{\mathbf{w}}_i \in\mathbb{R}^d$, as well as two scalar biases $b_i, \tilde{b}_i\in\mathbb{R}$. Typically we have the dimension of the embedding $d$ much smaller than the number of words $V$. This objective can be written as:
#
# $$L(\{\mathbf{w}_i,\tilde{\mathbf{w}}_i,b_i, \tilde{b}_i\}_{i=1}^V) = \sum_{i,j=1}^V (\mathbf{w}_i^\top\tilde{\mathbf{w}}_j + b_i + \tilde{b}_j - \log X_{ij})^2$$.
#
# Note that each word is represented by two $d$-dimensional embedding vectors $\mathbf{w}_i, \tilde{\mathbf{w}}_i$ and two scalar biases $b_i, \tilde{b}_i$.
#
#
# + [markdown] id="Xo1R6rfP4aJQ"
# Answer the following questions:
#
# ## 1.1. GLoVE Parameter Count \[0pt\]
# Given the vocabulary size $V$ and embedding dimensionality $d$, how many parameters does the GLoVE model have? Note that each word in the vocabulary is associated with 2 embedding vectors and 2 biases.
# + [markdown] id="gREV4DxJx98K"
# 1.1 **Answer**: There are $2V(d+1)$ parameters.
# + [markdown] id="rKbDkmuGoTCC"
# ## 1.2 Expression for the Vectorized Loss function [0.5pt]
# In practice, we concatenate the $V$ embedding vectors into matrices $\mathbf{W}, \tilde{\mathbf{W}} \in \mathbb{R}^{V \times d}$ and bias (column) vectors $\mathbf{b}, \tilde{\mathbf{b}} \in \mathbb{R}^{V}$, where $V$ denotes the number of distinct words as described in the introduction. Rewrite the loss function $L$ (Eq. 1) in a vectorized format in terms of $\mathbf{W}, \tilde{\mathbf{W}}, \mathbf{b}, \tilde{\mathbf{b}}, X$.
#
# *Hint: Use the all-ones column vector $\mathbf{1} = [1 \dots 1]^{T} \in \mathbb{R}^{V}$. You can assume the bias vectors are column vectors, i.e. implicitly a matrix with $V$ rows and 1 column: $\mathbf{b}, \tilde{\mathbf{b}} \in \mathbb{R}^{V \times 1}$*
# + [markdown] id="0tz4hzwPogsL"
# 1.2 **Answer**:
# Let $A= \mathbf{W}\tilde{\mathbf{W}}^T+ \mathbf{1}\tilde{\mathbf{b}}^T + \mathbf{b}\mathbf{1}^T - \log(\mathbf{X})$
#
# $L(\mathbf{W}, \tilde{\mathbf{W}}, \mathbf{b}, \tilde{\mathbf{b}}) = trace(A^TA)$
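# + [markdown]
# As a quick sanity check (random toy data; a sketch, not part of the graded code), the trace form above can be compared numerically against the elementwise sum in the loss:

```python
import numpy as np

rng = np.random.default_rng(1)
V, d = 4, 2
W = rng.normal(size=(V, d))
W_tilde = rng.normal(size=(V, d))
b = rng.normal(size=(V, 1))
b_tilde = rng.normal(size=(V, 1))
logX = rng.normal(size=(V, V))  # stand-in for log X

# Elementwise form of the loss
elementwise = sum(
    (W[i] @ W_tilde[j] + b[i, 0] + b_tilde[j, 0] - logX[i, j]) ** 2
    for i in range(V) for j in range(V)
)

# Vectorized form: trace(A^T A) equals the sum of squared entries of A
ones = np.ones((V, 1))
A = W @ W_tilde.T + ones @ b_tilde.T + b @ ones.T - logX
vectorized = np.trace(A.T @ A)

print(np.isclose(elementwise, vectorized))  # True
```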
# + [markdown] id="_vQIRZynyGpl"
# ## 1.3. Expression for gradient $\frac{\partial L}{\partial \mathbf{W}}$ \[0.5pt\]
#
# Write the vectorized expression for $\frac{\partial L}{\partial \mathbf{W}}$, the gradient of the loss function $L$ with respect to the embedding matrix $\mathbf{W}$. The gradient should be a function of $\mathbf{W}, \tilde{\mathbf{W}}, \mathbf{b}, \tilde{\mathbf{b}}, X$.
#
# *Hint: Make sure that the shape of the gradient is equivalent to the shape of the matrix. You can use the all-ones vector as in the previous question.*
# + [markdown] id="HYDCmo7UyLyI"
# 1.3 **Answer**: $\frac{\partial L}{\partial \mathbf{W}} = 2 A \tilde{\mathbf{W}}$, where $A$ is defined as in 1.2.
#
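# + [markdown]
# The sketch below (random toy data, not assignment code) compares the analytic gradient $2A\tilde{\mathbf{W}}$ against a central finite difference on one entry of $\mathbf{W}$:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 5, 3
W = rng.normal(size=(V, d))
W_tilde = rng.normal(size=(V, d))
b = rng.normal(size=(V, 1))
b_tilde = rng.normal(size=(V, 1))
logX = rng.normal(size=(V, V))
ones = np.ones((V, 1))

def glove_loss(W_):
    A = W_ @ W_tilde.T + ones @ b_tilde.T + b @ ones.T - logX
    return np.sum(A ** 2)

A = W @ W_tilde.T + ones @ b_tilde.T + b @ ones.T - logX
grad_analytic = 2 * A @ W_tilde

# Central finite difference on entry W[2, 1]
eps = 1e-6
E = np.zeros_like(W)
E[2, 1] = eps
grad_fd = (glove_loss(W + E) - glove_loss(W - E)) / (2 * eps)
print(abs(grad_fd - grad_analytic[2, 1]) < 1e-5)  # True
```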
# + [markdown] id="jQJrG7fkpEOe"
# ## 1.4 Implement Vectorized Loss Function [1pt]
#
# Implement the `loss_GloVe()` function of GloVe.
#
# **See** `YOUR CODE HERE` **Comment below for where to complete the code**
#
# Note that you need to implement both the loss for an *asymmetric* model (from your answer in question 1.2) and the loss for a *symmetric* model which uses the same embedding matrix $\mathbf{W}$ and bias vector $\mathbf{b}$ for the first and second word in the co-occurrence, i.e. $\tilde{\mathbf{W}} = \mathbf{W}$ and $\tilde{\mathbf{b}} = \mathbf{b}$ in the original loss.
#
# *Hint: You may take advantage of NumPy's broadcasting feature for the bias vectors: https://numpy.org/doc/stable/user/basics.broadcasting.html*
#
# We have provided a few functions for training the embedding:
#
# * `calculate_log_co_occurence` computes the log co-occurrence matrix of a given corpus
# * `train_GloVe` runs (full-batch) gradient descent to optimize the embedding
# * `loss_GloVe`: **TO BE IMPLEMENTED.**
# * INPUT
# * V x d matrix `W` (collection of $V$ embedding vectors, each $d$-dimensional)
# * V x d matrix `W_tilde`
# * V x 1 vector `b` (collection of $V$ bias terms)
# * V x 1 vector `b_tilde`
# * V x V log co-occurrence matrix.
# * OUTPUT
# * loss of the GLoVE objective
# * `grad_GLoVE`: **TO BE IMPLEMENTED.**
# * INPUT:
# * V x d matrix `W` (collection of $V$ embedding vectors, each $d$-dimensional), embedding for first word;
# * V x d matrix `W_tilde`, embedding for second word;
# * V x 1 vector `b` (collection of $V$ bias terms);
# * V x 1 vector `b_tilde`, bias for second word;
# * V x V log co-occurrence matrix.
# * OUTPUT:
# * V x d matrix `grad_W` containing the gradient of the loss function w.r.t. `W`;
# * V x d matrix `grad_W_tilde` containing the gradient of the loss function w.r.t. `W_tilde`;
# * V x 1 vector `grad_b` which is the gradient of the loss function w.r.t. `b`.
# * V x 1 vector `grad_b_tilde` which is the gradient of the loss function w.r.t. `b_tilde`.
#
# Run the code below to compute the co-occurrence matrix.
# A small smoothing constant (`delta_smoothing`) is added to the counts, so there are no 0's in the matrix when we take the elementwise log.
#
#
# + id="rw0IToBap3E2"
vocab_size = len(data['vocab']) # Number of vocabs
def calculate_log_co_occurence(word_data, symmetric=False):
    "Compute the log co-occurrence matrix for our data."
log_co_occurence = np.zeros((vocab_size, vocab_size))
for input in word_data:
# Note: the co-occurence matrix may not be symmetric
log_co_occurence[input[0], input[1]] += 1
log_co_occurence[input[1], input[2]] += 1
log_co_occurence[input[2], input[3]] += 1
# Diagonal entries are just the frequency of the word
log_co_occurence[input[0], input[0]] += 1
log_co_occurence[input[1], input[1]] += 1
log_co_occurence[input[2], input[2]] += 1
# If we want symmetric co-occurence can also increment for these.
if symmetric:
log_co_occurence[input[1], input[0]] += 1
log_co_occurence[input[2], input[1]] += 1
log_co_occurence[input[3], input[2]] += 1
delta_smoothing = 0.5 # A hyperparameter. You can play with this if you want.
log_co_occurence += delta_smoothing # Add delta so log doesn't break on 0's.
log_co_occurence = np.log(log_co_occurence)
return log_co_occurence
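# + [markdown]
# To see what the counting above produces, here is a small self-contained variant (vocab size passed explicitly so it runs without the assignment data; a sketch, not the graded code) applied to a single toy 4-gram:

```python
import numpy as np

def toy_log_co_occurence(word_data, n_vocab, delta_smoothing=0.5):
    # Same counting scheme as above: adjacent pairs plus diagonal
    # frequencies, then the log of the smoothed counts.
    counts = np.zeros((n_vocab, n_vocab))
    for w in word_data:
        for a, c in zip(w[:-1], w[1:]):
            counts[a, c] += 1
        for a in w[:-1]:
            counts[a, a] += 1
    return np.log(counts + delta_smoothing)

M = toy_log_co_occurence([[0, 1, 2, 3]], n_vocab=4)
print(np.isclose(M[0, 1], np.log(1.5)))  # adjacent pair seen once
print(np.isclose(M[0, 2], np.log(0.5)))  # pair never adjacent, smoothing only
```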
# + id="5K0knDihp45W"
asym_log_co_occurence_train = calculate_log_co_occurence(data['train_inputs'], symmetric=False)
asym_log_co_occurence_valid = calculate_log_co_occurence(data['valid_inputs'], symmetric=False)
# + [markdown] id="7vbozDCFp8lD"
# * [x] **TO BE IMPLEMENTED**: Implement the loss function. You should vectorize the computation, i.e. not loop over every word.
# + id="U1zltFcrqFnq"
def loss_GloVe(W, W_tilde, b, b_tilde, log_co_occurence):
""" Compute the GLoVE loss given the parameters of the model. When W_tilde
and b_tilde are not given, then the model is symmetric (i.e. W_tilde = W,
b_tilde = b).
Args:
W: word embedding matrix, dimension V x d where V is vocab size and d
is the embedding dimension
W_tilde: for asymmetric GLoVE model, a second word embedding matrix, with
dimensions V x d
b: bias vector, dimension V.
b_tilde: for asymmetric GLoVE model, a second bias vector, dimension V
log_co_occurence: V x V log co-occurrence matrix (log X)
Returns:
loss: a scalar (float) for GloVe loss
"""
n,_ = log_co_occurence.shape
# Symmetric Case, no W_tilde and b_tilde
if W_tilde is None and b_tilde is None:
# Symmetric model
########################### YOUR CODE HERE ##############################
ones = np.ones((n, 1))
b = b.reshape(-1, 1)
A = W @ W.transpose() + ones @ b.transpose() + b @ ones.transpose() - log_co_occurence
loss = np.sum(np.power(A, 2))
# loss = ...
############################################################################
else:
# Asymmetric model
########################### YOUR CODE HERE ##############################
ones = np.ones((n, 1))
b = b.reshape(-1, 1)
b_tilde = b_tilde.reshape(-1, 1)
# print(ones.shape)
# print(b_tilde.transpose().shape)
A = W @ W_tilde.transpose() + ones @ b_tilde.transpose() + b @ ones.transpose() - log_co_occurence
loss = np.sum(np.power(A, 2))
# loss = ...
############################################################################
return loss
# + [markdown] id="HzWek3lP0p2e"
# ## 1.5. Implement the gradient update of GLoVE. \[1pt\]
#
# Implement the `grad_GloVe()` function which computes the gradient of GloVe.
#
# **See** `YOUR CODE HERE` **Comment below for where to complete the code**
#
#
# Again, note that you need to implement the gradient for both the symmetric and asymmetric models.
# + [markdown] id="gNnKkMy-d2bB"
# * [x] **TO BE IMPLEMENTED**: Calculate the gradient of the loss function w.r.t. the parameters $\mathbf{W}$, $\tilde{\mathbf{W}}$, $\mathbf{b}$, and $\tilde{\mathbf{b}}$. You should vectorize the computation, i.e. not loop over every word.
# + id="LbpkXeaAdwnj"
def grad_GLoVE(W, W_tilde, b, b_tilde, log_co_occurence):
"""Return the gradient of GLoVE objective w.r.t its parameters
Args:
W: word embedding matrix, dimension V x d where V is vocab size and d
is the embedding dimension
W_tilde: for asymmetric GLoVE model, a second word embedding matrix, with
dimensions V x d
b: bias vector, dimension V.
b_tilde: for asymmetric GLoVE model, a second bias vector, dimension V
log_co_occurence: V x V log co-occurrence matrix (log X)
Returns:
grad_W: gradient of the loss wrt W, dimension V x d
grad_W_tilde: gradient of the loss wrt W_tilde, dimension V x d. Return
None if W_tilde is None.
grad_b: gradient of the loss wrt b, dimension V x 1
    grad_b_tilde: gradient of the loss wrt b_tilde, dimension V x 1. Return
                  None if b_tilde is None.
"""
n,_ = log_co_occurence.shape
if W_tilde is None and b_tilde is None:
# Symmmetric case
########################### YOUR CODE HERE ##############################
ones = np.ones((n, 1))
b = b.reshape(-1, 1)
A = W @ W.transpose() + ones @ b.transpose() + b @ ones.transpose() - log_co_occurence
        # W and b each appear in both the row term and the column term of A,
        # so both A and A^T contribute to the symmetric gradients.
        grad_W = 2 * (A + A.transpose()) @ W
        grad_b = 2 * (A.sum(axis=1) + A.sum(axis=0)).reshape(-1, 1)
grad_W_tilde = None
grad_b_tilde = None
############################################################################
else:
# Asymmetric case
########################### YOUR CODE HERE ##############################
ones = np.ones((n, 1))
b = b.reshape(-1, 1)
b_tilde = b_tilde.reshape(-1, 1)
A = W @ W_tilde.transpose() + ones @ b_tilde.transpose() + b @ ones.transpose() - log_co_occurence
        grad_W = 2 * A @ W_tilde
        grad_W_tilde = 2 * A.transpose() @ W
        grad_b = 2 * A.sum(axis=1).reshape(-1, 1)
        grad_b_tilde = 2 * A.sum(axis=0).reshape(-1, 1)
############################################################################
return grad_W, grad_W_tilde, grad_b, grad_b_tilde
# + [markdown] id="DXJBYGqX6hP_"
# We define the training function for the model given the initial weights and ground truth log co-occurence matrix:
# + id="sefu3T7u6jBL"
def train_GloVe(W, W_tilde, b, b_tilde, log_co_occurence_train, log_co_occurence_valid, n_epochs, do_print=False):
    "Train W and b according to the GLoVE objective."
n,_ = log_co_occurence_train.shape
learning_rate = 0.05 / n # A hyperparameter. You can play with this if you want.
train_loss_list = np.zeros(n_epochs)
valid_loss_list = np.zeros(n_epochs)
vocab_size = log_co_occurence_train.shape[0]
for epoch in range(n_epochs):
grad_W, grad_W_tilde, grad_b, grad_b_tilde = grad_GLoVE(W, W_tilde, b, b_tilde, log_co_occurence_train)
W = W - learning_rate * grad_W
b = b - learning_rate * grad_b
if not grad_W_tilde is None and not grad_b_tilde is None:
W_tilde = W_tilde - learning_rate * grad_W_tilde
b_tilde = b_tilde - learning_rate * grad_b_tilde
train_loss, valid_loss = loss_GloVe(W, W_tilde, b, b_tilde, log_co_occurence_train), loss_GloVe(W, W_tilde, b, b_tilde, log_co_occurence_valid)
if do_print:
print(f"Average Train Loss: {train_loss / vocab_size}, Average valid loss: {valid_loss / vocab_size}, grad_norm: {np.sum(grad_W**2)}")
train_loss_list[epoch] = train_loss / vocab_size
valid_loss_list[epoch] = valid_loss / vocab_size
return W, W_tilde, b, b_tilde, train_loss_list, valid_loss_list
# + [markdown] id="AwmBLMvKGE9-"
# - [ ] **TODO**: Run the cell below to train the asymmetric and symmetric GloVe models
# + id="eIbyEcyhFwDC" colab={"base_uri": "https://localhost:8080/", "height": 312} outputId="dd6fc128-1dc4-46d8-89a7-010dc5eb6017"
### TODO: Run this cell ###
np.random.seed(1)
n_epochs = 500 # A hyperparameter. You can play with this if you want.
# Store the final losses for graphing
do_print = False # If you want to see diagnostic information during training
init_variance = 0.1 # A hyperparameter. You can play with this if you want.
embedding_dim = 16
W = init_variance * np.random.normal(size=(vocab_size, embedding_dim))
W_tilde = init_variance * np.random.normal(size=(vocab_size, embedding_dim))
b = init_variance * np.random.normal(size=(vocab_size, 1))
b_tilde = init_variance * np.random.normal(size=(vocab_size, 1))
# Run the training for the asymmetric and symmetric GloVe model
Asym_W_final, Asym_W_tilde_final, Asym_b_final, Asym_b_tilde_final, Asym_train_loss_list, Asym_valid_loss_list = train_GloVe(W, W_tilde, b, b_tilde, asym_log_co_occurence_train, asym_log_co_occurence_valid, n_epochs, do_print=do_print)
Sym_W_final, Sym_W_tilde_final, Sym_b_final, Sym_b_tilde_final, Sym_train_loss_list, Sym_valid_loss_list = train_GloVe(W, None, b, None, asym_log_co_occurence_train, asym_log_co_occurence_valid, n_epochs, do_print=do_print)
# Plot the resulting training curve
pylab.plot(Asym_train_loss_list, label="Asym Train Loss", color='red')
pylab.plot(Asym_valid_loss_list, label="Asym Valid Loss", color='red', linestyle='--')
pylab.plot(Sym_train_loss_list, label="Sym Train Loss", color='blue')
pylab.plot(Sym_valid_loss_list, label="Sym Valid Loss", color='blue', linestyle='--')
pylab.xlabel("Iterations")
pylab.ylabel("Average GloVe Loss")
pylab.title("Asymmetric and Symmetric GloVe Model on Asymmetric Log Co-Occurrence (Emb Dim={})".format(embedding_dim))
pylab.legend()
# + [markdown] id="cLTzPRRMqh2H"
# ## 1.6 Effects of a buggy implementation [0pt]
#
# Suppose that during the implementation, you initialized the weight embedding matrix $\mathbf{W}$ and $\tilde{\mathbf{W}}$ with the same initial values (i.e., $\mathbf{W} = \tilde{\mathbf{W}} = \mathbf{W}_0$).
#
# What will happen to the values of $\mathbf{W}$ and $\tilde{\mathbf{W}}$ over the course of training. Will they stay equal to each other, or diverge from each other? Explain your answer briefly.
#
# *Hint: Consider the gradient $\frac{\partial L}{\partial \mathbf{W}}$ versus $\frac{\partial L}{\partial \tilde{\mathbf{W}}}$*
# + [markdown] id="cEd9gGgkaijK"
# 1.6 **Answer**: **\*\*TODO: Write Part 1.6 answer here \*\***
#
# + [markdown] id="bRoG_sqZySHD"
# ## 1.7. Effect of embedding dimension $d$ \[0pt\]
# Train the both the symmetric and asymmetric GLoVe model with varying dimensionality $d$ by running the cell below. Comment on:
# 1. Which $d$ leads to optimal validation performance for the asymmetric and symmetric models?
# 2. Why does / doesn't larger $d$ always lead to better validation error?
# 3. Which model is performing better, and why?
#
# + [markdown] id="mD5jnHJB2hFy"
# 1.7 Answer: **\*\*TODO: Write Part 1.7 answer here\*\***
# + [markdown] id="pjiNQ0WkWi1Z"
# Train the GLoVE model for a range of embedding dimensions
# + id="46yGUezEMLJe" colab={"base_uri": "https://localhost:8080/"} outputId="4993ede7-826d-45a0-c627-c7fba94<PASSWORD>"
np.random.seed(1)
n_epochs = 500 # A hyperparameter. You can play with this if you want.
embedding_dims = np.array([1, 2, 10, 128, 256]) # Play with this
# Store the final losses for graphing
asymModel_asymCoOc_final_train_losses, asymModel_asymCoOc_final_val_losses = [], []
symModel_asymCoOc_final_train_losses, symModel_asymCoOc_final_val_losses = [], []
Asym_W_final_2d, Asym_b_final_2d, Asym_W_tilde_final_2d, Asym_b_tilde_final_2d = None, None, None, None
W_final_2d, b_final_2d = None, None
do_print = False # If you want to see diagnostic information during training
for embedding_dim in tqdm(embedding_dims):
init_variance = 0.1 # A hyperparameter. You can play with this if you want.
W = init_variance * np.random.normal(size=(vocab_size, embedding_dim))
W_tilde = init_variance * np.random.normal(size=(vocab_size, embedding_dim))
b = init_variance * np.random.normal(size=(vocab_size, 1))
b_tilde = init_variance * np.random.normal(size=(vocab_size, 1))
if do_print:
print(f"Training for embedding dimension: {embedding_dim}")
# Train Asym model on Asym Co-Oc matrix
Asym_W_final, Asym_W_tilde_final, Asym_b_final, Asym_b_tilde_final, train_loss_list, valid_loss_list = train_GloVe(W, W_tilde, b, b_tilde, asym_log_co_occurence_train, asym_log_co_occurence_valid, n_epochs, do_print=do_print)
if embedding_dim == 2:
# Save a parameter copy if we are training 2d embedding for visualization later
Asym_W_final_2d = Asym_W_final
Asym_W_tilde_final_2d = Asym_W_tilde_final
Asym_b_final_2d = Asym_b_final
Asym_b_tilde_final_2d = Asym_b_tilde_final
asymModel_asymCoOc_final_train_losses += [train_loss_list[-1]]
asymModel_asymCoOc_final_val_losses += [valid_loss_list[-1]]
    if do_print:
        print(f"Final validation loss: {valid_loss_list[-1]}")
# Train Sym model on Asym Co-Oc matrix
W_final, W_tilde_final, b_final, b_tilde_final, train_loss_list, valid_loss_list = train_GloVe(W, None, b, None, asym_log_co_occurence_train, asym_log_co_occurence_valid, n_epochs, do_print=do_print)
if embedding_dim == 2:
# Save a parameter copy if we are training 2d embedding for visualization later
W_final_2d = W_final
b_final_2d = b_final
symModel_asymCoOc_final_train_losses += [train_loss_list[-1]]
symModel_asymCoOc_final_val_losses += [valid_loss_list[-1]]
    if do_print:
        print(f"Final validation loss: {valid_loss_list[-1]}")
# + [markdown] id="hzV-qFf5WfAp"
# Plot the training and validation losses against the embedding dimension.
# + id="WHgHgSzJTg5d" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="f173908c-1a60-44fb-9bd9-5eeaa2960538"
pylab.loglog(embedding_dims, asymModel_asymCoOc_final_train_losses, label="Asymmetric Model / Asymmetric Co-Oc", linestyle="--")
pylab.loglog(embedding_dims, symModel_asymCoOc_final_train_losses , label="Symmetric Model / Asymmetric Co-Oc")
pylab.xlabel("Embedding Dimension")
pylab.ylabel("Training Loss")
pylab.legend()
# + id="0UJSfg_hfvIV" colab={"base_uri": "https://localhost:8080/", "height": 301} outputId="197c9481-3307-42d4-b002-9a569537a42d"
pylab.loglog(embedding_dims, asymModel_asymCoOc_final_val_losses, label="Asymmetric Model / Asymmetric Co-Oc", linestyle="--")
pylab.loglog(embedding_dims, symModel_asymCoOc_final_val_losses , label="Sym Model / Asymmetric Co-Oc")
pylab.xlabel("Embedding Dimension")
pylab.ylabel("Validation Loss")
pylab.legend(loc="upper left")
# + [markdown] id="7YwDZcOhjywe"
# # Part 2: Network Architecture (1pts)
# See the handout for the written questions in this part.
#
# ## Answer the following questions
# + [markdown] id="wBQmEPvazDkA"
# ## 2.1. Number of parameters in neural network model \[0.5pt\]
#
# Assume in general that we have $V$ words in the dictionary and use the previous $N$ words as inputs. Suppose we use a $D$-dimensional word embedding and a hidden layer with $H$ hidden units. The trainable parameters of the model consist of 3 weight matrices and 2 sets of biases. What is the total number of trainable parameters in the model, as a function of $V,N,D,H$?
#
# In the diagram given, which part of the model (i.e., `word_embedding_weights`, `embed_to_hid_weights`, `hid_to_output_weights`, `hid_bias`, or `output_bias`) has the largest number of trainable parameters if we have the constraint that $V \gg H > D > N$? (The symbol $\gg$ means "much greater than".) Explain your reasoning.
#
# + [markdown] id="CJKJ7qi9zVAm"
# 2.1 Answer: **\*\*TODO: Write Part 2.1 answer here\*\***
# + [markdown] id="Cx_h25zYzVWJ"
# ## 2.2 Number of parameters in $n$-gram model \[0.5pt\]
# Another method for predicting the next words is an *n-gram model*, which was mentioned in Lecture 3. If we wanted to use an n-gram model with the same context length $N-1$ as our network (since we mask 1 of the $N$ words in our input), we'd need to store the counts of all possible $N$-grams. If we stored all the counts explicitly and suppose that we have $V$ words in the dictionary, how many entries would this table have?
# + [markdown] id="n4hTwsjTzXvi"
# 2.2 Answer: **\*\*TODO: Write Part 2.2 answer here\*\***
# + [markdown] id="xDU6LrF-zX1v"
#
# ## 2.3. Comparing neural network and $n$-gram model scaling \[0pt\]
# How do the parameters in the neural network model scale with the number of context words $N$ versus how the number of entries in the $n$-gram model scale with $N$? [0pt]
# + [markdown] id="IVJSp7gFzYa3"
# 2.3 Answer: **\*\*TODO: Write Part 2.3 answer here\*\***
# + [markdown] id="C8FZTyrzlCNl"
# # Part 3: Training the model (3pts)
#
# + [markdown] id="ua0qkOH1tqHw"
# As described in the previous section, during training, we randomly sample one of the $N$ context words to replace with a `[MASK]` token. The goal is for the network to predict the word that was masked, at the corresponding output word position. In practice, this `[MASK]` token is assigned the index 0 in our dictionary. The weights $W^{(2)}$ = `hid_to_output_weights` now has the shape $NV \times H$, as the output layer has $NV$ neurons, where the first $V$ output units are for predicting the first word, then the next $V$ are for predicting the second word, and so on.
# We call this *concatenating* the output units across all word positions, i.e. the $(v + nV)$-th output unit corresponds to word $v$ in the vocabulary at the $n$-th output word position.
# Note that the softmax is applied in chunks of $V$ as well, to give a valid probability distribution over the $V$ words (for simplicity we also include the `[MASK]` token as one of the possible predictions, even though we know the target should not be this token). Only the output word positions that were masked in the input are included in the cross entropy loss calculation:
#
# $$C = -\sum_{i}^{B}\sum_{n}^{N}\sum_{v}^{V} m^{(i)}_{n} (t^{(i)}_{v + nV} \log y^{(i)}_{v + nV})$$
#
# Where:
# * $y^{(i)}_{v + nV}$ denotes the output probability prediction from the neural network for the $i$-th training example for the word $v$ in the $n$-th output word. Denoting $z$ as the logits output, we define the output probability $y$ as a softmax on $z$ over contiguous chunks of $V$ units (see also Figure 1):
#
# $$y^{(i)}_{v + nV} = \frac{e^{z^{(i)}_{v+nV}}}{\sum_{l}^{V} e^{z^{(i)}_{l+nV}}}$$
# * $t^{(i)}_{v + nV} \in \{0,1\}$ is 1 if for the $i$-th training example, the word $v$ is the $n$-th word in context
# * $m^{(i)}_{n} \in \{0,1\}$ is a mask that is set to 1 if we are predicting the $n$-th word position for the $i$-th example (because we had masked that word in the input), and 0 otherwise
#
# There are three classes defined in this part: `Params`, `Activations`, `Model`.
# You will make changes to `Model`, but it may help to read through `Params` and `Activations` first.
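# + [markdown]
# The chunked softmax and masked cross-entropy above can be sketched numerically at toy sizes (B=2, N=3, V=4 here are arbitrary; this is a sketch, not the assignment's `Model` class):

```python
import numpy as np

B, N, V = 2, 3, 4  # toy batch size, context length, vocab size
rng = np.random.default_rng(0)
z = rng.normal(size=(B, N * V))                 # logits, concatenated per position

# Softmax applied in chunks of V: reshape to (B, N, V), normalize over V
zc = z.reshape(B, N, V)
y = np.exp(zc - zc.max(axis=2, keepdims=True))  # shift for numerical stability
y /= y.sum(axis=2, keepdims=True)

t = np.zeros((B, N, V))
t[:, :, 1] = 1.0                                # toy one-hot targets
m = np.zeros((B, N))
m[:, 1] = 1.0                                   # mask: only position 1 was masked

# Cross entropy, counting only the masked output positions
C = -np.sum(m[:, :, None] * t * np.log(y))
print(np.allclose(y.sum(axis=2), 1.0))  # each V-chunk is a valid distribution
```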
# + id="EfGEjB3QLNXf"
class Params(object):
"""A class representing the trainable parameters of the model. This class has five fields:
word_embedding_weights, a matrix of size V x D, where V is the number of words in the vocabulary
and D is the embedding dimension.
embed_to_hid_weights, a matrix of size H x ND, where H is the number of hidden units. The first D
columns represent connections from the embedding of the first context word, the next D columns
for the second context word, and so on. There are N context words.
hid_bias, a vector of length H
hid_to_output_weights, a matrix of size NV x H
output_bias, a vector of length NV"""
def __init__(self, word_embedding_weights, embed_to_hid_weights, hid_to_output_weights,
hid_bias, output_bias):
self.word_embedding_weights = word_embedding_weights
self.embed_to_hid_weights = embed_to_hid_weights
self.hid_to_output_weights = hid_to_output_weights
self.hid_bias = hid_bias
self.output_bias = output_bias
def copy(self):
return self.__class__(self.word_embedding_weights.copy(), self.embed_to_hid_weights.copy(),
self.hid_to_output_weights.copy(), self.hid_bias.copy(), self.output_bias.copy())
@classmethod
def zeros(cls, vocab_size, context_len, embedding_dim, num_hid):
"""A constructor which initializes all weights and biases to 0."""
word_embedding_weights = np.zeros((vocab_size, embedding_dim))
embed_to_hid_weights = np.zeros((num_hid, context_len * embedding_dim))
hid_to_output_weights = np.zeros((vocab_size * context_len, num_hid))
hid_bias = np.zeros(num_hid)
output_bias = np.zeros(vocab_size * context_len)
return cls(word_embedding_weights, embed_to_hid_weights, hid_to_output_weights,
hid_bias, output_bias)
@classmethod
def random_init(cls, init_wt, vocab_size, context_len, embedding_dim, num_hid):
"""A constructor which initializes weights to small random values and biases to 0."""
word_embedding_weights = np.random.normal(0., init_wt, size=(vocab_size, embedding_dim))
embed_to_hid_weights = np.random.normal(0., init_wt, size=(num_hid, context_len * embedding_dim))
hid_to_output_weights = np.random.normal(0., init_wt, size=(vocab_size * context_len, num_hid))
hid_bias = np.zeros(num_hid)
output_bias = np.zeros(vocab_size * context_len)
return cls(word_embedding_weights, embed_to_hid_weights, hid_to_output_weights,
hid_bias, output_bias)
###### The functions below are Python's somewhat oddball way of overloading operators, so that
###### we can do arithmetic on Params instances. You don't need to understand this to do the assignment.
def __mul__(self, a):
return self.__class__(a * self.word_embedding_weights,
a * self.embed_to_hid_weights,
a * self.hid_to_output_weights,
a * self.hid_bias,
a * self.output_bias)
def __rmul__(self, a):
return self * a
def __add__(self, other):
return self.__class__(self.word_embedding_weights + other.word_embedding_weights,
self.embed_to_hid_weights + other.embed_to_hid_weights,
self.hid_to_output_weights + other.hid_to_output_weights,
self.hid_bias + other.hid_bias,
self.output_bias + other.output_bias)
def __sub__(self, other):
return self + -1. * other
# + id="k6XFQUPsLSi7"
class Activations(object):
"""A class representing the activations of the units in the network. This class has three fields:
embedding_layer, a B x ND matrix (where B is the batch size, D is the embedding dimension,
and N is the number of input context words), representing the activations for the embedding
layer on all the cases in a batch. The first D columns represent the embeddings for the
first context word, and so on.
hidden_layer, a B x H matrix representing the hidden layer activations for a batch
output_layer, a B x NV matrix representing the output layer activations for a batch"""
def __init__(self, embedding_layer, hidden_layer, output_layer):
self.embedding_layer = embedding_layer
self.hidden_layer = hidden_layer
self.output_layer = output_layer
def get_batches(inputs, batch_size, shuffle=True):
"""Divide a dataset (usually the training set) into mini-batches of a given size. This is a
'generator', i.e. something you can use in a for loop. You don't need to understand how it
works to do the assignment."""
if inputs.shape[0] % batch_size != 0:
raise RuntimeError('The number of data points must be a multiple of the batch size.')
num_batches = inputs.shape[0] // batch_size
if shuffle:
idxs = np.random.permutation(inputs.shape[0])
inputs = inputs[idxs, :]
for m in range(num_batches):
yield inputs[m * batch_size:(m + 1) * batch_size, :]
# + [markdown] id="uuAXaDNll0lf"
# In this part of the assignment, you implement a method which computes the gradient using backpropagation.
# To start you out, the *Model* class contains several important methods used in training:
#
#
# * `compute_activations` computes the activations of all units on a given input batch
# * `compute_loss_derivative` computes the gradient with respect to the output logits $\frac{\partial C}{\partial z}$
# * `evaluate` computes the average cross-entropy loss for a given set of inputs and targets
#
# To complete the training, you will need to implement two additional methods and print the outputs of the gradients.
# + [markdown] id="lVF5TxDgtqHx"
# ## 3.1 Implement the vectorized loss function [1pt]
# Implement a vectorized `compute_loss` function, which computes the total cross-entropy loss on a mini-batch according to the loss $C$ defined above. Look for the `## YOUR CODE HERE ##` comment for where to complete the code. The docstring provides a description of the inputs to the function.
#
#
# + [markdown] id="pyAoLfRPYZYx"
# ## 3.2 Implement gradient with respect to parameters [1pt]
# `back_propagate` is the function which computes the gradient of the loss with respect to model parameters using backpropagation.
# It uses the derivatives computed by *compute_loss_derivative*.
# Some parts are already filled in for you, but you need to compute the matrices of derivatives for `embed_to_hid_weights`, `hid_bias`, `hid_to_output_weights`, and `output_bias`.
# These matrices have the same sizes as the parameter matrices (see previous section). Look for the `## YOUR CODE HERE ##` comment for where to complete the code.
#
# In order to implement backpropagation efficiently, you need to express the computations in terms of matrix operations, rather than *for* loops.
# You should first work through the derivatives on pencil and paper.
# First, apply the chain rule to compute the derivatives with respect to individual units, weights, and biases.
# Next, take the formulas you've derived, and express them in matrix form.
# You should be able to express all of the required computations using only matrix multiplication, matrix transpose, and elementwise operations --- no *for* loops!
# If you want inspiration, read through the code for *Model.compute_activations* and try to understand how the matrix operations correspond to the computations performed by all the units in the network.
#
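# As a warm-up (a self-contained toy example, not part of the assignment): for a linear layer $z = hW^\top + b$ with upstream gradient $\delta = \partial C / \partial z$, the parameter gradients in matrix form are $\partial C / \partial W = \delta^\top h$ and $\partial C / \partial b = \sum_i \delta_i$, which we can verify against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(1)
B, H, M = 5, 3, 4                  # batch size, input dim, output dim
h = rng.normal(size=(B, H))        # layer inputs (e.g. hidden activations)
W = rng.normal(size=(M, H))
b = rng.normal(size=M)

def loss(W, b):
    z = h @ W.T + b                # forward pass of the linear layer
    return 0.5 * (z ** 2).sum()    # toy quadratic loss, so dC/dz = z

delta = h @ W.T + b                # upstream gradient dC/dz for this loss
W_grad = delta.T @ h               # matrix form -- no for loops
b_grad = delta.sum(axis=0)

# Check one entry of W_grad with a central finite difference
eps = 1e-6
W_plus, W_minus = W.copy(), W.copy()
W_plus[0, 0] += eps
W_minus[0, 0] -= eps
fd = (loss(W_plus, b) - loss(W_minus, b)) / (2 * eps)
assert abs(fd - W_grad[0, 0]) < 1e-4
```

The backprop you implement below follows the same pattern, with the hidden layer playing the role of $h$.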
# + id="0F4CTBipK9B6"
class Model(object):
"""A class representing the language model itself. This class contains various methods used in training
the model and visualizing the learned representations. It has two fields:
params, a Params instance which contains the model parameters
vocab, a list containing all the words in the dictionary; vocab[0] is the word with index
0, and so on."""
def __init__(self, params, vocab):
self.params = params
self.vocab = vocab
self.vocab_size = len(vocab)
self.embedding_dim = self.params.word_embedding_weights.shape[1]
self.embedding_layer_dim = self.params.embed_to_hid_weights.shape[1]
self.context_len = self.embedding_layer_dim // self.embedding_dim
self.num_hid = self.params.embed_to_hid_weights.shape[0]
def copy(self):
return self.__class__(self.params.copy(), self.vocab[:])
@classmethod
def random_init(cls, init_wt, vocab, context_len, embedding_dim, num_hid):
"""Constructor which randomly initializes the weights to Gaussians with standard deviation init_wt
and initializes the biases to all zeros."""
params = Params.random_init(init_wt, len(vocab), context_len, embedding_dim, num_hid)
return Model(params, vocab)
def indicator_matrix(self, targets, mask_zero_index=True):
"""Construct a matrix where the (v + n*V)th entry of row i is 1 if the n-th target word
for example i is v, and all other entries are 0.
Note: if the n-th target word index is 0, this corresponds to the [MASK] token,
and we set the entry to be 0.
"""
batch_size, context_len = targets.shape
expanded_targets = np.zeros((batch_size, context_len * len(self.vocab)))
offset = np.repeat((np.arange(context_len) * len(self.vocab))[np.newaxis, :], batch_size, axis=0) # [[0, V, 2V], [0, V, 2V], ...]
targets_offset = targets + offset
for c in range(context_len):
expanded_targets[np.arange(batch_size), targets_offset[:,c]] = 1.
if mask_zero_index:
# Note: Set the targets with index 0, V, 2V to be zero since it corresponds to the [MASK] token
expanded_targets[np.arange(batch_size), offset[:,c]] = 0.
return expanded_targets
def compute_loss_derivative(self, output_activations, expanded_target_batch, target_mask):
"""Compute the gradient of cross-entropy loss wrt output logits z
For example, with N = 4 output word positions the output activations are laid out as:
[y_{0} ... y_{V-1}] [y_{V} ... y_{2*V-1}] [y_{2*V} ... y_{3*V-1}] [y_{3*V} ... y_{4*V-1}]
where for column v + n*V,
y_{v + n*V} = e^{z_{v + n*V}} / \sum_{m=0}^{V-1} e^{z_{m + n*V}}, for n=0,...,N-1
This function should return a dC / dz matrix of size [batch_size x (vocab_size * context_len)],
where each row i in dC / dz has columns 0 to V-1 containing the gradient the 1st output
context word from i-th training example, then columns vocab_size to 2*vocab_size - 1 for the 2nd
output context word of the i-th training example, etc.
C is the loss function summed across all examples as well:
C = -\sum_{i,n,j} mask_{i,n} (t_{i, j + n*V} log y_{i, j + n*V}), for j=0,...,V-1, and n=0,...,N-1
where mask_{i,n} = 1 if the i-th training example has the n-th context word as a target,
otherwise mask_{i,n} = 0.
Args:
output_activations: A [batch_size x (context_len * vocab_size)] matrix,
for the activations of the output layer, i.e. the y_j's.
expanded_target_batch: A [batch_size x (context_len * vocab_size)] matrix,
where expanded_target_batch[i,n*V:(n+1)*V] is the indicator vector for
the n-th context target word position, i.e. the (i, j + n*V) entry is 1 if, for the
i'th example, the context word at position n is j, and 0 otherwise.
target_mask: A [batch_size x context_len x 1] tensor, where target_mask[i,n] = 1
if for the i'th example the n-th context word is a target position, otherwise 0
Outputs:
loss_derivative: A [batch_size x (context_len * vocab_size)] matrix,
where loss_derivative[i,0:vocab_size] contains the gradient
dC / dz_0 for the i-th training example gradient for 1st output
context word, and loss_derivative[i,vocab_size:2*vocab_size] for
the 2nd output context word of the i-th training example, etc.
"""
# Reshape output_activations and expanded_target_batch and use broadcasting
output_activations_reshape = output_activations.reshape(-1, self.context_len, len(self.vocab))
expanded_target_batch_reshape = expanded_target_batch.reshape(-1, self.context_len, len(self.vocab))
gradient_masked_reshape = target_mask * (output_activations_reshape - expanded_target_batch_reshape)
gradient_masked = gradient_masked_reshape.reshape(-1, self.context_len * len(self.vocab))
return gradient_masked
def compute_loss(self, output_activations, expanded_target_batch, target_mask):
"""Compute the total cross entropy loss over a mini-batch.
Args:
output_activations: [batch_size x (context_len * vocab_size)] matrix,
for the activations of the output layer, i.e. the y_j's.
expanded_target_batch: A [batch_size x (context_len * vocab_size)] matrix,
where expanded_target_batch[i,n*V:(n+1)*V] is the indicator vector for
the n-th context target word position, i.e. the (i, j + n*V) entry is 1 if, for the
i'th example, the context word at position n is j, and 0 otherwise.
target_mask: A [batch_size x context_len x 1] tensor, where target_mask[i,n,0] = 1
if for the i'th example the n-th context word is a target position, otherwise 0
Returns:
loss: a scalar for the total cross entropy loss over the batch,
defined in Part 3
"""
########################### YOUR CODE HERE ##############################
pass
############################################################################
return loss
def compute_activations(self, inputs):
"""Compute the activations on a batch given the inputs. Returns an Activations instance.
You should try to read and understand this function, since this will give you clues for
how to implement back_propagate."""
batch_size = inputs.shape[0]
if inputs.shape[1] != self.context_len:
raise RuntimeError('Dimension of the input vectors should be {}, but is instead {}'.format(
self.context_len, inputs.shape[1]))
# Embedding layer
# Look up the input word indices in the word_embedding_weights matrix
embedding_layer_state = self.params.word_embedding_weights[inputs.reshape([-1]), :].reshape([batch_size, self.embedding_layer_dim])
# Hidden layer
inputs_to_hid = np.dot(embedding_layer_state, self.params.embed_to_hid_weights.T) + \
self.params.hid_bias
# Apply logistic activation function
hidden_layer_state = 1. / (1. + np.exp(-inputs_to_hid))
# Output layer
inputs_to_softmax = np.dot(hidden_layer_state, self.params.hid_to_output_weights.T) + \
self.params.output_bias
# Subtract maximum.
# Remember that adding or subtracting the same constant from each input to a
# softmax unit does not affect the outputs. So subtract the maximum to
# make all inputs <= 0. This prevents overflows when computing their exponents.
inputs_to_softmax -= inputs_to_softmax.max(1).reshape((-1, 1))
# Take softmax along each V chunks in the output layer
output_layer_state = np.exp(inputs_to_softmax)
output_layer_state_shape = output_layer_state.shape
output_layer_state = output_layer_state.reshape((-1, self.context_len, len(self.vocab)))
output_layer_state /= output_layer_state.sum(axis=-1, keepdims=True) # Softmax along vocab of each target word
output_layer_state = output_layer_state.reshape(output_layer_state_shape) # Flatten back to 2D matrix
return Activations(embedding_layer_state, hidden_layer_state, output_layer_state)
def back_propagate(self, input_batch, activations, loss_derivative):
"""Compute the gradient of the loss function with respect to the trainable parameters
of the model.
Part of this function is already completed, but you need to fill in the derivative
computations for hid_to_output_weights_grad, output_bias_grad, embed_to_hid_weights_grad,
and hid_bias_grad. See the documentation for the Params class for a description of what
these matrices represent.
Args:
input_batch: A [batch_size x context_length] matrix containing the
indices of the context words
activations: an Activations object representing the output of
Model.compute_activations
loss_derivative: A [batch_size x (context_len * vocab_size)] matrix,
where loss_derivative[i,0:vocab_size] contains the gradient
dC / dz_0 for the i-th training example gradient for 1st output
context word, and loss_derivative[i,vocab_size:2*vocab_size] for
the 2nd output context word of the i-th training example, etc.
Obtained from calling compute_loss_derivative()
Returns:
Params object containing the gradient for word_embedding_weights_grad,
embed_to_hid_weights_grad, hid_to_output_weights_grad,
hid_bias_grad, output_bias_grad
"""
# The matrix with values dC / dz_j, where dz_j is the input to the jth hidden unit,
# i.e. h_j = 1 / (1 + e^{-z_j})
hid_deriv = np.dot(loss_derivative, self.params.hid_to_output_weights) \
* activations.hidden_layer * (1. - activations.hidden_layer)
hid_to_output_weights_grad = np.dot(loss_derivative.T, activations.hidden_layer)
########################### YOUR CODE HERE ##############################
# output_bias_grad = ...
# embed_to_hid_weights_grad = ...
############################################################################
hid_bias_grad = hid_deriv.sum(0)
# The matrix of derivatives for the embedding layer
embed_deriv = np.dot(hid_deriv, self.params.embed_to_hid_weights)
# Word Embedding Weights gradient
word_embedding_weights_grad = np.dot(self.indicator_matrix(input_batch.reshape([-1,1]), mask_zero_index=False).T,
embed_deriv.reshape([-1, self.embedding_dim]))
return Params(word_embedding_weights_grad, embed_to_hid_weights_grad, hid_to_output_weights_grad,
hid_bias_grad, output_bias_grad)
def sample_input_mask(self, batch_size):
"""Samples a binary mask for the inputs of size batch_size x context_len
For each row, at most one element will be 1.
"""
mask_idx = np.random.randint(self.context_len, size=(batch_size,))
mask = np.zeros((batch_size, self.context_len), dtype=int)  # Convert to one-hot B x N, B batch size, N context len
mask[np.arange(batch_size), mask_idx] = 1
return mask
def evaluate(self, inputs, batch_size=100):
"""Compute the average cross-entropy over a dataset.
inputs: a matrix of shape (number of examples) x context_len"""
ndata = inputs.shape[0]
total = 0.
for input_batch in get_batches(inputs, batch_size):
mask = self.sample_input_mask(batch_size)
input_batch_masked = input_batch * (1 - mask)
activations = self.compute_activations(input_batch_masked)
expanded_target_batch = self.indicator_matrix(input_batch)
target_mask = np.expand_dims(mask, axis=2)
cross_entropy = self.compute_loss(activations.output_layer, expanded_target_batch, target_mask)
total += cross_entropy
return total / float(ndata)
def display_nearest_words(self, word, k=10):
"""List the k words nearest to a given word, along with their distances."""
if word not in self.vocab:
print('Word "{}" not in vocabulary.'.format(word))
return
# Compute distance to every other word.
idx = self.vocab.index(word)
word_rep = self.params.word_embedding_weights[idx, :]
diff = self.params.word_embedding_weights - word_rep.reshape((1, -1))
distance = np.sqrt(np.sum(diff ** 2, axis=1))
# Sort by distance.
order = np.argsort(distance)
order = order[1:1 + k] # The nearest word is the query word itself, skip that.
for i in order:
print('{}: {}'.format(self.vocab[i], distance[i]))
def word_distance(self, word1, word2):
"""Compute the distance between the vector representations of two words."""
if word1 not in self.vocab:
raise RuntimeError('Word "{}" not in vocabulary.'.format(word1))
if word2 not in self.vocab:
raise RuntimeError('Word "{}" not in vocabulary.'.format(word2))
idx1, idx2 = self.vocab.index(word1), self.vocab.index(word2)
word_rep1 = self.params.word_embedding_weights[idx1, :]
word_rep2 = self.params.word_embedding_weights[idx2, :]
diff = word_rep1 - word_rep2
return np.sqrt(np.sum(diff ** 2))
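# To see the one-hot layout that `indicator_matrix` produces, here is a standalone re-creation on made-up sizes ($V = 4$, $N = 2$); the variable names are illustrative only:

```python
import numpy as np

V, N = 4, 2                               # vocab size, context length
targets = np.array([[2, 1],
                    [0, 3]])              # word index per position; 0 is [MASK]
B = targets.shape[0]

expanded = np.zeros((B, N * V))
cols = targets + np.arange(N) * V         # column index v + n*V for each target
expanded[np.arange(B)[:, None], cols] = 1.0
expanded[:, ::V][targets == 0] = 0.0      # zero out [MASK] targets (columns 0, V, ...)
```

Row 0 gets ones at columns 2 and 5 (word 2 at position 0, word 1 at position 1); row 1 keeps only column 7, since its first target is the `[MASK]` index.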
# + [markdown] id="JbwZCTkboEhz"
# ## 3.3 Print the gradients [1pt]
#
# To make your life easier, we have provided the routine `check_gradients`, which checks your gradients using finite differences.
# You should make sure this check passes before continuing with the assignment. Once `check_gradients()` passes, call `print_gradients()` and include its output in your write-up.
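# The check is based on the central difference approximation $f'(x) \approx \frac{f(x+\epsilon) - f(x-\epsilon)}{2\epsilon}$, compared to the analytic gradient via a relative error. A minimal standalone illustration of the idea (toy function, not the assignment code):

```python
import numpy as np

def f(x):
    return np.sin(x) * x ** 2          # any differentiable scalar function

x, eps = 1.3, 1e-5
empirical = (f(x + eps) - f(x - eps)) / (2.0 * eps)   # central difference
exact = np.cos(x) * x ** 2 + 2.0 * x * np.sin(x)      # analytic derivative
rel = abs(empirical - exact) / (abs(empirical) + abs(exact))
assert rel < 1e-8
```

`check_gradients` does exactly this, but perturbing individual entries of each parameter matrix.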
# + id="B5soRTiRn6W4"
def relative_error(a, b):
return np.abs(a - b) / (np.abs(a) + np.abs(b))
def check_output_derivatives(model, input_batch, target_batch, mask):
def softmax(z):
z = z.copy()
z -= z.max(-1, keepdims=True)
y = np.exp(z)
y /= y.sum(-1, keepdims=True)
return y
batch_size = input_batch.shape[0]
z = np.random.normal(size=(batch_size, model.context_len, model.vocab_size))
y = softmax(z).reshape((batch_size, model.context_len * model.vocab_size))
z = z.reshape((batch_size, model.context_len * model.vocab_size))
expanded_target_batch = model.indicator_matrix(target_batch)
target_mask = np.expand_dims(mask, axis=2)
loss_derivative = model.compute_loss_derivative(y, expanded_target_batch, target_mask)
if loss_derivative is None:
print('Loss derivative not implemented yet.')
return False
if loss_derivative.shape != (batch_size, model.vocab_size * model.context_len):
print('Loss derivative should be size {} but is actually {}.'.format(
(batch_size, model.vocab_size * model.context_len), loss_derivative.shape))
return False
def obj(z):
z = z.reshape((-1, model.context_len, model.vocab_size))
y = softmax(z).reshape((batch_size, model.context_len * model.vocab_size))
return model.compute_loss(y, expanded_target_batch, target_mask)
for count in range(1000):
i, j = np.random.randint(0, loss_derivative.shape[0]), np.random.randint(0, loss_derivative.shape[1])
z_plus = z.copy()
z_plus[i, j] += EPS
obj_plus = obj(z_plus)
z_minus = z.copy()
z_minus[i, j] -= EPS
obj_minus = obj(z_minus)
empirical = (obj_plus - obj_minus) / (2. * EPS)
rel = relative_error(empirical, loss_derivative[i, j])
if rel > 1e-4:
print('The loss derivative has a relative error of {}, which is too large.'.format(rel))
return False
print('The loss derivative looks OK.')
return True
def check_param_gradient(model, param_name, input_batch, target_batch, mask):
activations = model.compute_activations(input_batch)
expanded_target_batch = model.indicator_matrix(target_batch)
target_mask = np.expand_dims(mask, axis=2)
loss_derivative = model.compute_loss_derivative(activations.output_layer, expanded_target_batch, target_mask)
param_gradient = model.back_propagate(input_batch, activations, loss_derivative)
def obj(model):
activations = model.compute_activations(input_batch)
return model.compute_loss(activations.output_layer, expanded_target_batch, target_mask)
dims = getattr(model.params, param_name).shape
is_matrix = (len(dims) == 2)
if getattr(param_gradient, param_name).shape != dims:
print('The gradient for {} should be size {} but is actually {}.'.format(
param_name, dims, getattr(param_gradient, param_name).shape))
return
for count in range(1000):
if is_matrix:
slc = np.random.randint(0, dims[0]), np.random.randint(0, dims[1])
else:
slc = np.random.randint(dims[0])
model_plus = model.copy()
getattr(model_plus.params, param_name)[slc] += EPS
obj_plus = obj(model_plus)
model_minus = model.copy()
getattr(model_minus.params, param_name)[slc] -= EPS
obj_minus = obj(model_minus)
empirical = (obj_plus - obj_minus) / (2. * EPS)
exact = getattr(param_gradient, param_name)[slc]
rel = relative_error(empirical, exact)
if rel > 5e-4:
print('The loss derivative has a relative error of {}, which is too large for param {}.'.format(rel, param_name))
return False
print('The gradient for {} looks OK.'.format(param_name))
return True
def load_partially_trained_model():
obj = pickle.load(open(PARTIALLY_TRAINED_MODEL, 'rb'))
params = Params(obj['word_embedding_weights'], obj['embed_to_hid_weights'],
obj['hid_to_output_weights'], obj['hid_bias'],
obj['output_bias'])
vocab = obj['vocab']
return Model(params, vocab)
def check_gradients():
"""Check the computed gradients using finite differences."""
np.random.seed(0)
np.seterr(all='ignore') # suppress a warning which is harmless
model = load_partially_trained_model()
data_obj = pickle.load(open(data_location, 'rb'))
train_inputs = data_obj['train_inputs']
input_batch = train_inputs[:100, :]
mask = model.sample_input_mask(input_batch.shape[0])
input_batch_masked = input_batch * (1 - mask)
if not check_output_derivatives(model, input_batch_masked, input_batch, mask):
return
for param_name in ['word_embedding_weights', 'embed_to_hid_weights', 'hid_to_output_weights',
'hid_bias', 'output_bias']:
check_param_gradient(model, param_name, input_batch_masked, input_batch, mask)
def print_gradients():
"""Print out certain derivatives for grading."""
np.random.seed(0)
model = load_partially_trained_model()
data_obj = pickle.load(open(data_location, 'rb'))
train_inputs = data_obj['train_inputs']
input_batch = train_inputs[:100, :]
mask = model.sample_input_mask(input_batch.shape[0])
input_batch_masked = input_batch * (1 - mask)
activations = model.compute_activations(input_batch_masked)
expanded_target_batch = model.indicator_matrix(input_batch)
target_mask = np.expand_dims(mask, axis=2)
loss_derivative = model.compute_loss_derivative(activations.output_layer, expanded_target_batch, target_mask)
param_gradient = model.back_propagate(input_batch, activations, loss_derivative)
print('loss_derivative[46, 785]', loss_derivative[46, 785])
print('loss_derivative[46, 766]', loss_derivative[46, 766])
print('loss_derivative[5, 42]', loss_derivative[5, 42])
print('loss_derivative[5, 31]', loss_derivative[5, 31])
print()
print('param_gradient.word_embedding_weights[27, 2]', param_gradient.word_embedding_weights[27, 2])
print('param_gradient.word_embedding_weights[43, 3]', param_gradient.word_embedding_weights[43, 3])
print('param_gradient.word_embedding_weights[22, 4]', param_gradient.word_embedding_weights[22, 4])
print('param_gradient.word_embedding_weights[2, 5]', param_gradient.word_embedding_weights[2, 5])
print()
print('param_gradient.embed_to_hid_weights[10, 2]', param_gradient.embed_to_hid_weights[10, 2])
print('param_gradient.embed_to_hid_weights[15, 3]', param_gradient.embed_to_hid_weights[15, 3])
print('param_gradient.embed_to_hid_weights[30, 9]', param_gradient.embed_to_hid_weights[30, 9])
print('param_gradient.embed_to_hid_weights[35, 21]', param_gradient.embed_to_hid_weights[35, 21])
print()
print('param_gradient.hid_bias[10]', param_gradient.hid_bias[10])
print('param_gradient.hid_bias[20]', param_gradient.hid_bias[20])
print()
print('param_gradient.output_bias[0]', param_gradient.output_bias[0])
print('param_gradient.output_bias[1]', param_gradient.output_bias[1])
print('param_gradient.output_bias[2]', param_gradient.output_bias[2])
print('param_gradient.output_bias[3]', param_gradient.output_bias[3])
# + id="6Tlficab3ZfJ"
# Run this to check if your implement gradients matches the finite difference within tolerance
# Note: this may take a few minutes to go through all the checks
check_gradients()
# + id="1TCLl7v189SI"
# Run this to print out the gradients
print_gradients()
# + [markdown] id="qtC-br-N5xGT"
# ## 3.4 Run model training [0pt]
#
# Once you've implemented the gradient computation, you'll need to train the model.
# The function *train* implements the main training procedure.
# It takes two arguments:
#
#
# * `embedding_dim`: The number of dimensions in the distributed representation.
# * `num_hid`: The number of hidden units
#
#
# As the model trains, the script prints out some numbers that tell you how well the training is going.
# It shows:
#
#
# * The cross entropy on the last 100 mini-batches of the training set. This is shown after every 100 mini-batches.
# * The cross entropy on the entire validation set every 1000 mini-batches of training.
#
# At the end of training, this function shows the cross entropies on the training, validation and test sets.
# It will return a *Model* instance.
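# The update rule inside `train` is stochastic gradient descent with momentum: the momentum vector is updated as `delta = momentum * delta + gradient`, and the parameters as `params -= learning_rate * delta`. In isolation, on a toy quadratic loss (illustrative values only):

```python
import numpy as np

lr, momentum = 0.1, 0.9
w = np.array([5.0, -3.0])          # toy parameter vector
delta = np.zeros_like(w)           # momentum vector, initialized to zeros

for _ in range(300):
    grad = 2.0 * w                 # gradient of the toy loss ||w||^2
    delta = momentum * delta + grad
    w = w - lr * delta             # same form of update as in train()

assert np.abs(w).max() < 1e-3      # converged to the minimum at 0
```

In the actual training loop, the operator overloads on `Params` let the same two lines act on all five parameter arrays at once.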
# + id="akBYJQOdLfaF"
_train_inputs = None
_train_targets = None
_vocab = None
DEFAULT_TRAINING_CONFIG = {'batch_size': 100, # the size of a mini-batch
'learning_rate': 0.1, # the learning rate
'momentum': 0.9, # the decay parameter for the momentum vector
'epochs': 50, # the maximum number of epochs to run
'init_wt': 0.01, # the standard deviation of the initial random weights
'context_len': 4, # the number of context words used
'show_training_CE_after': 100, # measure training error after this many mini-batches
'show_validation_CE_after': 1000, # measure validation error after this many mini-batches
}
def find_occurrences(word1, word2, word3):
"""Lists all the words that followed a given tri-gram in the training set and the number of
times each one followed it."""
# cache the data so we don't keep reloading
global _train_inputs, _train_targets, _vocab
if _train_inputs is None:
data_obj = pickle.load(open(data_location, 'rb'))
_vocab = data_obj['vocab']
_train_inputs, _train_targets = data_obj['train_inputs'], data_obj['train_targets']
if word1 not in _vocab:
raise RuntimeError('Word "{}" not in vocabulary.'.format(word1))
if word2 not in _vocab:
raise RuntimeError('Word "{}" not in vocabulary.'.format(word2))
if word3 not in _vocab:
raise RuntimeError('Word "{}" not in vocabulary.'.format(word3))
idx1, idx2, idx3 = _vocab.index(word1), _vocab.index(word2), _vocab.index(word3)
idxs = np.array([idx1, idx2, idx3])
matches = np.all(_train_inputs == idxs.reshape((1, -1)), 1)
if np.any(matches):
counts = collections.defaultdict(int)
for m in np.where(matches)[0]:
counts[_vocab[_train_targets[m]]] += 1
word_counts = sorted(list(counts.items()), key=lambda t: t[1], reverse=True)
print('The tri-gram "{} {} {}" was followed by the following words in the training set:'.format(
word1, word2, word3))
for word, count in word_counts:
if count > 1:
print(' {} ({} times)'.format(word, count))
else:
print(' {} (1 time)'.format(word))
else:
print('The tri-gram "{} {} {}" did not occur in the training set.'.format(word1, word2, word3))
def train(embedding_dim, num_hid, config=DEFAULT_TRAINING_CONFIG):
"""This is the main training routine for the language model. It takes two parameters:
embedding_dim, the dimension of the embedding space
num_hid, the number of hidden units."""
# For reproducibility
np.random.seed(123)
# Load the data
data_obj = pickle.load(open(data_location, 'rb'))
vocab = data_obj['vocab']
train_inputs = data_obj['train_inputs']
valid_inputs = data_obj['valid_inputs']
test_inputs = data_obj['test_inputs']
# Randomly initialize the trainable parameters
model = Model.random_init(config['init_wt'], vocab, config['context_len'], embedding_dim, num_hid)
# Variables used for early stopping
best_valid_CE = np.inf
end_training = False
# Initialize the momentum vector to all zeros
delta = Params.zeros(len(vocab), config['context_len'], embedding_dim, num_hid)
this_chunk_CE = 0.
batch_count = 0
for epoch in range(1, config['epochs'] + 1):
if end_training:
break
print()
print('Epoch', epoch)
for input_batch in get_batches(train_inputs, config['batch_size']):
batch_count += 1
# For each example (row in input_batch), select one word to mask out
mask = model.sample_input_mask(config['batch_size'])
input_batch_masked = input_batch * (1 - mask) # We only zero out one word per row
# Forward propagate
activations = model.compute_activations(input_batch_masked)
# Compute loss derivative
expanded_target_batch = model.indicator_matrix(input_batch)
loss_derivative = model.compute_loss_derivative(activations.output_layer, expanded_target_batch, mask[:,:, np.newaxis])
loss_derivative /= config['batch_size']
# Measure loss function
cross_entropy = model.compute_loss(activations.output_layer, expanded_target_batch, np.expand_dims(mask, axis=2)) / config['batch_size']
this_chunk_CE += cross_entropy
if batch_count % config['show_training_CE_after'] == 0:
print('Batch {} Train CE {:1.3f}'.format(
batch_count, this_chunk_CE / config['show_training_CE_after']))
this_chunk_CE = 0.
# Backpropagate
loss_gradient = model.back_propagate(input_batch, activations, loss_derivative)
# Update the momentum vector and model parameters
delta = config['momentum'] * delta + loss_gradient
model.params -= config['learning_rate'] * delta
# Validate
if batch_count % config['show_validation_CE_after'] == 0:
print('Running validation...')
cross_entropy = model.evaluate(valid_inputs)
print('Validation cross-entropy: {:1.3f}'.format(cross_entropy))
if cross_entropy > best_valid_CE:
print('Validation error increasing! Training stopped.')
end_training = True
break
best_valid_CE = cross_entropy
print()
train_CE = model.evaluate(train_inputs)
print('Final training cross-entropy: {:1.3f}'.format(train_CE))
valid_CE = model.evaluate(valid_inputs)
print('Final validation cross-entropy: {:1.3f}'.format(valid_CE))
test_CE = model.evaluate(test_inputs)
print('Final test cross-entropy: {:1.3f}'.format(test_CE))
return model
# + [markdown] id="ZX-g3K-F55h7"
# Run the training.
#
# + id="BwlRG7j8LmIM"
embedding_dim = 16
num_hid = 128
trained_model = train(embedding_dim, num_hid)
# + [markdown] id="2FD5Om0ypNPe"
# To convince us that you have correctly implemented the gradient computations, please include the following with your assignment submission:
#
# * [ ] You will submit `a1-code.ipynb` through MarkUs.
# You do not need to modify any of the code except the parts we asked you to implement.
# * [ ] In your writeup, include the output of the function `print_gradients`.
# This prints out part of the gradients for a partially trained network which we have provided, and we will check them against the correct outputs. **Important:** make sure to give the output of `print_gradients`, **not** `check_gradients`.
#
# + [markdown] id="_VUBTt0ZQl3s"
# # Part 4: Bias in Word Embeddings (2pts)
#
# Unfortunately, stereotypes and prejudices are often reflected in the outputs of natural language processing algorithms. For example, Google Translate is more likely to translate a non-English sentence to "_He_ is a doctor" than "_She_ is a doctor" when the sentence is ambiguous. In this section, you will explore how bias enters natural language processing algorithms by implementing and analyzing a popular method for measuring bias in word embeddings.
#
# > Note: In AI and machine learning, **bias** generally refers to prior information, a necessary prerequisite for intelligent action. However, bias can be problematic when it is derived from aspects of human culture known to lead to harmful behaviour, such as stereotypes and prejudices.
# + [markdown] id="-HlZw3-5Q5XJ"
# ## 4.1 WEAT method for detecting bias [1pt]
#
# Word embedding models such as GloVe attempt to learn a vector space where semantically similar words are clustered close together. However, they have been shown to learn problematic associations, e.g. by embedding "man" more closely to "doctor" than "woman" (and vice versa for "nurse"). To detect such biases in word embeddings, ["Semantics derived automatically from language corpora contain human-like biases"](https://www.science.org/doi/10.1126/science.aal4230) introduced the Word Embedding Association Test (WEAT). The WEAT test measures whether two _target_ word sets (e.g., {programmer, engineer, scientist, ...} and {nurse, teacher, librarian, ...}) have the same relative association to two _attribute_ word sets (e.g., {man, male, ...} and {woman, female, ...}).
#
# > There is an excellent blog on bias in word embeddings and the WEAT test [here](https://developers.googleblog.com/2018/04/text-embedding-models-contain-bias.html).
#
# In the following section, you will run a WEAT test for a given set of target and attribute words. Specifically, you must implement the function `weat_association_score` and then run the remaining cells to compute the p-value and effect size. Before you begin, make sure you understand the formal definition of the WEAT test given in section 4.1 of the handout.
#
#
#
# + [markdown] id="FMfmBOJnqNmJ"
# Run the following cell to download pretrained GloVe embeddings.
# + colab={"base_uri": "https://localhost:8080/"} id="AI1hYohRQ-lz" outputId="4be6e9d6-2012-4842-f388-bec934b55384"
import gensim.downloader as api
glove = api.load("glove-wiki-gigaword-50")
num_words, num_dims = glove.vectors.shape
print(f"Downloaded {num_words} word embeddings of dimension {num_dims}.")
# + [markdown] id="x0mViYtFnwLR"
# Before proceeding, you should familiarize yourself with the `similarity` method, which computes the cosine similarity between two words. You will need this method to implement `weat_association_score`. Some examples are given below.
#
# > Can you spot the gender bias between occupations in the examples below?
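# Under the hood, `similarity` is plain cosine similarity between the two word vectors. As a sanity
# check, here is a minimal sketch of that computation on a toy two-dimensional embedding (the
# vectors are made up for illustration; with the real GloVe vectors you would index `glove["man"]`):

```python
import numpy as np

def cosine_similarity(u, v):
    # cos(u, v) = (u . v) / (|u| |v|)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 2-d embeddings -- hypothetical values, for illustration only
toy = {"man": np.array([1.0, 0.2]), "scientist": np.array([0.9, 0.4])}
print(cosine_similarity(toy["man"], toy["scientist"]))
```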
# + colab={"base_uri": "https://localhost:8080/"} id="SuRrncLtn5Tl" outputId="e813ab37-87a2-4715-bf62-164c1c740164"
print(glove.similarity("man", "scientist"))
print(glove.similarity("man", "nurse"))
print(glove.similarity("woman", "scientist"))
print(glove.similarity("woman", "nurse"))
# + [markdown] id="WlJS8luQoQV5"
# Below, we define our target words (`occupations`) and attribute words (`A` and `B`). Our target words consist of *occupations*, and our attribute words are *gendered*. We will use the WEAT test to determine if the word embeddings contain gender biases for certain occupations.
# + id="H1OEkmNiX-AH"
# Target words (occupations)
occupations = ["programmer", "engineer", "scientist", "nurse", "teacher", "librarian"]
# Two sets of gendered attribute words, A and B
A = ["man", "male", "he", "boyish"]
B = ["woman", "female", "she", "girlish"]
# + [markdown] id="lvTau4w1s2es"
# - [ ] __TODO__: Implement the following function, `weat_association_score`, which computes the association of a word _w_ with the attribute sets:
#
# $$s(w, A, B) = \text{mean}_{a\in A} \cos(w, a) - \text{mean}_{b\in B} \cos(w,b)$$
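# For reference, the formula can be written directly in terms of a generic vector lookup. The
# sketch below runs on a hypothetical embedding dict with made-up vectors -- it illustrates the
# formula itself, not the assignment solution against the GloVe object:

```python
import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def weat_score(w, A, B, emb):
    # mean_{a in A} cos(w, a) - mean_{b in B} cos(w, b)
    mean_A = np.mean([cos(emb[w], emb[a]) for a in A])
    mean_B = np.mean([cos(emb[w], emb[b]) for b in B])
    return mean_A - mean_B

# Hypothetical 2-d vectors: "doctor" points along "he" and is orthogonal to "she"
emb = {"doctor": np.array([1.0, 0.0]),
       "he": np.array([1.0, 0.0]),
       "she": np.array([0.0, 1.0])}
print(weat_score("doctor", ["he"], ["she"], emb))  # 1.0 - 0.0 = 1.0
```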
# + id="FH7KHVgPYyl5"
def weat_association_score(w, A, B, glove):
"""Given a target word w, the set of attribute words A and B,
and the GloVe embeddings, returns the association score s(w, A, B).
"""
########################### YOUR CODE HERE ##############################
pass
############################################################################
# + [markdown] id="5VM9-DCewvsJ"
# Use the following code to check your implementation:
# + id="QlN4D0JRwgpu"
np.isclose(weat_association_score("programmer", A, B, glove), 0.019615129)
# + [markdown] id="Anbhmfiy_qiU"
# Now, compute the WEAT association score for each element of `occupations` and the attribute sets A and B. Include the printed out association scores in your pdf.
# + id="4ld48gnL_ySM"
# TODO: Print out the weat association score for each occupation
########################### YOUR CODE HERE ##############################
############################################################################
# + [markdown] id="SHf4e3Aextcz"
# ## 4.2 Reasons for bias in word embeddings [0pt]
#
# Based on these WEAT association scores, do the pretrained word embeddings associate certain occupations with one gender more than another? What might cause word embedding models to learn certain stereotypes and prejudices? How might this be a problem in downstream applications?
# + [markdown] id="rDaumHBBSXm7"
# 4.2 Answer:
# **\*\*TODO: Write Part 4.2 answer here\*\***
# + [markdown] id="KzFpg3AFRAp0"
# ## 4.3 Analyzing WEAT [1pt]
#
# While WEAT makes intuitive sense by asserting that closeness in the embedding space indicates greater similarity, more recent work ([Ethayarajh et al. [2019]](https://aclanthology.org/P19-1166.pdf)) has further analyzed the mathematical assertions and found some flaws with this method. Analyzing edge cases is a good way to find logical inconsistencies with any algorithm, and WEAT in particular can behave strangely when A and B contain just one word each.
#
#
#
# + [markdown] id="NFvAf7jXWhrS"
# ### 4.3.1 [0.5 pts]
# Find 1-word subsets of the original A and B that reverse the sign of the association score for at least some of the occupations.
# + id="mEuECz_6PEMQ"
## Original sets provided here for convenience - try commenting out all but one word from each set
# Two sets of gendered attribute words, C and D
C = ["man",
"male",
"he",
"boyish"
]
D = ["woman",
"female",
"she",
"girlish"
]
# TODO: Print out the weat association score for each word in occupations, with regards to C and D
########################### YOUR CODE HERE ##############################
############################################################################
# + [markdown] id="SStZzhgNVcYA"
# ### 4.3.2 [0.5 pts]
#
# Consider the fact that the squared norm of a word embedding is linear in the log probability of the word in the training corpus. In other words, the more common a word is in the training corpus, the larger the norm of its word embedding. (See the handout for a more thorough description.)
#
# Briefly explain how this fact might contribute to the results from the previous section when using different attribute words. Provide your answers in no more than three sentences.
#
# *Hint 2: The paper cited above is a great resource if you are stuck.*
# + [markdown] id="0J9-uOZgRQCL"
# 4.3 Answer:
# **\*\*TODO: Write Part 4.3 answer here\*\***
# + [markdown] id="pzh6MIbQAyBi"
# ### 4.3.3 Relative association between two sets of target words [0 pts]
#
# In the original WEAT paper, the authors do not examine the association of individual words with attributes, but rather compare the relative association of two sets of target words. For example, are insect words more associated with positive or negative attributes than flower words?
#
# Formally, let $X$ and $Y$ be two sets of target words of equal size. The WEAT test statistic is given by:
# $$ s(X, Y, A, B) = \sum_{x\in X} s(x, A, B) - \sum_{y \in Y} s(y, A, B) $$
#
# Will the same technique from the previous section work to manipulate this test statistic as well? Provide your answer in no more than 3 sentences.
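# Given any per-word association function s(w, A, B) (such as the one implemented above), the test
# statistic is a plain sum over the two target sets. A sketch, using a dummy scoring function with
# made-up values in place of real embedding scores:

```python
def weat_test_statistic(X, Y, A, B, score):
    # s(X, Y, A, B) = sum_{x in X} s(x, A, B) - sum_{y in Y} s(y, A, B)
    return sum(score(x, A, B) for x in X) - sum(score(y, A, B) for y in Y)

# Illustration only: a dummy per-word score, not computed from embeddings
dummy = lambda w, A, B: 1.0 if w.startswith("x") else 0.25
print(weat_test_statistic(["x1", "x2"], ["y1", "y2"], None, None, dummy))  # 2.0 - 0.5 = 1.5
```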
# + [markdown] id="88rIqovtLJt9"
# 4.3.3 Answer: **TODO: Write 4.3.3 answer here**
# + [markdown] id="-DVGkTS3CPqi"
# # What you have to submit
#
# Refer to the handout for the checklist
# + id="jcVh0VMgFGsW"
| PA1/a1_code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # GANILLA model
#
# > Defines the GANILLA model architecture.
# +
# default_exp models.ganilla
# -
#export
from fastai.vision.all import *
from fastai.basics import *
from typing import List
from fastai.vision.gan import *
from upit.models.cyclegan import *
from huggingface_hub import PyTorchModelHubMixin
#hide
from nbdev.showdoc import *
# We use the generator that was introduced in the [GANILLA paper](https://arxiv.org/abs/1703.10593).
# ## Generator
#export
class BasicBlock_Ganilla(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, use_dropout, stride=1):
super(BasicBlock_Ganilla, self).__init__()
self.rp1 = nn.ReflectionPad2d(1)
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=0, bias=False)
self.bn1 = nn.InstanceNorm2d(planes)
self.use_dropout = use_dropout
if use_dropout:
self.dropout = nn.Dropout(use_dropout)
self.rp2 = nn.ReflectionPad2d(1)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=0, bias=False)
self.bn2 = nn.InstanceNorm2d(planes)
self.out_planes = planes
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
nn.InstanceNorm2d(self.expansion*planes)
)
self.final_conv = nn.Sequential(
nn.ReflectionPad2d(1),
nn.Conv2d(self.expansion * planes * 2, self.expansion * planes, kernel_size=3, stride=1,
padding=0, bias=False),
nn.InstanceNorm2d(self.expansion * planes)
)
else:
self.final_conv = nn.Sequential(
nn.ReflectionPad2d(1),
nn.Conv2d(planes*2, planes, kernel_size=3, stride=1, padding=0, bias=False),
nn.InstanceNorm2d(planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(self.rp1(x))))
if self.use_dropout:
out = self.dropout(out)
out = self.bn2(self.conv2(self.rp2(out)))
inputt = self.shortcut(x)
catted = torch.cat((out, inputt), 1)
out = self.final_conv(catted)
out = F.relu(out)
return out
#export
class PyramidFeatures(nn.Module):
def __init__(self, C2_size, C3_size, C4_size, C5_size, fpn_weights, feature_size=128):
super(PyramidFeatures, self).__init__()
self.sum_weights = fpn_weights #[1.0, 0.5, 0.5, 0.5]
# upsample C5 to get P5 from the FPN paper
self.P5_1 = nn.Conv2d(C5_size, feature_size, kernel_size=1, stride=1, padding=0)
self.P5_upsampled = nn.Upsample(scale_factor=2, mode='nearest')
#self.rp1 = nn.ReflectionPad2d(1)
#self.P5_2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, stride=1, padding=0)
# add P5 elementwise to C4
self.P4_1 = nn.Conv2d(C4_size, feature_size, kernel_size=1, stride=1, padding=0)
self.P4_upsampled = nn.Upsample(scale_factor=2, mode='nearest')
#self.rp2 = nn.ReflectionPad2d(1)
#self.P4_2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, stride=1, padding=0)
# add P4 elementwise to C3
self.P3_1 = nn.Conv2d(C3_size, feature_size, kernel_size=1, stride=1, padding=0)
self.P3_upsampled = nn.Upsample(scale_factor=2, mode='nearest')
#self.rp3 = nn.ReflectionPad2d(1)
#self.P3_2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, stride=1, padding=0)
self.P2_1 = nn.Conv2d(C2_size, feature_size, kernel_size=1, stride=1, padding=0)
self.P2_upsampled = nn.Upsample(scale_factor=2, mode='nearest')
self.rp4 = nn.ReflectionPad2d(1)
self.P2_2 = nn.Conv2d(int(feature_size), int(feature_size/2), kernel_size=3, stride=1, padding=0)
#self.P1_1 = nn.Conv2d(feature_size, feature_size, kernel_size=1, stride=1, padding=0)
#self.P1_upsampled = nn.Upsample(scale_factor=2, mode='nearest')
#self.rp5 = nn.ReflectionPad2d(1)
#self.P1_2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, stride=1, padding=0)
def forward(self, inputs):
C2, C3, C4, C5 = inputs
i = 0
P5_x = self.P5_1(C5) * self.sum_weights[i]
P5_upsampled_x = self.P5_upsampled(P5_x)
#P5_x = self.rp1(P5_x)
# #P5_x = self.P5_2(P5_x)
i += 1
P4_x = self.P4_1(C4) * self.sum_weights[i]
P4_x = P5_upsampled_x + P4_x
P4_upsampled_x = self.P4_upsampled(P4_x)
#P4_x = self.rp2(P4_x)
# #P4_x = self.P4_2(P4_x)
i += 1
P3_x = self.P3_1(C3) * self.sum_weights[i]
P3_x = P3_x + P4_upsampled_x
P3_upsampled_x = self.P3_upsampled(P3_x)
#P3_x = self.rp3(P3_x)
#P3_x = self.P3_2(P3_x)
i += 1
P2_x = self.P2_1(C2) * self.sum_weights[i]
P2_x = P2_x * self.sum_weights[2] + P3_upsampled_x
P2_upsampled_x = self.P2_upsampled(P2_x)
P2_x = self.rp4(P2_upsampled_x)
P2_x = self.P2_2(P2_x)
return P2_x
#export
class ResNet(nn.Module):
def __init__(self, input_nc, output_nc, ngf, use_dropout, fpn_weights, block, layers):
self.inplanes = ngf
super(ResNet, self).__init__()
# first conv
self.pad1 = nn.ReflectionPad2d(input_nc)
self.conv1 = nn.Conv2d(input_nc, ngf, kernel_size=7, stride=1, padding=0, bias=True)
self.in1 = nn.InstanceNorm2d(ngf)
self.relu = nn.ReLU(inplace=True)
self.pad2 = nn.ReflectionPad2d(1)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=0)
# Output layer
self.pad3 = nn.ReflectionPad2d(output_nc)
self.conv2 = nn.Conv2d(64, output_nc, 7)
self.tanh = nn.Tanh()
if block == BasicBlock_Ganilla:
# residuals
self.layer1 = self._make_layer_ganilla(block, 64, layers[0], use_dropout, stride=1)
self.layer2 = self._make_layer_ganilla(block, 128, layers[1], use_dropout, stride=2)
self.layer3 = self._make_layer_ganilla(block, 128, layers[2], use_dropout, stride=2)
self.layer4 = self._make_layer_ganilla(block, 256, layers[3], use_dropout, stride=2)
fpn_sizes = [self.layer1[layers[0] - 1].conv2.out_channels,
self.layer2[layers[1] - 1].conv2.out_channels,
self.layer3[layers[2] - 1].conv2.out_channels,
self.layer4[layers[3] - 1].conv2.out_channels]
else:
print("This block type is not supported")
sys.exit()
self.fpn = PyramidFeatures(fpn_sizes[0], fpn_sizes[1], fpn_sizes[2], fpn_sizes[3], fpn_weights)
def _make_layer_ganilla(self, block, planes, blocks, use_dropout, stride=1):
strides = [stride] + [1] * (blocks - 1)
layers = []
for stride in strides:
layers.append(block(self.inplanes, planes, use_dropout, stride))
self.inplanes = planes * block.expansion
return nn.Sequential(*layers)
def freeze_bn(self):
'''Freeze BatchNorm layers.'''
for layer in self.modules():
if isinstance(layer, nn.BatchNorm2d):
layer.eval()
def forward(self, inputs):
img_batch = inputs
x = self.pad1(img_batch)
x = self.conv1(x)
x = self.in1(x)
x = self.relu(x)
x = self.pad2(x)
x = self.maxpool(x)
x1 = self.layer1(x)
x2 = self.layer2(x1)
x3 = self.layer3(x2)
x4 = self.layer4(x3)
out = self.fpn([x1, x2, x3, x4]) # use all resnet layers
out = self.pad3(out)
out = self.conv2(out)
out = self.tanh(out)
return out
#export
def init_weights(net, init_type='normal', gain=0.02):
def init_func(m):
classname = m.__class__.__name__
if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
if init_type == 'normal':
torch.nn.init.normal_(m.weight.data, 0.0, gain)
elif init_type == 'xavier':
torch.nn.init.xavier_normal_(m.weight.data, gain=gain)
elif init_type == 'kaiming':
torch.nn.init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
elif init_type == 'orthogonal':
torch.nn.init.orthogonal_(m.weight.data, gain=gain)
else:
raise NotImplementedError('initialization method [%s] is not implemented' % init_type)
if hasattr(m, 'bias') and m.bias is not None:
torch.nn.init.constant_(m.bias.data, 0.0)
elif classname.find('BatchNorm2d') != -1:
torch.nn.init.normal_(m.weight.data, 1.0, gain)
torch.nn.init.constant_(m.bias.data, 0.0)
net.apply(init_func)
#export
def ganilla_generator(input_nc, output_nc, ngf, drop, fpn_weights=[1.0, 1.0, 1.0, 1.0], init_type='normal', gain=0.02, **kwargs):
"""Constructs a ResNet-18 GANILLA generator."""
model = ResNet(input_nc, output_nc, ngf, drop, fpn_weights, BasicBlock_Ganilla, [2, 2, 2, 2], **kwargs)
    init_weights(model, init_type=init_type, gain=gain)
return model
# Let's test for a few things:
# 1. The generator can indeed be initialized correctly
# 2. A random image can be passed into the model successfully with the correct size output
# First let's create a random batch:
img1 = torch.randn(4,3,256,256)
m = ganilla_generator(3,3,64,0.5)
with torch.no_grad():
out1 = m(img1)
out1.shape
# ## Full model
# We group two discriminators and two generators in a single model, then a `Callback` (defined in `02_cyclegan_training.ipynb`) will take care of training them properly. The discriminator and training loop is the same as CycleGAN.
#export
class GANILLA(nn.Module, PyTorchModelHubMixin):
"""
GANILLA model. \n
When called, takes in input batch of real images from both domains and outputs fake images for the opposite domains (with the generators).
Also outputs identity images after passing the images into generators that outputs its domain type (needed for identity loss).
Attributes: \n
`G_A` (`nn.Module`): takes real input B and generates fake input A \n
`G_B` (`nn.Module`): takes real input A and generates fake input B \n
`D_A` (`nn.Module`): trained to make the difference between real input A and fake input A \n
`D_B` (`nn.Module`): trained to make the difference between real input B and fake input B \n
"""
def __init__(self, ch_in:int=3, ch_out:int=3, n_features:int=64, disc_layers:int=3, lsgan:bool=True,
drop:float=0., norm_layer:nn.Module=None, fpn_weights:list=[1.0, 1.0, 1.0, 1.0], init_type:str='normal', gain:float=0.02,**kwargs):
"""
Constructor for GANILLA model.
Arguments: \n
`ch_in` (`int`): Number of input channels (default=3) \n
`ch_out` (`int`): Number of output channels (default=3) \n
`n_features` (`int`): Number of input features (default=64) \n
`disc_layers` (`int`): Number of discriminator layers (default=3) \n
`lsgan` (`bool`): LSGAN training objective (output unnormalized float) or not? (default=True) \n
`drop` (`float`): Level of dropout (default=0) \n
`norm_layer` (`nn.Module`): Type of normalization layer to use in the discriminator (default=None)
`fpn_weights` (`list`): Weights for feature pyramid network (default=[1.0, 1.0, 1.0, 1.0]) \n
`init_type` (`str`): Type of initialization (default='normal') \n
`gain` (`float`): Gain for initialization (default=0.02)
"""
super().__init__()
#G_A: takes real input B and generates fake input A
#G_B: takes real input A and generates fake input B
#D_A: trained to make the difference between real input A and fake input A
#D_B: trained to make the difference between real input B and fake input B
self.D_A = discriminator(ch_in, n_features, disc_layers, norm_layer, sigmoid=not lsgan)
self.D_B = discriminator(ch_in, n_features, disc_layers, norm_layer, sigmoid=not lsgan)
self.G_A = ganilla_generator(ch_in, ch_out, n_features, drop, fpn_weights, init_type, gain, **kwargs)
self.G_B = ganilla_generator(ch_in, ch_out, n_features, drop, fpn_weights, init_type, gain, **kwargs)
def forward(self, input):
"""Forward function for CycleGAN model. The input is a tuple of a batch of real images from both domains A and B."""
real_A, real_B = input
fake_A, fake_B = self.G_A(real_B), self.G_B(real_A)
idt_A, idt_B = self.G_A(real_A), self.G_B(real_B) #Needed for the identity loss during training.
return [fake_A, fake_B, idt_A, idt_B]
show_doc(GANILLA,title_level=3)
show_doc(GANILLA.__init__)
show_doc(GANILLA.forward)
# ### Quick model tests
#
# Again, let's check that the model can be called successfully and outputs the correct shapes.
#
ganilla_model = GANILLA()
img1 = torch.randn(4,3,256,256)
img2 = torch.randn(4,3,256,256)
# %%time
with torch.no_grad(): ganilla_output = ganilla_model((img1,img2))
test_eq(len(ganilla_output),4)
for output_batch in ganilla_output:
test_eq(output_batch.shape,img1.shape)
#skip
ganilla_model.push_to_hub('upit-ganilla-test')
#skip
#hide_output
ganilla_model.from_pretrained('tmabraham/upit-ganilla-test')
#hide
from nbdev.export import notebook2script
notebook2script()
| nbs/09_models.ganilla.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 3TM Nickel
#
# In this demonstration we are considering 3 coupled heat diffusion equations, i.e. the 3 temperature model formulated as
#
# \begin{align}\label{eq:coupledHeatequation}
# \begin{cases}
# C_i^E(\varphi^E_i)\cdot\rho_i\cdot\partial_t\varphi^E_i &= \partial_x\left(k^E_i(\varphi^E_i)\cdot \partial_x\varphi^E_i\right) + G_i^{EL}\cdot(\varphi^L_i-\varphi^E_i)+G_i^{SE}\cdot(\varphi^S_i-\varphi^E_i) + S(x,t) \\
# C_i^L(\varphi^L_i)\cdot\rho_i\cdot\partial_t\varphi^L_i &= \partial_x\left(k^L_i(\varphi^L_i)\cdot \partial_x\varphi^L_i\right) + G_i^{EL}\cdot(\varphi^E_i-\varphi^L_i)+G_i^{LS}\cdot(\varphi^S_i-\varphi^L_i) \\
# C_i^S(\varphi^S_i)\cdot\rho_i\cdot\partial_t\varphi^S_i &= \partial_x\left(k^S_i(\varphi^S_i)\cdot \partial_x\varphi^S_i\right) + G_i^{SE}\cdot(\varphi^E_i-\varphi^S_i)+G_i^{LS}\cdot(\varphi^L_i-\varphi^S_i)
# \end{cases}
# \end{align}
# Here the superscript indicates the individual subsystem ("E" = electron, "L" = lattice, "S" = spin), and the subscript "i" indicates that this solver can find solutions for multiple piecewise homogeneous layers.
#
#
# ### Aim
# * Calculate the energy deposit in a Nickel film via the transfer matrix method.
# * Do a 3 temperature simulation and calculate the temperature dynamics within a Nickel layer in space and time
# * Depict results
# * Compare them to [Ultrafast Spin Dynamics in Ferromagnetic Nickel](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.76.4250)
# * [Numerical Units](https://pypi.org/project/numericalunits/) is not required but is used here to make the physical dimensions of the variables clearer.
#
# ### Setup
# * Initially the electron and phonon temperature of Nickel is at 300 K
# * The heating occurs through a laser source, Gaussian in time and exponentially decaying in space. The fluence of the 400 nm laser light is $5~\mathrm{mJ/cm^{2}}$, the polarization is "p" and the incident angle is 45°.
# * A 20 nm Nickel layer is considered and the physical parameters are taken from [Ultrafast Spin Dynamics in Ferromagnetic Nickel](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.76.4250).
#
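# Before running the full solver, it may help to see the coupling part of the equations above in
# isolation. The following is a zero-dimensional explicit-Euler sketch that drops diffusion and the
# source term; all parameter values are made up purely for illustration and are not Nickel's:

```python
import numpy as np

# Made-up constants (arbitrary units), chosen only to illustrate the coupling terms
C_E, C_L, C_S = 1.0, 3.0, 2.0      # heat capacities of electron/lattice/spin baths
G_EL, G_SE, G_LS = 0.5, 0.3, 0.1   # linear coupling constants

def euler_step(TE, TL, TS, dt):
    # Each bath relaxes toward the others, weighted by the coupling constants
    dTE = (G_EL * (TL - TE) + G_SE * (TS - TE)) / C_E
    dTL = (G_EL * (TE - TL) + G_LS * (TS - TL)) / C_L
    dTS = (G_SE * (TE - TS) + G_LS * (TL - TS)) / C_S
    return TE + dt * dTE, TL + dt * dTL, TS + dt * dTS

TE, TL, TS = 600.0, 300.0, 300.0   # hot electrons right after the pump pulse
for _ in range(100_000):
    TE, TL, TS = euler_step(TE, TL, TS, dt=1e-3)
print(TE, TL, TS)  # all three baths relax to a common equilibrium temperature
```

# Because the coupling terms cancel pairwise, the total energy $\sum_i C_i T_i$ is conserved, so the
# common equilibrium is the capacity-weighted average of the initial temperatures.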
from NTMpy import NTMpy as ntm
from matplotlib import pyplot as plt
import numpy as np
import numericalunits as u
u.reset_units('SI')
#Define the source, responsible for heating
s = ntm.source()
s.spaceprofile = "TMM"
s.timeprofile = "Gaussian"
s.FWHM = 0.1*u.ps
s.fluence = 5*u.mJ/u.cm**2
s.t0 = 0.5*u.ps
s.polarization = "p"
s.theta_in = np.pi/4 #rad (0 is grazing)
s.lambda_vac = 400 #nm
# In order to obtain the correct $C_{e,l,s}(T)$, i.e. heat capacity for all the systems under consideration, we are considering $C_e(T) = \gamma T $, where $\gamma = \frac{6000}{\rho}\mathrm{\frac{J}{kgK}}$ and $C_l = \frac{2\cdot 10^6}{\rho}\mathrm{\frac{J}{kg K}}$ from the paper by _Bigot et al._, mentioned above.
#
# For $C_s$ we are considering the total heat capacity $C_{tot}$, from [here](https://webbook.nist.gov/cgi/cbook.cgi?ID=C7440020&Mask=2&Type=JANAFS&Plot=on#JANAFS), and extract the slope $\gamma_l$ before and the slope $\gamma_r$ after the Curie temperature.
#
# This gives us
# \begin{equation}
# C_s(T) = \gamma_s T ~\text{ where }~ \gamma_s = \gamma_l - \gamma_r
# \end{equation}
#
# Note, that this way of obtaining the coefficients for the respective heat capacities and the linearization of $C_i$ is an approximation, mainly valid for the low temperature regime.
#
# The coupling constants, responsible for the heat exchange for all the systems are given in the paper by _Bigot et al._
# and the heat conductivity, responsible for diffusion is assumed to be 1 $\mathrm{\frac{W}{mK}}$, since for a 20 nm thin film we can confidently assume uniform heating without diffusion playing a major role.
length = 20*u.nm
density = 8.908*u.g/(u.cm**3)
n_index = 1.7163+2.5925j
C_l = 2.2e6*u.J/(u.m**3*u.K)/density
gamma = 6e3*u.J/(u.m**3*u.K**2)/density
#The units of C_tot are J/(kg K) --> don't divide by density any more!
C_tot = lambda T: np.piecewise(T, [T<600, (T>=600) & (T<700),T>= 700 ], \
[lambda T:1/0.058* (13.69160 + 82.49509*(T/1000) - 174.955*(T/1000)**2 + 161.6011*(T/1000)**3),
lambda T:1/0.058* (1248.045 - 1257.510*(T/1000) - 165.1266/(T/1000)**2),
lambda T:1/0.058* (16.49839 + 18.74913*(T/1000) - 6.639841*(T/1000)**2 + 1.717278*(T/1000)**3)])
C_e = lambda T: gamma *T
#Extract the slope of the total heat capacity before and after curie temperature
temp = np.linspace(300,2000,5000)
indexl = temp <= 500
indexh = temp > 750
z1 = np.polyfit(temp[indexl],C_tot(temp[indexl]),1)
Ce1 = np.poly1d(z1)
coef1 = Ce1.coef
print("Linear approx before Curie temp:")
print(Ce1)
z2 = np.polyfit(temp[indexh],C_tot(temp[indexh]),1)
Ce2 = np.poly1d(z2)
coef2 = Ce2.coef
print("Linear approx after Curie temp:")
print(Ce2)
gammaS = coef1[0]-coef2[0]
print(f"Difference of slopes gives gammaS: {gammaS:.3f}")
C_s = lambda Ts: gammaS * Ts
#Conductivity is not so important as we assume uniform heating all get the same conductivity
k = 1*u.W/(u.m*u.K)
#Coupling constants taken from paper
G_el = 8e17 *u.W/(u.m**3*u.K)
G_se = 6e17 *u.W/(u.m**3*u.K)
G_ls = 0.3e17 *u.W/(u.m**3*u.K)
#Depicting the different heat capacities
C_la = C_l*np.ones_like(temp)
plt.figure()
plt.grid()
plt.title("Different heat capacities in Nickel")
plt.xlabel("Temperature in K"); plt.ylabel("$C_i$ in J/kgK")
plt.plot(temp,C_tot(temp),'orange',label = "$C_{tot}(T)$")
plt.plot(temp,C_la,'k',label="$C_l$")
plt.plot(temp,C_e(temp),'r',label = "$C_e(T) =\gamma T$")
plt.plot(temp,C_s(temp),'b',label = "$C_s(T) = \gamma_s T$")
plt.legend(loc='upper left')
plt.show()
# Note, that $C_e(T)$ exceeds $C_{tot}$ after Curie temperature. However, the fluence of the laser is too small to cause heating above this temperature. Also we are trying to compare our findings to the paper mentioned above, which is why we are also taking their reported parameters under consideration.
#
# Now that all the parameters are defined, we can create the simulation object, provide it with the physical properties, which we just evaluated and run the simulation.
# * `sim = simulation(3,s)` creates the simulation object. The input arguments are the number of systems under consideration and the source object, crated above.
# * `sim.addLayer(length,refractive_index,[heat_conductivity],[heat_Capacity], density,[coupling])` creates layer stacks.
# Note that `[heat_conductivity]` is an array, where each entry corresponds to the conductivity of a system. The same holds for `[heat_Capacity]`. `[coupling]` indicates the linear coupling constants, as indicated in the equations above. Here the first entry of the array corresponds to the coupling between systems 1-2; second entry: 2-3; third entry: 3-1.
# * The output of `sim.run()` is the full temperature map of each system. I.e. `Temp_map[0]` corresponds to the temperature dynamics of the electron system in space (along dim-0) and time (along dim-1). `x` and `t` are vectors containing the space and time grid respectively.
#
# Finally we create the visual object by `v = visual(sim)`, where the simulation object gets passed on as input argument.
sim = ntm.simulation(3,s)
sim.addLayer(length,n_index,[k,k,k],[C_e,C_l,C_s],density,[G_el,G_ls,G_se])
sim.final_time = 8*u.ps
#To get the raw output in form of arrays
[x, t, Temp_map] = sim.run()
#Create a visual object
v = ntm.visual(sim)
[timegrid,Temp_vec] = v.average()
print("Shape of temp_vec = "+str(np.shape(Temp_vec)))
# The function `[timegrid,Temp_vec] = v.average()` has two outputs: the timegrid in vector form and the corresponding averaged temperatures. `Temp_vec` is in array form. That is, different rows correspond to different systems, and the data for each timestep is stored along the column direction.
v.contour("1")
v.contour("2")
v.contour("3")
# Comparing this to Fig. 3, from [Ultrafast Spin Dynamics in Ferromagnetic Nickel](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.76.4250), we see, that our findings are in agreement with what is reported in the paper.
# ### New Spin heat capacity
#
# Now let us consider the following $C_s$:
# \begin{equation} C_s(T)=
# \begin{cases}
# \gamma_s T &\text{ for } T<T_C \\ \nonumber
# 0 &\text{ else }
# \end{cases}
# \end{equation}
# where the Curie temperature $T_C = 650$K
#
# We redefine $C_s$ and reset the simulation.
#A small nonzero value is used above the Curie temperature instead of 0 to keep the solver well defined
C_s = lambda T: np.piecewise(T,[T <= 650, T > 650], [lambda T: gammaS*T, 1e-2])
sim = ntm.simulation(3,s)
sim.addLayer(length,n_index,[k,k,k],[C_e,C_l,C_s],density,[G_el,G_ls,G_se])
sim.final_time = 8*u.ps
[x, t, Temp_map] = sim.run()
v = ntm.visual(sim)
# +
plt.figure()
plt.plot(temp,C_s(temp))
plt.title("Spin heat capacity $C_s$ with discontinuity at $T_C$")
plt.xlabel("Temperature (K)");plt.ylabel("$C_i$ in J/kgK")
[timegrid,Temp_vec] = v.average()
# -
# ### 1 TM- Simulation
#
# If there is only one system, then the heating should be way stronger on this system, since the heat does not get distributed among different systems. In order to qualitatively check this, we reset the problem and run a simulation again.
#
# #### Decrease the timestep automatically
# Note: In this specific case, the time step automatically calculated for stability would be larger than the FWHM of the laser source. This would lead to an incorrect capture of the laser pulse, since too few time steps would fall within the pulse to correctly resolve the shape of the source.
#
# Therefore a routine has been implemented which makes the timestep around the peak of the Gaussian smaller, in order to capture its shape and to correctly calculate the energy deposit in time.
#
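# The idea behind that routine can be sketched as a non-uniform time grid with smaller steps near
# the pulse peak. The function below is our own illustration of the concept (the name and the
# +-2*FWHM refinement window are choices made for this sketch, not NTMpy's actual internals):

```python
import numpy as np

def refined_time_grid(t_end, t0, fwhm, dt_coarse, refine=10):
    # Coarse steps everywhere, `refine`-times finer steps within +-2*FWHM of the peak
    t, grid = 0.0, [0.0]
    while t < t_end:
        dt = dt_coarse / refine if abs(t - t0) < 2 * fwhm else dt_coarse
        t += dt
        grid.append(min(t, t_end))
    return np.array(grid)

grid = refined_time_grid(t_end=10.0, t0=5.0, fwhm=0.5, dt_coarse=0.1)
print(len(grid))  # far more samples than the ~100 a uniform coarse grid would give
```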
#Source
s = ntm.source()
#Those are the default options for space and time profile
#s.spaceprofile = "TMM"
#s.timeprofile = "Gaussian"
s.FWHM = 0.1*u.ps
s.fluence = 5*u.mJ/u.cm**2
s.t0 = 0.5*u.ps
s.polarization = "p"
s.theta_in = np.pi/4
s.lambda_vac = 400
#1 TM simulation
sim = ntm.simulation(1,s)
sim.addLayer(length,n_index,[k],[C_e],density)
sim.final_time = 8*u.ps
[x, t, Temp_map] = sim.run()
v = ntm.visual(sim)
[timegrid,temp_vec] = v.average()
print("Shape of temp_vec = "+str(np.shape(temp_vec)))
v.timegrid()
| Examples/3TmNickel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import scipy.stats as stats
from typing import Tuple
from nptyping import Array
from collections import defaultdict
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("darkgrid")
# -
# ## Confidence Intervals
# A point estimate can give us a rough approximation of a population parameter. A confidence interval is a range of values above and below a point estimate that captures the true population parameter at some predetermined confidence level.
#
#
#
# $$ \begin{align} \text{Confidence Interval} = \text{Point Estimate } \pm \text{Margin of Error}\end{align} $$
# $$ \begin{align} \text{Margin of Error = 'a few' Standard Errors}\end{align} $$
#
# $$ \begin{align} \text{point estimate} \pm z * SE \end{align} $$
#
# * $z$ is called the critical value and it corresponds to the confidence level that we chose. For instance, we know that roughly 95% of the data in a normal distribution lies within 2 standard deviations from the mean, so we could use 2 as the z-critical value for a 95% confidence interval
# * Standard error for a point estimate is estimated from the data and computed using a formula
# * The value $z * SE$ is called the margin of error
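# For example, the z critical value for a 95% two-sided interval can be computed with the standard library alone (the code cells below use `scipy.stats.norm.ppf`, which returns the same value):

```python
from statistics import NormalDist

# 0.975 leaves 2.5% in each tail, giving the 95% two-sided critical value
z_95 = NormalDist().inv_cdf(0.975)
print(round(z_95, 2))  # 1.96
```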
# ### Proportion
#
# **Assumptions**
# 1) $n*\hat{p}\geq10$ and $n*(1-\hat{p})\geq10$
# 2) Random Sample
#
# $$\text{Confidence Interval = } \text{point estimate} \pm z * \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$$
#
# We can enforce a *conservative* confidence interval by setting $\hat{p}$ equal to 0.5, which maximizes $\hat{p}(1-\hat{p})$ and therefore widens the interval.
#
# $$\text{Confidence Interval = } \text{point estimate} \pm z * \frac{1}{2\sqrt{n}}$$
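# A minimal sketch of the conservative interval (a hypothetical helper using only the standard library; `z` defaults to the 95% critical value 1.96):

```python
import math

def conservative_interval(p_hat, nobs, z=1.96):
    # p-hat = 0.5 maximizes p(1-p), so the standard error is at most
    # sqrt(0.25/n) = 1/(2*sqrt(n)); using it widens the interval.
    margin = z / (2 * math.sqrt(nobs))
    return (p_hat - margin, p_hat + margin)

lo, hi = conservative_interval(0.85, 659)
print(lo, hi)
```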
# +
def confidence_interval_one_proportion(
    nobs: int,
    proportion: float,
    confidence: float = 0.975
) -> Tuple[float, float]:
    z = stats.norm.ppf(confidence)
    standard_error = np.sqrt((proportion * (1 - proportion)) / nobs)
    margin_of_error = z * standard_error
    lower_confidence_interval = proportion - margin_of_error
    upper_confidence_interval = proportion + margin_of_error
    return (lower_confidence_interval, upper_confidence_interval)

nobs = 659
proportion = 0.85
confidence_interval = confidence_interval_one_proportion(
    nobs=nobs,
    proportion=proportion
)
print(f"Confidence Interval: {confidence_interval}")
# -
# ### Difference in Proportions for Independent Groups
#
# **Assumptions**
# 1) $n_1*\hat{p_1}\geq10$ and $n_1*(1-\hat{p_1})\geq10$ and $n_2*\hat{p_2}\geq10$ and $n_2*(1-\hat{p_2})\geq10$
# 2) Random Sample
#
# $$\text{Confidence Interval = } (\hat{p_1} - \hat{p_2}) \pm z * \sqrt{\frac{\hat{p_1}(1-\hat{p_1})}{n_1} + \frac{\hat{p_2}(1-\hat{p_2})}{n_2}}$$
# +
def confidence_interval_two_proportions(
    nobs_1: int,
    proportion_1: float,
    nobs_2: int,
    proportion_2: float,
    confidence: float = 0.975
) -> Tuple[float, float]:
    z = stats.norm.ppf(confidence)
    standard_error_1 = np.sqrt((proportion_1 * (1 - proportion_1)) / nobs_1)
    standard_error_2 = np.sqrt((proportion_2 * (1 - proportion_2)) / nobs_2)
    standard_error_diff = np.sqrt(standard_error_1**2 + standard_error_2**2)
    margin_of_error = z * standard_error_diff
    proportion_difference = proportion_1 - proportion_2
    lower_confidence_interval = proportion_difference - margin_of_error
    upper_confidence_interval = proportion_difference + margin_of_error
    return (lower_confidence_interval, upper_confidence_interval)

nobs_1 = 2972
proportion_1 = 0.304845
nobs_2 = 2753
proportion_2 = 0.513258
confidence_interval = confidence_interval_two_proportions(
    nobs_1=nobs_1,
    proportion_1=proportion_1,
    nobs_2=nobs_2,
    proportion_2=proportion_2
)
print(f"Confidence Interval: {confidence_interval}")
# -
# ### Mean
#
# 1) Population normal (or $n\geq25$ enforce CLT)
# 2) Random Sample
#
# $$ \overline{x} \pm t * \frac{s}{ \sqrt{n} }$$ , degrees of freedom: $n-1$
#
#
# +
def confidence_interval_one_mean(
    nobs: int,
    mean: float,
    std: float,
    confidence: float = 0.975
) -> Tuple[float, float]:
    degrees_freedom = nobs - 1
    t = stats.t.ppf(confidence, degrees_freedom)
    standard_error = std / np.sqrt(nobs)
    margin_of_error = t * standard_error
    lower_confidence_interval = mean - margin_of_error
    upper_confidence_interval = mean + margin_of_error
    return (lower_confidence_interval, upper_confidence_interval)

nobs = 25
mean = 82.48
std = 15.058552387264852
confidence_interval = confidence_interval_one_mean(
    nobs=nobs,
    mean=mean,
    std=std
)
print(f"Confidence Interval: {confidence_interval}")
# -
# ### Difference in Means for Paired Data
#
# $$ \overline{x_d} \pm t * \frac{s_d}{ \sqrt{n} }$$ , degrees of freedom: $n-1$
# +
url = "https://raw.githubusercontent.com/Opensourcefordatascience/Data-sets/master/blood_pressure.csv"
paired_data = pd.read_csv(url)
paired_data["difference"] = paired_data["bp_before"] - paired_data["bp_after"]
display(paired_data.head(4))
nobs = paired_data.shape[0]
mean = paired_data["difference"].mean()
std = paired_data["difference"].std()
confidence_interval = confidence_interval_one_mean(
    nobs=nobs,
    mean=mean,
    std=std
)
print(f"Confidence Interval: {confidence_interval}")
# -
# ### Difference in Means for Independent Groups
# **Assumptions**
# 1) Population normal (or $n_1\geq25$, $n_2\geq25$ enforce CLT)
# 2) Random Sample
#
# *Unpooled* $\sigma_1 \neq \sigma_2$:
#
# $$ (\overline{x_1} - \overline{x_2}) \pm t * \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}} $$
#
# , degrees of freedom: $\min(n_1-1,n_2-1)$ or Welch approximation
#
# *Pooled* $\sigma_1 = \sigma_2$:
#
# $$ (\overline{x_1} - \overline{x_2}) \pm t * \sqrt{\frac{(n_1-1)s_1^2+(n_2-1)s_2^2}{n_1+n_2-2}}*\sqrt{\frac{1}{n_1}+\frac{1}{n_2}} $$
#
# , degrees of freedom: $n_1+n_2-2$
# +
def confidence_intervals_two_means(
    nobs_1: int,
    mean_1: float,
    std_1: float,
    nobs_2: int,
    mean_2: float,
    std_2: float,
    unpooled: bool = True,
    confidence: float = 0.975
) -> Tuple[float, float]:
    if unpooled:
        degrees_freedom = np.min([nobs_1 - 1, nobs_2 - 1])
        t = stats.t.ppf(confidence, degrees_freedom)
        standard_error_1 = std_1 / np.sqrt(nobs_1)
        standard_error_2 = std_2 / np.sqrt(nobs_2)
        standard_error_diff = np.sqrt(standard_error_1**2 + standard_error_2**2)
        margin_of_error = t * standard_error_diff
    else:
        degrees_freedom = nobs_1 + nobs_2 - 2
        t = stats.t.ppf(confidence, degrees_freedom)
        margin_of_error = t \
            * np.sqrt(((nobs_1 - 1)*(std_1**2) + (nobs_2 - 1)*(std_2**2)) / (nobs_1 + nobs_2 - 2)) \
            * np.sqrt(1/nobs_1 + 1/nobs_2)
    mean_difference = mean_1 - mean_2
    lower_confidence_interval = mean_difference - margin_of_error
    upper_confidence_interval = mean_difference + margin_of_error
    return (lower_confidence_interval, upper_confidence_interval)

nobs_1 = 2976
mean_1 = 29.939946
std_1 = 7.753319
nobs_2 = 2759
mean_2 = 28.778072
std_2 = 6.252568
unpooled_confidence_intervals = confidence_intervals_two_means(
    nobs_1=nobs_1,
    mean_1=mean_1,
    std_1=std_1,
    nobs_2=nobs_2,
    mean_2=mean_2,
    std_2=std_2,
    unpooled=True
)
pooled_confidence_intervals = confidence_intervals_two_means(
    nobs_1=nobs_1,
    mean_1=mean_1,
    std_1=std_1,
    nobs_2=nobs_2,
    mean_2=mean_2,
    std_2=std_2,
    unpooled=False
)
print(f"unpooled_confidence_intervals: {unpooled_confidence_intervals}")
print(f"pooled_confidence_intervals: {pooled_confidence_intervals}")
# -
# ## Confidence Interval Interpretation
#
# A confidence interval at the 95% confidence level can be interpreted in the following way.
#
# If we repeat the study many times, each time drawing a new sample (of the same size) from which a 95% confidence interval is computed, then 95% of those confidence intervals are expected to contain the population parameter. The simulation below illustrates this: we observe that not all confidence intervals overlap the horizontal line marking the true mean.
# +
def simulate_confidence_intervals(
    array: Array,
    sample_size: int,
    confidence: float = 0.95,
    seed: int = 10,
    simulations: int = 50
) -> pd.DataFrame:
    np.random.seed(seed)
    simulation = defaultdict(list)
    for i in range(0, simulations):
        simulation["sample_id"].append(i)
        # sample from the array passed in, not from a global variable
        sample = np.random.choice(array, size=sample_size)
        sample_mean = sample.mean()
        simulation["sample_mean"].append(sample_mean)
        degrees_freedom = sample_size - 1
        t = stats.t.ppf(confidence, degrees_freedom)
        sample_std = sample.std()
        margin_error = t * (sample_std / np.sqrt(sample_size))
        confidence_interval = sample_mean - margin_error, sample_mean + margin_error
        simulation["sample_confidence_interval"].append(confidence_interval)
    return pd.DataFrame(simulation)
def visualise_confidence_interval_simulation(
    df: pd.DataFrame,
):
    fig = plt.figure(figsize=(15, 8))
    ax = plt.subplot(1, 1, 1)
    ax.errorbar(
        x=np.arange(0.1, df.shape[0]),
        y=df["sample_mean"],
        # half-widths of the stored (lower, upper) interval tuples
        yerr=[(upper - lower) / 2 for lower, upper in df["sample_confidence_interval"]],
        fmt='o',
        color="navy"
    )
    # `orders` is the global population array defined below
    ax.hlines(
        xmin=0.1,
        xmax=df.shape[0],
        y=orders.mean(),
        color="red",
        linewidth=2
    )
    ax.set_title("Simulation of Confidence Intervals", fontsize=20)
    ax.set_ylabel("Orders", fontsize=14)
np.random.seed(10)
orders_1 = stats.poisson.rvs(mu=40, size=200000)
orders_2 = stats.poisson.rvs(mu=10, size=150000)
orders = np.concatenate([orders_1, orders_2])
simulation_data = simulate_confidence_intervals(
array=orders,
confidence = 0.95,
sample_size = 1000,
)
visualise_confidence_interval_simulation(df=simulation_data)
| statistics/confidence_intervals.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Data Manipulation - I
# + [markdown] slideshow={"slide_type": "slide"}
# ## Introduction
#
# We saw at the beginning of the course that data science can be mapped into a few general stages, such as: i) problem definition; ii) data acquisition; iii) data processing; iv) data analysis; v) data discovery; and vi) solution.
#
# In this chapter, we cover topics spanning stages ii) and iii) using the _pandas_ library.
#
# ```{figure} ../figs/10/data-pipeline.png
# ---
# width: 500px
# name: Data pipeline
# ---
# Data pipeline. [Source: A.K. VandenBroek](http://ak.vbroek.org/files/2019/01/Data-Pipeline-Infographic-Blue.png)
# ```
#
# ## The _pandas_ library
#
# _pandas_ is a Python library for reading, treating, and manipulating data, with functions very similar to spreadsheet software such as _Microsoft Excel_, _LibreOffice Calc_, and _Apple Numbers_. Besides being free to use, it has numerous advantages. To learn more about its capabilities, see the library's [official page](https://pandas.pydata.org/about/index.html).
#
# In this part of the course, we will learn about two new data structures that *pandas* introduces:
#
# * *Series* and
# * *DataFrame*.
# + [markdown] slideshow={"slide_type": "slide"}
# A *DataFrame* is a tabular data structure with labeled rows and columns.
#
# | | Weight | Height | Age | Gender |
# | :------------- |:-------------:| :-----:|:------:|:-----:|
# | Ana | 55 | 162 | 20 | `female` |
# | João | 80 | 178 | 19 | `male` |
# | Maria | 62 | 164 | 21 | `female` |
# | Pedro | 67 | 165 | 22 | `male` |
# | Túlio | 73 | 171 | 20 | `male` |
# + [markdown] slideshow={"slide_type": "slide"}
#
# The columns of a *DataFrame* are one-dimensional vectors of type *Series*, while the rows are labeled by a special data structure called an *index*. In *pandas*, *index* objects are customized lists of labels that allow us to perform fast lookups and some important operations.
#
# To use these data structures, we import the *numpy* library under its usual alias *np* and *pandas* under its usual alias *pd*.
# + slideshow={"slide_type": "-"}
import numpy as np
import pandas as pd
# + [markdown] slideshow={"slide_type": "slide"}
# ## *Series*
#
# *Series*:
# * are vectors, that is, one-dimensional *arrays*;
# * have an *index* for each entry (and are very efficient at operating on it);
# * can hold any data type (`int`, `str`, `float`, etc.).
# + [markdown] slideshow={"slide_type": "slide"}
# ### Creating a *Series* object
# + [markdown] slideshow={"slide_type": "-"}
# The standard way is to use the *Series* function of the pandas library:
#
# ```python
# serie_exemplo = pd.Series(dados_de_interesse, index=indice_de_interesse)
# ```
# In the example above, `dados_de_interesse` can be:
#
# * a dictionary (an object of type `dict`);
# * a list (an object of type `list`);
# * a *numpy* `array` object;
# * a scalar, such as the integer 1.
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### Creating a *Series* from a dictionary
# -
dicionario_exemplo = {'Ana':20, 'João': 19, 'Maria': 21, 'Pedro': 22, 'Túlio': 20}
pd.Series(dicionario_exemplo)
# + [markdown] slideshow={"slide_type": "slide"}
# Note that the *index* was taken from the dictionary keys. So, in this example, the *index* is "Ana", "João", "Maria", "Pedro", and "Túlio". The order of the *index* follows the insertion order in the dictionary.
#
# We can supply a new *index* to the dictionary we have already created:
# -
pd.Series(dicionario_exemplo, index=['Maria', 'Maria', 'ana', 'Paula', 'Túlio', 'Pedro'])
# + [markdown] slideshow={"slide_type": "-"}
# Entries that are not found are marked with a special value. The default *pandas* marker for missing data is `NaN` (*not a number*).
# + [markdown] slideshow={"slide_type": "slide"}
# ### Creating a *Series* from a list
# -
lista_exemplo = [1,2,3,4,5]
pd.Series(lista_exemplo)
# + [markdown] slideshow={"slide_type": "-"}
# If no *index* is supplied, *pandas* automatically assigns the values `0, 1, ..., N-1`, where `N` is the number of elements in the list.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Creating a *Series* from a *numpy* *array*
# -
array_exemplo = np.array([1,2,3,4,5])
pd.Series(array_exemplo)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Supplying an *index* when creating a *Series*
#
# The number of elements in the *index* must equal the size of the *array*. Otherwise, an error is raised.
# + slideshow={"slide_type": "-"}
pd.Series(array_exemplo, index=['a','b','c','d','e','f'])
# + slideshow={"slide_type": "-"}
pd.Series(array_exemplo, index=['a','b','c','d','e'])
# + [markdown] slideshow={"slide_type": "slide"}
# Moreover, the elements in the *index* need not be unique.
# -
pd.Series(array_exemplo, index=['a','a','b','b','c'])
# + [markdown] slideshow={"slide_type": "slide"}
# An error occurs if an operation that depends on the uniqueness of the *index* elements is performed, such as the `reindex` method.
# -
series_exemplo = pd.Series(array_exemplo, index=['a','a','b','b','c'])
series_exemplo.reindex(['b','a','c','d','e']) # 'a' and 'b' duplicated in the source
# + [markdown] slideshow={"slide_type": "slide"}
# ### Creating a *Series* from a scalar
# -
pd.Series(1, index=['a', 'b', 'c', 'd'])
# In this case, an index **must** be provided!
# + [markdown] slideshow={"slide_type": "slide"}
# ### *Series* behave like *numpy* *arrays*
#
# A *pandas* *Series* behaves like a one-dimensional *numpy* *array*. It can be used as an argument to most *numpy* functions. The difference is that the *index* comes along.
#
# Example:
# -
series_exemplo = pd.Series(array_exemplo, index=['a','b','c','d','e'])
series_exemplo[2]
series_exemplo[:2]
np.log(series_exemplo)
# + [markdown] slideshow={"slide_type": "slide"}
# More examples:
# -
serie_1 = pd.Series([1,2,3,4,5])
serie_2 = pd.Series([4,5,6,7,8])
serie_1 + serie_2
serie_1 * 2 - serie_2 * 3
# + [markdown] slideshow={"slide_type": "slide"}
# Like *numpy* *arrays*, *pandas* *Series* also have a *dtype* (data type) attribute.
# -
series_exemplo.dtype
# If you want to use the data of a *pandas* *Series* as a *numpy* *array*, just use the `to_numpy` method to convert it.
series_exemplo.to_numpy()
# + [markdown] slideshow={"slide_type": "slide"}
# ### *Series* behave like dictionaries
#
# We can access the elements of a *Series* through the keys given in the *index*.
# + slideshow={"slide_type": "-"}
series_exemplo
# -
series_exemplo['a']
# + [markdown] slideshow={"slide_type": "slide"}
# We can add new elements associated with new keys.
# -
series_exemplo['f'] = 6
series_exemplo
'f' in series_exemplo
'g' in series_exemplo
# + [markdown] slideshow={"slide_type": "slide"}
# In this example, we try to access a nonexistent key, so an error occurs.
# -
series_exemplo['g']
series_exemplo.get('g')
# + [markdown] slideshow={"slide_type": "slide"}
# However, we can use the `get` method to handle keys that may not exist, supplying a *numpy* `NaN` as the fallback value when no value is in fact assigned.
# -
series_exemplo.get('g',np.nan)
# + [markdown] slideshow={"slide_type": "slide"}
# ### The `name` attribute
#
# A *pandas* *Series* has an optional `name` attribute that lets us identify the object. It is quite useful in operations involving *DataFrames*.
# -
serie_com_nome = pd.Series(dicionario_exemplo, name = "Idade")
serie_com_nome
# + [markdown] slideshow={"slide_type": "slide"}
# ### The `date_range` function
#
# In many situations, indices can be organized as dates. The `date_range` function creates indices from dates. Some of its arguments are:
#
# - `start`: `str` containing the date that serves as the left bound of the range. Default: `None`
# - `end`: `str` containing the date that serves as the right bound of the range. Default: `None`
# - `freq`: frequency to consider, for example days (`D`), hours (`H`), weeks (`W`), month ends (`M`), month starts (`MS`), year ends (`Y`), year starts (`YS`), etc. Multiples may also be used (e.g. `5H`, `2Y`, etc.). Default: `None`.
# - `periods`: number of periods to consider (the period is determined by the `freq` argument).
# + [markdown] slideshow={"slide_type": "slide"}
# Below are examples of using `date_range` with different date formats.
# -
pd.date_range(start='1/1/2020', freq='W', periods=10)
pd.date_range(start='2010-01-01', freq='2Y', periods=10)
pd.date_range('1/1/2020', freq='5H', periods=10)
pd.date_range(start='2010-01-01', freq='3YS', periods=3)
# + [markdown] slideshow={"slide_type": "slide"}
# The following example creates two *Series* with random values associated with a 10-day span.
# -
indice_exemplo = pd.date_range('2020-01-01', periods=10, freq='D')
serie_1 = pd.Series(np.random.randn(10),index=indice_exemplo)
serie_2 = pd.Series(np.random.randn(10),index=indice_exemplo)
| _build/jupyter_execute/ipynb/10a-pandas-series.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
from flask import Flask, jsonify

engine = create_engine("sqlite:///Resources/hawaii.sqlite")

# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)

# save references to the tables
Measurement = Base.classes.measurement
Station = Base.classes.station

# Flask setup
app = Flask(__name__)

year_ago = '2016-08-23'

@app.route("/")
def index():
    return (
        f"Surf's up! Welcome to the Hawaii Weather API.<br/><br/>"
        f"/api/v1.0/precipitation<br/>Returns a JSON list of precipitation data between 8/23/16 and 8/23/17<br/><br/>"
        f"/api/v1.0/stations<br/>Returns a JSON list of the weather stations<br/><br/>"
        f"/api/v1.0/tobs<br/>Returns a JSON list of the Temperature Observations for each station between 8/23/16 and 8/23/17<br/><br/>"
        f"/api/v1.0/<start><br/>Returns a JSON list of the minimum temperature, max temperature, and average temperature between the given start date and 8/23/17<br/><br/>"
        f"/api/v1.0/<start>/<end><br/>Returns a JSON list of the minimum temperature, max temperature, and average temperature between the given start date and end date<br/><br/>")

@app.route("/api/v1.0/precipitation")
def precipitation():
    session = Session(engine)
    precipitation_data = session.query(Measurement.date, func.avg(Measurement.prcp))\
        .group_by(Measurement.date).all()
    session.close()
    rainfall_data = []
    for date, prcp in precipitation_data:
        rainfall_dict = {}
        rainfall_dict["date"] = date
        rainfall_dict["prcp"] = prcp
        rainfall_data.append(rainfall_dict)
    return jsonify(rainfall_data)

@app.route("/api/v1.0/stations")
def stations():
    session = Session(engine)
    station_data = session.query(Station.station, Station.name).all()
    session.close()
    return jsonify([list(row) for row in station_data])

@app.route("/api/v1.0/tobs")
def tobs():
    session = Session(engine)
    active_station_data = session.query(Measurement.date, Measurement.tobs)\
        .filter(Measurement.station == "USC00519281").all()
    session.close()
    return jsonify([list(row) for row in active_station_data])

@app.route("/api/v1.0/<start>")
def start(start):
    session = Session(engine)
    day_temp_data = session.query(func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs))\
        .filter(Measurement.date >= start).all()
    session.close()
    return jsonify([list(row) for row in day_temp_data])

@app.route("/api/v1.0/<start>/<end>")
def startDateEndDate(start, end):
    session = Session(engine)
    multi_day_temp_data = session.query(func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs))\
        .filter(Measurement.date >= start).filter(Measurement.date <= end).all()
    session.close()
    return jsonify([list(row) for row in multi_day_temp_data])

# Entering "/api/v1.0/2016-08-27/2016-08-30" in the browser returns:
# 71.0, 81.0, 77.28... the min, max, and average over that 4-day period

if __name__ == "__main__":
    app.run(debug=True)
# -
| climate_app.py.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: CMB 20191012
# language: python
# name: cmbenv-20191012
# ---
# + [markdown] toc-hr-collapsed=false
# # Introduction
#
# This lesson is a brief introduction to TOAST and its data representations. This next cell is just initializing some things for the notebook.
# +
# Load common tools for all lessons
import sys
sys.path.insert(0, "..")
from lesson_tools import (
    fake_focalplane
)
# Capture C++ output in the jupyter cells
# %load_ext wurlitzer
# -
# ## Runtime Environment
#
# You can get the current TOAST runtime configuration from the "Environment" class.
# +
import toast
env = toast.Environment.get()
print(env)
# + [markdown] toc-hr-collapsed=true
# ## Data Model
#
# Before using TOAST for simulation or analysis, it is important to discuss how data is stored in memory and how that data can be distributed among many processes to parallelize large workflows.
#
# First, let's create a fake focalplane of detectors to use throughout this example.
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# Generate a fake focalplane with 7 pixels, each with 2 detectors.
fp = fake_focalplane()
# +
# Make a plot of this focalplane layout.
detnames = list(sorted(fp.keys()))
detquat = {x: fp[x]["quat"] for x in detnames}
detfwhm = {x: fp[x]["fwhm_arcmin"] for x in detnames}
detlabels = {x: x for x in detnames}
detpolcol = {x: "red" if i % 2 == 0 else "blue" for i, x in enumerate(detnames)}
toast.tod.plot_focalplane(
    detquat, 4.0, 4.0, None, fwhm=detfwhm, polcolor=detpolcol, labels=detlabels
);
# -
# ### Observations with Time Ordered Data
#
# TOAST works with data organized into *observations*. Each observation is independent of any other observation. An observation consists of co-sampled detectors for some span of time. The intrinsic detector noise is assumed to be stationary within an observation. Typically there are other quantities which are constant for an observation (e.g. elevation, weather conditions, satellite spin axis, etc).
#
# An observation is just a dictionary with at least one member ("tod") which is an instance of a class that derives from the `toast.TOD` base class.
#
# The inputs to a TOD class constructor are at least:
#
# 1. The detector names for the observation.
# 2. The number of samples in the observation.
# 3. The geometric offset of the detectors from the boresight.
# 4. Information about how detectors and samples are distributed among processes. More on this below.
#
# The TOD class can act as a storage container for different "flavors" of timestreams as well as a source and sink for the observation data (with the read_\*() and write_\*() methods):
# +
import toast.qarray as qa
nsamples = 1000
obs = dict()
obs["name"] = "20191014_000"
# +
# The type of TOD class is usually specific to the data processing job.
# For example it might be one of the simulation classes or it might be
# a class that loads experiment data. Here we just use a simple class
# that is only used for testing and which reads / writes data to internal memory
# buffers.
tod = toast.tod.TODCache(None, detnames, nsamples, detquats=detquat)
obs["tod"] = tod
# -
# Print the tod to get summary info:
print(tod)
# +
# The TOD class has methods to get information about the data:
print("TOD has detectors {}".format(", ".join(tod.detectors)))
print("TOD has {} total samples for each detector".format(tod.total_samples))
# +
# Write some data. Not every TOD derived class supports writing (for example,
# TOD classes that represent simulations).
t_delta = 1.0 / fp[detnames[0]]["rate"]
tod.write_times(stamps=np.arange(0.0, nsamples * t_delta, t_delta))
tod.write_boresight(
    data=qa.from_angles(
        (np.pi / 2) * np.ones(nsamples),
        (2 * np.pi / nsamples) * np.arange(nsamples),
        np.zeros(nsamples)
    )
)
for d in detnames:
    tod.write(detector=d, data=np.random.normal(scale=fp[d]["NET"], size=nsamples))
    tod.write_flags(detector=d, flags=np.zeros(nsamples, dtype=np.uint8))
# +
# Read it back
print("TOD timestamps = {} ...".format(tod.read_times()[:5]))
print("TOD boresight = \n{} ...".format(tod.read_boresight()[:5,:]))
for d in detnames:
    print("TOD detector {} = {} ...".format(d, tod.read(detector=d, n=5)))
    print("TOD detector {} flags = {} ...".format(d, tod.read_flags(detector=d, n=5)))
# +
# Store some data in the cache. The "cache" member variable looks like a dictionary of
# numpy arrays, but the memory used is allocated in C, so that we can actually clear
# these buffers when needed.
for d in detnames:
    processed = tod.read(detector=d)
    processed /= 2.0
    # By convention, we usually name buffers in the cache by <prefix>_<detector>
    tod.cache.put("processed_{}".format(d), processed)
print("TOD cache now contains {} bytes".format(tod.cache.report(silent=True)))
# -
# ### Comm : Groups of Processes
#
# A toast.Comm instance takes the global number of processes available (MPI.COMM_WORLD) and divides them into groups. Each process group is assigned one or more observations. Since observations are independent, this means that different groups can be independently working on separate observations in parallel. It also means that inter-process communication needed when working on a single observation can occur with a smaller set of processes.
#
# At NERSC, this notebook is running on a login node, so we cannot use MPI. Constructing a default `toast.Comm` whenever MPI use is disabled will just produce a single group of one process. See the parallel example at the end of this notebook for a case with multiple groups.
comm = toast.Comm()
print(comm)
# ### Data : a Collection of Observations
#
# A toast.Data instance is mainly just a list of observations. However remember that each process group will have a different list. Since we have only one group of one process, this example is not so interesting. See the parallel case at the end of the notebook.
data = toast.Data(comm)
data.obs.append(obs)
# ### Data Distribution
#
# Recapping previous sections, we have some groups of processes, each of which has a set of observations. Within a single process group, the detector data is distributed across the processes within the group. That distribution is controlled by the size of the communicator passed to the TOD class, and also by the `detranks` parameter of the constructor. This detranks number sets the dimension of the process grid in the detector direction. For example, a value of "1" means that every process has all detectors for some span of time. A value equal to the size of the communicator results in every process having some number of detectors for the entire observation. The detranks parameter must divide evenly into the number of processes in the communicator and determines how the processes are arranged in a grid.
#
# As a concrete example, imagine that MPI.COMM_WORLD has 24 processes. We split this into 4 groups of 6 processes. There are 6 observations of varying lengths, and every group has one or two observations. Here is a picture of the data each process would have. The global process number is shown as well as the rank within the group:
# <img src="toast_data_dist.png">
# The parallel script at the bottom of this notebook has further examples of data distribution.
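# The arithmetic of this layout can be sketched as follows (a hypothetical helper, not part of the TOAST API):

```python
# Hypothetical sketch (not the TOAST API): how a `detranks` value splits a
# group of `procs` processes into a (detector x sample) process grid.
def process_grid(procs, detranks):
    if procs % detranks != 0:
        raise ValueError("detranks must divide the group size evenly")
    sampranks = procs // detranks  # ranks along the sample (time) direction
    return detranks, sampranks

# For a group of 6 processes:
print(process_grid(6, 1))  # (1, 6): every process holds all detectors
print(process_grid(6, 6))  # (6, 1): every process holds the full time span
```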
# + [markdown] toc-hr-collapsed=true
# ## Utilities
#
# There are many utilities in the TOAST package that use compiled code "under the hood". These include:
#
# - `toast.rng`: Streamed random number generation, with support for generating random samples from any location within a stream.
#
# - `toast.qarray`: Vectorized quaternion operations.
#
# - `toast.fft`: API Wrapper around different vendor FFT packages.
#
# - `toast.cache`: Class for dictionary of C-allocated numpy arrays.
#
# - `toast.healpix`: Subset of pixel projection routines, SIMD-vectorized and threaded.
#
# - `toast.timing`: Simple serial timers, global named timers per process, a decorator to time calls to functions, and MPI tools to gather timing statistics from multiple processes.
#
# -
# ### Random Number Example
#
# Here is a quick example of a threaded generation of random numbers drawn from a unit-variance gaussian distribution. Note the "key" pair of uint64 values and the first value of the "counter" pair determine the stream, and the second value of the counter pair is effectively the sample in that stream. We can drawn randoms from anywhere in the stream in a reproducible fashion (i.e. this random generator is stateless). Under the hood, this uses the Random123 package on each thread.
# +
import toast.rng as rng
# Number of random samples
nrng = 10
# -
# Draw randoms from the beginning of a stream
rng1 = rng.random(
    nrng, key=[12, 34], counter=[56, 0], sampler="gaussian", threads=True
)
# Draw randoms from some later starting point in the stream
rng2 = rng.random(
    nrng, key=[12, 34], counter=[56, 4], sampler="gaussian", threads=True
)
# The returned objects are buffer providers, so can be used like a numpy array.
print("Returned RNG buffers:")
print(rng1)
print(rng2)
# Compare the elements. Note how the overlapping sample indices match. The
# randoms drawn for any given sample agree regardless of the starting sample.
print("------ rng1 ------")
for i in range(nrng):
    print("rng1 {}: {}".format(i, rng1[i]))
print("------ rng2 ------")
for i in range(nrng):
    print("rng2 {}: {}".format(i + 4, rng2[i]))
# ### Quaternion Array Example
#
# The quaternion manipulation functions internally attempt to improve performance using OpenMP SIMD directives and threading in cases where it makes sense. The Python API is modelled after the quaternionarray package (https://github.com/zonca/quaternionarray/). There are functions for common operations like multiplying quaternion arrays, rotating arrays of vectors, converting to and from angle representations, SLERP, etc.
# +
import toast.qarray as qa
# Number points for this example
nqa = 5
# +
# Make some fake rotation data by sweeping through theta / phi / pa angles
theta = np.linspace(0.0, np.pi, num=nqa)
phi = np.linspace(0.0, 2 * np.pi, num=nqa)
pa = np.zeros(nqa)
print("----- input angles -----")
print("theta = ", theta)
print("phi = ", phi)
print("pa = ", pa)
# +
# Convert to quaternions
quat = qa.from_angles(theta, phi, pa)
print("\n----- output quaternions -----")
print(quat)
# +
# Use these to rotate a vector
zaxis = np.array([0.0, 0.0, 1.0])
zrot = qa.rotate(quat, zaxis)
print("\n---- Z-axis rotated by quaternions ----")
print(zrot)
# +
# Rotate a different vector by each quaternion
zout = qa.rotate(quat, zrot)
print("\n---- Arbitrary vectors rotated by quaternions ----")
print(zout)
# +
# Multiply two quaternion arrays
qcopy = np.array(quat)
qout = qa.mult(quat, qcopy)
print("\n---- Product of two quaternion arrays ----")
print(qout)
# +
# SLERP quaternions
qtime = 3.0 * np.arange(nqa)
qtargettime = np.arange(3.0 * (nqa - 1) + 1)
qslerped = qa.slerp(qtargettime, qtime, quat)
print("\n---- SLERP input ----")
for t, q in zip(qtime, quat):
print("t = {} : {}".format(t, q))
print("\n---- SLERP output ----")
for t, q in zip(qtargettime, qslerped):
print("t = {} : {}".format(t, q))
# -
# ### FFT Example
#
# The internal FFT functions in TOAST are very limited and focus only on batched 1D Real FFTs. These are used for simulated noise generation and timestream filtering. Internally the compiled code can use either FFTW or MKL for the backend calculation.
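As a plain-numpy analogue (not the TOAST API), a batched real-FFT round trip looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.normal(size=(5, 1024))           # 5 independent real signals
spec = np.fft.rfft(batch, axis=1)            # forward real FFT, one per row
back = np.fft.irfft(spec, n=1024, axis=1)    # inverse transform
print(np.allclose(batch, back))  # True
```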
# +
# Number of batched FFTs
nbatch = 5
# FFT length
nfft = 65536
# +
# Create some fake data
infft = np.zeros((nbatch, nfft), dtype=np.float64)
for b in range(nbatch):
infft[b, :] = rng.random(nfft, key=[0, 0], counter=[b, 0], sampler="gaussian")
print("----- FFT input -----")
print(infft)
# +
# Forward FFT
outfft = toast.fft.r1d_forward(infft)
print("\n----- FFT output -----")
print(outfft)
# +
# Reverse FFT
backfft = toast.fft.r1d_backward(outfft)
print("\n----- FFT inverse output -----")
print(backfft)
# -
# ### Cache Example
#
# The Cache class provides a mechanism to work around the Python memory pool. There are times when we want to allocate memory and explicitly free it without waiting for garbage collection. Every instance of a `toast.Cache` acts as a dictionary of numpy arrays. Internally, the memory of each entry is a flat-packed std::vector with a custom allocator that ensures aligned memory allocation. Aligned memory is required for SIMD operations both in TOAST and in external libraries. Buffers in a Cache instance can be used directly for such operations.
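A minimal pure-Python sketch of the dictionary-of-buffers idea (illustration only: this `MiniCache` uses ordinary numpy allocation, not the aligned allocator that `toast.Cache` provides):

```python
import numpy as np

class MiniCache:
    """Named buffers that can be created, referenced, and explicitly freed."""
    def __init__(self):
        self._buffers = {}
    def create(self, name, dtype, shape):
        self._buffers[name] = np.zeros(shape, dtype=dtype)
        return self._buffers[name]
    def reference(self, name):
        return self._buffers[name]
    def exists(self, name):
        return name in self._buffers
    def destroy(self, name):
        # Memory is released once no outside references remain
        del self._buffers[name]

mc = MiniCache()
mc.create("c1", np.float64, (4,))
mc.reference("c1")[:] = 1.5
print(mc.reference("c1"))  # [1.5 1.5 1.5 1.5]
mc.destroy("c1")
print(mc.exists("c1"))  # False
```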
# +
from toast.cache import Cache
# Example array dimensions
cnames = ["c1", "c2"]
cshapes = {
"c1" : (20,),
"c2" : (2, 3, 2)
}
ctyps = {
"c1" : np.float64,
"c2" : np.uint16
}
# +
# A cache instance
cache = Cache()
# +
# Create some empty arrays in the cache
for cn in cnames:
cache.create(cn, ctyps[cn], cshapes[cn])
print("---- Cache object ----")
print(cache)
print("\n---- Now contains ----")
for cn in cnames:
print("{}: {}".format(cn, cache.reference(cn)))
print("Size = ", cache.report(silent=True), " bytes")
# +
# Fill existing buffers
# Get a reference to the buffer
cdata = cache.reference("c1")
# Assign elements.
cdata[:] = np.random.random(cshapes["c1"])
# Delete the reference
del cdata
# +
cdata = cache.reference("c2")
idx = 0
for x in range(cshapes["c2"][0]):
for y in range(cshapes["c2"][1]):
for z in range(cshapes["c2"][2]):
cdata[x, y, z] = idx
idx += 1
del cdata
print("\n---- Contents after filling ----")
for cn in cnames:
print("{}: {}".format(cn, cache.reference(cn)))
print("Size = ", cache.report(silent=True), " bytes")
# +
# We can also "put" existing numpy arrays which will then be copied into
# the cache
np1 = np.random.normal(size=10)
np2 = np.random.randint(0, high=255, dtype=np.uint16, size=12).reshape((2, 3, 2))
cache.put("p1", np1)
cache.put("p2", np2)
print("\n---- Contents after putting numpy arrays ----")
for cn in list(cache.keys()):
print("{}: {}".format(cn, cache.reference(cn)))
print("Size = ", cache.report(silent=True), " bytes")
# -
# ## Running the Test Suite
#
# TOAST includes extensive tests built into the package. Running all of them takes some time, but you can also run just one test by specifying the name of the file in the toast/tests directory (without the ".py" extension):
# +
import toast.tests
# Run just a couple simple tests in toast/tests/env.py
toast.tests.run("env")
# -
# Now run **ALL** the (serial) tests
toast.tests.run()
| lessons/01_Introduction/intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Libraries
from numpy import arange, sin, cos
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)
import matplotlib.pyplot as plt
x = arange(0, 15, 0.001)
y = sin(x)
z = cos(x)
ax = plt.figure(figsize=(10, 10)).add_subplot(111, projection='3d')
ax.scatter(x, [0]*len(x), z, s=1, c='k')
ax.scatter(x, y, [0]*len(x), s=1, c='r')
ax.scatter(x, [0]*len(x), [0]*len(x), s=1, c='b')
ax.set_xlabel("t (seconds)")
ax.set_ylabel("E (Newton/Coulomb)")
ax.set_zlabel("B (Tesla)")
plt.show()
# -
| LabBook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
duffy = pd.read_csv('duffy_data.csv')
duffy.head()
duffy.describe()
duffy.info()
sns.lmplot(x='Cost per load',y='Dist',data=duffy)
sns.lmplot(x="Cost per load",y="Lead time(days)",data=duffy)
sns.lmplot(x="Cost per load",y="Wgt(Lb)",data=duffy)
X = pd.DataFrame(duffy[["Dist","Lead time(days)","Wgt(Lb)"]])
y = pd.DataFrame(duffy[["Cost per load"]])
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.3,random_state=101)
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X_train,y_train)
print(lm.coef_)
predictions = lm.predict(X_test)
plt.scatter(y_test, predictions)
plt.xlabel('y_test')
plt.ylabel('y_predicted')
pd.DataFrame(lm.coef_)
np.transpose(pd.DataFrame(lm.coef_))
| Script.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="Z82c-Fcay0a3"
# ## Stage 1: Installing dependencies and setting up the GPU environment
# + id="OoH0-SMEOy-j"
# !pip install numpy==1.16.1
# + [markdown] id="JL3SBH6PzDwV"
# ## Stage 2: Importing project dependencies
# + id="ynShOu8nNtFt"
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import imdb
# + id="Kw7-sPdOzf5l"
tf.__version__
# + [markdown] id="JEjlM2EazOf0"
# ## Stage 3: Dataset preprocessing
# + [markdown] id="wB0tNtXJzTfA"
# ### Setting up dataset parameters
# + id="Jw6_KU24SrYK"
number_of_words = 20000
max_len = 100
# + [markdown] id="ePywR8A4zaxT"
# ### Loading the IMDB dataset
# + id="6kCTV_hjOKmE"
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=number_of_words)
# + [markdown] id="MZKDNoTKzi5w"
# ### Padding all sequences to be the same length
# + id="LHcMNzv7Pd1s"
X_train = tf.keras.preprocessing.sequence.pad_sequences(X_train, maxlen=max_len)
# + id="Fcxd--ESP3Rh"
X_test = tf.keras.preprocessing.sequence.pad_sequences(X_test, maxlen=max_len)
# + [markdown] id="7xDMP44Zz0dU"
# ### Setting up Embedding Layer parameters
# + id="nGHQ2upgQIGj"
vocab_size = number_of_words
vocab_size
# + id="PMyk2JcPQcjF"
embed_size = 128
# + [markdown] id="VG6LBKGnz7jT"
# ## Stage 4: Building a Recurrent Neural Network
# + [markdown] id="TUVnz-9K0DcW"
# ### Defining the model
# + id="N2GHzwk6OMrV"
model = tf.keras.Sequential()
# + [markdown] id="lnXJZYR-0HXE"
# ### Adding the Embedding Layer
# + id="UWqC0DXbO9FU"
model.add(tf.keras.layers.Embedding(vocab_size, embed_size, input_shape=(X_train.shape[1],)))
# + [markdown] id="CM-lpTZX1mEG"
# ### Adding the LSTM Layer
#
# - units: 128
# - activation: tanh
# + id="5W7IXqhjQpAl"
model.add(tf.keras.layers.LSTM(units=128, activation='tanh'))
# + [markdown] id="9T9M5Ult10XM"
# ### Adding the Dense output layer
#
# - units: 1
# - activation: sigmoid
# + id="xe1nHzq7Q91-"
model.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
# + [markdown] id="VWcqM4Yr2ALS"
# ### Compiling the model
# + id="-z9ACOXcRUUN"
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
# + id="PiolKKO6RjVF"
model.summary()
# + [markdown] id="2bPUvbfe2GJI"
# ### Training the model
# + id="9FqUTA1CRpQ8"
model.fit(X_train, y_train, epochs=3, batch_size=128)
# + [markdown] id="_GJ4irh1bCX7"
#
# + [markdown] id="-wMo2wYpbCgb"
# ### Evaluating the model
# + id="a8kD_6q-RySO"
test_loss, test_accuracy = model.evaluate(X_test, y_test)
# + id="C0XnUtS-cEeI"
print("Test accuracy: {}".format(test_accuracy))
# + id="fN9QK49W3C29"
| rnn/imdb-reviews/RNN_IMDB_Reviews_Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Spark Skeleton
# This is the skeleton for using Spark within Jupyter Notebook. Currently only Python 2 is supported. For running Python Spark programs use `spark-submit` from the command line.
#
# ## Initialization
"""
Load packages and create context objects...
"""
import os
import platform
import sys
sys.path.append('/usr/hdp/2.4.2.0-258/spark/python')
os.environ["SPARK_HOME"] = '/usr/hdp/2.4.2.0-258/spark'
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-csv_2.11:1.2.0 pyspark-shell'
import py4j
import pyspark
from pyspark.context import SparkContext, SparkConf
from pyspark.sql import SQLContext, HiveContext
from pyspark.storagelevel import StorageLevel
sc = SparkContext()
import atexit
atexit.register(lambda: sc.stop())
print("""Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/__ / .__/\_,_/_/ /_/\_\ version %s
/_/
""" % sc.version)
# Now, the Spark context is initialized.
# ### Best Practice:
# Close and halt Jupyter notebooks after you worked on them. The processes and resource allocations of notebooks persist even if you close the browser window.
#
# ## Example: Loading Data Files
# This Spark implementation reads data files from the Hadoop File System (HDFS). Larger files are usually broken up into many smaller chunks so that multiple processes can read them without interference. In addition, most Hadoop components unzip compressed files on the fly.
#
# In this example we load a text file and create a Spark Resilient Distributed Dataset (RDD) where each row holds a line from the text file. Our data file is a collection of Twitter status records: one JSON-encoded record per line.
tweets = sc.textFile("/user/molnar/data/election2012/cache-117000000.json.gz")
# Let's look at the first record. We're going to decode the data string and encode it back to text for nicer formatting:
# +
the_very_first_tweet = tweets.take(1)[0]  # `take()` always returns a list, even if there's just one row
import json
print json.dumps(json.loads(the_very_first_tweet), indent=4, sort_keys=True)
# -
# Let's extract hash tags. In this case there's only one hashtag, but still the data structure treats it as a list.
json.loads(the_very_first_tweet)['entities']['hashtags'][0]['text']
# The extraction process is a bit more complicated than what can be written in the $\lambda$-function format. We therefore define a function that deals with cases that do not include hashtags.
#
# Conventionally, programmers test assumptions before they execute, i.e. checking first whether hashtags exist. However, if we anticipate only a few exceptions, the try-except structure makes for cleaner code.
# +
# easier to ask for forgiveness than permission
def extract_hash_EAFP(tw):
try:
return json.loads(tw)['entities']['hashtags']
except:
return []
# look before you leap
def extract_hash_LBYL(tw):
t = json.loads(tw)
if 'entities' in t.keys():
ent = t['entities']
if 'hashtags' in ent.keys():
return ent['hashtags']
return []
hashtags = tweets.flatMap(extract_hash_EAFP).map(lambda x: (x['text'], 1))
# -
# %%time
hashtags.take(10)
# We added a second element of 1 to each hashtag so that we can count them in the "good old-fashioned map-reduce" way.
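Outside of Spark, the same (key, 1) counting pattern looks like this in plain Python:

```python
from collections import defaultdict

pairs = [("a", 1), ("b", 1), ("a", 1), ("c", 1), ("a", 1)]
# "reduce by key": sum the values belonging to each key
counts = defaultdict(int)
for key, value in pairs:
    counts[key] += value
print(dict(counts))  # {'a': 3, 'b': 1, 'c': 1}
```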
# %%time
tagcounts = hashtags.reduceByKey(lambda a, b: a+b) # the first element of the tuple is the key
# %%time
tagcounts.take(10)
# Now, let's just see them in order. For that we need to swap elements in the tuples: the count result becomes the key. Then we can sort.
# %%time
sorted_tagcounts = tagcounts.map(lambda (a,b): (b, a)).sortByKey(False)
# %%time
sorted_tagcounts.take(10)
# ## Example: Loading CSV Files
# Data files in CSV format can be treated in a similar fashion to JSON files. One can either read them into RDDs and then create a table by splitting the text line into column values per row, or use the SparkSQL package to create Spark DataFrames.
#
# First we need to create a SQLContext, then use that new object to read a CSV file.
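A hedged sketch of the RDD route mentioned above: a per-line parser that could be mapped over the output of `sc.textFile(...)`. Using `csv.reader` keeps quoted fields containing commas intact, which a naive `line.split(',')` would break (written for Python 3; in this Python 2 notebook you would use `StringIO.StringIO`):

```python
import csv
from io import StringIO

def parse_csv_line(line):
    # Parse one CSV-formatted text line into a list of column values
    return next(csv.reader(StringIO(line)))

print(parse_csv_line('123,"Doe, Jane",45.0'))  # ['123', 'Doe, Jane', '45.0']
```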
sqlctx = SQLContext(sc)
import pandas as pd
import numpy as np
# %%time
df_donorsummary = sqlctx.read.format('com.databricks.spark.csv')\
.options(header=True, inferschema=False)\
.load('/user/mgrace/red_cross/donor_summary912016.csv')
# Spark can infer the type of each column to some degree, however, this may take significantly longer and often leads to crashes. Therefore, it's usually better to load the data as string values and then perform the data transformation on each column.
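The per-column transformation can be as simple as a conversion that maps unparseable strings to missing values. A plain-Python sketch of the idea (in Spark this would become a column cast or a UDF):

```python
def to_float_or_none(s):
    # Convert a string to float; anything unparseable becomes None (missing)
    try:
        return float(s)
    except (TypeError, ValueError):
        return None

raw = ["3.5", "NA", "7", None]
print([to_float_or_none(v) for v in raw])  # [3.5, None, 7.0, None]
```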
df_donorsummary.printSchema()
df_donorsummary.show(20)
import re
re.match(r'^\s*NA\s*$', ' NA ')
# The following code should replace `'NA'` strings with `None` values to properly deal with NAs.
# +
from pyspark.sql.functions import col, when
import re
def NA_as_null(x):
    # Values matching optional-whitespace 'NA' become nulls; everything else is kept
    return when(col(x).rlike(r'^\s*NA\s*$'), None).otherwise(col(x))
dfna = df_donorsummary
for c in df_donorsummary.columns:
    dfna = dfna.withColumn(c, NA_as_null(c))
dfna.printSchema()
# -
dfna.show(10)
# ## Example: Hive Database
# An example to create a Hive DataFrame and save it into Hive.
#
# ### Best Practice:
# Preprocess and clean your raw data files (in JSON or CSV) and save them into a Hive database for further use.
hctx = HiveContext(sc)
tpdf = hctx.createDataFrame(tagcounts)
tpdf.printSchema()
tpdf.write.mode('overwrite').saveAsTable('elections2012_hashtags_ranked')
tagcntsdf = sqlctx.createDataFrame(tagcounts)
tagcntsdf.printSchema()
tagcntsdf.write.json("/user/molnar/data/election2012/top_hashtag_counts")
| SparkSkeleton_orig.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Inference: Metropolis Random Walk MCMC
#
# This example shows you how to perform Bayesian inference on a time series, using [Metropolis Random Walk MCMC](http://pints.readthedocs.io/en/latest/mcmc_samplers/metropolis_mcmc.html).
#
# It follows on from the [first sampling example](./first-example.ipynb).
# +
from __future__ import print_function
import pints
import pints.toy as toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = np.array([0.015, 500])
times = np.linspace(0, 1000, 1000)
org_values = model.simulate(real_parameters, times)
# Add noise
noise = 10
values = org_values + np.random.normal(0, noise, org_values.shape)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log-likelihood function
log_likelihood = pints.GaussianKnownSigmaLogLikelihood(problem, noise)
# Create a uniform prior over the parameters
log_prior = pints.UniformLogPrior(
[0.01, 400],
[0.02, 600]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
xs = [
real_parameters * 1.1,
real_parameters * 0.9,
real_parameters * 1.15,
]
# Choose a covariance matrix for the proposal step
sigma0 = np.abs(real_parameters) * 5e-4
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 3, xs, sigma0=sigma0, method=pints.MetropolisRandomWalkMCMC)
# Add stopping criterion
mcmc.set_max_iterations(30000)
# Disable logging mode
#mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(chains)
# Discard warm up
chains = chains[:, 10000:, :]
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(chains))
# Look at distribution in chain 0
pints.plot.pairwise(chains[0])
# Show graphs
plt.show()
| examples/sampling/metropolis-mcmc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <NAME>:
# We are in a competition to win the archery contest in Sherwood. With our bow and arrows we shoot at a target and try to hit as close as possible to the center.
#
# The center of the target is represented by the values (0, 0) on the coordinate axes.
#
# 
#
# ## Goals:
# * data structures: lists, sets, tuples
# * logical operators: if-elif-else
# * loop: while/for
# * minimum (optional sorting)
#
# ## Description:
# In the 2-dimensional space, a point can be defined by a pair of values that correspond to the horizontal coordinate (x) and the vertical coordinate (y). The space can be divided into 4 zones (quadrants): Q1, Q2, Q3, Q4, whose single point of union is the point (0, 0).
#
# If a point is in Q1, both its x and y coordinates are positive. I leave a link to Wikipedia to familiarize yourself with these quadrants.
#
# https://en.wikipedia.org/wiki/Cartesian_coordinate_system
#
# https://en.wikipedia.org/wiki/Euclidean_distance
#
# ## Shots
# ```
# points = [(4, 5), (-0, 2), (4, 7), (1, -3), (3, -2), (4, 5),
# (3, 2), (5, 7), (-5, 7), (2, 2), (-4, 5), (0, -2),
# (-4, 7), (-1, 3), (-3, 2), (-4, -5), (-3, 2),
# (5, 7), (5, 7), (2, 2), (9, 9), (-8, -9)]
# ```
#
# ## Tasks
# 1. <NAME> is famous for hitting an arrow with another arrow. Did you get it?
# 2. Calculate how many arrows have fallen in each quadrant.
# 3. Find the point closest to the center. Calculate its distance to the center.
# 4. If the target has a radius of 9, calculate the number of arrows that must be picked up in the forest.
# +
# Variables
points = [(4, 5), (-0, 2), (4, 7), (1, -3), (3, -2), (4, 5),
(3, 2), (5, 7), (-5, 7), (2, 2), (-4, 5), (0, -2),
(-4, 7), (-1, 3), (-3, 2), (-4, -5), (-3, 2),
(5, 7), (5, 7), (2, 2), (9, 9), (-8, -9)]
points[4]
# +
# 1. <NAME> is famous for hitting an arrow with another arrow. Did you get it?
import math
class Punto:
def __init__(self, x=0, y=0):
self.x = x
self.y = y
def __str__(self):
return "({}, {})".format(self.x, self.y)
    def cuadrante(self):
        if self.x > 0 and self.y > 0:
            print("{} is in the first quadrant".format(self))
        elif self.x < 0 and self.y > 0:
            print("{} is in the second quadrant".format(self))
        elif self.x < 0 and self.y < 0:
            print("{} is in the third quadrant".format(self))
        elif self.x > 0 and self.y < 0:
            print("{} is in the fourth quadrant".format(self))
        else:
            print("{} lies on an axis".format(self))
    def distancia(self, p):
        d = math.sqrt((p.x - self.x)**2 + (p.y - self.y)**2)
        print("The distance between point {} and point {} is {}".format(self, p, d))
points = [(4, 5), (-0, 2), (4, 7), (1, -3), (3, -2), (4, 5),
(3, 2), (5, 7), (-5, 7), (2, 2), (-4, 5), (0, -2),
(-4, 7), (-1, 3), (-3, 2), (-4, -5), (-3, 2),
(5, 7), (5, 7), (2, 2), (9, 9), (-8, -9)]
#They are created manually, I don't know how to do it any other way
A=Punto(4,5)
B=Punto(-0, 2)
C=Punto(4, 7)
D=Punto(1, -3)
E=Punto(3, -2)
F=Punto(4, 5)
G=Punto(3, 2)
H=Punto(5, 7)
I=Punto(-5, 7)
J=Punto(2, 2)
K=Punto(-4, 5)
L=Punto(0, -2)
M=Punto(-4, 7)
N=Punto(-1, 3)
P=Punto(-3, 2)
Q=Punto(-4, -5)
R=Punto(5, 7)
S=Punto(5, 7)
T=Punto(2, 2)
U=Punto(9, 9)
V=Punto(-8, -9)
O=Punto(0, 0)
A.cuadrante()
B.cuadrante()
C.cuadrante()
D.cuadrante()
E.cuadrante()
F.cuadrante()
G.cuadrante()
H.cuadrante()
I.cuadrante()
J.cuadrante()
K.cuadrante()
L.cuadrante()
M.cuadrante()
N.cuadrante()
P.cuadrante()
Q.cuadrante()
R.cuadrante()
S.cuadrante()
T.cuadrante()
U.cuadrante()
V.cuadrante()
O.cuadrante()
A.distancia(O)
B.distancia(O)
C.distancia(O)
D.distancia(O)
E.distancia(O)
F.distancia(O)
G.distancia(O)
H.distancia(O)
I.distancia(O)
J.distancia(O)
K.distancia(O)
L.distancia(O)
M.distancia(O)
N.distancia(O)
P.distancia(O)
Q.distancia(O)
R.distancia(O)
S.distancia(O)
T.distancia(O)
U.distancia(O)
V.distancia(O)
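Task 1 can also be answered directly by counting repeated coordinate pairs in the shot list (a sketch; a pair appearing more than once means an arrow hit another arrow):

```python
from collections import Counter

points = [(4, 5), (-0, 2), (4, 7), (1, -3), (3, -2), (4, 5),
          (3, 2), (5, 7), (-5, 7), (2, 2), (-4, 5), (0, -2),
          (-4, 7), (-1, 3), (-3, 2), (-4, -5), (-3, 2),
          (5, 7), (5, 7), (2, 2), (9, 9), (-8, -9)]

repeats = {p: n for p, n in Counter(points).items() if n > 1}
print(repeats)  # {(4, 5): 2, (5, 7): 3, (2, 2): 2, (-3, 2): 2}
```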
# +
# 2. Calculate how many arrows have fallen in each quadrant.
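A sketch for this task, classifying each shot by the sign of its coordinates (shots on an axis are tallied separately):

```python
points = [(4, 5), (-0, 2), (4, 7), (1, -3), (3, -2), (4, 5),
          (3, 2), (5, 7), (-5, 7), (2, 2), (-4, 5), (0, -2),
          (-4, 7), (-1, 3), (-3, 2), (-4, -5), (-3, 2),
          (5, 7), (5, 7), (2, 2), (9, 9), (-8, -9)]

counts = {"Q1": 0, "Q2": 0, "Q3": 0, "Q4": 0, "axis": 0}
for x, y in points:
    if x > 0 and y > 0:
        counts["Q1"] += 1
    elif x < 0 and y > 0:
        counts["Q2"] += 1
    elif x < 0 and y < 0:
        counts["Q3"] += 1
    elif x > 0 and y < 0:
        counts["Q4"] += 1
    else:
        counts["axis"] += 1
print(counts)  # {'Q1': 10, 'Q2': 6, 'Q3': 2, 'Q4': 2, 'axis': 2}
```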
# +
# 3. Find the point closest to the center. Calculate its distance to the center
# Defining a function that calculates the distance to the center can help.
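A sketch using `min` with the Euclidean distance as the key (`math.hypot(x, y)` computes `sqrt(x**2 + y**2)`):

```python
import math

points = [(4, 5), (-0, 2), (4, 7), (1, -3), (3, -2), (4, 5),
          (3, 2), (5, 7), (-5, 7), (2, 2), (-4, 5), (0, -2),
          (-4, 7), (-1, 3), (-3, 2), (-4, -5), (-3, 2),
          (5, 7), (5, 7), (2, 2), (9, 9), (-8, -9)]

closest = min(points, key=lambda p: math.hypot(p[0], p[1]))
print(closest, math.hypot(*closest))  # (0, 2) 2.0
```

Note that (0, -2) ties at the same distance; `min` returns the first minimum encountered.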
# +
# 4. If the target has a radius of 9, calculate the number of arrows that
# must be picked up in the forest.
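A sketch: an arrow must be picked up in the forest when its distance to the center exceeds the target radius of 9:

```python
import math

points = [(4, 5), (-0, 2), (4, 7), (1, -3), (3, -2), (4, 5),
          (3, 2), (5, 7), (-5, 7), (2, 2), (-4, 5), (0, -2),
          (-4, 7), (-1, 3), (-3, 2), (-4, -5), (-3, 2),
          (5, 7), (5, 7), (2, 2), (9, 9), (-8, -9)]

RADIUS = 9
missed = [p for p in points if math.hypot(p[0], p[1]) > RADIUS]
print(len(missed), missed)  # 2 [(9, 9), (-8, -9)]
```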
# -
| robin-hood/robin-hood.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="M-rNkS6xWdou"
import pandas as pd
import numpy as np
import warnings
# + id="HMxSvfrsX7-Q"
warnings.filterwarnings('ignore')
# + [markdown] id="dhrdPsyCXdtr"
# **LOADING THE DATA**
# + colab={"base_uri": "https://localhost:8080/", "height": 224} id="wI2eakrSW_ND" outputId="889a4fbf-2e4c-4168-e4fd-da1fee763323"
data = pd.read_csv("bank-full.csv", delimiter=';')
data.head()
# + [markdown] id="JPL3XUU5Xgch"
# **SAVING A COPY OF THE DATA**
# + id="Ei_fyrpcXGb3"
data_copy = data.copy()
# + id="KIO8Ybdviz0S"
data.drop(['day', 'month', 'pdays'], axis=1, inplace=True)
# + [markdown] id="SZsQ7EY8Xs6S"
# **DATA VISUALIZATION OR EDA**
# + id="RWv_Fd4kXyfY"
import matplotlib.pyplot as plt
import seaborn as sns
# + [markdown] id="LKXK48A2XwGo"
# target class distribution
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="2ihd6FBZXozn" outputId="e38c9dd6-d53d-4a75-d234-76cb84ea789e"
sns.countplot(data['y'])
# + [markdown] id="smjbwzPGYBv4"
# There is an imbalance in the target, which I will fix in the processing step
# + [markdown] id="i_UhOX2ZZmIa"
# Distribution of categorical columns
# + [markdown] id="jq273ltRbi-d"
# **JOB**
# + colab={"base_uri": "https://localhost:8080/", "height": 592} id="KWtIdSioX3iA" outputId="688bc47c-ee16-43a5-a7dd-d94f538e41e6"
plt.figure(figsize=(10,10))
sns.set(style='whitegrid')
labels = ['management', 'technician', 'entrepreneur', 'blue-collar',
'unknown', 'retired', 'admin.', 'services', 'self-employed',
'unemployed', 'housemaid', 'student']
values = data['job'].value_counts().tolist()
plt.pie(x=values, labels=labels, autopct="%1.2f%%", shadow=True)
plt.title("Job Distribution Pie Chart", fontdict={'fontsize': 14})
plt.show()
# + [markdown] id="kAW08cWjby5P"
# **MARITAL**
# + colab={"base_uri": "https://localhost:8080/", "height": 320} id="IE5oJEXMaltq" outputId="45fd9f31-298b-4bfd-a12d-61f6d47d69ac"
plt.figure(figsize=(5,5))
sns.set(style='whitegrid')
labels = ['married', 'single', 'divorced']
values = data['marital'].value_counts().tolist()
plt.pie(x=values, labels=labels, autopct="%1.2f%%", shadow=True)
plt.title("Marital Distribution Pie Chart", fontdict={'fontsize': 14})
plt.show()
# + [markdown] id="JlVKaHV4cG97"
# **Education**
# + colab={"base_uri": "https://localhost:8080/", "height": 320} id="k610TA5Tb3ac" outputId="01c6d6e2-b675-4928-a4e7-caf91740d09d"
plt.figure(figsize=(5,5))
sns.set(style='whitegrid')
labels = ['tertiary', 'secondary', 'unknown', 'primary']
values = data['education'].value_counts().tolist()
plt.pie(x=values, labels=labels, autopct="%1.2f%%", shadow=True)
plt.title("Education Distribution Pie Chart", fontdict={'fontsize': 14})
plt.show()
# + [markdown] id="3WLUwMpmco5B"
# **Default**
# + colab={"base_uri": "https://localhost:8080/", "height": 320} id="r2DqXutUcKrU" outputId="f2eba902-af95-4404-9237-c9fcfd4338fe"
plt.figure(figsize=(5,5))
sns.set(style='whitegrid')
labels = ['no', 'yes']
values = data['default'].value_counts().tolist()
plt.pie(x=values, labels=labels, autopct="%1.2f%%", shadow=True)
plt.title("Default Distribution Pie Chart", fontdict={'fontsize': 14})
plt.show()
# + [markdown] id="HPAdWxzlczoY"
# **Housing**
# + colab={"base_uri": "https://localhost:8080/", "height": 320} id="paAZ8V5jceQH" outputId="fb80637e-f9b5-4212-a1e0-ee34bdf503f7"
plt.figure(figsize=(5,5))
sns.set(style='whitegrid')
labels = ['no', 'yes']
values = data['housing'].value_counts().tolist()
plt.pie(x=values, labels=labels, autopct="%1.2f%%", shadow=True)
plt.title("Housing Distribution Pie Chart", fontdict={'fontsize': 14})
plt.show()
# + [markdown] id="OFvgNEk6c7Og"
# **Loan**
# + colab={"base_uri": "https://localhost:8080/", "height": 320} id="1IrvUlkKc2Xq" outputId="12d62619-c8f5-413b-d777-a2c4725d6c63"
plt.figure(figsize=(5,5))
sns.set(style='whitegrid')
labels = ['no', 'yes']
values = data['loan'].value_counts().tolist()
plt.pie(x=values, labels=labels, autopct="%1.2f%%", shadow=True)
plt.title("Loan Distribution Pie Chart", fontdict={'fontsize': 14})
plt.show()
# + [markdown] id="Yt4PU2EqeWVM"
# **Contact**
# + colab={"base_uri": "https://localhost:8080/", "height": 320} id="PdPFp8QpdGEI" outputId="8878c25f-cb1b-4ec6-c26d-cfbe227cf28a"
plt.figure(figsize=(5,5))
sns.set(style='whitegrid')
labels = ['unknown', 'cellular', 'telephone']
values = data['contact'].value_counts().tolist()
plt.pie(x=values, labels=labels, autopct="%1.2f%%", shadow=True)
plt.title("Contact Distribution Pie Chart", fontdict={'fontsize': 14})
plt.show()
# + [markdown] id="LyeHJsnQeZA0"
# **Poutcome**
# + colab={"base_uri": "https://localhost:8080/", "height": 320} id="SHKkJ9UDdMnq" outputId="b5b0dcbf-721b-4449-dd0d-b26984851e10"
plt.figure(figsize=(5,5))
sns.set(style='whitegrid')
labels = ['unknown', 'failure', 'other', 'success']
values = data['poutcome'].value_counts().tolist()
plt.pie(x=values, labels=labels, autopct="%1.2f%%", shadow=True)
plt.title("Poutcome Distribution Pie Chart", fontdict={'fontsize': 14})
plt.show()
# + [markdown] id="c55rQCasebxv"
# **Target**
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="Jc7Rs9YvdmBr" outputId="5a7a1b8d-cfc4-4e87-9c6f-e10b97991df9"
plt.figure(figsize=(4,4))
sns.set(style='whitegrid')
labels = ['no', 'yes']
values = data['y'].value_counts().tolist()
plt.pie(x=values, labels=labels, autopct="%1.2f%%", shadow=True)
plt.title("Target Distribution Pie Chart", fontdict={'fontsize': 14})
plt.show()
# + [markdown] id="TBL9oZh7ed80"
# **Ages Distribution**
# + colab={"base_uri": "https://localhost:8080/", "height": 470} id="dyfp9CtSdtxY" outputId="85090006-3a7a-4f02-81c7-9eaa3f4b543b"
plt.style.use("classic")
sns.distplot(data['age'], color='magenta', kde=True)
plt.axvline(data['age'].mean(), color='orange', linestyle='-', linewidth=0.8)
min_ylim, max_ylim = plt.ylim()
plt.text(data['age'].mean()*1.05, max_ylim*0.95, 'Mean (μ): {:.2f}'.format(data['age'].mean()))
plt.xlabel("Age (in years)")
plt.title(f"Distribution of Ages")
plt.show()
# + [markdown] id="M3e8-lmVeRS0"
# **DATA PROCESSING**
# + [markdown] id="jcfbNO2beuoT"
# first let's check for null values
# + colab={"base_uri": "https://localhost:8080/"} id="fHR-HF9NeJvq" outputId="0347c225-bcec-406b-a52a-d81f221e2838"
data.isnull().sum()
# + [markdown] id="z6Rao-mohpxk"
# Encoding categorical data to numeric data
# + id="lQysXcN1h33j"
job_mapping = {'management':0, 'technician':1, 'entrepreneur':2, 'blue-collar':3,
'unknown':4, 'retired':5, 'admin.':6, 'services':7, 'self-employed':8,
'unemployed':9, 'housemaid':10, 'student':11}
marital_mapping = {'married':0, 'single':1, 'divorced':2}
education_mapping = {'tertiary':0, 'secondary':1, 'unknown':2, 'primary':3}
yes_no_mapping = {'no':0, 'yes':1}
contact_mapping = {'unknown':0, 'cellular':1, 'telephone':2}
poutcome_mapping = {'unknown':0, 'failure':1, 'other':2, 'success':3}
# + id="pWXief75j1gm"
data['job'] = data['job'].map(job_mapping)
data['marital'] = data['marital'].map(marital_mapping)
data['education'] = data['education'].map(education_mapping)
data['default'] = data['default'].map(yes_no_mapping)
data['housing'] = data['housing'].map(yes_no_mapping)
data['loan'] = data['loan'].map(yes_no_mapping)
data['contact'] = data['contact'].map(contact_mapping)
data['poutcome'] = data['poutcome'].map(poutcome_mapping)
data['y'] = data['y'].map(yes_no_mapping)
# + [markdown] id="QgDVhcqwkpto"
# Feature Selection
# + [markdown] id="ExS2v3Gtluf3"
# first let's drop all the rows with a negative balance
# + id="__INTf9JlSFB"
negatives = data[data['balance']<0].index
data.drop(negatives, inplace=True)
# + id="Irs5-i1kkcZ8"
X, y = data.drop('y', axis=1), data['y']
# + id="jbxoZ1sZkvJH"
from sklearn.feature_selection import SelectKBest, chi2
# + colab={"base_uri": "https://localhost:8080/"} id="nDPweQWLkzzv" outputId="c627969a-8e9d-46cd-e4db-bf2ba4ac5336"
selector = SelectKBest(chi2, k=9)
selector.fit_transform(X, y)
# + colab={"base_uri": "https://localhost:8080/"} id="BTNROFXAk7re" outputId="c53ee06f-2a06-4b7e-bf5a-2c6dd9779290"
X.columns[selector.get_support()]
# + id="DaRdUMBtmvZV"
X_new = data[['education', 'balance', 'housing', 'loan', 'contact', 'duration',
'campaign', 'previous', 'poutcome']]
y_new = data['y']
# + [markdown] id="qvbTcwtZnCFi"
# checking for class imbalance
# + colab={"base_uri": "https://localhost:8080/", "height": 476} id="K9t0d-K0nBGy" outputId="63089cbd-da21-44e4-ef9c-674113d5c0a7"
sns.countplot(y_new)
# + id="z2VX2Fs5nGkN"
from imblearn.over_sampling import SMOTE
# + id="AgRLJwGznTrN"
smote = SMOTE(random_state=56)
X, y = smote.fit_resample(X_new, y_new)
# + colab={"base_uri": "https://localhost:8080/", "height": 457} id="e3HHbDownbKe" outputId="9a22ae2c-cc44-4102-ed77-978959a09596"
sns.countplot(y)
# + [markdown] id="T2_OV7hrngTh"
# **SPLITTING DATA INTO TRAINING, TESTING AND VALIDATION SETS**
# + id="oJHKlAchncnq"
from sklearn.model_selection import train_test_split
# + id="xxlYKcGrn1fY"
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=56)
x_train_, x_val, y_train_, y_val = train_test_split(x_train, y_train, test_size=0.2, random_state=56)
# + [markdown] id="q5oHZ4J9n41o"
# **MODEL SELECTION**
# + id="nxmjjWuKn7CK"
from sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier, RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from sklearn.linear_model import LogisticRegression
# + id="xHbQLDmgoFqX"
from sklearn.metrics import f1_score, classification_report
# + id="1MQ39nCboHrY"
def model_selection(x_train_, x_val, y_train_, y_val, model):
model = model()
model.fit(x_train_, y_train_)
pred = model.predict(x_val)
f1 = f1_score(y_val, pred)
report = classification_report(y_val, pred)
train_score = model.score(x_train_, y_train_)
val_score = model.score(x_val, y_val)
print('F1 Score:', f1*100)
print('\n')
print('Classification report:', report)
print('\n')
print('Train Score:', train_score*100)
print('\n')
print('Val Score:', val_score*100)
print('\n')
print('Is overfitting:', True if train_score>val_score else False)
print('\n')
print('Overfitting by:',train_score*100-val_score*100)
# + colab={"base_uri": "https://localhost:8080/"} id="EDUjA0nIoL3m" outputId="f5d11d08-9cd6-48a7-c5e9-2d00ebd9c411"
extratrees = model_selection(x_train_, x_val, y_train_, y_val, ExtraTreesClassifier)
extratrees
# + colab={"base_uri": "https://localhost:8080/"} id="IyL3Mox5oNu6" outputId="43e826b8-c0c3-4133-ec07-4090848b2da7"
gradient = model_selection(x_train_, x_val, y_train_, y_val, GradientBoostingClassifier)
gradient
# + colab={"base_uri": "https://localhost:8080/"} id="x_BRD0IuoQI8" outputId="ce67f15b-29f8-4870-92db-5618cdaec5d4"
randomforest = model_selection(x_train_, x_val, y_train_, y_val, RandomForestClassifier)
randomforest
# + colab={"base_uri": "https://localhost:8080/"} id="TFJwdaqioV5i" outputId="f5711a1a-1468-4d8e-d882-f3725268bbd9"
ada = model_selection(x_train_, x_val, y_train_, y_val, AdaBoostClassifier)
ada
# + colab={"base_uri": "https://localhost:8080/"} id="qVW81JlgoZlA" outputId="cd125b4f-c8df-4d6e-f745-3d8c6df2eff9"
decisiontree = model_selection(x_train_, x_val, y_train_, y_val, DecisionTreeClassifier)
decisiontree
# + colab={"base_uri": "https://localhost:8080/"} id="IAF78Vg0ojE7" outputId="d8e6f1c8-4780-47c9-ee74-bdcb75d4958e"
xgb = model_selection(x_train_, x_val, y_train_, y_val, XGBClassifier)
xgb
# + colab={"base_uri": "https://localhost:8080/"} id="XKIJfTiBoosQ" outputId="42bf602b-2a0c-4372-c35a-e5063c764e2c"
lgbm = model_selection(x_train_, x_val, y_train_, y_val, LGBMClassifier)
lgbm
# + colab={"base_uri": "https://localhost:8080/"} id="mPeHSKHYos3w" outputId="28a4069a-801f-4b82-a9b5-cbf4d53d70b3"
logistic = model_selection(x_train_, x_val, y_train_, y_val, LogisticRegression)
logistic
# + [markdown] id="nrG0hIZ8o6Yc"
# I will choose LGBMClassifier because it has a good F1 score and a low overfitting rate
# + [markdown] id="RRyPIjp8pAQz"
# **MODEL BUILDING AND TRAINING**
# + colab={"base_uri": "https://localhost:8080/"} id="5kPQafR5oxqT" outputId="87b5cce6-3429-4ee4-e12d-19706fbaf1e3"
model = LGBMClassifier()
model.fit(x_train, y_train)
# + [markdown] id="Hog5trVlpHDn"
# **PREDICTIONS**
# + colab={"base_uri": "https://localhost:8080/"} id="KWsBTG8cpF6w" outputId="7a12af06-3963-4027-d8f2-a2ea034c1cf1"
pred = model.predict(x_test)
pred
# + [markdown] id="rVfVFANSpLcE"
# **PRECISION, RECALL, ACCURACY AND AUC CHECK**
# + id="ep74EE6kpKuo"
from sklearn.metrics import roc_auc_score
# + colab={"base_uri": "https://localhost:8080/"} id="lii6oGN6pTfk" outputId="c04302ad-038d-43c4-b50f-413df245a4dd"
f1 = f1_score(y_test, pred)
f1*100
# + colab={"base_uri": "https://localhost:8080/", "height": 69} id="ajiPqZzipWwm" outputId="132e01b1-f21d-4983-d670-0cad8206b6fa"
classification_report(y_test, pred)
# + colab={"base_uri": "https://localhost:8080/"} id="Zn8xwBhFpZLM" outputId="762a9a5e-8aec-4473-a740-4b1b53b630da"
AUC = roc_auc_score(y_test, pred)
AUC*100
# + [markdown] id="CbhFyjm0pgEy"
# **CHECKING FOR OVERFITTING**
# + colab={"base_uri": "https://localhost:8080/"} id="UW33lFQepem-" outputId="c4f82e0b-1e9f-43cf-bba7-b48899564cc0"
train_score = model.score(x_train, y_train)
test_score = model.score(x_test, y_test)
print('Overfitting by:', train_score*100-test_score*100)
# + [markdown] id="3KGFIRQ8ptjp"
# The model is overfitting by less than 1%
# + id="Xqi4_iAxpr6p"
| Bank_Subscription_Prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ## Gaussian Mixture Models (GMM)
#
# <NAME> (2016, 2018)
#
# KDE centers each bin (or kernel rather) at each point. In a [**mixture model**](https://en.wikipedia.org/wiki/Mixture_model) we don't use a kernel for each data point, but rather we fit for the *locations of the kernels*--in addition to the width. So a mixture model is sort of a hybrid between a traditional (fixed bin location/size) histogram and KDE. Using lots of kernels (maybe even more than the BIC score suggests) may make sense if you just want to provide an accurate description of the data (as in density estimation). Using fewer kernels makes mixture models more like clustering (later today), where the suggestion is still to use many kernels in order to divide the sample into real clusters and "background".
# + [markdown] slideshow={"slide_type": "slide"}
# Gaussians are the most commonly used components for mixture models. So, the pdf is modeled by a sum of Gaussians:
# $$p(x) = \sum_{k=1}^N \alpha_k \mathscr{N}(x|\mu_k,\Sigma_k),$$
# where $\alpha_k$ are the "mixing coefficients" with $0\le \alpha_k \le 1$ and $\sum_{k=1}^N \alpha_k = 1$.
#
# We can solve for the parameters using maximum likelihood analysis as we have discussed previously.
# However, this can be complicated in multiple dimensions, requiring the use of [**Expectation Maximization (EM)**](https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm) methods.
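To make the formula above concrete, a mixture pdf can be evaluated directly as a weighted sum of component densities. The sketch below uses made-up 1-D parameters (the values of $\alpha_k$, $\mu_k$, $\sigma_k$ are arbitrary illustrative choices) and checks that the mixture still integrates to one:

```python
import numpy as np
from scipy.stats import norm

# Arbitrary illustrative parameters: the alphas must lie in [0, 1] and sum to 1
alphas = [0.3, 0.7]
mus = [0.0, 4.0]
sigmas = [1.0, 0.5]

def mixture_pdf(x):
    # p(x) = sum_k alpha_k * N(x | mu_k, sigma_k^2)
    return sum(a * norm(mu, s).pdf(x) for a, mu, s in zip(alphas, mus, sigmas))

# Because the alphas sum to 1, the mixture is itself a normalized pdf
grid = np.linspace(-10, 10, 10001)
total = np.trapz(mixture_pdf(grid), grid)
print(round(total, 3))  # -> 1.0
```

The same decomposition is what `GaussianMixture.score_samples` (total density) and `predict_proba` (per-component responsibilities) expose after a fit.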
# + [markdown] slideshow={"slide_type": "slide"}
# ### Expectation Maximization (ultra simplified version)
#
# (Note: all explanations of EM are far more complicated than seems necessary for our purposes, so here is my overly simplified explanation.)
#
# This may make more sense in terms of our earlier Bayesian analyses if we write this as
# $$p(z=k) = \alpha_k,$$
# and
# $$p(x|z=k) = \mathscr{N}(x|\mu_k,\Sigma_k),$$
# where $z$ is a "hidden" variable related to which "component" each point is assigned to.
#
# In the Expectation step, we hold $\mu_k, \Sigma_k$, and $\alpha_k$ fixed and compute the probability that each $x_i$ belongs to component $k$.
#
# In the Maximization step, we hold the probability of the components fixed and maximize $\mu_k, \Sigma_k,$ and $\alpha_k$.
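A bare-bones 1-D, two-component version of this E/M loop (an illustrative sketch on synthetic data, not scikit-learn's implementation) might look like:

```python
import numpy as np

rng = np.random.RandomState(0)
# Toy 1-D data drawn from two Gaussians (hypothetical example data)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

mu = np.array([-1.0, 1.0])    # initial means
sigma = np.array([1.0, 1.0])  # initial standard deviations
alpha = np.array([0.5, 0.5])  # initial mixing coefficients

for _ in range(50):
    # E step: responsibility of each component for each point
    dens = alpha * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M step: update parameters holding the responsibilities fixed
    Nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    alpha = Nk / len(x)

print(np.round(np.sort(mu), 1))  # means should land near -2 and 3
```

The fitted means converge toward the true component centers, mirroring the "pull" described in the animation below.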
# + [markdown] slideshow={"slide_type": "notes"}
# Note that $\alpha$ is the relative weight of each Gaussian component and not the probability of each point belonging to a specific component. (Can think of as a 1-D case with 2 Gaussian and 1 background components.)
# + [markdown] slideshow={"slide_type": "slide"}
# We can use the following animation to illustrate the process.
#
# We start with a 2-component GMM, where the initial components can be randomly determined.
#
# The points that are closest to the centroid of a component will be more probable under that distribution in the "E" step and will pull the centroid towards them in the "M" step. Iteration between the "E" and "M" step eventually leads to convergence.
#
# In this particular example, 3 components better describes the data and similarly converges. Note that the process is not that sensitive to how the components are first initialized. We pretty much get the same result in the end.
# + slideshow={"slide_type": "slide"}
from IPython.display import YouTubeVideo
YouTubeVideo("B36fzChfyGU")
# + [markdown] slideshow={"slide_type": "slide"}
# A typical call to the [Gaussian Mixture Model](http://scikit-learn.org/stable/modules/mixture.html) algorithm looks like this:
# + slideshow={"slide_type": "slide"}
# Execute this cell
import numpy as np
from sklearn.mixture import GaussianMixture
X = np.random.normal(size=(1000, 2))  # 1000 points in 2D
gmm = GaussianMixture(3)              # three components
gmm.fit(X)
log_dens = gmm.score_samples(X)       # per-point log-likelihoods
BIC = gmm.bic(X)
# + [markdown] slideshow={"slide_type": "slide"}
# Let's start with the 1-D example given using eruption data from "Old Faithful" geyser at Yellowstone National Park.
# [http://www.stat.cmu.edu/~larry/all-of-statistics/=data/faithful.dat](http://www.stat.cmu.edu/~larry/all-of-statistics/=data/faithful.dat).
# + slideshow={"slide_type": "slide"}
#eruptions: Eruption time in mins
#waiting: Waiting time to next eruption
import pandas as pd
df = pd.read_csv('../data/faithful.dat', delim_whitespace=True)
df.head()
# + [markdown] slideshow={"slide_type": "slide"}
# Make two "fancy" histograms illustrating the distribution of `x=df['eruptions']` and `y=df['waiting']` times. Use `bins="freedman"` and `histtype="step"`.
# + slideshow={"slide_type": "slide"}
from astroML.plotting import hist as fancyhist
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(14, 7))
ax = fig.add_subplot(121)
fancyhist(df['eruptions'],bins='freedman',histtype='step')
plt.xlabel('Eruptions')
plt.ylabel('N')
ax = fig.add_subplot(122)
fancyhist(df['waiting'],bins='freedman',histtype='step')
plt.xlabel('Waiting')
plt.ylabel('N')
# + slideshow={"slide_type": "slide"}
#Fit Gaussian Mixtures, first in 1-D
from sklearn.mixture import GaussianMixture
#First fit Eruptions
gmm1 = GaussianMixture(n_components=2) # 2-component gaussian mixture model
gmm1.fit(df['eruptions'].values[:, None]) # Fit step
xgrid1 = np.linspace(0, 8, 1000) # Make evaluation grid
logprob1 = gmm1.score_samples(xgrid1[:,None]) # Compute log likelihoods on that grid
pdf1 = np.exp(logprob1)
resp1 = gmm1.predict_proba(xgrid1[:,None])
pdf_individual1 = resp1 * pdf1[:, np.newaxis] # Compute posterior probabilities for each component
# -
#Then fit waiting
gmm2 = GaussianMixture(n_components=2)
gmm2.fit(df['waiting'].values[:, None])
xgrid2 = np.linspace(30, 120, 1000)
logprob2 = gmm2.score_samples(xgrid2[:,None])
pdf2 = np.exp(logprob2)
resp2 = gmm2.predict_proba(xgrid2[:,None])
pdf_individual2 = resp2 * pdf2[:, np.newaxis]
# + slideshow={"slide_type": "slide"}
#Make plots
fig = plt.figure(figsize=(14, 7))
ax = fig.add_subplot(121)
plt.hist(df['eruptions'], bins=6, density=True, histtype='step')
plt.plot(xgrid1, pdf_individual1, '--', color='blue')
plt.plot(xgrid1, pdf1, '-', color='gray')
plt.xlabel("Eruptions")
ax = fig.add_subplot(122)
plt.hist(df['waiting'], bins=9, density=True, histtype='step')
plt.plot(xgrid2, pdf_individual2, '--', color='blue')
plt.plot(xgrid2, pdf2, '-', color='gray')
plt.xlabel("Waiting")
# + [markdown] slideshow={"slide_type": "slide"}
# Let's now do a more complicated 1-D example (Ivezic, Figure 6.8), which compares a Mixture Model to KDE.
# [Note that the version at astroML.org has some bugs!]
# + slideshow={"slide_type": "slide"}
# Execute this cell
# Ivezic, Figure 6.8
# Author: <NAME>
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
# %matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from scipy import stats
from astroML.plotting import hist
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity
#------------------------------------------------------------
# Generate our data: a mix of several Cauchy distributions
# this is the same data used in the Bayesian Blocks figure
np.random.seed(0)
N = 10000
mu_gamma_f = [(5, 1.0, 0.1),
              (7, 0.5, 0.5),
              (9, 0.1, 0.1),
              (12, 0.5, 0.2),
              (14, 1.0, 0.1)]
true_pdf = lambda x: sum([f * stats.cauchy(mu, gamma).pdf(x)
                          for (mu, gamma, f) in mu_gamma_f])
x = np.concatenate([stats.cauchy(mu, gamma).rvs(int(f * N))
                    for (mu, gamma, f) in mu_gamma_f])
np.random.shuffle(x)
x = x[x > -10]
x = x[x < 30]
#------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(10, 10))
fig.subplots_adjust(bottom=0.08, top=0.95, right=0.95, hspace=0.1)
N_values = (500, 5000)
subplots = (211, 212)
k_values = (10, 100)
for N, k, subplot in zip(N_values, k_values, subplots):
    ax = fig.add_subplot(subplot)
    xN = x[:N]
    t = np.linspace(-10, 30, 1000)

    kde = KernelDensity(bandwidth=0.1, kernel='gaussian')
    kde.fit(xN[:, None])
    dens_kde = np.exp(kde.score_samples(t[:, None]))

    # Compute density via Gaussian Mixtures
    # we'll try several numbers of clusters
    n_components = np.arange(3, 16)
    gmms = [GaussianMixture(n_components=n).fit(xN[:, None]) for n in n_components]
    BICs = [gmm.bic(xN[:, None]) for gmm in gmms]
    i_min = np.argmin(BICs)
    logprob = gmms[i_min].score_samples(t[:, None])

    # plot the results
    ax.plot(t, true_pdf(t), ':', color='black', zorder=3,
            label="Generating Distribution")
    ax.plot(xN, -0.005 * np.ones(len(xN)), '|k', lw=1.5)
    ax.plot(t, np.exp(logprob), '-', color='gray',
            label="Mixture Model\n(%i components)" % n_components[i_min])
    ax.plot(t, dens_kde, '-', color='black', zorder=3,
            label="Kernel Density $(h=0.1)$")

    # label the plot
    ax.text(0.02, 0.95, "%i points" % N, ha='left', va='top',
            transform=ax.transAxes)
    ax.set_ylabel('$p(x)$')
    ax.legend(loc='upper right')

    if subplot == 212:
        ax.set_xlabel('$x$')

    ax.set_xlim(0, 20)
    ax.set_ylim(-0.01, 0.4001)
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# Let's plot the BIC values and see why it picked that many components.
# + slideshow={"slide_type": "slide"}
fig = plt.figure(figsize=(10, 5))
plt.scatter(n_components,BICs)
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# What do the individual components look like? Make a plot of those. Careful with the shapes of the arrays!
# + slideshow={"slide_type": "slide"}
# See Ivezic, Figure 4.2 for help: http://www.astroml.org/book_figures/chapter4/fig_GMM_1D.html
fig = plt.figure(figsize=(10, 5))
print(len(gmms[10].weights_))
logprob = gmms[10].score_samples(t[:,None])
pdf = np.exp(logprob) # Sum of the individual component pdf
resp = gmms[10].predict_proba(t[:,None]) # Array of "responsibilities" for each component
plt.plot(t,resp*pdf[:,None])
plt.xlim((0,20))
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# Now let's look at the Old Faithful data again, but this time in 2-D.
# + slideshow={"slide_type": "slide"}
fig = plt.figure(figsize=(10, 5))
plt.scatter(df['eruptions'],df['waiting'])
plt.xlabel('Eruptions')
plt.ylabel('Waiting')
plt.xlim([1.5,5.3])
plt.ylim([40,100])
# + [markdown] slideshow={"slide_type": "slide"}
# Now we'll fit both features at the same time (i.e., the $x$ and $y$ axes above). Note that Scikit-Learn can handle Pandas DataFrames without further conversion.
# + slideshow={"slide_type": "slide"}
gmm3 = GaussianMixture(n_components=2)
gmm3.fit(df[['eruptions','waiting']])
# + [markdown] slideshow={"slide_type": "slide"}
# Once the components have been fit, we can plot the location of the centroids and the "error" ellipses.
# + slideshow={"slide_type": "slide"}
from astroML.plotting.tools import draw_ellipse
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(111)
plt.scatter(df['eruptions'],df['waiting'])
plt.xlabel('Eruptions')
plt.ylabel('Waiting')
plt.xlim([1.5,5.3])
plt.ylim([40,100])
ax.scatter(gmm3.means_[:,0], gmm3.means_[:,1], marker='s', c='red', s=80)
for mu, C, w in zip(gmm3.means_, gmm3.covariances_, gmm3.weights_):
    draw_ellipse(mu, 2*C, scales=[1], ax=ax, fc='none', ec='k')  # 2-sigma ellipses for each component
# + [markdown] slideshow={"slide_type": "slide"}
# Ivezic, Figure 6.6 shows another 2-D example. In the first panel, we have the raw data. In the second panel we have a density plot (essentially a 2-D histogram). We then try to represent the data with a series of Gaussians. We allow up to 14 Gaussians and use the AIC/BIC to determine the best choice for this number. This is shown in the third panel. Finally, the fourth panel shows the chosen Gaussians with their centroids and 1-$\sigma$ contours.
#
# In this case 7 components are required for the best fit. While it looks like we could do a pretty good job with just 2 components, there does appear to be some "background" at a high enough level to justify further components.
# + slideshow={"slide_type": "slide"}
# Execute this cell
# Ivezic, Figure 6.6
# Author: <NAME>
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
# %matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
from sklearn.mixture import GaussianMixture
from astroML.datasets import fetch_sdss_sspp
from astroML.decorators import pickle_results
from astroML.plotting.tools import draw_ellipse
#------------------------------------------------------------
# Get the Segue Stellar Parameters Pipeline data
data = fetch_sdss_sspp(cleaned=True)
# Note how X was created from two columns of data
X = np.vstack([data['FeH'], data['alphFe']]).T
# truncate dataset for speed
X = X[::5]
#------------------------------------------------------------
# Compute GMM models & AIC/BIC
N = np.arange(1, 14)
#@pickle_results("GMM_metallicity.pkl")
def compute_GMM(N, covariance_type='full', max_iter=1000):
    models = [None for n in N]
    for i in range(len(N)):
        models[i] = GaussianMixture(n_components=N[i], max_iter=max_iter,
                                    covariance_type=covariance_type)
        models[i].fit(X)
    return models
models = compute_GMM(N)
AIC = [m.aic(X) for m in models]
BIC = [m.bic(X) for m in models]
i_best = np.argmin(BIC)
gmm_best = models[i_best]
print("best fit converged:", gmm_best.converged_)
print("BIC: n_components = %i" % N[i_best])
#------------------------------------------------------------
# compute 2D density
FeH_bins = 51
alphFe_bins = 51
H, FeH_bins, alphFe_bins = np.histogram2d(data['FeH'], data['alphFe'], (FeH_bins, alphFe_bins))
Xgrid = np.array(list(map(np.ravel,
                          np.meshgrid(0.5 * (FeH_bins[:-1] + FeH_bins[1:]),
                                      0.5 * (alphFe_bins[:-1] + alphFe_bins[1:]))))).T
log_dens = gmm_best.score_samples(Xgrid).reshape((51, 51))
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(12, 5))
fig.subplots_adjust(wspace=0.45, bottom=0.25, top=0.9, left=0.1, right=0.97)
# plot data
ax = fig.add_subplot(141)
ax.scatter(data['FeH'][::10],data['alphFe'][::10],marker=".",color='k',edgecolors='None')
ax.set_xlabel(r'$\rm [Fe/H]$')
ax.set_ylabel(r'$\rm [\alpha/Fe]$')
ax.xaxis.set_major_locator(plt.MultipleLocator(0.3))
ax.set_xlim(-1.101, 0.101)
ax.text(0.93, 0.93, "Input",
va='top', ha='right', transform=ax.transAxes)
# plot density
ax = fig.add_subplot(142)
ax.imshow(H.T, origin='lower', interpolation='nearest', aspect='auto',
extent=[FeH_bins[0], FeH_bins[-1],
alphFe_bins[0], alphFe_bins[-1]],
cmap=plt.cm.binary)
ax.set_xlabel(r'$\rm [Fe/H]$')
ax.set_ylabel(r'$\rm [\alpha/Fe]$')
ax.xaxis.set_major_locator(plt.MultipleLocator(0.3))
ax.set_xlim(-1.101, 0.101)
ax.text(0.93, 0.93, "Density",
va='top', ha='right', transform=ax.transAxes)
# plot AIC/BIC
ax = fig.add_subplot(143)
ax.plot(N, AIC, '-k', label='AIC')
ax.plot(N, BIC, ':k', label='BIC')
ax.legend(loc=1)
ax.set_xlabel('N components')
plt.setp(ax.get_yticklabels(), fontsize=7)
# plot best configurations for AIC and BIC
ax = fig.add_subplot(144)
ax.imshow(np.exp(log_dens),
origin='lower', interpolation='nearest', aspect='auto',
extent=[FeH_bins[0], FeH_bins[-1],
alphFe_bins[0], alphFe_bins[-1]],
cmap=plt.cm.binary)
ax.scatter(gmm_best.means_[:, 0], gmm_best.means_[:, 1], c='w')
for mu, C, w in zip(gmm_best.means_, gmm_best.covariances_, gmm_best.weights_):
    draw_ellipse(mu, C, scales=[1], ax=ax, fc='none', ec='k')
ax.text(0.93, 0.93, "Converged",
va='top', ha='right', transform=ax.transAxes)
ax.set_xlim(-1.101, 0.101)
ax.set_ylim(alphFe_bins[0], alphFe_bins[-1])
ax.xaxis.set_major_locator(plt.MultipleLocator(0.3))
ax.set_xlabel(r'$\rm [Fe/H]$')
ax.set_ylabel(r'$\rm [\alpha/Fe]$')
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# That said, I'd say that there are *too* many components here. So, I'd be inclined to explore this a bit further if it were my data.
# + [markdown] slideshow={"slide_type": "notes"}
# Talk about how to use this to do outlier finding. Convolve with errors of unknown object.
# + [markdown] slideshow={"slide_type": "slide"}
# Lastly, let's look at a 2-D case where we are using GMM more to characterize the data than to find clusters.
# + slideshow={"slide_type": "slide"}
# Execute this cell
# Ivezic, Figure 6.7
# Author: <NAME>
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.mixture import GaussianMixture
from astroML.datasets import fetch_great_wall
from astroML.decorators import pickle_results
#------------------------------------------------------------
# load great wall data
X = fetch_great_wall()
#------------------------------------------------------------
# Create a function which will save the results to a pickle file
# for large number of clusters, computation will take a long time!
#@pickle_results('great_wall_GMM.pkl')
def compute_GMM(n_clusters, max_iter=1000, reg_covar=3, covariance_type='full'):
    # GaussianMixture's reg_covar plays the role of the old GMM min_covar floor
    clf = GaussianMixture(n_components=n_clusters, covariance_type=covariance_type,
                          max_iter=max_iter, reg_covar=reg_covar)
    clf.fit(X)
    print("converged:", clf.converged_)
    return clf
#------------------------------------------------------------
# Compute a grid on which to evaluate the result
Nx = 100
Ny = 250
xmin, xmax = (-375, -175)
ymin, ymax = (-300, 200)
Xgrid = np.vstack(list(map(np.ravel, np.meshgrid(np.linspace(xmin, xmax, Nx),
                                                 np.linspace(ymin, ymax, Ny))))).T
#------------------------------------------------------------
# Compute the results
#
# we'll use 100 clusters. In practice, one should cross-validate
# with AIC and BIC to settle on the correct number of clusters.
clf = compute_GMM(n_clusters=100)
log_dens = clf.score_samples(Xgrid).reshape(Ny, Nx)
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(10, 5))
fig.subplots_adjust(hspace=0, left=0.08, right=0.95, bottom=0.13, top=0.9)
ax = fig.add_subplot(211, aspect='equal')
ax.scatter(X[:, 1], X[:, 0], s=1, lw=0, c='k')
ax.set_xlim(ymin, ymax)
ax.set_ylim(xmin, xmax)
ax.xaxis.set_major_formatter(plt.NullFormatter())
plt.ylabel(r'$x\ {\rm (Mpc)}$')
ax = fig.add_subplot(212, aspect='equal')
ax.imshow(np.exp(log_dens.T), origin='lower', cmap=plt.cm.binary,
extent=[ymin, ymax, xmin, xmax])
ax.set_xlabel(r'$y\ {\rm (Mpc)}$')
ax.set_ylabel(r'$x\ {\rm (Mpc)}$')
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# Note that this is very different than the non-parametric density estimates that we did last time in that the GMM isn't doing that great of a job of matching the distribution. However, the advantage is that we now have a *model*. This model can be stored very compactly with just a few numbers, unlike the KDE or KNN maps which require a floating point number for each grid point.
#
# One thing that you might imagine doing with this is subtracting the model from the data and looking for interesting things among the residuals.
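As a rough sketch of that residual idea (on synthetic data, not the great wall sample): fit a deliberately too-simple mixture, evaluate its density on a grid, and subtract it from a histogram density estimate of the data. Structure the model missed then shows up as a positive residual:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(42)
# Broad "background" plus a small, narrow bump at x = 6
data = np.concatenate([rng.normal(0, 3, 5000), rng.normal(6, 0.2, 100)])

# A single-component model is too simple to capture the bump
gmm = GaussianMixture(n_components=1, random_state=0).fit(data[:, None])

counts, edges = np.histogram(data, bins=100, density=True)   # empirical density
centers = 0.5 * (edges[:-1] + edges[1:])
model = np.exp(gmm.score_samples(centers[:, None]))          # model density

residual = counts - model
print(round(centers[np.argmax(residual)], 1))  # largest excess is expected near x = 6
```

With real data, the bins with the largest residuals are natural places to go look for "interesting things".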
| notebooks/MixtureModel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Data-X Mindful Project
# ### Part 2 Data analysis and modeling
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import *
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegressionCV
from IPython.display import Image, display
import random
# -
# ### Read data files
X = pd.read_csv('X_df.csv').drop('Unnamed: 0',axis=1)
Y = pd.read_csv('Y_df.csv').drop('Unnamed: 0',axis=1)
# ### Normalization
scaler = MinMaxScaler()
norm_X = scaler.fit_transform(X)
new_Y = Y.Depressed
X_train, X_test, y_train, y_test = train_test_split(norm_X, new_Y, test_size=0.1)
# ### Initial trial with different classifiers
# LogisticRegressionCV
clf = LogisticRegressionCV(penalty = 'l2', solver='liblinear', multi_class='ovr').fit(X_train, y_train)
print(clf.score(X_train, y_train))
print(clf.score(X_test, y_test))
#y_pred_train = clf.predict(X_train)
#y_pred_test = clf.predict(X_test)
#clf.predict_proba(X_test)
#recall_score(y_test, y_pred_test, average='micro')
#precision_score(y_test, y_pred_test, average='micro')
#confusion_matrix(y_train, y_pred_train)
#confusion_matrix(y_test, y_pred_test)
# Random Forest
clf = RandomForestClassifier(n_estimators=100, max_depth=3,
random_state=80)
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))
print(clf.score(X_test, y_test))
# Decision Tree
clf = DecisionTreeClassifier(max_depth=3,max_leaf_nodes=3,min_samples_leaf=1)
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))
print(clf.score(X_test, y_test))
#cross_val_score(clf, X_test, y_test, cv=10)
# ### Gradient Boost
# +
ENTITY_TYPE = "Gradient Boost"
clf = GradientBoostingClassifier(n_estimators=15, max_depth=5)
clf.fit(X_train, y_train)
print("Train: ", clf.score(X_train, y_train))
print("Test: ", clf.score(X_test, y_test))
print("Features used:", len(clf.feature_importances_))
print("-----")
importance_pairs = zip(X.columns, clf.feature_importances_)
sorted_importance_pairs = sorted(importance_pairs, key=lambda k: k[1], reverse=True)
for k, v in sorted_importance_pairs[:20]:
    print(k, "\t", v, "\n")
# -
# Feature Importance
feat_imp = pd.Series(clf.feature_importances_, X.columns).sort_values(ascending=False).head(20)
feat_imp.plot(kind='bar', title='Feature Importances for ' + ENTITY_TYPE)
plt.ylabel('Feature Importance Scores' + " (" + ENTITY_TYPE + ")")
plt.tight_layout()
plt.show()
# +
# Recall and Precision
recall = recall_score(y_test, clf.predict(X_test))
precision = precision_score(y_test, clf.predict(X_test))
print('recall: ' + str(recall))
print('precision: ' +str(precision))
print("F1-Score: ", 2 * recall * precision / (recall + precision))
# -
# ### xgboost
# +
ENTITY_TYPE = "xgboost"
from xgboost import XGBClassifier
import xgboost
test_score = 0
precision, recall = 0,0
n = 1
for _ in range(n):
    X_train, X_test, y_train, y_test = train_test_split(X, new_Y, test_size=0.1)
    clf = XGBClassifier(n_estimators=20, max_depth=5, eval_metric='aucpr')
    clf.fit(X_train, y_train)
    test_score += clf.score(X_test, y_test)
    recall += recall_score(y_test, clf.predict(X_test))
    precision += precision_score(y_test, clf.predict(X_test))

print("Train: ", clf.score(X_train, y_train))
print("Test: ", clf.score(X_test, y_test))
print("Features used:", len(clf.feature_importances_))
print("precision: ", precision/n)
print("recall: ", recall/n)
print("-----")
print(test_score/n)
fig, ax = plt.subplots(figsize=(16,16))
xgboost.plot_importance(clf,ax=ax)
plt.show()
# +
# Feature Importance
importance_pairs = zip(X_train.columns, clf.feature_importances_)
sorted_importance_pairs = sorted(importance_pairs, key=lambda k: k[1], reverse=True)
for k, v in sorted_importance_pairs:
    print(k, "\t", v, "\n")
feat_imp = pd.Series(clf.feature_importances_, X_train.columns).sort_values(ascending=False)
plt.figure(figsize=(15,10))
feat_imp.plot(kind='bar', title='Feature Importances for ' + ENTITY_TYPE)
plt.ylabel('Feature Importance Scores' + " (" + ENTITY_TYPE + ")")
plt.tight_layout()
plt.show()
# +
# Recall and Precision
recall = recall_score(y_test, clf.predict(X_test))
precision = precision_score(y_test, clf.predict(X_test))
print('recall: ' + str(recall))
print('precision: ' +str(precision))
print("F1-Score: ", 2 * recall * precision / (recall + precision))
# -
X_results = X.copy()
X_results["results"] = clf.predict(X)
# Average features
X_results_avg = X_results.groupby("results").mean()
X_results_avg.columns
X_results_avg.loc[:,["Fruit", "Water", "F_Average", "F_None", "F_Decline", "Healthy", "Unhealthy","Dry_mouth", "Dry_skin"]]
# Look at correlation between some features
features = ["Fruit", "Water", "F_Average", "F_None", "F_Decline", "Healthy", "Unhealthy","Dry_mouth", "Dry_skin"]
correlations = X.loc[:,features].corr()
sns.heatmap(correlations)
plt.show()
# ### Example Person
example_person = X.iloc[[15],:]
example_person
# Get average "non-Depressed", and this example person
X_results_avg.iloc[[0]].reset_index().drop("results", axis=1).append(example_person)
# Get difference between average "non-Depressed" person, and this example person
example_difference = X_results_avg.iloc[[0]].reset_index().drop("results", axis=1).append(example_person).diff().iloc[[1]]
example_difference
# Weights and difference
weights_and_diff = pd.DataFrame(data=[feat_imp.values], columns=feat_imp.index).append(example_difference, sort=True)
weights_and_diff
weights_and_diff.iloc[0].multiply(weights_and_diff.iloc[1]).abs().sort_values(ascending=False).head(10)
# ### Sample output and response
# +
# Sample responses
responses = {
"Relaxed": "Mindfulness and meditation can really help overcome stressful times.",
"Hobby": "Find time for the things that make you happy! Reading, sports, music… Having a hobby really increases your quality of life. ",
"Sweat": "Do some intense exercise! Releasing some stress is always a good idea. ",
"Volunteering": "Have you considered engaging in some volunteering? Even the smallest effort can have huge impact!",
"SP_Late": "Watch out for your sleep habits! Having consistent sleep schedules is vital for getting a good night sleep. ",
"Snack": "Stop snacking all day! Comfort food is not the answer, eat a proper meal instead – I’m sure your cooking abilities are not that bad… 😉",
"Fruit": "Are you getting your daily vitamins? Fruit is a very important part of our diet, and it’s delicious! ",
"Water": "Drink some more water! We are 60% made of water, don’t let that percentage drop 😉",
"Lonely": "It’s normal to feel lonely sometimes, but it’s important to remember that there ARE people who care about us, and to keep in touch with them!",
"F_Average": "Maybe your food choices are not completely unhealthy, but don’t you think you could do better? Food impacts our mood more than you may think!",
"W_Late": "Get out of bed and take on the world! Waking up early and feeling productive is very comforting 🙂",
"Anxious": "Sometimes we are overwhelmed with projects, work, tasks… However, our mindset is very important in overcoming those situations. Tell yourself it’s going to be OK, you can do it!",
"Occupation": "Having an occupation makes us feel useful and is a self-esteem boost! Whether it’s your job, a class project, or housekeeping 😉",
"Energized": "It is very important to feel motivated and with energy! Every morning, think about the things that make you feel happy, excited and give you energy to make it successfully through the day!",
"W_Time": "Waking up on time and being prepared for all the tasks and commitments for the day is very comforting 🙂",
"Talk_2F": "How many friends do you have? And how many of them have you talk to recently? Make sure to keep in touch with the people that are important to us, it really makes us happier.",
"Average": "Watch out for your sleep habits! Having consistent sleep schedules, and relaxing before going to bed, is vital to get a good night sleep.",
"Oil": "Stop eating oily food! Comfort food is not the answer, if you give healthy food a try I’m sure it will make you feel better 😉",
"Sore": "Do some exercise! Is there a bigger feeling of accomplishment that being tired after an intense workout?",
"Fried": "Stop eating fried food! Comfort food is not the answer, if you give healthy food a try I’m sure it will make you feel better 😉",
"S_Late": "If only the day had more than 24 hours! However, staying up until late is not going to change that. Why don’t you try to go to sleep a little bit earlier? You’ll feel well rested the next day 😉",
"Veggies": "Veggies might not be your favourite food, I get that. But how good does it make us feel when we eat healthy and clean?",
"Thankful": "It is important to remember every day how lucky we are. Why don’t you try each morning to think about three things that you are grateful for?",
"Excited": "It is very important to feel motivated and excited! Every morning, think about the things that make you feel happy, excited and give you energy to make it successfully through the day!",
"Exercise": "Do some exercise! Releasing some stress is always a good idea.",
"Family": "Becoming a teenager, moving to a different city (or country!), always makes us become less attached to our family. Call your mom more often, she’ll always be there to help you!",
"Sugar": "Stop eating sugary food! Comfort food is not the answer, if you give healthy food a try I’m sure it will make you feel better 😉",
"Peaceful": "Mindfulness and meditation can really help overcome stressful times.",
"Vitamin": "Get some vitamins! It could really boost your defenses and make you feel better 🙂",
"SP_Tired": "Watch out for your sleep habits! Having consistent sleep schedules is vital for getting a good night sleep.",
"Meal": "Why don’t you eat a proper meal instead of snacking? I’m sure your cooking abilities are not that bad… 😉"
}
# +
##############################
# Example Person (2nd Time)
# RUN THIS CELL TO HAVE A GOOD TIME
##############################
example_person = X.iloc[[random.randint(0,len(X)-1)]]
if clf.predict(example_person.loc[:,:]) == 1:
display(Image("bad.png"))
example_diff = X_results_avg.iloc[[0]].reset_index().drop("results", axis=1).append(example_person).diff().iloc[1]
weights_and_diff = pd.DataFrame(data=[feat_imp.values], columns=feat_imp.index).append(example_diff, sort=True)
top_10_features = weights_and_diff.iloc[0].multiply(weights_and_diff.iloc[1]).abs().sort_values(ascending=False).head(10)
i = 1
for feat in top_10_features.index:
if feat in responses:
print(F"{i}) {responses[feat]}")
i += 1
else:
display(Image("good.png"))
# -
# ### Cosine Similarity Tests
AVG_POS = X_results_avg.loc[1, :]
AVG_NEG = X_results_avg.loc[0, :]
def dot(A,B):
return (sum(a*b for a,b in zip(A,B)))
def cosine_similarity(a,b):
    # The +1 in the denominator guards against division by zero for all-zero vectors,
    # so this is a smoothed variant of the standard cosine similarity
    return dot(a,b) / (1+( (dot(a,a) **.5) * (dot(b,b) ** .5) ))
def cosine_compare_pos(row):
return cosine_similarity(row, AVG_POS)
def cosine_compare_neg(row):
return cosine_similarity(row, AVG_NEG)
def cosine_ratio_pos(row):
return cosine_similarity(row, AVG_POS) / (cosine_similarity(row, AVG_NEG) + cosine_similarity(row, AVG_POS))
X_results[X_results["results"] == 0].drop("results", axis=1).apply(cosine_compare_neg, axis=1).mean()
X_results[X_results["results"] == 1].drop("results", axis=1).apply(cosine_compare_pos, axis=1).mean()
cosine_similarity(X.loc[10, :], AVG_NEG) / (cosine_similarity(X.loc[10, :], AVG_NEG) + cosine_similarity(X.loc[10, :], AVG_POS))
# +
cos_sims = []
for i in range(len(X)):
example_person = X.loc[i, :]
pos_score = cosine_similarity(example_person, AVG_POS) / (cosine_similarity(example_person, AVG_NEG) + cosine_similarity(example_person, AVG_POS))
# print(pos_score)
cos_sims.append(pos_score)
import numpy as np
print(F" max and min: {max(cos_sims), min(cos_sims)}")
print(F" One standard deviation is: {np.sqrt(np.var(cos_sims))}")
# -
X_results[X_results["results"] == 0].drop("results", axis=1).apply(cosine_ratio_pos, axis=1).mean()
X_results[X_results["results"] == 1].drop("results", axis=1).apply(cosine_ratio_pos, axis=1).mean()
| Data-X Mindful Part2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Motif Refinement
# ---
# Draft motifs identified from an IGR can be refined in this notebook by performing iterative searches that take predicted secondary structure into account.
#
# **Recommended:** Restart the kernel every time you start working on a new set of data.
# ## Step 0. Import Necessary Code
# ---
# Python provides basic functionality, but for more advanced operations we need to import additional Python packages. The last two lines (from src…) import code that is specific to DIMPL.
#
# +
# %cd /home/jovyan/work
import sys
import os
import shutil
import re
import pandas as pd
import glob
import tarfile
from Bio import AlignIO, SeqIO
import subprocess
import pandas as pd
import ipywidgets
from src.data.genome_context import get_all_images, run_rscape, tar_subdir_members, build_target_coords
from src.data.command_build import build_infernal_commands
# -
# ## Step 1. Set up folders and drop in .sto files
# ---
# Create a new directory under data/motif_collections.
#
# Create a subdirectory in that new directory with a name for the analysis step (e.g. infernal_step2). Folder/subfolder architecture is required, but folder names can be whatever you want.
#
# Finally, create a directory in the step directory named "motifs". Place all the .sto files of interest in that directory.
#
# Change the variables in the codeblock below to reflect the collection_name directory and the step directory you just created.
# Change the variables below
collection_name = "riboswitch_candidates"
step_name = "infernal_step2"
# +
# Copy each .sto file found in the /motifs folder into its own motif subfolder
data_dir = "data/motif_collections/{}".format(collection_name)
step_dir = "{}/{}".format(data_dir, step_name)
search_dir = "{}/motifs".format(step_dir)
sto_files = glob.glob("{}/*.sto".format(search_dir))
for sto_file in sto_files:
motif_name = sto_file[sto_file.rfind('/')+1:-4]
motif_folder = "{}/{}".format(step_dir, motif_name)
if not os.path.exists(motif_folder):
os.mkdir(motif_folder)
shutil.copyfile(sto_file, "{}/{}.sto".format(motif_folder, motif_name))
# Build the collection of commands required for infernal
build_infernal_commands(data_dir, step_name)
tarfilename = "{}_{}".format(collection_name, step_name)
with tarfile.open("data/export/{}.tar.gz".format(tarfilename), "w:gz") as tar:
tar.add(data_dir, arcname=tarfilename)
# -
# ## Step 2. Run code on cluster
# ---
# Move Tarfile to your high-performance cluster. Tarfile is located in data/export.
# If you typically log into cluster using `ssh`, you'll be able to use `scp` to do the file transfer. Example command: `scp /path/to/dimpl/data/export/tarfilename.tar.gz <EMAIL>:~/project/wherever`.
#
# Untar the uploaded file by using the command `tar xzvf tarfilename`. Remove tarfile using `rm tarfilename.tar.gz` (unpacked files will remain).
#
# Run the script named stepname_run.sh in the newly extracted folder by entering `./stepname_run.sh` into the terminal. If you get a permissions error, run `chmod +x stepname_run.sh` and then try again.
#
# You can monitor completion of the analysis using the command `squeue -u username` to see running tasks. (PD = pending, R = running)
# ## Step 3. Compress results
# ---
# When analysis is complete, compress the results of the analysis on the computational cluster using the command:
#
# `./make_tar.sh`
#
# ## Step 4. Move results from cluster to dimpl
# ---
# Place the downloaded tar.gz file in the data/import directory of DIMPL's file architecture.
#
# Example command: `scp <EMAIL>:~/path/to/tarfile/collection_name_step_name.done.tar.gz /path/to/dimpl/data/import`.
#
# ## Step 5. Prep data import
# ---
# Change the variables `import_tar_name`, `import_collection_name` and `import_step_name` below to reflect the data being imported.
# +
# Change the variables here to reflect data being imported
import_tar_name = "data/import/riboswitch_candidates_infernal_step2.done.tar.gz"
import_collection_name = "riboswitch_candidates"
import_step_name = "infernal_step2"
# Unpack the files
untar_dir = "data/motif_collections/{}".format(import_collection_name)
with tarfile.open(import_tar_name, "r:gz") as tar:
tar.extractall(path=untar_dir, members=tar_subdir_members(tar, import_tar_name))
print("\nTarfile created:",tar.name)
# -
# ### Collect all the potential motifs in the imported directory and generate selection dropdown
data_dir = "{}/{}".format(untar_dir, import_step_name)
tblout_files = glob.glob("{}/*/*.tblout".format(data_dir))
motif_list = list(file.split('/')[-1][:-10] for file in tblout_files)
motif_dropdown = ipywidgets.Dropdown(options=motif_list, description="Motif Name", layout={'width': 'max-content'})
motif_dropdown
# ## Step 6. Analysis
# ---
# After selecting a motif from the dropdown above, run the cells starting at this codeblock to generate the analysis data.
# +
# Saves the next motif in the list of motifs to motif_name
motif_name=motif_dropdown.value
outdir="{}/{}".format(data_dir,motif_name)
sto_filename = "{}/{}.cm.align.sto".format(outdir, motif_name)
dedupe_filename = "{}/{}.dedupe.fasta".format(outdir, motif_name)
sample_filename = "{}/{}.sample.fasta".format(outdir, motif_name)
results_filename ="{}/{}.cm.tblout".format(outdir, motif_name)
# Read in the results from the .tblout file
results_df = pd.read_csv(results_filename, skiprows=2, skipfooter=10, sep='\s+', engine='python', header=None,
names=['target_name', 'target_accession', 'query_name', 'query_accession', 'mdl', 'mdl_from', 'mdl_to',
'seq_from', 'seq_to', 'strand', 'trunc', 'pass', 'gc', 'bias', 'score', 'e_value', 'inc','description'])
# Remove duplicate accession numbers caused by duplicates in the searched database
results_df.drop_duplicates(inplace=True)
# Correct coordinates taking IGR database into account
results_df['target_coords'] = results_df.apply(lambda row: build_target_coords(row['target_name'], row['seq_from'], row['seq_to']), axis=1)
results_df.drop(columns=['target_accession', 'query_accession', 'inc', 'description', 'query_name'], inplace=True)
# Remove duplicate entries from the database
dedupe_fasta = list(SeqIO.parse(dedupe_filename, 'fasta'))
# Remove nn| prefix (if found) before adding id to list
dedupe_id_list = [(re.sub(r'^[0-9]+\|','',record.id)) for record in dedupe_fasta]
deduped_results_df = results_df[results_df['target_coords'].isin(dedupe_id_list)].copy()
# Output results
print("Results for: {}".format(motif_name))
print("Number of Unique Hits: {}".format(len(dedupe_id_list)))
# Display up to 100 rows
with pd.option_context('display.max_rows', 100):
display(deduped_results_df[['e_value', 'target_coords', 'gc', 'bias', 'score', 'strand', ]])
# -
# ### 6.1 RNAcode Analysis for Possible Protein Coding Regions
#
# Regions with a p-value < 0.05 are indicative of protein coding regions. SVG images of any such regions will be placed in the motif subfolder.
# +
clustal_filename = "{}/{}/{}.sample.clustal".format(data_dir,motif_name, motif_name)
sample_fasta = list(SeqIO.parse(sample_filename, 'fasta'))
sample_id_list = [record.id for record in sample_fasta]
sto_records = list(SeqIO.parse(sto_filename, 'stockholm'))
sampled_sto_records = [record for record in sto_records if record.id in sample_id_list]
alignment = sampled_sto_records[:200]
with open(clustal_filename, 'w') as clustal_file:
SeqIO.write(alignment, clustal_file, "clustal")
output = subprocess.run(['RNAcode', clustal_filename], capture_output=True)
print(output.stdout.decode())
print(output.stderr.decode())
# -
# ## 6.2 R-scape De-Novo Structure Prediction
#
# When fold=True, R-scape runs in predictive mode to build the best consensus structure with the largest possible number of significantly covarying pairs, including predicted pseudoknots.
#
# After running the code block, a preview of the generated .svg will appear, containing the consensus and any predicted pseudoknots. SVG images will be placed in the motif subfolder.
# change output=True to see R-scape table
run_rscape(outdir, sto_filename, fold=True, output=False)
# ## 6.3 CMfinder Realigned Structure
#
# CMFinder outputs its best folding prediction as a file ending in `cmfinder.realign.sto`. This codeblock will run that realign file through R-scape to evaluate the structure for covariation. Because fold=False, it will not find pseudoknots unless they were already predicted in the structure.
#
# After running the code a preview of the generated .svg will appear. An SVG image will be placed in the motif subfolder.
# change output=False to see R-scape table
realign_filename = "{}/{}.cmfinder.realign.sto".format(outdir, motif_name)
run_rscape(outdir, realign_filename, fold=False, output=False)
# ## 6.4 CMfinder Analyzed Submotifs
#
# CMFinder will often generate several submotifs and use the submotifs to find the 'best' version and output it as the `cmfinder.realign.sto` file.
#
# Running this code will show you all submotifs. SVG images for each submotif will be placed in the motif subfolder.
# +
with open("{}/motif.list".format(outdir), "r") as motif_list_file:
motif_list = motif_list_file.read().split('\n')
motif_list.remove("")
motif_list = ["{}/{}".format(untar_dir,motif) for motif in motif_list ]
for file in motif_list:
renamed_file = "{}.sto".format(file)
if not os.path.isfile(renamed_file):
os.rename(file, renamed_file)
run_rscape(outdir, renamed_file, fold=False, output=False)
# -
# ## 6.5 Genome Context Images
#
# Generates list of genome context images for each unique hit. RNA motif is shown in blue, genes are shown in purple or red depending on directionality.
#
# Your match numbers may 'skip' (i.e. jump from #1 to #4); the missing ones are duplicates that were removed.
deduped_sto_records = [record for record in sto_records if re.sub(r'^[0-9]+\|','',record.id) in dedupe_id_list]
results_csv_filename = "{}/{}_results.csv".format(outdir, motif_name)
if not os.path.exists(results_csv_filename):
deduped_results_df['lineage']=''
deduped_results_df['assembly_accession']=''
deduped_results_df.to_csv(results_csv_filename)
get_all_images(results_csv_filename, deduped_sto_records)
| notebooks/4-Motif-Refinement.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="rKFR_sphgXUJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="0570ddef-6afd-4247-b6d0-0e1c7ba1013d" executionInfo={"status": "ok", "timestamp": 1571732968095, "user_tz": -330, "elapsed": 34131, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCOVPR0BiydUIlDxRDQBDTzMT0W_QRImvR8REStgWg=s64", "userId": "05841676353733988512"}}
from google.colab import drive
drive.mount('/content/gdrive')
# + id="FZyhZMFqlSUW" colab_type="code" colab={}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# + id="Wsvd4iNslx_2" colab_type="code" colab={}
dataset=pd.read_csv('gdrive/My Drive/Google_Stock_Price_Train.csv')
# + id="-jLTjQDtmCmd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="b91b9fcf-6b03-4c59-a980-e946761d4284" executionInfo={"status": "ok", "timestamp": 1571733339890, "user_tz": -330, "elapsed": 1094, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCOVPR0BiydUIlDxRDQBDTzMT0W_QRImvR8REStgWg=s64", "userId": "05841676353733988512"}}
dataset.head()
# + id="aCjQe6edmzD6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="7606d114-934e-412d-818c-795a7d164e14" executionInfo={"status": "ok", "timestamp": 1571733490007, "user_tz": -330, "elapsed": 1170, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCOVPR0BiydUIlDxRDQBDTzMT0W_QRImvR8REStgWg=s64", "userId": "05841676353733988512"}}
training_set=dataset.iloc[:,1:2].values
training_set
# + id="tuKTzMlUncJb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="6715c9fb-e381-46ea-f484-5b0e26df2587" executionInfo={"status": "ok", "timestamp": 1571733622055, "user_tz": -330, "elapsed": 1444, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCOVPR0BiydUIlDxRDQBDTzMT0W_QRImvR8REStgWg=s64", "userId": "05841676353733988512"}}
plt.plot(training_set,color='red',label='5 Years Google Stock price')
# + id="b7YPQjRVn8YQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="5f57ce60-e742-4fad-ad08-c0ea0b4e9051" executionInfo={"status": "ok", "timestamp": 1571734105255, "user_tz": -330, "elapsed": 1385, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCOVPR0BiydUIlDxRDQBDTzMT0W_QRImvR8REStgWg=s64", "userId": "05841676353733988512"}}
#feature scaling
from sklearn.preprocessing import MinMaxScaler
sc=MinMaxScaler(feature_range=(0,1))
training_set_scaled=sc.fit_transform(training_set)
training_set_scaled
# + id="0rToQYnlpyRN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="84496d95-8499-4ba0-f60e-f670f84a2bea" executionInfo={"status": "ok", "timestamp": 1571734307751, "user_tz": -330, "elapsed": 1303, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCOVPR0BiydUIlDxRDQBDTzMT0W_QRImvR8REStgWg=s64", "userId": "05841676353733988512"}}
# Creating the data structure with 60 timesteps and 1 output
X_train=[]
y_train=[]
for i in range(60,1258):
X_train.append(training_set_scaled[i-60:i,0])
y_train.append(training_set_scaled[i,0])
X_train,y_train=np.array(X_train),np.array(y_train)
y_train
# + id="8DKV9nMSqf5t" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="05004f4e-0b95-48a4-c3d4-1b9ce1ee849e" executionInfo={"status": "ok", "timestamp": 1571735385401, "user_tz": -330, "elapsed": 1297, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCOVPR0BiydUIlDxRDQBDTzMT0W_QRImvR8REStgWg=s64", "userId": "05841676353733988512"}}
X_train=np.reshape(X_train,(X_train.shape[0],X_train.shape[1],1))
X_train.shape
# + id="pLApIwhaq48v" colab_type="code" colab={}
# Building the RNN
# Import the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
# + id="fPQHj4kjryYc" colab_type="code" colab={}
# Initializing the RNN
regressor=Sequential()
# + id="UThw86jCr-C4" colab_type="code" colab={}
# Adding the first LSTM layer and some dropout regularisation
regressor.add(LSTM(units=50,return_sequences=True,input_shape=(X_train.shape[1],1)))
regressor.add(Dropout(0.2))
# + id="C1JU96j_sqtl" colab_type="code" colab={}
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.2))
# + id="Qr1oAfYts2oJ" colab_type="code" colab={}
regressor.add(LSTM(units=50,return_sequences=True))
regressor.add(Dropout(0.2))
# + id="r2RqCe6YtRmk" colab_type="code" colab={}
regressor.add(LSTM(units=50))
regressor.add(Dropout(0.2))
# + id="2I9k8t12tavl" colab_type="code" colab={}
regressor.add(Dense(units=1))
# + id="Y8MTKGxNtib5" colab_type="code" colab={}
# Compiling the RNN
regressor.compile(optimizer='adam',loss='mean_squared_error')
# + id="57AftwgbvVrL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5ed52f58-e02d-4a0f-b6fa-9eddc53ebf46" executionInfo={"status": "ok", "timestamp": 1571735594427, "user_tz": -330, "elapsed": 1348, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCOVPR0BiydUIlDxRDQBDTzMT0W_QRImvR8REStgWg=s64", "userId": "05841676353733988512"}}
X_train.shape
# + id="12IXBGq3vd8F" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="212d8828-02c4-42a5-ee39-c95cfa322565" executionInfo={"status": "ok", "timestamp": 1571735620901, "user_tz": -330, "elapsed": 880, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCOVPR0BiydUIlDxRDQBDTzMT0W_QRImvR8REStgWg=s64", "userId": "05841676353733988512"}}
y_train.shape
# + id="ffgQcqP2vkfp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="93aa54fd-b429-4831-c7ba-a5bade86a25d" executionInfo={"status": "ok", "timestamp": 1571736341782, "user_tz": -330, "elapsed": 660753, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCOVPR0BiydUIlDxRDQBDTzMT0W_QRImvR8REStgWg=s64", "userId": "05841676353733988512"}}
regressor.fit(X_train,y_train,epochs=100,batch_size=32)
# + id="Z35cg_epvzWT" colab_type="code" colab={}
# Making the predictions and visualising the results
# Getting the real stock price of 2017
dataset_test=pd.read_csv('gdrive/My Drive/Google_Stock_Price_Test.csv')
real_stock_price=dataset_test.iloc[:,1:2].values
# + id="pX_jQOxAxeaJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="34c16bdf-8e8e-45e2-80b1-bc2cac83a849" executionInfo={"status": "ok", "timestamp": 1571736483120, "user_tz": -330, "elapsed": 1793, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCOVPR0BiydUIlDxRDQBDTzMT0W_QRImvR8REStgWg=s64", "userId": "05841676353733988512"}}
plt.plot(real_stock_price,color='red',label='Real Google Stock price')
# + id="3UaX2zFpxy48" colab_type="code" colab={}
# Getting the predicted stock price of 2017
dataset_total=pd.concat((dataset['Open'],dataset_test['Open']),axis=0)
inputs=dataset_total[len(dataset_total)-len(dataset_test)-60:].values
inputs=inputs.reshape(-1,1)
inputs=sc.transform(inputs)
X_test=[]
for i in range(60,80):
X_test.append(inputs[i-60:i,0])
X_test=np.array(X_test)
X_test=np.reshape(X_test,(X_test.shape[0],X_test.shape[1],1))
predicted_stock_price=regressor.predict(X_test)
predicted_stock_price=sc.inverse_transform(predicted_stock_price)
# + id="Tkd_QcHY0egV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="b322eb78-928d-4aa4-c95b-80008e342c0b" executionInfo={"status": "ok", "timestamp": 1571737422906, "user_tz": -330, "elapsed": 1524, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCOVPR0BiydUIlDxRDQBDTzMT0W_QRImvR8REStgWg=s64", "userId": "05841676353733988512"}}
plt.plot(real_stock_price,color='red',label='real google price')
plt.plot(predicted_stock_price,color='blue',label='predicted google stock price')
plt.title('google stock price prediction')
plt.xlabel('time')
plt.ylabel('google stock price')
plt.legend()
plt.show()
# + id="TysFPP422cOx" colab_type="code" colab={}
| recurrent neural network 22-10-19 (1).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # The basics of the Python language
#
#
# - High-level language
# - Interpreted language
# - Object-oriented language
# - A language kept as close as possible to natural language
#
# *Note that Python is case-sensitive*
# + [markdown] slideshow={"slide_type": "slide"}
# # A few Python reminders
#
# - Everything is an object, even "variables"
# - An object always has properties and methods
#
# With iPython, to discover an object's methods and properties, use:
# ```
# mon_objet.[tab]
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# # Types in Python
#
# The basic types:
# - int
# - float
# - boolean
# - string
#
# In Python there is no need to declare the type of a value; the language takes care of it on its own
#
# Python uses strong typing - once an object has a given type, Python does not silently change it
#
# To find out a variable's type, use: type(nom_de_variable)
#
# + slideshow={"slide_type": "fragment"}
nom_de_variable=44.8
print(type(nom_de_variable))
# + [markdown] slideshow={"slide_type": "subslide"}
# **Exercise**
#
# Define four separate variables (an integer, a float, a boolean and a string) and print them to the console.
#
# -
entier = 33
dec = 44.44
booleen = True
chaine = "Python"
chaine2 = 'Python'
chaine3 = """Python"""
print(entier,dec,booleen,chaine,chaine2,chaine3,sep = "\n")
# + [markdown] slideshow={"slide_type": "slide"}
# # Arithmetic operators
#
# - +
# - -
# - *
# - /
# - ** (power)
# - % (modulo)
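As a quick illustration of the operators above, a minimal sketch:

```python
# Each arithmetic operator in action
print(7 + 2)   # 9
print(7 - 2)   # 5
print(7 * 2)   # 14
print(7 / 2)   # 3.5 (true division always returns a float)
print(7 ** 2)  # 49 (power)
print(7 % 2)   # 1 (modulo: remainder of the division)
```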
# + [markdown] slideshow={"slide_type": "slide"}
# # Strings
#
# ### 3 equivalent notations:
# + slideshow={"slide_type": "fragment"}
print("Python",'Python',"""Python""")
# + [markdown] slideshow={"slide_type": "fragment"}
# Many operations are available on strings:
# + slideshow={"slide_type": "fragment"}
chaine1="Python"
print(len(chaine1), chaine1.lower(), chaine1.upper(), str(chaine1))
# + [markdown] slideshow={"slide_type": "subslide"}
# **Exercise**
#
# Define a variable containing the string 'Python pour la Science', then use string operations to print 'PYTHON POUR LA DATA SCIENCE'
#
# -
chaine_python = "Python pour la Science"
print(chaine_python.replace("la","la data").upper())
print((chaine_python[0:15] + "data" + chaine_python[14:]).upper())
print(chaine_python[-7:])
# + [markdown] slideshow={"slide_type": "slide"}
# # Boolean operators
#
# - `not`, `and` and `or`
# - Precedence order (highest first)
#     - `not`
#     - `and`
#     - `or`
# + slideshow={"slide_type": "fragment"}
print(not True)
print(True or False)
print(True and False)
print(not True and False)
print(not True or False)
# + [markdown] slideshow={"slide_type": "slide"}
# # Type converters
#
# Types can be converted using:
# + slideshow={"slide_type": "fragment"}
entier1 = 44
print(type(float(entier1)))
# + slideshow={"slide_type": "fragment"}
print(type(bool('True')))
print(type(bool(0)))
# + [markdown] slideshow={"slide_type": "slide"}
# # Conditional operators
#
# Used to set up conditions
#
# The syntax is:
# ```{python}
# if ... :
# ...
# elif ... :
# ...
# else:
# ...
# ```
# + slideshow={"slide_type": "fragment"}
bool1=True
# -
# **Exercise:** Test whether bool1 is true, and try a few other conditions
if bool1 is True:
print("C'est vrai")
elif bool1 is False:
print("c'est faux")
else :
print("???")
if bool1 == True:
print("C'est vrai")
elif bool1 == False:
print("c'est faux")
else :
print("???")
if bool1:
print("C'est vrai")
elif not bool1:
print("c'est faux")
else :
print("???")
# + [markdown] slideshow={"slide_type": "fragment"}
# - Note that when a condition tests equality of values, you must use == (the other comparison operators are <, <=, >, >=, !=)
# - There is also the is operator, which is very important in Python (readability)
# - It tests not just equal values but whether the objects themselves are the same (True == 1 is true but True is 1 is false)
# - Be careful to respect the indentation
#
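A tiny sketch of the `==` / `is` distinction described above:

```python
# == compares values; is compares object identity
x = 1
print(True == x)  # True: the values are equal
print(True is x)  # False: True and 1 are distinct objects
```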
# + [markdown] slideshow={"slide_type": "slide"}
# # The for loop
#
# - Python for loops have a specific structure
# - The iterator takes as its values the elements of a list
# - Structure:
# ```
# for indice in sequence:
#     instructions
# ```
# - `break` can be used to exit a loop early
#
# - `while` loops can also be used
#
# -
list_col = ["a","b","c"]
for i in range(3):
print(list_col[i])
for col in list_col:
print(col)
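The `while` and `break` keywords mentioned above can be combined like this, a minimal sketch:

```python
# Keep adding 3 until the total exceeds 10, then exit the loop with break
total = 0
while True:
    total += 3
    if total > 10:
        break
print(total)  # 12
```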
# + [markdown] slideshow={"slide_type": "fragment"}
# **Python loops are a tool to use sparingly (they are very slow)**
# + [markdown] slideshow={"slide_type": "subslide"}
# `range(n)` produces the sequence of values from 0 to n-1 (in Python 3, a lazy sequence rather than a list)
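For example:

```python
# range is a lazy sequence in Python 3; list() materializes it
print(list(range(5)))  # [0, 1, 2, 3, 4]
```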
# + [markdown] slideshow={"slide_type": "subslide"}
# **Exercise:**
#
# Write a for loop over a range from 0 to 5 that adds 1 to a variable on each iteration
# -
var = 0
for i in range(6):
var += 1
var
# + [markdown] slideshow={"slide_type": "fragment"}
# **Exercise:**
#
#
# Write a loop that prints a sentence with a different value on each iteration: "Nous sommes ..."
# -
liste_jours = ["lundi","mardi","mercredi"]
for jour in liste_jours:
print("Nous sommes", jour )
liste_jours = ["lundi","mardi","mercredi"]
liste_temp = [12,14,20]
for jour, temp in zip(liste_jours,liste_temp):
print("Nous sommes {} et il fait {} degrés".format(jour,temp))
for i in range(min(len(liste_jours),len(liste_temp))):
print("Nous sommes {} et il fait {} degrés".format(liste_jours[i],liste_temp[i]))
# + [markdown] slideshow={"slide_type": "slide"}
# # Collections of objects in Python
#
# There are 3 main data structures in Python:
# - Tuples: an immutable sequence of values, defined with `( )`
# - Lists: a mutable sequence of values, defined with `[ ]`
# - Dictionaries: key-value indexed values, defined with `{ }`
#
#
# In Python, whatever the structure, `[ ]` is used to access an element of a structure
#
#
# - In a tuple: `tup1[0]` accesses the first element
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Tuples
#
# - A tuple cannot be modified once created
#
# **Note that in Python the first index is always 0**
# + slideshow={"slide_type": "fragment"}
# define a tuple
tu = (1,3,5,7)
# query its length
print(len(tu))
# access elements by index using []
tu2=tu[1:3]
print(tu2)
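The immutability mentioned above can be checked directly: attempting to assign to a tuple element raises a TypeError.

```python
tu = (1, 3, 5, 7)
try:
    tu[0] = 99  # tuples do not support item assignment
except TypeError as e:
    print("TypeError:", e)
```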
# + [markdown] slideshow={"slide_type": "slide"}
# # Lists
#
# A list is like a tuple, but dynamically sized and mutable
#
# Lists are defined with `[ ]`
#
# List methods include:
# - `.append()` appends a value at the end of the list
# - `.insert(i,val)` inserts a value at index i
# - `.pop(i)` extracts the value at index i
# - `.reverse()` reverses the list
# - `.extend()` extends the list
#
# **Note that most list methods modify the list in place.**
# + [markdown] slideshow={"slide_type": "subslide"}
# **Exercise:**
#
# Create a list of 5 elements and print the length of the list.
# -
ma_liste = [2, 5 , 'python', True, 2.5]
ma_liste[2].upper()
ma_liste.append('youpi')
ma_liste
# + [markdown] slideshow={"slide_type": "fragment"}
# Change the last value of the list, then build a second list from the last 3 values of the initial list.
# + [markdown] slideshow={"slide_type": "slide"}
# # Lists (continued)
#
# More advanced lookups can be performed on lists:
# - `val in list` returns `True` if the value `val` is in the list `list`
# - `.index(val)` returns the index of the value `val`
# - `.count(val)` returns the number of occurrences of `val`
# - `.remove(val)` removes the value `val` from the list (only the first occurrence)
#
# To delete a list, or an element of a list, use the `del` statement
#
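To illustrate the lookup methods listed above, a quick sketch using a small example list:

```python
fruits = ["pomme", "poire", "pomme", "banane"]
print("poire" in fruits)      # True
print(fruits.index("poire"))  # 1
print(fruits.count("pomme"))  # 2
fruits.remove("pomme")        # removes only the first occurrence
print(fruits)                 # ['poire', 'pomme', 'banane']
del fruits[0]                 # delete the element at index 0
print(fruits)                 # ['pomme', 'banane']
```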
# + [markdown] slideshow={"slide_type": "subslide"}
# # A list generator: list comprehensions
#
# Lists can be defined in a more elaborate way:
# + slideshow={"slide_type": "fragment"}
listinit=[5,2,6,7]
# -
res = [x**2 for x in listinit if (x % 2 == 0)]
res=[]
for x in listinit:
if x % 2 == 0:
res.append(x**2)
#res = res + [x**2]
print(res)
# + [markdown] slideshow={"slide_type": "subslide"}
# **Exercise:**
#
# Starting from a list of values in degrees Celsius `[0, 10, 20, 35]`, create a list in degrees Fahrenheit, knowing that the conversion formula is `(9/5*temp+32)` and that only temperatures greater than 50 should be displayed.
# -
temp_celsius = [0, 10, 20, 35]
temp_fahr = [(9/5*temp+32) for temp in temp_celsius if 9/5*temp+32 > 50]
print(temp_fahr)
# + [markdown] slideshow={"slide_type": "slide"}
# # Strings: a special kind of list
#
# A string is a list with specific methods (`.upper()`, `.find()`, `.count()`, `.replace()`...)
#
# To go further, a string can be turned into a list:
# - Character by character: `list(str)`
# - By separators: `.split(sep)`
# - And the reverse operation can be performed with:
# ```
# sep.join(list)
# ```
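A quick sketch of the conversions listed above:

```python
phrase = "Python pour la Science"
mots = phrase.split(" ")   # split on spaces
print(mots)                # ['Python', 'pour', 'la', 'Science']
print(list("abc"))         # ['a', 'b', 'c'] - character by character
print("-".join(mots))      # Python-pour-la-Science
```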
# + [markdown] slideshow={"slide_type": "subslide"}
# # Strings (formatting)
#
# To embed specific values in a sentence, use:
# - `%f`, `%s`, `%i` inside the string (`float`, `str` or integer)
# - `%( , )` after the string
# + slideshow={"slide_type": "fragment"}
print("Aujourd'hui nous sommes %s" %("mercredi"))
# + [markdown] slideshow={"slide_type": "fragment"}
# To embed other types, use the `.format()` method and `{}` placeholders in the string
# + slideshow={"slide_type": "fragment"}
print("La liste {} est utilisée".format([4.5,3.6,6]))
# + [markdown] slideshow={"slide_type": "slide"}
# # Dictionaries
#
# A dictionary is an unordered collection of objects associating "key - value" pairs.
# + slideshow={"slide_type": "fragment"}
dico = {'machine_learning':['gbm','rf'], 'deep_learning':['rnn','cnn'], 'statistique':['test','ACP']}
# + slideshow={"slide_type": "fragment"}
# To display the keys and the values, use:
print(dico.keys())
print(dico.values())
# + slideshow={"slide_type": "fragment"}
# Values are accessed by key
print(dico["machine_learning"])
# + [markdown] slideshow={"slide_type": "subslide"}
# **Exercise:**
#
# Access the 'rnn' element of the dictionary. Print the word in upper case.
#
# -
dico['deep_learning'][0].upper()
# + [markdown] slideshow={"slide_type": "slide"}
# # Functions
#
# - A function's structure is determined by indentation
# - A function is defined with `def my_function():`
# - The body of the function follows
# - The value returned by the function is set with `return`
# - Comments describing a function go in a docstring (a comment using `"""`)
#
# + slideshow={"slide_type": "fragment"}
def calcul_produit(val1,val2):
"""Cette fonction affiche le produit de 2 valeurs"""
return val1*val2
# -
produit = calcul_produit(4,6)
print(produit)
# A function can return several objects
def produit_division(val1,val2):
"""Cette fonction affiche le produit de 2 valeurs"""
return val1*val2, val1/val2
produit_division(4,6)
produit2, division = produit_division(4,6)
print(produit2,division)
# + [markdown] slideshow={"slide_type": "subslide"}
# A function can have multiple arguments, including optional arguments (with default values):
# + slideshow={"slide_type": "fragment"}
def test(a,b,c=3):
return a+b+c
# What do these calls return?
# test(2,3)
# test(2,3,4)
# + [markdown] slideshow={"slide_type": "subslide"}
# A function can take an indefinite number of arguments with `*args` (they are gathered in a tuple)
# + slideshow={"slide_type": "fragment"}
def test2(a:int,b:float,*args):
val=0
for i in args:
val+=i
return a+b+val
# What does this call return?
# test2(2,4,6,8)
test2(2.5,4)
# + [markdown] slideshow={"slide_type": "subslide"}
# A function's options can also be stored in a dictionary with `**kwargs`
# + slideshow={"slide_type": "fragment"}
def test3(a,b,*args,w=1,**kwargs):
x=0
for val in args:
x+=val
x+=a
x-=b
x*=w
option1=kwargs.get('option1',False)
option2=kwargs.get('option2',False)
if option1 is True and option2 is True:
print("Tout est vrai")
return x
else:
return x/2
# + slideshow={"slide_type": "fragment"}
# What do these calls return?
#print(test3(1,2))
#print(test3(1,2,3,4))
#print(test3(1,2,option1=True))
print(test3(1,2,option1=True,option2=True))
#print(test3(1,2,3,4,w=2,option1=True,option2=True))
# -
dico_csv = {"sep": ";","decimal":","}
dico_csv
import pandas as pd
data = pd.read_csv("./data/base-dpt.csv",**dico_csv)
# + [markdown] slideshow={"slide_type": "subslide"}
# # Lambda functions
#
# - These are very short functions meant to simplify your code
# - They are common in Python code
# - A lambda can be assigned to a variable, which then becomes a function, or be used on its own
# - They use the `lambda` keyword
# - Lambda functions have two limitations:
#     - They must fit on a single line
#     - They contain a single expression (but may take several parameters)
#
# - When to use lambda functions:
#     - Inside other functions
#     - To apply a transformation to data
#
# + slideshow={"slide_type": "fragment"}
f=lambda x:x**3
print(f(5))
print((lambda x:x**3)(5))
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Usage examples:
# + slideshow={"slide_type": "fragment"}
# Inside a function:
def ma_fonction_puissance(x):
return lambda a,b : (a+b)**x
# + slideshow={"slide_type": "fragment"}
fonction_cube=ma_fonction_puissance(3)
print(fonction_cube(2,3))
# + slideshow={"slide_type": "subslide"}
# For a transformation
from pandas import Series
ser1=Series([2,5,7,9])
ser1
# + slideshow={"slide_type": "fragment"}
ser1.apply(lambda x:x**2+2*x-5)
# + [markdown] slideshow={"slide_type": "subslide"}
# **Exercise on functions:**
#
# Build a function taking two lists as input and returning the mean of all the elements of both lists (a single value)
#
# -
def moyenne2listes(liste1,liste2):
return sum(liste1+liste2)/len(liste1+liste2)
print(moyenne2listes([1,4,6],[3,6,8]))
def moyenne_n_listes(*listes):
""" Cette fonction calcule la moyenne de tous les éléments de n listes
Entrée : listes
Sortie : moyenne:float
"""
somme = 0
longueur = 0
for liste in listes:
somme += sum(liste)
longueur += len(liste)
return somme / longueur
print(moyenne_n_listes([1,4,6],[3,6,8]))
print(moyenne_n_listes([1,4,6]))
# + [markdown] slideshow={"slide_type": "slide"}
# # Packages and modules in Python
#
# - A package is a set of functions that can be called from another program
#
# - Use:
# ```
# import nom_pkg
# ```
# - Using `as nm`, for example, shortens calls to the package's functions
# - `from mon_pkg import *` removes the need for a prefix, but beware of name conflicts
# - Specific functions or classes can also be imported (no prefix needed)
# ```
# from pandas import Series
# ```
#
# *To get information about a module, use `dir()` and `help()`*
# -
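For example (the standard `math` module is used here purely as an illustration):

```python
import math

# dir() lists the names defined by a module or object
print("sqrt" in dir(math))

# help() prints the documentation of an object
help(math.gcd)
```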
import pandas
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
# + [markdown] slideshow={"slide_type": "subslide"}
# # Create your own module and package
#
# - Calling functions or classes from another file is very simple
# - Just store your classes and functions in a `.py` file
# - That file can be placed in a directory on Python's search path or in your working directory
# - You can then do:
# ```
# import mon_fichier
# ```
# and use the functions and classes of that file
#
# - You have just created a module.
# - **Careful, this is not a package yet!**
# + [markdown] slideshow={"slide_type": "subslide"}
# # Create your own module and package (continued)
#
#
# - A package is made of more than a single `.py` file
#
#
# - Steps to create your first package:
#     - Choose a simple name that conveys what your package does (if possible, one not already taken)
#     - Create a directory named after your package
#     - Create an `__init__.py` file in that directory; it can be empty at first
#     - In the same directory, add files containing your functions and classes
#
# If your goal is to publish the package on PyPI, you will need to add a setup file and the associated dependencies
#
# To import your package, simply do:
# ```
# import nom_dossier.nom_fichier
# ```
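A minimal, self-contained sketch of these steps (the names `mypkg`, `tools.py` and `double` are made up, and the package is created in a temporary directory just for the demo):

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)

# an empty __init__.py is enough for a first version
open(os.path.join(pkg, "__init__.py"), "w").close()

# a module with one function
with open(os.path.join(pkg, "tools.py"), "w") as f:
    f.write("def double(x):\n    return 2 * x\n")

sys.path.insert(0, root)  # make the package importable
from mypkg.tools import double
print(double(21))
```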
# + [markdown] slideshow={"slide_type": "subslide"}
# # A few details on package creation
#
# - The `__init__.py` file can be empty, but it can also contain information
# - This file runs every time the package is loaded; it can handle dependencies, version checks, imports of the functions and classes defined in your package's files...
#
#
# - Where can packages live?
#     - By default, if the package is in the same directory as the executed script, it works
#     - Otherwise the package must be in a directory on the Python path
#     - To list those directories, use the `sys` module and its `path` list
#     - To add a "temporary" directory, use the `sys.path.insert()` function
# -
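A quick sketch of both operations (the added directory is a made-up example path):

```python
import sys

# the list of directories searched for imports
print(sys.path[:3])

# temporarily add a directory at the front of the search path
sys.path.insert(0, "/tmp/my_packages")
print(sys.path[0])
```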
# + [markdown] slideshow={"slide_type": "slide"}
# # Classes
#
# We have manipulated many objects of various classes. To go further, we need to work with classes and know how to create our own.
#
# A class is a type that groups in a single structure:
# - the information (fields, properties, attributes) describing an entity;
# - the procedures and functions used to manipulate them (methods).
# + [markdown] slideshow={"slide_type": "subslide"}
# A class starts with a constructor:
# + slideshow={"slide_type": "fragment"}
class MaClasse:
def __init__(self,nom="Emmanuel",ville="Paris"):
self.nom= nom
self.ville=ville
# -
objet_classe=MaClasse("Emmanuel","Lyon")
objet_classe.nom="Aslane"
objet_classe.nom
# + [markdown] slideshow={"slide_type": "fragment"}
# Other functions can then be defined inside the class
#
#
# A new instance is then created, and the new object can be filled in
#
# + [markdown] slideshow={"slide_type": "subslide"}
# # Classes
#
# - Object-oriented programming is a programming style that groups in one place the behavior (the functions) and the data (the structures) that belong together
# - In Python, every structure is an object
# - The idea is to create your own objects
# - Creating an object is a two-step process:
#     - Describing the object
#     - Building the object
# - In Python, the class is the description of the object
# - The second step simply consists of allocating information to the object. The resulting object is an instance of the class
#
# + slideshow={"slide_type": "subslide"}
# A class will look like:
class MaClasse():
def __init__(self,val1=0,val2=0):
self.val1=val1
self.val2=val2
def methode1(self,param1):
self.val1+=param1
print(self.val1)
# -
objet_maclasse = MaClasse(val1=10,val2=20)
type(objet_maclasse)
objet_maclasse.methode1(10)
objet_maclasse.val1
# + [markdown] slideshow={"slide_type": "subslide"}
# **Exercise:**
#
# Define a class `CompteBancaire()` that can be used to instantiate objects such as `compte1`, `compte2`, etc.
#
# The constructor of this class initializes two instance attributes, `nom` and `solde`, with default values `'A'` and `0`.
#
# Three other methods are defined:
# - `depot(somme)` adds a given amount to the balance
# - `retrait(somme)` withdraws a given amount from the balance
# - `affiche()` displays the account balance, with a warning message if the balance is negative.
# -
class CompteBancaire():
    def __init__(self, nom = "A", solde = 0):
        self.nom = nom
        self.solde = solde
    def depot(self,somme):
        self.solde += somme
    def retrait(self,somme):
        self.solde -= somme
    def affiche(self):
        print("Your balance is {}".format(self.solde))
        if self.solde < 0:
            print("Warning: your balance is negative!")
mon_compte = CompteBancaire(nom="Emmanuel",solde=100)
mon_compte.depot(10)
mon_compte.affiche()
# + [markdown] slideshow={"slide_type": "slide"}
# # Exception handling
#
# - Python has a built-in system for handling exceptions
# - Use `try:` and `except:`
# - Different types of errors can be caught
#     - `NameError`, `TypeError`, `ZeroDivisionError`
# - To do nothing inside the exception handler, use the `pass` keyword
#
#
# + slideshow={"slide_type": "fragment"}
def ma_fonction(val):
try:
print(val**2)
except:
print("Erreur")
# + slideshow={"slide_type": "fragment"}
ma_fonction("r")
# + [markdown] slideshow={"slide_type": "subslide"}
# **Exercise:**
#
# Add exception handling to a function that takes two `input()` values and divides them; use the different exception types
#
# -
def division():
try:
val1 = float(input("Entrez un nombre "))
val2 = float(input("Entrez un nombre "))
return val1/val2
except ZeroDivisionError as e:
print(e)
print("Erreur : ne pas diviser par 0")
except ValueError as e:
print("Erreur : vous devez entrer des valeurs numériques", e)
division()
| 03_bases_python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="r-MTp1LogjF7"
# **Group Members**\
# **<NAME> - S20180010040**\
# **<NAME> - S20180010086**\
# **<NAME> - S20180010138**\
# **<NAME> - S20180010147**
# + id="errUzHSz53Nr"
import pandas as pd
import statistics
import numpy as np
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix
from sklearn.model_selection import KFold ,RepeatedKFold,train_test_split
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.formula.api import ols
import seaborn as sns
from scipy.stats import shapiro,pearsonr
from scipy import stats
import scipy as sp
from sklearn.metrics import r2_score
from statsmodels.graphics.gofplots import qqplot
from statsmodels.stats.stattools import durbin_watson
from sklearn import preprocessing,metrics,datasets, linear_model,svm
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn import linear_model,tree
# + id="9v6c3Vfaxuy0"
# !pip install factor_analyzer==0.2.3
# + id="TfGWj-0kKWVw"
sheets=pd.read_excel('/content/sample_data/stock portfolio performance data set.xlsx',sheet_name=['all period'],skiprows=[0], usecols = [1,2,3,4,5,6,13,14,15,16,17,18])
df=pd.concat(sheets[frame] for frame in sheets.keys())
df.keys()
df.describe()
# + id="joVUHp_RKWXG"
df.isnull().values.any()
# + id="9XwA6oLZMRVC"
df.head()
# + id="hmO-uQutKWf_"
X= df.drop(['Annual Return.1', 'Excess Return.1','Systematic Risk.1', 'Total Risk.1', 'Abs. Win Rate.1','Rel. Win Rate.1'],axis=1)
Y = df.drop([" Large B/P "," Large ROE "," Large S/P "," Large Return Rate in the last quarter "," Large Market Value "," Small systematic Risk"],axis=1)
for each in X.keys():
qqplot(X[each],line='s')
plt.show()
# + id="YZKItc6CKWpy"
boxplot = X.boxplot(grid=False,rot=45, fontsize=9)
# + id="vWpgyEU2KWyV"
df.shape
# + id="VPolFct3KWm2"
z = np.abs(stats.zscore(df))
df_o = df[(z < 3).all(axis=1)]
print(df_o.shape)
# + id="l8Erf98_KWcB"
X= df_o.drop(['Annual Return.1', 'Excess Return.1','Systematic Risk.1', 'Total Risk.1', 'Abs. Win Rate.1','Rel. Win Rate.1'],axis=1)
Y = df_o.drop([" Large B/P "," Large ROE "," Large S/P "," Large Return Rate in the last quarter "," Large Market Value "," Small systematic Risk"],axis=1)
boxplot = X.boxplot(grid=False,rot=45, fontsize=9)
# + id="wKftzi4iKWal"
correlation=df_o.corr()
print(correlation)
# + id="p4DqNTZXKWJr"
plt.figure(figsize=(10,8))
sns.heatmap(correlation, annot=True, cmap='coolwarm')
# + id="0yu1xk_kmhAC"
from sklearn.decomposition import PCA
pca = PCA(whiten=True)
pca.fit(X)
variance = pd.DataFrame(pca.explained_variance_ratio_)
print(variance)
cumulative=np.cumsum(pca.explained_variance_ratio_)
print(cumulative)
# + id="0EcZ0Npwp2FY"
df3 = pd.DataFrame({'variance_explained':cumulative,
'PC':['PC1','PC2','PC3','PC4','PC5','PC6']})
sns.barplot(x='PC',y="variance_explained", data=df3, color="c");
# + id="dXEE0EgcCQaO"
df2 = pd.DataFrame({'var':pca.explained_variance_ratio_,
'PC':['PC1','PC2','PC3','PC4','PC5','PC6']})
sns.barplot(x='PC',y="var", data=df2, color="c");
# + id="Z-mjrW9msda7"
components=pd.DataFrame(pca.components_,columns=X.columns,index = ['PC-1','PC-2','PC-3','PC-4','PC-5','PC-6'])
components.head(6)
# + id="nvIecw8RTfjO"
x_train, x_test, y_train, y_test = train_test_split(X, Y,test_size=0.2,random_state=1)
# + id="5eQlEYENUKlL"
targets=pd.DataFrame(columns=['Annual Return.1', 'Excess Return.1','Systematic Risk.1', 'Total Risk.1', 'Abs. Win Rate.1','Rel. Win Rate.1'],index=y_test.index.values)
for y in targets.keys():
reg = linear_model.LinearRegression()
reg.fit(x_train, y_train[y])
print("\n")
print("model for",y,'evaluation parameter')
print("The linear model is: Y = {:.5} + {:.5}*large b/p + {:.5}*large ROE + {:.5}*large s/p+ {:.5}*large return rates+ {:.5}*large market sales+ {:.5}*small system risk".format(reg.intercept_, reg.coef_[0], reg.coef_[1], reg.coef_[2],reg.coef_[3],reg.coef_[4],reg.coef_[5]))
print('Variance score: {}'.format(reg.score(x_test, y_test[y])))
y_pred = reg.predict(x_test)
targets[y]= y_pred
fig, ax = plt.subplots(1,1)
sns.regplot(x=y_pred, y=y_test[y], lowess=True, ax=ax, line_kws={'color': 'red'})
ax.set_title('Observed vs. Predicted Values', fontsize=16)
ax.set(xlabel='Predicted', ylabel='Observed')
# + id="9HxOZ3G9UKnU"
from yellowbrick.regressor import ResidualsPlot
# + id="S2woKpmRUKhy"
for y in targets.keys():
model_ols = sm.OLS(y_train[y],x_train).fit()
print(model_ols.summary())
# + id="64xf-nKqUKgX"
for y in targets.keys():
for x in X.keys():
colors = (0,0,0)
area = np.pi*3
df_o.plot.scatter(x=x, y=y)
# + id="Yj1xZXIAwIbr"
error_list=[]
for y in targets.keys():
error = y_test[y] - targets[y]
error_info = pd.DataFrame({'y_true': y_test[y], 'y_pred': targets[y], 'error': error}, columns=['y_true', 'y_pred', 'error'])
error_list.append(error_info)
plt.figure(figsize=(8,5))
g = sns.scatterplot(x="y_pred", y="error", data=error_info, color='blue')
g.set_title(f'Check Homoskedasticity {y}', fontsize=15)
g.set_xlabel("predicted values", fontsize=13)
g.set_ylabel("Residual", fontsize=13)
# + id="_8zj_ompwIrx"
for error in error_list:
fig, ax = plt.subplots(figsize=(8,5))
ax = error.error.plot()
dw=durbin_watson(error.error,axis=0)
print(dw)
ax.set_title('Uncorrelated errors', fontsize=15)
ax.set_xlabel("Data", fontsize=13)
ax.set_ylabel("Residual", fontsize=13)
# + id="w7_H_QOpwIPi"
for error in error_list:
fig, ax = plt.subplots(figsize=(6,4))
_ = sp.stats.probplot(error.error, plot=ax, fit=True)
ax.set_title('Probability plot', fontsize=15)
ax.set_xlabel("Theoritical Qunatiles", fontsize=13)
ax.set_ylabel("Ordered Values", fontsize=13)
ax = sm.qqplot(error.error, line='45')
plt.show()
# + id="ms-7ZLEq-iP5"
def coefficient_of_determination(ys_orig,ys_line):
y_mean_line = [statistics.mean(ys_orig) for y in ys_orig]
squared_error_regr = mean_squared_error(ys_orig, ys_line)
squared_error_y_mean = mean_squared_error(ys_orig, y_mean_line)
return 1 - (squared_error_regr/squared_error_y_mean)
def mean_absolute_percentage_error(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
def mean_squared_error(y_true,y_pred):
return metrics.mean_squared_error(y_true, y_pred)
def goodness(y_true, y_pred):
    mape = mean_absolute_percentage_error(y_true, y_pred)
    mse = mean_squared_error(y_true, y_pred)
    return mape, mse
# + id="jTlvsfrE6fgk"
for y in targets.keys():
r_squared = coefficient_of_determination(y_test[y],targets[y])
print(r_squared)
# + id="8fH1DbYx-OZ6"
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity
chi_square_value,p_value=calculate_bartlett_sphericity(X)
chi_square_value, p_value
# + id="ZQ94cl5oCdli"
from factor_analyzer import FactorAnalyzer
# + id="A6KxyXFrCJC4"
# Create factor analysis object and perform factor analysis
fa = FactorAnalyzer()
fa.analyze(X,6, rotation=None)
# Check Eigenvalues
ev, v = fa.get_eigenvalues()
ev
# + id="-kh2YAwpCq-N"
# Create scree plot using matplotlib
plt.scatter(range(1,X.shape[1]+1),ev)
plt.plot(range(1,X.shape[1]+1),ev)
plt.title('Scree Plot')
plt.xlabel('Factors')
plt.ylabel('Eigenvalue')
plt.grid()
plt.show()
# + id="3leOWW7ZCKNJ"
fa.loadings
# + id="Ff23rsC-CkSw"
# Get variance of each factors
fa.get_factor_variance()
| Project_Code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 3.10: Combining Data
# +
# Listing 3.10.1: The source DataFrames
import numpy as np
import pandas as pd
from IPython import display
idx = ["R1", "R2"]
col = ["C1", "C2"]
a = pd.DataFrame([[1, 3], [2, 4]], index=idx, columns=col)
b = pd.DataFrame([[1, 5], [2, 6]], index=idx, columns=col)
c = pd.DataFrame(
[[1, 3, 7], [3, 4, 8], [5, 6, 9]], index=idx + ["R3"], columns=col + ["C3"]
)
display.display(a)
display.display(b)
display.display(c)
# +
# Listing 3.10.2: Appending a DataFrame with the append method
a.append(b)
# +
# Listing 3.10.3: When the columns differ
a.append(c)
# +
# Listing 3.10.4: Reassigning the index
a.append(b, ignore_index=True)
# +
# Listing 3.10.5: Concatenating vertically with concat
pd.concat([a, b])
# +
# Listing 3.10.6: Concatenating horizontally with concat
pd.concat([a, b], axis=1)
# +
# Listing 3.10.7: Concatenating when the rows differ
pd.concat([a, c], axis=0)
# +
# Listing 3.10.8: Concatenating when the columns differ
pd.concat([a, c], axis=1)
# +
# Listing 3.10.9: Concatenating two Series
pd.concat([pd.Series(["a", "b"]), pd.Series(["A", "B"])], axis=1)
# +
# Listing 3.10.10: Concatenating a Series and a DataFrame
pd.concat([pd.Series(["a", "b"]), pd.DataFrame([["A", "C"], ["B", "D"]])])
# +
# Listing 3.10.11: Joining with merge
pd.merge(a, b, on="C1")
# +
# Listing 3.10.12: Changing the suffixes
pd.merge(a, b, on="C1", suffixes=("_a", "_b"))
# +
# Listing 3.10.13: Left outer join
pd.merge(a, c, on="C1", how="left", suffixes=("_a", "_c"))
# +
# Listing 3.10.14: Right outer join
pd.merge(a, c, on="C1", how="right", suffixes=("_a", "_c"))
# +
# Listing 3.10.15: Full outer join
pd.merge(a, c, on="C1", how="outer", suffixes=("_a", "_c"))
| notebooks/3-10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Deep face recognition with Keras, Dlib and OpenCV
#
# Face recognition identifies persons on face images or video frames. In a nutshell, a face recognition system extracts features from an input face image and compares them to the features of labeled faces in a database. Comparison is based on a feature similarity metric and the label of the most similar database entry is used to label the input image. If the similarity value is below a certain threshold the input image is labeled as *unknown*. Comparing two face images to determine if they show the same person is known as face verification.
#
# This notebook uses a deep convolutional neural network (CNN) to extract features from input images. It follows the approach described in [[1]](https://arxiv.org/abs/1503.03832) with modifications inspired by the [OpenFace](http://cmusatyalab.github.io/openface/) project. [Keras](https://keras.io/) is used for implementing the CNN, [Dlib](http://dlib.net/) and [OpenCV](https://opencv.org/) for aligning faces on input images. Face recognition performance is evaluated on a small subset of the [LFW](http://vis-www.cs.umass.edu/lfw/) dataset which you can replace with your own custom dataset e.g. with images of your family and friends if you want to further experiment with this notebook. After an overview of the CNN architecture and how the model can be trained, it is demonstrated how to:
#
# - Detect, transform, and crop faces on input images. This ensures that faces are aligned before feeding them into the CNN. This preprocessing step is very important for the performance of the neural network.
# - Use the CNN to extract 128-dimensional representations, or *embeddings*, of faces from the aligned input images. In embedding space, Euclidean distance directly corresponds to a measure of face similarity.
# - Compare input embedding vectors to labeled embedding vectors in a database. Here, a support vector machine (SVM) and a KNN classifier, trained on labeled embedding vectors, play the role of a database. Face recognition in this context means using these classifiers to predict the labels i.e. identities of new inputs.
#
# ### Environment setup
#
# For running this notebook, create and activate a new [virtual environment](https://docs.python.org/3/tutorial/venv.html) and install the packages listed in [requirements.txt](requirements.txt) with `pip install -r requirements.txt`. Furthermore, you'll need a local copy of Dlib's face landmarks data file for running face alignment:
# +
import bz2
import os
from urllib.request import urlopen
def download_landmarks(dst_file):
url = 'http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2'
decompressor = bz2.BZ2Decompressor()
with urlopen(url) as src, open(dst_file, 'wb') as dst:
data = src.read(1024)
while len(data) > 0:
dst.write(decompressor.decompress(data))
data = src.read(1024)
dst_dir = 'models'
dst_file = os.path.join(dst_dir, 'landmarks.dat')
if not os.path.exists(dst_file):
os.makedirs(dst_dir)
download_landmarks(dst_file)
# -
# ### CNN architecture and training
#
# The CNN architecture used here is a variant of the inception architecture [[2]](https://arxiv.org/abs/1409.4842). More precisely, it is a variant of the NN4 architecture described in [[1]](https://arxiv.org/abs/1503.03832) and identified as [nn4.small2](https://cmusatyalab.github.io/openface/models-and-accuracies/#model-definitions) model in the OpenFace project. This notebook uses a Keras implementation of that model whose definition was taken from the [Keras-OpenFace](https://github.com/iwantooxxoox/Keras-OpenFace) project. The architecture details aren't too important here, it's only useful to know that there is a fully connected layer with 128 hidden units followed by an L2 normalization layer on top of the convolutional base. These two top layers are referred to as the *embedding layer* from which the 128-dimensional embedding vectors can be obtained. The complete model is defined in [model.py](model.py) and a graphical overview is given in [model.png](model.png). A Keras version of the nn4.small2 model can be created with `create_model()`.
# +
from model import create_model
nn4_small2 = create_model()
# -
# Model training aims to learn an embedding $f(x)$ of image $x$ such that the squared L2 distance between all faces of the same identity is small and the distance between a pair of faces from different identities is large. This can be achieved with a *triplet loss* $L$ that is minimized when the distance between an anchor image $x^a_i$ and a positive image $x^p_i$ (same identity) in embedding space is smaller than the distance between that anchor image and a negative image $x^n_i$ (different identity) by at least a margin $\alpha$.
#
# $$L = \sum^{m}_{i=1} \large[ \small {\mid \mid f(x_{i}^{a}) - f(x_{i}^{p})) \mid \mid_2^2} - {\mid \mid f(x_{i}^{a}) - f(x_{i}^{n})) \mid \mid_2^2} + \alpha \large ] \small_+$$
#
# $[z]_+$ means $max(z,0)$ and $m$ is the number of triplets in the training set. The triplet loss in Keras is best implemented with a custom layer as the loss function doesn't follow the usual `loss(input, target)` pattern. This layer calls `self.add_loss` to install the triplet loss:
# +
from keras import backend as K
from keras.models import Model
from keras.layers import Input, Layer
# Input for anchor, positive and negative images
in_a = Input(shape=(96, 96, 3))
in_p = Input(shape=(96, 96, 3))
in_n = Input(shape=(96, 96, 3))
# Output for anchor, positive and negative embedding vectors
# The nn4_small2 model instance is shared (Siamese network)
emb_a = nn4_small2(in_a)
emb_p = nn4_small2(in_p)
emb_n = nn4_small2(in_n)
class TripletLossLayer(Layer):
def __init__(self, alpha, **kwargs):
self.alpha = alpha
super(TripletLossLayer, self).__init__(**kwargs)
def triplet_loss(self, inputs):
a, p, n = inputs
p_dist = K.sum(K.square(a-p), axis=-1)
n_dist = K.sum(K.square(a-n), axis=-1)
return K.sum(K.maximum(p_dist - n_dist + self.alpha, 0), axis=0)
def call(self, inputs):
loss = self.triplet_loss(inputs)
self.add_loss(loss)
return loss
# Layer that computes the triplet loss from anchor, positive and negative embedding vectors
triplet_loss_layer = TripletLossLayer(alpha=0.2, name='triplet_loss_layer')([emb_a, emb_p, emb_n])
# Model that can be trained with anchor, positive and negative images
nn4_small2_train = Model([in_a, in_p, in_n], triplet_loss_layer)
# -
# During training, it is important to select triplets whose positive pairs $(x^a_i, x^p_i)$ and negative pairs $(x^a_i, x^n_i)$ are hard to discriminate i.e. their distance difference in embedding space should be less than margin $\alpha$, otherwise, the network is unable to learn a useful embedding. Therefore, each training iteration should select a new batch of triplets based on the embeddings learned in the previous iteration. Assuming that a generator returned from a `triplet_generator()` call can generate triplets under these constraints, the network can be trained with:
# +
from data import triplet_generator
# triplet_generator() creates a generator that continuously returns
# ([a_batch, p_batch, n_batch], None) tuples where a_batch, p_batch
# and n_batch are batches of anchor, positive and negative RGB images
# each having a shape of (batch_size, 96, 96, 3).
generator = triplet_generator()
nn4_small2_train.compile(loss=None, optimizer='adam')
nn4_small2_train.fit_generator(generator, epochs=10, steps_per_epoch=100)
# Please note that the current implementation of the generator only generates
# random image data. The main goal of this code snippet is to demonstrate
# the general setup for model training. In the following, we will anyway
# use a pre-trained model so we don't need a generator here that operates
# on real training data. I'll maybe provide a fully functional generator
# later.
# -
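As a hedged, numpy-only sketch of that selection constraint (this is not the generator used above, and the embeddings and labels below are synthetic stand-ins for real ones), triplets that still violate the margin can be mined offline like this:

```python
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(20, 128))  # synthetic stand-in embeddings
labels = np.repeat(np.arange(4), 5)      # 4 identities, 5 images each
alpha = 0.2

def mine_triplets(embeddings, labels, alpha):
    """Return (anchor, positive, negative) index triplets that still
    violate the margin, i.e. d(a, n) < d(a, p) + alpha."""
    # pairwise squared L2 distances between all embeddings
    d = np.sum((embeddings[:, None] - embeddings[None]) ** 2, axis=-1)
    n_img = len(labels)
    triplets = []
    for a in range(n_img):
        positives = np.where((labels == labels[a]) & (np.arange(n_img) != a))[0]
        negatives = np.where(labels != labels[a])[0]
        for p in positives:
            # keep negatives that are not yet separated by the margin
            hard = negatives[d[a, negatives] < d[a, p] + alpha]
            if hard.size:
                triplets.append((a, p, rng.choice(hard)))
    return triplets

triplets = mine_triplets(embeddings, labels, alpha)
```

Re-running this selection with fresh embeddings after each training iteration keeps the network focused on pairs it cannot yet discriminate.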
# The above code snippet should merely demonstrate how to set up model training. But instead of actually training a model from scratch we will now use a pre-trained model, as training from scratch is very expensive and requires huge datasets to achieve good generalization performance. For example, [[1]](https://arxiv.org/abs/1503.03832) uses a dataset of 200M images consisting of about 8M identities.
#
# The OpenFace project provides [pre-trained models](https://cmusatyalab.github.io/openface/models-and-accuracies/#pre-trained-models) that were trained with the public face recognition datasets [FaceScrub](http://vintage.winklerbros.net/facescrub.html) and [CASIA-WebFace](http://arxiv.org/abs/1411.7923). The Keras-OpenFace project converted the weights of the pre-trained nn4.small2.v1 model to [CSV files](https://github.com/iwantooxxoox/Keras-OpenFace/tree/master/weights) which were then [converted here](face-recognition-convert.ipynb) to a binary format that can be loaded by Keras with `load_weights`:
nn4_small2_pretrained = create_model()
nn4_small2_pretrained.load_weights('weights/nn4.small2.v1.h5')
# ### Custom dataset
# To demonstrate face recognition on a custom dataset, a small subset of the [LFW](http://vis-www.cs.umass.edu/lfw/) dataset is used. It consists of 100 face images of [10 identities](images). The metadata for each image (file and identity name) are loaded into memory for later processing.
# +
import numpy as np
import os.path
class IdentityMetadata():
def __init__(self, base, name, file):
# dataset base directory
self.base = base
# identity name
self.name = name
# image file name
self.file = file
def __repr__(self):
return self.image_path()
def image_path(self):
return os.path.join(self.base, self.name, self.file)
def load_metadata(path):
metadata = []
for i in os.listdir(path):
for f in os.listdir(os.path.join(path, i)):
            # Check file extension. Allow only jpg/jpeg files.
ext = os.path.splitext(f)[1]
if ext == '.jpg' or ext == '.jpeg':
metadata.append(IdentityMetadata(path, i, f))
return np.array(metadata)
metadata = load_metadata('images')
# -
# ### Face alignment
# The nn4.small2.v1 model was trained with aligned face images, therefore, the face images from the custom dataset must be aligned too. Here, we use [Dlib](http://dlib.net/) for face detection and [OpenCV](https://opencv.org/) for image transformation and cropping to produce aligned 96x96 RGB face images. By using the [AlignDlib](align.py) utility from the OpenFace project this is straightforward:
# +
import cv2
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from align import AlignDlib
# %matplotlib inline
def load_image(path):
img = cv2.imread(path, 1)
# OpenCV loads images with color channels
# in BGR order. So we need to reverse them
return img[...,::-1]
# Initialize the OpenFace face alignment utility
alignment = AlignDlib('models/landmarks.dat')
# Load an image of <NAME>
jc_orig = load_image(metadata[2].image_path())
# Detect face and return bounding box
bb = alignment.getLargestFaceBoundingBox(jc_orig)
# Transform image using specified face landmark indices and crop image to 96x96
jc_aligned = alignment.align(96, jc_orig, bb, landmarkIndices=AlignDlib.OUTER_EYES_AND_NOSE)
# Show original image
plt.subplot(131)
plt.imshow(jc_orig)
# Show original image with bounding box
plt.subplot(132)
plt.imshow(jc_orig)
plt.gca().add_patch(patches.Rectangle((bb.left(), bb.top()), bb.width(), bb.height(), fill=False, color='red'))
# Show aligned image
plt.subplot(133)
plt.imshow(jc_aligned);
# -
# As described in the OpenFace [pre-trained models](https://cmusatyalab.github.io/openface/models-and-accuracies/#pre-trained-models) section, landmark indices `OUTER_EYES_AND_NOSE` are required for model nn4.small2.v1. Let's implement face detection, transformation and cropping as an `align_image` function for later reuse.
def align_image(img):
return alignment.align(96, img, alignment.getLargestFaceBoundingBox(img),
landmarkIndices=AlignDlib.OUTER_EYES_AND_NOSE)
# ### Embedding vectors
# Embedding vectors can now be calculated by feeding the aligned and scaled images into the pre-trained network.
# +
embedded = np.zeros((metadata.shape[0], 128))
for i, m in enumerate(metadata):
img = load_image(m.image_path())
img = align_image(img)
# scale RGB values to interval [0,1]
img = (img / 255.).astype(np.float32)
# obtain embedding vector for image
embedded[i] = nn4_small2_pretrained.predict(np.expand_dims(img, axis=0))[0]
# -
# Let's verify on a single triplet example that the squared L2 distance between its anchor-positive pair is smaller than the distance between its anchor-negative pair.
# +
def distance(emb1, emb2):
return np.sum(np.square(emb1 - emb2))
def show_pair(idx1, idx2):
plt.figure(figsize=(8,3))
plt.suptitle(f'Distance = {distance(embedded[idx1], embedded[idx2]):.2f}')
plt.subplot(121)
plt.imshow(load_image(metadata[idx1].image_path()))
plt.subplot(122)
plt.imshow(load_image(metadata[idx2].image_path()));
show_pair(2, 3)
show_pair(2, 12)
# -
# As expected, the distance between the two images of <NAME> is smaller than the distance between an image of Jacques Chirac and an image of <NAME> (0.30 < 1.12). But we still do not know what distance threshold $\tau$ is the best boundary for making a decision between *same identity* and *different identity*.
# ### Distance threshold
# To find the optimal value for $\tau$, the face verification performance must be evaluated on a range of distance threshold values. At a given threshold, all possible embedding vector pairs are classified as either *same identity* or *different identity* and compared to the ground truth. Since we're dealing with skewed classes (many more negative pairs than positive pairs), we use the [F1 score](https://en.wikipedia.org/wiki/F1_score) as the evaluation metric instead of [accuracy](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html).
# +
from sklearn.metrics import f1_score, accuracy_score
distances = [] # squared L2 distance between pairs
identical = [] # 1 if same identity, 0 otherwise
num = len(metadata)
for i in range(num - 1):
    for j in range(i + 1, num):
distances.append(distance(embedded[i], embedded[j]))
identical.append(1 if metadata[i].name == metadata[j].name else 0)
distances = np.array(distances)
identical = np.array(identical)
thresholds = np.arange(0.3, 1.0, 0.01)
f1_scores = [f1_score(identical, distances < t) for t in thresholds]
acc_scores = [accuracy_score(identical, distances < t) for t in thresholds]
opt_idx = np.argmax(f1_scores)
# Threshold at maximal F1 score
opt_tau = thresholds[opt_idx]
# Accuracy at maximal F1 score
opt_acc = accuracy_score(identical, distances < opt_tau)
# Plot F1 score and accuracy as function of distance threshold
plt.plot(thresholds, f1_scores, label='F1 score');
plt.plot(thresholds, acc_scores, label='Accuracy');
plt.axvline(x=opt_tau, linestyle='--', lw=1, c='lightgrey', label='Threshold')
plt.title(f'Accuracy at threshold {opt_tau:.2f} = {opt_acc:.3f}');
plt.xlabel('Distance threshold')
plt.legend();
# -
# The face verification accuracy at $\tau$ = 0.56 is 95.7%. This is not bad given a baseline of 89% for a classifier that always predicts *different identity* (there are 980 pos. pairs and 8821 neg. pairs) but since nn4.small2.v1 is a relatively small model it is still less than what can be achieved by state-of-the-art models (> 99%).
#
# The following two histograms show the distance distributions of positive and negative pairs and the location of the decision boundary. There is a clear separation of these distributions which explains the discriminative performance of the network. One can also spot some strong outliers in the positive pairs class but these are not further analyzed here.
# +
dist_pos = distances[identical == 1]
dist_neg = distances[identical == 0]
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.hist(dist_pos)
plt.axvline(x=opt_tau, linestyle='--', lw=1, c='lightgrey', label='Threshold')
plt.title('Distances (pos. pairs)')
plt.legend();
plt.subplot(122)
plt.hist(dist_neg)
plt.axvline(x=opt_tau, linestyle='--', lw=1, c='lightgrey', label='Threshold')
plt.title('Distances (neg. pairs)')
plt.legend();
# -
# ### Face recognition
# Given an estimate of the distance threshold $\tau$, face recognition is now as simple as calculating the distances between an input embedding vector and all embedding vectors in a database. The input is assigned the label (i.e. identity) of the database entry with the smallest distance if it is less than $\tau$ or label *unknown* otherwise. This procedure can also scale to large databases as it can be easily parallelized. It also supports one-shot learning, as adding only a single entry of a new identity might be sufficient to recognize new examples of that identity.
#
# A more robust approach is to label the input using the top $k$ scoring entries in the database, which is essentially [KNN classification](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) with a Euclidean distance metric. Alternatively, a linear [support vector machine](https://en.wikipedia.org/wiki/Support_vector_machine) (SVM) can be trained with the database entries and used to classify, i.e. identify, new inputs. For training these classifiers we use 50% of the dataset, and the other 50% for evaluation.
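# The threshold-based nearest-entry lookup described above is not implemented in this notebook; the following is a minimal sketch of it (the function and variable names, the toy 2-D embeddings, and the threshold value are illustrative assumptions, not part of the original code):

```python
import numpy as np

def recognize(query_emb, db_embs, db_labels, tau=0.56):
    # Nearest-database-entry recognition: return the label of the closest
    # entry if its squared L2 distance is below tau, otherwise 'unknown'.
    dists = np.sum(np.square(db_embs - query_emb), axis=1)
    best = int(np.argmin(dists))
    return db_labels[best] if dists[best] < tau else 'unknown'

# toy database: two identities in a 2-D embedding space
db = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = ['alice', 'bob']
print(recognize(np.array([0.1, 0.0]), db, labels))  # -> alice
print(recognize(np.array([5.0, 5.0]), db, labels))  # -> unknown
```

# Because only distances to stored embeddings are needed, adding one entry for a new identity is enough to start recognizing it, which is the one-shot property mentioned above.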
# +
from sklearn.preprocessing import LabelEncoder
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
targets = np.array([m.name for m in metadata])
encoder = LabelEncoder()
encoder.fit(targets)
# Numerical encoding of identities
y = encoder.transform(targets)
train_idx = np.arange(metadata.shape[0]) % 2 != 0
test_idx = np.arange(metadata.shape[0]) % 2 == 0
# 50 train examples of 10 identities (5 examples each)
X_train = embedded[train_idx]
# 50 test examples of 10 identities (5 examples each)
X_test = embedded[test_idx]
y_train = y[train_idx]
y_test = y[test_idx]
knn = KNeighborsClassifier(n_neighbors=1, metric='euclidean')
svc = LinearSVC()
knn.fit(X_train, y_train)
svc.fit(X_train, y_train)
acc_knn = accuracy_score(y_test, knn.predict(X_test))
acc_svc = accuracy_score(y_test, svc.predict(X_test))
print(f'KNN accuracy = {acc_knn}, SVM accuracy = {acc_svc}')
# -
# The KNN classifier achieves an accuracy of 96% on the test set, the SVM classifier 98%. Let's use the SVM classifier to illustrate face recognition on a single example.
# +
import warnings
# Suppress LabelEncoder warning
warnings.filterwarnings('ignore')
example_idx = 29
example_image = load_image(metadata[test_idx][example_idx].image_path())
example_prediction = svc.predict([embedded[test_idx][example_idx]])
example_identity = encoder.inverse_transform(example_prediction)[0]
plt.imshow(example_image)
plt.title(f'Recognized as {example_identity}');
# -
# Seems reasonable :-) One should actually check whether (a subset of) the database entries of the predicted identity have a distance less than $\tau$, and assign an *unknown* label otherwise. This step is skipped here but can easily be added.
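# Such a post-check on a classifier prediction could look like the following minimal sketch (function names, toy embeddings, and the threshold are illustrative assumptions):

```python
import numpy as np

def verify_prediction(query_emb, pred_label, db_embs, db_labels, tau=0.56):
    # Accept the predicted identity only if at least one database entry of
    # that identity lies within the distance threshold tau; otherwise fall
    # back to an 'unknown' label.
    mask = np.array([l == pred_label for l in db_labels])
    dists = np.sum(np.square(db_embs[mask] - query_emb), axis=1)
    return pred_label if dists.size and dists.min() < tau else 'unknown'

db = np.array([[0.0, 0.0], [1.0, 1.0], [1.1, 0.9]])
labels = ['alice', 'bob', 'bob']
print(verify_prediction(np.array([0.05, 0.0]), 'alice', db, labels))  # accepted
print(verify_prediction(np.array([3.0, 3.0]), 'bob', db, labels))     # rejected
```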
#
#
# ### Dataset visualization
# To embed the dataset into 2D space for displaying identity clusters, [t-distributed Stochastic Neighbor Embedding](https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding) (t-SNE) is applied to the 128-dimensional embedding vectors. Except for a few outliers, identity clusters are well separated.
# +
from sklearn.manifold import TSNE
X_embedded = TSNE(n_components=2).fit_transform(embedded)
for i, t in enumerate(set(targets)):
idx = targets == t
plt.scatter(X_embedded[idx, 0], X_embedded[idx, 1], label=t)
plt.legend(bbox_to_anchor=(1, 1));
# -
# ### References
#
# - [1] [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/abs/1503.03832)
# - [2] [Going Deeper with Convolutions](https://arxiv.org/abs/1409.4842)
| face-recognition.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Sustainable energy transitions data model
import pandas as pd, numpy as np, json, copy, zipfile, random, requests, StringIO
import matplotlib.pyplot as plt
# %matplotlib inline
plt.style.use('ggplot')
from IPython.core.display import Image
Image('favicon.png')
# ## Country and region name converters
# +
#country name converters
#EIA->pop
clist1={'North America':'Northern America',
'United States':'United States of America',
'Central & South America':'Latin America and the Caribbean',
'Bahamas, The':'Bahamas',
'Saint Vincent/Grenadines':'Saint Vincent and the Grenadines',
'Venezuela':'Venezuela (Bolivarian Republic of)',
'Macedonia':'The former Yugoslav Republic of Macedonia',
'Moldova':'Republic of Moldova',
'Russia':'Russian Federation',
'Iran':'Iran (Islamic Republic of)',
'Palestinian Territories':'State of Palestine',
'Syria':'Syrian Arab Republic',
'Yemen':'Yemen ',
'Congo (Brazzaville)':'Congo',
'Congo (Kinshasa)':'Democratic Republic of the Congo',
'Cote dIvoire (IvoryCoast)':"C\xc3\xb4te d'Ivoire",
'Gambia, The':'Gambia',
'Libya':'Libyan Arab Jamahiriya',
'Reunion':'R\xc3\xa9union',
'Somalia':'Somalia ',
'Sudan and South Sudan':'Sudan',
'Tanzania':'United Republic of Tanzania',
'Brunei':'Brunei Darussalam',
'Burma (Myanmar)':'Myanmar',
'Hong Kong':'China, Hong Kong Special Administrative Region',
'Korea, North':"Democratic People's Republic of Korea",
'Korea, South':'Republic of Korea',
'Laos':"Lao People's Democratic Republic",
'Macau':'China, Macao Special Administrative Region',
'Timor-Leste (East Timor)':'Timor-Leste',
'Virgin Islands, U.S.':'United States Virgin Islands',
'Vietnam':'Viet Nam'}
#BP->pop
clist2={u' European Union #':u'Europe',
u'Rep. of Congo (Brazzaville)':u'Congo (Brazzaville)',
'Republic of Ireland':'Ireland',
'China Hong Kong SAR':'China, Hong Kong Special Administrative Region',
u'Total Africa':u'Africa',
u'Total North America':u'Northern America',
u'Total S. & Cent. America':'Latin America and the Caribbean',
u'Total World':u'World',
u'Total World ':u'World',
'South Korea':'Republic of Korea',
u'Trinidad & Tobago':u'Trinidad and Tobago',
u'US':u'United States of America'}
#WD->pop
clist3={u"Cote d'Ivoire":"C\xc3\xb4te d'Ivoire",
u'Congo, Rep.':u'Congo (Brazzaville)',
u'Caribbean small states':'Carribean',
u'East Asia & Pacific (all income levels)':'Eastern Asia',
u'Egypt, Arab Rep.':'Egypt',
u'European Union':u'Europe',
u'Hong Kong SAR, China':u'China, Hong Kong Special Administrative Region',
u'Iran, Islamic Rep.':u'Iran (Islamic Republic of)',
u'Kyrgyz Republic':u'Kyrgyzstan',
u'Korea, Rep.':u'Republic of Korea',
u'Latin America & Caribbean (all income levels)':'Latin America and the Caribbean',
u'Macedonia, FYR':u'The former Yugoslav Republic of Macedonia',
u'Korea, Dem. Rep.':u"Democratic People's Republic of Korea",
u'South Asia':u'Southern Asia',
u'Sub-Saharan Africa (all income levels)':u'Sub-Saharan Africa',
u'Slovak Republic':u'Slovakia',
u'Venezuela, RB':u'Venezuela (Bolivarian Republic of)',
u'Yemen, Rep.':u'Yemen ',
u'Congo, Dem. Rep.':u'Democratic Republic of the Congo'}
#COMTRADE->pop
clist4={u"Bosnia Herzegovina":"Bosnia and Herzegovina",
u'Central African Rep.':u'Central African Republic',
u'China, Hong Kong SAR':u'China, Hong Kong Special Administrative Region',
u'China, Macao SAR':u'China, Macao Special Administrative Region',
u'Czech Rep.':u'Czech Republic',
u"Dem. People's Rep. of Korea":"Democratic People's Republic of Korea",
u'Dem. Rep. of the Congo':"Democratic Republic of the Congo",
u'Dominican Rep.':u'Dominican Republic',
u'Fmr Arab Rep. of Yemen':u'Yemen ',
u'Fmr Ethiopia':u'Ethiopia',
u'Fmr Fed. Rep. of Germany':u'Germany',
u'Fmr Panama, excl.Canal Zone':u'Panama',
u'Fmr Rep. of Vietnam':u'Viet Nam',
u"Lao People's Dem. Rep.":u"Lao People's Democratic Republic",
u'Occ. Palestinian Terr.':u'State of Palestine',
u'Rep. of Korea':u'Republic of Korea',
u'Rep. of Moldova':u'Republic of Moldova',
u'Serbia and Montenegro':u'Serbia',
u'US Virgin Isds':u'United States Virgin Islands',
u'Solomon Isds':u'Solomon Islands',
u'United Rep. of Tanzania':u'United Republic of Tanzania',
u'TFYR of Macedonia':u'The former Yugoslav Republic of Macedonia',
u'USA':u'United States of America',
u'USA (before 1981)':u'United States of America',
}
#Jacobson->pop
clist5={u"Korea, Democratic People's Republic of":"Democratic People's Republic of Korea",
u'All countries':u'World',
u"Cote d'Ivoire":"C\xc3\xb4te d'Ivoire",
u'Iran, Islamic Republic of':u'Iran (Islamic Republic of)',
u'Macedonia, Former Yugoslav Republic of':u'The former Yugoslav Republic of Macedonia',
u'Congo, Democratic Republic of':u"Democratic Republic of the Congo",
u'Korea, Republic of':u'Republic of Korea',
u'Tanzania, United Republic of':u'United Republic of Tanzania',
u'Moldova, Republic of':u'Republic of Moldova',
u'Hong Kong, China':u'China, Hong Kong Special Administrative Region',
u'All countries.1':"World"
}
#NREL solar->pop
clist6={u"Antigua & Barbuda":u'Antigua and Barbuda',
u"Bosnia & Herzegovina":u"Bosnia and Herzegovina",
u"Brunei":u'Brunei Darussalam',
u"Cote d'Ivoire":"C\xc3\xb4te d'Ivoire",
u"Iran":u'Iran (Islamic Republic of)',
u"Laos":u"Lao People's Democratic Republic",
u"Libya":'Libyan Arab Jamahiriya',
u"Moldova":u'Republic of Moldova',
u"North Korea":"Democratic People's Republic of Korea",
u"Reunion":'R\xc3\xa9union',
u'Sao Tome & Principe':u'Sao Tome and Principe',
u'Solomon Is.':u'Solomon Islands',
u'St. Lucia':u'Saint Lucia',
u'St. Vincent & the Grenadines':u'Saint Vincent and the Grenadines',
u'The Bahamas':u'Bahamas',
u'The Gambia':u'Gambia',
u'Virgin Is.':u'United States Virgin Islands',
u'West Bank':u'State of Palestine'
}
#NREL wind->pop
clist7={u"Antigua & Barbuda":u'Antigua and Barbuda',
u"Bosnia & Herzegovina":u"Bosnia and Herzegovina",
u'Occupied Palestinian Territory':u'State of Palestine',
u'China Macao SAR':u'China, Macao Special Administrative Region',
#"C\xc3\xb4te d'Ivoire":"C\xc3\xb4te d'Ivoire",
u'East Timor':u'Timor-Leste',
u'TFYR Macedonia':u'The former Yugoslav Republic of Macedonia',
u'IAM-country Total':u'World'
}
#country entroids->pop
clist8={u'Burma':'Myanmar',
u"Cote d'Ivoire":"C\xc3\xb4te d'Ivoire",
u'Republic of the Congo':u'Congo (Brazzaville)',
u'Reunion':'R\xc3\xa9union'
}
def cnc(country):
if country in clist1: return clist1[country]
elif country in clist2: return clist2[country]
elif country in clist3: return clist3[country]
elif country in clist4: return clist4[country]
elif country in clist5: return clist5[country]
elif country in clist6: return clist6[country]
elif country in clist7: return clist7[country]
elif country in clist8: return clist8[country]
else: return country
# -
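# The cascade of `elif` lookups in `cnc` above is equivalent to a single merged-dictionary lookup with a default. A minimal sketch (the sample entries below are a tiny illustrative subset, not the full clist1..clist8 tables):

```python
# Two illustrative converter dicts standing in for clist1..clist8 above.
clist_a = {'Russia': 'Russian Federation', 'US': 'United States of America'}
clist_b = {'South Korea': 'Republic of Korea'}

merged = {}
for d in (clist_a, clist_b):  # same precedence order as the elif chain
    for k, v in d.items():
        merged.setdefault(k, v)  # first match wins, like the elif cascade

def cnc_sketch(country):
    # fall through to the unchanged name, as cnc() does in its else branch
    return merged.get(country, country)

print(cnc_sketch('Russia'))   # -> Russian Federation
print(cnc_sketch('France'))   # -> France
```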
# # Population
# Consult the notebook entitled *pop.ipynb* for the details of mining the data from the UN statistics division online database.
# Since it serves as the reference database for country names, the cell below needs to be run first, before any other database is loaded.
try:
import zlib
compression = zipfile.ZIP_DEFLATED
except:
compression = zipfile.ZIP_STORED
#pop_path='https://dl.dropboxusercontent.com/u/531697/datarepo/Set/db/
pop_path='E:/Dropbox/Public/datarepo/Set/db/'
#suppress warnings
import warnings
warnings.simplefilter(action = "ignore")
cc=pd.read_excel(pop_path+'Country Code and Name ISO2 ISO3.xls')
#http://unstats.un.org/unsd/tradekb/Attachment321.aspx?AttachmentType=1
ccs=cc['Country Code'].values
neighbors=pd.read_csv(pop_path+'contry-geotime.csv')
#https://raw.githubusercontent.com/ppKrauss/country-geotime/master/data/contry-geotime.csv
#country name converter from iso to comtrade and back
iso2c={}
isoc2={}
for i in cc.T.iteritems():
iso2c[i[1][0]]=i[1][1]
isoc2[i[1][1]]=i[1][0]
#country name converter from pop to iso
pop2iso={}
for i in cc.T.iteritems():
pop2iso[cnc(i[1][1])]=int(i[1][0])
#country name converter from alpha 2 to iso
c2iso={}
for i in neighbors.T.iteritems():
c2iso[str(i[1][0])]=i[1][1]
c2iso['NA']=c2iso['nan'] #adjust for namibia
c2iso.pop('nan');
#create country neighbor adjacency list based on iso country number codes
c2neighbors={}
for i in neighbors.T.iteritems():
z=str(i[1][4]).split(' ')
if (str(i[1][1])!='nan'): c2neighbors[int(i[1][1])]=[c2iso[k] for k in z if k!='nan']
#extend iso codes not yet encountered
iso2c[729]="Sudan"
iso2c[531]="Curacao"
iso2c[535]="Bonaire, Sint Eustatius and Saba"
iso2c[728]="South Sudan"
iso2c[534]="Sint Maarten (Dutch part)"
iso2c[652]="Saint Barthélemy"
#load h2 min
h2=json.loads(file(pop_path+'h2.json','r').read())
#load tradealpha d
#predata=json.loads(file(pop_path+'/trade/traded.json','r').read())
predata=json.loads(file(pop_path+'/trade/smalltrade.json','r').read())
tradealpha={}
for c in predata:
tradealpha[c]={}
for year in predata[c]:
tradealpha[c][int(year)]=predata[c][year]
predata={}
#load savedata
predata=json.loads(file(pop_path+'savedata6.json','r').read())
data={}
for c in predata:
data[c]={}
for year in predata[c]:
data[c][int(year)]=predata[c][year]
predata={}
#load grids
grid=json.loads(file(pop_path+'grid.json','r').read())
grid5=json.loads(file(pop_path+'grid5.json','r').read())
gridz=json.loads(file(pop_path+'gridz.json','r').read())
gridz5=json.loads(file(pop_path+'gridz5.json','r').read())
#load ndists
ndists=json.loads(file(pop_path+'ndists.json','r').read())
distancenorm=7819.98
#load goodcountries
goodcountries=list(set(data.keys()).intersection(set(tradealpha.keys())))
#goodcountries=goodcountries[:20] #dev
rgc={} #reverse goodcountries coder
for i in range(len(goodcountries)):
rgc[goodcountries[i]]=i
cid={} #country name to index coder (same mapping as rgc, kept for readability)
for i in range(len(goodcountries)):
cid[goodcountries[i]]=i
def save3(sd,countrylist=[]):
#if True:
print 'saving... ',sd,
popsave={}
countries=[]
if countrylist==[]:
c=sorted(goodcountries)
else: c=countrylist
for country in c:
popdummy={}
tosave=[]
for year in data[country]:
popdummy[year]=data[country][year]['population']
for fuel in data[country][year]['energy']:
#for fuel in allfuels:
if fuel not in {'nrg','nrg_sum'}:
tosave.append({"t":year,"u":fuel,"g":"f","q1":"pp","q2":999,
"s":round(0 if (('navg3' in data[country][year]['energy'][fuel]['prod']) \
and (np.isnan(data[country][year]['energy'][fuel]['prod']['navg3']))) else \
data[country][year]['energy'][fuel]['prod']['navg3'] if \
'navg3' in data[country][year]['energy'][fuel]['prod'] else 0,3)
})
tosave.append({"t":year,"u":fuel,"g":"m","q1":"cc","q2":999,
"s":round(0 if (('navg3' in data[country][year]['energy'][fuel]['cons']) \
and (np.isnan(data[country][year]['energy'][fuel]['cons']['navg3']))) else \
data[country][year]['energy'][fuel]['cons']['navg3'] if \
'navg3' in data[country][year]['energy'][fuel]['cons'] else 0,3)
})
#save balances - only for dev
#if (year > min(balance.keys())):
# if year in balance:
# if country in balance[year]:
# tosave.append({"t":year,"u":"balance","g":"m","q1":"cc","q2":999,
# "s":balance[year][country]})
#no import export flows on global
if country not in {"World"}:
flowg={"Import":"f","Export":"m","Re-Export":"m","Re-Import":"f"}
if country in tradealpha:
for year in tradealpha[country]:
for fuel in tradealpha[country][year]:
for flow in tradealpha[country][year][fuel]:
for partner in tradealpha[country][year][fuel][flow]:
tosave.append({"t":int(float(year)),"u":fuel,"g":flowg[flow],"q1":flow,"q2":partner,
"s":round(tradealpha[country][year][fuel][flow][partner],3)
})
popsave[country]=popdummy
countries.append(country)
file('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/data.json','w').write(json.dumps(tosave))
zf = zipfile.ZipFile('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/'+str(country.encode('utf-8').replace('/','&&'))+'.zip', mode='w')
zf.write('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/data.json','data.json',compress_type=compression)
zf.close()
#save all countries list
file('E:/Dropbox/Public/datarepo/Set/universal/countries.json','w').write(json.dumps(countries))
#save countries populations
#file('E:/Dropbox/Public/datarepo/Set/json/pop.json','w').write(json.dumps(popsave))
print ' done'
# ## Impex updating
# +
def updatenormimpex(reporter,partner,flow,value,weight=0.1):
global nimportmatrix
global nexportmatrix
global nrimportmatrix
global nrexportmatrix
i=cid[reporter]
j=cid[partner]
if flow in {"Export","Re-Export"}:
nexportmatrix[i][j]=(nexportmatrix[i][j]*(1-weight))+(value*weight)
nrimportmatrix[j][i]=(nrimportmatrix[j][i]*(1-weight))+(value*weight)
if flow in {"Import","Re-Import"}:
nimportmatrix[i][j]=(nimportmatrix[i][j]*(1-weight))+(value*weight)
nrexportmatrix[j][i]=(nrexportmatrix[j][i]*(1-weight))+(value*weight)
return
def influence(reporter,partner,selfinfluence=1.0,expfactor=3.0):
#country trade influence will tend to have an exponential distribution, therefore we convert to linear
#with a strength of expfactor
i=cid[reporter]
j=cid[partner]
if i==j: return selfinfluence
else: return (12.0/36*nimportmatrix[i][j]\
+6.0/36*nexportmatrix[j][i]\
+4.0/36*nrimportmatrix[i][j]\
+2.0/36*nrexportmatrix[j][i]\
+6.0/36*nexportmatrix[i][j]\
+3.0/36*nimportmatrix[j][i]\
+2.0/36*nrexportmatrix[i][j]\
+1.0/36*nrimportmatrix[j][i])**(1.0/expfactor)
# -
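# The update rule in `updatenormimpex` above is an exponential moving average: each observation moves the stored matrix entry a fraction `weight` toward the new value. A standalone sketch of just that rule (names are illustrative):

```python
def ema_update(old, value, weight=0.1):
    # One exponential-moving-average step, as used for the impex matrices:
    # the stored entry moves a fraction `weight` toward the new observation.
    return old * (1 - weight) + value * weight

x = 0.0
for obs in [1.0, 1.0, 1.0]:
    x = ema_update(x, obs)
print(round(x, 3))  # -> 0.271, i.e. 1 - 0.9**3
```

# With `weight=0.1`, older observations decay geometrically, so repeated reports from the trade data gradually dominate stale matrix entries.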
#load ! careful, need to rebuild index if tradealpha or data changes
predata=json.loads(file(pop_path+'trade/nimpex.json','r').read())
nexportmatrix=predata["nexport"]
nimportmatrix=predata["nimport"]
nrexportmatrix=predata["nrexport"]
nrimportmatrix=predata["nrimport"]
predata={}
import scipy
import pylab
import scipy.cluster.hierarchy as sch
import matplotlib as mpl
import matplotlib.font_manager as font_manager
from matplotlib.ticker import NullFormatter
path = 'Inconsolata-Bold.ttf'
prop = font_manager.FontProperties(fname=path)
labeler=json.loads(file(pop_path+'../universal/labeler.json','r').read())
isoico=json.loads(file(pop_path+'../universal/isoico.json','r').read())
risoico=json.loads(file(pop_path+'../universal/risoico.json','r').read())
def dendro(sd='00',selfinfluence=1.0,expfactor=3.0):
returnmatrix=scipy.zeros([len(goodcountries),len(goodcountries)])
matrix=scipy.zeros([len(goodcountries),len(goodcountries)])
global labs
global labsorder
global labs2
global labs3
labs=[]
labs2=[]
labs3=[]
for i in range(len(goodcountries)):
labs.append(labeler[goodcountries[i]])
labsorder = pd.Series(np.array(labs)) #create labelorder
labsorder=labsorder.rank(method='dense').values.astype(int)-1
alphabetvector=[0 for i in range(len(labsorder))]
for i in range(len(labsorder)):
alphabetvector[labsorder[i]]=i
labs=[]
for i in range(len(goodcountries)):
labs.append(labeler[goodcountries[alphabetvector[i]]])
labs2.append(goodcountries[alphabetvector[i]])
labs3.append(isoico[goodcountries[alphabetvector[i]]])
for j in alphabetvector:
matrix[i][j]=influence(goodcountries[alphabetvector[i]],goodcountries[alphabetvector[j]],selfinfluence,expfactor)
returnmatrix[i][j]=influence(goodcountries[i],goodcountries[j],selfinfluence,expfactor)
title=u'Partner Importance of COLUMN Country for ROW Country in Energy Trade [self-influence $q='+\
str(selfinfluence)+'$, power factor $p='+str(expfactor)+'$]'
#cmap=plt.get_cmap('RdYlGn_r') #for logplot
cmap=plt.get_cmap('YlGnBu')
labelpad=32
# Generate random features and distance matrix.
D = scipy.zeros([len(matrix),len(matrix)])
for i in range(len(matrix)):
for j in range(len(matrix)):
D[i,j] =matrix[i][j]
# Compute and plot first dendrogram.
fig = pylab.figure(figsize=(17,15))
sch.set_link_color_palette(10*["#ababab"])
# Plot original matrix.
axmatrix = fig.add_axes([0.3,0.1,0.6,0.6])
im = axmatrix.matshow(D[::-1], aspect='equal', origin='lower', cmap=cmap)
#im = axmatrix.matshow(E[::-1], aspect='auto', origin='lower', cmap=cmap) #for logplot
axmatrix.set_xticks([])
axmatrix.set_yticks([])
# Plot colorbar.
axcolor = fig.add_axes([0.87,0.1,0.02,0.6])
pylab.colorbar(im, cax=axcolor)
# Label up
axmatrix.set_xticks(range(len(matrix)))
mlabs=list(labs)
for i in range(len(labs)):
kz='-'
for k in range(labelpad-len(labs[i])):kz+='-'
if i%2==1: mlabs[i]=kz+u' '+labs[i]+u' '+'-'
else: mlabs[i]='-'+u' '+labs[i]+u' '+kz
axmatrix.set_xticklabels(mlabs, minor=False,fontsize=7,fontproperties=prop)
axmatrix.xaxis.set_label_position('top')
axmatrix.xaxis.tick_top()
pylab.xticks(rotation=-90, fontsize=8)
axmatrix.set_yticks(range(len(matrix)))
mlabs=list(labs)
for i in range(len(labs)):
kz='-'
for k in range(labelpad-len(labs[i])):kz+='-'
if i%2==0: mlabs[i]=kz+u' '+labs[i]+u' '+'-'
else: mlabs[i]='-'+u' '+labs[i]+u' '+kz
axmatrix.set_yticklabels(mlabs[::-1], minor=False,fontsize=7,fontproperties=prop)
axmatrix.yaxis.set_label_position('left')
axmatrix.yaxis.tick_left()
xlabels = axmatrix.get_xticklabels()
for label in range(len(xlabels)):
xlabels[label].set_rotation(90)
axmatrix.text(1.1, 0.5, title,
horizontalalignment='left',
verticalalignment='center',rotation=270,
transform=axmatrix.transAxes,size=10)
axmatrix.xaxis.grid(False)
axmatrix.yaxis.grid(False)
plt.savefig('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/'+'si'+str(selfinfluence)+'expf'+str(expfactor)+'dendrogram.png',dpi=150,bbox_inches = 'tight', pad_inches = 0.1, )
plt.close()
m1='centroid'
m2='single'
# Compute and plot first dendrogram.
fig = pylab.figure(figsize=(17,15))
ax1 = fig.add_axes([0.1245,0.1,0.1,0.6])
Y = sch.linkage(D, method=m1)
Z1 = sch.dendrogram(Y,above_threshold_color="#ababab", orientation='left')
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_axis_bgcolor('None')
# Compute and plot second dendrogram.
ax2 = fig.add_axes([0.335,0.825,0.5295,0.1])
Y = sch.linkage(D, method=m2)
Z2 = sch.dendrogram(Y,above_threshold_color="#ababab")
ax2.set_xticks([])
ax2.set_yticks([])
ax2.set_axis_bgcolor('None')
# Plot distance matrix.
axmatrix = fig.add_axes([0.3,0.1,0.6,0.6])
idx1 = Z1['leaves']
idx2 = Z2['leaves']
#D = E[idx1,:] #for logplot
D = D[idx1,:]
D = D[:,idx2]
im = axmatrix.matshow(D, aspect='equal', origin='lower', cmap=cmap)
axmatrix.set_xticks([])
axmatrix.set_yticks([])
# Plot colorbar.
axcolor = fig.add_axes([0.87,0.1,0.02,0.6])
ac=pylab.colorbar(im, cax=axcolor)
# Label up
axmatrix.set_xticks(np.arange(len(matrix))-0)
mlabs=list(np.array(labs)[idx2])
for i in range(len(np.array(labs)[idx2])):
kz='-'
for k in range(labelpad-len(np.array(labs)[idx2][i])):kz+='-'
if i%2==1: mlabs[i]=kz+u' '+np.array(labs)[idx2][i]+u' '+'-'
else: mlabs[i]='-'+u' '+np.array(labs)[idx2][i]+u' '+kz
axmatrix.set_xticklabels(mlabs, minor=False,fontsize=7,fontproperties=prop)
axmatrix.xaxis.set_label_position('top')
axmatrix.xaxis.tick_top()
pylab.xticks(rotation=-90, fontsize=8)
axmatrix.set_yticks(np.arange(len(matrix))+0)
mlabs=list(np.array(labs)[idx1])
for i in range(len(np.array(labs)[idx1])):
kz='-'
for k in range(labelpad-len(np.array(labs)[idx1][i])):kz+='-'
if i%2==0: mlabs[i]=kz+u' '+np.array(labs)[idx1][i]+u' '+'-'
else: mlabs[i]='-'+u' '+np.array(labs)[idx1][i]+u' '+kz
axmatrix.set_yticklabels(mlabs, minor=False,fontsize=7,fontproperties=prop)
axmatrix.yaxis.set_label_position('left')
axmatrix.yaxis.tick_left()
xlabels = axmatrix.get_xticklabels()
for label in xlabels:
label.set_rotation(90)
axmatrix.text(1.11, 0.5, title,
horizontalalignment='left',
verticalalignment='center',rotation=270,
transform=axmatrix.transAxes,size=10)
axmatrix.xaxis.grid(False)
axmatrix.yaxis.grid(False)
plt.savefig('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/'+'si'+str(selfinfluence)+'expf'+str(expfactor)+'dendrogram2.png',dpi=150,bbox_inches = 'tight', pad_inches = 0.1, )
plt.close()
return [returnmatrix,returnmatrix.T]
# ##################################
#run once
GC=[] #create backup of global country list
for i in goodcountries: GC.append(i)
file('E:/Dropbox/Public/datarepo/Set/db/GC.json','w').write(json.dumps(GC))
#create mini-world
goodcountries=["Austria","Germany","Hungary","France","Spain",
"United Kingdom","Morocco","Algeria","Denmark","United States of America","Japan","Saudi Arabia"]
goodcountries=GC
goodcountries2=["United States of America",#mostinfluential
"Russian Federation",
"Netherlands",
"United Kingdom",
"Italy",
"France",
"Saudi Arabia",
"Singapore",
"Germany",
"United Arab Emirates",
"China",
"India",
"Iran (Islamic Republic of)",
"Nigeria",
"Venezuela (Bolivarian Republic of)",
"South Africa"]
# ######################################
# +
#[importancematrix,influencematrix]=dendro('00',1,5)
# -
c=['seaGreen','royalBlue','#dd1c77']
levels=[1,3,5]
toplot=[cid[i] for i in goodcountries2]
tolabel=[labeler[i] for i in goodcountries2]
fig,ax=plt.subplots(1,2,figsize=(12,5))
for j in range(len(levels)):
[importancematrix,influencematrix]=dendro('00',1,levels[j])
z=[np.mean(i) for i in influencematrix] #mean country influence over columns
#if you wanted weighted influence, introduce weights (by trade volume i guess) here in the above mean
s = pd.Series(1/np.array(z)) #need to 1/ to create inverse order
s=s.rank(method='dense').values.astype(int)-1 #start from 0 not one
#s is a ranked array on which country ranks where in country influence
#we then composed the ordered vector of country influence
influencevector=[0 for i in range(len(s))]
for i in range(len(s)):
influencevector[s[i]]=i
zplot=[]
zplot2=[]
for i in toplot:
zplot.append(s[i]+1)
zplot2.append(z[i])
ax[0].scatter(np.array(zplot),np.arange(len(zplot))-0.2+0.2*j,40,color=c[j],label=u'$p='+str(levels[j])+'$')
ax[1].scatter(np.array(zplot2),np.arange(len(zplot))-0.2+0.2*j,40,color=c[j],label=u'$p='+str(levels[j])+'$')
ax[0].set_ylim(-1,len(toplot))
ax[1].set_ylim(-1,len(toplot))
ax[0].set_xlim(0,20)
ax[1].set_xscale('log')
ax[0].set_yticks(range(len(toplot)))
ax[0].set_yticklabels(tolabel)
ax[1].set_yticks(range(len(toplot)))
ax[1].set_yticklabels([])
ax[0].set_xlabel("Rank in Country Influence Vector")
ax[1].set_xlabel("Average Country Influence")
ax[1].legend(loc=1,framealpha=0)
plt.subplots_adjust(wspace=0.1)
plt.suptitle("Power Factor ($p$) Sensitivity of Country Influence",fontsize=14)
plt.savefig('powerfactor.png',dpi=150,bbox_inches = 'tight', pad_inches = 0.1, )
plt.show()
c=['seaGreen','royalBlue','#dd1c77']
levels=[1,3,5]
toplot=[cid[i] for i in goodcountries2]
tolabel=[labeler[i] for i in goodcountries2]
fig,ax=plt.subplots(1,2,figsize=(12,5))
for j in range(len(levels)):
[importancematrix,influencematrix]=dendro('00',1,levels[j])
z=[np.mean(i) for i in importancematrix] #mean country dependence over columns
#if you wanted weighted influence, introduce weights (by trade volume i guess) here in the above mean
s = pd.Series(1/np.array(z)) #need to 1/ to create inverse order
s=s.rank(method='dense').values.astype(int)-1 #start from 0 not one
#s is a ranked array on which country ranks where in country influence
#we then composed the ordered vector of country influence
influencevector=[0 for i in range(len(s))]
for i in range(len(s)):
influencevector[s[i]]=i
zplot=[]
zplot2=[]
for i in toplot:
zplot.append(s[i]+1)
zplot2.append(z[i])
ax[0].scatter(np.array(zplot),np.arange(len(zplot))-0.2+0.2*j,40,color=c[j],label=u'$p='+str(levels[j])+'$')
ax[1].scatter(np.array(zplot2),np.arange(len(zplot))-0.2+0.2*j,40,color=c[j],label=u'$p='+str(levels[j])+'$')
ax[0].set_ylim(-1,len(toplot))
ax[1].set_ylim(-1,len(toplot))
ax[0].set_xlim(0,20)
ax[1].set_xscale('log')
ax[0].set_yticks(range(len(toplot)))
ax[0].set_yticklabels(tolabel)
ax[1].set_yticks(range(len(toplot)))
ax[1].set_yticklabels([])
ax[0].set_xlabel("Rank in Country Dependence Vector")
ax[1].set_xlabel("Average Country Dependence")
ax[1].legend(loc=1,framealpha=0)
plt.subplots_adjust(wspace=0.1)
plt.suptitle("Power Factor ($p$) Sensitivity of Country Dependence",fontsize=14)
plt.savefig('powerfactor2.png',dpi=150,bbox_inches = 'tight', pad_inches = 0.1, )
plt.show()
# Create the energy cost matrix by filling it with the cost for the row country of importing 1 TWh from the column country. Neglecting transport energy costs for now, this is the extraction energy cost. Let us consider only solar for now. Try the optimization with all three sources and choose the one with the best objective value. The 1 TWh tier changes based on granularity.
#weighted resource class calculator
def re(dic,total):
if dic!={}:
i=max(dic.keys())
mi=min(dic.keys())
run=True
keys=[]
weights=[]
counter=0
while run:
counter+=1 #safety break
if counter>1000: run=False
if i in dic:
if total<dic[i]:
keys.append(i)
weights.append(total)
run=False
else:
total-=dic[i]
keys.append(i)
weights.append(dic[i])
i-=1
if i<mi: run=False
if sum(weights)==0: return 0
else: return np.average(keys,weights=weights)
else: return 0
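To make the behaviour of `re()` concrete, here is a compact standalone restatement with an illustrative example (the loop-free form and the numbers are mine, not from the notebook):

```python
import numpy as np

def weighted_resource_class(dic, total):
    """Average resource class needed to supply `total`, drawing from the best
    (highest) classes first -- a restatement of the re() helper above."""
    keys, weights = [], []
    remaining = total
    for cl in sorted(dic, reverse=True):
        take = min(remaining, dic[cl])
        if take > 0:
            keys.append(cl)
            weights.append(take)
            remaining -= take
        if remaining <= 0:
            break
    return np.average(keys, weights=weights) if sum(weights) else 0

# supplying 8 units from {class 3: 5 units, class 2: 10 units}
# uses all 5 units of class 3 and 3 units of class 2:
# (3*5 + 2*3) / 8 = 2.625
```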
region=pd.read_excel(pop_path+'regions.xlsx').set_index('Country')
#load
aroei=json.loads(file(pop_path+'aroei.json','r').read())
groei=json.loads(file(pop_path+'groei.json','r').read())
ndists=json.loads(file(pop_path+'ndists.json','r').read())
#average resource quality calculator for the globe
def update_aroei():
global aroei
aroei={}
groei={}
for c in res:
for r in res[c]:
if r not in groei: groei[r]={}
for cl in res[c][r]['res']:
if cl not in groei[r]: groei[r][cl]=0
groei[r][cl]+=res[c][r]['res'][cl]
for r in groei:
x=[]
y=[]
for i in range(len(sorted(groei[r].keys()))):
x.append(float(sorted(groei[r].keys())[i]))
y.append(float(groei[r][sorted(groei[r].keys())[i]]))
aroei[r]=np.average(x,weights=y)
#https://www.researchgate.net/publication/299824220_First_Insights_on_the_Role_of_solar_PV_in_a_100_Renewable_Energy_Environment_based_on_hourly_Modeling_for_all_Regions_globally
cost=pd.read_excel(pop_path+'/maps/storage.xlsx')
#1Bdi - grid
def normdistance(a,b):
return ndists[cid[a]][cid[b]]
def gridtestimator(country,partner,forceptl=False):
#return normdistance(country,partner)
def electricitytrade(country,partner):
scaler=1
gridpartners=grid5['electricity']
#existing trade partners
if ((partner in gridpartners[country]) or (country in gridpartners[partner])):
scaler+=cost.loc[region.loc[country]]['egrid'].values[0]/2.0
#neighbors, but need to build
elif pop2iso[country] in c2neighbors:
if (pop2iso[partner] in c2neighbors[pop2iso[country]]):
scaler+=cost.loc[region.loc[country]]['grid'].values[0]/2.0*normdistance(country,partner)
#not neighbors or partners but in the same region, need to build
elif (region.loc[country][0]==region.loc[partner][0]):
scaler+=cost.loc[region.loc[country]]['grid'].values[0]*3.0/2.0*normdistance(country,partner)
#need to build supergrid, superlative costs
else:
scaler+=cost.loc[region.loc[country]]['grid'].values[0]*10.0/2.0*normdistance(country,partner)
#need to build supergrid, superlative costs
else:
scaler+=cost.loc[region.loc[country]]['grid'].values[0]*10.0/2.0*normdistance(country,partner)
return scaler
def ptltrade(country,partner):
#ptg costs scale with distance
scaler=1+cost.loc[11]['ptg']*100.0*normdistance(country,partner)
return scaler
if ptltrade(country,partner)<electricitytrade(country,partner) or forceptl:
return {"scaler":ptltrade(country,partner),"tradeway":"ptl"}
else: return {"scaler":electricitytrade(country,partner),"tradeway":"grid"}
#1Bdii - storage &curtailment
def storagestimator(country):
return cost.loc[region.loc[country]]['min'].values[0]
#curtoversizer
def curtestimator(country):
return cost.loc[region.loc[country]]['curt'].values[0]
#global benchmark eroei, due to state of technology
eroei={
#'oil':13,
#'coal':27,
#'gas':14,
#'nuclear':10,
#'biofuels':1.5,
#'hydro':84,
#'geo_other':22,
'pv':17.6,
'csp':10.2,
'wind':20.2 #was 24
}
#without esoei
#calibrated from global
# # ALLINONE
#initialize renewable totals for learning
total2014={'csp':0,'solar':0,'wind':0}
learning={'csp':0.04,'solar':0.04,'wind':0.02}
year=2014
for fuel in total2014:
total2014[fuel]=np.nansum([np.nansum(data[partner][year]['energy'][fuel]['cons']['navg3'])\
for partner in goodcountries if fuel in data[partner][year]['energy']])
total2014
# +
#scenario id (folder id)
#first is scenario family, then do 4 variations of scenarios (2 selfinfluence, 2 power factor) as 01, 02...
sd='00' #only fossil profiles and non-scalable
#import resources
###################################
###################################
#load resources
predata=json.loads(file(pop_path+'maps/newres.json','r').read())
res={}
for c in predata:
res[c]={}
for f in predata[c]:
res[c][f]={}
for r in predata[c][f]:
res[c][f][r]={}
for year in predata[c][f][r]:
res[c][f][r][int(year)]=predata[c][f][r][year]
predata={}
print 'scenario',sd,'loaded resources',
###################################
###################################
#load demand2
predata=json.loads(file(pop_path+'demand2.json','r').read())
demand2={}
for c in predata:
demand2[c]={}
for year in predata[c]:
demand2[c][int(year)]=predata[c][year]
predata={}
print 'demand',
###################################
###################################
#load tradealpha d
#predata=json.loads(file(pop_path+'/trade/traded.json','r').read())
predata=json.loads(file(pop_path+'/trade/smalltrade.json','r').read())
tradealpha={}
for c in predata:
tradealpha[c]={}
for year in predata[c]:
tradealpha[c][int(year)]=predata[c][year]
predata={}
print 'tradedata',
###################################
###################################
#reload impex and normalize
predata=json.loads(file(pop_path+'trade/nimpex.json','r').read())
nexportmatrix=predata["nexport"]
nimportmatrix=predata["nimport"]
nrexportmatrix=predata["nrexport"]
nrimportmatrix=predata["nrimport"]
predata={}
print 'impex',
###################################
###################################
#load latest savedata
#we dont change the data for now, everything is handled through trade
predata=json.loads(file(pop_path+'savedata5.json','r').read())
data={}
for c in predata:
data[c]={}
for year in predata[c]:
data[c][int(year)]=predata[c][year]
predata={}
print 'data'
###################################
###################################
# -
save3('00') #save default
#reset balance
ybalance={}
#recalculate balances
for year in range(2015,2101):
balance={}
if year not in ybalance:ybalance[year]={}
for c in goodcountries:
balance[c]=0
if c in tradealpha:
f1=0
for fuel in tradealpha[c][year]:
if 'Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Import'].values())])
if 'Re-Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Re-Import'].values())])
if 'Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Export'].values())])
if 'Re-Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Re-Export'].values())])
if fuel in data[c][year]['energy']:
f1=np.nansum([f1,data[c][year]['energy'][fuel]['prod']['navg3']])
balance[c]-=f1
balance[c]+=demand2[c][year]*8760*1e-12
if 'balance' not in data[c][year]['energy']:
data[c][year]['energy']['balance']={'prod':{'navg3':0},'cons':{'navg3':0}}
data[c][year]['energy']['balance']['prod']['navg3']=max(0,balance[c])#balance can't be negative
data[c][year]['energy']['balance']['cons']['navg3']=max(0,balance[c])
ybalance[year]=balance
save3('01') #save default
def cbalance(year,c):
balance=0
if c in tradealpha:
f1=0
for fuel in tradealpha[c][year]:
if 'Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Import'].values())])
if 'Re-Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Re-Import'].values())])
if 'Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Export'].values())])
if 'Re-Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Re-Export'].values())])
if '_' in fuel:
fuel=fuel[fuel.find('_')+1:]
if fuel in data[c][year]['energy']:
f1=np.nansum([f1,data[c][year]['energy'][fuel]['prod']['navg3']])
balance-=f1
balance+=demand2[c][year]*8760*1e-12
return balance
def res_adv(country,fuel): #this country's wavg resource compared to global
x=[]
y=[]
if fuel=='solar':fuel='pv'
d=groei[fuel] #global wavg resource class
for i in range(len(sorted(d.keys()))):
if float(d[sorted(d.keys())[i]])>0.1:
x.append(float(sorted(d.keys())[i]))
y.append(float(d[sorted(d.keys())[i]]))
x2=[]
y2=[]
if country not in res: return 0
d2=res[country][fuel]['res'] #country's wavg resource class
for i in range(len(sorted(d2.keys()))):
if float(d2[sorted(d2.keys())[i]])>0.1:
x2.append(float(sorted(d2.keys())[i]))
y2.append(float(d2[sorted(d2.keys())[i]]))
if y2!=[]: return np.average(x2,weights=y2)*1.0/np.average(x,weights=y)
else: return 0
def costvectorranker(cv):
k={}
for i in cv:
for j in cv[i]:
k[(i)+'_'+str(j)]=cv[i][j]
return sorted(k.items(), key=lambda value: value[1])
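A standalone check of the ranker (example partners and costs are illustrative): it flattens the nested dict into `partner_fuel` keys and sorts ascending by cost, so the cheapest option comes first.

```python
def costvectorranker(cv):
    # flatten {partner: {fuel: cost}} into [('partner_fuel', cost), ...],
    # cheapest option first
    k = {}
    for partner in cv:
        for fuel in cv[partner]:
            k[partner + '_' + str(fuel)] = cv[partner][fuel]
    return sorted(k.items(), key=lambda item: item[1])

ranked = costvectorranker({'Spain': {'solar': 0.8, 'wind': 1.2},
                           'Norway': {'wind': 0.5}})
# [('Norway_wind', 0.5), ('Spain_solar', 0.8), ('Spain_wind', 1.2)]
```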
def trade(country,partner,y0,fuel,value,lifetime):
tradeable[partner][fuel]-=value
key=tradeway[country][partner]+'_'+fuel
for year in range(y0,min(2101,y0+lifetime)):
#add production
if fuel not in data[partner][year]['energy']:
data[partner][year]['energy'][fuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[partner][year]['energy'][fuel]['prod']['navg3']+=value
data[partner][year]['energy']['nrg_sum']['prod']['navg3']+=value
#add consumption
if fuel not in data[country][year]['energy']:
data[country][year]['energy'][fuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[country][year]['energy'][fuel]['cons']['navg3']+=value
data[country][year]['energy']['nrg_sum']['cons']['navg3']+=value
#add storage on country side (if not ptl)
if tradeway[country][partner]=='grid':
if fuel not in {'csp'}:
if 'storage' not in data[country][year]['energy']:
data[country][year]['energy']['storage']={'prod':{'navg3':0},'cons':{'navg3':0}}
data[country][year]['energy']['storage']['prod']['navg3']+=value*storagestimator(country)
data[country][year]['energy']['storage']['cons']['navg3']+=value*storagestimator(country)
if country!=partner:
#add import flow
if key not in tradealpha[country][year]:tradealpha[country][year][key]={}
if 'Import' not in tradealpha[country][year][key]:tradealpha[country][year][key]["Import"]={}
if str(pop2iso[partner]) not in tradealpha[country][year][key]["Import"]:
tradealpha[country][year][key]["Import"][str(pop2iso[partner])]=0
tradealpha[country][year][key]["Import"][str(pop2iso[partner])]+=value
#add export flow
if key not in tradealpha[partner][year]:tradealpha[partner][year][key]={}
if 'Export' not in tradealpha[partner][year][key]:tradealpha[partner][year][key]["Export"]={}
if str(pop2iso[country]) not in tradealpha[partner][year][key]["Export"]:
tradealpha[partner][year][key]["Export"][str(pop2iso[country])]=0
tradealpha[partner][year][key]["Export"][str(pop2iso[country])]+=value
def fill(cv,divfactor,divshare):
#trade diversification necessity
divbalance=balance*divshare
scaler=min(1.0,divbalance/\
sum([tradeable[cv[i][0][:cv[i][0].find('_')]]\
[cv[i][0][cv[i][0].find('_')+1:]] for i in range(divfactor)])) #take all or partial
for i in range(divfactor):
partner=cv[i][0][:cv[i][0].find('_')]
fuel=cv[i][0][cv[i][0].find('_')+1:]
trade(country,partner,year,fuel,max(0,tradeable[partner][fuel])*scaler,lifetime)
#trade rest
totrade=[]
tradesum=0
for i in range(len(cv)):
partner=cv[i][0][:cv[i][0].find('_')]
fuel=cv[i][0][cv[i][0].find('_')+1:]
if tradeable[partner][fuel]>balance*(1-divshare)-tradesum:
totrade.append((cv[i][0],balance*(1-divshare)-tradesum))
tradesum+=balance*(1-divshare)-tradesum
break
else:
totrade.append((cv[i][0],tradeable[partner][fuel]))
tradesum+=tradeable[partner][fuel]
if i==len(cv)-1:print 'not enough',year,country
for i in totrade:
partner=i[0][:i[0].find('_')]
fuel=i[0][i[0].find('_')+1:]
trade(country,partner,year,fuel,i[1],lifetime)
def nrgsum(country,year):
return np.nansum([data[country][year]['energy'][i]['prod']['navg3'] for i in data[country][year]['energy'] if i not in ['nrg_sum','sum','nrg']])
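With a mock `data` structure (the numbers are illustrative), the aggregation behaves like this standalone variant, which takes `data` as an argument instead of a global:

```python
import numpy as np

def nrgsum(data, country, year):
    # total production, skipping the aggregate keys excluded above
    return np.nansum([data[country][year]['energy'][f]['prod']['navg3']
                      for f in data[country][year]['energy']
                      if f not in ('nrg_sum', 'sum', 'nrg')])

mock = {'X': {2015: {'energy': {
    'solar':   {'prod': {'navg3': 2.0}},
    'wind':    {'prod': {'navg3': 3.0}},
    'nrg_sum': {'prod': {'navg3': 5.0}},  # aggregate, excluded from the sum
}}}}
# nrgsum(mock, 'X', 2015) -> 5.0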
[importancematrix,influencematrix]=dendro('03',4,3) #2,5, or 4,3
#load data - if already saved
predata=json.loads(file(pop_path+'savedata6.json','r').read())
data={}
for c in predata:
data[c]={}
for year in predata[c]:
data[c][int(year)]=predata[c][year]
predata=json.loads(file(pop_path+'/trade/smalltrade.json','r').read())
tradealpha={}
for c in predata:
tradealpha[c]={}
for year in predata[c]:
tradealpha[c][int(year)]=predata[c][year]
predata={}
# +
fc={"solar":'pv',"csp":'csp',"wind":'wind'}
divfactor=10 #min trade partners in diversification
divshare=0.2 #min share of the diversification
tradeway={}
maxrut=0.001 #for each type #max rampup total, if zero 5% of 1%
maxrur=0.5 #growth rate for each techno #max rampup rate
lifetime=20+int(random.random()*20)
for year in range(2015,2101):
tradeable={}
for i in range(len(goodcountries)):
country=goodcountries[i]
if country not in tradeable:tradeable[country]={'solar':0,'csp':0,'wind':0}
for fuel in {"solar","csp","wind"}:
if fuel not in data[country][year-1]['energy']:
tradeable[country][fuel]=nrgsum(country,year-1)*maxrut
#default starter plant
#tradeable[country][fuel]= 0.1
elif data[country][year-1]['energy'][fuel]['prod']['navg3']==0:
tradeable[country][fuel]=nrgsum(country,year-1)*maxrut
#default starter plant
#tradeable[country][fuel]= 0.1
else: tradeable[country][fuel]=max(nrgsum(country,year-1)*maxrut,
data[country][year-1]['energy'][fuel]['prod']['navg3']*maxrur)
for i in range(len(influencevector))[6:7]:#4344
country=goodcountries[influencevector[i]]
balance=cbalance(year,country)
if year==2015:
costvector={}
for j in range(len(goodcountries)):
partner=goodcountries[j]
if partner not in costvector:costvector[partner]={}
transactioncost=gridtestimator(country,partner)
if country not in tradeway:tradeway[country]={}
if partner not in tradeway[country]:tradeway[country][partner]=transactioncost["tradeway"]
for fuel in {"solar","csp","wind"}:
costvector[partner][fuel]=1.0/influencematrix[influencevector[i]][j]*\
transactioncost['scaler']*\
1.0/(eroei[fc[fuel]]*1.0/np.mean(eroei.values())*\
res_adv(partner,fuel)*\
aroei[fc[fuel]]*1.0/np.mean(aroei.values()))
cv=costvectorranker(costvector)
if balance>0:
fill(cv,divfactor,divshare)
if year%10==0: print year,country
# -
save3('03',['United Kingdom','Argentina','Germany'])
cv[:5]
tradeable
for k in range(len(influencevector)):
print k,influencevector[k],goodcountries[influencevector[k]]
# +
###################################
###################################
###################################
###################################
gi={"open":{},"notrade":{}}
eroei={}
once=True
#parameters assumed from the second, more complete variant of this cell below
rampuplimit=0.08 #overall generation ramp up limit
fuelrampuplimit=0.25 #individual fuel ramp up limit
selfinfluence=10 #assumed: "open" scenario
globalinvestment={}
release={} #release reserves
for year in range(2015,2040):
print year
#SET PARAMETERS
#------------------------------------------------
#reset balance
balance={}
#recalculate balances
for c in goodcountries:
balance[c]=0
if c in tradealpha:
f1=0
for fuel in tradealpha[c][year]:
if 'Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Import'].values())])
if 'Re-Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Re-Import'].values())])
if 'Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Export'].values())])
if 'Re-Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Re-Export'].values())])
if fuel in data[c][year]['energy']:
f1=np.nansum([f1,data[c][year]['energy'][fuel]['prod']['navg3']])
balance[c]=-(demand2[c][year]*8760*1e-12-f1)
#1A
avgbalance=np.mean(balance.values())
needers=sorted([c for c in balance if balance[c]<0])[:]
givers=sorted([c for c in balance if balance[c]>avgbalance])
#update global technical eroei
fuel2={'csp':'csp','pv':'solar','wind':'wind'}
for t in fuel2:
fuel=fuel2[t]
eroei[t]=eroei0[t]*(np.nansum([np.nansum(data[partner][year]['energy'][fuel]['prod']['navg3'])\
for partner in goodcountries if fuel in data[partner][year]['energy']])*1.0/total2015[fuel])**learning[fuel]
#################################################
#1B
#import random
#random.seed(sd*year)
#shuffle order of parsing countries
#random.shuffle(needers)
#------------------------------------------------
#1Ba
#country for parsing the needers list
for counter in range(len(needers)):
country=needers[counter]
#print country,
need=-balance[country] #as a convention switch to positive, defined as 'need'
mintier=1 #in TWh
midtier=10 #mid tier TWh
hitier=100 #mid tier TWh
if need>hitier: tiernumber=10
elif need>midtier: tiernumber=5
elif need>mintier: tiernumber=3
else: tiernumber=1
#OVERWRITE TIERNUMBER
tiernumber=3
#MIN SHARE LIMIT
homeshare={'csp':False,'pv':False,'wind':False}
minshare=0.10
homesum=np.sum([data[country][year]['energy'][ii]['prod']['navg3'] \
for ii in data[country][year]['energy'] if ii not in {"nrg","nrg_sum"}])
if homesum>0:
for fuel in {'csp','pv','wind'}:
if fuel2[fuel] in data[country][year]['energy']:
if (minshare>data[country][year]['energy'][fuel2[fuel]]['prod']['navg3']*1.0/homesum):
homeshare[fuel]=True
#if all are fulfilled, no need for the constraint
if np.array(homeshare.values()).all(): homeshare={'csp':False,'pv':False,'wind':False}
for tier in range(tiernumber):
tierneed=need*1.0/tiernumber
#------------------------------------------------
#1Bb
costvector={}
update_aroei() #update state of the resources globally to be able to rank between technologies
for partner in givers+[country]:
if partner in res:
for fuel in {'csp','pv','wind'}:
#if satisfies min share constraint
if not homeshare[fuel]:
#at each time step you must import each fuel type at least once
if res[partner][fuel]['res']!={}:
#query if giver can ramp up production this fast
#max investment cannot exceed rampuplimit
ok=False
su=np.sum([data[partner][year]['energy'][ii]['prod']['navg3'] \
for ii in data[partner][year]['energy'] if ii not in {"nrg","nrg_sum"}])
if su*rampuplimit>tierneed: #not tierneed
if fuel2[fuel] in data[partner][year]['energy']:
if np.isnan(data[partner][year]['energy'][fuel2[fuel]]['prod']['navg3']): ok=True
elif data[partner][year]['energy'][fuel2[fuel]]['prod']['navg3']==0: ok=True
elif (tierneed<data[partner][year]['energy'][fuel2[fuel]]['prod']['navg3']*fuelrampuplimit):ok=True
#again not tierneed
else: ok=False
else: ok=True #new resource, build it
if ok:
#rq (resource query) returns the average resource class at which this tierneed can be provided
#we multiply by the storage/curtailment needs
storagescaler=(1+storagestimator(partner)+curtestimator(partner))
rq=re(res[partner][fuel]['res'],tierneed)/storagescaler
#the costvector takes the resource class and converts it to eroei by comparing it to
#the average resource class at a known point with a known eroei (at start in 2015)
#we are looking for high values, as a marginal quality of resource
costvector[fuel+'_'+partner]=(rq/aroei[fuel]*eroei[fuel]) #normalized resource quality over eroei
if costvector=={}:
print 'impossible to fulfill demand', country, ' in tier ', tier
#1Bbi - normalize costvector to be able to compare with trade influence
else:
normcostvector=copy.deepcopy(costvector)
for i in normcostvector:
costvector[i]/=np.nanmean(costvector.values())
#1Bbii - create costfactor, weights are tweakable
costfactor={}
for key in costvector:
partner=key[key.find('_')+1:]
costfactor[key]=((costvector[key]**2)*(influence(country,partner,selfinfluence)**2))**(1/4.0)
#costfactor[key]=costvector[key]
#The geometric mean is more appropriate than the arithmetic mean for describing proportional growth,
#both exponential growth (constant proportional growth) and varying growth;
#in business the geometric mean of growth rates is known as the compound annual growth rate (CAGR).
#The geometric mean of growth over periods yields the equivalent constant growth rate that would
#yield the same final amount.
#influence(country,partner,2) - third parameter: relative importance of
#self compared to the most influential country
#1Bc - choose partner
best=max(costfactor, key=costfactor.get)
tradepartner=best[best.find('_')+1:]
tradefuel=best[:best.find('_')]
#------------------------------------------------
#1Be - IMPLEMENT TRADE
lt=int(20+random.random()*15) #lifetime
#otherwise we have to implement resource updating
#1Beii - Reduce provider reserves within year
levels=res[tradepartner][tradefuel]['res'].keys()
level=max(levels)
tomeet=tierneed*1.0
#record release lt years in the future
if year+lt not in release:release[year+lt]={}
if tradepartner not in release[year+lt]:release[year+lt][tradepartner]={}
if tradefuel not in release[year+lt][tradepartner]:release[year+lt][tradepartner][tradefuel]={}
#hold resources for lt
while level>min(levels):
if level not in res[tradepartner][tradefuel]['res']: level-=1
elif res[tradepartner][tradefuel]['res'][level]<tomeet:
tomeet-=res[tradepartner][tradefuel]['res'][level]
if level not in release[year+lt][tradepartner][tradefuel]:
release[year+lt][tradepartner][tradefuel][level]=0
release[year+lt][tradepartner][tradefuel][level]+=res[tradepartner][tradefuel]['res'][level]
res[tradepartner][tradefuel]['res'].pop(level)
level-=1
else:
res[tradepartner][tradefuel]['res'][level]-=tomeet
if level not in release[year+lt][tradepartner][tradefuel]:
release[year+lt][tradepartner][tradefuel][level]=0
release[year+lt][tradepartner][tradefuel][level]+=tomeet
level=0
#------------------------------------------------
#1Be-implement country trade
#only production capacity stays, trade does not have to
gyear=int(1.0*year)
for year in range(gyear,min(2100,gyear+lt)):
#update globalinvestment
if year not in globalinvestment:globalinvestment[year]={"net":0,"inv":0}
globalinvestment[year]["net"]+=tierneed
globalinvestment[year]["inv"]+=tierneed/normcostvector[best]
#add production
if tradefuel not in data[tradepartner][year]['energy']:
data[tradepartner][year]['energy'][tradefuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[tradepartner][year]['energy'][tradefuel]['prod']['navg3']+=tierneed
#add storage
if tradefuel not in {'csp'}:
if 'storage' not in data[tradepartner][year]['energy']:
data[tradepartner][year]['energy']['storage']={'prod':{'navg3':0},'cons':{'navg3':0}}
data[tradepartner][year]['energy']['storage']['prod']['navg3']+=tierneed*storagestimator(tradepartner)
data[tradepartner][year]['energy']['storage']['cons']['navg3']+=tierneed*storagestimator(tradepartner)
year=gyear
#add consumption
if tradefuel not in data[country][year]['energy']:
data[country][year]['energy'][tradefuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[country][year]['energy'][tradefuel]['cons']['navg3']+=tierneed
#add trade flows if not self
key=gridtestimator(country,tradepartner)['tradeway']+'_'+tradefuel #use the chosen tradepartner, not the loop variable
if country!=tradepartner:
#add import flow
if key not in tradealpha[country][year]:tradealpha[country][year][key]={}
if 'Import' not in tradealpha[country][year][key]:tradealpha[country][year][key]["Import"]={}
if str(pop2iso[tradepartner]) not in tradealpha[country][year][key]["Import"]:
tradealpha[country][year][key]["Import"][str(pop2iso[tradepartner])]=0
tradealpha[country][year][key]["Import"][str(pop2iso[tradepartner])]+=tierneed
#add export flow
if key not in tradealpha[tradepartner][year]:tradealpha[tradepartner][year][key]={}
if 'Export' not in tradealpha[tradepartner][year][key]:tradealpha[tradepartner][year][key]["Export"]={}
if str(pop2iso[country]) not in tradealpha[tradepartner][year][key]["Export"]:
tradealpha[tradepartner][year][key]["Export"][str(pop2iso[country])]=0
tradealpha[tradepartner][year][key]["Export"][str(pop2iso[country])]+=tierneed
#record trade to influence - could be weighted, default is 10%
updatenormimpex(country,tradepartner,'Import',tierneed/need)
updatenormimpex(tradepartner,country,'Export',tierneed/need)
#save data for processed countries
print 'saving...'
if selfinfluence==10:
sde=10
sdk="open"
else:
sde=20
sdk="notrade"
gi[sdk]=globalinvestment
save3(sde,goodcountries)
file('E:/Dropbox/Public/datarepo/Set/gi.json','w').write(json.dumps(gi))
print 'done',sde
# -
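The costfactor combination used in the cell above reduces to a plain geometric mean of cost and influence, which can be checked in isolation:

```python
def costfactor(cost, influence):
    # ((c**2) * (i**2)) ** (1/4.0) is the geometric mean sqrt(c * i)
    return ((cost ** 2) * (influence ** 2)) ** 0.25

# costfactor(4.0, 9.0) == sqrt(4 * 9) == 6.0
```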
###################################
###################################
###################################
###################################
gi={"open":{},"notrade":{}}
eroei={}
once=True
rampuplimit=0.08 #overall generation ramp up limit
fuelrampuplimit=0.25 #individual fuel ramp up limit
for selfinfluence in {1,10}:
globalinvestment={}
release={} #release reserves
for year in range(2015,2040):
print year
#SET PARAMETERS
#------------------------------------------------
#release reserves
if year in release:
for c in release[year]:
for fuel in release[year][c]:
for level in release[year][c][fuel]:
if level in res[c][fuel]['res']:
res[c][fuel]['res'][level]+=release[year][c][fuel][level]
else: res[c][fuel]['res'][level]=release[year][c][fuel][level]
#reset balance
balance={}
#recalculate balances
for c in goodcountries:
balance[c]=0
if c in tradealpha:
f1=0
for fuel in tradealpha[c][year]:
if 'Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Import'].values())])
if 'Re-Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Re-Import'].values())])
if 'Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Export'].values())])
if 'Re-Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Re-Export'].values())])
if fuel in data[c][year]['energy']:
f1=np.nansum([f1,data[c][year]['energy'][fuel]['prod']['navg3']])
balance[c]=-(demand2[c][year]*8760*1e-12-f1)
#1A
avgbalance=np.mean(balance.values())
needers=sorted([c for c in balance if balance[c]<0])[:]
givers=sorted([c for c in balance if balance[c]>avgbalance])
#update global technical eroei
fuel2={'csp':'csp','pv':'solar','wind':'wind'}
for t in fuel2:
fuel=fuel2[t]
eroei[t]=eroei0[t]*(np.nansum([np.nansum(data[partner][year]['energy'][fuel]['prod']['navg3'])\
for partner in goodcountries if fuel in data[partner][year]['energy']])*1.0/total2015[fuel])**learning[fuel]
#################################################
#1B
#import random
#random.seed(sd*year)
#shuffle order of parsing countries
#random.shuffle(needers)
#------------------------------------------------
#1Ba
#country for parsing the needers list
for counter in range(len(needers)):
country=needers[counter]
#print country,
need=-balance[country] #as a convention switch to positive, defined as 'need'
mintier=1 #in TWh
midtier=10 #mid tier TWh
hitier=100 #mid tier TWh
if need>hitier: tiernumber=10
elif need>midtier: tiernumber=5
elif need>mintier: tiernumber=3
else: tiernumber=1
#OVERWRITE TIERNUMBER
tiernumber=3
#MIN SHARE LIMIT
homeshare={'csp':False,'pv':False,'wind':False}
minshare=0.10
homesum=np.sum([data[country][year]['energy'][ii]['prod']['navg3'] \
for ii in data[country][year]['energy'] if ii not in {"nrg","nrg_sum"}])
if homesum>0:
for fuel in {'csp','pv','wind'}:
if fuel2[fuel] in data[country][year]['energy']:
if (minshare>data[country][year]['energy'][fuel2[fuel]]['prod']['navg3']*1.0/homesum):
homeshare[fuel]=True
#if all are fulfilled, no need for the constraint
if np.array(homeshare.values()).all(): homeshare={'csp':False,'pv':False,'wind':False}
for tier in range(tiernumber):
tierneed=need*1.0/tiernumber
#------------------------------------------------
#1Bb
costvector={}
update_aroei() #update state of the resources globally to be able to rank between technologies
for partner in givers+[country]:
if partner in res:
for fuel in {'csp','pv','wind'}:
#if satisfies min share constraint
if not homeshare[fuel]:
#at each time step you must import each fuel type at least once
if res[partner][fuel]['res']!={}:
#query if giver can ramp up production this fast
#max investment cannot exceed rampuplimit (8% of total generation)
ok=False
su=np.sum([data[partner][year]['energy'][ii]['prod']['navg3'] \
for ii in data[partner][year]['energy'] if ii not in {"nrg","nrg_sum"}])
if su*rampuplimit>tierneed: #not tierneed
if fuel2[fuel] in data[partner][year]['energy']:
if np.isnan(data[partner][year]['energy'][fuel2[fuel]]['prod']['navg3']): ok=True
elif data[partner][year]['energy'][fuel2[fuel]]['prod']['navg3']==0: ok=True
elif (tierneed<data[partner][year]['energy'][fuel2[fuel]]['prod']['navg3']*fuelrampuplimit):ok=True
#again not tierneed
else: ok=False
else: ok=True #new resource, build it
if ok:
#rq (resource query) returns the average resource class at which this tierneed can be provided
#we multiply by the storage/curtailment needs
storagescaler=(1+storagestimator(partner)+curtestimator(partner))
rq=re(res[partner][fuel]['res'],tierneed)/storagescaler
#the costvector takes the resource class and converts it to eroei by comparing it to
#the average resource class at a known point with a known eroei (at start in 2015)
#we are looking for high values, as a marginal quality of resource
costvector[fuel+'_'+partner]=(rq/aroei[fuel]*eroei[fuel]) #normalized resource quality over eroei
if costvector=={}:
print 'impossible to fulfill demand', country, ' in tier ', tier
#1Bbi - normalize costvector to be able to compare with trade influence
else:
normcostvector=copy.deepcopy(costvector)
for i in normcostvector:
costvector[i]/=np.nanmean(costvector.values())
#1Bbii - create costfactor, weights are tweakable
costfactor={}
for key in costvector:
partner=key[key.find('_')+1:]
costfactor[key]=((costvector[key]**2)*(influence(country,partner,selfinfluence)**2))**(1/4.0)
#costfactor[key]=costvector[key]
#The geometric mean is more appropriate than the arithmetic mean for describing proportional growth,
#both exponential growth (constant proportional growth) and varying growth;
#in business the geometric mean of growth rates is known as the compound annual growth rate (CAGR).
#The geometric mean of growth over periods yields the equivalent constant growth rate that would
#yield the same final amount.
#influence(country,partner,2) - third parameter: relative importance of
#self compared to the most influential country
#1Bc - choose partner
best=max(costfactor, key=costfactor.get)
tradepartner=best[best.find('_')+1:]
tradefuel=best[:best.find('_')]
#------------------------------------------------
#1Be - IMPLEMENT TRADE
lt=int(20+random.random()*15) #lifetime
#otherwise we have to implement resource updating
#1Beii - Reduce provider reserves within year
levels=res[tradepartner][tradefuel]['res'].keys()
level=max(levels)
tomeet=tierneed*1.0
#record release lt years in the future
if year+lt not in release:release[year+lt]={}
if tradepartner not in release[year+lt]:release[year+lt][tradepartner]={}
if tradefuel not in release[year+lt][tradepartner]:release[year+lt][tradepartner][tradefuel]={}
#hold resources for lt
while level>min(levels):
if level not in res[tradepartner][tradefuel]['res']: level-=1
elif res[tradepartner][tradefuel]['res'][level]<tomeet:
tomeet-=res[tradepartner][tradefuel]['res'][level]
if level not in release[year+lt][tradepartner][tradefuel]:
release[year+lt][tradepartner][tradefuel][level]=0
release[year+lt][tradepartner][tradefuel][level]+=res[tradepartner][tradefuel]['res'][level]
res[tradepartner][tradefuel]['res'].pop(level)
level-=1
else:
res[tradepartner][tradefuel]['res'][level]-=tomeet
if level not in release[year+lt][tradepartner][tradefuel]:
release[year+lt][tradepartner][tradefuel][level]=0
release[year+lt][tradepartner][tradefuel][level]+=tomeet
level=0
#------------------------------------------------
#1Be-implement country trade
#only production capacity stays, trade does not have to
gyear=int(1.0*year)
for year in range(gyear,min(2100,gyear+lt)):
#update globalinvestment
if year not in globalinvestment:globalinvestment[year]={"net":0,"inv":0}
globalinvestment[year]["net"]+=tierneed
globalinvestment[year]["inv"]+=tierneed/normcostvector[best]
#add production
if tradefuel not in data[tradepartner][year]['energy']:
data[tradepartner][year]['energy'][tradefuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[tradepartner][year]['energy'][tradefuel]['prod']['navg3']+=tierneed
#add storage
if tradefuel not in {'csp'}:
if 'storage' not in data[tradepartner][year]['energy']:
data[tradepartner][year]['energy']['storage']={'prod':{'navg3':0},'cons':{'navg3':0}}
data[tradepartner][year]['energy']['storage']['prod']['navg3']+=tierneed*storagestimator(tradepartner)
data[tradepartner][year]['energy']['storage']['cons']['navg3']+=tierneed*storagestimator(tradepartner)
year=gyear
#add consumption
if tradefuel not in data[country][year]['energy']:
data[country][year]['energy'][tradefuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[country][year]['energy'][tradefuel]['cons']['navg3']+=tierneed
#add trade flows if not self
    key=gridtestimator(country,tradepartner)['tradeway']+'_'+tradefuel
if country!=tradepartner:
#add import flow
if key not in tradealpha[country][year]:tradealpha[country][year][key]={}
if 'Import' not in tradealpha[country][year][key]:tradealpha[country][year][key]["Import"]={}
if str(pop2iso[tradepartner]) not in tradealpha[country][year][key]["Import"]:
tradealpha[country][year][key]["Import"][str(pop2iso[tradepartner])]=0
tradealpha[country][year][key]["Import"][str(pop2iso[tradepartner])]+=tierneed
#add export flow
if key not in tradealpha[tradepartner][year]:tradealpha[tradepartner][year][key]={}
if 'Export' not in tradealpha[tradepartner][year][key]:tradealpha[tradepartner][year][key]["Export"]={}
if str(pop2iso[country]) not in tradealpha[tradepartner][year][key]["Export"]:
tradealpha[tradepartner][year][key]["Export"][str(pop2iso[country])]=0
tradealpha[tradepartner][year][key]["Export"][str(pop2iso[country])]+=tierneed
    #record trade to influence - could be weighted, default is 10%
updatenormimpex(country,tradepartner,'Import',tierneed/need)
updatenormimpex(tradepartner,country,'Export',tierneed/need)
#save data for processed countries
print 'saving...'
if selfinfluence==10:
sde=10
sdk="open"
else:
sde=20
sdk="notrade"
gi[sdk]=globalinvestment
save3(sde,goodcountries)
file('E:/Dropbox/Public/datarepo/Set/gi.json','w').write(json.dumps(gi))
print 'done',sde
| arh/netset7barebone-Copy2-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import logging
import datetime
# +
def logFuncInfo(func):
    logging.basicConfig(filename="Execution.log", level=logging.INFO)
    def wrapper(*args, **kwargs):
        # capture the timestamp per call, not once at decoration time
        exeTime = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        logging.info("function {} with args {} and kwargs {} called at {}".format(func.__name__, args, kwargs, exeTime))
        return func(*args, **kwargs)
    return wrapper
# -
@logFuncInfo
def display(name,age):
print("Name:{} \nAge:{}".format(name,age));
display("<NAME>",25)
class LogFunction(object):
    logging.basicConfig(filename="ClassDecorator.log", level=logging.INFO)
    def __init__(self, function):
        self.func = function
    def __call__(self, *args, **kwargs):
        # timestamp each call at invocation time
        exeTime = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        logging.info(f"function {self.func.__name__} with args {args} kwargs {kwargs} executed at time {exeTime}")
        return self.func(*args, **kwargs)
@LogFunction
def check(x=2):
    if x % 2 == 0:
        print("even")
    else:
        print("odd")
check(500)
import time
class calculateTime(object):
    def __init__(self, func):
        self.func = func
        self.now = time.time()  # decoration time, kept for introspection below
    def __call__(self, *args, **kwargs):
        start = time.time()  # measure from the call itself, not from decoration
        result = self.func(*args, **kwargs)
        print(time.time() - start)
        return result
@calculateTime
def check(x=2):
    if x % 2 == 0:
        print("even")
    else:
        print("odd")
check(25)
c=check;
dir(c)
c.func
c.now
c.func.__name__
c.func.__str__
# ### Multiple Decorators
from functools import wraps
def logFuncInfo(func):
    logging.basicConfig(filename="multiDecorators.log", level=logging.INFO)
    @wraps(func)
    def wrapper(*args, **kwargs):
        exeTime = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        logging.info("function {} with args {} and kwargs {} called at {}".format(func.__name__, args, kwargs, exeTime))
        return func(*args, **kwargs)
    return wrapper
def calTime(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()  # time each call individually
        result = func(*args, **kwargs)
        print(f"{func.__name__} executed within {time.time() - start}")
        return result
    return wrapper
@logFuncInfo
@calTime
def say_hello(name="<NAME>"):
print(f"hello {name}")
say_hello("DEVELOPER")
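# A quick standalone check of why `@wraps` is used above (a minimal sketch, not part of the original notebook): without it, the decorated function's metadata is replaced by the wrapper's; with it, the original name and docstring survive.

```python
from functools import wraps

def plain(func):
    # wrapper without @wraps: the decorated function's metadata is lost
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def preserving(func):
    # @wraps copies __name__, __doc__, etc. from func onto wrapper
    @wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@plain
def f():
    """f's docstring"""

@preserving
def g():
    """g's docstring"""

print(f.__name__)  # wrapper
print(g.__name__)  # g
```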
| Writing Functions In Python/Decorators/practice/Examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from sklearn.metrics import fbeta_score
from sklearn.model_selection import train_test_split
# KFold moved from sklearn.cross_validation to sklearn.model_selection
from sklearn.model_selection import KFold
import lightgbm as lgb
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
model_sample = pd.read_csv('dataset/model.csv')
model_sample.set_index('user_id',inplace=True)
label = model_sample[['y']]
model_sample = model_sample.drop('y',axis=1)
def get_features_middle(data):
model_sample_strong_feature = data.copy()
    # Encode the identity and asset-ownership flags into a single integer
first_strong_features = ['x_003','x_004','x_005','x_006','x_007','x_008','x_009','x_010','x_011','x_012','x_013','x_014','x_015','x_016','x_017','x_018','x_019']
res = 0
for i in range(len(first_strong_features)):
res += 2 ** i * data[first_strong_features[i]]
model_sample_strong_feature['x_1_strong'] = res
    # Debit card ratio features
model_sample_strong_feature['x_022/x_020'] = data['x_022'] / (data['x_020'] + 1e-10)
model_sample_strong_feature['x_023/x_020'] = data['x_023'] / (data['x_020'] + 1e-10)
model_sample_strong_feature['x_024/x_020'] = data['x_024'] / (data['x_020'] + 1e-10)
model_sample_strong_feature['x_025/x_020'] = data['x_025'] / (data['x_020']+ 1e-10)
model_sample_strong_feature['x_026/x_020'] = data['x_026'] / (data['x_020'] + 1e-10)
    # Credit card ratio features
model_sample_strong_feature['x_028/x_021'] = data['x_028'] / (data['x_021'] + 1e-10)
model_sample_strong_feature['x_029/x_021'] = data['x_029'] / (data['x_021'] + 1e-10)
model_sample_strong_feature['x_030/x_021'] = data['x_030'] / (data['x_021'] + 1e-10)
model_sample_strong_feature['x_031/x_021'] = data['x_031'] / (data['x_021'] + 1e-10)
model_sample_strong_feature['x_032/x_021'] = data['x_032'] / (data['x_021'] + 1e-10)
    # Bank card ratio features
model_sample_strong_feature['all_cards'] = (data['x_034'] + data['x_035'] + data['x_036'] + data['x_037'] + data['x_038'] + data['x_039'] + data['x_040'] ).values
model_sample_strong_feature['x_034/all_cards'] = data['x_034'] / (model_sample_strong_feature['all_cards'] + 1e-10)
model_sample_strong_feature['x_035/all_cards'] = data['x_035'] / (model_sample_strong_feature['all_cards'] + 1e-10)
model_sample_strong_feature['x_036/all_cards'] = data['x_036'] / (model_sample_strong_feature['all_cards'] + 1e-10)
model_sample_strong_feature['x_037/all_cards'] = data['x_037'] / (model_sample_strong_feature['all_cards'] + 1e-10)
model_sample_strong_feature['x_038/all_cards'] = data['x_038'] / (model_sample_strong_feature['all_cards'] + 1e-10)
model_sample_strong_feature['x_039/all_cards'] = data['x_039'] / (model_sample_strong_feature['all_cards'] + 1e-10)
model_sample_strong_feature['x_040/all_cards'] = data['x_040'] / (model_sample_strong_feature['all_cards'] + 1e-10)
    # Mean-to-standard-deviation ratios
model_sample_strong_feature['x_043/x_044'] = data['x_043'] / (data['x_044'] + 1e-10)
model_sample_strong_feature['x_046/x_047'] = data['x_046'] / (data['x_047'] + 1e-10)
model_sample_strong_feature['x_050/x_051'] = data['x_050'] / (data['x_051'] + 1e-10)
model_sample_strong_feature['x_053/x_054'] = data['x_053'] / (data['x_054'] + 1e-10)
model_sample_strong_feature['x_057/x_058'] = data['x_057'] / (data['x_058'] + 1e-10)
model_sample_strong_feature['x_060/x_061'] = data['x_060'] / (data['x_061'] + 1e-10)
model_sample_strong_feature['x_076/x_077'] = data['x_076'] / (data['x_077'] + 1e-10)
model_sample_strong_feature['x_079/x_080'] = data['x_079'] / (data['x_080'] + 1e-10)
model_sample_strong_feature['x_083/x_084'] = data['x_083'] / (data['x_084'] + 1e-10)
model_sample_strong_feature['x_086/x_087'] = data['x_086'] / (data['x_087'] + 1e-10)
model_sample_strong_feature['x_090/x_091'] = data['x_090'] / (data['x_091'] + 1e-10)
model_sample_strong_feature['x_094/x_095'] = data['x_094'] / (data['x_095'] + 1e-10)
model_sample_strong_feature['x_098/x_099'] = data['x_098'] / (data['x_099'] + 1e-10)
model_sample_strong_feature['x_123/x_124'] = data['x_123'] / (data['x_124'] + 1e-10)
model_sample_strong_feature['x_126/x_127'] = data['x_126'] / (data['x_127'] + 1e-10)
    # Average transaction amount per card (credit or other); amount per transaction (incl. per non-local transaction); amount per repayment; mean amounts for travel, insurance, home-improvement, finance, etc.; average monthly transaction count; other meaningful mean features
model_sample_strong_feature['x_045/x_41'] = data['x_045'] / (data['x_041'] + 1e-10)
model_sample_strong_feature['x_052/x_48'] = data['x_052'] / (data['x_048'] + 1e-10)
model_sample_strong_feature['x_059/x_55'] = data['x_059'] / (data['x_055'] + 1e-10)
model_sample_strong_feature['x_064/x_062'] = data['x_064'] / (data['x_062'] + 1e-10)
model_sample_strong_feature['x_067/x_065'] = data['x_067'] / (data['x_065'] + 1e-10)
model_sample_strong_feature['x_070/x_068'] = data['x_070'] / (data['x_068'] + 1e-10)
model_sample_strong_feature['x_073/x_071'] = data['x_073'] / (data['x_071'] + 1e-10)
model_sample_strong_feature['x_078/x_074'] = data['x_078'] / (data['x_074'] + 1e-10)
model_sample_strong_feature['x_085/x_081'] = data['x_085'] / (data['x_081'] + 1e-10)
model_sample_strong_feature['x_100/x_101'] = data['x_100'] / (data['x_101'] + 1e-10)
model_sample_strong_feature['x_102/x_103'] = data['x_102'] / (data['x_103'] + 1e-10)
model_sample_strong_feature['x_108/x_105'] = data['x_108'] / (data['x_105'] + 1e-10)
model_sample_strong_feature['x_104/x_102'] = data['x_104'] / (data['x_102'] + 1e-10)
model_sample_strong_feature['x_109/x_110'] = data['x_109'] / (data['x_110'] + 1e-10)
model_sample_strong_feature['x_111/x_109'] = data['x_111'] / (data['x_109'] + 1e-10)
model_sample_strong_feature['x_112/x_113'] = data['x_112'] / (data['x_113'] + 1e-10)
model_sample_strong_feature['x_114/x_112'] = data['x_114'] / (data['x_112'] + 1e-10)
model_sample_strong_feature['x_115/x_116'] = data['x_115'] / (data['x_116'] + 1e-10)
model_sample_strong_feature['x_117/x_115'] = data['x_117'] / (data['x_115'] + 1e-10)
model_sample_strong_feature['x_118/x_119'] = data['x_118'] / (data['x_119'] + 1e-10)
model_sample_strong_feature['x_120/x_118'] = data['x_120'] / (data['x_118'] + 1e-10)
model_sample_strong_feature['x_125/x_121'] = data['x_125'] / (data['x_121'] + 1e-10)
model_sample_strong_feature['x_128/x_129'] = data['x_128'] / (data['x_129'] + 1e-10)
model_sample_strong_feature['x_130/x_128'] = data['x_130'] / (data['x_128'] + 1e-10)
    # Loan amount per disbursement; disbursement count per institution; loan amount per institution
model_sample_strong_feature['x_133/x_134'] = data['x_133'] / (data['x_134'] + 1e-10)
model_sample_strong_feature['x_133/x_132'] = data['x_133'] / (data['x_132'] + 1e-10)
model_sample_strong_feature['x_134/x_132'] = data['x_134'] / (data['x_132'] + 1e-10)
model_sample_strong_feature['x_138/x_139'] = data['x_138'] / (data['x_139'] + 1e-10)
model_sample_strong_feature['x_138/x_137'] = data['x_138'] / (data['x_137'] + 1e-10)
model_sample_strong_feature['x_139/x_137'] = data['x_139'] / (data['x_137'] + 1e-10)
model_sample_strong_feature['x_143/x_142'] = data['x_143'] / (data['x_142'] + 1e-10)
model_sample_strong_feature['x_143/x_144'] = data['x_143'] / (data['x_144'] + 1e-10)
model_sample_strong_feature['x_144/x_142'] = data['x_144'] / (data['x_142'] + 1e-10)
    # Average disbursement per institution; share of failed repayments
model_sample_strong_feature['x_151/x_149'] = data['x_151'] / (data['x_149'] + 1e-10)
model_sample_strong_feature['x_152/x_149'] = data['x_152'] / (data['x_149'] + 1e-10)
model_sample_strong_feature['x_152/x_151'] = data['x_152'] / (data['x_151'] + 1e-10)
model_sample_strong_feature['x_154/x_153'] = data['x_154'] / (data['x_153'] + 1e-10)
model_sample_strong_feature['x_156/x_153'] = data['x_156'] / (data['x_153'] + 1e-10)
model_sample_strong_feature['x_157/x_153'] = data['x_157'] / (data['x_153'] + 1e-10)
model_sample_strong_feature['x_158/x_153'] = data['x_158'] / (data['x_153'] + 1e-10)
model_sample_strong_feature['x_159/x_153'] = data['x_159'] / (data['x_153'] + 1e-10)
model_sample_strong_feature['x_154/x_155'] = data['x_154'] / (data['x_155'] + 1e-10)
model_sample_strong_feature['x_164/x_162'] = data['x_164'] / (data['x_162'] + 1e-10)
model_sample_strong_feature['x_165/x_162'] = data['x_165'] / (data['x_162'] + 1e-10)
model_sample_strong_feature['x_165/x_164'] = data['x_165'] / (data['x_164'] + 1e-10)
model_sample_strong_feature['x_167/x_166'] = data['x_167'] / (data['x_166'] + 1e-10)
model_sample_strong_feature['x_169/x_166'] = data['x_169'] / (data['x_166'] + 1e-10)
model_sample_strong_feature['x_170/x_166'] = data['x_170'] / (data['x_166'] + 1e-10)
model_sample_strong_feature['x_171/x_166'] = data['x_171'] / (data['x_166'] + 1e-10)
model_sample_strong_feature['x_180/x_181'] = data['x_180'] / (data['x_181'] + 1e-10)
model_sample_strong_feature['x_167/x_168'] = data['x_167'] / (data['x_168'] + 1e-10)
model_sample_strong_feature['x_172/x_167'] = data['x_172'] / (data['x_167'] + 1e-10)
model_sample_strong_feature['x_177/x_175'] = data['x_177'] / (data['x_175'] + 1e-10)
model_sample_strong_feature['x_178/x_175'] = data['x_178'] / (data['x_175'] + 1e-10)
model_sample_strong_feature['x_178/x_177'] = data['x_178'] / (data['x_177'] + 1e-10)
model_sample_strong_feature['x_180/x_179'] = data['x_180'] / (data['x_179'] + 1e-10)
model_sample_strong_feature['x_182/x_179'] = data['x_182'] / (data['x_179'] + 1e-10)
model_sample_strong_feature['x_183/x_179'] = data['x_183'] / (data['x_179'] + 1e-10)
model_sample_strong_feature['x_184/x_179'] = data['x_184'] / (data['x_179'] + 1e-10)
model_sample_strong_feature['x_180/x_181'] = data['x_180'] / (data['x_181'] + 1e-10)
model_sample_strong_feature['x_185/x_180'] = data['x_185'] / (data['x_180'] + 1e-10)
    # Trends in loan-application institution counts: 90-day vs 30-day, 180-day vs 90-day, 180-day vs 30-day;
    # the same trends for successfully-approved institutions and for loan-application counts
model_sample_strong_feature['x_189/x_188'] = data['x_189'] / (data['x_188'] + 1e-10)
model_sample_strong_feature['x_191/x_190'] = data['x_191'] / (data['x_190'] + 1e-10)
model_sample_strong_feature['x_193/x_192'] = data['x_193'] / (data['x_192'] + 1e-10)
model_sample_strong_feature['x_195/x_194'] = data['x_195'] / (data['x_194'] + 1e-10)
model_sample_strong_feature['x_197/x_196'] = data['x_197'] / (data['x_196'] + 1e-10)
model_sample_strong_feature['x_199/x_198'] = data['x_199'] / (data['x_198'] + 1e-10)
model_sample_strong_feature['x_196/x_188'] = data['x_196'] / (data['x_188'] + 1e-10)
model_sample_strong_feature['x_192/x_188'] = data['x_192'] / (data['x_188'] + 1e-10)
model_sample_strong_feature = model_sample_strong_feature.fillna(-999)
return model_sample_strong_feature
get_features_middle(model_sample).describe()
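# The `+ 1e-10` guard used throughout the ratio features above keeps zero denominators from producing inf/NaN. A minimal sketch with made-up values:

```python
import pandas as pd

# Hypothetical numerator/denominator columns, including zero denominators
df = pd.DataFrame({'num': [4.0, 0.0, 2.0], 'den': [2.0, 0.0, 0.0]})
ratio = df['num'] / (df['den'] + 1e-10)  # finite everywhere: 0/eps -> 0, x/eps -> very large
print(ratio.tolist())
```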
def get_score(y_pred, y_true):
acc_ = accuracy_score(y_true=y_true,y_pred=y_pred)
TP = np.sum(((y_pred == 1) & (y_true == 1)))
precision = TP / np.sum(y_pred)
recall = TP / np.sum(y_true)
    f_score = 2 * precision * recall / (precision + recall)
    print('TP: ', TP, '/', np.sum(y_true), ' all ', np.sum(y_pred),
          ' accuracy: ', acc_, ' precision: ', precision, ' recall: ', recall,
          ' F_score: ', f_score, fbeta_score(y_true=y_true, y_pred=y_pred, beta=1))
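# To sanity-check the arithmetic in `get_score`, here is the same TP/precision/recall/F-score computation on a tiny hand-made prediction vector (the values are illustrative only):

```python
import numpy as np

y_true = np.array([1, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1])
TP = np.sum((y_pred == 1) & (y_true == 1))   # 2 true positives
precision = TP / np.sum(y_pred)              # 2/3
recall = TP / np.sum(y_true)                 # 2/3
f_score = 2 * precision * recall / (precision + recall)
print(TP, precision, recall, f_score)
```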
def N_Fold_Predict( train_fea , train_y, test_fea, cv_ = 5):
###########################################################
train_fea = train_fea.fillna(-1)
test_fea = test_fea.fillna(-1)
features_col = [c for c in train_fea.columns if c not in ['user_id','y']]
X = train_fea[features_col]
X_pred = test_fea[features_col]
pred_out_lgb = 0
pred_out_gbdt = 0
pred_out_rf = 0
for cv in range(cv_):
X_train, X_test, y_train, y_test = train_test_split(X, train_y, test_size=0.25, random_state=np.random.randint(1000))
# create dataset for lightgbm
lgb_train = lgb.Dataset(X_train, y_train)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)
# specify your configurations as a dict
params = {
'task':'train',
'boosting_type':'gbdt',
'num_leaves': 31,
'objective': 'binary',
'learning_rate': 0.05,
'bagging_freq': 2,
'max_bin':256,
'num_threads': 32
}
# train
gbm = lgb.train(params,
lgb_train,
verbose_eval= 0,
num_boost_round=10000,
valid_sets=lgb_eval,
early_stopping_rounds=100)
lgb_pred = gbm.predict(X_pred, num_iteration=gbm.best_iteration)
gbdt = GradientBoostingClassifier(n_estimators=250,learning_rate=0.01,max_depth=6,min_samples_leaf=5,min_samples_split=5)
gbdt.fit(X_train, y_train)
gbdt_pred = gbdt.predict_proba(X_pred)[:,1]
rf = RandomForestClassifier(n_estimators=500,max_depth=6,min_samples_leaf=5,min_samples_split=5)
rf.fit(X_train, y_train)
rf_pred = rf.predict_proba(X_pred)[:,1]
if cv == 0:
pred_out_lgb = lgb_pred
pred_out_gbdt = gbdt_pred
pred_out_rf = rf_pred
else:
pred_out_lgb += lgb_pred
pred_out_gbdt += gbdt_pred
pred_out_rf += rf_pred
pred_out_lgb = pred_out_lgb * 1.0 / cv_
pred_out_gbdt = pred_out_gbdt * 1.0 / cv_
pred_out_rf = pred_out_rf * 1.0 / cv_
return pred_out_lgb, pred_out_gbdt, pred_out_rf
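# `N_Fold_Predict` accumulates each model's probabilities across folds and divides by `cv_` at the end. The averaging step in isolation, with toy numbers:

```python
import numpy as np

# Per-fold probability predictions for two samples (hypothetical values)
fold_preds = [np.array([0.2, 0.8]), np.array([0.4, 0.6]), np.array([0.3, 0.7])]
avg = sum(fold_preds) / len(fold_preds)  # element-wise mean across folds
print(avg)
```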
def get_features_final(data):
model_sample_strong_feature = data.copy()
first_strong_features = ['x_003','x_004','x_005','x_006','x_007','x_008','x_009','x_010','x_011','x_012','x_013','x_014','x_015','x_016','x_017','x_018','x_019']
res = 0
for i in range(len(first_strong_features)):
res += 2 ** i * data[first_strong_features[i]]
model_sample_strong_feature['x_1_strong'] = res
model_sample_strong_feature['x_022/x_020'] = data['x_022'] / (data['x_020'] + 1e-10)
model_sample_strong_feature['x_023/x_020'] = data['x_023'] / (data['x_020'] + 1e-10)
model_sample_strong_feature['x_024/x_020'] = data['x_024'] / (data['x_020'] + 1e-10)
model_sample_strong_feature['x_025/x_020'] = data['x_025'] / (data['x_020'] + 1e-10)
model_sample_strong_feature['x_026/x_020'] = data['x_026'] / (data['x_020'] + 1e-10)
model_sample_strong_feature['x_028/x_021'] = data['x_028'] / (data['x_021'] + 1e-10)
model_sample_strong_feature['x_029/x_021'] = data['x_029'] / (data['x_021'] + 1e-10)
model_sample_strong_feature['x_030/x_021'] = data['x_030'] / (data['x_021'] + 1e-10)
model_sample_strong_feature['x_031/x_021'] = data['x_031'] / (data['x_021'] + 1e-10)
model_sample_strong_feature['x_032/x_021'] = data['x_032'] / (data['x_021'] + 1e-10)
model_sample_strong_feature['all_cards'] = (data['x_034'] + data['x_035'] + data['x_036'] + data['x_037'] + data['x_038'] + data['x_039'] + data['x_040']).values
model_sample_strong_feature['x_034/all_cards'] = data['x_034'] / (model_sample_strong_feature['all_cards'] + 1e-10)
model_sample_strong_feature['x_035/all_cards'] = data['x_035'] / (model_sample_strong_feature['all_cards'] + 1e-10)
model_sample_strong_feature['x_036/all_cards'] = data['x_036'] / (model_sample_strong_feature['all_cards'] + 1e-10)
model_sample_strong_feature['x_037/all_cards'] = data['x_037'] / (model_sample_strong_feature['all_cards'] + 1e-10)
model_sample_strong_feature['x_038/all_cards'] = data['x_038'] / (model_sample_strong_feature['all_cards'] + 1e-10)
model_sample_strong_feature['x_039/all_cards'] = data['x_039'] / (model_sample_strong_feature['all_cards'] + 1e-10)
model_sample_strong_feature['x_040/all_cards'] = data['x_040'] / (model_sample_strong_feature['all_cards'] + 1e-10)
    model_sample_strong_feature['x_027/x_033'] = data['x_027'] / (data['x_033'] + 1e-10)
model_sample_strong_feature['x_043/x_044'] = data['x_043'] / (data['x_044'] + 1e-10)
model_sample_strong_feature['x_046/x_047'] = data['x_046'] / (data['x_047'] + 1e-10)
model_sample_strong_feature['x_050/x_051'] = data['x_050'] / (data['x_051'] + 1e-10)
model_sample_strong_feature['x_053/x_054'] = data['x_053'] / (data['x_054'] + 1e-10)
model_sample_strong_feature['x_057/x_058'] = data['x_057'] / (data['x_058'] + 1e-10)
model_sample_strong_feature['x_060/x_061'] = data['x_060'] / (data['x_061'] + 1e-10)
model_sample_strong_feature['x_076/x_077'] = data['x_076'] / (data['x_077'] + 1e-10)
model_sample_strong_feature['x_079/x_080'] = data['x_079'] / (data['x_080'] + 1e-10)
model_sample_strong_feature['x_083/x_084'] = data['x_083'] / (data['x_084'] + 1e-10)
model_sample_strong_feature['x_086/x_087'] = data['x_086'] / (data['x_087'] + 1e-10)
model_sample_strong_feature['x_090/x_091'] = data['x_090'] / (data['x_091'] + 1e-10)
model_sample_strong_feature['x_094/x_095'] = data['x_094'] / (data['x_095'] + 1e-10)
model_sample_strong_feature['x_098/x_099'] = data['x_098'] / (data['x_099'] + 1e-10)
model_sample_strong_feature['x_123/x_124'] = data['x_123'] / (data['x_124'] + 1e-10)
model_sample_strong_feature['x_126/x_127'] = data['x_126'] / (data['x_127'] + 1e-10)
model_sample_strong_feature['x_064/x_063'] = data['x_064'] / (data['x_063'] + 1e-10)
model_sample_strong_feature['x_067/x_066'] = data['x_067'] / (data['x_066'] + 1e-10)
model_sample_strong_feature['x_070/x_069'] = data['x_070'] / (data['x_069'] + 1e-10)
model_sample_strong_feature['x_073/x_072'] = data['x_073'] / (data['x_072'] + 1e-10)
model_sample_strong_feature['x_059/x_55'] = data['x_059'] / (data['x_055'] + 1e-10)
model_sample_strong_feature['x_067/x_065'] = data['x_067'] / (data['x_065'] + 1e-10)
model_sample_strong_feature['x_070/x_068'] = data['x_070'] / (data['x_068'] + 1e-10)
model_sample_strong_feature['x_073/x_071'] = data['x_073'] / (data['x_071'] + 1e-10)
model_sample_strong_feature['x_078/x_074'] = data['x_078'] / (data['x_074'] + 1e-10)
model_sample_strong_feature['x_085/x_081'] = data['x_085'] / (data['x_081'] + 1e-10)
model_sample_strong_feature['x_100/x_101'] = data['x_100'] / (data['x_101'] + 1e-10)
model_sample_strong_feature['x_102/x_103'] = data['x_102'] / (data['x_103'] + 1e-10)
model_sample_strong_feature['x_108/x_105'] = data['x_108'] / (data['x_105'] + 1e-10)
model_sample_strong_feature['x_104/x_102'] = data['x_104'] / (data['x_102'] + 1e-10)
model_sample_strong_feature['x_109/x_110'] = data['x_109'] / (data['x_110'] + 1e-10)
model_sample_strong_feature['x_111/x_109'] = data['x_111'] / (data['x_109'] + 1e-10)
model_sample_strong_feature['x_112/x_113'] = data['x_112'] / (data['x_113'] + 1e-10)
model_sample_strong_feature['x_114/x_112'] = data['x_114'] / (data['x_112'] + 1e-10)
model_sample_strong_feature['x_115/x_116'] = data['x_115'] / (data['x_116'] + 1e-10)
model_sample_strong_feature['x_117/x_115'] = data['x_117'] / (data['x_115'] + 1e-10)
model_sample_strong_feature['x_118/x_119'] = data['x_118'] / (data['x_119'] + 1e-10)
model_sample_strong_feature['x_120/x_118'] = data['x_120'] / (data['x_118'] + 1e-10)
model_sample_strong_feature['x_125/x_121'] = data['x_125'] / (data['x_121'] + 1e-10)
model_sample_strong_feature['x_128/x_129'] = data['x_128'] / (data['x_129'] + 1e-10)
model_sample_strong_feature['x_130/x_128'] = data['x_130'] / (data['x_128'] + 1e-10)
model_sample_strong_feature['x_133/x_134'] = data['x_133'] / (data['x_134'] + 1e-10)
model_sample_strong_feature['x_133/x_132'] = data['x_133'] / (data['x_132'] + 1e-10)
model_sample_strong_feature['x_134/x_132'] = data['x_134'] / (data['x_132'] + 1e-10)
model_sample_strong_feature['x_138/x_139'] = data['x_138'] / (data['x_139'] + 1e-10)
model_sample_strong_feature['x_138/x_137'] = data['x_138'] / (data['x_137'] + 1e-10)
model_sample_strong_feature['x_139/x_137'] = data['x_139'] / (data['x_137'] + 1e-10)
model_sample_strong_feature['x_143/x_142'] = data['x_143'] / (data['x_142'] + 1e-10)
model_sample_strong_feature['x_143/x_144'] = data['x_143'] / (data['x_144'] + 1e-10)
model_sample_strong_feature['x_144/x_142'] = data['x_144'] / (data['x_142'] + 1e-10)
model_sample_strong_feature['x_151/x_149'] = data['x_151'] / (data['x_149'] + 1e-10)
model_sample_strong_feature['x_152/x_149'] = data['x_152'] / (data['x_149'] + 1e-10)
model_sample_strong_feature['x_152/x_151'] = data['x_152'] / (data['x_151'] + 1e-10)
model_sample_strong_feature['x_154/x_153'] = data['x_154'] / (data['x_153'] + 1e-10)
model_sample_strong_feature['x_156/x_153'] = data['x_156'] / (data['x_153'] + 1e-10)
model_sample_strong_feature['x_157/x_153'] = data['x_157'] / (data['x_153'] + 1e-10)
model_sample_strong_feature['x_158/x_153'] = data['x_158'] / (data['x_153'] + 1e-10)
model_sample_strong_feature['x_159/x_153'] = data['x_159'] / (data['x_153'] + 1e-10)
model_sample_strong_feature['x_154/x_155'] = data['x_154'] / (data['x_155'] + 1e-10)
model_sample_strong_feature['x_164/x_162'] = data['x_164'] / (data['x_162'] + 1e-10)
model_sample_strong_feature['x_165/x_162'] = data['x_165'] / (data['x_162'] + 1e-10)
model_sample_strong_feature['x_165/x_164'] = data['x_165'] / (data['x_164'] + 1e-10)
model_sample_strong_feature['x_167/x_166'] = data['x_167'] / (data['x_166'] + 1e-10)
model_sample_strong_feature['x_169/x_166'] = data['x_169'] / (data['x_166'] + 1e-10)
model_sample_strong_feature['x_170/x_166'] = data['x_170'] / (data['x_166'] + 1e-10)
model_sample_strong_feature['x_171/x_166'] = data['x_171'] / (data['x_166'] + 1e-10)
model_sample_strong_feature['x_180/x_181'] = data['x_180'] / (data['x_181'] + 1e-10)
model_sample_strong_feature['x_167/x_168'] = data['x_167'] / (data['x_168'] + 1e-10)
model_sample_strong_feature['x_172/x_167'] = data['x_172'] / (data['x_167'] + 1e-10)
model_sample_strong_feature['x_177/x_175'] = data['x_177'] / (data['x_175'] + 1e-10)
model_sample_strong_feature['x_178/x_175'] = data['x_178'] / (data['x_175'] + 1e-10)
model_sample_strong_feature['x_178/x_177'] = data['x_178'] / (data['x_177'] + 1e-10)
model_sample_strong_feature['x_180/x_179'] = data['x_180'] / (data['x_179'] + 1e-10)
model_sample_strong_feature['x_182/x_179'] = data['x_182'] / (data['x_179'] + 1e-10)
model_sample_strong_feature['x_183/x_179'] = data['x_183'] / (data['x_179'] + 1e-10)
model_sample_strong_feature['x_184/x_179'] = data['x_184'] / (data['x_179'] + 1e-10)
model_sample_strong_feature['x_180/x_181'] = data['x_180'] / (data['x_181'] + 1e-10)
model_sample_strong_feature['x_185/x_180'] = data['x_185'] / (data['x_180'] + 1e-10)
model_sample_strong_feature['x_189/x_188'] = data['x_189'] / (data['x_188'] + 1e-10)
model_sample_strong_feature['x_191/x_190'] = data['x_191'] / (data['x_190'] + 1e-10)
model_sample_strong_feature['x_193/x_192'] = data['x_193'] / (data['x_192'] + 1e-10)
model_sample_strong_feature['x_195/x_194'] = data['x_195'] / (data['x_194'] + 1e-10)
model_sample_strong_feature['x_197/x_196'] = data['x_197'] / (data['x_196'] + 1e-10)
model_sample_strong_feature['x_199/x_198'] = data['x_199'] / (data['x_198'] + 1e-10)
model_sample_strong_feature['x_196/x_188'] = data['x_196'] / (data['x_188'] + 1e-10)
model_sample_strong_feature['x_192/x_188'] = data['x_192'] / (data['x_188'] + 1e-10)
model_sample_strong_feature = model_sample_strong_feature.fillna(-999)
return model_sample_strong_feature
model_sample_strong_feature_middle = get_features_middle(model_sample)
model_sample_strong_feature_final = get_features_final(model_sample)
model_sample_strong_feature_middle = model_sample_strong_feature_middle.fillna(-999)
model_sample_strong_feature_final = model_sample_strong_feature_final.fillna(-999)
model_sample_ = model_sample.fillna(-999)
# +
for rnd in [1,10,100,1000]:
print('Random Seed is: ',rnd)
train_X,test_X, train_y, test_y = train_test_split(model_sample_strong_feature_final,label,test_size=0.2,random_state=rnd)
train_X_orig = model_sample_.loc[train_X.index]
test_X_orig = model_sample_.loc[test_X.index]
train_X_middle = model_sample_strong_feature_middle.loc[train_X.index]
test_X_middle = model_sample_strong_feature_middle.loc[test_X.index]
print('5 fold no feature engineering')
pred_out_lgb, pred_out_gbdt, pred_out_rf = N_Fold_Predict(train_X_orig, train_y['y'].values, test_X_orig, cv_ = 3)
pred =pred_out_rf >= 0.215
get_score(pred, test_y['y'].values)
pred =pred_out_gbdt >= 0.215
get_score(pred, test_y['y'].values)
pred =pred_out_lgb >= 0.215
get_score(pred, test_y['y'].values)
pred =pred_out_lgb * 0.55 + 0.45 * pred_out_gbdt>= 0.215
get_score(pred, test_y['y'].values)
pred =pred_out_lgb * 0.5 + 0.5 * pred_out_gbdt>= 0.215
get_score(pred, test_y['y'].values)
pred =(pred_out_lgb * 0.5 + 0.5 * pred_out_gbdt) * 0.9 + 0.1 * pred_out_rf>= 0.215
get_score(pred, test_y['y'].values)
print('*' * 50)
print('5 fold feature engineering middle')
pred_out_lgb_middle, pred_out_gbdt_middle, pred_out_rf_middle = N_Fold_Predict(train_X_middle,train_y['y'].values, test_X_middle, cv_ = 3)
pred = pred_out_rf_middle >= 0.215
get_score(pred, test_y['y'].values)
pred =pred_out_gbdt_middle >= 0.215
get_score(pred, test_y['y'].values)
pred =pred_out_lgb_middle >= 0.215
get_score(pred, test_y['y'].values)
pred =pred_out_lgb_middle * 0.55 + 0.45 * pred_out_gbdt_middle>= 0.215
get_score(pred, test_y['y'].values)
pred =pred_out_lgb_middle * 0.5 + 0.5 * pred_out_gbdt_middle>= 0.215
get_score(pred, test_y['y'].values)
pred =(pred_out_lgb_middle * 0.5 + 0.5 * pred_out_gbdt_middle) * 0.9 + 0.1 * pred_out_rf_middle >= 0.215
get_score(pred, test_y['y'].values)
# pred =(pred_out_lgb_middle * 0.5 + 0.5 * pred_out_gbdt_middle) * 0.9 + 0.05 * (pred_out_rf_middle + pred_out_rf)>= 0.215
# get_score(pred, test_y['y'].values)
print('*' * 50)
print('5 fold feature engineering final')
pred_out_lgb_final, pred_out_gbdt_final, pred_out_rf_final = N_Fold_Predict(train_X,train_y['y'].values, test_X, cv_ = 3)
pred =pred_out_rf_final >= 0.215
get_score(pred, test_y['y'].values)
pred =pred_out_gbdt_final >= 0.215
get_score(pred, test_y['y'].values)
pred =pred_out_lgb_final >= 0.215
get_score(pred, test_y['y'].values)
pred =pred_out_lgb_final * 0.55 + 0.45 * pred_out_gbdt_final>= 0.215
get_score(pred, test_y['y'].values)
pred =pred_out_lgb_final * 0.5 + 0.5 * pred_out_gbdt_final>= 0.215
get_score(pred, test_y['y'].values)
pred =(pred_out_lgb_final * 0.5 + 0.5 * pred_out_gbdt_final) * 0.9 + 0.1 * pred_out_rf_final>= 0.215
get_score(pred, test_y['y'].values)
print('*' * 50)
print('Fire!')
print('middle and original ')
pred =((pred_out_lgb_middle * 0.5 + pred_out_lgb * 0.5 )*0.55 + 0.45 * (pred_out_gbdt_middle * 0.5 + 0.5 * pred_out_gbdt))>= 0.215
get_score(pred, test_y['y'].values)
pred =((pred_out_lgb_middle * 0.5 + pred_out_lgb* 0.5 )*0.5 + 0.5 * (pred_out_gbdt_middle *0.5 + 0.5 * pred_out_gbdt ))>= 0.215
get_score(pred, test_y['y'].values)
print('final and original ')
pred =((pred_out_lgb_final * 0.5 + pred_out_lgb * 0.5 )*0.55 + 0.45 * (pred_out_gbdt_final * 0.5 + 0.5 * pred_out_gbdt))>= 0.215
get_score(pred, test_y['y'].values)
pred =((pred_out_lgb_final * 0.5 + pred_out_lgb* 0.5 )*0.5 + 0.5 * (pred_out_gbdt_final *0.5 + 0.5 * pred_out_gbdt ))>= 0.215
get_score(pred, test_y['y'].values)
print('final and middle and original ')
pred =((pred_out_lgb_final * 0.3 + pred_out_lgb * 0.3 + 0.4 * pred_out_lgb_middle)*0.55 + 0.45 * (pred_out_gbdt_final * 0.3 + 0.3 * pred_out_gbdt + 0.4 * pred_out_gbdt_middle))>= 0.215
get_score(pred, test_y['y'].values)
pred =((pred_out_lgb_final * 1.0 /3 + pred_out_lgb* 1.0 /3 + pred_out_lgb_middle* 1.0 /3)*0.5 + 0.5 * (pred_out_gbdt_final * 1.0 /3 + 1.0 /3 * pred_out_gbdt + 1.0 /3 * pred_out_gbdt_middle))>= 0.215
get_score(pred, test_y['y'].values)
pred = 0.1 * (pred_out_rf_middle + pred_out_rf + pred_out_rf_final ) /3.0 + 0.9 * ((pred_out_lgb_final * 1.0 /3 + pred_out_lgb* 1.0 /3 + pred_out_lgb_middle* 1.0 /3)*0.5 + 0.5 * (pred_out_gbdt_final * 1.0 /3 + 1.0 /3 * pred_out_gbdt + 1.0 /3 * pred_out_gbdt_middle))>= 0.215
get_score(pred, test_y['y'].values)
pred = 0.15 * (pred_out_rf_middle + pred_out_rf + pred_out_rf_final ) /3.0 + 0.85 * ((pred_out_lgb_final * 1.0 /3 + pred_out_lgb* 1.0 /3 + pred_out_lgb_middle* 1.0 /3)*0.5 + 0.5 * (pred_out_gbdt_final * 1.0 /3 + 1.0 /3 * pred_out_gbdt + 1.0 /3 * pred_out_gbdt_middle))>= 0.215
get_score(pred, test_y['y'].values)
# -
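# The evaluation cell above repeatedly blends model probabilities with fixed weights and thresholds the result at 0.215. The core operation, sketched with made-up probabilities:

```python
import numpy as np

lgb_p = np.array([0.30, 0.10, 0.50])   # hypothetical LightGBM probabilities
gbdt_p = np.array([0.20, 0.05, 0.40])  # hypothetical GBDT probabilities
blend = 0.55 * lgb_p + 0.45 * gbdt_p   # weighted soft-vote of the two models
pred = blend >= 0.215                  # binarize at the tuned threshold
print(pred.tolist())
```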
| FinalAssignment/standard.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analysis of gaming ranking vs price move
# !pip install gamedatacrunch
# !pip install steamspypi
import steamspypi
from pygooglenews import GoogleNews
from bs4 import BeautifulSoup
from langdetect import detect
import numpy as np
from datetime import datetime, timedelta
import yfinance as yf
import matplotlib.pyplot as plt
from IPython.display import display, DisplayObject
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
import requests
import pandas as pd
import time, calendar
import os
def get_stock_data_by_interval(symbol, start_datetime, end_datetime, interval='1m'):
# Interval argument examples : '1m', '1h', '1d', '5d', '1mo', '3mo', '6mo', '1y', '2y', '5y', '10y', 'ytd', 'max'
# Start and end datetime format example : '2000-01-01 12:34:00'
start_timestamp = calendar.timegm(time.strptime(start_datetime, '%Y-%m-%d %H:%M:%S')) - 86400
end_timestamp = calendar.timegm(time.strptime(end_datetime, '%Y-%m-%d %H:%M:%S')) + 86400
url = f'https://query1.finance.yahoo.com/v8/finance/chart/{symbol}?interval={interval}&period1={start_timestamp}&period2={end_timestamp}'
result = requests.get(url)
data = result.json()
body = data['chart']['result'][0]
price_data_body = body['indicators']['quote'][0]
df = pd.DataFrame()
df['Datetime'] = body['timestamp']
df['Datetime'] = pd.to_datetime(df['Datetime'],unit='s')
df['high'] = price_data_body['high']
df['low'] = price_data_body['low']
df['volume'] = price_data_body['volume']
df['open'] = price_data_body['open']
df['close'] = price_data_body['close']
return df
get_stock_data_by_interval(symbol='TTWO', start_datetime='1996-01-01 12:34:00', end_datetime='2021-02-20 23:00:00', interval='1d')
# +
data_request = dict()
data_request['request'] = 'name'
data_request['name'] = 'Counter-Strike: Global Offensive'
data = steamspypi.download(data_request)
data
# +
import gamedatacrunch as gdc
data = gdc.load_as_steam_api()
data
| data_collection/altdata_service/news/notebooks/gaming_ranking_vs_price_move.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python2
# language: python
# name: conda_python2
# ---
# # Train an ML Model using Apache Spark in EMR and deploy in SageMaker
# In this notebook, we will see how you can train your Machine Learning (ML) model using Apache Spark and then take the trained model artifacts to create an endpoint in SageMaker for online inference. Apache Spark is one of the most popular big-data analytics platforms & it also comes with an ML library with a wide variety of feature transformers and algorithms that one can use to build an ML model.
#
# Apache Spark is designed for offline batch processing workload and is not best suited for low latency online prediction. In order to mitigate that, we will use [MLeap](https://github.com/combust/mleap) library. MLeap provides an easy-to-use Spark ML Pipeline serialization format & execution engine for low latency prediction use-cases. Once the ML model is trained using Apache Spark in EMR, we will serialize it with `MLeap` and upload to S3 as part of the Spark job so that it can be used in SageMaker in inference.
#
# After the model training is completed, we will use SageMaker **Inference** to perform predictions against this model. The underlying Docker image that we will use in inference is provided by [sagemaker-sparkml-serving](https://github.com/aws/sagemaker-sparkml-serving-container). It is a Spring based HTTP web server written following SageMaker container specifications and its operations are powered by `MLeap` execution engine.
#
# In the first segment of the notebook, we will work with `Sparkmagic (PySpark)` kernel while performing operations on the EMR cluster and in the second segment, we need to switch to `conda_python2` kernel to invoke SageMaker APIs using `sagemaker-python-sdk`.
# ## Setup an EMR cluster and connect a SageMaker notebook to the cluster
# In order to perform the steps mentioned in this notebook, you will need to have an EMR cluster running and make sure that the notebook can connect to the master node of the cluster.
#
# **This solution has been tested with Mleap 0.17, EMR 5.30.2 and Spark 2.4.5**
#
# Please follow the guide here on how to setup an EMR cluster and connect it to a notebook.
# https://aws.amazon.com/blogs/machine-learning/build-amazon-sagemaker-notebooks-backed-by-spark-in-amazon-emr/ .
#
# This notebook is written in Python 2, but you should be able to use Python 3 with minimal changes to the instructions here. The choice of Python 2 or 3 has no impact on model serialization or inference.
# ## Install additional Python dependencies and JARs in the EMR cluster
# In order to serialize a Spark model with `MLeap` and upload it to S3, we will need some additional Python dependencies and JARs present in the EMR cluster. You also need to set up your cluster with the proper AWS configuration.
# ### Configure `aws` credentials
# First, please configure the aws credentials in all the nodes using `aws configure`.
# ### Install Python dependencies
# Please download the necessary dependencies from PyPI.
#
# You can run the commands below on the EMR master node console to update the distribution, remove outdated dependencies, and download the new dependencies from PyPI. The `MLeap 0.17` used here is [compatible](https://github.com/combust/mleap) with `Spark 2.4.5`.
#
# ```bash
# sudo su -
# yum update -y
# pip uninstall python37-sagemaker-pyspark numpy
# pip install boto3 cython pandoc pypandoc sagemaker-pyspark mleap==0.17
# ```
# ### Install the `MLeap` JARs in the cluster
# You need to have the MLeap JARs in the classpath to be successfully able to use it during model serialization. Please download the JARs using `spark-shell` and overwriting the `spark.jars.ivy` location to `/usr/lib/spark/`. `spark-shell` will store it within the jars folder automatically.
#
# Ivy Default Cache set to: `/usr/lib/spark/cache`
#
# The jars for the packages stored in: `/usr/lib/spark/jars`
#
# ```bash
# sudo spark-shell --conf spark.jars.ivy=/usr/lib/spark/ --packages ml.combust.mleap:mleap-spark_2.11:0.17.0
# ```
#
# You can quit the `spark-shell` with `:quit` command. The JARs are now copied to `/usr/lib/spark/jars/` in the master node. Let's verify them. You will find `ml.combust` prefix jars in the path.
#
# ```bash
# # cd /usr/lib/spark/jars/
# # ls -l | grep 'ml.combust'
# ```
# ## Checking that the Spark connection is set up properly
# Following the steps mentioned above, we test that the Spark connection setup is done properly by invoking `%%info` in the following cell.
# %%info
# ## Importing PySpark dependencies
# Next we will import all the necessary dependencies that will be needed to execute the following cells on our Spark cluster. Please note that we are also importing the `boto3` and `mleap` modules here.
#
# You need to ensure that the import cell runs without any error to verify that you have installed the dependencies from PyPI properly. Also, this cell will provide you with a valid `SparkSession` named as `spark`.
# +
from __future__ import print_function
import os
import shutil
import boto3
import pyspark
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.regression import RandomForestRegressor
from pyspark.sql.types import StructField, StructType, StringType, DoubleType
from pyspark.ml.feature import (
StringIndexer,
VectorIndexer,
OneHotEncoderEstimator,
VectorAssembler,
IndexToString,
)
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.sql.functions import *
from mleap.pyspark.spark_support import SimpleSparkSerializer
# -
# ## Machine Learning task: Predict the age of an Abalone from its physical measurement
# The dataset is available from [UCI Machine Learning](https://archive.ics.uci.edu/ml/datasets/abalone). The aim of this task is to determine the age of an abalone (a kind of shellfish) from its physical measurements. At its core, it's a regression problem. The dataset contains several features - `sex` (categorical), `length` (continuous), `diameter` (continuous), `height` (continuous), `whole_weight` (continuous), `shucked_weight` (continuous), `viscera_weight` (continuous), `shell_weight` (continuous) and `rings` (integer). Our goal is to predict the variable `rings`, which is a good approximation for age (age is `rings` + 1.5).
#
# We'll use SparkML to pre-process the dataset (apply one or more feature transformers) and train it with the [Random Forest](https://en.wikipedia.org/wiki/Random_forest) algorithm from SparkML.
# ## Downloading dataset and uploading to your S3 bucket
# You can download the dataset from here using `wget`:
#
# https://sagemaker-sample-files.s3.amazonaws.com/datasets/tabular/uci_abalone/abalone.csv
#
# Name it as `abalone.csv` and upload into one of the S3 buckets used by you
#
# For this example, we will leverage EMR's capability to work directly with files residing in S3. Hence, after you download the data, you have to upload it to an S3 bucket in your account in the same region where your EMR cluster is running.
#
# Alternatively, you can also use the HDFS storage in your EMR cluster to save this data.
# ## Define the schema of the dataset
# In the next cell, we will define the schema of the `Abalone` dataset and provide it to Spark so that it can parse the CSV file properly.
schema = StructType(
[
StructField("sex", StringType(), True),
StructField("length", DoubleType(), True),
StructField("diameter", DoubleType(), True),
StructField("height", DoubleType(), True),
StructField("whole_weight", DoubleType(), True),
StructField("shucked_weight", DoubleType(), True),
StructField("viscera_weight", DoubleType(), True),
StructField("shell_weight", DoubleType(), True),
StructField("rings", DoubleType(), True),
]
)
# ## Read data directly from S3
# Next we will use in-built CSV reader from Spark to read data directly from S3 into a `Dataframe` and inspect its first five rows.
#
# After that, we will split the `Dataframe` into **80-20** train and validation so that we can train the model on the train part and measure its performance on the validation part.
# Please replace the bucket name with your bucket-name and the file-name/key with your file-name/key
total_df = spark.read.csv(
"s3://<your-input-bucket>/abalone/abalone.csv", header=False, schema=schema
)
total_df.show(5)
(train_df, validation_df) = total_df.randomSplit([0.8, 0.2])
# ## Define the feature transformers
# Abalone dataset has one categorical column - `sex` which needs to be converted to integer format before it can be passed to the Random Forest algorithm.
#
# For that, we are using `StringIndexer` and `OneHotEncoderEstimator` from Spark to transform the categorical column and then use a `VectorAssembler` to produce a flat one dimensional vector for each data-point so that it can be used with the Random Forest algorithm.
# +
sex_indexer = StringIndexer(inputCol="sex", outputCol="indexed_sex")
sex_encoder = OneHotEncoderEstimator(inputCols=["indexed_sex"], outputCols=["sex_vec"])
assembler = VectorAssembler(
inputCols=[
"sex_vec",
"length",
"diameter",
"height",
"whole_weight",
"shucked_weight",
"viscera_weight",
"shell_weight",
],
outputCol="features",
)
# -
# ## Define the Random Forest model and perform training
# After the data is preprocessed, we define a `RandomForestRegressor`, build a `Pipeline` comprising both the feature-transformation and training stages, and train the pipeline by calling `.fit()`.
# +
rf = RandomForestRegressor(labelCol="rings", featuresCol="features", maxDepth=6, numTrees=18)
pipeline = Pipeline(stages=[sex_indexer, sex_encoder, assembler, rf])
model = pipeline.fit(train_df)
# -
# ## Use the trained `Model` to transform train and validation dataset
# Next we will use this trained `Model` to transform our training and validation datasets, inspect some sample output, and measure the performance scores. The `Model` will apply the feature transformers on the data before passing it to the Random Forest.
# +
transformed_train_df = model.transform(train_df)
transformed_validation_df = model.transform(validation_df)
transformed_validation_df.select("prediction").show(5)
# -
# ## Evaluating the model on train and validation dataset
# Using Spark's `RegressionEvaluator`, we can calculate the `rmse` (Root-Mean-Squared-Error) on our train and validation dataset to evaluate its performance. If the performance numbers are not satisfactory, we can train the model again and again by changing parameters of Random Forest or add/remove feature transformers.
evaluator = RegressionEvaluator(labelCol="rings", predictionCol="prediction", metricName="rmse")
train_rmse = evaluator.evaluate(transformed_train_df)
validation_rmse = evaluator.evaluate(transformed_validation_df)
print("Train RMSE = %g" % train_rmse)
print("Validation RMSE = %g" % validation_rmse)
# ## Using `MLeap` to serialize the model
# By calling the `serializeToBundle` method from the `MLeap` library, we can store the `Model` in a specific serialization format that can be later used for inference by `sagemaker-sparkml-serving`.
#
# **If this step fails with an error - `JavaPackage is not callable`, it means you have not setup the MLeap JAR in the classpath properly.**
model.serializeToBundle("jar:file:/tmp/model.zip", transformed_validation_df)
# ## Convert the model to `tar.gz` format
# SageMaker expects any model format to be present in `tar.gz` format, but MLeap produces the model `zip` format. In the next cell, we unzip the model artifacts and store it in `tar.gz` format.
# +
import zipfile
with zipfile.ZipFile("/tmp/model.zip") as zf:
zf.extractall("/tmp/model")
import tarfile
with tarfile.open("/tmp/model.tar.gz", "w:gz") as tar:
tar.add("/tmp/model/bundle.json", arcname="bundle.json")
tar.add("/tmp/model/root", arcname="root")
# -
# ## Upload the trained model artifacts to S3
# At the end, we need to upload the trained and serialized model artifacts to S3 so that it can be used for inference in SageMaker.
#
# Please note down the S3 location to where you are uploading your model.
# Please replace the bucket name with your bucket name where you want to upload the model
s3 = boto3.resource("s3")
file_name = os.path.join("emr/abalone/mleap", "model.tar.gz")
s3.Bucket("<your-output-bucket-name>").upload_file("/tmp/model.tar.gz", file_name)
# ## Delete model artifacts from local disk (optional)
# If you are training multiple ML models on the same host and using the same location to save the `MLeap` serialized model, you need to delete the model on the local disk to prevent the `MLeap` library from failing with a `file already exists` error.
os.remove("/tmp/model.zip")
os.remove("/tmp/model.tar.gz")
shutil.rmtree("/tmp/model")
# ## Hosting the model in SageMaker
# Now the second phase of this Notebook begins, where we will host this model in SageMaker and perform predictions against it.
#
# **For this, please change your kernel to `conda_python2`.**
# ### Hosting a model in SageMaker requires two components
#
# * A Docker image residing in ECR.
# * A trained model residing in S3.
#
# For SparkML, Docker image for MLeap based SparkML serving has already been prepared and uploaded to ECR by SageMaker team which anyone can use for hosting. For more information on this, please see [SageMaker SparkML Serving](https://github.com/aws/sagemaker-sparkml-serving-container/).
#
# MLeap serialized model was uploaded to S3 as part of the Spark job we executed in EMR in the previous steps.
# ## Creating the endpoint for prediction
# Next we'll create the SageMaker endpoint which will be used for performing online prediction.
#
# For this, we have to create an instance of `SparkMLModel` from `sagemaker-python-sdk` which will take the location of the model artifacts that we uploaded to S3 as part of the EMR job.
# ### Passing the schema of the payload via environment variable
# The SparkML server also needs to know the schema of the payload that will be passed to it when calling the `predict` method. To spare you from passing the schema with every request, `sagemaker-sparkml-serving` lets you pass it via an environment variable while creating the model definition.
#
# We will see later that you can override this schema on a per-request basis by passing it as part of the individual request payload.
#
# This schema definition should also be passed while creating the instance of `SparkMLModel`.
# +
import json
schema = {
"input": [
{"name": "sex", "type": "string"},
{"name": "length", "type": "double"},
{"name": "diameter", "type": "double"},
{"name": "height", "type": "double"},
{"name": "whole_weight", "type": "double"},
{"name": "shucked_weight", "type": "double"},
{"name": "viscera_weight", "type": "double"},
{"name": "shell_weight", "type": "double"},
],
"output": {"name": "prediction", "type": "double"},
}
schema_json = json.dumps(schema, indent=2)
print(schema_json)
# +
from time import gmtime, strftime
import time
timestamp_prefix = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.sparkml.model import SparkMLModel
boto3_session = boto3.session.Session()
sagemaker_client = boto3.client("sagemaker")
sagemaker_runtime_client = boto3.client("sagemaker-runtime")
# Initialize sagemaker session
session = sagemaker.Session(
boto_session=boto3_session,
sagemaker_client=sagemaker_client,
sagemaker_runtime_client=sagemaker_runtime_client,
)
role = get_execution_role()
# S3 location of where you uploaded your trained and serialized SparkML model
sparkml_data = "s3://{}/{}/{}".format(
"<your-output-bucket-name>", "emr/abalone/mleap", "model.tar.gz"
)
model_name = "sparkml-abalone-" + timestamp_prefix
sparkml_model = SparkMLModel(
model_data=sparkml_data,
role=role,
sagemaker_session=session,
name=model_name,
# passing the schema defined above by using an environment
# variable that sagemaker-sparkml-serving understands
env={"SAGEMAKER_SPARKML_SCHEMA": schema_json},
)
endpoint_name = "sparkml-abalone-ep-" + timestamp_prefix
sparkml_model.deploy(
initial_instance_count=1, instance_type="ml.c4.xlarge", endpoint_name=endpoint_name
)
# -
# ### Invoking the newly created inference endpoint with a payload to transform the data
# Now we will invoke the endpoint with a valid payload that `sagemaker-sparkml-serving` can recognize. There are three ways in which input payload can be passed to the request:
#
# * Pass it as a valid CSV string. In this case, the schema passed via the environment variable will be used to determine the schema. For CSV format, every column in the input has to be a basic datatype (e.g. int, double, string) and it can not be a Spark `Array` or `Vector`.
#
# * Pass it as a valid JSON string. In this case as well, the schema passed via the environment variable will be used to infer the schema. With JSON format, every column in the input can be a basic datatype or a Spark `Vector` or `Array` provided that the corresponding entry in the schema mentions the correct value.
#
# * Pass the request in JSON format along with the schema and the data. In this case, the schema passed in the payload will take precedence over the one passed via the environment variable (if any).
# #### Passing the payload in CSV format
# We will first see how the payload can be passed to the endpoint in CSV format.
# +
from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer, JSONSerializer
from sagemaker.deserializers import JSONDeserializer
payload = "F,0.515,0.425,0.14,0.766,0.304,0.1725,0.255"
predictor = Predictor(
endpoint_name=endpoint_name, sagemaker_session=session, serializer=CSVSerializer()
)
print(predictor.predict(payload))
# -
# #### Passing the payload in JSON format
# We will now pass a different payload in JSON format.
# +
payload = {"data": ["F", 0.515, 0.425, 0.14, 0.766, 0.304, 0.1725, 0.255]}
predictor = Predictor(
endpoint_name=endpoint_name, sagemaker_session=session, serializer=JSONSerializer()
)
print(predictor.predict(payload))
# -
# #### Passing the payload with both schema and the data
# Next we will pass an input payload comprising both the schema and the data. If you look carefully, this schema is slightly different from the one we passed via the environment variable: the positions of the `length` and `sex` columns have been swapped, and so has the data. The server parses the payload with this schema and works properly.
# +
payload = {
"schema": {
"input": [
{"name": "length", "type": "double"},
{"name": "sex", "type": "string"},
{"name": "diameter", "type": "double"},
{"name": "height", "type": "double"},
{"name": "whole_weight", "type": "double"},
{"name": "shucked_weight", "type": "double"},
{"name": "viscera_weight", "type": "double"},
{"name": "shell_weight", "type": "double"},
],
"output": {"name": "prediction", "type": "double"},
},
"data": [0.515, "F", 0.425, 0.14, 0.766, 0.304, 0.1725, 0.255],
}
predictor = Predictor(
endpoint_name=endpoint_name, sagemaker_session=session, serializer=JSONSerializer()
)
print(predictor.predict(payload))
# -
# ### Deleting the Endpoint (Optional)
# Next we will delete the endpoint so that you do not incur the cost of keeping it running.
session.delete_endpoint(endpoint_name)
| sagemaker-python-sdk/sparkml_serving_emr_mleap_abalone/sparkml_serving_emr_mleap_abalone.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Solving Maze with Q-learning ( Reinforcement Learning)
# The maze generator consists of a `Maze` class and a `Field` class. It generates a square maze made of route tiles, wall/block tiles, a starting point, and a goal point. Each route tile carries a value of -1 or 0, which is the reward you receive for stepping on it, so stepping on a -1 tile costs you 1 point. The wall and block tiles, shown as #, cannot be entered and must be bypassed. The starting point, marked S, is where you begin the maze, and the goal point, shown as 50, is where you head; reaching the goal earns 50 points.
# ### Objective of this notebook is to solve self-made maze with Q-learning.
# ### The maze is in square shape, consists of start point, goal point and tiles in the mid of them.
# ### Each tile has numericals as its point. In other words, if you step on to the tile with -1, you get 1 point subtracted.
# ### The maze has blocks to prevent you from taking the route.
import numpy as np
import pandas as pds
import random
import copy
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam, RMSprop
from collections import deque
from keras import backend as K
# # Maze Class
class Maze(object):
def __init__(self, size=10, blocks_rate=0.1):
self.size = size if size > 3 else 10
self.blocks = int((size ** 2) * blocks_rate)
self.s_list = []
self.maze_list = []
self.e_list = []
def create_mid_lines(self, k):
if k == 0: self.maze_list.append(self.s_list)
elif k == self.size - 1: self.maze_list.append(self.e_list)
else:
tmp_list = []
for l in range(0,self.size):
if l == 0: tmp_list.extend("#")
elif l == self.size-1: tmp_list.extend("#")
else:
a = random.randint(-1, 0)
tmp_list.extend([a])
self.maze_list.append(tmp_list)
def insert_blocks(self, k, s_r, e_r):
b_y = random.randint(1, self.size-2)
b_x = random.randint(1, self.size-2)
if [b_y, b_x] == [1, s_r] or [b_y, b_x] == [self.size - 2, e_r]: k = k-1
else: self.maze_list[b_y][b_x] = "#"
def generate_maze(self):
        s_r = random.randint(1, self.size // 2 - 1)  # integer division: randint needs int bounds
for i in range(0, self.size):
if i == s_r: self.s_list.extend("S")
else: self.s_list.extend("#")
start_point = [0, s_r]
        e_r = random.randint(self.size // 2 + 1, self.size - 2)  # integer division: randint needs int bounds
for j in range(0, self.size):
if j == e_r: self.e_list.extend([50])
else: self.e_list.extend("#")
goal_point = [self.size - 1, e_r]
for k in range(0, self.size):
self.create_mid_lines(k)
for k in range(self.blocks):
self.insert_blocks(k, s_r, e_r)
return self.maze_list, start_point, goal_point
# # Maze functions
class Field(object):
def __init__(self, maze, start_point, goal_point):
self.maze = maze
self.start_point = start_point
self.goal_point = goal_point
self.movable_vec = [[1,0],[-1,0],[0,1],[0,-1]]
def display(self, point=None):
field_data = copy.deepcopy(self.maze)
if not point is None:
y, x = point
field_data[y][x] = "@@"
else:
point = ""
for line in field_data:
print ("\t" + "%3s " * len(line) % tuple(line))
def get_actions(self, state):
movables = []
if state == self.start_point:
y = state[0] + 1
x = state[1]
a = [[y, x]]
return a
else:
for v in self.movable_vec:
y = state[0] + v[0]
x = state[1] + v[1]
                if not (0 < x < len(self.maze) and
                        0 <= y <= len(self.maze) - 1 and
                        self.maze[y][x] != "#" and
                        self.maze[y][x] != "S"):
continue
movables.append([y,x])
if len(movables) != 0:
return movables
else:
return None
def get_val(self, state):
y, x = state
if state == self.start_point: return 0, False
else:
v = float(self.maze[y][x])
if state == self.goal_point:
return v, True
else:
return v, False
# # Generate a maze
size = 10
barrier_rate = 0.1
maze_1 = Maze(size, barrier_rate)
maze, start_point, goal_point = maze_1.generate_maze()
maze_field = Field(maze, start_point, goal_point)
maze_field.display()
# # Solving the maze in Q-learning
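# The solver below implements the standard tabular Q-learning update (this is exactly what `update_Qvalue` computes; `alpha` is the learning rate and `gamma` the discount factor):
#
# $$Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]$$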
class QLearning_Solver(object):
def __init__(self, maze, display=False):
self.Qvalue = {}
self.Field = maze
self.alpha = 0.2
self.gamma = 0.9
self.epsilon = 0.2
self.steps = 0
self.score = 0
self.display = display
def qlearn(self, greedy_flg=False):
state = self.Field.start_point
while True:
if greedy_flg:
self.steps += 1
action = self.choose_action_greedy(state)
print("current state: {0} -> action: {1} ".format(state, action))
if self.display:
self.Field.display(action)
reward, tf = self.Field.get_val(action)
self.score = self.score + reward
print("current step: {0} \t score: {1}\n".format(self.steps, self.score))
if tf == True:
print("Goal!")
break
else:
action = self.choose_action(state)
if self.update_Qvalue(state, action):
break
else:
state = action
def update_Qvalue(self, state, action):
Q_s_a = self.get_Qvalue(state, action)
mQ_s_a = max([self.get_Qvalue(action, n_action) for n_action in self.Field.get_actions(action)])
r_s_a, finish_flg = self.Field.get_val(action)
q_value = Q_s_a + self.alpha * ( r_s_a + self.gamma * mQ_s_a - Q_s_a)
self.set_Qvalue(state, action, q_value)
return finish_flg
def get_Qvalue(self, state, action):
state = (state[0],state[1])
action = (action[0],action[1])
try:
return self.Qvalue[state][action]
except KeyError:
return 0.0
def set_Qvalue(self, state, action, q_value):
state = (state[0],state[1])
action = (action[0],action[1])
self.Qvalue.setdefault(state,{})
self.Qvalue[state][action] = q_value
    def choose_action(self, state):
        # epsilon-greedy: explore with probability epsilon, otherwise act greedily
        if random.random() < self.epsilon:
            return random.choice(self.Field.get_actions(state))
        else:
            return self.choose_action_greedy(state)
def choose_action_greedy(self, state):
best_actions = []
max_q_value = -100
for a in self.Field.get_actions(state):
q_value = self.get_Qvalue(state, a)
if q_value > max_q_value:
best_actions = [a,]
max_q_value = q_value
elif q_value == max_q_value:
best_actions.append(a)
return random.choice(best_actions)
def dump_Qvalue(self):
print("##### Dump Qvalue #####")
for i, s in enumerate(self.Qvalue.keys()):
for a in self.Qvalue[s].keys():
print("\t\tQ(s, a): Q(%s, %s): %s" % (str(s), str(a), str(self.Qvalue[s][a])))
if i != len(self.Qvalue.keys())-1:
print('\t------------------state action reward')
learning_count = 1000
QL_solver = QLearning_Solver(maze_field, display=True)
for i in range(learning_count):
QL_solver.qlearn()
QL_solver.dump_Qvalue()
QL_solver.qlearn(greedy_flg=True)
| Lab7_mazeQlearning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
import pandas as pd
# Set some Pandas options
pd.set_option('html', False)
pd.set_option('max_columns', 30)
pd.set_option('max_rows', 10)
data = pd.read_hdf('/var/datasets/dshs/CD2007Q1/reduced_PUDF_base1q2007.h5','data')
# Based on the example in
# http://www.christianpeccei.com/zipmap/
#
# ZIP area data downloaded from
# ftp://ftp.cs.brown.edu/u/spr/zipdata
#
# The mapping from states to numbers can be seen here:
# https://github.com/ssoper/zip-code-boundaries/blob/master/raw.html
import os.path
if not os.path.exists('zipdata/zt06_d00_ascii.zip'):
# !wget -P zipdata ftp://ftp.cs.brown.edu/u/spr/zipdata/zt06_d00_ascii.zip
# !unzip -d zipdata zipdata/zt06_d00_ascii.zip
if not os.path.exists('zipdata/zt48_d00_ascii.zip'):
# !wget -P zipdata ftp://ftp.cs.brown.edu/u/spr/zipdata/zt48_d00_ascii.zip
# !unzip -d zipdata zipdata/zt48_d00_ascii.zip
def read_ascii_boundary(filestem):
'''
Reads polygon data from an ASCII boundary file.
Returns a dictionary with polygon IDs for keys. The value for each
key is another dictionary with three keys:
'name' - the name of the polygon
'polygon' - list of (longitude, latitude) pairs defining the main
polygon boundary
'exclusions' - list of lists of (lon, lat) pairs for any exclusions in
the main polygon
'''
metadata_file = filestem + 'a.dat'
data_file = filestem + '.dat'
# Read metadata
lines = [line.strip().strip('"') for line in open(metadata_file)]
polygon_ids = lines[::6]
polygon_names = lines[2::6]
polygon_data = {}
for polygon_id, polygon_name in zip(polygon_ids, polygon_names):
# Initialize entry with name of polygon.
# In this case the polygon_name will be the 5-digit ZIP code.
polygon_data[polygon_id] = {'name': polygon_name}
del polygon_data['0']
# Read lon and lat.
f = open(data_file)
for line in f:
fields = line.split()
if len(fields) == 3:
# Initialize new polygon
polygon_id = fields[0]
polygon_data[polygon_id]['polygon'] = []
polygon_data[polygon_id]['exclusions'] = []
elif len(fields) == 1:
# -99999 denotes the start of a new sub-polygon
if fields[0] == '-99999':
polygon_data[polygon_id]['exclusions'].append([])
else:
# Add lon/lat pair to main polygon or exclusion
lon = float(fields[0])
lat = float(fields[1])
if polygon_data[polygon_id]['exclusions']:
polygon_data[polygon_id]['exclusions'][-1].append((lon, lat))
else:
polygon_data[polygon_id]['polygon'].append((lon, lat))
return polygon_data
import csv
from pylab import *
# From:
# http://mpld3.github.io/
import mpld3
if True:
mpld3.enable_notebook()
# +
reduced = data[['Pat_ZIP', 'Total_Charges']]
chargesbyzip = reduced.groupby('Pat_ZIP').mean()
countbyzip = reduced.groupby('Pat_ZIP').count()
# +
#def makezipfigure(series, zipstem = 'zipdata/zt48_d00'):
series = chargesbyzip
series = countbyzip
zipstem = 'zipdata/zt48_d00'
maxvalue = series.max().values[0]
valuename = series.keys()[0]
# Read in ZIP code boundaries for Texas
d = read_ascii_boundary(zipstem)
# Create figure and two axes: one to hold the map and one to hold
# the colorbar
figure(figsize=(5, 5), dpi=100)
map_axis = axes([0.0, 0.0, 0.8, 0.9])
cb_axis = axes([0.83, 0.1, 0.03, 0.8])
#map_axis = axes([0.0, 0.0, 4.0, 4.5])
#cb_axis = axes([4.15, 0.5, 0.15, 3.0])
#map_axis = axes([0.0, 0.0, 1.6, 1.8])
#cb_axis = axes([1.66, 0.2, 0.06, 1.2])
# Define colormap to color the ZIP codes.
# You can try changing this to cm.Blues or any other colormap
# to get a different effect
cmap = cm.PuRd
# Create the map axis
axes(map_axis)
gca().set_axis_off()
# Loop over the ZIP codes in the boundary file
for polygon_id in d:
polygon_data = array(d[polygon_id]['polygon'])
zipcode = d[polygon_id]['name']
try:
value = series.xs(zipcode).values[0]
# Define the color for the ZIP code
fc = cmap(float(value) / maxvalue)
except:
fc = (1.0, 1.0, 1.0, 1.0)
edgecolor = [ square(min(fc[:3])) ]*3 + [0.5]
# Draw the ZIP code
patch = Polygon(array(polygon_data), facecolor=fc,
edgecolor=edgecolor, linewidth=.1)
# patch = Polygon(array(polygon_data), facecolor=fc,
# edgecolor=(.5, .5, .5, 1), linewidth=.2)
gca().add_patch(patch)
gca().autoscale()
title(valuename + " per ZIP Code in Texas")
# Draw colorbar
cb = mpl.colorbar.ColorbarBase(cb_axis, cmap=cmap,
norm = mpl.colors.Normalize(vmin=0, vmax=maxvalue))
cb.set_label(valuename)
savefig('texas.pdf', dpi=100)
# Change all fonts to Arial
#for o in gcf().findobj(matplotlib.text.Text):
# o.set_fontname('Arial')
# -
# #### TODO: Compare map to census population.
# #### TODO: Use hospital location and zipcode distances to calculate distance to hospital per zipcode
# #### TODO: Consider 3 letter zipcodes for smoother results
# #### TODO: Consider claim density: number/[region area]
# There might be an algorithm to calculate the area of the convex hull
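The convex-hull-area TODO can be sketched without extra dependencies using Andrew's monotone chain plus the shoelace formula (with SciPy available, `scipy.spatial.ConvexHull(points).volume` gives the 2-D hull area directly; `points` below is a hypothetical stand-in for one ZIP polygon's (lon, lat) list):

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Cross product of vectors o->a and o->b; sign gives turn direction.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(vertices):
    """Shoelace formula for a simple polygon given as an ordered vertex list."""
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1]
            - vertices[(i + 1) % n][0] * vertices[i][1] for i in range(n))
    return abs(s) / 2.0

points = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 0.5)]
print(polygon_area(convex_hull(points)))  # area of the unit square: 1.0
```

Dividing a region's claim count by this area would give the claim density mentioned in the last TODO.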
| 21. Zipcode Visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Iterable variable
x=[1,2,3,4,5]
y={'a':1,'b':2,'c':3}
z='Hello'
len(x)
len(y)
len(z)
# # for, indentation
x=[1,2,3,4,5]
for i in x:
print(i)
y={'a':1,'b':2,'c':3}
for k,v in y.items():
print(k,v)
z='Hello'
for i in z:
print(i)
# # if-else
1==1 #1 is equal to 1
2==5 #2 is equal to 5
2<5 #2 is less than 5
2<=2 #2 is less than or equal to 2
2<2 #2 is less than 2
2!=2 #2 is not equal to 2
3!=2 #3 is not equal to 2
x=5
if x==5:
print('x is 5')
else:
print('x is not 5')
x=4
if x==4:
print('x is 4')
elif x==5:
print('x is 5')
else:
print('x is neither 4 nor 5')
# # String
x='hello'
type(x)
x='''hello'''
type(x)
x="hello"
type(x)
x="""hello"""
type(x)
# # String Operator
x='Hello'
x.upper()
x='Hello'
x.lower()
x.capitalize()
# # split()
x='There are apples and bananas'
x.split('a')
x.split()
x.lower()
x.lower().split()
# # join()
x=["There","are","apples","and","bananas"]
"".join(x)
" ".join(x)
"_".join(x)
# # wordcount
x='''
A string datatype is a datatype modeled on the idea of a formal string.
String is such an important and useful datatype.
String is implemented in nearly every programming language.
'''
count={}
for i in x.lower().split():
count[i]=count.get(i,0)+1
count
| dsi200_demo/04 Iterable, For-Loop, String.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data import for Schiller_2020/Mayr datasets:
import scanpy as sc
import pandas as pd
import numpy as np
from anndata import AnnData, concat
adata = sc.read("../../../data/HLCA_extended/extension_datasets/raw/Schiller/munich_cohort_human_dataset.h5ad")
adata_raw = AnnData(X=adata.layers["counts"], obs=adata.obs, var=adata.var)
adata_raw
# # Ensure consistent naming
# + tags=[]
adata_raw.obs.rename(columns={"patient_id": "subject_ID",
"health_status": "condition",
"cell_state_label": "original_celltype_ann",
"cell_type": "low_res_cell_type"}, inplace=True)
# -
adata_raw.obs["sample"] = adata_raw.obs.subject_ID
adata_raw.obs["study"] = "Schiller2020"
adata_raw.obs["dataset"] = adata_raw.obs.study
# + [markdown] tags=[]
# # Remove unnecessary obs columns
# -
adata_raw.obs.drop(columns=["n_counts", "n_genes", "percent.mito", "louvain", "tissue_label"], inplace=True)
adata_raw.var.drop(columns=["n_cells", "highly_variable", "n_counts"], inplace=True)
# # Add age & sex & disease
# +
# Per-subject metadata from the study, mapped onto obs; unmapped subjects get 'undefined'.
age_map = {
    "muc10380": '73', "muc10381": '51', "muc3843": '63', "muc4658": '59',
    "muc4659": '75', "muc5103": '60', "muc5104": '76', "muc5105": '65',
    "muc5212": '62', "muc5213": '23', "muc5288": '76', "muc5289": '40',
    "muc8257": '40', "muc8258": '40', "muc9826": '57', "muc9832": '81',
    "muc9833": '52',
}
sex_map = {
    "muc10380": 'M', "muc10381": 'M', "muc3843": 'M', "muc4658": 'M',
    "muc4659": 'F', "muc5103": 'M', "muc5104": 'M', "muc5105": 'M',
    "muc5212": 'M', "muc5213": 'F', "muc5288": 'M', "muc5289": 'F',
    "muc8257": 'F', "muc8258": 'F', "muc9826": 'M', "muc9832": 'F',
    "muc9833": 'F',
}
disease_map = {
    "muc10380": 'Donor', "muc10381": 'IPF', "muc3843": 'Donor', "muc4658": 'Donor',
    "muc4659": 'Donor', "muc5103": 'Donor', "muc5104": 'Donor', "muc5105": 'Donor',
    "muc5212": 'Donor', "muc5213": 'Donor', "muc5288": 'Donor', "muc5289": 'COPD',
    "muc8257": 'EAA', "muc8258": 'EAA', "muc9826": 'IPF', "muc9832": 'Donor',
    "muc9833": 'Donor',
}
adata_raw.obs['age'] = adata_raw.obs['subject_ID'].astype(str).map(age_map).fillna('undefined')
adata_raw.obs['sex'] = adata_raw.obs['subject_ID'].astype(str).map(sex_map).fillna('undefined')
adata_raw.obs['disease'] = adata_raw.obs['subject_ID'].astype(str).map(disease_map).fillna('undefined')
# -
# # Remove duplicated subject
# MLT211 was harvested twice, so muc8257 and muc8258 come from the same subject:
# subject_ID -> muc8257 for both
# sample     -> muc8257_1 and muc8257_2
adata_raw.obs.subject_ID.replace({"muc8258": "muc8257"}, inplace=True)
adata_raw.obs["sample"].replace({"muc8257": "muc8257_1",
"muc8258": "muc8257_2"}, inplace=True)
# # Subset to 2000 HVGs
def subset_and_pad_adata(adata, gene_set):
"""
This function uses a gene list provided as a Pandas dataframe with gene symbols and
Ensembl IDs and subsets a larger Anndata object to only the genes in this list. If
Not all genes are found in the AnnData object, then zero-padding is performed.
"""
# Example inputs:
# genes_filename = '/storage/groups/ml01/workspace/hlca_lisa.sikkema_malte.luecken/genes_for_mapping.csv'
# data_filename = '/storage/groups/ml01/workspace/hlca_lisa.sikkema_malte.luecken/ready/adams.h5ad'
# gene_set = pd.read_csv(genes_filename)
# adata = sc.read(data_filename)
# Prep objects
if 'gene_symbols' in gene_set.columns:
gene_set.index = gene_set['gene_symbols']
else:
raise ValueError('The input gene list was not of the expected type!\n'
'Gene symbols and ensembl IDs are expected in column names:\n'
'\t`gene_symbols` and `Unnamed: 0`')
# Subset adata object
common_genes = [gene for gene in gene_set['gene_symbols'].values if gene in adata.var_names]
if len(common_genes) == 0:
print("WARNING: YOU SHOULD PROBABLY SWITCH YOUR ADATA.VAR INDEX COLUMN TO GENE NAMES"
" RATHER THAN IDS! No genes were recovered.")
return
adata_sub = adata[:,common_genes].copy()
# Pad object with 0 genes if needed
if len(common_genes) < len(gene_set):
diff = len(gene_set) - len(common_genes)
print(f'not all genes were recovered, filling in 0 counts for {diff} missing genes...')
# Genes to pad with
genes_to_add = set(gene_set['gene_symbols'].values).difference(set(adata_sub.var_names))
new_var = gene_set.loc[genes_to_add]
if 'Unnamed: 0' in new_var.columns:
# Assumes the unnamed column are ensembl values
new_var['ensembl'] = new_var['Unnamed: 0']
del new_var['Unnamed: 0']
df_padding = pd.DataFrame(data=np.zeros((adata_sub.shape[0],len(genes_to_add))), index=adata_sub.obs_names, columns=new_var.index)
adata_padding = sc.AnnData(df_padding, var=new_var)
# Concatenate object
adata_sub = concat([adata_sub, adata_padding], axis=1, join='outer', index_unique=None, merge='unique')
# Ensure ensembl IDs are available
adata_sub.var['ensembl'] = gene_set['Unnamed: 0']
return adata_sub
# + tags=[]
gene_set = pd.read_csv("genes_for_mapping.csv")
adata_raw_subsetted = subset_and_pad_adata(adata_raw, gene_set)
# -
# # Write out object
adata_raw.write("../../../data/HLCA_extended/extension_datasets/ready/full/mayr.h5ad")
adata_raw_subsetted.write("../../../data/HLCA_extended/extension_datasets/ready/subsetted/mayr_sub.h5ad")
| notebooks/3_atlas_extension/HLCA_extension_data_preprocessing/Schiller_2020_mayr.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python3
# ---
# <h1>Table of contents</h1>
#
# <div class="alert-info" style="margin-top: 20px">
# <ol>
# <li><a href="#load_dataset">Load the Cancer data</a></li>
# <li><a href="#modeling">Modeling</a></li>
# <li><a href="#evaluation">Evaluation</a></li>
# <li><a href="#practice">Practice</a></li>
# </ol>
# </div>
# <br>
# <hr>
# +
# Import library
import pandas as pd
import pylab as pl
import numpy as np
import scipy.optimize as opt
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
# %matplotlib inline
import matplotlib.pyplot as plt
print('imported')
# -
# ## Load the data
# download the data
# !wget -O cell_samples.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/cell_samples.csv
# read data from the .csv file
cell_df = pd.read_csv("cell_samples.csv")
cell_df.head()
# <h2 id="load_dataset">Load the Cancer data</h2>
# The dataset consists of several hundred human cell sample records, each of which contains the values of a set of cell characteristics. The fields in each record are:
#
# | Field name | Description |
# | ----------- | --------------------------- |
# | ID          | Patient identifier          |
# | Clump | Clump thickness |
# | UnifSize | Uniformity of cell size |
# | UnifShape | Uniformity of cell shape |
# | MargAdh | Marginal adhesion |
# | SingEpiSize | Single epithelial cell size |
# | BareNuc | Bare nuclei |
# | BlandChrom | Bland chromatin |
# | NormNucl | Normal nucleoli |
# | Mit | Mitoses |
# | Class | Benign or malignant |
#
# <br>
# The ID field contains the patient identifiers. The characteristics of the cell samples from each patient are contained in fields Clump to Mit. The values are graded from 1 to 10, with 1 being the closest to benign (harmless).
#
# The Class field contains the diagnosis, as confirmed by separate medical procedures, as to whether the samples are benign (value = 2) or malignant (value = 4).
#
# Let's look at the distribution of the classes based on Clump thickness and Uniformity of cell size:
# +
ax = cell_df[cell_df['Class'] == 4][0:50].plot(kind='scatter', x='Clump', y='UnifSize', color='green', label='malignant');
cell_df[cell_df['Class'] == 2][0:50].plot(kind='scatter', x='Clump', y='UnifSize', color='darkred', label='benign', ax=ax);
plt.show()
# -
# ## Data pre-processing and selection
# look at columns data types
cell_df.dtypes
# It looks like the **BareNuc** column includes some values that are not numerical. Convert those rows into 'int'.
# +
cell_df = cell_df[pd.to_numeric(cell_df['BareNuc'], errors='coerce').notnull()]
cell_df['BareNuc'] = cell_df['BareNuc'].astype('int')
cell_df.dtypes
# -
feature_df = cell_df[['Clump', 'UnifSize', 'UnifShape', 'MargAdh', 'SingEpiSize', 'BareNuc', 'BlandChrom', 'NormNucl', 'Mit']]
X = np.asarray(feature_df)
X[0:5]
# We want the model to predict the value of Class (that is, benign (=2) or malignant (=4)). As this field can have one of only two possible values, we need to change its measurement level to reflect this.
# +
cell_df['Class'] = cell_df['Class'].astype('int')
y = np.asarray(cell_df['Class'])
y [0:5]
# -
# ## Train/Test dataset
# +
# Train/Test split 70/30
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
# -
# ## Modeling (SVM with Scikit-learn )
# The SVM algorithm offers a choice of kernel functions for performing its processing. Mapping data into a higher dimensional space is called kernelling. The mathematical function used for the transformation is known as the kernel function, and can be of different types, such as:
#
# ```
# 1.Linear
# 2.Polynomial
# 3.Radial basis function (RBF)
# 4.Sigmoid
# ```
#
# kernel : {'linear', 'poly', 'rbf', 'sigmoid', 'precomputed'}, default='rbf'
#
# Each of these functions has its characteristics, its pros and cons, and its equation, but as there's no easy way of knowing which function performs best with any given dataset, we usually choose different functions in turn and compare the results.
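That try-them-in-turn comparison can be sketched with cross-validation (shown on a synthetic dataset rather than the cell samples so it runs standalone; `X_demo`/`y_demo` are illustrative names):

```python
from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 9 cell-sample features.
X_demo, y_demo = make_classification(n_samples=300, n_features=9, random_state=4)

scores = {}
for kernel in ('linear', 'poly', 'rbf', 'sigmoid'):
    clf = svm.SVC(kernel=kernel)
    # Mean 5-fold cross-validated accuracy for this kernel.
    scores[kernel] = cross_val_score(clf, X_demo, y_demo, cv=5).mean()

for kernel, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{kernel:8s} {score:.3f}")
```

The same loop applied to `X_train`/`y_train` would tell you which kernel suits this dataset best.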
from sklearn import svm
clf = svm.SVC(kernel='linear')
clf.fit(X_train, y_train)
# use model to predict new values
yhat = clf.predict(X_test)
yhat [0:5]
# ## Evaluation
from sklearn.metrics import classification_report, confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# +
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, yhat, labels=[2,4])
np.set_printoptions(precision=2)
print (classification_report(y_test, yhat))
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Benign(2)','Malignant(4)'],normalize= False, title='Confusion matrix')
# -
# use the f1_score from sklearn library
from sklearn.metrics import f1_score
print("Average F1-Score: %.4f" %f1_score(y_test, yhat, average='weighted'))
# try jaccard index for accuracy
# (jaccard_similarity_score was deprecated and removed in scikit-learn 0.23;
#  on newer versions use sklearn.metrics.jaccard_score with pos_label/average set)
from sklearn.metrics import jaccard_similarity_score
print("Jaccard Score: %.4f" % jaccard_similarity_score(y_test, yhat))
# try accuracy_score
from sklearn.metrics import accuracy_score
print("Accuracy-Score: %.4f" % accuracy_score(y_test, yhat))
| SVM ( Support Vector Machines).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Classification of Iris
# ## Package imports
# +
# For building neural networks.
import keras as kr
# For interacting with data sets.
import pandas as pd
# For encoding categorical variables.
import sklearn.preprocessing as pre
# For splitting into training and test sets.
import sklearn.model_selection as mod
# -
# Load the iris data set from a URL.
df = pd.read_csv("https://raw.githubusercontent.com/uiuc-cse/data-fa14/gh-pages/data/iris.csv")
df
# ## Inputs
# Separate the inputs from the rest of the variables.
inputs = df[['petal_length', 'petal_width', 'sepal_length', 'sepal_width']]
inputs
# +
# Encode the classes as above.
encoder = pre.LabelBinarizer()
encoder.fit(df['species'])
outputs = encoder.transform(df['species'])
outputs
# -
# ## Idea
# The neural network will turn four floating point inputs into three "floating point" outputs.
#
# [5.1,3.5,1.4,0.2]→[0.8,0.19,0.01]
#
# [5.1,3.5,1.4,0.2]→[1,0,0]
# ## Build model
# +
model = kr.models.Sequential()
# Add a hidden layer with 64 neurons and an input layer with 4.
model.add(kr.layers.Dense(units=64, activation='relu', input_dim=4))
# Add a three neuron output layer.
model.add(kr.layers.Dense(units=3, activation='softmax'))
# Build the graph.
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
# -
# ## Split
# Split the inputs and outputs into training and test sets.
inputs_train, inputs_test, outputs_train, outputs_test = mod.train_test_split(inputs, outputs, test_size=0.5)
inputs_test.iloc[0]
model.predict(inputs_test.values[0:1])  # .as_matrix() was removed in pandas 1.0
# ## Train
# Train the neural network.
model.fit(inputs_train, outputs_train, epochs=15, batch_size=10)
model.predict(inputs_test.values[0:1])  # .as_matrix() was removed in pandas 1.0
# Have the network predict the classes of the test inputs.
predictions = model.predict(inputs_test)
predictions_labels = encoder.inverse_transform(predictions)
predictions_labels
# ## Evaluate
# Compare the predictions to the actual classes.
predictions_labels == encoder.inverse_transform(outputs_test)
(predictions_labels == encoder.inverse_transform(outputs_test)).sum()
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Create SVI Data Dictionary
# This code just parses the SVI 2016 documentation to grab the variable names and descriptions and saves it to a json file "svi_dictionary.json"
with open('../data/raw/SVI2016_US_raw_data_dictionary.txt','r') as f:
data = f.readlines()
svi_dict = {}
for idx,line in enumerate(data):
    if line.startswith(('E_', 'EP_', 'EPL_', 'F_')):
val = line.split()[0]
if val in svi_dict:
continue
if line.split(' ')[0].endswith('\n'):
svi_dict[val] = data[idx+1].strip()
else:
svi_dict[val] = line.split(' ',1)[1].strip()
svi_dict
import json
with open('../data/processed/svi_dictionary.json','w') as f:
json.dump(svi_dict,f,indent=4)
| notebooks/thw_9.0_create_svi_dictionary.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.7 (''venv'': venv)'
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import seaborn as sns
import statistics
import math
import scipy.stats as stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
import statsmodels.stats.api as sms
from patsy import dmatrices
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.compat import lzip
# +
# Import the CSV downloaded from: http://staff.pubhealth.ku.dk/~tag/Teaching/share/data/Bodyfat.html
path = 'C:/Users/2104734084/Documents/Modelos_MachineLearning/regressao-linear-multipla/data/'
file_name = 'Bodyfat.csv'
df = pd.read_csv(f"{path}{file_name}")
df
# +
# Descriptive statistics for the variables bodyfat and Wrist
# -
df['bodyfat'].describe()
df['Wrist'].describe()
sns.boxplot(x=df['bodyfat'])
# Note that there are strongly discrepant values, so we need to assess whether
# these observations should be removed.
sns.boxplot(x=df['Wrist'])
df['bodyfat'].hist()
df['Wrist'].hist()
# +
# Linear regression is built on the error term. The parameters are estimated by the
# method of least squares (we want to err as little as possible).
# A parameter is a number that we arrive at through a mathematical method.
# Linear regression equation: Y = B0 + B1*x1 + B2*x2 + ... + Bn*xn
# A nice property of linear regression is that the parameters are interpretable:
# - B1: each one-unit increase in x1 gives an estimated change of B1 in Y
# How good is the model? Quality metrics for regression models:
# - R2: how much of the variability in the data the model explains
# - MSE: mean squared error (not directly interpretable)
# - RMSE: if it is 50, the model is off by 50 on average, above or below.
# -
# Check whether the data have a linear relationship with the y being estimated
sns.scatterplot(data=df, x='Wrist', y='bodyfat')
# +
# Looking at the plot above, the points are widely scattered, so there is a lot of variability.
# We should expect a low R²; a high R² corresponds to points lying very close to the fitted line.
# +
# Running the linear regression
# -
formula= 'bodyfat~Wrist'
model = smf.ols(formula=formula, data=df).fit()
print(model.summary())
# +
# As shown above, R² (R-squared) is extremely low, which means this variable alone
# is not a good predictor in this context.
# We need creativity to build variables that make sense for the problem.
# +
# Multiple Linear Regression
# Linear regression follows the principle of parsimony (do more with less), so the fewer
# variables the better. Adjusted R2 tells us which model is better, but the best practice
# is to use RMSE to choose the best model.
# -
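Since RMSE is the suggested yardstick, here is a minimal sketch of computing it for a fitted `smf.ols` model (shown on synthetic data so it runs standalone; with the Bodyfat model above you would reuse `model.resid` instead):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data with known noise scale 0.5.
rng = np.random.default_rng(0)
demo = pd.DataFrame({'x': rng.normal(size=200)})
demo['y'] = 2.0 + 3.0 * demo['x'] + rng.normal(scale=0.5, size=200)

fit = smf.ols('y ~ x', data=demo).fit()
# RMSE: root of the mean squared residual; same units as y.
rmse = np.sqrt(np.mean(fit.resid ** 2))
print(round(rmse, 3))  # close to 0.5, the true noise scale
```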
#
# +
# Building another model with other variables
# -
formula= 'bodyfat~Abdomen+Biceps'
model = smf.ols(formula=formula, data=df).fit()
print(model.summary())
# +
# We got an R2 of 0.66, meaning the model explains roughly 66% of the variability in the data,
# while the previous model explained only 12%.
# Comparing the two models by adjusted R2 (0.11 before versus 0.66 now),
# the second model is better.
# +
# So which variables are significant? Should we include all of them?
# F: global test; asks whether any variable is useful at all
# t: individual test; tests each parameter one by one
# +
# If a parameter is zero, that variable is not significant for the model.
# The t-test is reported in the P>|t| column; if the parameter differs from zero,
# the variable is important for the model.
# +
# Model using all variables, so we can inspect each of them.
# -
df.columns
formula= 'bodyfat~Age+Weight+Height+Neck+Chest+Abdomen+Hip+Thigh+Knee+Ankle+Biceps+Forearm+Wrist'
model = smf.ols(formula=formula, data=df).fit()
print(model.summary())
# +
# To write the final equation, we need to drop the non-significant variables.
# The larger the p-value (P>|t|), the less useful the variable.
# In the output above, the least useful variable is Chest.
# +
# So to arrive at the equation, we remove the least useful variable and refit the model,
# then drop the next least useful parameter and refit again, repeating this loop.
# +
# We need to choose a variable-selection method:
# Forward: the party starts empty; the bouncer looks at who is coolest and lets them in,
# then picks the second coolest and lets them in, and so on.
# The choice is based on the p-value.
# +
# Backward: the party starts with everyone in; the bouncer removes the troublemakers one by one.
# +
# Stepwise: start with no variables, add them one at a time and watch how they behave; if any start causing trouble, remove them.
# +
# Another strategy would be to look at the error criterion (AIC) and check whether each
# variable improves or worsens it.
# -
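The backward strategy described above can be sketched as a small loop: refit, find the largest p-value, drop that variable, repeat until everything left is significant (synthetic data here; with the Bodyfat model you would start from the full formula):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
demo = pd.DataFrame(rng.normal(size=(n, 3)), columns=['x1', 'x2', 'x3'])
demo['y'] = 1.0 + 2.0 * demo['x1'] + rng.normal(size=n)  # only x1 matters

features = ['x1', 'x2', 'x3']
while features:
    fit = smf.ols('y ~ ' + ' + '.join(features), data=demo).fit()
    pvalues = fit.pvalues.drop('Intercept')
    worst = pvalues.idxmax()
    if pvalues[worst] <= 0.05:  # everything left is significant: stop
        break
    features.remove(worst)      # drop the least useful variable and refit

print(features)  # x1 should always survive; the noise columns usually get dropped
```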
| src/model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2 with Spark 2.1
# language: python
# name: python2-spark21
# ---
# 
#
# # Introduction to NLP: Basic Concepts<a class="anchor" id="bullet-1"></a>
#
# This notebook guides you through the basic concepts to start working with Natural Language Processing, including how to set up your environment, create and analyze data sets, and work with data files.
#
# This notebook uses NLTK, a python framework for Natural Language Processing. Some knowledge of Python is recommended.
#
# If you are new to notebooks, here's how the user interface works: [Parts of a notebook](http://datascience.ibm.com/docs/content/analyze-data/parts-of-a-notebook.html)
#
# ## About Natural Language Processing
#
# Natural language processing (NLP) is a field of computer science, artificial intelligence and computational linguistics concerned with the interactions between computers and human (natural) languages, and, in particular, concerned with programming computers to fruitfully process large natural language corpora. Challenges in natural language processing frequently involve natural language understanding, natural language generation (frequently from formal, machine-readable logical forms), connecting language and machine perception, dialog systems, or some combination thereof.
#
# <img src='http://web.stanford.edu/class/cs224n/images/treeFrontSentiment.png' width="50%" height="50%"></img>
#
# ## NLP Methods
#
# - Automatic Summarization
# - Translations
# - Named Entity Recognition
# - Natural Language generation
# - Optical Character Recognition (OCR)
# - Part of Speech tagging (POS)
# - Parsing
# - Question Answering
# - Sentiment Analysis
# - Speech Recognition
# - Word sense disambiguation
# - Information Retrieval
# - Stemming
#
# ## Table of Contents
#
# 1. [Introduction](#bullet-1)<br/>
# <br/>
# 2. [Prerequisites](#bullet-2)<br/>
# <br/>
# 3. [Preprocessing](#bullet-3)<br/>
# 3.1 [Noise Removal](#bullet-4)<br/>
# 3.2 [Normalization](#bullet-5)<br/>
# 3.3 [Standardization](#bullet-6)<br/>
# <br/>
# 4. [Parsing](#bullet-7)<br/>
# 4.1 [Tokenization](#bullet-8)<br/>
# 4.2 [POS Tagging](#bullet-9)<br/>
# 4.3 [Word Sense](#bullet-10)<br/>
# <br/>
# 5. [Use Case: Quora Feature Engineering](#bullet-11)<br/>
# 5.1 [Introduction](#bullet-12)<br/>
# 5.2 [Data preview & pre-processing](#bullet-13)<br/>
# 5.3 [What is feature engineering?](#bullet-14)<br/>
# <br/>
# 6. [Syntax](#bullet-15)<br/>
# 6.1 [Basic string cleaning](#bullet-16)<br/>
# 6.2 [Simplify question pairs](#bullet-17)<br/>
# 6.3 [Measuring similarity](#bullet-18)<br/>
# <br/>
# 7. [Semantics](#bullet-19)<br/>
# 7.1 [Single word analysis](#bullet-20)<br/>
# 7.2 [Sentence analysis](#bullet-21)<br/>
# 7.3 [Weighted analysis](#bullet-22)<br/>
# 7.4 [Feature creation](#bullet-23)
#
# ## Prerequisites<a class="anchor" id="bullet-2"></a>
# We'll be working with Quora's first [public dataset](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs) later in the notebook.
#
# ### Load data
#
# ### <span style="color: red"> _User Input_</span>
#
# - Download Quora Dataset [here](https://ibm.box.com/shared/static/0zgbgec9n8zmunxvh0its1at53u34cpd.csv) and add to Project
# - Insert questions.csv to code below as Pandas Data Frame
# +
# The code was removed by DSX for sharing.
# -
# ### Cache Bucket Details for Output Later
# Let's capture our bucket details from our data load so we can save back to that location in the future.
#
# ### <span style="color: red"> _User Input_</span>
#
# - Copy and Paste Bucket Name (Portion after Bucket= in body definition above into bucket variable replacing $BUCKET_NAME)
# Define Bucket to Save Project Data
bucket = '$BUCKET_NAME'
# Name of Output File for Features
outfile = 'quora_features.csv'
# ### Install Library Dependencies
#
# This notebook has a few external library dependencies. We will install and load these here.
# Install Various Libraries Required
# !pip install fuzzywuzzy
# !pip install gensim
# !pip install nltk
# !pip install pyemd
# !pip install python-Levenshtein
# !pip install stemming
#
# #### Natural Language Toolkit
#
# Python's Natural Language Toolkit (NLTK) is a comprehensive NLP suite of offerings. It includes detailed Corpora for a variety of languages and uses. You can find more detailed information about the Corpora here; [NLTK Corpora](http://www.nltk.org/nltk_data/)
#
# *NLTK Requires Additional Configuration*
# +
# NLTK Import with Corpus Definition
import nltk
nltk.download()
print "\nNLTK Can Also Specify Corpus Manually"
print "Try nltk.download('popular') for yourselves."
# -
# ## Preprocessing<a class="anchor" id="bullet-3"></a>
#
# Text is messy data: various types of noise are present in it, and it is not readily analyzable without pre-processing. The entire process of cleaning and standardizing text, making it noise-free and ready for analysis, is known as text preprocessing.
#
# It predominantly comprises three steps:
#
# - Noise Removal
# - Normalization
# - Standardization
# ### Noise Removal<a class="anchor" id="bullet-4"></a>
#
# Any piece of text which is not relevant to the context of the data and the end-output can be specified as the noise.
#
# For example – language stopwords (commonly used words of a language – is, am, the, of, in etc), URLs or links, social media entities (mentions, hashtags), punctuations and industry specific words. This step deals with removal of all types of noisy entities present in the text.
#
# A general approach for noise removal is to prepare a dictionary of noisy entities, and iterate the text object by tokens (or by words), eliminating those tokens which are present in the noise dictionary.
#
# The following Python code illustrates this.
#
# #### Stopword Filtering
# +
noise_list = ["is", "a", "this", "..."]
def _remove_noise(input_text):
words = input_text.split()
noise_free_words = [word for word in words if word not in noise_list]
noise_free_text = " ".join(noise_free_words)
return noise_free_text
_remove_noise("this is a sample text")
# -
# ### Regex Filtering
#
# Another approach is to use regular expressions when dealing with special patterns of noise. We have explained regular expressions in detail in one of our previous articles. The following Python code removes a regex pattern from the input text:
# +
import re
def _remove_regex(input_text, regex_pattern):
urls = re.finditer(regex_pattern, input_text)
for i in urls:
input_text = re.sub(i.group().strip(), '', input_text)
return input_text
regex_pattern = r"#[\w]*"
_remove_regex("remove this #DSXRocks from tweet text", regex_pattern)
# -
# ### Normalization<a class="anchor" id="bullet-5"></a>
#
# Another type of textual noise comes from the multiple representations a single word can take.
#
# For example, "play", "player", "played", "plays" and "playing" are different variations of the word "play". Though they differ in form, contextually they are all similar. This step converts all the disparate forms of a word into their normalized form (also known as the lemma). Normalization is a pivotal step for feature engineering with text, as it converts high-dimensional features (N different features) into a low-dimensional space (1 feature), which is ideal for any ML model.
#
# The most common lexicon normalization practices are :
#
# - Stemming: a rudimentary rule-based process of stripping suffixes ("ing", "ly", "es", "s", etc.) from a word.
# - Lemmatization: an organized, step-by-step procedure for obtaining the root form of a word; it makes use of vocabulary (dictionary importance of words) and morphological analysis (word structure and grammar relations).
#
# Below is the sample code that performs lemmatization and stemming using python’s popular library – NLTK.
# +
from nltk.stem.wordnet import WordNetLemmatizer
lem = WordNetLemmatizer()
from nltk.stem.porter import PorterStemmer
stem = PorterStemmer()
word = "multiplying"
lem.lemmatize(word, "v")
stem.stem(word)
# -
# ### Standardization<a class="anchor" id="bullet-6"></a>
#
# Text data often contains words or phrases which are not present in any standard lexical dictionaries. These pieces are not recognized by search engines and models.
#
# Some examples are acronyms, hashtags with attached words, and colloquial slang. With the help of regular expressions and manually prepared data dictionaries, this type of noise can be fixed; the code below uses a dictionary lookup method to replace social media slang in a text.
translation_dict = {'rt':'Retweet', 'dm':'direct message', "awsm" : "awesome", "luv" :"love"}
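# Applying such a dictionary is a simple token-by-token lookup; a minimal sketch (the example tweet and the `standardize_words` helper are illustrative):

```python
def standardize_words(text, lookup):
    # Replace each token with its standard form when a dictionary entry exists
    return " ".join(lookup.get(w.lower(), w) for w in text.split())

slang_lookup = {'rt': 'Retweet', 'dm': 'direct message', 'awsm': 'awesome', 'luv': 'love'}
print(standardize_words("rt this awsm tweet", slang_lookup))  # → Retweet this awesome tweet
```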
# ## Parsing<a class="anchor" id="bullet-7"></a>
#
# Syntactical parsing involves the analysis of words in the sentence for grammar and their arrangement in a manner that shows the relationships among the words. Dependency Grammar and Part of Speech tags are the important attributes of text syntactics.
#
# Dependency Trees – Sentences are composed of some words sewed together. The relationship among the words in a sentence is determined by the basic dependency grammar. Dependency grammar is a class of syntactic text analysis that deals with (labeled) asymmetrical binary relations between two lexical items (words). Every relation can be represented in the form of a triplet (relation, governor, dependent). For example: consider the sentence – “Bills on ports and immigration were submitted by <NAME>, Republican of Kansas.” The relationship among the words can be observed in the form of a tree representation as shown:
#
# <img src='https://s3-ap-south-1.amazonaws.com/av-blog-media/wp-content/uploads/2017/01/11181146/image-2.png' width="50%" height="50%"></img>
#
# The tree shows that “submitted” is the root word of this sentence, linked by two sub-trees (subject and object subtrees). Each subtree is itself a dependency tree, with relations such as – (“Bills” <-> “ports” <by> “preposition” relation), (“ports” <-> “immigration” <by> “conjugation” relation).
#
# This type of tree, when parsed recursively in a top-down manner, gives grammar-relation triplets as output, which can be used as features for many NLP problems such as entity-wise sentiment analysis, actor and entity identification, and text classification. The Python wrapper StanfordCoreNLP (by the Stanford NLP Group; a separate commercial license is available) and NLTK dependency grammars can be used to generate dependency trees.
# ### Tokenization<a class="anchor" id="bullet-8"></a>
#
# The process of splitting text into smaller pieces or units. We want to tokenize text into sentences, and sentences into tokens. NLTK provides a tokenization module, nltk.tokenize.
# +
from nltk import sent_tokenize, word_tokenize
from IPython.display import Image
sentences = sent_tokenize('IBM Data Science Experience (DSX) offers a wealth of functionality to any software ' \
'developer, especially those interested in data science. An important part of that ' \
'functionality is the ability to use Notebooks, which are a convenient and intuitive ' \
'way to compartmentalize different segments of a code base. The IBM Watson Data ' \
'Platform (WDP) Integration team manages system verification defects for various ' \
'services and utilizes GitHub’s “issues” feature to keep track of each defect’s ' \
'status, details, and assignments. Currently, there is no way for us to quantify the ' \
'team’s activity each week. How many defects are being opened, closed, and worked on ' \
'each week for each service? How severe are those defects?')
sentences
# -
tokens = word_tokenize(sentences[2])
tokens
# ### Part of Speech Tagging<a class="anchor" id="bullet-9"></a>
#
# Apart from the grammar relations, every word in a sentence is also associated with a part-of-speech (POS) tag (nouns, verbs, adjectives, adverbs, etc.). The POS tags define the usage and function of a word in the sentence. Here is a list of all possible POS tags defined by the University of Pennsylvania (the Penn Treebank tag set). The following code uses NLTK to perform POS-tagging annotation on input text. (NLTK provides several implementations; the default is the perceptron tagger.)
# +
from nltk import pos_tag
#this is a Classifier, given a token assign a class
#pos_tag Already defined in the library. We can train our own.
tags = pos_tag(tokens)
text = "I am quickly using Data Science Experience at IBM in Manhattan for natural language processing."
tokens = word_tokenize(text)
print(pos_tag(tokens))
# Let's apply this to our sample text from our website.
tags
# -
# Part of Speech tagging is used for many important purposes in NLP:
#
# #### Word Sense Disambiguation
#
# Some language words have multiple meanings according to their usage. For example, in the two sentences below:
#
# I. “Please book my flight for Delhi”
#
# II. “I am going to read this book in the flight”
#
# “Book” is used in different contexts, and the part-of-speech tag differs between the two cases. In sentence I, the word “book” is used as a verb, while in II it is used as a noun. (The Lesk algorithm is also used for similar purposes.)
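# The Lesk idea can be sketched as a gloss-overlap count; the glosses below are hand-written stand-ins for illustration, not actual WordNet data:

```python
def simplified_lesk(context_words, senses):
    # Choose the sense whose gloss shares the most words with the context
    best_sense, best_overlap = None, -1
    for sense, gloss in senses.items():
        overlap = len(set(context_words) & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

senses = {
    'book_verb': 'reserve or arrange a seat on a flight',
    'book_noun': 'a written printed work with pages that you read in order',
}
print(simplified_lesk(['please', 'book', 'my', 'flight'], senses))               # → book_verb
print(simplified_lesk(['read', 'this', 'book', 'in', 'the', 'flight'], senses))  # → book_noun
```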
#
# #### Improving word-based features
#
# A learning model could learn different contexts of a word when words are used as features; however, if the part-of-speech tag is linked with them, the context is preserved, making stronger features. For example:
#
# Sentence -“book my flight, I will read this book”
#
# Tokens – (“book”, 2), (“my”, 1), (“flight”, 1), (“I”, 1), (“will”, 1), (“read”, 1), (“this”, 1)
#
# Tokens with POS – (“book_VB”, 1), (“my_PRP$”, 1), (“flight_NN”, 1), (“I_PRP”, 1), (“will_MD”, 1), (“read_VB”, 1), (“this_DT”, 1), (“book_NN”, 1)
#
# #### Normalization and Lemmatization
#
# POS tags are the basis of lemmatization process for converting a word to its base form (lemma).
#
# #### Efficient Stopword Removal
#
# POS tags are also useful in efficient removal of stopwords.
#
# Some tags consistently mark the low-frequency / less important words of a language, for example: (IN – “within”, “upon”, “except”), (CD – “one”, “two”, “hundred”), (MD – “may”, “must”, etc.)
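# As a sketch, removal by tag works directly on an already-tagged token list (the tag set follows the Penn Treebank convention; the drop set below is illustrative):

```python
def filter_by_pos(tagged_tokens, drop_tags=frozenset({'IN', 'CD', 'MD', 'DT'})):
    # Keep only tokens whose POS tag is not in the drop set
    return [(word, tag) for word, tag in tagged_tokens if tag not in drop_tags]

tagged = [('one', 'CD'), ('must', 'MD'), ('read', 'VB'), ('the', 'DT'), ('book', 'NN')]
print(filter_by_pos(tagged))  # → [('read', 'VB'), ('book', 'NN')]
```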
#
#
# ### Word Senses<a class="anchor" id="bullet-10"></a>
#
# In linguistics, a word sense is one of the meanings of a word. Until now, we have worked with tokens and POS tags. For instance, in "the man sat down on the bench near the river.", the token [bench] could mean a man-made object that people sit on, or the natural ground at the river's edge.
#
# - WordNet: a semantic graph for words. NLTK provides an interface to its API.
#
# <img src="https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcSFZ2l8g_3316qek21ZEIkTS0WIYs8-lfTvXtO3YGWHEGpdDiMG">
#
# Let's see some functions to handle meanings in tokens. WordNet provides the concept of synsets, as semantic units for tokens.
# +
from nltk.corpus import wordnet as wn #loading wordnet module
wn.synsets('human')
# -
print(wn.synsets('human')[0].definition())  # definition() is a method in NLTK 3
print(wn.synsets('human')[1].definition())
human = wn.synsets('Human',pos=wn.NOUN)[0]
human
human.hypernyms()
human.hyponyms()
bike = wn.synsets('bicycle')[0]
bike
girl = wn.synsets('girl')[1]
girl
bike.wup_similarity(human)
girl.wup_similarity(human)
# ## Quora Feature engineering for semantic analysis<a class="anchor" id="bullet-11"></a>
#
# These techniques will be explained and applied in context of feature engineering a Quora dataset.
#
# I hope that by the end of this notebook, you'll gain familiarity with standard practices as well as recent methods used for NLP tasks. The ever-evolving field has a range of applications from [information retrieval](https://cloud.google.com/natural-language/) to [AI](https://www.ibm.com/developerworks/library/os-ind-watson/), and is well worth a [deeper](https://www.ibm.com/watson/developercloud/doc/natural-language-understanding/index.html) [dive](https://www.ted.com/talks/deb_roy_the_birth_of_a_word).
#
#
# ## 1.0 Introduction<a class="anchor" id="bullet-12"></a>
#
# Quora is a knowledge sharing platform that functions simply on questions and answers. Their mission, plainly stated: "We want the Quora answer to be the definitive answer for everybody forever." In order to ensure the quality of these answers, Quora must protect the integrity of the questions. They accomplish this by adhering to a principle that each logically distinct question should reside on its own page. Unfortunately, the English language is a fickle thing, and intention can vary significantly with subtle shifts in syntactic structure.
#
# Our goal is to create features for syntactically similar, but semantically distinct pairs of strings. We'll be working with Quora's first [public dataset](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs).
# ### 1.1 Data preview & pre-processing<a class="anchor" id="bullet-13"></a>
# The Quora dataset is simple, containing columns for question strings, unique IDs, and a binary variable indicating whether the pair is logically distinct.
# +
from IPython.display import display
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in recent pandas
# df_data_1 is assumed to have been loaded earlier from the Quora question-pairs CSV
# checking for missing values
df_data_1.isnull().any()
# drop rows with missing values
df_data_1 = df_data_1.dropna()
print(df_data_1.shape)
display(df_data_1[14:19])
# -
# ### 1.2 What is feature engineering?<a class="anchor" id="bullet-14"></a>
#
# **Feature engineering** is the practice of generating data attributes that are useful for prediction. Although the task is loosely defined and depends heavily on the domain in question, it is a key process for optimizing model building. The goal is to find information which best describes the target to be predicted.
#
# In our case, the target is logical distinction - will one answer suffice for each pair of questions? This target is described by the binary is_duplicate label in the dataset. We will need to process the Quora data to create features that capture the structure and semantics of each question. This will be accomplished by using natural language processing (NLP) methods on the strings.
# +
import nltk
from nltk.tokenize import word_tokenize
nltk.download('punkt')
teststring = df_data_1['question1'][12]
tokens = word_tokenize(df_data_1['question1'][12])
print(teststring)
print(tokens)
# -
# ### 1.3 Basic String Cleaning<a class="anchor" id="bullet-15"></a>
# **Stopwords**<br/>
# The NLTK library includes a set of English language stopwords (e.g. I, you, this, that), which we'll remove from the list of word tokens.
# +
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
stop_words += ['?'] # adding ? character to stop words, since we are working with a corpus of questions
filtered_tokens = [t for t in tokens if not t in stop_words]
print(filtered_tokens)
# -
# **Stemming**<br/>
# Removes prefixes and suffixes to extract the **stem** of a word, which may be derived from a **root**. For example, the word "destabilized" has the stem "destabilize", but the root "stabil-". The Porter stemming algorithm is often used in practice to handle this task.
# +
from stemming.porter2 import stem
stem_tokens = [stem(t) for t in filtered_tokens]
print(stem_tokens)
# -
# ### 2.4 Simplify question pairs <a class="anchor" id="bullet-15"></a>
# We will combine the string cleaning methods into a function, and apply that across both question columns in the dataset. To prepare for basic comparison, the function will also convert the words to lowercase and sort them alphabetically.
# +
import string
def simplify(s):
    s = str(s).lower()  # Python 3 strings are already Unicode; no decode needed
    tokens = word_tokenize(s)
    stop_words = stopwords.words('english')
    stop_words += string.punctuation
    filtered_tokens = [t for t in tokens if t not in stop_words]
    stem_tokens = [stem(t) for t in filtered_tokens]
    sort_tokens = sorted(stem_tokens)
    return " ".join(sort_tokens)  # an empty token list joins to ""
df_data_1['q1_tokens'] = df_data_1['question1'].map(simplify)
df_data_1['q2_tokens'] = df_data_1['question2'].map(simplify)
simplifydf=df_data_1[['question1','q1_tokens','question2','q2_tokens','is_duplicate']]
display(simplifydf[12:13])
# -
# ### 2.5 Measuring similarity<a class="anchor" id="bullet-16"></a>
#
# The simplest way to compare the difference between two strings is by **edit distance**.
#
# **Levenshtein distance**: calculates edit distance by counting the number of operations (add, replace, or delete) that are required to transform one string into another.
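# The computation behind edit distance is a small dynamic program; this pure-Python sketch mirrors what the Levenshtein library computes in C:

```python
def levenshtein(s, t):
    # prev[j] holds the edit distance between the current prefix of s and t[:j]
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            cost = 0 if cs == ct else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein('kitten', 'sitting'))  # → 3
```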
#
# **Token sort ratio**: A method from the [FuzzyWuzzy library](http://chairnerd.seatgeek.com/fuzzywuzzy-fuzzy-string-matching-in-python/) that uses Levenshtein distance to get the proportion of common tokens between two strings. The score is normalized from 0-100 for easier interpretation.
#
# We'll create our first two features with these methods.
# +
from Levenshtein import distance
from fuzzywuzzy import fuzz
df_data_1['edit_distance'] = df_data_1.apply(lambda x: distance(x['q1_tokens'], x['q2_tokens']), axis=1)
df_data_1['in_common'] = df_data_1.apply(lambda x: fuzz.token_sort_ratio(x['q1_tokens'], x['q2_tokens']), axis=1)
syntaxdf=df_data_1[['question1','q1_tokens','question2','q2_tokens','edit_distance','in_common','is_duplicate']]
display(syntaxdf[508:510]) # example
# -
# Clearly the edit distance or proportion of common tokens is not sufficient to predict duplicate intention. For example, question pair 508 is duplicate, but has a larger edit distance and smaller proportion of common tokens than pair 509.
#
# Let's try to improve on our features by working with semantic methods.
#
#
# ## 3.0 Semantics<a class="anchor" id="bullet-17"></a>
#
# To a machine, words look like characters stored next to one another. Syntax methods allow us to compare words by manipulating them mathematically - counting the number of characters, measuring the amount of work needed to turn one set of characters into another.
#
# Semantic analysis strives to represent how each sequence of characters is related to any other sequence of characters. These relationships can be derived from large bodies of language as a separate machine learning task. A **document** is the group of words in question. In our case, each question from the Quora corpus is one document.
#
# To start, we'll create lists of word tokens (filtered for stopwords, but not stemmed), to support the methods we'll use in this section.
# +
def word_set(s,t,q):
    s = str(s).lower()
    t = str(t).lower()
s_tokens, t_tokens = word_tokenize(s), word_tokenize(t)
stop_words = stopwords.words('english')
stop_words += string.punctuation
s_tokens = [x for x in s_tokens if not x in stop_words]
t_tokens = [x for x in t_tokens if not x in stop_words]
s_temp = set(s_tokens)
t_temp = set(t_tokens)
s_distinct = [x for x in s_tokens if x not in t_temp]
t_distinct = [x for x in t_tokens if x not in s_temp]
if q == "q1_words":
return s_tokens
elif q == "q2_words":
return t_tokens
elif q == "q1_distinct":
return s_distinct
elif q == "q2_distinct":
return t_distinct
df_data_1['q1_words'] = df_data_1.apply(lambda x: word_set(x['question1'], x['question2'],"q1_words"), axis=1)
df_data_1['q2_words'] = df_data_1.apply(lambda x: word_set(x['question1'], x['question2'],"q2_words"), axis=1)
wordsdf=df_data_1[['question1','q1_words','question2','q2_words','is_duplicate']]
display(wordsdf[508:510])
# -
# ### 3.1 Single word analysis<a class="anchor" id="bullet-18"></a>
#
# **Word embeddings**<br/>
# This method represents individual words as vectors, and semantic relationships as the distance between vectors. The more related words are, the closer they should exist in vector space. Word embeddings come from the field of [distributional semantics](https://en.wikipedia.org/wiki/Distributional_semantics), which suggests that words are semantically related if they are frequently used in similar contexts (i.e. they are often surrounded by the same words).
#
# For example, 'Canada' and 'Toronto' should exist closer together in the vector space than 'Canada' and 'Camara' (which would be closer in edit distance).
#
# **Word2Vec**<br/>
# The mapping of words to vectors is in itself the result of a machine learning algorithm. Developed by Google in 2013, the Word2Vec algorithm is a neural network that takes a large corpus as training data, and produces vector co-ordinates for each word by the word embedding concept.
#
# We will be using a pre-trained model from Google that was created from over 100 billion words of Google News text. The model needs to be [downloaded](https://code.google.com/archive/p/word2vec/) and handled using the [gensim library](https://radimrehurek.com/gensim/) for word vectors. The model is a dictionary that maps every word to its corresponding vector representation, a 300-dimensional array of coordinates.
#
# **Comparing word vectors**<br/>
# To compare word vectors, we can use cosine similarity. As the name suggests, this metric measures similarity by taking the cosine of the angle between vectors. The score ranges from −1 to 1; the closer it is to 1, the more semantically related the words are.
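# Cosine similarity is just a normalized dot product; a minimal NumPy sketch (the 3-dimensional vectors are toy stand-ins for 300-dimensional embeddings):

```python
import numpy as np

def cosine(u, v):
    # dot product divided by the product of the vector norms
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 0.0, 1.0])
c = np.array([0.0, 1.0, 0.0])
print(round(cosine(a, b), 6))  # → 1.0 (identical direction)
print(round(cosine(a, c), 6))  # → 0.0 (orthogonal)
```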
# #### Build Word2Vec Downloader Script
#
# Word2Vec Model needs to be fetched. It's a rather large file so download may take some time.
#
# ### <span style="color: red"> _This Download Can Take Significant Time!_ </span>
#
# **We Can Save Time By Skipping the Sections Below and Downloading the Completed Output Below**
#
# Upload the finished results to your project data.
#
# [Completed Analysis](https://ibm.box.com/shared/static/kh5h4h4qfx276p7i13v9637msjv6i444.csv)
# !wget http://bit.ly/2iU22lc
# !mv 2iU22lc GoogleNews-vectors-negative300.bin.gz
# +
# download word2vec google model
import gensim
model = gensim.models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)
# using gensim built-in similarity function for examples
print("Cosine of angle between Canada, Toronto:")
print(model.similarity('Canada', 'Toronto'))
print("\nCosine of angle between Canada, Camara:")
print(model.similarity('Canada', 'Camara'))
# -
# ### 3.2 Sentence analysis<a class="anchor" id="bullet-19"></a>
#
# Since the model works like a dictionary, it can only give us vector representations for single words. There are two ways to get a vector representation of a sentence:
#
# 1. Train a model on ordered words (e.g. sentences or phrases). Since word order is included during training, the resulting vectors will preserve the relationships between words. I won't be training a new model in this notebook, as it is computationally heavy, but here are some [resources](https://rare-technologies.com/doc2vec-tutorial) for the curious.
# <br/>
# <br/>
# 2. Convert a sentence to a set of words, and get the corresponding set of vectors. Averaging the vector set (summing and dividing by total vector length) will give us a single vector that represents that particular set of words. This method can only give a 'bag of words' representation - i.e. word order is not captured.
#
# *Comment: I think that getting new embeddings specific to a corpus is the best-performing method in practice. For the purpose of illustrating NLP problem-solving, I will do my best with bag-of-words methods.*
#
# The following function implements the second method to get the average embedded vector from a set of words.
# +
import numpy as np
def vectorize(words):
    # Sum the vectors of all in-vocabulary words, then L2-normalize the sum
    V = np.zeros(300)
    for w in words:
        try:
            V = np.add(V, model[w])
        except KeyError:
            continue  # skip out-of-vocabulary words
    norm = np.sqrt((V ** 2).sum())
    return V / norm if norm != 0 else V
# -
# Let's see how the average vectors compare for question pair 508:
# +
from sklearn.metrics.pairwise import cosine_similarity
sent1_q508 = wordsdf['q1_words'][508]
sent2_q508 = wordsdf['q2_words'][508]
vec1_q508 = vectorize(sent1_q508).reshape(1,-1)
vec2_q508 = vectorize(sent2_q508).reshape(1,-1)
display(wordsdf[508:509])
print("\nCosine similarity of [best, way, learn, algebra] and [learn, algebra, 1, fast]:")
print(cosine_similarity(vec1_q508, vec2_q508)[0][0])
# -
# How do the averaged vectors represent the cosine similarities of its components?
#
# Intuitively, if our question pair differs by a closely related word (best vs. ideal), we get a larger cosine similarity; if it differs by a very distinct word (algebra vs. juggling), we get a smaller one.
# +
# bag of words: the same set of words in a different order gives identical similarity
print("\nCosine similarity of [best, way, learn, algebra] and [learn, algebra, best, way]:")
print(model.n_similarity(['best','way','learn','algebra'], ['learn','algebra','best','way']))
# difference is a semantically similar word
print("\nCosine similarity of [best, way, learn, algebra] and [ideal, way, learn, algebra]:")
print(model.n_similarity(['best','way','learn','algebra'], ['ideal','way','learn','algebra']))
# difference is not semantically similar
print("\nCosine similarity of [best, way, learn, algebra] and [best, way, learn, juggling]:")
print(model.n_similarity(['best','way','learn','algebra'], ['best','way','learn','juggling']))
# -
# **Word mover's distance** <br/>
# An implementation of Earth mover's distance for natural language processing problems by Kusner et al. <a href="#footnote-1"><sup>[1]</sup></a>
#
# WM distance is an approach that combines the ideas of edit distance with vector representation. It measures the work required to transform one set of vectors into another. Instead of counting edit operations, we use distance between word vectors - how far one vector would have to move to occupy the same spot as the second.
#
# How Word Mover's Distance is calculated:
# <img src="https://github.com/krondor/data-science-pot/blob/master/wmd.png?raw=TRUE" width="400" height="400"/>
# 1. All the words in each set are paired off with each other
# 2. Calculate the distance between each pair (instead of cosine similarity, Euclidean distance is used here)
# 3. Sum the distances between pairs with minimum distances
#
# If the two sets do not have the same number of words, the problem becomes an optimization of another measurement called **flow**.
#
# 1. The flow is equal to 1/(number of words in the set), so words from the smaller set have a larger flow<br/>
# (words on the bottom have a flow of 0.33, while words on the top have a flow of 0.25)
# 2. Extra flow gets attributed to the next most similar words<br/>
# (see the arrows drawn from the bottom words to more than one word in the top row)
# 3. The optimization problem identifies the pairs with minimum distances by solving for minimum flow.
#
# We can use the WM distance method directly from gensim.
# +
from pyemd import emd
print("\nWM distance between [best, way, learn, algebra] and [learn, algebra, 1, fast]:")
print(model.wmdistance(sent1_q508, sent2_q508))
# -
# ### 3.3 Weighted analysis<a class="anchor" id="bullet-20"></a>
#
# In the example below, we can see that the words are the same except for the name of the country in question (Canada vs. Japan). However, the country name makes all the semantic difference, which we fail to capture using only cosine similarity or WM distance.
# +
display(wordsdf[14:15])
sent1_q14 = wordsdf['q1_words'][14]
sent2_q14 = wordsdf['q2_words'][14]
print("\nCosine angle:")
print(model.n_similarity(sent1_q14, sent2_q14))
print("\nWM distance:")
print(model.wmdistance(sent1_q14, sent2_q14))
# -
# **Weighing uncommon words**<br/>
# Let's assume that 'rare' words are more likely to be semantically significant. We can represent this at the word vector level by multiplying those words by a numerical weight.
#
# **Term frequency-inverse document frequency** (tf-idf) is a method that assigns weights to word vectors depending on how common they are to a document. The frequency of a word is measured in two ways:
#
# * How many documents contain the word (N)
# * How many times a word appears in one document (f)
#
# The weight is calculated from the frequency as log(N/f), so the less frequently a word appears in some documents, the higher its weight.
#
# This method can be implemented via sci-kit learn's built in [Tf-idf Vectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html), which generates weights given a corpus. To save memory and computing time, I decided to simplify the premise of tf-idf for use on pairs of similar questions.
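# As a toy illustration of the weighting idea, here is the standard idf form log(N_total / df) computed in pure Python on a made-up corpus (the corpus data is illustrative):

```python
import math

corpus = [['best', 'way', 'learn', 'algebra'],
          ['learn', 'algebra', 'fast'],
          ['best', 'way', 'learn', 'juggling']]

def idf(word, docs):
    # log(total documents / documents containing the word)
    df = sum(1 for d in docs if word in d)
    return math.log(len(docs) / df)

print(idf('learn', corpus))               # → 0.0 (appears everywhere: no weight)
print(round(idf('juggling', corpus), 3))  # → 1.099 (rare word: high weight)
```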
#
# (1) Assume that distinct words are the most important in telling the difference between question pairs.
# +
# get list of distinct words for each question
df_data_1['q1_distinct'] = df_data_1.apply(lambda x: word_set(x['question1'], x['question2'],"q1_distinct"), axis=1)
df_data_1['q2_distinct'] = df_data_1.apply(lambda x: word_set(x['question1'], x['question2'],"q2_distinct"), axis=1)
distinctdf=df_data_1[['question1','q1_words','q1_distinct','question2','q2_words','q2_distinct','is_duplicate']]
display(distinctdf[14:15])
# -
# It might be useful to get features for the cosine similarity and WM distance for distinct words.
# +
distinct1 = distinctdf['q1_distinct'][14]
distinct2 = distinctdf['q2_distinct'][14]
distinct_vec1 = vectorize(distinct1).reshape(1,-1)
distinct_vec2 = vectorize(distinct2).reshape(1,-1)
print("Cosine similarity between distinct vectors ({0}, {1}):".format(distinct1[0], distinct2[0]))
print(cosine_similarity(distinct_vec1, distinct_vec2)[0][0])
print("\nWM distance between distinct vectors ({0}, {1}):".format(distinct1[0], distinct2[0]))
print(model.wmdistance(distinct1, distinct2))
# -
# (2) Distinct words only appear in one of the two questions, so we can take N = 1. We assumed that distinct words are important, so we assign the distinct words a small frequency of 1/(number of words in the question) for a larger weight.
# weight helper used alongside the vectorize function
def get_weight(words):
    # log(N/f) with N = 1 and f = 1/len(words) reduces to log(len(words))
    n = len(words)
    return np.log(n) if n != 0 else 1
# (3) Generate an array containing the weights for every question in the dataset.
# +
# empty arrays
q1_weights = np.zeros((df_data_1.shape[0],300))
q2_weights = np.zeros((df_data_1.shape[0],300))
# fill arrays with weights for each question
for i, q in enumerate(df_data_1.q1_words.values):
q1_weights[i, :] = get_weight(q)
for i, q in enumerate(df_data_1.q2_words.values):
q2_weights[i, :] = get_weight(q)
# -
# (4) Calculate the average weighted vectors. We can see how weighing distinct words translates to reduced cosine similarity.
# +
avg_vec1 = vectorize(sent1_q14).reshape(1,-1)
avg_vec2 = vectorize(sent2_q14).reshape(1,-1)
print("\nCosine similarity between averaged question vectors:")
print(cosine_similarity(avg_vec1, avg_vec2)[0][0])
w_distinct_vec1 = distinct_vec1 * q1_weights[14]
w_distinct_vec2 = distinct_vec2 * q2_weights[14]
# replace the unweighted distinct-word contribution with the weighted one;
# the original np.add(a, -b, c) passed c as the `out` argument, dropping the weighted term
avg_weight_distinct_vec1 = avg_vec1 - distinct_vec1 + w_distinct_vec1
avg_weight_distinct_vec2 = avg_vec2 - distinct_vec2 + w_distinct_vec2
print("\nCosine similarity between weighted question vectors:")
print(cosine_similarity(avg_weight_distinct_vec1, avg_weight_distinct_vec2)[0][0])
# -
# ### 3.4 Feature creation<a class="anchor" id="bullet-21"></a> <a href="#footnote-1"><sup>[2]</sup></a>
# We can apply these methods to our dataset to create the following features:
#
# * Word mover's distance between sentence sets
# * Word mover's distance between distinct word sets
# * Angle between averaged sentence vectors
# * Angle between averaged distinct word vectors
# * Angle between weighted sentence vectors
# +
# word mover's distance between sentence sets
df_data_1['wm_dist_words'] = df_data_1.apply(lambda x: model.wmdistance(x['q1_words'], x['q2_words']), axis=1)
# word mover's distance between distinct sets
df_data_1['wm_dist_distinct'] = df_data_1.apply(lambda x: model.wmdistance(x['q1_distinct'], x['q2_distinct']), axis=1)
# angle between averaged sentence vectors
q1_avg_vectors = np.zeros((df_data_1.shape[0], 300))
q2_avg_vectors = np.zeros((df_data_1.shape[0], 300))
for i, q in enumerate(df_data_1.q1_words.values):
q1_avg_vectors[i, :] = vectorize(q)
for i, q in enumerate(df_data_1.q2_words.values):
q2_avg_vectors[i, :] = vectorize(q)
df_data_1['cos_angle_words'] = [cosine_similarity(x.reshape(1,-1), y.reshape(1,-1))[0][0]
for (x, y) in zip(np.nan_to_num(q1_avg_vectors),
np.nan_to_num(q2_avg_vectors))]
# angle between averaged distinct sentence vectors
q1_dist_vectors = np.zeros((df_data_1.shape[0], 300))
q2_dist_vectors = np.zeros((df_data_1.shape[0], 300))
for i, q in enumerate(df_data_1.q1_distinct.values):
q1_dist_vectors[i, :] = vectorize(q)
for i, q in enumerate(df_data_1.q2_distinct.values):
q2_dist_vectors[i, :] = vectorize(q)
df_data_1['cos_angle_distinct'] = [cosine_similarity(x.reshape(1,-1), y.reshape(1,-1))[0][0]
for (x, y) in zip(np.nan_to_num(q1_dist_vectors),
np.nan_to_num(q2_dist_vectors))]
# get array of weighted distinct vectors
q1_weight_distinct_vec = np.multiply(q1_dist_vectors,q1_weights)
q2_weight_distinct_vec = np.multiply(q2_dist_vectors,q2_weights)
# get sentence vectors with weights (replace unweighted distinct vectors with weighted ones)
q1_avg_weight_vectors = q1_avg_vectors - q1_dist_vectors + q1_weight_distinct_vec
q2_avg_weight_vectors = q2_avg_vectors - q2_dist_vectors + q2_weight_distinct_vec
df_data_1['cos_angle_weighted'] = [cosine_similarity(x.reshape(1,-1), y.reshape(1,-1))[0][0]
for (x, y) in zip(np.nan_to_num(q1_avg_weight_vectors),
np.nan_to_num(q2_avg_weight_vectors))]
df_data_1[14:15]
# -
# You can now export the feature engineered dataset for use with your preferred model!
# +
from io import StringIO  # Python 3: StringIO lives in the io module
featuredf = df_data_1.drop(['q1_tokens','q2_tokens','q1_words','q2_words','q1_distinct','q2_distinct'], axis=1)
# Create StringIO object to stream to Object Storage
csv_buffer = StringIO()
featuredf.to_csv(csv_buffer, index=False)
client_01da3b8d07aa40ca85ec5cee0637167f.put_object(Body=csv_buffer.getvalue(), Bucket=bucket, Key=outfile)
# -
# <NAME> | April 2017
#
# <NAME> | November 2017
#
# ## Further reading
#
# * Follow [this tutorial](http://nbviewer.jupyter.org/gist/nllho/4496a06e2bec93f06858851b5d822298) to build an XGBoost classifier, and make predictions using our new features
# * Try [Doc2Vec](https://rare-technologies.com/doc2vec-tutorial) to train a model for sentences or phrases
# * Try [Tf-idf Vectorizer](http://www.markhneedham.com/blog/2015/02/15/pythonscikit-learn-calculating-tfidf-on-how-i-met-your-mother-transcripts/) to generate specific weights based on word frequency in a corpus
#
#
# ## References
#
# <p id="footnote-1"><sup>[1]</sup> <NAME>. and <NAME>. and <NAME>. and <NAME>. (2015) [From Word Embeddings to Document Distances](http://proceedings.mlr.press/v37/kusnerb15.pdf)
#
# <p id="footnote-1"><sup>[2]</sup> <NAME>. (April 2017) [Is that a duplicate Quora Question?](https://www.linkedin.com/pulse/duplicate-quora-question-abhishek-thakur)
| Lab 2A - NLP and Feature Engineering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#Imports
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2,preprocess_input
from tensorflow.keras.layers import Input,GlobalMaxPooling2D,Dense
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import img_to_array,load_img
import json
import numpy as np
import cv2
from cv2 import resize
from os import path, listdir
import praw,requests,re
import time
import psaw
import datetime as dt
import os
import sys
# + pycharm={"name": "#%%\n"}
#Functions
def compare2images(original,duplicate):
    if original is None or duplicate is None:
        return True  # treat empty (unreadable) pictures as duplicates so they are deleted
if original.shape == duplicate.shape:
#print("The images have same size and channels")
difference = cv2.subtract(original, duplicate)
b, g, r = cv2.split(difference)
if cv2.countNonZero(b) == 0 and cv2.countNonZero(g) == 0 and cv2.countNonZero(r) == 0:
return True
else:
return False
else:
return False
def submissions_pushshift_praw(subreddit, start=None, end=None, limit=20000, extra_query=""):
"""
A simple function that returns a list of PRAW submission objects during a particular period from a defined sub.
This function serves as a replacement for the now deprecated PRAW `submissions()` method.
:param subreddit: A subreddit name to fetch submissions from.
:param start: A Unix time integer. Posts fetched will be AFTER this time. (default: None)
:param end: A Unix time integer. Posts fetched will be BEFORE this time. (default: None)
    :param limit: There needs to be a defined limit of results (default: 20000), or Pushshift will return only 25.
:param extra_query: A query string is optional. If an extra_query string is not supplied,
the function will just grab everything from the defined time period. (default: empty string)
    Submissions are returned sorted by score, ascending (per the query below).
For more information on PRAW, see: https://github.com/praw-dev/praw
For more information on Pushshift, see: https://github.com/pushshift/api
"""
matching_praw_submissions = []
# Default time values if none are defined (credit to u/bboe's PRAW `submissions()` for this section)
utc_offset = 28800
now = int(time.time())
start = max(int(start) + utc_offset if start else 0, 0)
end = min(int(end) if end else now, now) + utc_offset
# Format our search link properly.
search_link = ('https://api.pushshift.io/reddit/submission/search/'
'?subreddit={}&after={}&before={}&sort_type=score&sort=asc&limit={}&q={}')
search_link = search_link.format(subreddit, start, end, limit, extra_query)
# Get the data from Pushshift as JSON.
retrieved_data = requests.get(search_link)
returned_submissions = retrieved_data.json()['data']
# Iterate over the returned submissions to convert them to PRAW submission objects.
for submission in returned_submissions:
# Take the ID, fetch the PRAW submission object, and append to our list
praw_submission = reddit.submission(id=submission['id'])
matching_praw_submissions.append(praw_submission)
# Return all PRAW submissions that were obtained.
return matching_praw_submissions
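# + pycharm={"name": "#%%\n"}
# Usage sketch (ours, not from the original code): `start` and `end` are Unix
# timestamps, which the datetime module produces directly.
start_ts = int(dt.datetime(2018, 1, 1).timestamp())
end_ts = int(dt.datetime(2018, 2, 1).timestamp())
assert start_ts < end_ts
# posts = submissions_pushshift_praw('art', start=start_ts, end=end_ts, limit=500)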
# + pycharm={"name": "#%%\n"}
with open('config.json') as config_file:
config = json.load(config_file)['keys']
# Sign into Reddit using API Key
reddit = praw.Reddit(user_agent="Downloading images from r/art for a machine learning project",
client_id=config['client_id'],
client_secret=config['client_secret'],
username=config['username'],
password=config['password'])
# + pycharm={"name": "#%% Downloading pictures from Reddit r/art using PSAW and PRAW\n"}
#187mb for 200 pics, approx 18.7gb for 20000
#Relatively arbitrary start date, representative of modern times
Jan12018 = int(dt.datetime(2018,1,1).timestamp())
#Pass a PRAW instances so that scores are accurate
api = psaw.PushshiftAPI(reddit)
n = 30000
print("Looking for posts using Pushshift...")
posts = list(api.search_submissions(after = Jan12018, subreddit='art', limit = n*10))
print(f"Number of posts found: {len(posts)}")
files=[]
counter = 0
#Some images are deleted. Load a template and don't include files that are deleted.
for post in posts:
counter +=1
sys.stdout.write('\r')
sys.stdout.write("Downloading: [{:{}}] {:.1f}%".format("="*counter, n-1, (100/(n-1)*counter)))
sys.stdout.flush()
    url = post.url
#Save score for ML training, and post id for unique file names
file_name = str(post.score) + "_" + str(post.id) + ".jpg"
try:
#use requests to get image
        # a timeout makes the Timeout exceptions caught below actually reachable
        r = requests.get(url, timeout=10)
fullfilename = "pics/"+file_name
files.append(file_name)
#save image
with open(fullfilename,"wb") as f:
f.write(r.content)
except (
requests.ConnectionError,
requests.exceptions.ReadTimeout,
requests.exceptions.Timeout,
requests.exceptions.ConnectTimeout,
) as e:
print(e)
#Number of files downloaded not always the same due to connection errors
print(f'\nNumber of files downloaded: {len(files)}')
# + pycharm={"name": "#%%\n"}
path = "pics/"
files = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
print(files)
#Check if image is just a blank image by comparing to template
cull = []
counter = 0
length = len(files)
print(f"Original Number of files: {len(files)}")
deletedtemplate = cv2.imread("exampledeleted.jpg")  # load the template once, not per file
for file in files:
    counter += 1
    sys.stdout.write('\r')
    sys.stdout.write("Scanning: [{:{}}] {:.1f}%".format("="*counter, length-1, (100/(length-1)*counter)))
    sys.stdout.flush()
    fullfilename = "pics/" + file
    checkdeleted = cv2.imread(fullfilename)
if compare2images(deletedtemplate,checkdeleted):
#delete if so
os.remove(fullfilename)
cull.append(file)
counter = 0
length = len(cull)
for file in cull:
    counter += 1
    files.remove(file)
    sys.stdout.write('\r')
    sys.stdout.write("Deleting: [{:{}}] {:.1f}%".format("="*counter, length-1, (100/(length-1)*counter)))
    sys.stdout.flush()
print(f"Final Number of files: {len(files)}")
| src/.ipynb_checkpoints/main-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Sx6ZrHwtosad" colab_type="text"
# # Set up environment
# + id="brdGVj2u7BfJ" colab_type="code" colab={}
from google.colab import drive
drive.mount('/content/gdrive')
# !ln -s /content/gdrive/My\ Drive/Manchester/Courses/Masters/datasets /content/datasets
# !ln -s /content/gdrive/My\ Drive/Manchester/Courses/Masters/models /content/models
# + id="jDKOHPjg6uBT" colab_type="code" colab={}
# !pip install bert-embedding
# !pip install -e git+https://github.com/negedng/bert-embedding#egg=bert_embedding
# !pip install vaderSentiment
# !pip install xmltodict
# !pip install gensim
# !pip install -U spacy
# #!python -m spacy download en_core_web_lg
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('tagsets')
# !wget -P /root/input/ -c "https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz"
# !git clone https://github.com/negedng/argument_BERT
# + [markdown] id="x8p8ivwwo1oX" colab_type="text"
# # Trainer and predictor example
# + id="4_ObcuVFop9q" colab_type="code" colab={}
from argument_BERT import argumentor, trainer
# + [markdown] id="61wgEeS5ImXl" colab_type="text"
# Training a relation detection model
# + id="svMK-ihf6mtq" colab_type="code" colab={}
trainer.trainer("/content/datasets/brat-project/", ADU=False, train_generable=True, rst_files=False, verbose=1)
# + id="aV0fhq2UWZYk" colab_type="code" colab={}
essay01 = """Should students be taught to compete or to cooperate?
It is always said that competition can effectively promote the development of economy. In order to survive in the competition, companies continue to improve their products and service, and as a result, the whole society prospers. However, when we discuss the issue of competition or cooperation, what we are concerned about is not the whole society, but the development of an individual's whole life. From this point of view, I firmly believe that we should attach more importance to cooperation during primary education.
First of all, through cooperation, children can learn about interpersonal skills which are significant in the future life of all students. What we acquired from team work is not only how to achieve the same goal with others but more importantly, how to get along with others. During the process of cooperation, children can learn about how to listen to opinions of others, how to communicate with others, how to think comprehensively, and even how to compromise with other team members when conflicts occurred. All of these skills help them to get on well with other people and will benefit them for the whole life.
On the other hand, the significance of competition is that how to become more excellence to gain the victory. Hence it is always said that competition makes the society more effective. However, when we consider about the question that how to win the game, we always find that we need the cooperation. The greater our goal is, the more competition we need. Take Olympic games which is a form of competition for instance, it is hard to imagine how an athlete could win the game without the training of his or her coach, and the help of other professional staffs such as the people who take care of his diet, and those who are in charge of the medical care. The winner is the athlete but the success belongs to the whole team. Therefore without the cooperation, there would be no victory of competition.
Consequently, no matter from the view of individual development or the relationship between competition and cooperation we can receive the same conclusion that a more cooperative attitudes towards life is more profitable in one's success."""
# + [markdown] id="iW9vdC4fIxy8" colab_type="text"
# Full argument annotation using the essay example
# + id="P54Z1uT-sdQS" colab_type="code" colab={}
argumentor.argumentor(essay01, "models/model_pc.h5", "models/model_rsa.h5", "brat-project", "essay01.ann.xml", verbose=0)
# + id="rAq0z8Wy7wBh" colab_type="code" colab={}
| Colab_argument_BERT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.1 64-bit
# name: python38164bit689ce9532f7a4f68816de0f6cf09db80
# ---
import pandas as pd
import numpy as np
from sklearn.cluster import KMeans
df = pd.read_csv(
r'/home/kekeing/Desktop/code/DateMining/data/lianjia_processed.csv', sep=',')
df
dff = df[['deal_totalPrice', 'gross_area']]
da = dff.to_numpy()
# +
k_means = KMeans(init='k-means++', n_clusters=3, n_init=10)
k_means.fit(da)
res0Series = pd.Series(k_means.labels_)
res0 = res0Series[res0Series.values == 0]
res1 = res0Series[res0Series.values == 1]
res2 = res0Series[res0Series.values == 2]
dft = dff.iloc[res0.index]
# dft
# dd = pd.DataFrame(dft,columns=['deal_totalPrice','gross_area'])
result = pd.merge(dft, df, how='left',on=['deal_totalPrice', 'gross_area'])
result
# -
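# As a lighter alternative to the merge above, the cluster labels can be
# attached as a new column and rows selected with a boolean mask. A minimal
# sketch on made-up data (the numbers are illustrative, not from the CSV):
# +
demo = pd.DataFrame({'deal_totalPrice': [100, 110, 500, 520],
                     'gross_area': [40, 45, 160, 170]})
demo['cluster'] = np.array([0, 0, 1, 1])
demo_cluster0 = demo[demo['cluster'] == 0]
print(len(demo_cluster0))  # 2
# -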
| notebook/data_merge.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/menon92/DL-Sneak-Peek/blob/master/%E0%A6%9F%E0%A7%87%E0%A6%A8%E0%A7%8D%E0%A6%B8%E0%A6%B0%E0%A6%AB%E0%A7%8D%E0%A6%B2%E0%A7%8B_%E0%A7%A8_%E0%A7%A6_%E0%A6%93_%E0%A6%95%E0%A7%87%E0%A6%B0%E0%A6%BE%E0%A6%B8_%E0%A6%AA%E0%A6%B0%E0%A6%BF%E0%A6%9A%E0%A6%BF%E0%A6%A4%E0%A6%BF_%E0%A6%AA%E0%A6%B0%E0%A7%8D%E0%A6%AC_%E0%A7%AA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="HSOLRMd2m_mI"
# # Image Data - Part One
#
# Here we look at how a data-loading pipeline is built before training a deep-learning model. As the dataset we will use
# <a href="https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz">flower photos</a>, a 218 MB dataset. Using this dataset, we will try to cover the following:
#
# * How to load image data with `tf.keras.preprocessing.image.ImageDataGenerator`.
# * How to load image data with `tf.data`.
# * What advantages each of the two approaches offers.
# * How long each one takes.
# + [markdown] colab_type="text" id="4OsJZaUSnHIf"
# ### Import the required packages
# + colab={} colab_type="code" id="PYmgKCIunLDZ"
try:
    # The %tensorflow_version magic works only in Colab; it will fail in a local
    # notebook, which is why it is wrapped in try/except.
# %tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
# + colab={} colab_type="code" id="KSwtE_dqnL30"
AUTOTUNE = tf.data.experimental.AUTOTUNE
# + colab={} colab_type="code" id="Ycp-fLMhnZ6D"
import IPython.display as display
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import os
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="CpikdcrrndSr" outputId="9c9f3f29-d7a0-4ffb-858a-f2101a10c6da"
# Check the TensorFlow version
tf.__version__
# + [markdown] colab_type="text" id="FYGyMs5Fnhm0"
# ### Downloading the dataset
#
# To download the dataset we use the `tf.keras.utils.get_file` function. We pass it the dataset URL and the folder name (`fname='flower_photos'`) under which the download will be stored. Since `flower_photos.tgz` is a compressed archive, we also pass `untar=True` to extract it.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="-opqFc3VntaF" outputId="92028ca9-6b9b-4366-a78b-27af2447369d"
import pathlib
DATASET_URL = 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz'
data_dir = tf.keras.utils.get_file(origin=DATASET_URL, fname='flower_photos', untar=True)
data_dir = pathlib.Path(data_dir)
print('Dataset directory:', data_dir)
# + colab={"base_uri": "https://localhost:8080/", "height": 208} colab_type="code" id="SuxVEDOQnuxW" outputId="a7677e3d-9f43-4935-da44-ee63b0ce8454"
# !sudo apt install -qq tree
# !tree -L 1 /root/.keras/datasets/flower_photos
# + [markdown] colab_type="text" id="FYnPUqzeoWlb"
# We can see that there are 5 separate flower folders and one text file.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="T0_LHRC5oX6R" outputId="f133d8cd-3b4d-4167-8479-3e8d161c12de"
image_count = len(list(data_dir.glob('*/*.jpg')))
image_count
# + [markdown] colab_type="text" id="qNmiZFdqotVO"
# We can see that there are 3,670 flower images in total. Now let us see how many images there are of each flower.
# + colab={"base_uri": "https://localhost:8080/", "height": 104} colab_type="code" id="f5d5EPV3omXk" outputId="ee3acf2c-03a7-4dc4-b307-bfc36cb6a0af"
FLOWERS = ['daisy', 'dandelion', 'roses','sunflowers','tulips']
for flower in FLOWERS:
total_flower = len(list(data_dir.glob(flower + "/*.jpg")))
print("`{:10s}` folder contain {} flower images".format(flower, total_flower))
# + [markdown] colab_type="text" id="DRgiz644o_LB"
# Let us plot 3 of the tulip images. We print each image's size along with the picture, and we can see that the sizes differ from image to image. Before fitting the data to our model, we must bring every image to a fixed size.
# + colab={"base_uri": "https://localhost:8080/", "height": 790} colab_type="code" id="B7qIpQ4uo6oq" outputId="fbe3162b-d7d0-4a8a-87ae-8fbb76afaec1"
tulips = list(data_dir.glob('tulips/*'))
for image_path in tulips[:3]:
image = Image.open(str(image_path))
print('image size', image.size)
display.display(image)
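# + [markdown] colab_type="text"
# Since the sizes differ, each image must be resized before batching. A minimal sketch with PIL (the 224x224 target size is our assumption, not fixed by this notebook):
# + colab={} colab_type="code"
demo_img = Image.new('RGB', (500, 333))
demo_resized = demo_img.resize((224, 224))
print(demo_resized.size)  # (224, 224)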
# + [markdown] colab_type="text" id="U1bEFr2NpL9n"
# In Part 2 we will see how images can be loaded with `tf.data`.
# + colab={} colab_type="code" id="BEVX97x4pu7Q"
| vision/dl_computer_vision_part_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### RSA encryption
# The key generated by RSA.generate() provides 2 prime numbers, *p* and *q*, where *n* = *pq*
#
# Knowing *p* and *q* allows the totient of *n* to be easily calculated: $\phi$(*n*) = (*p* - 1)(*q* - 1)
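# As a toy end-to-end check, take the classic small-prime example (illustrative values, unrelated to the 2048-bit key generated below): *p* = 61, *q* = 53, so *n* = 3233 and $\phi$(*n*) = 3120; choosing *e* = 17 gives *d* = *e*<sup>-1</sup> mod $\phi$(*n*) = 2753.
# +
p_toy, q_toy = 61, 53
n_toy = p_toy * q_toy                   # 3233
phi_toy = (p_toy - 1) * (q_toy - 1)     # 3120
e_toy = 17
d_toy = pow(e_toy, -1, phi_toy)         # modular inverse (Python 3.8+) -> 2753
m_toy = 65
c_toy = pow(m_toy, e_toy, n_toy)        # encrypt: m**e mod n
assert pow(c_toy, d_toy, n_toy) == m_toy  # decrypt recovers the message
# -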
# +
# #!/usr/bin/env python3
# -*- coding: utf-8 -*-
'''
Practice with RSA encryption.
Using some helper functions from Udacity Applied Cryptography course.
These can be found at:
https://www.udacity.com/course/applied-cryptography--cs387
'''
from Crypto.PublicKey import RSA
# Generate an RSA key
key = RSA.generate(2048)
print('p =')
print(key.p)
print('\n')
print('q =')
print(key.q)
print('\n')
print('n =')
print(key.n)
print('\n')
# Totient(n)
phi_n = (key.p - 1)*(key.q - 1)
print('phi(n) =')
print(phi_n)
print('\n')
# key.d * key.e - 1 = k * phi(n), for some int k
# print('k = ' + str((key.d*key.e - 1) / phi_n) + '\n')
# +
###########################################################
# Helper functions from Udacity Applied Cryptography course
BITS = ('0', '1')
ASCII_BITS = 7
def display_bits(b):
"""converts list of {0, 1}* to string"""
return ''.join([BITS[e] for e in b])
def seq_to_bits(seq):
return [0 if b == '0' else 1 for b in seq]
def pad_bits(bits, pad):
"""pads seq with leading 0s up to length pad"""
assert len(bits) <= pad
return [0] * (pad - len(bits)) + bits
def convert_to_bits(n):
"""converts an integer `n` to bit array"""
result = []
if n == 0:
return [0]
while n > 0:
result = [(n % 2)] + result
n = n // 2
return result
def string_to_bits(s):
def chr_to_bit(c):
return pad_bits(convert_to_bits(ord(c)), ASCII_BITS)
return [b for group in
map(chr_to_bit, s)
for b in group]
def bits_to_char(b):
assert len(b) == ASCII_BITS
value = 0
for e in b:
value = (value * 2) + e
return chr(value)
def bits_to_string(b):
return ''.join([bits_to_char(b[i:i + ASCII_BITS])
for i in range(0, len(b), ASCII_BITS)])
###########################################################
# -
# The example message below gets encoded as: *E(m)* = *c* = *m<sup>e</sup>* mod *n*
# +
# Create a message to encrypt
message = 'Grocery list: sweet potatoes, broccoli, apples, bananas, yogurt'
# Convert to bits
msg_bits = string_to_bits(message)
msg_bits = int(display_bits(msg_bits))
# print(msg_bits)
# Encrypted message: E(m) = m**e % n
E_m = pow(msg_bits, key.e, key.n)
print('Cipher: \n' + str(E_m) + '\n')
# -
# Then, the cipher can be decoded as: *D(c)* = *m* = *c<sup>d</sup>* mod *n*
# Decrypted message: D(c) = c**d % n
D_c = pow(E_m, key.d, key.n)
D_c_str = seq_to_bits(str(D_c))
D_c_str = bits_to_string(D_c_str)
print('Decrypted message: \n' + D_c_str)
| RSA_practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# # Simple Readout Fidelity Measurement
#
# Run an identity, `I`, or an `X` pulse to estimate readout fidelity.
# ## setup
# +
from math import pi
from typing import List
import numpy as np
from pyquil import Program, get_qc
from pyquil.api import QuantumComputer
from pyquil.gates import *
# -
# ## measuring readout fidelity
#
# `run_readout` constructs and measures a program for a single qubit depending on whether the $\left|0\right>$ state or the $\left|1\right>$ state is targeted. When $\left|0\right>$ is targeted, we prepare a qubit in the ground state with an identity operation. Similarly, when $\left|1\right>$ is targeted, we prepare a qubit in the excited state with an `X` gate.
def run_readout(qc: QuantumComputer, qubit: int, target: int, n_shots: int = 1000):
"""
Measure a qubit. Optionally flip a bit first.
:param qc: The quantum computer to run on
:param qubit: The qubit to probe
:param target: What state we want. Either 0 or 1.
:param n_shots: The number of times to repeat the experiment
"""
# Step 2. Construct and compile your program
program = Program()
ro = program.declare('ro', 'BIT', 1)
# Uncomment to enable active reset
# program += RESET()
if target == 0:
program += I(qubit)
elif target == 1:
program += RX(pi, qubit)
else:
raise ValueError("Target should be 0 or 1")
program += MEASURE(qubit, ro[0])
program = program.wrap_in_numshots_loop(n_shots)
nq_program = program
# Uncomment to test the quilc compiler
# nq_program = qc.compiler.quil_to_native_quil(program)
executable = qc.compiler.native_quil_to_executable(nq_program)
bitstrings = qc.run(executable)
return np.mean(bitstrings[:, 0])
# `run_readouts` is a wrapper for `run_readout` that runs on each qubit, targeting $\left|0\right>$ for many measurements (default 1000) and then $\left|1\right>$. When we prepare in the $\left|0\right>$ state, we expect to measure in the $\left|0\right>$ state, so the percentage of the time we do in fact measure $\left|0\right>$ gives us `p(0|0)`. We do the same for `p(1|1)` and print the results.
def run_readouts(qc: QuantumComputer):
"""
    Characterize readout on each qubit of ``qc``, one at a time.
    Results are collected and printed together at the end.
    :param qc: The QuantumComputer to run on
"""
results = []
for qubit in qc.qubits():
p0 = run_readout(qc=qc, qubit=qubit, target=0)
p1 = run_readout(qc=qc, qubit=qubit, target=1)
results += [(qubit, p0, p1)]
print('q p(0|0) p(1|1)')
for qubit, p0, p1 in results:
print(f'q{qubit:<3d}{1-p0:10f}{p1:10f}')
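# The two probabilities per qubit assemble into a 2x2 confusion matrix
# (rows = prepared state, columns = measured state). The helper below is a
# sketch of ours, not part of pyQuil or forest-benchmarking:
def confusion_matrix_1q(p0: float, p1: float) -> np.ndarray:
    """p0 / p1 are the measured-1 fractions when preparing |0> / |1>."""
    return np.array([[1 - p0, p0],
                     [1 - p1, p1]])
assert np.allclose(confusion_matrix_1q(0.05, 0.93).sum(axis=1), 1.0)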
run_readouts(qc=get_qc('9q-square-noisy-qvm'))
# ## Using forest_benchmarking.readout.py
# Forest-benchmarking also provides a readout module with convenience functions for estimating readout and reset confusion matrices for groups of qubits.
from forest_benchmarking.readout import (estimate_joint_confusion_in_set,
estimate_joint_reset_confusion,
marginalize_confusion_matrix)
qc = get_qc('9q-square-noisy-qvm')
qubits = qc.qubits()
# ## Convenient estimation of (possibly joint) confusion matrices for groups of qubits
# +
# get all single qubit confusion matrices
one_q_ro_conf_matxs = estimate_joint_confusion_in_set(qc, use_active_reset=True)
# get all pairwise confusion matrices from subset of qubits.
subset = qubits[:4] # only look at 4 qubits of interest, this will mean (4 choose 2) = 6 matrices
two_q_ro_conf_matxs = estimate_joint_confusion_in_set(qc, qubits=subset, joint_group_size=2, use_active_reset=True)
# -
# ## extract the 1q ro fidelities
for qubit in qubits:
conf_mat = one_q_ro_conf_matxs[(qubit,)]
ro_fidelity = np.trace(conf_mat)/2 # average P(0 | 0) and P(1 | 1)
print(f'q{qubit:<3d}{ro_fidelity:10f}')
# ## Pick a 2q joint confusion matrix and compare marginal to 1q
# Comparing a joint $n$-qubit confusion matrix to the estimated matrices on fewer qubits can help reveal correlated readout errors.
two_q_conf_mat = two_q_ro_conf_matxs[(subset[0],subset[-1])]
print(two_q_conf_mat)
print()
marginal = marginalize_confusion_matrix(two_q_conf_mat, [subset[0], subset[-1]], [subset[0]])
print(marginal)
print(one_q_ro_conf_matxs[(subset[0],)])
# ## Estimate confusion matrix for active reset error
subset = tuple(qubits[:4])
subset_reset_conf_matx = estimate_joint_reset_confusion(qc, subset,
joint_group_size=len(subset),
show_progress_bar=True)
for row in subset_reset_conf_matx[subset]:
    pr_success = row[0]
    print(pr_success)
| examples/readout_fidelity.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Word Embedding with Global Vectors (GloVe)
# :label:`sec_glove`
#
#
# Word-word co-occurrences
# within context windows
# may carry rich semantic information.
# For example,
# in a large corpus
# word "solid" is
# more likely to co-occur
# with "ice" than "steam",
# but word "gas"
# probably co-occurs with "steam"
# more frequently than "ice".
# Besides,
# global corpus statistics
# of such co-occurrences
# can be precomputed:
# this can lead to more efficient training.
# To leverage statistical
# information in the entire corpus
# for word embedding,
# let us first revisit
# the skip-gram model in :numref:`subsec_skip-gram`,
# but interpreting it
# using global corpus statistics
# such as co-occurrence counts.
#
# ## Skip-Gram with Global Corpus Statistics
# :label:`subsec_skipgram-global`
#
# Denoting by $q_{ij}$
# the conditional probability
# $P(w_j\mid w_i)$
# of word $w_j$ given word $w_i$
# in the skip-gram model,
# we have
#
# $$q_{ij}=\frac{\exp(\mathbf{u}_j^\top \mathbf{v}_i)}{ \sum_{k \in \mathcal{V}} \text{exp}(\mathbf{u}_k^\top \mathbf{v}_i)},$$
#
# where
# for any index $i$
# vectors $\mathbf{v}_i$ and $\mathbf{u}_i$
# represent word $w_i$
# as the center word and context word,
# respectively, and $\mathcal{V} = \{0, 1, \ldots, |\mathcal{V}|-1\}$
# is the index set of the vocabulary.
#
# Consider word $w_i$
# that may occur multiple times
# in the corpus.
# In the entire corpus,
# all the context words
# wherever $w_i$ is taken as their center word
# form a *multiset* $\mathcal{C}_i$
# of word indices
# that *allows for multiple instances of the same element*.
# For any element,
# its number of instances is called its *multiplicity*.
# To illustrate with an example,
# suppose that word $w_i$ occurs twice in the corpus
# and indices of the context words
# that take $w_i$ as their center word
# in the two context windows
# are
# $k, j, m, k$ and $k, l, k, j$.
# Thus, multiset $\mathcal{C}_i = \{j, j, k, k, k, k, l, m\}$, where
# multiplicities of elements $j, k, l, m$
# are 2, 4, 1, 1, respectively.
#
# Now let us denote the multiplicity of element $j$ in
# multiset $\mathcal{C}_i$ as $x_{ij}$.
# This is the global co-occurrence count
# of word $w_j$ (as the context word)
# and word $w_i$ (as the center word)
# in the same context window
# in the entire corpus.
# Using such global corpus statistics,
# the loss function of the skip-gram model
# is equivalent to
#
# $$-\sum_{i\in\mathcal{V}}\sum_{j\in\mathcal{V}} x_{ij} \log\,q_{ij}.$$
# :eqlabel:`eq_skipgram-x_ij`
#
# We further denote by
# $x_i$
# the number of all the context words
# in the context windows
# where $w_i$ occurs as their center word,
# which is equivalent to $|\mathcal{C}_i|$.
# Letting $p_{ij}$
# be the conditional probability
# $x_{ij}/x_i$ for generating
# context word $w_j$ given center word $w_i$,
# :eqref:`eq_skipgram-x_ij`
# can be rewritten as
#
# $$-\sum_{i\in\mathcal{V}} x_i \sum_{j\in\mathcal{V}} p_{ij} \log\,q_{ij}.$$
# :eqlabel:`eq_skipgram-p_ij`
#
# In :eqref:`eq_skipgram-p_ij`, $-\sum_{j\in\mathcal{V}} p_{ij} \log\,q_{ij}$ calculates
# the cross-entropy
# of
# the conditional distribution $p_{ij}$
# of global corpus statistics
# and
# the
# conditional distribution $q_{ij}$
# of model predictions.
# This loss
# is also weighted by $x_i$ as explained above.
# Minimizing the loss function in
# :eqref:`eq_skipgram-p_ij`
# will allow
# the predicted conditional distribution
# to get close to
# the conditional distribution
# from the global corpus statistics.
#
#
# Though being commonly used
# for measuring the distance
# between probability distributions,
# the cross-entropy loss function may not be a good choice here.
# On one hand, as we mentioned in :numref:`sec_approx_train`,
# the cost of properly normalizing $q_{ij}$
# results in the sum over the entire vocabulary,
# which can be computationally expensive.
# On the other hand,
# a large number of rare
# events from a large corpus
# are often modeled by the cross-entropy loss
# to be assigned with
# too much weight.
#
# ## The GloVe Model
#
# In view of this,
# the *GloVe* model makes three changes
# to the skip-gram model based on squared loss :cite:`Pennington.Socher.Manning.2014`:
#
# 1. Use variables $p'_{ij}=x_{ij}$ and $q'_{ij}=\exp(\mathbf{u}_j^\top \mathbf{v}_i)$
# that are not probability distributions
# and take the logarithm of both, so the squared loss term is $\left(\log\,p'_{ij} - \log\,q'_{ij}\right)^2 = \left(\mathbf{u}_j^\top \mathbf{v}_i - \log\,x_{ij}\right)^2$.
# 2. Add two scalar model parameters for each word $w_i$: the center word bias $b_i$ and the context word bias $c_i$.
# 3. Replace the weight of each loss term with the weight function $h(x_{ij})$, where $h(x)$ is increasing in the interval of $[0, 1]$.
#
# Putting all things together, training GloVe is to minimize the following loss function:
#
# $$\sum_{i\in\mathcal{V}} \sum_{j\in\mathcal{V}} h(x_{ij}) \left(\mathbf{u}_j^\top \mathbf{v}_i + b_i + c_j - \log\,x_{ij}\right)^2.$$
# :eqlabel:`eq_glove-loss`
#
# For the weight function, a suggested choice is:
# $h(x) = (x/c) ^\alpha$ (e.g., $\alpha = 0.75$) if $x < c$ (e.g., $c = 100$); otherwise $h(x) = 1$.
# In this case,
# because $h(0)=0$,
# the squared loss term for any $x_{ij}=0$ can be omitted
# for computational efficiency.
# For example,
# when using minibatch stochastic gradient descent for training,
# at each iteration
# we randomly sample a minibatch of *non-zero* $x_{ij}$
# to calculate gradients
# and update the model parameters.
# Note that these non-zero $x_{ij}$ are precomputed
# global corpus statistics;
# thus, the model is called GloVe
# for *Global Vectors*.
#
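# A minimal NumPy sketch of the weight function $h$ and the loss in
# :eqref:`eq_glove-loss` (the function names and random data below are ours,
# purely illustrative):
# +
import numpy as np

def weight_h(x, c=100.0, alpha=0.75):
    """GloVe weight: (x/c)**alpha below the cutoff c, else 1."""
    x = np.asarray(x, dtype=float)
    return np.where(x < c, (x / c) ** alpha, 1.0)

def glove_loss(U, V, b, c_bias, X):
    """Weighted squared loss summed over the non-zero co-occurrence counts."""
    i, j = np.nonzero(X)  # only non-zero x_ij contribute
    score = (V[i] * U[j]).sum(axis=1) + b[i] + c_bias[j]
    return float((weight_h(X[i, j]) * (score - np.log(X[i, j])) ** 2).sum())

rng = np.random.default_rng(0)
vocab, dim = 6, 4
U, V = rng.normal(size=(vocab, dim)), rng.normal(size=(vocab, dim))
b, c_bias = np.zeros(vocab), np.zeros(vocab)
X = rng.integers(0, 5, size=(vocab, vocab)).astype(float)
print(glove_loss(U, V, b, c_bias, X))  # non-negative by construction
# -
#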
# It should be emphasized that
# if word $w_i$ appears in the context window of
# word $w_j$, then *vice versa*.
# Therefore, $x_{ij}=x_{ji}$.
# Unlike word2vec
# that fits the asymmetric conditional probability
# $p_{ij}$,
# GloVe fits the symmetric $\log \, x_{ij}$.
# Therefore, the center word vector and
# the context word vector of any word are mathematically equivalent in the GloVe model.
# However in practice, owing to different initialization values,
# the same word may still get different values
# in these two vectors after training:
# GloVe sums them up as the output vector.
#
#
#
# ## Interpreting GloVe from the Ratio of Co-occurrence Probabilities
#
#
# We can also interpret the GloVe model from another perspective.
# Using the same notation in
# :numref:`subsec_skipgram-global`,
# let $p_{ij} \stackrel{\mathrm{def}}{=} P(w_j \mid w_i)$ be the conditional probability of generating the context word $w_j$ given $w_i$ as the center word in the corpus.
# :numref:`tab_glove`
# lists several co-occurrence probabilities
# given words "ice" and "steam"
# and their ratios based on statistics from a large corpus.
#
#
# :Word-word co-occurrence probabilities and their ratios from a large corpus (adapted from Table 1 in :cite:`Pennington.Socher.Manning.2014`)
#
#
# |$w_k$=|solid|gas|water|fashion|
# |:--|:-|:-|:-|:-|
# |$p_1=P(w_k\mid \text{ice})$|0.00019|0.000066|0.003|0.000017|
# |$p_2=P(w_k\mid\text{steam})$|0.000022|0.00078|0.0022|0.000018|
# |$p_1/p_2$|8.9|0.085|1.36|0.96|
# :label:`tab_glove`
#
#
# We can observe the following from :numref:`tab_glove`:
#
# * For a word $w_k$ that is related to "ice" but unrelated to "steam", such as $w_k=\text{solid}$, we expect a larger ratio of co-occurrence probabilities, such as 8.9.
# * For a word $w_k$ that is related to "steam" but unrelated to "ice", such as $w_k=\text{gas}$, we expect a smaller ratio of co-occurrence probabilities, such as 0.085.
# * For a word $w_k$ that is related to both "ice" and "steam", such as $w_k=\text{water}$, we expect a ratio of co-occurrence probabilities that is close to 1, such as 1.36.
# * For a word $w_k$ that is unrelated to both "ice" and "steam", such as $w_k=\text{fashion}$, we expect a ratio of co-occurrence probabilities that is close to 1, such as 0.96.
#
#
#
#
# It can be seen that the ratio
# of co-occurrence probabilities
# can intuitively express
# the relationship between words.
# Thus, we can design a function
# of three word vectors
# to fit this ratio.
# For the ratio of co-occurrence probabilities
# ${p_{ij}}/{p_{ik}}$
# with $w_i$ being the center word
# and $w_j$ and $w_k$ being the context words,
# we want to fit this ratio
# using some function $f$:
#
# $$f(\mathbf{u}_j, \mathbf{u}_k, {\mathbf{v}}_i) \approx \frac{p_{ij}}{p_{ik}}.$$
# :eqlabel:`eq_glove-f`
#
# Among many possible designs for $f$,
# we only pick a reasonable choice in the following.
# Since the ratio of co-occurrence probabilities
# is a scalar,
# we require that
# $f$ be a scalar function, such as
# $f(\mathbf{u}_j, \mathbf{u}_k, {\mathbf{v}}_i) = f\left((\mathbf{u}_j - \mathbf{u}_k)^\top {\mathbf{v}}_i\right)$.
# Switching word indices
# $j$ and $k$ in :eqref:`eq_glove-f`,
# it must hold that
# $f(x)f(-x)=1$,
# so one possibility is $f(x)=\exp(x)$,
# i.e.,
#
# $$f(\mathbf{u}_j, \mathbf{u}_k, {\mathbf{v}}_i) = \frac{\exp\left(\mathbf{u}_j^\top {\mathbf{v}}_i\right)}{\exp\left(\mathbf{u}_k^\top {\mathbf{v}}_i\right)} \approx \frac{p_{ij}}{p_{ik}}.$$
#
# Now let us pick
# $\exp\left(\mathbf{u}_j^\top {\mathbf{v}}_i\right) \approx \alpha p_{ij}$,
# where $\alpha$ is a constant.
# Since $p_{ij}=x_{ij}/x_i$, after taking the logarithm on both sides we get $\mathbf{u}_j^\top {\mathbf{v}}_i \approx \log\,\alpha + \log\,x_{ij} - \log\,x_i$.
# We may use additional bias terms to fit $- \log\, \alpha + \log\, x_i$, such as the center word bias $b_i$ and the context word bias $c_j$:
#
# $$\mathbf{u}_j^\top \mathbf{v}_i + b_i + c_j \approx \log\, x_{ij}.$$
# :eqlabel:`eq_glove-square`
#
# Measuring the squared error of
# :eqref:`eq_glove-square` with weights,
# the GloVe loss function in
# :eqref:`eq_glove-loss` is obtained.
#
#
#
# ## Summary
#
# * The skip-gram model can be interpreted using global corpus statistics such as word-word co-occurrence counts.
# * The cross-entropy loss may not be a good choice for measuring the difference of two probability distributions, especially for a large corpus. GloVe uses squared loss to fit precomputed global corpus statistics.
# * The center word vector and the context word vector are mathematically equivalent for any word in GloVe.
# * GloVe can be interpreted from the ratio of word-word co-occurrence probabilities.
#
#
# ## Exercises
#
# 1. If words $w_i$ and $w_j$ co-occur in the same context window, how can we use their distance in the text sequence to redesign the method for calculating the conditional probability $p_{ij}$? Hint: see Section 4.2 of the GloVe paper :cite:`Pennington.Socher.Manning.2014`.
# 1. For any word, are its center word bias and context word bias mathematically equivalent in GloVe? Why?
#
#
# [Discussions](https://discuss.d2l.ai/t/385)
#
| d2l/tensorflow/chapter_natural-language-processing-pretraining/glove.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # OLCI atmospheric correction effects: Level-1B to Level-2 spectral comparison
# Version: 2.0
# Date: 10/04/2019
# Author: <NAME> and <NAME> (Plymouth Marine Laboratory)
# Credit: This code was developed for EUMETSAT under contracts for the Copernicus
# programme.
# License: This code is offered as free-to-use in the public domain, with no warranty.
# This routine shows examples of how to use Python netCDF libraries to compare OLCI L1 and L2 spectra, either for a single point or averaged over an area.
#
# You should try the OLCI_spectral_interrogation.ipynb routine prior to using this one.
# We start by importing the libraries for all the functions we need
# +
# %matplotlib inline
import xarray as xr
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import os
import fnmatch
import datetime
import logging
# -
# Here we define a function to calculate spherical distance between points so that we can define polygons
# function
def spheric_dist(lat1,lat2,lon1,lon2,mode="global"):
'''
function dist=spheric_dist(lat1,lat2,lon1,lon2)
compute distances for a simple spheric earth
input:
lat1 : latitude of first point (matrix or point)
lon1 : longitude of first point (matrix or point)
lat2 : latitude of second point (matrix or point)
lon2 : longitude of second point (matrix or point)
output:
dist : distance from first point to second point (matrix)
'''
R = 6367442.76
# Determine proper longitudinal shift.
l = np.abs(lon2-lon1)
    try:
        l[l >= 180] = 360 - l[l >= 180]
    except (TypeError, IndexError):
        # scalar input: apply the same longitudinal shift directly
        if l >= 180:
            l = 360 - l
# Convert Decimal degrees to radians.
deg2rad = np.pi/180
phi1 = (90-lat1)*deg2rad
phi2 = (90-lat2)*deg2rad
theta1 = lon1*deg2rad
theta2 = lon2*deg2rad
lat1 = lat1*deg2rad
lat2 = lat2*deg2rad
l = l*deg2rad
if mode=="global":
# Compute the distances: new
cos = (np.sin(phi1)*np.sin(phi2)*np.cos(theta1 - theta2) +
np.cos(phi1)*np.cos(phi2))
arc = np.arccos( cos )
dist = R*arc
elif mode=="regional":
# Compute the distances: 1 old, deprecated ROMS version - unsuitable for global
dist = R*np.arcsin(np.sqrt(((np.sin(l)*np.cos(lat2))**2) + (((np.sin(lat2)*np.cos(lat1)) - \
(np.sin(lat1)*np.cos(lat2)*np.cos(l)))**2)))
    elif mode=="local":
        # flat-earth (equirectangular) approximation, suitable for short distances;
        # uses the longitudes/latitudes already converted to radians above
        x = (theta2 - theta1)*np.cos(0.5*(lat2 + lat1))
        y = lat2 - lat1
        dist = R*np.sqrt(x*x + y*y)
else:
print("incorrect mode")
return dist
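As a quick sanity check, a standalone copy of the "global" branch (with the arccos argument clipped to guard against floating-point drift) can be tested against a known distance; the function name here is illustrative:

```python
import numpy as np

def spheric_dist_global(lat1, lat2, lon1, lon2, R=6367442.76):
    """Great-circle distance in metres, mirroring the 'global' branch above."""
    deg2rad = np.pi / 180
    phi1, phi2 = (90 - lat1) * deg2rad, (90 - lat2) * deg2rad
    theta1, theta2 = lon1 * deg2rad, lon2 * deg2rad
    cos = (np.sin(phi1) * np.sin(phi2) * np.cos(theta1 - theta2)
           + np.cos(phi1) * np.cos(phi2))
    # clip guards against values like 1 + 1e-16 produced by rounding
    return R * np.arccos(np.clip(cos, -1., 1.))

# A quarter of the equator should be about R * pi / 2 ~ 1.0e7 m.
print(spheric_dist_global(0., 0., 0., 90.))
```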
# To help to find your data, please complete the MYPATH variable below with the output generated by the /Configuration_Testing/Data_Path_Checker.ipynb Jupyter notebook in the Configuration_Testing folder.
# e.g. MYPATH = os.path.join("C:/","Users","me","Desktop")
MYPATH = "/users/rsg/olcl/scratch/eum_repos/ocean/"
#-default parameters------------------------------------------------------------
DEFAULT_LOG_PATH = os.getcwd()
DEFAULT_L1_FILE_FILTER = '*radiance*.nc'
DEFAULT_L2_FILE_FILTER = '*reflectance*.nc'
# And here we enter the programme proper, setting up our preliminaries and logfile.
# +
#-main-------------------------------------------------------------------------
# preliminary stuff
logfile = os.path.join(DEFAULT_LOG_PATH,"OLCI_spectral_AC_"+datetime.datetime.now().strftime('%Y%m%d_%H%M')+".log")
# we define a verbose flag to control how much info we want to see. It can also be useful to define a debug flag
# for even more information.
verbose = False
# set file logger
try:
if os.path.exists(logfile):
os.remove(logfile)
print("logging to: "+logfile)
logging.basicConfig(filename=logfile,level=logging.DEBUG)
except:
print("Failed to set logger")
# -
# Next we define a box to average our spectra over. If we set the lon values equal and the lat values equal, then we take the spectra from the nearest single point.
# +
# makes box average. Will choose nearest point if latmin == latmax and lonmin == lonmax
lonmin = 70.0
lonmax = 70.25
latmin = 14.0
latmax = 14.25
nearest_flag = False
if lonmin == lonmax and latmin == latmax:
print('Using nearest point')
nearest_flag = True
else:
print('Making box average')
# -
# Now we load our latitude and longitude fields so that we can extract the indices for the box (or point) we want to get the spectra for.
# +
# get the paths and coordinates files
input_root = os.path.join(MYPATH,'OLCI_test_data')
input_path_L1 = "S3A_OL_1_EFR____20171226T045629_20171226T045929_20171227T093108_0179_026_076_2700_MAR_O_NT_002.SEN3"
input_path_L2 = "S3A_OL_2_WFR____20171226T045629_20171226T045929_20171227T105453_0179_026_076_2700_MAR_O_NT_002.SEN3"
file_name_geo = "geo_coordinates.nc"
GEO_file = xr.open_dataset(os.path.join(input_root,input_path_L2,file_name_geo))
LAT = GEO_file.variables['latitude'][:]
LON = GEO_file.variables['longitude'][:]
GEO_file.close()
dist_i1 = spheric_dist(latmin,LAT,lonmin,LON)
#J is the X-coord
I1,J1 = np.where(dist_i1 == np.nanmin(dist_i1))
if nearest_flag:
I1 = I1[0]
J1 = J1[0]
I2 = I1+1
J2 = J1+1
else:
dist_i2 = spheric_dist(latmax,LAT,lonmax,LON)
I2, J2 = np.where(dist_i2 == np.nanmin(dist_i2))
I1 = I1[0]
J1 = J1[0]
I2 = I2[0]
J2 = J2[0]
# re-arrange coordinates so that we count upwards...
if J2 < J1:
J1f = J2
J2f = J1
else:
J1f = J1
J2f = J2
if I2 < I1:
I1f = I2
I2f = I1
else:
I1f = I1
I2f = I2
# -
# The values for the wavelengths of each channel are stored in the xml manifest files, so lets read through that line by line and extract them, as well as the bandwidths.
# +
# get wavelengths from the L1 xml file
bands_L1 = []
wavelengths_L1 = []
bandwidths_L1 = []
xml_file = os.path.join(input_root,input_path_L1,'xfdumanifest.xml')
with open(xml_file, 'r') as input_file:
for line in input_file:
if "<sentinel3:band name=" in line:
bands_L1.append(line.replace('<sentinel3:band name="',"").replace('">','').replace(' ','').replace('\n',''))
if "<sentinel3:centralWavelength>" in line:
wavelengths_L1.append(float(line.replace('<sentinel3:centralWavelength>','').replace('</sentinel3:centralWavelength>','')))
if "<sentinel3:bandwidth>" in line:
bandwidths_L1.append(float(line.replace('<sentinel3:bandwidth>','').replace('</sentinel3:bandwidth>','')))
# get wavelengths from the L2 xml file
bands_L2 = []
wavelengths_L2 = []
bandwidths_L2 = []
xml_file = os.path.join(input_root,input_path_L2,'xfdumanifest.xml')
with open(xml_file, 'r') as input_file:
for line in input_file:
if "<sentinel3:band name=" in line:
bands_L2.append(line.replace('<sentinel3:band name="',"").replace('">','').replace(' ','').replace('\n',''))
if "<sentinel3:centralWavelength>" in line:
wavelengths_L2.append(float(line.replace('<sentinel3:centralWavelength>','').replace('</sentinel3:centralWavelength>','')))
if "<sentinel3:bandwidth>" in line:
bandwidths_L2.append(float(line.replace('<sentinel3:bandwidth>','').replace('</sentinel3:bandwidth>','')))
# -
# Now let's loop through all of the radiances/reflectances one by one and pull out the value over the box (or point) for each field, storing them in a list.
# +
# -get the files by band name-------------------------------------------------------------
nc_files_L1=[]
for root, _, filenames in os.walk(os.path.join(input_root,input_path_L1)):
for filename in fnmatch.filter(filenames, DEFAULT_L1_FILE_FILTER):
nc_files_L1.append(os.path.join(root, filename))
if verbose:
print('Found: '+filename)
logging.info('Found: '+os.path.join(root, filename))
nc_files_L2=[]
for root, _, filenames in os.walk(os.path.join(input_root,input_path_L2)):
for filename in fnmatch.filter(filenames, DEFAULT_L2_FILE_FILTER):
nc_files_L2.append(os.path.join(root, filename))
if verbose:
print('Found: '+filename)
logging.info('Found: '+os.path.join(root, filename))
# get the radiances
radiances = []
radiance_errors = []
radiance_variability = []
for nc_file in sorted(nc_files_L1):
varname = os.path.basename(nc_file).split('.')[0]
nc_fid = xr.open_dataset(nc_file)
radiances.append(np.nanmean(nc_fid.variables[varname][I1f:I2f,J1f:J2f]))
radiance_variability.append(np.nanstd(nc_fid.variables[varname][I1f:I2f,J1f:J2f]))
radiance_errors.append(0.0)
nc_fid.close()
radiance_tops = np.asarray([x + y for x, y in zip(radiances, radiance_variability)])
radiance_bottoms = np.asarray([x - y for x, y in zip(radiances, radiance_variability)])
# get the reflectances
reflectances = []
reflectance_errors = []
reflectance_variability = []
for nc_file in sorted(nc_files_L2):
varname = os.path.basename(nc_file).split('.')[0]
nc_fid = xr.open_dataset(nc_file)
reflectances.append(np.nanmean(nc_fid.variables[varname][I1f:I2f,J1f:J2f]))
reflectance_variability.append(np.nanstd(nc_fid.variables[varname][I1f:I2f,J1f:J2f]))
reflectance_errors.append(np.nanmean(nc_fid.variables[varname+'_err'][I1f:I2f,J1f:J2f]))
nc_fid.close()
reflectance_tops = np.asarray([x + y for x, y in zip(reflectances, reflectance_variability)])
reflectance_bottoms = np.asarray([x - y for x, y in zip(reflectances, reflectance_variability)])
# -
# Finally, we plot the spectra and save.
# +
# print the spectra
fig1 = plt.figure(figsize=(10, 10), dpi=300)
plt.errorbar(wavelengths_L1, radiances, xerr=bandwidths_L1, yerr=radiance_errors, color='b', linewidth=1.0)
plt.plot(wavelengths_L1,radiance_tops,'b--',linewidth=1.0)
plt.plot(wavelengths_L1,radiance_bottoms,'b--',linewidth=1.0)
plt.fill_between(wavelengths_L1,radiance_tops,radiance_bottoms,color='b',alpha=0.25)
plt.xlabel('Wavelength [nm]', fontsize=16)
plt.ylabel('Radiance [mW.m$^{-2}$.sr$^{-1}$.nm$^{-1}$]', fontsize=16, color='b')
plt.xticks(fontsize=12)
plt.yticks(fontsize=12, color='b')
ax1 = plt.twinx()
plt.errorbar(wavelengths_L2, reflectances, xerr=bandwidths_L2, yerr=reflectance_errors, color='r', linewidth=1.0)
plt.plot(wavelengths_L2,reflectance_tops,'r--',linewidth=1.0)
plt.plot(wavelengths_L2,reflectance_bottoms,'r--',linewidth=1.0)
plt.fill_between(wavelengths_L2,reflectance_tops,reflectance_bottoms,color='r',alpha=0.25)
plt.xlabel('Wavelength [nm]', fontsize=16)
plt.ylabel('Reflectance [sr$^{-1}$]', fontsize=16, color='r')
plt.xticks(fontsize=12)
plt.yticks(fontsize=12, color='r')
plt.title('Co-located OLCI L1/L2 Radiances/Reflectances', fontsize=16);
plt.show()
# -
fig1.savefig('OLCI_spectra_AC_demo.png',bbox_inches='tight');
# <br> <a href="./15_OLCI_CHL_comparison.ipynb"><< 15 - Ocean and Land Colour Instrument - CHL algorithm comparison</a><span style="float:right;"><a href="../SLSTR/21_SLSTR_spatial_interrogation.ipynb">21 - SLSTR spatial plotting, quality control and data interrogation >></a> <hr> <p style="text-align:left;">This project is licensed under the <a href="/edit/LICENSE">MIT License</a> <span style="float:right;"><a href="https://gitlab.eumetsat.int/eo-lab-usc-open/ocean">View on GitLab</a> | <a href="https://training.eumetsat.int/">EUMETSAT Training</a> | <a href=mailto:<EMAIL>>Contact</a></span></p>
| OLCI/16_OLCI_spectral_AC_L1_L2_comparison.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nsc
# language: python
# name: nsc
# ---
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as stats
sns.set()
from collections import defaultdict
import ipdb
# ### 1) CoupledLogarithm
def coupled_logarithm(value: [int, float, np.ndarray], kappa: [int, float] = 0.0, dim: int = 1) -> [float, np.ndarray]:
"""
Generalization of the logarithm function, which defines smooth
transition to power functions.
Parameters
----------
value : Input variable in which the coupled logarithm is applied to.
Accepts int, float, and np.ndarray data types.
kappa : Coupling parameter which modifies the coupled logarithm function.
Accepts int and float data types.
dim : The dimension (or rank) of value. If value is scalar, then dim = 1.
Accepts only int data type.
"""
# convert value into np.ndarray (if scalar) to keep consistency
value = np.array(value) if isinstance(value, (int, float)) else value
assert isinstance(value, np.ndarray), "value must be an int, float, or np.ndarray."
assert 0. not in value, "value must not be or contain any zero(s)."
if kappa == 0.:
        coupled_log_value = np.log(value)  # log(0) would give -inf, hence the assert above
else:
coupled_log_value = (1. / kappa) * (value**(kappa / (1. + dim*kappa)) - 1.)
return coupled_log_value
# #### Test with scalar --> np.array
X = 3.69369395
kappa = 0.
coupled_logarithm(X, kappa)
# #### Test with np.array
# 1000 linearly spaced numbers, starting from ALMOST 0
X = np.linspace(1e-6, 5, 1000)
y = {}
# +
fig, ax = plt.subplots(figsize=(12, 8))
ax.axvline(c='black', lw=1)
ax.axhline(c='black', lw=1)
cm = plt.get_cmap('PiYG')
kappa_values = [round(value, 1) for value in np.arange(-0.8, 0.9, 0.1)]
n = len(kappa_values)
ax.set_prop_cycle(color=['gold' if kappa==0 else cm(1.*i/n) for i, kappa in enumerate(kappa_values)])
plt.xlim(-5, 5)
plt.ylim(-5, 5)
for kappa in kappa_values:
y[kappa] = coupled_logarithm(X, kappa)
plt.plot(X, y[kappa], label=kappa)
plt.legend()
plt.show();
# -
# ### 2) CoupledExponential
# +
def coupled_exponential(value: [int, float, np.ndarray], kappa: float = 0.0, dim: int = 1) -> [float, np.ndarray]:
"""
Generalization of the exponential function.
Parameters
----------
value : [float, Any]
Input values in which the coupled exponential is applied to.
kappa : float,
Coupling parameter which modifies the coupled exponential function.
The default is 0.0.
dim : int, optional
The dimension of x, or rank if x is a tensor. The default is 1.
Returns
-------
float
The coupled exponential values.
"""
# convert number into np.ndarray to keep consistency
value = np.array(value) if isinstance(value, (int, float)) else value
assert isinstance(value, np.ndarray), "value must be an int, float, or np.ndarray."
# assert 0 not in value, "value must not be or contain any zero(s)."
assert isinstance(dim, int) and dim >= 0, "dim must be an integer greater than or equal to 0."
# check that -1/d <= kappa
assert -1/dim <= kappa, "kappa must be greater than or equal to -1/dim."
if kappa == 0:
coupled_exp_value = np.exp(value)
elif kappa > 0:
coupled_exp_value = (1 + kappa*value)**((1 + dim*kappa)/kappa)
# coupled_exp_value = (1 + kappa*value)**(1 / (kappa / (1 + dim*kappa)))
# the following is given that kappa < 0
else:
def _compact_support(value, kappa, dim):
# if (1 + kappa*value) >= tolerance:
if (1 + kappa*value) >= 0:
try:
# outside of tolerance
# if within the tolerance, then treat it as if zero
return (1 + kappa*value)**((1 + dim*kappa)/kappa)
# return (1 + kappa*value)**(1 / (kappa / (1 + dim*kappa)))
except ZeroDivisionError:
print("Skipped ZeroDivisionError at the following: " + \
f"value = {value}, kappa = {kappa}. Therefore," + \
f"(1+kappa*value) = {(1+kappa*value)}"
)
# elif ((1 + dim*kappa)/kappa) > tolerance:
# tolerance: start with machine precision
elif ((1 + dim*kappa)/kappa) > 0:
return 0.
else:
return float('inf')
compact_support = np.vectorize(_compact_support)
coupled_exp_value = compact_support(value, kappa, dim)
return coupled_exp_value
# -
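By construction, the coupled logarithm and coupled exponential should invert each other inside the support `1 + kappa*value > 0`. A minimal standalone check, using simplified copies of the two functions (without the input validation above):

```python
import numpy as np

# Minimal standalone copies of the two functions, for the check only.
def c_log(x, kappa, dim=1):
    if kappa == 0.:
        return np.log(x)
    return (1. / kappa) * (x ** (kappa / (1. + dim * kappa)) - 1.)

def c_exp(v, kappa, dim=1):
    if kappa == 0.:
        return np.exp(v)
    return (1. + kappa * v) ** ((1. + dim * kappa) / kappa)

# Inside the support (1 + kappa*v > 0) the two functions invert each other.
v = np.linspace(-1.5, 1.5, 7)
for kappa in (-0.4, 0.0, 0.6):
    assert np.allclose(c_log(c_exp(v, kappa), kappa), v)
```

The algebra behind this: `((1 + kappa*v)**((1 + dim*kappa)/kappa))**(kappa/(1 + dim*kappa))` collapses back to `1 + kappa*v`, so the coupled logarithm recovers `v` exactly.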
# #### Test with scalar --> np.array
X = 3.69369395
kappa = 0.5
coupled_exponential(X, kappa)
# #### Test with np.array
# 1000 linearly spaced numbers from -5 to 5
X = np.linspace(-5, 5, 1000)
y = {}
# +
fig, ax = plt.subplots(figsize=(8, 12))
ax.axvline(c='black', lw=1)
ax.axhline(c='black', lw=1)
cm = plt.get_cmap('PiYG')
kappa_values = [round(value, 1) for value in np.arange(-0.8, 0.9, 0.1)]
n = len(kappa_values)
ax.set_prop_cycle(color=['gold' if kappa==0 else cm(1.*i/n) for i, kappa in enumerate(kappa_values)])
plt.xlim(-1, 5)
plt.ylim(-1, 20)
for kappa in kappa_values:
y[kappa] = coupled_exponential(X, kappa)
plt.plot(X, y[kappa], label=kappa)
plt.legend()
plt.show();
# +
fig, ax = plt.subplots(figsize=(12, 8))
ax.axvline(c='black', lw=1)
ax.axhline(c='black', lw=1)
cm = plt.get_cmap('PiYG')
kappa_values = [round(value, 1) for value in np.arange(-0.8, 0.9, 0.1)]
n = len(kappa_values)
ax.set_prop_cycle(color=['gold' if kappa==0 else cm(1.*i/n) for i, kappa in enumerate(kappa_values)])
plt.xlim(-1, 3)
plt.ylim(-1, 3)
for kappa in kappa_values:
y[kappa] = 1/coupled_exponential(X, kappa)
plt.plot(X, y[kappa], label=kappa)
plt.legend()
plt.show();
# -
# Updated CoupledExponential tests
def coupled_exponential(value: [int, float, np.ndarray],
kappa: float = 0.0,
dim: int = 1
) -> [float, np.ndarray]:
"""
Generalization of the exponential function.
Parameters
----------
value : [float, np.ndarray]
Input values in which the coupled exponential is applied to.
kappa : float,
Coupling parameter which modifies the coupled exponential function.
The default is 0.0.
dim : int, optional
The dimension of x, or rank if x is a tensor. The default is 1.
Returns
-------
float
The coupled exponential values.
"""
# convert number into np.ndarray to keep consistency
value = np.array(value) if isinstance(value, (int, float)) else value
assert isinstance(value, np.ndarray), "value must be an int, float, or np.ndarray."
# assert 0 not in value, "value must not be or contain np.ndarray zero(s)."
assert isinstance(dim, int) and dim >= 0, "dim must be an integer greater than or equal to 0."
# check that -1/d <= kappa
assert -1/dim <= kappa, "kappa must be greater than or equal to -1/dim."
if kappa == 0:
coupled_exp_value = np.exp(value)
elif kappa > 0:
coupled_exp_value = (1 + kappa*value)**((1 + dim*kappa)/kappa)
# the following is given that kappa < 0
else:
def _compact_support(value, kappa, dim):
if (1 + kappa*value) >= 0:
try:
return (1 + kappa*value)**((1 + dim*kappa)/kappa)
except ZeroDivisionError:
print("Skipped ZeroDivisionError at the following: " + \
f"value = {value}, kappa = {kappa}. Therefore," + \
f"(1+kappa*value) = {(1+kappa*value)}"
)
elif ((1 + dim*kappa)/kappa) > 0:
return 0.
else:
return float('inf')
compact_support = np.vectorize(_compact_support)
coupled_exp_value = compact_support(value, kappa, dim)
return coupled_exp_value
def coupled_exponential_kenric(value: [int, float, np.ndarray],
kappa: float = 0.0,
dim: int = 1
) -> [float, np.ndarray]:
"""
Generalization of the exponential function.
Parameters
----------
value : [float, np.ndarray]
Input values in which the coupled exponential is applied to.
kappa : float,
Coupling parameter which modifies the coupled exponential function.
The default is 0.0.
dim : int, optional
The dimension of x, or rank if x is a tensor. The default is 1.
Returns
-------
float
The coupled exponential values.
"""
# convert number into np.ndarray to keep consistency
value = np.array(value) if isinstance(value, (int, float)) else value
assert isinstance(value, np.ndarray), "value must be an int, float, or np.ndarray."
# assert 0 not in value, "value must not be or contain np.ndarray zero(s)."
assert isinstance(dim, int) and dim >= 0, "dim must be an integer greater than or equal to 0."
# check that -1/d <= kappa
assert -1/dim <= kappa, "kappa must be greater than or equal to -1/dim."
if kappa == 0:
coupled_exp_value = np.exp(value)
elif kappa > 0: # KPN 4/13/21 adding logic for 1 + kappa*value <=0
if (1 + kappa*value) > 0:
return (1 + kappa*value)**((1 + dim*kappa)/kappa)
else: # KPN 4/13/21 since kappa > 0 (1+dim*kappa)/kappa > 0
return 0.
# the following is given that kappa < 0
else:
def _compact_support(value, kappa, dim):
if (1 + kappa*value) > 0: # KPN 4/13/21 removed equal sign; if = 0, then result is either 0 or inf
try:
return (1 + kappa*value)**((1 + dim*kappa)/kappa)
except ZeroDivisionError: # KPN 4/13/21 ZeroDivisionError may no longer be necessary
print("Skipped ZeroDivisionError at the following: " + \
f"value = {value}, kappa = {kappa}. Therefore," + \
f"(1+kappa*value) = {(1+kappa*value)}"
)
elif ((1 + dim*kappa)/kappa) > 0:
return 0.
else:
return float('inf')
compact_support = np.vectorize(_compact_support)
coupled_exp_value = compact_support(value, kappa, dim)
return coupled_exp_value
# Revised coupled exponential function
def coupled_exponential(value: [int, float, np.ndarray],
kappa: float = 0.0,
dim: int = 1
) -> [float, np.ndarray]:
"""
Generalization of the exponential function.
Parameters
----------
value : [float, np.ndarray]
Input values in which the coupled exponential is applied to.
kappa : float,
Coupling parameter which modifies the coupled exponential function.
The default is 0.0.
dim : int, optional
The dimension of x, or rank if x is a tensor. The default is 1.
Returns
-------
float
The coupled exponential values.
"""
# convert number into np.ndarray to keep consistency
value = np.array(value) if isinstance(value, (int, float)) else value
assert isinstance(value, np.ndarray), "value must be an int, float, or np.ndarray."
# assert 0 not in value, "value must not be or contain np.ndarray zero(s)."
assert isinstance(dim, int) and dim >= 0, "dim must be an integer greater than or equal to 0."
# check that -1/d <= kappa
assert -1/dim <= kappa, "kappa must be greater than or equal to -1/dim."
if kappa == 0:
# Does not have to be vectorized
coupled_exp_value = np.exp(value)
else:
        # inner function that takes in the value on a scalar-by-scalar basis
def _coupled_exponential_scalar(value, kappa, dim):
if (1 + kappa*value) > 0:
return (1 + kappa*value)**((1 + dim*kappa)/kappa)
elif ((1 + dim*kappa)/kappa) > 0:
return 0.
else:
return float('inf')
coupled_exp_value = np.vectorize(_coupled_exponential_scalar)(value, kappa, dim)
return coupled_exp_value
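Outside the support, the scalar rule above collapses to 0 or `inf` depending on the sign of the exponent `(1 + dim*kappa)/kappa`. A quick standalone check of that branch (the helper name is illustrative):

```python
# Standalone copy of the kappa != 0 scalar rule, for a quick check.
def cexp_scalar(value, kappa, dim=1):
    if (1 + kappa * value) > 0:
        return (1 + kappa * value) ** ((1 + dim * kappa) / kappa)
    elif ((1 + dim * kappa) / kappa) > 0:
        return 0.
    else:
        return float('inf')

# value=3, kappa=-0.5 lies outside the support since 1 + kappa*value = -0.5:
print(cexp_scalar(3., -0.5, dim=3))  # exponent (1 - 1.5)/(-0.5) = 1 > 0  -> 0.0
print(cexp_scalar(3., -0.5, dim=1))  # exponent (1 - 0.5)/(-0.5) = -1 < 0 -> inf
```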
n_sample = 10000
# n_sample of linearly spaced numbers, starting from -5
X = np.linspace(-5, 5, n_sample)
y = {}
# coupled_exponential
# +
fig, ax = plt.subplots(figsize=(8, 12))
ax.axvline(c='black', lw=1)
ax.axhline(c='black', lw=1)
cm = plt.get_cmap('PiYG')
kappa_values = [round(value, 1) for value in np.arange(-0.8, 0.9, 0.1)]
n = len(kappa_values)
ax.set_prop_cycle(color=['gold' if kappa==0 else cm(1.*i/n) for i, kappa in enumerate(kappa_values)])
plt.xlim(-5, 5)
plt.ylim(-5, 15)
for kappa in kappa_values:
y[kappa] = coupled_exponential(X, kappa)
plt.plot(X, y[kappa], label=kappa)
plt.legend()
plt.show();
# -
# coupled_exponential_kenric. Has an issue at kappa = 0.2
# +
fig, ax = plt.subplots(figsize=(8, 12))
ax.axvline(c='black', lw=1)
ax.axhline(c='black', lw=1)
cm = plt.get_cmap('PiYG')
kappa_values = [round(value, 1) for value in np.arange(-0.8, 0.9, 0.1)]
n = len(kappa_values)
ax.set_prop_cycle(color=['gold' if kappa==0 else cm(1.*i/n) for i, kappa in enumerate(kappa_values)])
plt.xlim(-5, 5)
plt.ylim(-5, 15)
for kappa in kappa_values:
print(kappa)
y[kappa] = coupled_exponential_kenric(X, kappa)
# y[kappa] = nsc_func.coupled_exponential(X, kappa)
plt.plot(X, y[kappa], label=kappa)
plt.legend()
plt.show();
# -
coupled_exponential_kenric(X, 0.2)
| workspace/kevin/function_scrapwork.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework 4
# ## Download the Module 4 homework here: [Homework4](https://sites.psu.edu/math452/files/2022/02/HWWeek4.pdf)
| _build/jupyter_execute/Module4/m4_hw.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
import scipy.stats as st
from scipy.stats import linregress
from api_keys import weather_api_key
from citipy import citipy
output_data= "cities.csv"
lat_range= (-90, 90)
lng_range= (-180, 180)
# +
lat_lngs= []
cities= []
lats= np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs= np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs= zip(lats, lngs)
for lat_lng in lat_lngs:
city= citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
if city not in cities:
cities.append(city)
len(cities)
# +
query_url= f"http://api.openweathermap.org/data/2.5/weather?appid={weather_api_key}&q="
city_two= []
cloudinesses= []
dates= []
humidities = []
lats= []
lngs= []
max_temps= []
wind_speeds= []
countries= []
count_one= 0
set_one= 1
for city in cities:
try:
response= requests.get(query_url + city.replace(" ","&")).json()
cloudinesses.append(response['clouds']['all'])
countries.append(response['sys']['country'])
dates.append(response['dt'])
humidities.append(response['main']['humidity'])
lats.append(response['coord']['lat'])
lngs.append(response['coord']['lon'])
max_temps.append(response['main']['temp_max'])
wind_speeds.append(response['wind']['speed'])
if count_one > 49:
count_one= 1
set_one += 1
city_two.append(city)
else:
count_one += 1
city_two.append(city)
print(f"Processing Record {count_one} of Set {set_one} | {city}")
except Exception:
print("City not found. Skipping...")
print("------------------------------")
print("Data Retrieval Complete")
print("------------------------------")
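The `time` module is imported above but never used. One common reason to use it here is pacing: free-tier weather APIs typically limit calls per minute. Below is a hedged sketch of a pacing helper; the `fetch` callable and the `delay` value are illustrative assumptions, not part of the original notebook:

```python
import time

def fetch_with_delay(cities, fetch, delay=1.1):
    """Call `fetch` for each city, pausing between requests.

    `fetch` stands in for the requests.get(...).json() call used above;
    `delay` (seconds) is a placeholder value chosen to stay under a
    hypothetical per-minute rate limit.
    """
    results = []
    for city in cities:
        results.append(fetch(city))
        time.sleep(delay)  # pause to avoid HTTP 429 (too many requests)
    return results
```

In the existing loop, the same effect comes from adding a `time.sleep(...)` call at the end of each iteration.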
# +
city_data= {"City":city_two,
"Cloudiness":cloudinesses,
"Country":countries,
"Date":dates,
"Humidity":humidities,
"Lat":lats,
"Lng":lngs,
"Max Temp":max_temps,
"Wind Speed":wind_speeds
}
city_weather_df= pd.DataFrame(city_data)
city_weather_df.to_csv(output_data, index = False)
city_weather_df.head()
# -
plt.scatter(city_weather_df["Lat"], city_weather_df["Max Temp"], edgecolors="black", facecolors="red")
plt.title("City Latitude vs. Max Temperature (05/28/19)")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
plt.ylim(-40,110)
plt.grid(b=True, which="major", axis="both", linestyle="-", color="lightgrey")
plt.savefig("City Latitude vs. Max Temperature.png")
plt.show()
plt.scatter(city_weather_df["Lat"], city_weather_df["Humidity"], edgecolors="black", facecolors="blue")
plt.title("City Latitude vs. Humidity (05/28/19)")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.ylim(0,300)
plt.grid(b=True, which="major", axis="both", linestyle="-", color="lightgrey")
plt.savefig("City Latitude vs. Humidity.png")
plt.show()
plt.scatter(city_weather_df["Lat"], city_weather_df["Cloudiness"], edgecolors="black", facecolors="skyblue")
plt.title("City Latitude vs. Cloudiness (05/28/19)")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.ylim(-5,105)
plt.grid(b=True, which="major", axis="both", linestyle="-", color="lightgrey")
plt.savefig("City Latitude vs. Cloudiness.png")
plt.show()
plt.scatter(city_weather_df["Lat"], city_weather_df["Wind Speed"], edgecolors="black", facecolors="green")
plt.title("City Latitude vs. Wind Speed (05/28/19)")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.ylim(-5,50)
plt.grid(b=True, which="major", axis="both", linestyle="-", color="lightgrey")
plt.savefig("City Latitude vs. Windspeed.png")
plt.show()
north_hemisphere= city_weather_df.loc[city_weather_df["Lat"] >= 0]
south_hemisphere= city_weather_df.loc[city_weather_df["Lat"] < 0]
# +
north_lat= north_hemisphere["Lat"]
north_max= north_hemisphere["Max Temp"]
print(f"The r-value is: {round(st.pearsonr(north_lat, north_max)[0],2)}")
(slope, intercept, rvalue, pvalue, stderr)= linregress(north_hemisphere["Lat"], north_hemisphere["Max Temp"])
regress_values= north_hemisphere["Lat"] * slope + intercept
line_eq= "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(north_hemisphere["Lat"], north_hemisphere["Max Temp"])
plt.title("Northern Hemisphere - Max Temp vs. Latitude Linear Regression")
plt.xlabel("Latitude")
plt.ylabel("Max Temp (F)")
plt.plot(north_hemisphere["Lat"],regress_values,"r-")
plt.annotate(line_eq, (0,0), fontsize=15, color="red")
print("There is a strong negative correlation between Max Temp and Latitude in the N. Hemisphere.")
# +
south_lat= south_hemisphere["Lat"]
south_max= south_hemisphere["Max Temp"]
print(f"The r-value is: {round(st.pearsonr(south_lat, south_max)[0],2)}")
(slope, intercept, rvalue, pvalue, stderr)= linregress(south_hemisphere["Lat"], south_hemisphere["Max Temp"])
regress_values= south_hemisphere["Lat"] * slope + intercept
line_eq= "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(south_hemisphere["Lat"], south_hemisphere["Max Temp"])
plt.title("Southern Hemisphere - Max Temp vs. Latitude Linear Regression")
plt.xlabel("Latitude")
plt.ylabel("Max Temp (F)")
plt.plot(south_hemisphere["Lat"],regress_values,"r-")
plt.annotate(line_eq, (0,0), fontsize=15, color="red")
print("There is a strong positive correlation between Max Temp and Latitude in the S. Hemisphere.")
# +
north_humidity= north_hemisphere["Humidity"]
print(f"The r-value is: {round(st.pearsonr(north_lat, north_humidity)[0],2)}")
(slope, intercept, rvalue, pvalue, stderr)= linregress(north_hemisphere["Lat"], north_hemisphere["Humidity"])
regress_values= north_hemisphere["Lat"] * slope + intercept
line_eq= "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(north_hemisphere["Lat"], north_hemisphere["Humidity"])
plt.title("Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.plot(north_hemisphere["Lat"], regress_values, "r-")
plt.annotate(line_eq, (40,15), fontsize=15, color="red")
print("There is only a weak correlation between Humidity and Latitude in the N. Hemisphere.")
# +
south_humidity= south_hemisphere["Humidity"]
print(f"The r-value is: {round(st.pearsonr(south_lat, south_humidity)[0],2)}")
(slope, intercept, rvalue, pvalue, stderr)= linregress(south_hemisphere["Lat"], south_hemisphere["Humidity"])
regress_values= south_hemisphere["Lat"] * slope + intercept
line_eq= "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(south_hemisphere["Lat"], south_hemisphere["Humidity"])
plt.title("Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.plot(south_hemisphere["Lat"], regress_values, "r-")
plt.annotate(line_eq,(40,15),fontsize=15,color="red")
print("There is only a weak correlation between Humidity and Latitude in the S. Hemisphere.")
# +
north_cloudiness= north_hemisphere["Cloudiness"]
print(f"The r-value is: {round(st.pearsonr(north_lat, north_cloudiness)[0],2)}")
(slope, intercept, rvalue, pvalue, stderr)= linregress(north_hemisphere["Lat"], north_hemisphere["Cloudiness"])
regress_values= north_hemisphere["Lat"] * slope + intercept
line_eq= "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(north_hemisphere["Lat"], north_hemisphere["Cloudiness"])
plt.title("Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.plot(north_hemisphere["Lat"], regress_values, "r-")
plt.annotate(line_eq,(40,15),fontsize=15,color="red")
print("There is not a strong correlation between Cloudiness and Latitude in the N. Hemisphere.")
# +
south_cloudiness= south_hemisphere["Cloudiness"]
print(f"The r-value is: {round(st.pearsonr(south_lat, south_cloudiness)[0],2)}")
(slope, intercept, rvalue, pvalue, stderr)= linregress(south_hemisphere["Lat"], south_hemisphere["Cloudiness"])
regress_values= south_hemisphere["Lat"] * slope + intercept
line_eq= "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(south_hemisphere["Lat"], south_hemisphere["Cloudiness"])
plt.title("Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.plot(south_hemisphere["Lat"], regress_values, "r-")
plt.annotate(line_eq, (40,15), fontsize=15, color="red")
print("There is not a strong correlation between Cloudiness and Latitude in the S. Hemisphere.")
# +
north_wind = north_hemisphere["Wind Speed"]
print(f"The r-squared is: {round(st.pearsonr(north_lat, north_wind)[0]**2, 2)}")
(slope, intercept, rvalue, pvalue, stderr) = linregress(north_hemisphere["Lat"], north_hemisphere["Wind Speed"])
regress_values = north_hemisphere["Lat"] * slope + intercept
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
plt.scatter(north_hemisphere["Lat"], north_hemisphere["Wind Speed"])
plt.title("Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.ylim(0, 35)
plt.plot(north_hemisphere["Lat"], regress_values, "r-")
plt.annotate(line_eq, (50, 25), fontsize=15, color="red")
print("There is no strong correlation between Wind Speed and Latitude in the Northern Hemisphere.")
# +
south_wind = south_hemisphere["Wind Speed"]
print(f"The r-squared is: {round(st.pearsonr(south_lat, south_wind)[0]**2, 2)}")
(slope, intercept, rvalue, pvalue, stderr) = linregress(south_hemisphere["Lat"], south_hemisphere["Wind Speed"])
regress_values = south_hemisphere["Lat"] * slope + intercept
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
plt.scatter(south_hemisphere["Lat"], south_hemisphere["Wind Speed"])
plt.title("Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.plot(south_hemisphere["Lat"], regress_values, "r-")
plt.annotate(line_eq, (-30, 25), fontsize=15, color="red")
print("There is no strong correlation between Wind Speed and Latitude in the Southern Hemisphere.")
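# The six regression cells above repeat the same scatter-plus-fit pattern. A small
# helper can consolidate it; this is a sketch — `plot_regression` and its argument
# names are not part of the original notebook.

```python
import matplotlib.pyplot as plt
from scipy.stats import linregress

def plot_regression(x, y, title, ylabel, annot_xy):
    """Scatter y vs. x, overlay the least-squares fit, and return the r-value."""
    slope, intercept, rvalue, pvalue, stderr = linregress(x, y)
    line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
    plt.scatter(x, y)
    plt.plot(x, slope * x + intercept, "r-")  # fitted regression line
    plt.title(title)
    plt.xlabel("Latitude")
    plt.ylabel(ylabel)
    plt.annotate(line_eq, annot_xy, fontsize=15, color="red")
    print(f"The r-squared is: {round(rvalue ** 2, 2)}")
    return rvalue
```

# Each cell above then reduces to one call, e.g.:
# plot_regression(south_hemisphere["Lat"], south_hemisphere["Wind Speed"],
#                 "Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression",
#                 "Wind Speed (mph)", (-30, 25))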
| WeatherPy/weatherpy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/kdh7979/Korean-SignLanguage-Interpreter/blob/main/mask.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="_mM4R9ackWdX" colab={"base_uri": "https://localhost:8080/"} outputId="1ea5c8f4-d08a-4bd7-bb2e-9919201a4be0"
# !pip install mediapipe
# + id="Tod84grzpdVo"
import cv2
import mediapipe as mp
from google.colab.patches import cv2_imshow
mp_drawing = mp.solutions.drawing_utils
mp_face_mesh = mp.solutions.face_mesh
# + id="F3k-XgUep3nF"
img_fg = cv2.imread(r"/content/Korean-SignLanguage-Interpreter/pictures/n_mask.png")  # mask image, shape (1411, 1920, 3)
img_fg = img_fg[250:1400, 0:1920]
img2gray = cv2.cvtColor(img_fg, cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(img2gray, 10, 255, cv2.THRESH_BINARY)
mask_inv = cv2.bitwise_not(mask)
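# The cell above builds a binary layer mask and its inverse from the mask PNG; the
# compositing step later keeps background pixels where `mask_inv` is set and
# mask-image pixels where `mask` is set. A minimal NumPy sketch of the same
# per-pixel logic on synthetic grayscale data (not the actual images):

```python
import numpy as np

# Synthetic grayscale "mask PNG" and ROI background, one pixel per entry.
fg = np.array([[0, 200], [180, 0]], dtype=np.uint8)
roi = np.array([[50, 60], [70, 80]], dtype=np.uint8)

# Threshold: pixels brighter than 10 belong to the foreground,
# mirroring cv2.threshold(img2gray, 10, 255, cv2.THRESH_BINARY).
mask = np.where(fg > 10, 255, 0).astype(np.uint8)
mask_inv = 255 - mask  # cv2.bitwise_not

# ANDing with a 255/0 mask keeps or zeroes each pixel; the two selections are
# disjoint, so plain addition cannot overflow and matches cv2.add here.
bg_part = roi & mask_inv   # cv2.bitwise_and(roi, roi, mask=mask_inv)
fg_part = fg & mask        # cv2.bitwise_and(img_fg, img_fg, mask=mask)
composited = bg_part + fg_part
print(composited)  # foreground replaces background wherever fg > 10
```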
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="0fp46fGMbtMa" outputId="ed660b3e-df15-4284-ba6f-94c4528ceb6d"
cv2_imshow(mask_inv)
# + id="KUw2xi8Ecgqm"
# For webcam input:
drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)
for i in range(1, 2):  # runs once; widen the range to batch-process more clips
cap = cv2.VideoCapture(r"/content/Korean-SignLanguage-Interpreter/pictures/KETI_SL_0000000001.avi")
with mp_face_mesh.FaceMesh(min_detection_confidence=0.5,min_tracking_confidence=0.5) as face_mesh:
while cap.isOpened():
success, image = cap.read()
if not success:
print("Ignoring empty camera frame.")
# If loading a video, use 'break' instead of 'continue'.
cv2.destroyAllWindows()
break
# Convert the BGR image to RGB (MediaPipe expects RGB input).
image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
# To improve performance, optionally mark the image as not writeable to
# pass by reference.
image.flags.writeable = False
results = face_mesh.process(image)
# Draw the face mesh annotations on the image.
image.flags.writeable = True
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
if results.multi_face_landmarks:
for face_landmarks in results.multi_face_landmarks:
mp_drawing.draw_landmarks(
image=image,
landmark_list=face_landmarks,
connections=mp_face_mesh.FACE_CONNECTIONS,  # renamed to FACEMESH_TESSELATION in newer mediapipe releases
landmark_drawing_spec=drawing_spec,
connection_drawing_spec=drawing_spec)
# Pixel coordinates of four face-mesh landmarks (frame assumed to be 1280x720)
top = int(results.multi_face_landmarks[0].landmark[10].y*720)
bottom = int(results.multi_face_landmarks[0].landmark[152].y*720)
left = int(results.multi_face_landmarks[0].landmark[234].x*1280) - 30
right = int(results.multi_face_landmarks[0].landmark[454].x*1280) + 30
# Choose the mask image size (lower half of the face, with padding)
h_fg = int((bottom - top)/2)+20
w_fg = right - left
# Resize the layer masks
mask = cv2.resize(mask, (w_fg, h_fg))
mask_inv = cv2.resize(mask_inv, (w_fg, h_fg))
# Resize the mask image
img_fg = cv2.resize(img_fg, (w_fg, h_fg))
# Extract the region of interest
roi = image[bottom-h_fg:bottom, left:right]
# Blend the mask image with the frame and store the result in roi
roi = cv2.bitwise_and(roi, roi, mask = mask_inv)
img_fg = cv2.bitwise_and(img_fg, img_fg, mask = mask)
dst = cv2.add(roi, img_fg)
# Paste the blended ROI back into the frame
image[bottom-h_fg:bottom, left:right] = dst
# Save the output video (disabled)
#fps = cap.get(cv2.CAP_PROP_FPS)
#fourcc = cv2.VideoWriter_fourcc(*'DIVH')
#out = cv2.VideoWriter('output_1.avi', fourcc, 30, (1280, 720))
cv2_imshow(image)
if cv2.waitKey(1) & 0xFF == 27:
cv2.destroyAllWindows()
break
cap.release()
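# The landmark arithmetic in the loop above hard-codes a 1280x720 frame. MediaPipe
# landmarks carry x/y normalized to [0, 1] relative to frame width and height, so
# the conversion can be made explicit with a small helper (hypothetical — not part
# of the original notebook):

```python
def landmark_to_pixel(landmark, frame_w=1280, frame_h=720):
    """Convert a normalized MediaPipe landmark to integer pixel coordinates."""
    return int(landmark.x * frame_w), int(landmark.y * frame_h)

# e.g. chin pixel position from landmark 152:
# cx, cy = landmark_to_pixel(results.multi_face_landmarks[0].landmark[152])
```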
# + colab={"base_uri": "https://localhost:8080/"} id="xf-eLg8Fad9n" outputId="f1f9d690-1b21-40b7-b77a-c7fedae2569f"
# !git clone https://github.com/kdh7979/Korean-SignLanguage-Interpreter.git
# + id="ixdkjxKmae8m"
| mask.ipynb |