# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:pyvizenv] *
# language: python
# name: conda-env-pyvizenv-py
# ---
# # San Francisco Housing Cost Analysis
#
# In this assignment, you will perform fundamental analysis for the San Francisco housing market to allow potential real estate investors to choose rental investment properties.
# +
# imports
import panel as pn
pn.extension('plotly')
import plotly.express as px
import pandas as pd
import hvplot.pandas
import matplotlib.pyplot as plt
import numpy as np
import os
from pathlib import Path
from dotenv import load_dotenv
import warnings
warnings.filterwarnings('ignore')
# -
# Read the Mapbox API key
load_dotenv()
map_box_api = os.getenv("MAPBOX")
# ## Load Data
# Read the census data into a Pandas DataFrame
file_path = Path("Data/sfo_neighborhoods_census_data.csv")
sfo_data = pd.read_csv(file_path, index_col="year")
sfo_data.head()
# - - -
# ## Housing Units Per Year
#
# In this section, you will calculate the number of housing units per year and visualize the results as a bar chart using the Pandas plot function.
#
# **Hint:** Use the Pandas `groupby` function.
#
# **Optional challenge:** Use the min, max, and std to scale the y limits of the chart.
#
#
# Calculate the mean number of housing units per year (hint: use groupby)
units_per_year = sfo_data.groupby('year').mean()['housing_units']
units_per_year
# Save the dataframe as a csv file
units_per_year.to_csv('units_per_year.csv')
# +
# Use the Pandas plot function to plot the average housing units per year.
# Note: You will need to manually adjust the y limit of the chart using the min and max values from above.
units_per_year.plot.bar(title='Housing Units in San Francisco from 2010 to 2016', figsize=(15,10))
plt.ylabel('Housing Units')
# Optional Challenge: Use the min, max, and std to scale the y limits of the chart
std = units_per_year.std()
minimum = units_per_year.min() - std
maximum = units_per_year.max() + std
units_per_year.plot.bar(title='Housing Units in San Francisco from 2010 to 2016', figsize=(15,10))
plt.ylabel('Housing Units')
plt.ylim(minimum, maximum)
# -
# - - -
# ## Average Housing Costs in San Francisco Per Year
#
# In this section, you will calculate the average monthly rent and the average price per square foot for each year. An investor may wish to better understand the sales price of the rental property over time. For example, a customer will want to know if they should expect an increase or decrease in the property value over time so they can determine how long to hold the rental property. Plot the results as two line charts.
#
# **Optional challenge:** Plot each line chart in a different color.
# Calculate the average sale price per square foot and average gross rent
avg_price_sqr_foot = pd.DataFrame(sfo_data.groupby(['year']).mean()['sale_price_sqr_foot'])
avg_price_sqr_foot.head()
# +
# Create two line charts, one to plot the average sale price per square foot and another for average monthly rent
# Line chart for average sale price per square foot
avg_price_sqr_foot.plot(figsize=(15,10), title='Average Price Per SqFt by Year', color='purple')
# Line chart for average monthly rent
avg_gross_rent = pd.DataFrame(sfo_data.groupby(['year']).mean()['gross_rent'])
avg_gross_rent.plot(figsize=(15,10), title='Average Gross Rent Per Year', color='red')
# -
# - - -
# ## Average Prices by Neighborhood
#
# In this section, you will use hvplot to create two interactive visualizations of average prices with a dropdown selector for the neighborhood. The first visualization will be a line plot showing the trend of average price per square foot over time for each neighborhood. The second will be a line plot showing the trend of average monthly rent over time for each neighborhood.
#
# **Hint:** It will be easier to create a new DataFrame by grouping the data and calculating the mean prices for each year and neighborhood.
# Group by year and neighborhood and then create a new dataframe of the mean values
df_neighborhood = sfo_data.groupby([sfo_data.index, 'neighborhood']).mean()
df_neighborhood.reset_index(inplace=True)
df_neighborhood.head()
# Use hvplot to create an interactive line chart of the average price per sq ft.
# The plot should have a dropdown selector for the neighborhood
df_neighborhood.hvplot.line(
    'year',
    'sale_price_sqr_foot',
    xlabel='year',
    ylabel='Average Sale Price Per Square Foot',
    groupby='neighborhood'
)
# Use hvplot to create an interactive line chart of the average monthly rent.
# The plot should have a dropdown selector for the neighborhood
df_neighborhood.hvplot.line(
    'year',
    'gross_rent',
    xlabel='year',
    ylabel='Average Monthly Rent',
    groupby='neighborhood'
)
# ## The Top 10 Most Expensive Neighborhoods
#
# In this section, you will need to calculate the mean sale price per square foot for each neighborhood and then sort the values to obtain the top 10 most expensive neighborhoods on average. Plot the results as a bar chart.
# Getting the data from the top 10 expensive neighborhoods to own
df_expensive_hoods = sfo_data.groupby(by='neighborhood').mean()
df_expensive_hoods = df_expensive_hoods.sort_values(by='sale_price_sqr_foot', ascending=False).head(10)
df_expensive_hoods = df_expensive_hoods.reset_index()
df_expensive_hoods
# Plotting the data from the top 10 expensive neighborhoods
df_expensive_hoods.hvplot.bar(
    'neighborhood',
    'sale_price_sqr_foot',
    title='Top 10 Most Expensive Neighborhoods',
    xlabel='Neighborhood',
    ylabel='Average Sale Price Per Square Foot'
)
# - - -
# ## Comparing cost to purchase versus rental income
#
# In this section, you will use `hvplot` to create an interactive visualization with a dropdown selector for the neighborhood. This visualization will feature a side-by-side comparison of average price per square foot versus average monthly rent by year.
#
# **Hint:** Use the `hvplot` parameter, `groupby`, to create a dropdown selector for the neighborhood.
# Fetch the previously generated DataFrame that was grouped by year and neighborhood
df_neighborhood = sfo_data.groupby([sfo_data.index, 'neighborhood']).mean()
df_neighborhood.reset_index(inplace=True)
df_neighborhood.head()
# Plot average gross rent and sale price per square foot side by side, with a dropdown per neighborhood
df_neighborhood.hvplot.bar(
    x='year',
    y=['gross_rent', 'sale_price_sqr_foot'],
    xlabel='Year',
    ylabel='Gross Rent / Sale Price Per Square Foot',
    groupby='neighborhood',
    title='Cost to Purchase vs. Rental Income per Neighborhood'
)
# - - -
# ## Neighborhood Map
#
# In this section, you will read in neighborhoods location data and build an interactive map with the average house value per neighborhood. Use a `scatter_mapbox` from Plotly express to create the visualization. Remember, you will need your Mapbox API key for this.
# ### Load Location Data
# Load neighborhoods coordinates data
coordinates_path = Path("Data/neighborhoods_coordinates.csv")
df_coordinates = pd.read_csv(coordinates_path)
df_coordinates.rename(columns={'Neighborhood': 'neighborhood'}, inplace=True)
df_coordinates.head()
# ### Data Preparation
#
# You will need to join the location data with the mean values per neighborhood.
#
# 1. Calculate the mean values for each neighborhood.
#
# 2. Join the average values with the neighborhood locations.
# Calculate the mean values for each neighborhood
neighborhood_mean = sfo_data.groupby('neighborhood').mean()
neighborhood_mean = neighborhood_mean.sort_values(by='sale_price_sqr_foot', ascending=False)
neighborhood_mean.reset_index(inplace=True)
neighborhood_mean.head()
# Join the average values with the neighborhood locations.
# Merge on the neighborhood name: the two frames are sorted differently,
# so a positional concat would pair prices with the wrong coordinates.
mean_hood_locations = pd.merge(
    df_coordinates,
    neighborhood_mean[['neighborhood', 'sale_price_sqr_foot', 'housing_units', 'gross_rent']],
    on='neighborhood'
).dropna()
mean_hood_locations.head()
# ### Mapbox Visualization
#
# Plot the average values per neighborhood using a Plotly express `scatter_mapbox` visualization.
# Set the mapbox access token
px.set_mapbox_access_token(map_box_api)
# Create a scatter mapbox to analyze neighborhood info
# (named price_map to avoid shadowing the built-in `map`)
price_map = px.scatter_mapbox(
    mean_hood_locations,
    lat='Lat',
    lon='Lon',
    size='sale_price_sqr_foot',
    color='gross_rent',
    title='Average Sale Price Per Square Foot and Gross Rent in San Francisco'
)
price_map.show()
# - - -
# ## Cost Analysis - Optional Challenge
#
# In this section, you will use Plotly express to create visualizations that investors can use to interactively filter and explore various factors related to the house value of San Francisco's neighborhoods.
#
# ### Create a DataFrame showing the most expensive neighborhoods in San Francisco by year
# Fetch the data from all expensive neighborhoods per year.
df_expensive_neighborhoods_per_year = df_neighborhood[df_neighborhood["neighborhood"].isin(df_expensive_hoods["neighborhood"])]
df_expensive_neighborhoods_per_year.head()
# ### Create a parallel coordinates plot and parallel categories plot of most expensive neighborhoods in San Francisco per year
#
# Parallel Categories Plot
px.parallel_categories(
    df_expensive_neighborhoods_per_year,
    dimensions=['neighborhood', 'sale_price_sqr_foot', 'housing_units', 'gross_rent'],
    color='sale_price_sqr_foot',
    labels={
        'neighborhood': 'Neighborhood',
        'sale_price_sqr_foot': 'Sale Price per Sq Ft',
        'housing_units': 'Housing Units',
        'gross_rent': 'Gross Rent'},
)
# Parallel Coordinates Plot
px.parallel_coordinates(df_expensive_neighborhoods_per_year, color='sale_price_sqr_foot')
# ### Create a sunburst chart to conduct a costs analysis of most expensive neighborhoods in San Francisco per year
# Sunburst Plot
px.sunburst(df_expensive_neighborhoods_per_year, path=['year', 'neighborhood'], color='gross_rent')
# Source notebook: Starter_Code/rental_analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <small><small><i>
# All the IPython Notebooks in this lecture series by Dr. <NAME> are available @ **[GitHub](https://github.com/milaan9/02_Python_Datatypes/tree/main/002_Python_String_Methods)**
# </i></small></small>
# # Python String `isnumeric()`
#
# The string **`isnumeric()`** method returns **`True`** if all characters in a string are numeric characters. If not, it returns **`False`**.
#
# A numeric character has the following properties:
#
# * **`Numeric_Type=Decimal`**
# * **`Numeric_Type=Digit`**
# * **`Numeric_Type=Numeric`**
#
# In Python, decimal characters (like: 0, 1, 2..), digits (like: subscript, superscript), and characters having Unicode numeric value property (like: fraction, roman numerals, currency numerators) are all considered numeric characters.
#
# You can write digit and numeric characters using Unicode escapes. For example:
#
# ```python
# >>> s = '\u00BD'   # same as s = '½'
# ```
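# Python actually has three related predicates — `isdecimal()`, `isdigit()`, and `isnumeric()` — that accept successively larger character classes (decimal ⊆ digit ⊆ numeric). A quick comparison on the character kinds described above:

```python
for ch in ['7', '\u00B2', '\u00BD']:  # '7', '²' (superscript two), '½' (vulgar fraction)
    print(ch, ch.isdecimal(), ch.isdigit(), ch.isnumeric())
# 7 True True True
# ² False True True
# ½ False False True
```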
#
# **Syntax**:
#
# ```python
# string.isnumeric()
# ```
# + [markdown] heading_collapsed=true
# ## `isnumeric()` Parameters
#
# The **`isnumeric()`** method doesn't take any parameters.
# -
# ## Return Value from `isnumeric()`
#
# The **`isnumeric()`** returns:
#
# * **`True`** if all characters in the string are numeric characters.
# * **`False`** if at least one character is not a numeric character.
# +
# Example 1: Working of isnumeric()
s = '1242323'
print(s.isnumeric())   # True

# s = '²3455'
s = '\u00B23455'
print(s.isnumeric())   # True

# s = '½'
s = '\u00BD'
print(s.isnumeric())   # True

s = 'python12'
print(s.isnumeric())   # False
# +
# Example 2: How to use isnumeric()?
# s = '²3455'
s = '\u00B23455'
if s.isnumeric():
    print('All characters are numeric.')
else:
    print('Not all characters are numeric.')
# +
# Example 3: (using `s` rather than shadowing the built-in name `str`)
s = "helloworld99"
print(s.isnumeric())   # False

s = "10121997"
print(s.isnumeric())   # True
# -
# Source notebook: 002_Python_String_Methods/017_Python_String_isnumeric().ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
import pandas as pd
import time
from functions import plots
from functions import finite_volumes as fv
from functions import finite_volumes_split as fvs
from functions import finite_volumes_par as fvp
from functions import neural_network as nn
import mnist
# # IMAGE INPAINTING WITH FLUID DYNAMICS
# Image inpainting aims to remove damage from an image. There are various techniques for image inpainting; here we focus on solving a fluid-type PDE known as the Cahn-Hilliard equation.
#
# The three take-home messages from this notebook are that:
#
# 1. Image inpainting can be solved with efficient and parallelizable finite-volume schemes
# 2. The classification accuracy of neural networks is affected by the presence of damage
# 3. The application of image inpainting in damaged images improves their classification accuracy
#
# <p> </p>
#
# #### Damaged image:
# <img src="images/damage_23.png" style="width:300px;height:250px;" >
#
# #### Restored image:
# <img src="images/inpainting_23.png" style="width:300px;height:250px;" >
# As an example we take the MNIST dataset, which consists of binary images of handwritten digits:
test_images = mnist.test_images() # Load MNIST test set
test_images = test_images.reshape((-1,784)) # Flatten
test_images = (test_images / 255) *2-1 # Normalize between -1 and 1
example = test_images[0,:] # Select 1 image
plots.plot_image(example) # Plot image
# The MNIST dataset is corrupted by adding different types of damage to it:
# +
intensity = 0.5 # Select % of damaged pixels
damage = np.random.choice(np.arange(example.size), replace=False,
                          size=int(example.size * intensity)) # Create random damage
damaged_example = example.copy() # Generate damaged example
damaged_example[damage] = 0 # Turn damaged pixels to 0
plots.plot_image(damaged_example) # Plot image
# -
# ## Finite volumes for image inpainting
# With image inpainting we aim to recover the original image. There are various methods to conduct image inpainting, and here I solve a modified Cahn-Hilliard equation via finite-volume schemes:
#
#
# $$
# \frac{\partial \phi (x,t)}{\partial t}= -\nabla^{2} \left(\epsilon^2 \nabla^{2} \phi - H'(\phi) \right) + \lambda(x)\left(\phi (x,t=0) - \phi\right)
# $$
#
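# As an aside, the dynamics above can be sketched in 1-D with an explicit Euler step and periodic finite differences. This is purely illustrative — the notebook's `fv` module uses a 2-D finite-volume discretization, and the parameters here (`eps`, `lam`, `dt`) plus the double-well potential H(φ) = (φ² − 1)²/4 are assumptions, not the module's values:

```python
import numpy as np

def laplacian(u, dx=1.0):
    # periodic second difference
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

def inpaint_step(phi, phi0, damage_mask, eps=1.0, lam=100.0, dt=1e-4):
    """One explicit step of d(phi)/dt = -lap(eps^2 lap(phi) - H'(phi)) + lam*(phi0 - phi).

    The fidelity weight lam is switched off on damaged pixels (no data there),
    so known pixels stay anchored while the damage is diffused over.
    """
    h_prime = phi**3 - phi                   # H'(phi) for H(phi) = (phi^2 - 1)^2 / 4
    mu = eps**2 * laplacian(phi) - h_prime   # the inner operator of the equation
    fidelity = lam * (phi0 - phi) * (~damage_mask)
    return phi + dt * (-laplacian(mu) + fidelity)

phi0 = np.sign(np.sin(np.linspace(0, 4 * np.pi, 64)))  # a binary 1-D "image"
mask = np.zeros(64, dtype=bool)
mask[20:30] = True                                     # damaged region
phi = np.where(mask, 0.0, phi0)
for _ in range(2000):
    phi = inpaint_step(phi, phi0, mask)
```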
# As a baseline let's solve this equation with a simple finite-volume scheme:
start = time.time() # Start time
restored_example = fv.temporal_loop(damaged_example, damage) # Run finite-volume scheme
print("Total time: {:.2f}".format(time.time()-start)) # Print spent time
plots.plot_image(restored_example) # Plot image
# Let's compare the restored image with respect to the original image:
#
plots.plot_3images(example, damaged_example, restored_example) # Plot 3 images
# The computational cost of finite-volume scheme can be reduced by:
#
# 1. Applying a dimensional-splitting technique and solving row by row and column by column
# 2. Parallelizing the code and solving rows/columns simultaneously
#
# The simple finite-volume scheme has taken 40s to run. Let's compare it with the dimensional-splitting code:
start = time.time() # Start time
restored_example = fvs.temporal_loop_split(damaged_example, damage) # Run finite-volume scheme
print("Total time: {:.2f}".format(time.time()-start)) # Print spent time
plots.plot_image(restored_example) # Plot image
# By dimensionally splitting the code we have reduced the computational time from 40s to 8s!
#
# Can we reduce that time by parallelizing?
num_proc = 8 # Number of processors
start = time.time() # Start time
restored_example = fvp.temporal_loop_par(damaged_example, damage, num_proc) # Run finite-volume scheme
print("Total time: {:.2f}".format(time.time()-start)) # Print spent time
plots.plot_image(restored_example) # Plot image
# The parallel code takes 15 seconds, which is higher than the non-parallel one. Parallelizing does not reduce the time here since MNIST images are only 28x28; for high-dimensional images, however, it has a clear benefit.
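# The row-parallel pattern can be sketched generically. Note this is an assumption-laden illustration, not the `fvp` implementation: `sweep_row` is a stand-in for the real 1-D finite-volume sweep, and threads are used here (NumPy releases the GIL inside array ops) where the notebook's code uses processes:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def sweep_row(row):
    # stand-in for a 1-D sweep along one row of the image
    return row - 0.1 * np.gradient(np.gradient(row))

def parallel_sweep(image, num_proc=4):
    # rows are independent within one splitting step, so they can run concurrently
    with ThreadPoolExecutor(max_workers=num_proc) as ex:
        return np.array(list(ex.map(sweep_row, image)))

out = parallel_sweep(np.ones((4, 16)))  # gradient of a constant row is zero
```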
# ## Neural network for classification
# 
# The neural network is trained with the undamaged training dataset. Then we compare its accuracy for the test images with and without damage:
# +
train_images = mnist.train_images() # Load training set
train_labels = mnist.train_labels() # Load training labels
train_images = (train_images / 255) *2-1 # Normalize between -1 and 1
train_images = train_images.reshape((-1,784)) # Flatten
model, history = nn.training(train_images, train_labels) # Train the neural network
plots.loss_acc_plots(history) # Plot loss and accuracy
test_labels = mnist.test_labels() # Load test labels
print("Validation of undamaged test set:")
test_loss, test_accuracy = model.evaluate(test_images, to_categorical(test_labels),
verbose=2) # Print test loss and acc
# -
# The accuracy for the test dataset is quite high: 97%. This accuracy drops as we include damage in the test images. For instance, with an intensity of 80% the accuracy is 55%. Can we recover the accuracy by firstly applying image inpainting?
# ## Image inpainting prior to classifying damaged images
# Let's select a group of 5 images to add damage:
# +
n_images = 5 # Number of images
indices_images = range(5) # Select indices
examples = test_images[indices_images,:].copy() # Choose examples from test set
intensity = 0.8 # Damage intensity
# damages = np.zeros((len(indices_images), int(examples.shape[1] * intensity)), dtype=int) # Instantiate damage matrices
damaged_examples = examples.copy() # Instantiate damaged examples
damages = np.load("data/damages.npy") # Load a previously saved damage matrix
for i in range(len(indices_images)): # Loop over examples to introduce damage
    # damages[i, :] = np.random.choice(np.arange(examples.shape[1]), replace=False,
    #                                  size=int(examples.shape[1] * intensity)) # Choose random damage
    damaged_examples[i, damages[i, :]] = 0 # Turn damaged pixels to 0
plots.plot_image(damaged_examples[1,:]) # Plot one of the damaged examples
# -
# We proceed to restore those 5 images:
# +
restored_examples = np.zeros(examples.shape) # Instantiate restored examples
for i in range(n_images): # Loop over damaged images
    restored_examples[i, :] = fvs.temporal_loop_split(
        damaged_examples[i, :], damages[i, :])
plots.plot_3images(examples[1,:], damaged_examples[1,:], restored_examples[1,:])
# -
# We can now compare the ground truth with the predicted labels for the damaged and restored images:
# +
predictions_damaged = np.argmax(model.predict(damaged_examples), axis=1)
predictions_restored = np.argmax(model.predict(restored_examples), axis=1)
print("Ground truth: ", test_labels[indices_images])
print("Damaged images: ", predictions_damaged)
print("Restored images: ", predictions_restored)
# -
# ## Final remarks
# The three take-home messages from this notebook are that:
#
# 1. Image inpainting can be solved with efficient and parallelizable finite-volume schemes
# 2. The classification accuracy of neural networks is affected by the presence of damage
# 3. The application of image inpainting in damaged images improves their classification accuracy
# Source notebook: Example.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/NightMachinary/soal_playground/blob/master/sk_playground.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="2mwPOWLFjGB8" colab={"base_uri": "https://localhost:8080/", "height": 17} outputId="beea2ea5-a7f7-476b-cb85-343a4f3ba1c3"
from IPython.display import HTML, display
def set_css():
    display(HTML('''
    <style>
    pre {
        white-space: pre-wrap;
    }
    </style>
    '''))
get_ipython().events.register('pre_run_cell', set_css)
# + id="tvs0BM0MmuS4"
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="PHRgm-mqkC--" outputId="0dce6675-76fd-493c-fde1-e404b3761e1c"
from notebook.services.config import ConfigManager
c = ConfigManager()
c.update('notebook', {"CodeCell": {"cm_config": {"lineNumbers": False, "lineWrapping": True}}})
# + [markdown] id="IM-YSuEGTFe8"
# # bootstrap
# + id="x0_rkJ36UsxP"
import numpy as np
import pandas
pd = pandas
from time import time, sleep
import concurrent
import gc
# + colab={"base_uri": "https://localhost:8080/"} id="P0jFsBdw2iAu" outputId="b897bbe5-4f57-4235-891e-f4d2fb103fb5"
# ! apt install -y unzip aria2 ncdu htop
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="0JAAwaSXpBzd" outputId="2f2b6e63-a309-49fc-dfab-ae9c22691c50"
# !pip3 install -U scikit-learn fastai
# !pip3 install -U skorch
# !pip install -U memory_profiler
# scikit-neuralnetwork
# + id="FLY-STd83hwB"
# %load_ext memory_profiler
# from memory_profiler import profile
#: did not work on IPython
import memory_profiler as mp
# + id="qSU-zEHiqFKV"
from sklearn import datasets
# + id="8x2-1qhGtfp7"
from fastai.torch_core import show_image, show_images
# + id="CCQfrh4r8KiC"
import matplotlib.pyplot as plt
# %matplotlib inline
# + id="1JYv0UpD1Za2"
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn import metrics
from sklearn.model_selection import train_test_split
#: https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
#: https://scikit-learn.org/0.16/modules/generated/sklearn.cross_validation.train_test_split.html
from sklearn.preprocessing import MinMaxScaler
#: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import accuracy_score
# + [markdown] id="Ctl5KzABS8GR"
# # digits dataset
# + colab={"base_uri": "https://localhost:8080/"} id="8r8udutxqPnV" outputId="f534835b-dbba-4298-a77d-430511c647c9"
iris = datasets.load_iris()
iris
# + colab={"base_uri": "https://localhost:8080/"} id="yRtEs5_Tqzzl" outputId="e0a7a9ab-ff01-41f5-8c47-813bddf92db9"
print(iris.DESCR)
# + colab={"base_uri": "https://localhost:8080/"} id="J0pJsOklrUri" outputId="d7199fe7-c937-46aa-fffa-0ffb7c862066"
digits = datasets.load_digits()
print(digits.DESCR)
# + colab={"base_uri": "https://localhost:8080/"} id="GriP0cnErnye" outputId="12ba26da-0d82-4033-e1d8-70abc7305d5c"
print(f"digits_x: {digits.data.shape}, digits_y: {digits.target.shape}")
# + colab={"base_uri": "https://localhost:8080/"} id="qf0WCCK-tWN7" outputId="788e5472-2fa7-4485-f53d-21c4ee54f82c"
digits.images[0]
# + colab={"base_uri": "https://localhost:8080/", "height": 102} id="qMRVemKjue4I" outputId="5ad66627-3cba-4549-ca74-669d812223cf"
show_image(digits.images[0])
# + colab={"base_uri": "https://localhost:8080/", "height": 683} id="BR6VwiYMurjg" outputId="8263eab4-db90-4fed-bb54-d40982b4c1cf"
show_images(digits.images, ncols=5, nrows=4)
# + [markdown] id="cs5jC0FC1YM_"
# ## SVC
# + id="fnAK6jIu1iwC"
##
# clf = SVC()
##
clf = KMeans(n_clusters=10, max_iter=10_000)
##
# + colab={"base_uri": "https://localhost:8080/"} id="4RunldO35Ajl" outputId="c50be19f-c715-4d2e-907a-9ed22c8b779c"
##
# train_x = digits.data[0:1000]
# train_y = digits.target[0:1000]
# val_x = digits.data[1000:]
# val_y = digits.target[1000:]
##
train_x, val_x, train_y, val_y = \
    train_test_split(digits.data, digits.target, test_size=0.25)
# normalize the data via scaling
t = MinMaxScaler()
t.fit(train_x)
train_x = t.transform(train_x)
val_x = t.transform(val_x)
##
train_x.shape
# + colab={"base_uri": "https://localhost:8080/"} id="x42KB22o1xrC" outputId="63753a16-1494-4863-8c76-fef3b6634599"
clf.fit(train_x, train_y)
# + colab={"base_uri": "https://localhost:8080/"} id="iIQuLrZ75Q1V" outputId="0503073d-1d81-4901-aa07-674578ba1e6e"
preds = clf.predict(val_x)
preds
# + colab={"base_uri": "https://localhost:8080/"} id="99BSEKqIiS5m" outputId="cd056c04-d2fc-4a9c-f503-1a8a7ffdfb1b"
val_y
# + colab={"base_uri": "https://localhost:8080/"} id="JitkdSeQiaMn" outputId="ec44d8d2-0240-4895-fd82-5847f6d88919"
permu = {
    6: 1,
    9: 4,
    1: 0,
    5: 5,
    7: 3,
    2: 6,
    3: 9,
    4: 7,
    8: 8,
    0: 2,
}
preds_permuted = np.array([permu[p] for p in preds])
sum(preds_permuted == val_y) / len(preds)
# + colab={"base_uri": "https://localhost:8080/"} id="YDpqTWGTkgo7" outputId="e543b764-3307-474f-e9b9-29e4ce56ff77"
metrics.completeness_score(val_y, preds)
# + colab={"base_uri": "https://localhost:8080/"} id="QzSWQHqqn7qs" outputId="55ec7e9c-926f-48d2-d353-ee9118eeae65"
metrics.homogeneity_score(val_y, preds)
# + colab={"base_uri": "https://localhost:8080/"} id="O1kCyksI5b5j" outputId="6d610649-b692-4aac-8d3f-9b7324acdbff"
sum(preds == val_y)/len(preds)
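# The `permu` mapping above was matched by eye. A sketch of automating it — `best_label_map` is a hypothetical helper, not a scikit-learn function — by brute force over label permutations; this is only feasible for a handful of classes (k! permutations), and for larger k the Hungarian algorithm on the contingency matrix is the usual tool:

```python
import itertools
import numpy as np

def best_label_map(cluster_ids, true_labels, n_classes):
    # contingency[c, k] counts how often cluster c co-occurs with true class k
    contingency = np.zeros((n_classes, n_classes), dtype=int)
    for c, k in zip(cluster_ids, true_labels):
        contingency[c, k] += 1
    best_perm, best_hits = None, -1
    for perm in itertools.permutations(range(n_classes)):
        hits = sum(contingency[c, perm[c]] for c in range(n_classes))
        if hits > best_hits:
            best_perm, best_hits = perm, hits
    return dict(enumerate(best_perm)), best_hits / len(cluster_ids)

# Tiny worked example: cluster ids are a shuffle of the true labels.
true = np.array([0, 0, 1, 1, 2, 2])
clusters = np.array([2, 2, 0, 0, 1, 1])
mapping, acc = best_label_map(clusters, true, 3)
# mapping == {0: 1, 1: 2, 2: 0}, acc == 1.0
```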
# + [markdown] id="ljKdywrTTrn3"
# # reddit_sample
# + [markdown] id="ResTfCkD9MKd"
# ## loading the data
# + colab={"base_uri": "https://localhost:8080/"} id="f1tsr3FY-Coy" outputId="1fbf270f-1d8a-4194-c682-b4046e71d988"
# !aria2c --allow-overwrite=true 'https://archive.ics.uci.edu/ml/machine-learning-databases/00441/repeat_consumption_data.zip'
# !yes | unzip repeat_consumption_data.zip
# + id="E0Gf8hj32LvP"
train_path = "/content/data/reddit_sample/train.csv"
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="-sVo_2Hz55Fd" outputId="1d037d9a-5c8d-4e60-bfac-74a6eda3bb62"
def train_load():
    train_raw = pd.read_csv(train_path, names=['x', 'y', 'z'])
    display(train_raw)
    #: @todo1 use sparse matrices
    train_mat = train_raw.pivot(index="x", columns="y", values="z")
    # display(train_mat.loc[0, 741])
    display(train_mat)
    train_np = train_mat.to_numpy(na_value=0.)
    return train_np #, train_mat

train_np = train_load()
display(train_np)
gc.collect()
# + colab={"base_uri": "https://localhost:8080/"} id="U1Ypm8zW9ny6" outputId="e9af03d0-115a-47f7-8633-f3319ea7d2e5"
train_np.shape
# + colab={"base_uri": "https://localhost:8080/"} id="74In2i5w8yYh" outputId="fb5fa0c6-6aad-4638-8cf2-079b55bbc6fe"
train_np[0][train_np[0] != 0]  # nonzero entries of the first row (a single boolean mask, not a nested list)
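# The `@todo1` in `train_load` notes that the pivoted matrix should be sparse. A sketch (assuming `scipy` is available and the columns are named `x`, `y`, `z` as above) of building the user-item matrix directly from the triples, skipping the dense pivot:

```python
import numpy as np
import pandas as pd
from scipy import sparse

def triples_to_sparse(df):
    # map the raw ids in each column to contiguous 0..n-1 codes
    rows = pd.factorize(df['x'])[0]
    cols = pd.factorize(df['y'])[0]
    # COO from the (value, (row, col)) triples, then CSR for fast row slicing
    return sparse.coo_matrix((df['z'].to_numpy(), (rows, cols))).tocsr()

demo = pd.DataFrame({'x': [10, 10, 20], 'y': [5, 7, 5], 'z': [1.0, 2.0, 3.0]})
m = triples_to_sparse(demo)
# m.toarray() -> [[1., 2.], [3., 0.]]
```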
# + [markdown] id="KT48bi4r9PAt"
# ## benchmarking tools
# + id="hmz0ib9K3Ymq"
def benchmark_one(proc, *, name, interval=1):
    mp_kwargs = {
        "max_usage": True,
        "retval": True,
        # "timeout": timeout, #: after reading its source code, this basically only manipulates the number of iterations
        "max_iterations": 1,
        # "multiprocess": True, #: useless for functions
        # "include_children": True,
        #: @upstreamBug? This double-counts memory (probably due to a copy-on-write fork) when measuring functions, but not when measuring the starting memory
        "include_children": False,
        "interval": interval
    }
    start_mem = mp.memory_usage(**mp_kwargs)
    start_time = time()
    res = mp.memory_usage(proc, **mp_kwargs)
    end_time = time()
    dur = end_time - start_time
    ret_val = res[1]
    max_mem = res[0] #: I don't know why this is not returning the number of samples per its doc.
    used_mem = max_mem - start_mem
    print(f"{name}: dur={dur}, used_mem={used_mem}, max_mem={max_mem}")
    # print(f"res={repr(res)}")
    return ret_val, dur, used_mem
def benchmark_n(n, *args, **kwargs):
    "Even with forking, the amount of memory used depends on the state of the process before the fork; e.g., the more memory it has claimed from the OS and then freed without releasing it back, the lower its subsequent OS memory usage appears."
    fork_p = True
    durs = np.empty(n) #: uninitialized; every slot is overwritten below
    used_mems = np.empty(n)
    losses = np.empty(n)
    ret_val = None #: needed for correct scoping
    for i in range(n):
        if fork_p:
            with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
                res = executor.submit(benchmark_one, *args, **kwargs).result()
        else:
            res = benchmark_one(*args, **kwargs)
        ret_val, dur, used_mem = res
        losses[i] = ret_val['loss']
        durs[i] = dur
        used_mems[i] = used_mem
        if i != (n - 1): #: keep only the last run's return value; dropping earlier ones might help GC
            ret_val = None
    return (ret_val, durs, used_mems, losses)
# + colab={"base_uri": "https://localhost:8080/"} id="7Vvi6wolNBXZ" outputId="166ae659-f35a-4f36-ad00-871f93b041a0"
def dummy():
    gc.collect()
    sleep(1)
    a = np.zeros(100_000_000)
    sleep(1.1)
    return {'loss': 0}

def nop():
    return None

benchmark_n(3,
            (dummy, []),
            name="dummy",)
# + [markdown] id="X3jzGwaOtJ09"
# ### tmp
# + colab={"base_uri": "https://localhost:8080/", "height": 52} id="-SrDDs-Ds9yC" outputId="0776d183-3289-421b-c093-bbb3ec39c728"
from resource import getrusage, RUSAGE_SELF
getrusage(RUSAGE_SELF)
# + [markdown] id="qUd7UmbUg9aI"
# ## k-means
# + colab={"base_uri": "https://localhost:8080/", "height": 69} id="JK-qsqRq5GFu" outputId="0cefe5bd-837a-4ac3-a6c3-6b3c36643cb4"
def kmeans_sklearn_n10_iter10(input_data):
    clf = KMeans(n_clusters=10, max_iter=10)
    clf.fit(input_data)
    return {'loss': clf.inertia_}

def kmeans_sklearn_n10_iter10_withdata():
    return kmeans_sklearn_n10_iter10(train_np)

ret_val, durs, used_mems, losses = \
    benchmark_n(3,
                (kmeans_sklearn_n10_iter10_withdata, []),
                # ((lambda: kmeans_sklearn_n10_iter10(train_np)), []),
                # (kmeans_sklearn_n10_iter10, [train_np]),
                name="kmeans_sklearn_n10_iter10",)
# + [markdown] id="mtR8Q5UnrDnA"
# ### interactive
# + colab={"base_uri": "https://localhost:8080/"} id="bF309v-B-7ye" outputId="768ed630-ef88-42e0-a3ca-292c442c22cc"
train_pred = clf.predict(train_np)
train_pred
# + colab={"base_uri": "https://localhost:8080/"} id="v523c4-Q_T2Z" outputId="deb6fc83-ce93-488d-95e1-9621ea2751c8"
train_pred_pd = pd.Series(train_pred)
train_pred_pd
# + colab={"base_uri": "https://localhost:8080/"} id="-QPIE4o7_4o8" outputId="faa45413-c48e-4ede-8a45-a9b0c229cdd8"
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
    print(train_pred_pd.value_counts())
# Source notebook: sk_playground.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

iris = sns.load_dataset('iris')
iris['species'].value_counts()
testData = pd.DataFrame(np.random.randn(100, 3), columns=['A', 'B', 'C'])
# distplot is deprecated in recent seaborn releases; histplot(kde=True) is its replacement
sns.histplot(testData['A'], kde=True)
sns.histplot(testData['A'].cumsum(), kde=True)
# Source notebook: Seaborn - Crash Course/Playing with Python.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # The Quarterly Japanese Economic Model (Q-JEM)
# This workbook implements "The Quarterly Japanese Economic Model (Q-JEM): 2019 version".
#
# Press **Space** to proceed. Press **Shift-space** to go back
#
# In code cells you press Shift-Enter to evaluate your code.
#
# You can always use smaller / larger fonts with keyboard shortcuts like **Alt +** and **Alt -** or similar (it could be Ctrl instead of Alt depending on the platform you are on). If the font is messed up, it helps to make it larger/smaller
#
# If you want to leave the slideshow and return to the notebook, just press the upper left **X**
#
# + [markdown] slideshow={"slide_type": "slide"}
# At http://www.boj.or.jp/en/research/wps_rev/wps_2019/wp19e07.htm/ you will find the working paper describing
# the model and a zipfile containing all the relevant information needed to use the model.
#
# The model logic has been transformed from EViews equations to ModelFlow business logic, and the data series have been transformed to a Pandas DataFrame.
#
# In this workbook the impulse responses from the working paper sections 3.1.1, 3.1.2, 3.1.3, and 3.1.4 have been recreated.
#
# The quarters have been rebased to 2001q1 to 2009q4.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Import Python libraries
# + cell_style="center" slideshow={"slide_type": "fragment"}
import pandas as pd
import modelmf
from modelclass import model
model.modelflow_auto()
# + [markdown] cell_style="center" slideshow={"slide_type": "slide"}
# ## Create model and dataframe
# + slideshow={"slide_type": "fragment"}
mqjem, baseline = model.modelload('qjem.pcim',run=1)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Define some variable labels
# This gives more meaningful legends
# + slideshow={"slide_type": "fragment"}
legend = {
'GDP' : 'Real gross domestic product, S.A.',
'CP' : 'Real private consumption, S.A.',
'U' : 'Unemployment rate, S.A.',
'PGDP' : 'GDP deflator',
'USGDP' : 'Real gross domestic product of the United States, S.A.',
'NUSGDP': 'Output gap of the rest of the world',
'EX': 'Real exports of goods and services, S.A.',
'IM' : 'Real imports of goods and services, S.A.',
'INV' : 'Real private non-residential investment, S.A.',
'CORE_CPI' : 'Consumer price index (all items, less fresh food), S.A.'
}
# + [markdown] slideshow={"slide_type": "slide"}
# ## Make an experiment with foreign GDP +1 percent
# + slideshow={"slide_type": "fragment"}
instruments = [ 'V_NUSGAP','V_USGAP']
target = baseline.loc['2005q1':,['USGDP','NUSGDP']].mfcalc('''\
USGDP = USGDP*1.01
NUSGDP = NUSGDP*1.01
''',silent=1)
mqjem.invert(baseline,target,instruments);
# + [markdown] slideshow={"slide_type": "slide"}
# ## Display the results
# + slideshow={"slide_type": "fragment"}
disp = mqjem['GDP CP INV EX IM CORE_CPI'].difpctlevel.mul100.rename(legend).plot(
colrow=2,sharey=0,title='Impact of Foreign GDP +1 percent',top=0.9)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Lower Oil prices
# + slideshow={"slide_type": "-"}
instruments = [ 'V_POIL']
target = baseline.loc['2005q1':,['POIL']].mfcalc('''\
POIL = POIL*0.9
''',silent=1)
resalt = mqjem.invert(baseline,target,instruments,silent=1)
# + slideshow={"slide_type": "fragment"}
disp = mqjem['GDP CP INV EX IM CORE_CPI'].difpctlevel.rename(legend).plot(
colrow=2,sharey=0,title='Impact of 10 percent permanent decrease in oil price',top=0.9)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Combine the two previous experiments
# + slideshow={"slide_type": "-"}
instruments = [ 'V_POIL','V_NUSGAP','V_USGAP']
target = baseline.loc['2005q1':,['POIL','USGDP','NUSGDP']].mfcalc('''\
POIL = POIL*0.9
USGDP = USGDP*1.01
NUSGDP = NUSGDP*1.01
''',silent=1)
resalt = mqjem.invert(baseline,target,instruments,silent=1)
# + slideshow={"slide_type": "fragment"}
disp = mqjem['GDP CP INV EX IM CORE_CPI'].difpctlevel.mul100.rename(legend).plot(
    colrow=2,sharey=0,title='Impact of foreign GDP +1 percent and a 10 percent permanent decrease in oil price',top=0.9)
# + [markdown] slideshow={"slide_type": "slide"}
# ## A permanent depreciation of exchange rates.
# + slideshow={"slide_type": "-"}
instruments = [ 'V_FXYEN']
target = baseline.loc['2005q1':,['FXYEN']].mfcalc('''\
FXYEN = FXYEN*1.1
''',silent=1)
resalt = mqjem.invert(baseline,target,instruments,silent=1)
disp = mqjem['GDP CP INV EX IM CORE_CPI'].difpctlevel.mul100.rename(legend).plot(
    colrow=2,sharey=0,title='Impact of a 10 percent permanent depreciation of the yen',top=0.9)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Draw the causal structure of the current period, used when solving
# + slideshow={"slide_type": "fragment"}
mqjem.plotadjacency(size=(19,19));  # displaying this can take some time
# + [markdown] slideshow={"slide_type": "slide"}
# ## Draw the causal structure taking all lags into account - the economic feedback
# + slideshow={"slide_type": "fragment"}
mqjem.plotadjacency(nolag=True,size=(19,19));  # displaying this can take some time
# + [markdown] slideshow={"slide_type": "slide"}
# ## How is CPQ determined
# + slideshow={"slide_type": "fragment"}
mqjem.cpq.draw(up=2,down=2,HR=0,transdic= {'ZPI*' : 'ZPI'})  # condense all ZPI* variables into one to keep the chart readable
# + [markdown] slideshow={"slide_type": "slide"}
# ## Also with values
# The result can be inspected in the graph/subfolder in PDF format.
# + slideshow={"slide_type": "subslide"}
with mqjem.set_smpl('2001q1','2001q3'):
    mqjem.cpq.draw(up=1,down=1,HR=0,transdic= {'ZPI*' : 'ZPI'},last=1)  # condense all ZPI* variables into one to keep the chart readable
# + [markdown] slideshow={"slide_type": "slide"}
# ## Another Example
# That determines the exports (**EX**) and what EX goes into
# + slideshow={"slide_type": "fragment"}
mqjem.ex.draw(up=1,down=1)
# + [markdown] slideshow={"slide_type": "slide"}
# ## The values for EX
# We only look at 3 quarters to keep it simple
# + slideshow={"slide_type": "fragment"}
with mqjem.set_smpl('2001q1','2001q3'):
mqjem.ex.show
| Examples/Bank of Japan - Quarterly Japanese Economic Model/Q-JEM experiments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Shahid1993/colab-notebooks/blob/master/word_completion_prediction_test.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="oU5MHFuEiKSJ" colab_type="text"
# # Testing Already Created Models
# + [markdown] id="TKXljvAujD7y" colab_type="text"
# ### Load Model from Google Drive
# + id="bycjqNRjiIid" colab_type="code" outputId="1471ec7c-4dc3-4930-88f8-b12520b65ab0" colab={"base_uri": "https://localhost:8080/", "height": 120}
# Mounting Google Drive to Load Data
from google.colab import drive
drive.mount('/content/drive')
# + id="pe5fd94Kuth2" colab_type="code" colab={}
import numpy as np
from keras.models import load_model
import pickle
import heapq
# + id="7DDUxE02jHaQ" colab_type="code" colab={}
model = load_model('./drive/My Drive/ML/Models/word_completion_prediction/word_completion_prediction_keras_model.h5')
history = pickle.load(open("./drive/My Drive/ML/Models/word_completion_prediction/word_completion_prediction_history.p", "rb"))
# + id="6uEfPhAUv6vh" colab_type="code" colab={}
chars = ' !"\'(),-.0123456789:;?_abcdefghijklmnopqrstuvwxyz¦'
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
# + id="tc2bqryXZzyg" colab_type="code" colab={}
def prepare_input(text):
x = np.zeros((1, len(text), len(chars)))
for t, char in enumerate(text):
x[0, t, char_indices[char]] = 1.
return x
# + id="bFSJJODFihiE" colab_type="code" colab={}
def sample(preds, top_n=3):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds)
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
return heapq.nlargest(top_n, range(len(preds)), preds.take)
# + id="3uQTCh9nioeZ" colab_type="code" colab={}
def predict_completion(text):
original_text = text
generated = text
completion = ''
while True:
x = prepare_input(text)
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, top_n=1)[0]
next_char = indices_char[next_index]
text = text[1:] + next_char
completion += next_char
        # the original length check is always true once a character has been
        # appended, so generation effectively stops at the first space
        if next_char == ' ':
            return completion
# + id="oC4jLUdsitKW" colab_type="code" colab={}
def predict_completions(text, n=3):
x = prepare_input(text)
preds = model.predict(x, verbose=0)[0]
next_indices = sample(preds, n)
return [indices_char[idx] + predict_completion(text[1:] + indices_char[idx]) for idx in next_indices]
# + id="MzxwwuRNixKF" colab_type="code" colab={}
# actual_text = [
# "It is not a lack of love, but a lack of friendship that makes unhappy marriages.",
# "That which does not kill us makes us stronger.",
# "I'm not upset that you lied to me, I'm upset that from now on I can't believe you.",
# "And those who were seen dancing were thought to be insane by those who could not hear the music.",
# "It is hard enough to remember my opinions, without also remembering my reasons for them!",
# "A man lying on a comfortable sofa is listening to his wi",
# "Assuming the predictions are probabilistic, novel sequences can be generated from a trai",
# "The networks performance is competitive with state-of-the-art language models, and it works almost",
# "This document is the initial part of a study to predict next words from a text dataset"
# ]
input = [
"It is not a lack of lov",
"That which does not kill us makes us stro",
"I'm not upset that you lied to me, I'm upset that from now on I can't bel",
"And those who were seen dan",
"It is hard enough to remember my opini",
"A man lying on a comfortable ch",
"The networks perf",
"The networks performance is competi",
"The networks performance is competitive with state-of-the-art lan",
"This document is the initial part of a study to pre",
"This document is the initial part of a study to pred",
"Assuming the prediction",
"Assuming the predictions are probabilistic, novel sequences can be gene",
"Assuming the predictions are probabilistic, novel sequences can be generat"
]
# + id="_sNhloMoi3nl" colab_type="code" outputId="7c8b230b-4561-48ac-abe2-56b6b9b2bc8a" colab={"base_uri": "https://localhost:8080/", "height": 716}
for i in input:
seq = i.lower()
print(seq)
print(predict_completions(seq, 5))
print()
# + [markdown] id="O8lBEf5fzVsf" colab_type="text"
# # Corpus Preprocessing
# + id="rKk-brP3i9i0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="b4f97152-9a73-4756-90f3-2d9ac8bd848d"
#path = 'nietzsche.txt'
#path = "./drive/My Drive/ML/data/nietzsche.txt"
#path = "./drive/My Drive/ML/data/1-billion-word-language-modeling-benchmark-r13output/training-monolingual.tokenized.shuffled/news.en-00001-of-00100"
path = "./drive/My Drive/ML/data/word_pred.txt"
text = open(path).read().lower()
print('corpus length:', len(text))
# + id="HVHqSJwCzav0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 103} outputId="e4044260-7347-486b-fffa-338a70cb94bf"
chars = sorted(list(set(text)))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
print(f'unique chars: {len(chars)}')
print(chars)
print(''.join(map(str, chars)))
# + id="EpvKmmrWzfhr" colab_type="code" colab={}
def preprocess(data):
punct = '\n#$<=>[\\]@^{|}~¡¢£¤¥©«¬®°²´µ¶·º»¼½¾¿×àáâãäåæçèéêëíîïñóôõöøùúüþąćĕěœšŵžʼ˚а‐‑‚‟†•′₤€∆④●♥fi()£�'
for p in punct:
data = data.replace(p, '')
return data
text = preprocess(text)
# + id="KOTjX5nYzkjI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 103} outputId="1d456f5c-a023-4799-d82e-578f50ba9b05"
chars = sorted(list(set(text)))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
print(f'unique chars: {len(chars)}')
print(chars)
print(''.join(map(str, chars)))
print('corpus length:', len(text))
# + id="Y0wt6ikjznri" colab_type="code" colab={}
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktTrainer
trainer = PunktTrainer()
trainer.INCLUDE_ALL_COLLOCS = True
trainer.train(text)
tokenizer = PunktSentenceTokenizer(trainer.get_params())
# + id="LxuLq4110r5J" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="f50b8669-3c5a-403b-f788-84428929315e"
# Test the tokenizer on a piece of text
sentences = "Mr. James told me Dr. Brown is not available today. I will try tomorrow."
print (tokenizer.tokenize(sentences))
# + id="JJDi8QQc2GDN" colab_type="code" colab={}
import nltk
# + id="iDMFjiKo2WHH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="da0928df-6560-4d90-9395-3a3ceebcd481"
nltk.download('punkt')  # sent_tokenize below needs the punkt tokenizer models
# + id="5PyP6wsT2bxN" colab_type="code" colab={}
from nltk import sent_tokenize
sentences = sent_tokenize(text)
# + id="ZA82UtN022tW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 53} outputId="776d404b-81cc-40ff-daef-d0f14ff0bcc3"
sentences[101]
# + id="wY-Q0tYi28Fv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 53} outputId="e4668e8e-5aba-4264-b8ee-1a02425a20a9"
sentences[500]
# + id="z9nZo4lU3Gjp" colab_type="code" colab={}
import numpy as np
np.random.seed(42)
import tensorflow as tf
tf.set_random_seed(42)
from keras.models import Sequential, load_model
from keras.layers import Dense, Activation
from keras.layers import LSTM, Dropout, CuDNNLSTM
from keras.layers import TimeDistributed
from keras.layers.core import Dense, Activation, Dropout, RepeatVector
from keras.optimizers import RMSprop
import matplotlib.pyplot as plt
import pickle
import sys
import heapq
import seaborn as sns
from pylab import rcParams
# %matplotlib inline
sns.set(style='whitegrid', palette='muted', font_scale=1.5)
rcParams['figure.figsize'] = 12, 5
# + id="ybjaR9y35GK_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="12784d97-2619-4517-f894-4c84cc13bb13"
#path = 'nietzsche.txt'
#path = "./drive/My Drive/ML/data/nietzsche.txt"
#path = "./drive/My Drive/ML/data/1-billion-word-language-modeling-benchmark-r13output/training-monolingual.tokenized.shuffled/news.en-00001-of-00100"
path = "./drive/My Drive/ML/data/word_pred.txt"
text = open(path).read().lower()
print('corpus length:', len(text))
# + id="upu1eWTd5IZf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 103} outputId="aed16e7a-1150-4c05-f7e9-6e52bcd332a6"
chars = sorted(list(set(text)))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
print(f'unique chars: {len(chars)}')
print(chars)
print(''.join(map(str, chars)))
# + id="eV-2aZVA5MHo" colab_type="code" colab={}
def preprocess(data):
punct = '\n#$<=>[\\]@^{|}~¡¢£¤¥©«¬®°²´µ¶·º»¼½¾¿×àáâãäåæçèéêëíîïñóôõöøùúüþąćĕěœšŵžʼ˚а‐‑‚‟†•′₤€∆④●♥fi()£�'
for p in punct:
data = data.replace(p, '')
return data
text = preprocess(text)
# + id="AxQZyqds5Pfm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 103} outputId="9523ba72-40ed-415b-b862-0fc588d1e11c"
chars = sorted(list(set(text)))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
print(f'unique chars: {len(chars)}')
print(chars)
print(''.join(map(str, chars)))
print('corpus length:', len(text))
# + id="vgSgyCWw5TIn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 33} outputId="153d3c48-133a-4483-8998-46e66b0d98e9"
SEQUENCE_LENGTH = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - SEQUENCE_LENGTH, step):
sentences.append(text[i: i + SEQUENCE_LENGTH])
next_chars.append(text[i + SEQUENCE_LENGTH])
print(f'num training examples: {len(sentences)}')
# + id="diEAJLRU5XUf" colab_type="code" colab={}
X = np.zeros((len(sentences), SEQUENCE_LENGTH, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
# + id="9jm99P4N5Z_M" colab_type="code" colab={}
model = Sequential()
#model.add(LSTM(128, input_shape=(SEQUENCE_LENGTH, len(chars))))
#model.add(CuDNNLSTM(128, input_shape=(None, len(chars))))
model.add(CuDNNLSTM(128, input_shape=(None, len(chars)), return_sequences=True))
#model.add(CuDNNLSTM(256, return_sequences=True))
model.add(CuDNNLSTM(256))
#Dropout added to avoid overfitting
model.add(Dropout(rate = 0.2))
# build model using keras documentation recommended optimizer initialization
optimizer = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
# + id="cSp2H6ad5b6O" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 316} outputId="a506884f-1fef-4bab-dd26-a64dca18953a"
model.summary()
# + id="rcjuFP517TwJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 386} outputId="0c91fc62-c829-4874-b317-84a4e71950d9"
#optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
history = model.fit(X, y, validation_split=0.05, batch_size=128, epochs=10, shuffle=True).history
# + id="bxJ2FLmr7aLC" colab_type="code" colab={}
model.save('./drive/My Drive/ML/Models/word_completion_prediction/R3/word_completion_prediction_keras_model.h5')
pickle.dump(history, open("./drive/My Drive/ML/Models/word_completion_prediction/R3/word_completion_prediction_history.p", "wb"))
# + id="HhJ7wPmvF_s_" colab_type="code" colab={}
model = load_model('./drive/My Drive/ML/Models/word_completion_prediction/R3/word_completion_prediction_keras_model.h5')
history = pickle.load(open("./drive/My Drive/ML/Models/word_completion_prediction/R3/word_completion_prediction_history.p", "rb"))
# + id="q_HwdXtVGCCW" colab_type="code" colab={}
def prepare_input(text):
x = np.zeros((1, len(text), len(chars)))
for t, char in enumerate(text):
x[0, t, char_indices[char]] = 1.
return x
# + id="2hZqtzwiGFCK" colab_type="code" colab={}
def sample(preds, top_n=3):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds)
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
return heapq.nlargest(top_n, range(len(preds)), preds.take)
# + id="KtCHypN1GHt2" colab_type="code" colab={}
def predict_completions(text, n=3):
x = prepare_input(text)
preds = model.predict(x, verbose=0)[0]
next_indices = sample(preds, n)
return [indices_char[idx] + predict_completion(text[1:] + indices_char[idx]) for idx in next_indices]
# + id="EerG_sPgGK9f" colab_type="code" colab={}
# actual_text = [
# "It is not a lack of love, but a lack of friendship that makes unhappy marriages.",
# "That which does not kill us makes us stronger.",
# "I'm not upset that you lied to me, I'm upset that from now on I can't believe you.",
# "And those who were seen dancing were thought to be insane by those who could not hear the music.",
# "It is hard enough to remember my opinions, without also remembering my reasons for them!",
# "A man lying on a comfortable sofa is listening to his wi",
# "Assuming the predictions are probabilistic, novel sequences can be generated from a trai",
# "The networks performance is competitive with state-of-the-art language models, and it works almost",
# "This document is the initial part of a study to predict next words from a text dataset"
# ]
input = [
"It is not a lack of lov",
"That which does not kill us makes us stro",
"I'm not upset that you lied to me, I'm upset that from now on I can't bel",
"And those who were seen dan",
"It is hard enough to remember my opini",
"A man lying on a comfortable ch",
"Assuming the pre",
"The networks performance is competi",
"The networks performance is competitive with state-of-the-art lan",
"This document is the initial part of a study to pre",
"This document is the initial part of a study to pred",
"Assuming the prediction",
"Assuming the predictions are probabilistic, novel sequences can be gene",
"Assuming the predictions are probabilistic, novel sequences can be generat"
]
# + id="56pk9UEzGNa4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 750} outputId="0807c16c-1dc0-46eb-c988-8549fbf9817c"
for i in input:
seq = i.lower()
print(seq)
print(predict_completions(seq, 5))
print()
# + id="2GlzxCKUGQbQ" colab_type="code" colab={}
| word_completion_prediction_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Load the MNIST dataset
# +
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data('mnist/mnist.npz')
print(x_train.shape, type(x_train))
print(y_train.shape, type(y_train))
# -
# ## Data processing: normalization
#
# `channels_last` corresponds to inputs with shape (batch, height, width, channels) while `channels_first` corresponds to inputs with shape (batch, channels, height, width).
#
# It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be `channels_last`.
# +
from keras import backend as K
img_rows, img_cols = 28, 28
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
print(x_train.shape, type(x_train))
print(x_test.shape, type(x_test))
# +
# Convert the data type to float32
X_train = x_train.astype('float32')
X_test = x_test.astype('float32')
# Normalize the data to [0, 1]
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# -
# ## Count the number of samples per label in the training data
# +
import numpy as np
import matplotlib.pyplot as plt
label, count = np.unique(y_train, return_counts=True)
print(label, count)
# +
fig = plt.figure()
plt.bar(label, count, width = 0.7, align='center')
plt.title("Label Distribution")
plt.xlabel("Label")
plt.ylabel("Count")
plt.xticks(label)
plt.ylim(0,7500)
for a,b in zip(label, count):
plt.text(a, b, '%d' % b, ha='center', va='bottom',fontsize=10)
plt.show()
# -
# ## Data processing: one-hot encoding
#
# ### Comparison of several encoding schemes
#
# | Binary | Gray code | One-hot |
# | ------ | --------- | -------- |
# | 000 | 000 | 00000001 |
# | 001 | 001 | 00000010 |
# | 010 | 011 | 00000100 |
# | 011 | 010 | 00001000 |
# | 100 | 110 | 00010000 |
# | 101 | 111 | 00100000 |
# | 110 | 101 | 01000000 |
# | 111 | 100 | 10000000 |
#
# ### One-hot encoding in practice
# 
# +
from keras.utils import np_utils
n_classes = 10
print("Shape before one-hot encoding: ", y_train.shape)
Y_train = np_utils.to_categorical(y_train, n_classes)
print("Shape after one-hot encoding: ", Y_train.shape)
Y_test = np_utils.to_categorical(y_test, n_classes)
# -
print(y_train[0])
print(Y_train[0])
# ## Define the MNIST CNN network with the Keras Sequential model
#
# +
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
model = Sequential()
## Feature Extraction
# 1st convolutional layer: 32 3x3 kernels, ReLU activation
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu',
input_shape=input_shape))
# 2nd convolutional layer: 64 3x3 kernels, ReLU activation
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
# Max-pooling layer with a 2x2 window
model.add(MaxPooling2D(pool_size=(2, 2)))
# Drop out 25% of the input neurons
model.add(Dropout(0.25))
# Flatten the pooled feature maps before feeding the fully connected network
model.add(Flatten())
## Classification
# Fully connected layer
model.add(Dense(128, activation='relu'))
# Drop out 50% of the input neurons
model.add(Dropout(0.5))
# Softmax activation for multi-class classification: outputs the probability of each digit
model.add(Dense(n_classes, activation='softmax'))
# -
# ## Inspect the MNIST CNN model architecture
model.summary()
for layer in model.layers:
print(layer.get_output_at(0).get_shape().as_list())
# ## Compile the model
#
# [model.compile()](https://keras.io/models/sequential/#compile)
#
# ```python
# compile(optimizer, loss=None, metrics=None, loss_weights=None, sample_weight_mode=None, weighted_metrics=None, target_tensors=None)
# ```
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
# ## Train the model and save the metrics to history
#
# [model.fit()](https://keras.io/models/sequential/#fit)
#
# ```python
# fit(x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None)
# ```
history = model.fit(X_train,
Y_train,
batch_size=128,
epochs=5,
verbose=2,
validation_data=(X_test, Y_test))
# ## Visualize the metrics
# +
fig = plt.figure()
plt.subplot(2,1,1)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.tight_layout()
plt.show()
# -
# ## Save the model
#
# [model.save()](https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model)
#
# You can use `model.save(filepath)` to save a Keras model into a single **HDF5 file** which will contain:
#
# - the architecture of the model, allowing to re-create the model
# - the weights of the model
# - the training configuration (loss, optimizer)
# - the state of the optimizer, allowing to resume training exactly where you left off.
#
# You can then use `keras.models.load_model(filepath)` to reinstantiate your model. load_model will also take care of compiling the model using the saved training configuration (unless the model was never compiled in the first place).
# +
import os
import tensorflow.gfile as gfile
save_dir = "./mnist/model/"
if gfile.Exists(save_dir):
gfile.DeleteRecursively(save_dir)
gfile.MakeDirs(save_dir)
model_name = 'keras_mnist.h5'
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)
# -
# ## Load the model
# +
from keras.models import load_model
mnist_model = load_model(model_path)
# -
# ## Evaluate the model's classification results on the test set
# +
loss_and_metrics = mnist_model.evaluate(X_test, Y_test, verbose=2)
print("Test Loss: {}".format(loss_and_metrics[0]))
print("Test Accuracy: {}%".format(loss_and_metrics[1]*100))
predicted_classes = mnist_model.predict_classes(X_test)
correct_indices = np.nonzero(predicted_classes == y_test)[0]
incorrect_indices = np.nonzero(predicted_classes != y_test)[0]
print("Classified correctly count: {}".format(len(correct_indices)))
print("Classified incorrectly count: {}".format(len(incorrect_indices)))
# -
| tensorflow1.0/04Mnist - softmax,cnn/.ipynb_checkpoints/03mnist-cnn-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pre-Tutorial Exercises
#
# If you've arrived early for the tutorial, please feel free to attempt the following exercises to warm-up.
# +
# 1. Basic Python data structures
# I have a list of dictionaries as such:
names = [{'name': 'Eric',
'surname': 'Ma'},
{'name': 'Jeffrey',
'surname': 'Elmer'},
{'name': 'Mike',
'surname': 'Lee'},
{'name': 'Jennifer',
'surname': 'Elmer'}]
# Write a function that takes in a list of dictionaries and a query surname,
# and searches it for all individuals with a given surname.
def find_persons_with_surname(persons, query_surname):
# Assert that the persons parameter is a list.
# This is a good defensive programming practice.
assert isinstance(persons, list)
results = []
for ______ in ______:
if ___________ == __________:
results.append(________)
return results
# +
# Test your result below.
results = find_persons_with_surname(names, 'Lee')
assert len(results) == 1
results = find_persons_with_surname(names, 'Elmer')
assert len(results) == 2
# -
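One possible completion of the exercise above (a sketch, assuming each dictionary carries `'name'` and `'surname'` keys as in the sample list):

```python
def find_persons_with_surname(persons, query_surname):
    # Defensive check: the persons parameter must be a list.
    assert isinstance(persons, list)
    results = []
    for person in persons:
        if person['surname'] == query_surname:
            results.append(person)
    return results

names = [{'name': 'Eric', 'surname': 'Ma'},
         {'name': 'Jeffrey', 'surname': 'Elmer'},
         {'name': 'Mike', 'surname': 'Lee'},
         {'name': 'Jennifer', 'surname': 'Elmer'}]
assert len(find_persons_with_surname(names, 'Lee')) == 1
assert len(find_persons_with_surname(names, 'Elmer')) == 2
```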
| 0-pre-tutorial-exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # SLU01 - Pandas 101: Examples notebook
import pandas as pd
# ### Create a pandas Series with some countries, capital and population
countries = pd.Series(['Denmark', 'Finland', 'Iceland', 'Norway', 'Greenland', 'Faroe Islands'])
capitals = pd.Series(['Copenhagen', 'Helsinki', 'Reykjavík', 'Oslo', 'Nuuk', 'Tórshavn'])
pop = pd.Series([5724456, 5498211, 335878, 5265158, 56483, 49188])
# ### Creating a pandas DataFrame
nordic_countries = pd.DataFrame({'Country': countries, 'Capital': capitals, 'Population':pop})
nordic_countries
# ### Check the dtype in a Series
capitals.dtype
# ### Check the dtypes of the columns in a DataFrame
nordic_countries.dtypes
# ### Convert DataFrame to a numpy array
nordic_countries.to_numpy()
# ### Load a file
# `pd.read_csv(filepath)`
# The filepath points at the file you want and is resolved relative to your current working directory.
iris = pd.read_csv('data/iris.csv')
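Relative paths such as `'data/iris.csv'` resolve against the current working directory; a quick way to check with the standard library (hypothetical path):

```python
from pathlib import Path

csv_path = Path('data') / 'iris.csv'
print(Path.cwd())              # the directory relative paths resolve against
print(csv_path.is_absolute())  # False: resolved relative to the cwd
```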
# ### Info about the DataFrame
iris.info()
# ### Printing the top 5 entries
iris.head(5)
# ### Printing the bottom 5 entries
iris.tail(5)
# ### Get the index of a DataFrame
iris.index
# ### Get the columns of a DataFrame
iris.columns
# ### Shape of the DataFrame
iris.shape
nr_rows = iris.shape[0]
nr_cols = iris.shape[1]
# ### Get more information about the numerical features
iris.describe()
# ---
| S01 - Bootcamp and Binary Classification/SLU01 - Pandas 101/Examples notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.8 ('MLPricingVenv')
# language: python
# name: python3
# ---
__author__ = "konwar.m"
__copyright__ = "Copyright 2022, AI R&D"
__credits__ = ["konwar.m"]
__license__ = "Individual Ownership"
__version__ = "1.0.1"
__maintainer__ = "konwar.m"
__email__ = "<EMAIL>"
__status__ = "Development"
# ### Importing Libraries
# +
# Importing Libraries
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import xgboost as xgb
import time
import pickle
from math import sqrt
from numpy import loadtxt
from itertools import product
from tqdm import tqdm
from sklearn import preprocessing
from xgboost import plot_tree
from matplotlib import pyplot
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold
from sklearn.feature_extraction.text import TfidfVectorizer
# -
os.chdir('..')
os.getcwd()
# ### Loading Data
sales_train = pd.read_csv(r'datasets\sales_train.csv')
items = pd.read_csv(r'datasets\translated_items.csv')
shops = pd.read_csv(r'datasets\translated_shops.csv')
item_categories = pd.read_csv(r'datasets\translated_item_categories.csv')
test = pd.read_csv(r'datasets\test.csv')
sample_submission = pd.read_csv(r'datasets\sample_submission.csv')
# ### Aggregation of data
# Create a dataframe grid of all shop and item id combinations observed in each month (date_block_num)
grid = []
for block_num in sales_train['date_block_num'].unique():
cur_shops = sales_train[sales_train['date_block_num']==block_num]['shop_id'].unique()
cur_items = sales_train[sales_train['date_block_num']==block_num]['item_id'].unique()
grid.append(np.array(list(product(*[cur_shops, cur_items, [block_num]])),dtype='int32'))
index_cols = ['shop_id', 'item_id', 'date_block_num']
grid = pd.DataFrame(np.vstack(grid), columns = index_cols, dtype=np.int32)
grid
# Aggregations are done to convert daily sales to month level
sales_train['item_cnt_day'] = sales_train['item_cnt_day'].clip(0,20)
groups = sales_train.groupby(['shop_id', 'item_id', 'date_block_num'])
trainset = groups.agg({'item_cnt_day':'sum', 'item_price':'mean'}).reset_index()
trainset = trainset.rename(columns = {'item_cnt_day' : 'item_cnt_month'})
trainset['item_cnt_month'] = trainset['item_cnt_month'].clip(0,20)
trainset
# Extract mean prices for each item based on each month
sales_groups = sales_train.groupby(['item_id'])
sales_item_data = sales_groups.agg({'item_price':'mean'}).reset_index()
sales_item_data
trainset = pd.merge(grid,trainset,how='left',on=index_cols)
trainset.item_cnt_month = trainset.item_cnt_month.fillna(0)
trainset
# Replace NaN price values (item-months with no sales) with the item's mean price
# observed throughout the whole training period
price_trainset = pd.merge(trainset[['item_id', 'item_price']], sales_item_data, how='left', on='item_id')
# Prefer the monthly price (item_price_x) when available, otherwise fall back
# to the item's overall mean price (item_price_y) - vectorized instead of a row loop
price_trainset['final_item_price'] = price_trainset['item_price_x'].fillna(price_trainset['item_price_y'])
price_trainset
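The itertuples loop above can also be expressed as a single vectorized `fillna`, which is equivalent for the `_x`/`_y` columns produced by the merge; the frame below is a made-up stand-in:

```python
import numpy as np
import pandas as pd

# Hypothetical merged frame: item_price_x is the per-row price (may be NaN),
# item_price_y is the item-level mean price used as the fallback.
price_trainset = pd.DataFrame({
    "item_id": [1, 1, 2],
    "item_price_x": [100.0, np.nan, np.nan],
    "item_price_y": [110.0, 110.0, 50.0],
})

# Equivalent to the row loop: take item_price_x, fall back to item_price_y.
price_trainset["final_item_price"] = price_trainset["item_price_x"].fillna(
    price_trainset["item_price_y"]
)
print(price_trainset["final_item_price"].tolist())  # [100.0, 110.0, 50.0]
```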
trainset['final_item_price'] = price_trainset['final_item_price']
trainset.drop(['item_price'], axis=1, inplace=True)
trainset.rename(columns = {'final_item_price':'item_price'}, inplace = True)
trainset
# Get category id
trainset = pd.merge(trainset, items[['item_id', 'item_category_id']], on = 'item_id')
trainset
# ### Feature Engineering
# Set seeds and options
np.random.seed(10)
pd.set_option('display.max_rows', 231)
pd.set_option('display.max_columns', 100)
# +
# Feature engineering list
new_features = []
enable_feature_idea = [True, True, True, True, True, True, False, False, True, True]
# Lag periods to look back over (tuning this list, e.g. [1, 2, 3, 12], may improve the score)
lookback_range = [1,2,3,4,5,6,7,8,9,10,11,12]
tqdm.pandas()
# Use recent data
start_month_index = trainset.date_block_num.min()
end_month_index = trainset.date_block_num.max()
# +
current = time.time()
trainset = trainset[['shop_id', 'item_id', 'item_category_id', 'date_block_num', 'item_price', 'item_cnt_month']]
trainset = trainset[(trainset.date_block_num >= start_month_index) & (trainset.date_block_num <= end_month_index)]
print('Loading test set...')
test_dataset = loadtxt(r'datasets\test.csv', delimiter="," ,skiprows=1, usecols = (1,2), dtype=int)
testset = pd.DataFrame(test_dataset, columns = ['shop_id', 'item_id'])
print('Merging with other datasets...')
# Get item category id into test_df
testset = testset.merge(items[['item_id', 'item_category_id']], on = 'item_id', how = 'left')
testset['date_block_num'] = 34
# Make testset contain the same columns as trainset so we can concatenate them row-wise
testset['item_cnt_month'] = -1
testset
# +
train_test_set = pd.concat([trainset, testset], axis = 0)
end = time.time()
diff = end - current
print('Took ' + str(int(diff)) + ' seconds to load and merge the test set')
# -
# Use LabelEncoder to encode the item categories and merge them into the training set
lb = preprocessing.LabelEncoder()
l_cat = list(item_categories.translated_item_category_name)
# +
item_categories['item_category_id_fix'] = lb.fit_transform(l_cat)
item_categories['item_category_name_fix'] = l_cat
train_test_set = train_test_set.merge(item_categories[['item_category_id', 'item_category_id_fix']], on = 'item_category_id', how = 'left')
_ = train_test_set.drop(['item_category_id'],axis=1, inplace=True)
train_test_set.rename(columns = {'item_category_id_fix':'item_category_id'}, inplace = True)
_ = item_categories.drop(['item_category_id'],axis=1, inplace=True)
_ = item_categories.drop(['item_category_name'],axis=1, inplace=True)
_ = item_categories.drop(['translated_item_category_name'],axis=1, inplace=True)
item_categories.rename(columns = {'item_category_id_fix':'item_category_id'}, inplace = True)
item_categories.rename(columns = {'item_category_name_fix':'item_category_name'}, inplace = True)
item_categories = item_categories.drop_duplicates()
item_categories.index = np.arange(0, len(item_categories))
item_categories = item_categories.sort_values(by=['item_category_id']).reset_index(drop=True)
item_categories
# -
# Idea 0: Add previous shop/item sales as feature (Lag feature)
if enable_feature_idea[0]:
for diff in tqdm(lookback_range):
feature_name = 'prev_shopitem_sales_' + str(diff)
trainset2 = train_test_set.copy()
trainset2.loc[:, 'date_block_num'] += diff
trainset2.rename(columns={'item_cnt_month': feature_name}, inplace=True)
train_test_set = train_test_set.merge(trainset2[['shop_id', 'item_id', 'date_block_num', feature_name]], on = ['shop_id', 'item_id', 'date_block_num'], how = 'left')
train_test_set[feature_name] = train_test_set[feature_name].fillna(0)
new_features.append(feature_name)
train_test_set.head(3)
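The shift-and-merge lag trick used above can be sketched on a toy series (the counts below are hypothetical): shifting `date_block_num` forward by `diff` in a copy and left-merging lets every row see its own value from `diff` months ago.

```python
import pandas as pd

# One shop/item pair across three months.
df = pd.DataFrame({
    "shop_id": [1, 1, 1],
    "item_id": [5, 5, 5],
    "date_block_num": [0, 1, 2],
    "item_cnt_month": [3.0, 7.0, 2.0],
})

diff = 1
lagged = df.copy()
lagged["date_block_num"] += diff  # month t's sales become month t+1's lag
lagged = lagged.rename(columns={"item_cnt_month": "prev_shopitem_sales_1"})

out = df.merge(
    lagged[["shop_id", "item_id", "date_block_num", "prev_shopitem_sales_1"]],
    on=["shop_id", "item_id", "date_block_num"],
    how="left",
)
out["prev_shopitem_sales_1"] = out["prev_shopitem_sales_1"].fillna(0)
print(out["prev_shopitem_sales_1"].tolist())  # [0.0, 3.0, 7.0]
```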
# Idea 1: Add previous item sales as feature (Lag feature)
if enable_feature_idea[1]:
groups = train_test_set.groupby(by = ['item_id', 'date_block_num'])
for diff in tqdm(lookback_range):
feature_name = 'prev_item_sales_' + str(diff)
result = groups.agg({'item_cnt_month':'mean'})
result = result.reset_index()
result.loc[:, 'date_block_num'] += diff
result.rename(columns={'item_cnt_month': feature_name}, inplace=True)
train_test_set = train_test_set.merge(result, on = ['item_id', 'date_block_num'], how = 'left')
train_test_set[feature_name] = train_test_set[feature_name].fillna(0)
new_features.append(feature_name)
train_test_set.head(3)
# Idea 2: Add previous shop/item price as feature (Lag feature)
if enable_feature_idea[2]:
groups = train_test_set.groupby(by = ['shop_id', 'item_id', 'date_block_num'])
for diff in tqdm(lookback_range):
feature_name = 'prev_shopitem_price_' + str(diff)
result = groups.agg({'item_price':'mean'})
result = result.reset_index()
result.loc[:, 'date_block_num'] += diff
result.rename(columns={'item_price': feature_name}, inplace=True)
train_test_set = train_test_set.merge(result, on = ['shop_id', 'item_id', 'date_block_num'], how = 'left')
        # note: lagged prices are intentionally left as NaN where no previous price exists
new_features.append(feature_name)
train_test_set.head(3)
# Idea 3: Add previous item price as feature (Lag feature)
if enable_feature_idea[3]:
groups = train_test_set.groupby(by = ['item_id', 'date_block_num'])
for diff in tqdm(lookback_range):
feature_name = 'prev_item_price_' + str(diff)
result = groups.agg({'item_price':'mean'})
result = result.reset_index()
result.loc[:, 'date_block_num'] += diff
result.rename(columns={'item_price': feature_name}, inplace=True)
train_test_set = train_test_set.merge(result, on = ['item_id', 'date_block_num'], how = 'left')
        # note: lagged prices are intentionally left as NaN where no previous price exists
new_features.append(feature_name)
train_test_set.head(3)
# Idea 4: Mean encodings for shop/item pairs (mean encoding; didn't help in my experiments)
def create_mean_encodings(train_test_set, categorical_var_list, target):
feature_name = "_".join(categorical_var_list) + "_" + target + "_mean"
df = train_test_set.copy()
df1 = df[df.date_block_num <= 32]
df2 = df[df.date_block_num <= 33]
df3 = df[df.date_block_num == 34]
    # Extract mean encodings from training data only (month 33 is excluded here to avoid
    # a data leak into validation). Extracting mean encodings from all months makes the
    # val RMSE a tiny bit lower, but test RMSE increases by 4%, so this matters.
mean_32 = df1[categorical_var_list + [target]].groupby(categorical_var_list, as_index=False)[[target]].mean()
mean_32 = mean_32.rename(columns={target:feature_name})
# Extract mean encodings using all data, this will be applied to test data
mean_33 = df2[categorical_var_list + [target]].groupby(categorical_var_list, as_index=False)[[target]].mean()
mean_33 = mean_33.rename(columns={target:feature_name})
# Apply mean encodings
df2 = df2.merge(mean_32, on = categorical_var_list, how = 'left')
df3 = df3.merge(mean_33, on = categorical_var_list, how = 'left')
# Concatenate
train_test_set = pd.concat([df2, df3], axis = 0)
new_features.append(feature_name)
return train_test_set
train_test_set = create_mean_encodings(train_test_set, ['shop_id', 'item_id'], 'item_cnt_month')
train_test_set.head(3)
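The mean-encoding step reduces to a groupby mean merged back onto the rows; a minimal sketch with made-up values:

```python
import pandas as pd

# Toy frame: two shops with different average monthly sales.
df = pd.DataFrame({
    "shop_id": [1, 1, 2, 2],
    "item_cnt_month": [2.0, 4.0, 10.0, 10.0],
})

# Per-category mean of the target, named like the features above.
means = (
    df.groupby("shop_id", as_index=False)["item_cnt_month"]
    .mean()
    .rename(columns={"item_cnt_month": "shop_id_item_cnt_month_mean"})
)
encoded = df.merge(means, on="shop_id", how="left")
print(encoded["shop_id_item_cnt_month_mean"].tolist())  # [3.0, 3.0, 10.0, 10.0]
```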
# Idea 5: Mean encodings for item (mean encoding; didn't help in my experiments)
train_test_set = create_mean_encodings(train_test_set, ['item_id'], 'item_cnt_month')
train_test_set.head(3)
# Idea 6: Number of months since the last sale of the shop/item pair (uses info from the past)
# +
def create_last_sale_shop_item(row):
for diff in range(1,33+1):
feature_name = '_prev_shopitem_sales_' + str(diff)
if row[feature_name] != 0.0:
return diff
return np.nan
lookback_range = list(range(1, 33 + 1))
if enable_feature_idea[6]:
for diff in tqdm(lookback_range):
feature_name = '_prev_shopitem_sales_' + str(diff)
trainset2 = train_test_set.copy()
trainset2.loc[:, 'date_block_num'] += diff
trainset2.rename(columns={'item_cnt_month': feature_name}, inplace=True)
train_test_set = train_test_set.merge(trainset2[['shop_id', 'item_id', 'date_block_num', feature_name]], on = ['shop_id', 'item_id', 'date_block_num'], how = 'left')
train_test_set[feature_name] = train_test_set[feature_name].fillna(0)
#new_features.append(feature_name)
train_test_set.loc[:, 'last_sale_shop_item'] = train_test_set.progress_apply (lambda row: create_last_sale_shop_item(row),axis=1)
new_features.append('last_sale_shop_item')
# -
# Idea 7: Number of months since the last sale of the item (uses info from the past)
# +
def create_last_sale_item(row):
for diff in range(1,33+1):
feature_name = '_prev_item_sales_' + str(diff)
if row[feature_name] != 0.0:
return diff
return np.nan
lookback_range = list(range(1, 33 + 1))
if enable_feature_idea[7]:
groups = train_test_set.groupby(by = ['item_id', 'date_block_num'])
for diff in tqdm(lookback_range):
feature_name = '_prev_item_sales_' + str(diff)
result = groups.agg({'item_cnt_month':'mean'})
result = result.reset_index()
result.loc[:, 'date_block_num'] += diff
result.rename(columns={'item_cnt_month': feature_name}, inplace=True)
train_test_set = train_test_set.merge(result, on = ['item_id', 'date_block_num'], how = 'left')
train_test_set[feature_name] = train_test_set[feature_name].fillna(0)
new_features.append(feature_name)
train_test_set.loc[:, 'last_sale_item'] = train_test_set.progress_apply (lambda row: create_last_sale_item(row),axis=1)
# -
# Idea 8: Item name (Tfidf text feature)
# +
items_subset = items[['item_id', 'item_name']]
feature_count = 25
tfidf = TfidfVectorizer(max_features=feature_count)
items_df_item_name_text_features = pd.DataFrame(tfidf.fit_transform(items_subset['item_name']).toarray())
cols = items_df_item_name_text_features.columns
for i in range(feature_count):
feature_name = 'item_name_tfidf_' + str(i)
items_subset[feature_name] = items_df_item_name_text_features[cols[i]]
new_features.append(feature_name)
items_subset.drop('item_name', axis = 1, inplace = True)
train_test_set = train_test_set.merge(items_subset, on = 'item_id', how = 'left')
train_test_set.head()
# -
train_test_set
# ### Save New feature list and training set
with open(os.path.join(r'datasets\training_datasets', 'new_features.pkl'), "wb") as fp: #Pickling
pickle.dump(new_features, fp)
import gc
def reduce_mem_usage(df, int_cast=True, obj_to_category=False, subset=None):
"""
Iterate through all the columns of a dataframe and modify the data type to reduce memory usage.
:param df: dataframe to reduce (pd.DataFrame)
    :param int_cast: if True, try casting float columns holding only integral values to int dtypes (bool)
:param obj_to_category: convert non-datetime related objects to category dtype (bool)
:param subset: subset of columns to analyse (list)
:return: dataset with the column dtypes adjusted (pd.DataFrame)
"""
    start_mem = df.memory_usage().sum() / 1024 ** 2
gc.collect()
print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
cols = subset if subset is not None else df.columns.tolist()
for col in tqdm(cols):
col_type = df[col].dtype
if col_type != object and col_type.name != 'category' and 'datetime' not in col_type.name:
c_min = df[col].min()
c_max = df[col].max()
# test if column can be converted to an integer
treat_as_int = str(col_type)[:3] == 'int'
if int_cast and not treat_as_int:
# treat_as_int = check_if_integer(df[col])
treat_as_int = (df[col] % 1 == 0).all()
if treat_as_int:
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.uint8).min and c_max < np.iinfo(np.uint8).max:
df[col] = df[col].astype(np.uint8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.uint16).min and c_max < np.iinfo(np.uint16).max:
df[col] = df[col].astype(np.uint16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.uint32).min and c_max < np.iinfo(np.uint32).max:
df[col] = df[col].astype(np.uint32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
elif c_min > np.iinfo(np.uint64).min and c_max < np.iinfo(np.uint64).max:
df[col] = df[col].astype(np.uint64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
elif 'datetime' not in col_type.name and obj_to_category:
df[col] = df[col].astype('category')
gc.collect()
end_mem = df.memory_usage().sum() / 1024 ** 2
print('Memory usage after optimization is: {:.3f} MB'.format(end_mem))
print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
return df
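The downcasting decision inside `reduce_mem_usage` can be seen on a single column: float values that are all integral and fit the `int8` range are stored in one byte each instead of eight (toy column):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"item_cnt_month": [0.0, 3.0, 20.0]})

# Same test as in reduce_mem_usage: integral values within the int8 range.
col = df["item_cnt_month"]
if (col % 1 == 0).all() and col.min() > np.iinfo(np.int8).min and col.max() < np.iinfo(np.int8).max:
    df["item_cnt_month"] = col.astype(np.int8)

print(df["item_cnt_month"].dtype)  # int8
```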
train_test_set = reduce_mem_usage(train_test_set)
train_test_set
train_test_set.to_csv(os.path.join(r'datasets\training_datasets', 'trainset.csv'), index=False)
| modelling_pipeline/feature_engg_script.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Process HIFLD data
#
# This notebook reads in the HIFLD dataset and converts it to GeoJSON format.
#
# From https://hifld-geoplatform.opendata.arcgis.com/datasets/hospitals
#
# > This feature class/shapefile contains locations of Hospitals for 50 US states, Washington D.C., US territories of Puerto Rico, Guam, American Samoa, Northern Mariana Islands, Palau, and Virgin Islands. The dataset only includes hospital facilities based on data acquired from various state departments or federal sources which has been referenced in the SOURCE field. Hospital facilities which do not occur in these sources will be not present in the database. The source data was available in a variety of formats (pdfs, tables, webpages, etc.) which was cleaned and geocoded and then converted into a spatial database. The database does not contain nursing homes or health centers. Hospitals have been categorized into children, chronic disease, critical access, general acute care, long term care, military, psychiatric, rehabilitation, special, and women based on the range of the available values from the various sources after removing similarities. In this update the TRAUMA field was populated for 172 additional hospitals and helipad presence were verified for all hospitals.
# +
import pandas as pd
import geopandas as gpd
from covidcaremap.data import external_data_path, processed_data_path
# -
hifld_file_path = external_data_path('hifld-hospitals.csv')
hifld_df = pd.read_csv(hifld_file_path, encoding='utf-8')
hifld_gdf = gpd.GeoDataFrame(
hifld_df,
crs='EPSG:4326',
geometry=gpd.points_from_xy(hifld_df['X'], hifld_df['Y']))
hifld_gdf.to_file(processed_data_path('hifld_facility_data.geojson'),
encoding='utf-8',
driver='GeoJSON')
| notebooks/processing/Process_HIFLD_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.read_excel('港股打新人数.xls')
df.head()
df2 = pd.DataFrame(df.values.T,)
df2.head()
df.info()
df2.to_excel('people3.xls',index=None,header=None)
# +
import xlsxwriter  # import the module
workbook = xlsxwriter.Workbook('new_excel.xlsx')  # create a new Excel workbook
worksheet = workbook.add_worksheet('sheet1')  # add a new sheet named "sheet1"
headings = ['Number','testA','testB']
# set the header row
data = [
['2017-9-1','2017-9-2','2017-9-3','2017-9-4','2017-9-5','2017-9-6'],
[10,40,50,20,10,50],
[30,60,70,50,40,30],
] # hand-made sample data
worksheet.write_row('A1',headings)
worksheet.write_column('A2',data[0])
worksheet.write_column('B2',data[1])
worksheet.write_column('C2',data[2])  # insert the data into the worksheet
workbook.close()
# -
import xlsxwriter  # import the module
workbook = xlsxwriter.Workbook('new_people.xlsx')  # create a new Excel workbook
worksheet = workbook.add_worksheet('sheet1')  # add a new sheet named "sheet1"
df.head(2)
# +
for index,item in df.iterrows():
date=item['上市日期']
count=item['申购人数']
date=date.replace(' 00:00:00','')
worksheet.write(0,index,date)
worksheet.write(1,index,count)
workbook.close()
# -
df.info()
df= df.set_index('上市日期')
df.head()
df_m = df.resample('M').sum()
df_m.head()
# +
# note: the workbook was closed above; xlsxwriter cannot re-open a closed
# workbook, so re-create workbook/worksheet before running this cell
line = 0
for index, item in df_m.iterrows():
    date = str(index).replace(' 00:00:00', '')
    date = date[:-3]  # keep only the YYYY-MM part
    count = item['申购人数']
    worksheet.write(0, line, date)
    worksheet.write(1, line, count)
    line += 1
workbook.close()
# -
df_m[df_m['申购人数']==0]
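The monthly resample used above collapses a date-indexed series to one summed row per month; a minimal sketch with hypothetical subscriber counts:

```python
import pandas as pd

# Two January rows and one February row, indexed by date.
df = pd.DataFrame(
    {"申购人数": [100, 200, 50]},
    index=pd.to_datetime(["2020-01-05", "2020-01-20", "2020-02-03"]),
)

# Sum daily counts into one row per calendar month.
df_m = df.resample("M").sum()
print(df_m["申购人数"].tolist())  # [300, 50]
```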
| analysis/apply_people_count.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import phuzzy
from phuzzy.optimization import alphaOpt
import matplotlib.pyplot as plt
# %matplotlib inline
# +
v1 = phuzzy.Triangle(alpha0=[0,4], alpha1=[1], number_of_alpha_levels=5)
v2 = phuzzy.Superellipse(alpha0=[-1, 2.], alpha1=None, m=1.0, n=.5, number_of_alpha_levels=6)
v3 = phuzzy.TruncGenNorm(alpha0=[1, 4], alpha1=[2, 3], number_of_alpha_levels=5, beta=3.)
v4 = phuzzy.Trapezoid(alpha0=[0, 4], alpha1=[2, 3], number_of_alpha_levels=5)
v5 = phuzzy.TruncNorm(alpha0=[1, 3], number_of_alpha_levels=5, name="y")
v6 = phuzzy.Triangle(alpha0=[1,4], alpha1=[3], number_of_alpha_levels=5)
obj_function = '-1*((x[0] - 1) ** 2 + (x[1] + .1) ** 2 + .1 - (x[2] + 2) ** 2 - (x[3] - 0.1) ** 2 - (x[4] * x[5]) ** 2)'
name = 'Opti_Test'
kwargs = {'var1': v1, 'var2': v2, 'var3': v3,
'var4': v4,'var5': v5, 'var6': v6,
'obj_function': obj_function, 'name': name}
# -
z = alphaOpt.Alpha_Level_Optimization(**kwargs)
z.calculation() #Default: n=60,iters=3,optimizer="sobol",backup=False,start_at=None
z
z.df
z.compact_output(round=2) # Default: round=None
z.df
z.extanded_output(round=4) # Default: round=None
z.df_extanded
z.total_nfev
z.plot(show=True)
z.defuzzification(method = 'centroid') # mean / alpha_one / centroid
z.plot(show=True, defuzzy=z.determin_point,labels=True)
z.determin_objective
# +
# Export Simple Dataframe as CSV
z.export_to_csv() # Default df='simple', filepath=None
# Export Extanded Dataframe as CSV
z.export_to_csv(df='extended') # Default df='simple', filepath=None
# -
z.calculation(start_at=4)
z.plot(show=True)
| phuzzy/optimization/Alpha Opti Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A prime spot
#
# <div class="alert alert-block alert-warning" id = "warning">
# This assessment offers the opportunity to tie up the server in loopish operations. Should you be unable to run cells, simply locate the <b>Kernel</b> menu at the top of the screen and click <b>Interrupt Kernel</b>. This should jumpstart the kernel again and clear out the infinite loop behavior.</div>
#
# 
#
# Being a local dignitary has its perks (official uniforms, hats, et al.). But, it also has its responsibilities -- like visiting local businesses to do all sorts of glad-handing. Of course, having won our election (SPOILER!), our fair gator wizard friend is required to start making these kinds of visits. Today is one such excursion -- a trip to the local Widget Prime packing facility to see the latest and greatest in shipping technology.
#
# There's one issue, though: the system isn't finished yet! In order to make the visit a success, we need to complete and test the system.
#
# Our particular widget shipping depot specializes in four (4) types of widgets:
#
# * small
# * medium
# * large
# * extra large
#
# Each have differing price points and weights. Our system needs to be able to handle input of each of these categories for an order, add them to a box and determine if it can be shipped and, if so, for what cost.
#
# ## Sample output
#
# ```
# WIDGET PRIME PACKING SYSTEM 1.0
# -------------------------------
#
#
# Warming up the conveyor...
# Enter item to put on the conveyor (leave blank to end):
#
# .
# .
# .
#
# There are 27 extra large widgets in the crate.
# There are 30 small widgets in the crate.
# There are 17 large widgets in the crate.
# There are 26 medium widgets in the crate.
# The crate weighs a total of 87.84999999999997 pounds.
# The crate is worth $142.0.
# This crate ships with a $10 fee.
# ```
#
# ## Testing
#
# The system must:
#
# * Take input of items to add to the `conveyor`
# * Continue taking input until a blank line is entered
# * Add items to a `crate` for shipping
# * Display the quantity of individual items
# * Display the total value of the `crate`
# * Display the total weight of the `crate`
# * Determine if the `crate`:
# * Ships free (if the value of the `crate` is over 100)
# * `print` `"These items ship for free!"`
# * Ships with a $10 shipping charge (if value less than 100, but greater than 50)
# * `print` `"These items ship with a charge of $10"`
# * Doesn't ship at all if less than the above
# * `print` `"Add more items to shipment."`
#
# ## Code requirements
#
# * At least 1 `while` loop
# * At least 1 `for` loop
# * A maximum of 1 `input` function
# * Use of `total_cost`, `total_weight`, and `total_items` variables to record their respective values
# * Use of the `catalog` dictionary to look up prices and weights for each item added to the crate
# * Use of the `crate` dictionary to store the quantity of items
#
# ### Extra credit requirement
#
# * Reject any input that isn't a `key` in the `catalog` dictionary
# * If a user enters something that's _not in the dictionary_ `print` `"I don't know that one."`
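One possible shape for that extra-credit check (a sketch, not the full assessment solution; `check_item` is a made-up helper): an item is valid only if it is a key of the `catalog` dictionary.

```python
# Minimal catalog with one entry, just to exercise the lookup.
catalog = {"small widget": {"price": 1.00, "weight": 0.50}}

def check_item(item, catalog):
    """Return the item's price if it exists in the catalog, else reject it."""
    if item in catalog:
        return catalog[item]["price"]
    print("I don't know that one.")
    return None

print(check_item("small widget", catalog))  # 1.0
print(check_item("giant widget", catalog))  # rejection message, then None
```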
# +
# Widget catalog
catalog = {
"small widget": {
"price": 1.00,
"weight": .50
},
"medium widget": {
"price": 1.25,
"weight": .85
},
"large widget": {
"price": 1.50,
"weight": 1.00,
},
"extra large widget":{
"price": 2.00,
"weight": 1.25
}
}
print("WIDGET PRIME PACKING SYSTEM 1.0")
print("-------------------------------")
print("\n")
print("Warming up the conveyor...")
conveyor = []
# TODO: Write inputs to add items to the conveyor
# TODO: Create variables total_cost, total_items, and total_weight; initialize to 0
crate = {}
# TODO: Iterate over conveyor and/or crate in order to get total_cost of items, total_weight, and the individual
# counts of each item; print the inventory of each kind of item in the crate
# TODO: Print total weight and total cost
# TODO: Create if statement to test for shipping status/cost
| assessment/CMPSC 100 - Fall 2020 - Midterm Assessment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: mmc
# language: python
# name: mmc
# ---
# +
# hide non-critical package warnings
import warnings
warnings.filterwarnings('ignore')
from IPython.display import display, HTML
display(HTML("<style>.container { width:80% !important; }</style>"))
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# +
import matplotlib.pyplot as plt
import scipy.io as sio
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
from custom_data import DCCPT_data
from config import cfg, get_data_dir, get_output_dir, AverageMeter, remove_files_in_dir
from convSDAE import convSDAE
from tensorboard_logger import Logger
import os
import random
import numpy as np
import data_params as dp
# -
import devkit.api as dk
net = convSDAE(dim=[1, 50, 50, 50, 10], output_padding=[0, 1, 0], numpen=4, dropout=0.2, slope=0)
net
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
lr = 0.0001
numlayers = 4
lr = 10  # note: this overrides the learning rate set above
maxepoch = 2
stepsize = 10
for par in net.base[numlayers-1].parameters():
par.requires_grad = True
for par in net.bbase[numlayers-1].parameters():
par.requires_grad = True
for m in net.bbase[numlayers-1].modules():
if isinstance(m, nn.BatchNorm2d):
m.training = True
# setting up optimizer - the bias params should have twice the learning rate w.r.t. weights params
bias_params = filter(lambda x: ('bias' in x[0]) and (x[1].requires_grad), net.named_parameters())
bias_params = list(map(lambda x: x[1], bias_params))
nonbias_params = filter(lambda x: ('bias' not in x[0]) and (x[1].requires_grad), net.named_parameters())
nonbias_params = list(map(lambda x: x[1], nonbias_params))
# +
optimizer = optim.SGD([{'params': bias_params, 'lr': 2*lr}, {'params': nonbias_params}],
lr=lr, momentum=0.9, weight_decay=0.0, nesterov=True)
scheduler = lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.1)
# -
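The bias/non-bias split above is just a filter over `named_parameters`; the same logic can be checked with stand-in objects instead of a real torch model:

```python
# Stand-in for torch.nn.Parameter: only requires_grad matters for the filter.
class FakeParam:
    def __init__(self, requires_grad=True):
        self.requires_grad = requires_grad

named_parameters = [
    ("conv1.weight", FakeParam()),
    ("conv1.bias", FakeParam()),
    ("frozen.bias", FakeParam(requires_grad=False)),  # excluded: no grad
]

# Same predicates as in the notebook: split trainable params by 'bias' in name.
bias_params = [p for name, p in named_parameters if "bias" in name and p.requires_grad]
nonbias_params = [p for name, p in named_parameters if "bias" not in name and p.requires_grad]

print(len(bias_params), len(nonbias_params))  # 1 1
```

The bias group is then given twice the base learning rate in the optimizer's parameter groups.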
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter('fashion_mnist_experiment_1')
datadir = get_data_dir("cmnist")
trainset = DCCPT_data(root=datadir, train=True, h5=False)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=256, shuffle=True)
dataiter = iter(trainloader)
next(dataiter)
images, labels = next(dataiter)
images.shape
torch.tensor(3)
labels
net(images, torch.tensor(1))
writer = SummaryWriter('fashion_mnist_experiment_1')
writer.add_graph(net, (images, torch.tensor(4)))
# writer.close()
| pytorch/dcc_conv.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import librosa
import numpy as np
# Arbitrary value. It has to be small; other values can be tried to see how they perform
DELTA_SPECTRUM_DELTA = 0.05
# Arbitrary value from the paper; other thresholds can be tried
SPECTRAL_ROLLOFF_THRESHOLD = 0.85
def sign(val):
if val >= 0:
return 1
else:
return -1
# TODO: ad hoc implementation; check how the real resolution is computed
def DFT_resolution(signal, sampling_rate):
return int(
np.ceil(
sampling_rate / len(signal)
)
)
def sample_spectrum(frecuency, i, t, delta):
return np.log(
frecuency[i, t]
) + delta
# deprecated; librosa already provides this
def zero_crossing_rate(signal):
rate = 0
for i in range(1, len(signal)):
rate = rate + np.absolute(
sign( signal[i] )
-
sign( signal[i - 1] )
)
    return rate / (len(signal) - 1)
# TODO: finish; how is K (the resolution of the DFT) computed?
def delta_spectrum(signal, sampling_rate, t_frame):
res = 0
DFT_res = DFT_resolution(signal, sampling_rate)
frecuency = np.abs(librosa.stft(signal))
for i in range(0, DFT_res):
res = res + np.square(
sample_spectrum(frecuency, i, t_frame, DELTA_SPECTRUM_DELTA)
-
sample_spectrum(frecuency, i, t_frame-1, DELTA_SPECTRUM_DELTA)
)
return res / (DFT_res - 1)
# TODO: finish; how is K (the resolution of the DFT) computed?
def spectral_rolloff(signal, sampling_rate, t_frame):
res = 0
upper_bound = 0
DFT_res = DFT_resolution(signal, sampling_rate)
frecuency = np.abs(librosa.stft(signal))
for i in range(0, DFT_res):
upper_bound = upper_bound + frecuency[i, t_frame]
upper_bound = upper_bound * SPECTRAL_ROLLOFF_THRESHOLD
for i in range(0, DFT_res):
proximo_val = res + frecuency[i, t_frame]
if proximo_val >= upper_bound:
return res
else:
res = proximo_val
def generate_delta_spectrum_feature_vector(signal, sampling_rate):
frecuency = librosa.stft(signal)
_, columns = frecuency.shape
feature_vector = np.array([])
for i in range(columns):
feature_vector = np.append(
feature_vector,
delta_spectrum(
signal,
sampling_rate,
i
)
)
return feature_vector
# @deprecated(version='1.2.1', reason="Habia una hecha en librosa")
def generate_spectral_rolloff_feature_vector(signal, sampling_rate):
frecuency = librosa.stft(signal)
_, columns = frecuency.shape
feature_vector = np.array([], dtype=float)
for i in range(columns):
feature_vector = np.append(
feature_vector,
spectral_rolloff(
signal,
sampling_rate,
i
)
)
return feature_vector
def generate_audio_features(signal, sampling_rate):
return np.array([
sum(librosa.feature.zero_crossing_rate(signal)[0]),
sum(librosa.feature.spectral_rolloff(signal, sampling_rate)[0]),
sum(librosa.feature.spectral_centroid(y=signal, sr=sampling_rate)[0]),
sum(librosa.feature.spectral_flatness(y=signal)[0])
])
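The rolloff definition used in `spectral_rolloff` above — the smallest bin below which a fixed fraction (85% here) of the total spectral magnitude accumulates — can be checked on a toy spectrum with made-up magnitudes:

```python
import numpy as np

# Hypothetical single-frame magnitude spectrum across 5 frequency bins.
spectrum = np.array([5.0, 3.0, 1.0, 0.5, 0.5])
threshold = 0.85 * spectrum.sum()  # 85% of total magnitude

# First bin where the running sum reaches the threshold.
cumulative = np.cumsum(spectrum)           # [5.0, 8.0, 9.0, 9.5, 10.0]
rolloff_bin = int(np.argmax(cumulative >= threshold))
print(rolloff_bin)  # 2
```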
# +
import sklearn.decomposition
from sklearn.decomposition import NMF
import librosa
from os import listdir, mkdir
from os.path import isfile, join, splitext, getsize
from numpy import linalg
def concatenate(a, b):
if a.size == 0: return b
return np.concatenate((a, b))
def containsAll(substrings, string):
res = True
for s in substrings:
res = res and (s in string)
return res
def containsAny(substrings, string):
res = False
for s in substrings:
res = res or (s in string)
return res
def getConcat(arr):
res = ""
for a in arr:
res = res + "_" + a
return res
def generate_stacked_new_row(a, b):
    if a.size == 0: return b  # special case when a has no elements yet
return np.vstack( (a, b) )
def filter_valid_names(names, substrings, not_substrings):
res = []
for name in names:
if containsAll(substrings, name) and not containsAny(not_substrings, name):
res.append(name)
return res
def getFiles(directory, substrings, not_substrings):
    # Try loading files by size
all_paths = [
file_path
for file_path
in listdir(directory)
if isfile(join(directory, file_path))
]
all_paths = filter_valid_names(all_paths, substrings, not_substrings)
return all_paths
def get_audio_features(file_path):
y, sr = librosa.load(file_path, sr=44100)
return generate_audio_features(y, sr)
def generate_instrument_dataset(directory, substrings, not_substrings, instrument):
instrument_dataset = np.array([])
files = getFiles(directory, substrings, not_substrings)
    # Build the instrument_dataset
for file in files:
file_path = join(directory, file)
instrument_dataset = generate_stacked_new_row(
instrument_dataset,
get_audio_features(file_path)
)
return instrument_dataset
def generate_dataset(instruments):
dataset = np.array([])
    labels = [] # Label of the i-th column
for directory, substrings, not_substrings, instrument in instruments:
print("Voy con el instrumento: " + instrument)
instrument_dataset = generate_instrument_dataset(
directory,
substrings,
not_substrings,
instrument
)
        # assemble the new dataset
dataset = concatenate(
dataset,
instrument_dataset
)
        # build the labels; the columns represent the feature vectors
for i in range(instrument_dataset.shape[0]):
labels.append(instrument)
    # Use the feature vectors as the dataset's columns
dataset = dataset.T
    # Compute the non-negative matrix factorization:
    # dataset = W * H
model = NMF(n_components=len(instruments), init='random', random_state=0)
W = model.fit_transform(dataset)
H = model.components_
    # Compute the Moore-Penrose pseudoinverse,
    # so the model becomes W^(-1) * dataset = H
W_inv = linalg.pinv(W)
return np.array(W_inv), np.array(H), labels
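The pseudoinverse step can be sanity-checked on synthetic matrices: when `dataset = W @ H` exactly and `W` has full column rank, `pinv(W) @ dataset` recovers `H`.

```python
import numpy as np

# Synthetic factorization: 4 features, 2 components, 3 audios.
rng = np.random.default_rng(0)
W = rng.random((4, 2))
H = rng.random((2, 3))
dataset = W @ H

# pinv(W) @ W is the 2x2 identity (W has full column rank), so H is recovered.
W_inv = np.linalg.pinv(W)
H_recovered = W_inv @ dataset
print(np.allclose(H_recovered, H))  # True
```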
# +
from sklearn.metrics import pairwise
from sklearn.neighbors import KDTree
import operator
import numpy as np
from numpy import linalg
VIOLIN = ("../audios/violin/retocadas_MIS", [], ["trill"], "violin")
FLAUTA = ("../audios/flauta/retocadas_MIS", [], ["trill"], "flauta")
TROMBON = ("../audios/trombon/retocadas_MIS", [], ["trill"], "trombon")
TROMPETA = ("../audios/trompeta/retocadas_MIS", [], ["trill"], "trompeta")
GUITARRA = ("../audios/guitar/retocadas", [], ["trill"], "guitarra")
CLASH_SYMBALS = ("../audios/cymbals/retocadas_MIS", [], [], "clash_symbals")
def calculateHitRate(rates):
success, fails, _ = rates
total = success + fails
return (success * 100 / total)
def calculateHits(dataset, k, test_path, substr, not_substr, test_label):
test_files = getFiles(test_path, substr, not_substr)
success = 0
fail = 0
predicted_labels = []
for file in test_files:
file_path = join(test_path, file)
predicted_label = predict(dataset, k, file_path)
predicted_labels.append(predicted_label)
if predicted_label == test_label:
success = success + 1
else:
fail = fail + 1
return success, fail, predicted_labels
def cosine_similarity(a, b):
numerador = a.T.dot(b)
denominador = linalg.norm(a) * linalg.norm(b)
return numerador / denominador
def euclidean_dist(a, b):
return linalg.norm(a-b)
def calcular_distancias(H_t, predicted):
distancias = []
for i in range(H_t.shape[0]):
distancias.append(
euclidean_dist(
predicted,
H_t[i]
)
)
return distancias
def get_key_from_max_value(dic):
return max(dic.items(), key=operator.itemgetter(1))[0]
def count_frecuencies(labels_ordenados):
frecuencias = dict()
for val in labels_ordenados:
if val in frecuencias.keys():
frecuencias[val] = frecuencias[val] + 1
else:
frecuencias[val] = 1
return frecuencias
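# The frequency count above is equivalent to the standard-library `collections.Counter`; a quick sketch with made-up labels:

```python
from collections import Counter

labels = ["violin", "flauta", "violin", "trombon", "violin"]
frecuencias = Counter(labels)
# most_common(1) also replaces get_key_from_max_value
print(frecuencias.most_common(1)[0][0])  # violin
```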
# We use k-nearest neighbours over the activation vectors (a cosine
# similarity measure could also be tried)
# predicted has shape: #instruments * 1
# H has shape: #instruments * #audios
# each column h_i has shape 1 * #instruments <- these are what we compare
def k_near_neighbors(predicted, acts, labels, k):
H_t = acts.T
# Compute the distances to all the other audios
distancias = calcular_distancias(H_t, predicted)
# Sort the labels by distance, closest first
labels_ordenados_por_distancia = [
label
for _,label
in sorted(
zip(distancias,labels),
reverse=False
)
]
# Count the frequencies of the first k labels
frecuencias = count_frecuencies(
labels_ordenados_por_distancia[:k]
)
# Predict the label that appeared most often
predicted_label = get_key_from_max_value(frecuencias)
return predicted_label
def predict(dataset, k, test_file_path):
comps_inv, acts, labels = dataset
test = get_audio_features(test_file_path)
# Generate the prediction for the test sample
W = comps_inv
test_vect = test
# To obtain the new activation vector we compute
# components^(-1) * test_feature_vector = test_activation
predicted = W.dot(test_vect.T)
# Find the instrument closest to the test sample
pred = k_near_neighbors(predicted, acts, labels, k)
return pred
# -
# Build the model used for prediction
instruments = [VIOLIN, FLAUTA, TROMBON, GUITARRA, CLASH_SYMBALS, TROMPETA]
W_inv, H, labels = generate_dataset(instruments=instruments)
# +
# Try out some predictions
VIOLIN_TEST = (4, "../audios/violin/retocadas_MIS/test", [], [], "violin")
FLAUTA_TEST = (4, "../audios/flauta/retocadas_MIS/test", [], [], "flauta")
TROMBON_TEST = (4, "../audios/trombon/retocadas_MIS/test", [], [], "trombon")
TROMPETA_TEST = (4, "../audios/trompeta/retocadas_MIS/test", [], [], "trompeta")
GUITARRA_TEST = (4, "../audios/guitar/retocadas/test", [], [], "guitarra")
CLASH_SYMBALS_TEST = (4, "../audios/cymbals/retocadas_MIS/test", [], [], "clash_symbals")
instruments_test = [VIOLIN_TEST, FLAUTA_TEST, TROMBON_TEST, GUITARRA_TEST, CLASH_SYMBALS_TEST, TROMPETA_TEST]
from sklearn.metrics import confusion_matrix
dataset = (W_inv, H, labels)
expected_values = []
predicted_values = []
for k, test_path, substring, not_substring, instrument in instruments_test:
success, fails, predicted_labels = calculateHits(dataset, k, test_path, substring, not_substring, instrument)
print(str(instrument) + " | hit_rate: " + str(calculateHitRate((success, fails, predicted_labels))))
predicted_values = predicted_values + predicted_labels
for i in predicted_labels:
expected_values.append(instrument)
conf_matrix = confusion_matrix(
expected_values,
predicted_values,
labels=["violin", "flauta", "trombon", "trompeta", "guitarra", "clash_symbals"]
)
print(conf_matrix)
# -
| Non-negative-matrices/Caracterizacion de Instrumentos/Notebooks/.ipynb_checkpoints/Instrument_classifier-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Hyper-parameter tuning
# First, let's fetch the "titanic" dataset directly from OpenML.
import pandas as pd
# In this dataset, missing values are encoded with the character `"?"`. We will tell Pandas about this when reading the CSV file.
df = pd.read_csv(
"https://www.openml.org/data/get_csv/16826755/phpMYEkMl.csv",
na_values='?'
)
df.head()
# The classification task is to predict whether or not a person will survive the Titanic disaster.
X_df = df.drop(columns='survived')
y = df['survived']
# We will split the data into a training and a testing set.
# +
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X_df, y, random_state=42, stratify=y
)
# -
# ## The typical machine-learning pipeline
# The titanic dataset is composed of mixed data types (i.e. numerical and categorical data). Therefore, we need to define a preprocessing pipeline for each data type and use a `ColumnTransformer` to process each type separately.
# First, let's define the different columns depending on their data type.
num_cols = ['age', 'fare']
cat_col = ['sex', 'embarked', 'pclass']
# Then, define the two preprocessing pipelines.
# +
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OrdinalEncoder
# some of the categories will be rare and we need to
# specify the categories in advance
categories = [X_df[column].unique() for column in X_df[cat_col]]
for cat in categories:
for idx, elt in enumerate(cat):
if not isinstance(elt, str) and np.isnan(elt):
cat[idx] = 'missing'
# define the pipelines
cat_pipe = make_pipeline(
SimpleImputer(strategy='constant', fill_value='missing'),
OrdinalEncoder(categories=categories)
)
num_pipe = SimpleImputer(strategy='mean')
# -
# Combine both preprocessing using a `ColumnTransformer`.
from sklearn.compose import ColumnTransformer
preprocessing = ColumnTransformer(
[('cat_preprocessor', cat_pipe, cat_col),
('num_preprocessor', num_pipe, num_cols)]
)
# Finally, let's create a pipeline made of the preprocessor and a random forest classifier.
# +
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
model = Pipeline([
('preprocessing', preprocessing),
('clf', RandomForestClassifier(n_jobs=-1, random_state=42))
])
# -
# # Influence of parameters tuning
# Machine-learning algorithms rely on parameters which will affect the performance of the final model. Scikit-learn provides default values for these parameters. However, using these default values does not necessarily lead to a model with the best performance.
# Let's set some parameters which may change the performance of the classifier.
model.get_params()
model.set_params(clf__n_estimators=2, clf__max_depth=2)
_ = model.fit(X_train, y_train)
print(f'Accuracy score on the training data: '
f'{model.score(X_train, y_train):.3f}')
print(f'Accuracy score on the testing data: '
f'{model.score(X_test, y_test):.3f}')
# <div class="alert alert-success">
# <p><b>QUESTIONS</b>:</p>
# <ul>
# <li>By analyzing the training and testing scores, what can you say about the model? Is it under- or over-fitting?</li>
# </ul>
# </div>
# <div class="alert alert-success">
# <p><b>QUESTIONS</b>:</p>
# <ul>
# <li>What if we don't limit the depth of the trees in the forest?</li>
# </ul>
# </div>
# <div class="alert alert-success">
# <p><b>QUESTIONS</b>:</p>
# <ul>
# <li>And for the case where the forest is composed of a large number of deep trees and each tree has no depth limit?</li>
# </ul>
# </div>
# # Use a grid-search instead
# The previous approach is really tedious, and we cannot be sure we have covered all possible cases. Instead, we can run an automatic search over all possible combinations of hyper-parameters and check the resulting performance of the model. One tool for such an exhaustive search is `GridSearchCV`.
# With grid-search, we need to specify the set of values we wish to test. The `GridSearchCV` will create a grid with all the possible combinations.
# +
from sklearn.model_selection import GridSearchCV
param_grid = {
'clf__n_estimators': [5, 50, 100],
'clf__max_depth': [3, 5, 8, None]
}
grid = GridSearchCV(model, param_grid=param_grid, n_jobs=-1, cv=5)
# -
# The obtained estimator is used like a normal estimator, via `fit`.
grid.fit(X_train, y_train)
# We can check the results of all combinations by looking at the `cv_results_` attribute.
df_results = pd.DataFrame(grid.cv_results_)
columns_to_keep = [
'param_clf__max_depth',
'param_clf__n_estimators',
'mean_test_score',
'std_test_score',
]
df_results = df_results[columns_to_keep]
df_results.sort_values(by='mean_test_score', ascending=False)
# <div class="alert alert-success">
# <p><b>QUESTIONS</b>:</p>
# <ul>
# <li>What might be a limitation of using a grid-search with several parameters and several values for each parameter?</li>
# </ul>
# </div>
# An alternative is `RandomizedSearchCV`. In this case, the parameter values are drawn from predefined distributions. Then, we make a number of successive draws and check the performance of each.
# +
from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV
param_distributions = {
'clf__n_estimators': randint(1, 100),
'clf__max_depth': randint(2, 15),
'clf__max_features': [1, 2, 3, 4, 5],
'clf__min_samples_split': [2, 3, 4, 5, 10, 30],
}
search = RandomizedSearchCV(
model, param_distributions=param_distributions,
n_iter=20, n_jobs=-1, cv=5, random_state=42
)
# -
_ = search.fit(X_train, y_train)
df_results = pd.DataFrame(search.cv_results_)
columns_to_keep = [
"param_" + param_name for param_name in param_distributions]
columns_to_keep += [
'mean_test_score',
'std_test_score',
]
df_results = df_results[columns_to_keep]
df_results = df_results.sort_values(by="mean_test_score", ascending=False)
df_results.head(5)
df_results.tail(5)
# <div class="alert alert-success">
# <p><b>EXERCISE</b>:</p>
# <p>Build a machine-learning pipeline using a <tt>HistGradientBoostingClassifier</tt> and fine tune your model on the Titanic dataset using a <tt>RandomizedSearchCV</tt>.</p>
# <p>You may want to set the parameter distributions in the following manner:</p>
# <ul>
# <li><tt>learning_rate</tt> with values ranging from 0.001 to 0.5 following a reciprocal distribution.</li>
# <li><tt>l2_regularization</tt> with values ranging from 0.0 to 0.5 following a uniform distribution.</li>
# <li><tt>max_leaf_nodes</tt> with integer values ranging from 5 to 30 following a uniform distribution.</li>
# <li><tt>min_samples_leaf</tt> with integer values ranging from 5 to 30 following a uniform distribution.</li>
# </ul>
# </div>
# +
# TODO
| 05_advanced_sklearn_usage/04_parameters_search.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Step 0: Pre-processing
import requests
import json
import pandas as pd
from pandas.io.json import json_normalize
from datetime import datetime
import numpy as np
# The base_path should be set to a location on your local machine where you'd like the script to output files and input source data from.
base_path = 'C:/Users/geoffc.REDMOND/OneDrive/Data512/A2/'
# # Step 1: Data acquisition
# First, we pull wikipedia article data and population reference bureau data from the CSV files we have in the base_path location.
# +
#wikipedia data
wiki_page_data = pd.read_csv(base_path+'page_data.csv',header=0)
wiki_page_data = wiki_page_data.sort_values(by=['rev_id'],ascending = True) #The data appears to be pre-sorted but better safe than sorry.
#population reference bureau data
prb_data = pd.read_csv(base_path+'population_prb.csv',header=2)
prb_data = prb_data.drop(prb_data.columns[[1,2,3,5]],axis=1) #Drop location type, timeframe, data type, footnotes
prb_data.columns = ['country','population'] #Rename columns
# -
# Next, define a function that calls the ORES API and returns JSON containing the given revision ids, the ORES quality prediction, and several other fields for the associated article. This code was (heavily) based on an example use of this API provided by <NAME>.
def get_ores_data(revision_ids, headers):
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
params = {'project' : 'enwiki',
'model' : 'wp10',
'revids' : '|'.join(str(x) for x in revision_ids)
}
api_call = requests.get(endpoint.format(**params))
return json.dumps(json.loads(api_call.text))
# Define a function that given the json returned by the ORES API will turn it into a sorted pandas dataframe object with just the revision ids and ORES quality prediction.
def process_ores_data(ores_json):
out = pd.read_json(pd.read_json(ores_json,'index',typ='frame')['scores'].to_json(),date_unit='us')['enwiki'].astype(str)
#take only the prediction from the json
out = out.str.split(',',expand=True)[0]
out = out.str.split("'",expand=True)[7]
out = out.to_frame()
out.columns = ['prediction']
out['rev_id'] = out.index
out = out.sort_values(by=['rev_id'],ascending = True)
return out
# Define a function which, given headers and a sorted list of rev_ids, will process the rev_ids through ORES in batches (the first id alone, then batches of 50) and return a sorted dataframe of predictions.
def process_rev_id_list(revision_ids, headers):
start = revision_ids[0:1]
out = process_ores_data(get_ores_data(start, headers))
#print('out:', out)
index = 1
inc = 50
list_len = len(revision_ids)
while (index < list_len):
end = min(list_len,index+inc)
#print('end:',end)
lst = revision_ids[index:end]
res = process_ores_data(get_ores_data(lst, headers))
#print('res:',res)
out = pd.concat([out, res])  # DataFrame.append was removed in pandas 2.0
index = end
return out
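# The batching logic above can be sketched generically — chunk a list into groups of at most a given size (note the original function processes the first id alone and then batches of 50; the sizes here are illustrative):

```python
def chunks(items, size=50):
    """Yield successive batches of at most `size` elements."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

batches = list(chunks(list(range(120)), size=50))
print([len(b) for b in batches])  # [50, 50, 20]
```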
# Using our functions, process the list of revisions_ids from the wikipedia article data.
headers = {'User-Agent' : 'https://github.com/gdc3000', 'From' : '<EMAIL>'}
input_ids = wiki_page_data['rev_id'].tolist()
output_df = process_rev_id_list(input_ids,headers)
# # Step 2: Data processing
# Now that we have the wikipedia page data, ORES quality scores and population data, we merge them into one table for analysis. First, we join the resulting ORES dataframe with the other wikipedia page fields.
wiki_page_data_wPrediction = pd.merge(wiki_page_data,output_df,on='rev_id',how='outer')
# Define a function which outputs a given dataframe to a CSV file with given name at the given path.
def expToCSV(path,filename,dataframe):
# use the passed-in dataframe rather than the global combined_data
dataframe.to_csv(path_or_buf=path+filename,sep=',', encoding='utf-8',index=False)
# Next, join the wikipedia and population data together on country. Where countries in the two datasets do not match, we will remove the row (i.e. we are doing an inner join).
# +
#convert population to numeric type
prb_data['population'] = prb_data['population'].str.replace(',', '')
prb_data['population'] = pd.to_numeric(prb_data['population'])
#combine data
combined_data = pd.merge(wiki_page_data_wPrediction,prb_data,on='country',how='inner')
combined_data = combined_data[['country','page','rev_id','prediction','population']]
combined_data.columns = ['country','article_name','revision_id','article_quality','population']
# -
# There are a few rows in the data where the ORES API couldn't return a quality score; for these, the returned quality score starts with "RevisionNotFound". We'll remove these rows from the data, even though there appear to be only two of them.
print(combined_data[combined_data.article_quality.str.match('RevisionNotFound',case=False)].shape)
combined_data_clean = combined_data[~combined_data.article_quality.str.match('RevisionNotFound',case=False)]
# Before starting the analysis step, we will export the scrubbed, combined data to a CSV file.
expToCSV(base_path,'final_data_a2.csv',combined_data_clean)
# # Step 3: Analysis
# Taking the table resulting from step 2, flag any articles where article_quality is 'FA' or 'GA' as high quality and add these flags as a field to the table. The warnings shown below should not affect our analysis.
combined_data['high_quality'] = 0
combined_data['high_quality'][combined_data['article_quality'] == 'FA'] = 1
combined_data['high_quality'][combined_data['article_quality'] == 'GA'] = 1
# Compute the proportion of high quality articles in terms of the total number of politician articles per country.
# +
qual_byarticlecount = combined_data.groupby('country',as_index=False).agg({'high_quality': np.sum, 'article_name': np.size})
qual_byarticlecount.columns = ['country','high_quality','article_count'] #fix column name
qual_byarticlecount['proportion'] = qual_byarticlecount['high_quality'] / qual_byarticlecount['article_count']
# -
# Next, compute the proportion of total articles in terms of the population of each country.
# +
qual_bypop = combined_data.groupby(['country','population'],as_index=False).agg({'article_name': np.size})
qual_bypop.columns = ['country','population','article_count'] #fix column name
qual_bypop['proportion'] = qual_bypop['article_count'] / qual_bypop['population']
# -
# Next, sort these tables by the proportion.
qual_bypop = qual_bypop.sort_values(by=['proportion','population'],ascending = [False,True])
qual_byarticlecount = qual_byarticlecount.sort_values(by=['proportion','article_count'],ascending = [False,True])
# Display the 10 highest-ranked countries in terms of proportion of politician articles to a country's population.
qual_bypop.head(10)
# Display the 10 lowest-ranked countries in terms of proportion of politician articles to a country's population. For countries with an equivalent proportion (if that occurred), we show those with the highest population at the bottom.
qual_bypop.tail(10)
# Display the 10 highest-ranked countries in terms of the proportion of high-quality politician articles to total articles.
qual_byarticlecount.head(10)
# Display the 10 lowest-ranked countries in terms of the proportion of high-quality politician articles to total articles. For countries with an equivalent proportion of high quality articles, we show those with the highest total article_count at the bottom.
qual_byarticlecount.tail(10)
| hcds-a2-bias.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import import_ipynb
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from plotting_capabilities import plot_network, plot_route
from extraction_preprocessing_visualisation import normalise_two, read_raw
from helper_functions_distance_calculator import choose_closest, calculate_dist, euclidean_distance
# # network architecture :
#
# we first have to initialise random weights as our neurons. it is favourable to initialise the weights in the range (0, 1), as the computation will be easier.
def generate_random_network_weights(size):
"""
parameters :
1) size = number of neurons in the network : dtype - int
"""
return np.random.rand(size, 2)
# instead of generating a neuron grid, we will arrange the neurons in a ring in which each neuron is aware of only the neuron just ahead of it and the neuron just behind it. this elastic ring will expand and try to fit the cities
def get_gaussian_neighborhood(center, radius, domain):
"""
returns the gaussian neighbourhood around the specified center
parameters :
1) center = the center of the gaussian
2) radius = the search distance for the gaussian
3) domain = a numpy array that defines the domain of the gaussian
"""
if radius < 1:
radius = 1
deltas = np.absolute(center - np.arange(domain))
distances = np.minimum(deltas, domain - deltas)
return np.exp(-(distances*distances) / (2*(radius*radius)))
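# A small demo of the neighbourhood function (the function body is repeated here so the sketch is self-contained; the parameter values are illustrative): because distances are taken modulo the ring length, the bump wraps around the ends, which is what makes the topology a ring rather than a line.

```python
import numpy as np

# same definition as above, repeated for a self-contained demo
def get_gaussian_neighborhood(center, radius, domain):
    if radius < 1:
        radius = 1
    deltas = np.absolute(center - np.arange(domain))
    distances = np.minimum(deltas, domain - deltas)
    return np.exp(-(distances * distances) / (2 * (radius * radius)))

g = get_gaussian_neighborhood(center=0, radius=2, domain=10)
# neighbours on both sides of the ring get equal weight
print(np.isclose(g[1], g[9]))  # True
print(g.argmax())              # 0
```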
def route_find(cities, network):
"""we have to compute the route generated by the network.
to do this we will first select the winner neuron using
the choose_closest function, then arrange the cities
in ascending order of winner index"""
cities['w'] = cities[['x', 'y']].apply(lambda x : choose_closest(network, x), axis = 1, raw = True)
# it is important that we set raw = True, as the applied function will then receive ndarray inputs instead of Series
return cities.sort_values('w').index
def SELF_ORGANISING_MAP(dataset, epochs, learning_rate, decay_rate, threshold, file_path):
"""
dataset = the pandas dataframe we are working on (city num, x, y)
epochs = the number of rounds the learning phase runs for
learning_rate = initial value of the learning rate
decay_rate = the rate at which both the number of neurons and the learning rate decay
threshold = a parameter that tells us when to stop the learning
file_path = the path of the folder in which the .png files will be stored
"""
cities = dataset.copy()
# step 1 - normalise the coordinates using the normalisation function.
cities = normalise_two(cities)
# step 2 - decide the neuron population size - we have exactly n neurons at the start
if cities.shape[0]< 20000:
n = cities.shape[0] * np.random.randint(1,11)
else:
n = cities.shape[0]
# step 3 - initialise a network using the generate network function :
NN = generate_random_network_weights(n)
print(f"initialised a network of {n} neurons. starting the learning process:")
for epoch in range(epochs):
if not epoch % 1000:
print(f"Iteration {epoch}/{epochs}")
# step 4 - choose a city randomly from the list of cities
city = cities.sample(1)[['x', 'y']].values
# step 5 - compute the winner neuron
winner = choose_closest(NN, city)
# step 6 - define the neighbourhood
neighbourhood = get_gaussian_neighborhood(winner, n//10, NN.shape[0])
# step 7 - update the weights according to the relation
NN += neighbourhood[:, np.newaxis] * learning_rate * (city - NN)
# step 8 - to ensure the convergence, decay the learning rate
learning_rate = learning_rate * decay_rate
n = n * decay_rate
if not epoch % 1000 :
# here we will try to plot the state of the NN using the plotting capabilities.
plot_network(cities, NN, name = file_path+('{:05d}.png'.format(epoch)))
if (n < 1) :
# if the search space is exhausted and no neuron remains (the radius has decayed to zero), we stop the search.
print(f"radius has shrunk completely to just one neuron, finishing the learning phase at {epoch} epochs")
break
if (learning_rate < threshold):
print(f"learning rate has completely decayed, finishing the learning phase at {epoch} epochs")
break
plot_network(cities, NN, name = file_path+'final.png')
optimal_route = route_find(cities, NN)
plot_route(cities,optimal_route,file_path+'route.png')
# return the optimal route
return optimal_route
# # testing the function on uruguay dataset :
route1 = SELF_ORGANISING_MAP(read_raw('/Users/adityagarg/Desktop/project.nosync/data/uy734.tsp'), 10000, 0.9999, 0.9997, 0.001, '/Users/adityagarg/Desktop/project.nosync/diagrams/uruguay-10000-0.9999/')
route2 = SELF_ORGANISING_MAP(read_raw('/Users/adityagarg/Desktop/project.nosync/data/uy734.tsp'), 10000, 0.8000, 0.9997, 0.001, '/Users/adityagarg/Desktop/project.nosync/diagrams/uruguay-10000-0.8000/')
route3 = SELF_ORGANISING_MAP(read_raw('/Users/adityagarg/Desktop/project.nosync/data/uy734.tsp'), 100000, 0.9998, 0.9997, 0.001, '/Users/adityagarg/Desktop/project.nosync/diagrams/uruguay-100000-0.9998/')
route4 = SELF_ORGANISING_MAP(read_raw('/Users/adityagarg/Desktop/project.nosync/data/uy734.tsp'), 100000, 0.9997, 0.9997, 0.00001, '/Users/adityagarg/Desktop/project.nosync/diagrams/uruguay-100000-0.9997/')
# if you open the files you will see that these solutions are sub-optimal: they contain loops, which shouldn't appear in an optimal route
# # testing the network on qatar dataset :
route_qa = SELF_ORGANISING_MAP(read_raw('/Users/adityagarg/Desktop/project.nosync/data/qa194.tsp'), 100000, 0.99999, 0.99998, 0.0001, '/Users/adityagarg/Desktop/project.nosync/diagrams/qatar-100000-0.99999/')
route_qa_1 = SELF_ORGANISING_MAP(read_raw('/Users/adityagarg/Desktop/project.nosync/data/qa194.tsp'), 100000, 0.9997, 0.99998, 0.001, '/Users/adityagarg/Desktop/project.nosync/diagrams/qatar-100000-0.9997/')
route_qa_2 = SELF_ORGANISING_MAP(read_raw('/Users/adityagarg/Desktop/project.nosync/data/qa194.tsp'), 100000, 0.99999, 0.80000, 0.001, '/Users/adityagarg/Desktop/project.nosync/diagrams/qatar-100000-0.99997/')
route_qa_3 = SELF_ORGANISING_MAP(read_raw('/Users/adityagarg/Desktop/project.nosync/data/qa194.tsp'), 100000, 0.99988, 0.95555, 0.00001, '/Users/adityagarg/Desktop/project.nosync/diagrams/qatar-100000-0.99988/')
# we can use the built-in <b>Grid Search-CV</b> for tuning the hyper-parameters, like the learning rate and decay rate, to find an optimal solution. we leave that as further work after this project; whoever implements this can use the "distance of the optimal solution" as the performance metric and apply the grid search for parameter tuning
| notebooks/neuron.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://www.kaggle.com/luisramrez/study-plan-notebook?scriptVersionId=88428646" target="_blank"><img align="left" alt="Kaggle" title="Open in Kaggle" src="https://kaggle.com/static/images/open-in-kaggle.svg"></a>
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 1.101187, "end_time": "2022-02-22T00:55:11.385007", "exception": false, "start_time": "2022-02-22T00:55:10.28382", "status": "completed"} tags=[]
import numpy as np
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
print("Setup Complete")
import warnings
warnings.filterwarnings("ignore")
# + papermill={"duration": 0.03988, "end_time": "2022-02-22T00:55:11.441376", "exception": false, "start_time": "2022-02-22T00:55:11.401496", "status": "completed"} tags=[]
utp_filepath = "../input/utp-computer-and-systems-engineering-study-plan/UTP_Computer_and_Systems_Engineering_Study_Plan-esp.csv"
utp_data = pd.read_csv(utp_filepath, error_bad_lines=False)
# + papermill={"duration": 0.043327, "end_time": "2022-02-22T00:55:11.503107", "exception": false, "start_time": "2022-02-22T00:55:11.45978", "status": "completed"} tags=[]
utp_data.head()
# + papermill={"duration": 0.525865, "end_time": "2022-02-22T00:55:12.043998", "exception": false, "start_time": "2022-02-22T00:55:11.518133", "status": "completed"} tags=[]
#Bar chart with the number of subjects according to the number of credits
plt.figure(figsize=(10,6))
plt.title("Cantidad de asignaturas por número de créditos")
sns.barplot(x=utp_data['Cred'], y=utp_data.index)
plt.ylabel("Cantidad de asignaturas")
# + [markdown] papermill={"duration": 0.016093, "end_time": "2022-02-22T00:55:12.077268", "exception": false, "start_time": "2022-02-22T00:55:12.061175", "status": "completed"} tags=[]
# We just adjust the column names
# + papermill={"duration": 0.024535, "end_time": "2022-02-22T00:55:12.118008", "exception": false, "start_time": "2022-02-22T00:55:12.093473", "status": "completed"} tags=[]
utp_data.columns =['Cod', 'Anio', 'Semestre_Anio', 'Asignatura', 'Hclas', 'Hlab', 'Cred',
'Requisito_1', 'Requisito_2']
# + papermill={"duration": 0.038363, "end_time": "2022-02-22T00:55:12.172937", "exception": false, "start_time": "2022-02-22T00:55:12.134574", "status": "completed"} tags=[]
utp_data.info()
# + papermill={"duration": 0.026764, "end_time": "2022-02-22T00:55:12.217559", "exception": false, "start_time": "2022-02-22T00:55:12.190795", "status": "completed"} tags=[]
utp_data['Requisito_2'] = utp_data['Requisito_2'].apply(lambda x: x.replace(';',''))
# + papermill={"duration": 0.043149, "end_time": "2022-02-22T00:55:12.277732", "exception": false, "start_time": "2022-02-22T00:55:12.234583", "status": "completed"} tags=[]
utp_data
# + [markdown] papermill={"duration": 0.017879, "end_time": "2022-02-22T00:55:12.314066", "exception": false, "start_time": "2022-02-22T00:55:12.296187", "status": "completed"} tags=[]
# Let's see if we can mix it up a little bit.
# We can ask how many credits need to be passed per year
#
# + papermill={"duration": 0.266155, "end_time": "2022-02-22T00:55:12.598251", "exception": false, "start_time": "2022-02-22T00:55:12.332096", "status": "completed"} tags=[]
plt.figure(figsize=(13,7))
x = utp_data.groupby('Anio')['Cred'].sum()
print(x)
plt.title("Cantidad de créditos por años")
f = sns.barplot(x=x.index,y=x)
# + [markdown] papermill={"duration": 0.019981, "end_time": "2022-02-22T00:55:12.638638", "exception": false, "start_time": "2022-02-22T00:55:12.618657", "status": "completed"} tags=[]
# According to the graph, we have fewer and fewer credits per year once we get past the second year,
# where the maximum amount of credits is concentrated.
# Of course, it's normal that there are fewer credits later on: in most programmes the first two years are full of diverse themes, and as you get deeper you specialize in a few subjects that increase in difficulty.
# + papermill={"duration": 0.029831, "end_time": "2022-02-22T00:55:12.688769", "exception": false, "start_time": "2022-02-22T00:55:12.658938", "status": "completed"} tags=[]
def subject_that_depends_on_last_year(year):
"""This function calculates the subjects
that are in the grade of precedence by year,
so you can know which subjects you need to pass in order to keep studying"""
subjects_by_year = utp_data[(utp_data['Anio']<=year)]
codes_subjects = list(set(subjects_by_year['Cod']).intersection(set(subjects_by_year['Requisito_1'])))
for item in codes_subjects:
print(subjects_by_year[subjects_by_year.Cod == item]['Asignatura'].values)
# + papermill={"duration": 0.047422, "end_time": "2022-02-22T00:55:12.756106", "exception": false, "start_time": "2022-02-22T00:55:12.708684", "status": "completed"} tags=[]
subject_that_depends_on_last_year(5)
# + [markdown] papermill={"duration": 0.019182, "end_time": "2022-02-22T00:55:12.795021", "exception": false, "start_time": "2022-02-22T00:55:12.775839", "status": "completed"} tags=[]
# It's very interesting that there are several subjects you absolutely need to pass, but
# there is no precedence requirement for the final graduation work.
# Depending on which university you go to, all of these subjects may form a precedence requirement for the final work,
# but you obviously need to pass each of them for graduation to come. =)
#
#
# + [markdown] papermill={"duration": 0.020708, "end_time": "2022-02-22T00:55:12.835823", "exception": false, "start_time": "2022-02-22T00:55:12.815115", "status": "completed"} tags=[]
# Usually, subjects with 4 or more credits tend to be slightly harder than those with 3 or fewer, and this may be related to the hours per class or lab.
# + papermill={"duration": 0.233615, "end_time": "2022-02-22T00:55:13.089999", "exception": false, "start_time": "2022-02-22T00:55:12.856384", "status": "completed"} tags=[]
df_3= utp_data[utp_data['Cred']>=4]
df_3['horas_totales'] = df_3['Hclas']+df_3['Hlab']
y = df_3.groupby('Asignatura')['horas_totales'].sum()
sns.countplot(y)
y
# + [markdown] papermill={"duration": 0.020764, "end_time": "2022-02-22T00:55:13.131177", "exception": false, "start_time": "2022-02-22T00:55:13.110413", "status": "completed"} tags=[]
# This graph shows that, combining lab and class hours for subjects with 4 or more credits, you will normally have at least 5 hours a week per class for a semester, which is what it takes to finish the course.
# I wouldn't say that these subjects are the most difficult, but of course just by name they are not the easiest of tasks.
# for example:
# Física (Electricidad y Magnetismo) II 6
# Física (Mecánica) I 6
# These two are essential for any engineer. Physics is a powerful subject because it lets you, as an engineering student, understand why engines work (maybe you don't need to know exactly everything);
# the labs may give some insights on how things are done.
# If you can find more insights in the data, please go ahead and play.
| study-plan-notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.6 64-bit
# name: python3
# ---
# # Import requirements
import pandas as pd
import numpy as np
import geemap, ee
import matplotlib.pyplot as plt
try:
ee.Initialize()
except Exception:
ee.Authenticate()
ee.Initialize()
# # Call VIIRS and Tehran Shapefiles
# +
# Nighttime light data
viirs = ee.ImageCollection(
"NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG").select("avg_rad")
# NASA Climate
## climate = ee.ImageCollection("NASA/GPM_L3/IMERG_MONTHLY_V06")
# Terra Climate
## climate2 = ee.ImageCollection("IDAHO_EPSCOR/TERRACLIMATE")
# FLDAS Climate
fldas = ee.ImageCollection("NASA/FLDAS/NOAH01/C/GL/M/V001")
# Tehran
teh = ee.FeatureCollection("users/amirhkiani1998/teh").select("Asrea").geometry()
# -
# # Having a look at the clipped VIIRS!
gisMap = geemap.Map()
gisMap.addLayer(viirs.mean().clip(teh))
gisMap.centerObject(teh)
gisMap
gisMap = geemap.Map()
gisMap.addLayer(fldas.mean(), {
"bands": ["Rainf_f_tavg"],
"palette": ["000000", "123456","234256", "452356","752829" , "ffffff"]})
gisMap.centerObject(teh)
gisMap
# # Getting GHSL in 2015
ghslList = ee.ImageCollection("JRC/GHSL/P2016/SMOD_POP_GLOBE_V1").toList(4)
ghslImage2015 = ee.Image(ghslList.get(3)) #2015-01-01
# # Having a look at GHSL layer
gisMap = geemap.Map()
gisMap.addLayer(ghslImage2015.select("smod_code"), {
"min": 0.0, "max": 3.0, "palette": ['000000', '448564', '70daa4', 'ffffff']})
gisMap.centerObject(teh)
gisMap
# # Getting VIIRS and climate in 2015
viirs2015 = viirs.filterDate("2015-01-01", "2015-12-01").mean()
# climate2015 = climate.filterDate("2015-01-01", "2015-12-01").mean()
fldas2015 = fldas.filterDate("2015-01-01", "2015-12-01").mean()
# # Add bands to each other
fusion = viirs2015.addBands(ghslImage2015).addBands(
fldas2015.select(["Qg_tavg", "RadT_tavg", "Rainf_f_tavg"]))
fusionArray = fusion.sampleRegions(collection=teh, scale=2000).getInfo()
mainList = fusionArray["features"]
splitDict = []
for innerDict in mainList:
splitDict.append(innerDict["properties"])
dataframe = pd.DataFrame(splitDict)
dataframe.head()
# # Check correlation
dataframe.corr().style.background_gradient(cmap="coolwarm")
# # Plotting the average radiance against precipitation
plt.figure(figsize=(18,8))
plt.title("Precipitation-Nighttime")
plt.xlabel("Precipitation", fontsize=23)
plt.ylabel("Average Radiance", fontsize=24)
plt.scatter(dataframe.Rainf_f_tavg, dataframe.avg_rad, c=dataframe.smod_code)
plt.show()
# # Plotting the average radiance against precipitation just for urban and rural areas
plt.figure(figsize=(18,8))
plt.title("Precipitation-Nighttime")
plt.xlabel("Precipitation", fontsize=23)
plt.ylabel("Average Radiance", fontsize=24)
# urban
plt.scatter(dataframe[dataframe["smod_code"] == 3].Rainf_f_tavg, dataframe[dataframe["smod_code"] == 3].avg_rad, color="red", label="Urban Areas")
# rural
plt.scatter(dataframe[dataframe["smod_code"] == 1].Rainf_f_tavg, dataframe[dataframe["smod_code"] == 1].avg_rad, color="blue", label="Rural Areas")
plt.legend()
plt.show()
# # Preparing train and test data (using Precipitation and Average Radiance)
# +
from sklearn.model_selection import train_test_split
# Turn the features into a matrix (note: the target should stay a 1-D array, not a matrix)
X = np.c_[dataframe[(dataframe["smod_code"] == 1) | (dataframe["smod_code"] == 3)].Rainf_f_tavg.values,
dataframe[(dataframe["smod_code"] == 1) | (dataframe["smod_code"] == 3)].avg_rad.values
]
y = dataframe[(dataframe["smod_code"] == 1) | (
(dataframe["smod_code"] == 3))].smod_code.values.ravel()
# Prepare the train and test data (80% of the data will be used for training)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# -
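# As a quick aside, the `np.c_` helper used above stacks 1-D arrays column-wise into a feature matrix of shape `(n_samples, n_features)`. A minimal sketch with toy stand-in data (not the real Tehran values):

```python
import numpy as np

# Two toy 1-D feature arrays (stand-ins for precipitation and radiance)
rain = np.array([1.0, 2.0, 3.0])
rad = np.array([10.0, 20.0, 30.0])

# np.c_ concatenates them as columns: one row per sample, one column per feature
X_toy = np.c_[rain, rad]
print(X_toy.shape)  # (3, 2)
```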
# # SVM Classification (Linear)
# +
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
svm_clf = Pipeline(
[
("standard", StandardScaler()),
        ("linear_svc", LinearSVC(C=1, loss="hinge", max_iter=12000))
])
# -
# # Fit the train to the model
svm_clf.fit(X_train, y_train)
print("Accuracy = ", round(svm_clf.score(X_test, y_test)*100), "%",sep="")
# # Plot the Classification
# +
# The X axis (horizontal axis) is the precipitation
min_x, max_x = min(dataframe.Rainf_f_tavg.values)*0.9, max(dataframe.Rainf_f_tavg.values)*1.1
# The y axis (vertical axis) is the average radiance
min_y, max_y = (min(dataframe.avg_rad.values)-10)*0.8, max(dataframe.avg_rad.values)*1.1
# define the steps
# step = 0.00005
xx , yy = np.meshgrid(np.arange(min_x, max_x, (max_x-min_x)/100), np.arange(min_y, max_y, (max_y-min_y)/100))
Z = svm_clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.figure(figsize=(18, 8))
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.5)
plt.title("Precipitation-Nighttime")
plt.xlabel("Precipitation", fontsize=23)
plt.ylabel("Average Radiance", fontsize=24)
# urban
plt.scatter(dataframe[dataframe["smod_code"] == 3].Rainf_f_tavg,
dataframe[dataframe["smod_code"] == 3].avg_rad, color="red", label="Urban Areas")
# rural
plt.scatter(dataframe[dataframe["smod_code"] == 1].Rainf_f_tavg,
dataframe[dataframe["smod_code"] == 1].avg_rad, color="blue", label="Rural Areas")
plt.legend()
plt.show()
# -
# # SVM Classification (Non-Linear)
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
svm_clf_nonlinear = Pipeline(
[
("standard", StandardScaler()),
        ("rbf_svc", SVC(kernel="rbf", gamma=5, C=0.1))
])
# # Fit the train data to model
svm_clf_nonlinear.fit(X_train, y_train)
print("Accuracy = ", round(svm_clf_nonlinear.score(X_test, y_test)*100), "%", sep="")
# # Plot the classification
# +
# The X axis (horizontal axis) is the precipitation
min_x, max_x = min(dataframe.Rainf_f_tavg.values) * \
min_x, max_x = min(dataframe.Rainf_f_tavg.values) * \
0.9, max(dataframe.Rainf_f_tavg.values)*1.1
# The y axis (vertical axis) is the average radiance
min_y, max_y = (min(dataframe.avg_rad.values)-10) * \
0.8, max(dataframe.avg_rad.values)*1.1
# define the steps
# step = 0.00005
xx, yy = np.meshgrid(np.arange(min_x, max_x, (max_x-min_x)/100),
np.arange(min_y, max_y, (max_y-min_y)/100))
Z = svm_clf_nonlinear.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.figure(figsize=(18, 8))
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.5)
plt.title("Precipitation-Nighttime")
plt.xlabel("Precipitation", fontsize=23)
plt.ylabel("Average Radiance", fontsize=24)
# urban
plt.scatter(dataframe[dataframe["smod_code"] == 3].Rainf_f_tavg,
dataframe[dataframe["smod_code"] == 3].avg_rad, color="red", label="Urban Areas")
# rural
plt.scatter(dataframe[dataframe["smod_code"] == 1].Rainf_f_tavg,
dataframe[dataframe["smod_code"] == 1].avg_rad, color="blue", label="Rural Areas")
plt.legend()
plt.show()
# -
# # SVM (Linear) Classification - 1D Classification
from sklearn.model_selection import train_test_split
dataframeCustom = dataframe[(dataframe["smod_code"]
== 1) | (dataframe["smod_code"] == 3)]
X2_train, X2_test, y2_train, y2_test = train_test_split(np.c_[dataframeCustom.avg_rad.values], dataframeCustom.smod_code.values, test_size = 0.2, random_state=42)
svm_clf.fit(X2_train, y2_train)
print("Accuracy = ", round(svm_clf.score(X2_test, y2_test)*100), "%", sep="")
# +
# The X axis (horizontal axis) is the average radiance
min_x, max_x = (min(dataframe.avg_rad.values)-1)*0.9, max(dataframe.avg_rad.values)*1.1
# The y axis (vertical axis) is a dummy range, used only for plotting
min_y, max_y = -1, +1
xx, yy = np.meshgrid(np.arange(min_x, max_x, (max_x-min_x)/500),
np.arange(min_y, max_y, (max_y-min_y)/500))
Z = svm_clf.predict(np.c_[xx.ravel()])
Z = Z.reshape(xx.shape)
plt.figure(figsize=(18, 8))
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.5)
plt.title("Model Results", fontsize=25)
plt.xlabel("Average Radiance", fontsize=23)
# Draw the horizontal line on y = 0
plt.axhline(y=0, color="#000000", linestyle="-", zorder=1)
# Strip the vertical axis numbers
plt.yticks(np.full(dataframe[dataframe["smod_code"] == 1].avg_rad.shape, 0), "")
# urban
plt.scatter(dataframe[dataframe["smod_code"] == 3].avg_rad,
np.full(dataframe[dataframe["smod_code"] == 3].avg_rad.shape,0), color="red", label="Urban Areas", zorder=5)
# rural
plt.scatter(dataframe[dataframe["smod_code"] == 1].avg_rad,
np.full(dataframe[dataframe["smod_code"] == 1].avg_rad.shape,0), color="blue", label="Rural Areas", zorder=5)
plt.legend()
plt.show()
# -
# # SVM (Non-Linear) Classification - 1D Classification
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
svm_clf_nonlinear = Pipeline(
[
("standard", StandardScaler()),
        ("rbf_svc", SVC(kernel="rbf", gamma=5, C=0.1))
])
svm_clf_nonlinear.fit(X2_train, y2_train)
svm_clf_nonlinear.score(X2_test, y2_test)
# +
# The X axis (horizontal axis) is the average radiance
min_x, max_x = (min(dataframe.avg_rad.values)-1) * \
0.9, max(dataframe.avg_rad.values)*1.1
# The y axis (vertical axis) is a dummy range, used only for plotting
min_y, max_y = -1, +1
xx, yy = np.meshgrid(np.arange(min_x, max_x, (max_x-min_x)/500),
np.arange(min_y, max_y, (max_y-min_y)/500))
Z = svm_clf_nonlinear.predict(np.c_[xx.ravel()])
Z = Z.reshape(xx.shape)
plt.figure(figsize=(18, 8))
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.5)
plt.title("Model Results", fontsize=25)
plt.xlabel("Average Radiance", fontsize=23)
# Draw the horizontal line on y = 0
plt.axhline(y=0, color="#000000", linestyle="-", zorder=1)
# Strip the vertical axis numbers
plt.yticks(
np.full(dataframe[dataframe["smod_code"] == 1].avg_rad.shape, 0), "")
# urban
plt.scatter(dataframe[dataframe["smod_code"] == 3].avg_rad,
np.full(dataframe[dataframe["smod_code"] == 3].avg_rad.shape, 0), color="red", label="Urban Areas", zorder=5)
# rural
plt.scatter(dataframe[dataframe["smod_code"] == 1].avg_rad,
np.full(dataframe[dataframe["smod_code"] == 1].avg_rad.shape, 0), color="blue", label="Rural Areas", zorder=5)
plt.legend()
plt.show()
# -
# # Check the model in other provinces (Isfahan)
# ## Calling the Isfahan geometry from Google Earth Engine
isf = ee.FeatureCollection("users/amirhkiani1998/isf").geometry()
# ## Making fusion data
fusionIsf = viirs2015.addBands(ghslImage2015).addBands(
    fldas2015.select(["Qg_tavg", "RadT_tavg", "Rainf_f_tavg"]))
fusionIsfArray = fusionIsf.sampleRegions(collection=isf, scale=6000).getInfo()
# ## Making dataframe
mainList = fusionIsfArray["features"]
splitDict = []
for innerDict in mainList:
splitDict.append(innerDict["properties"])
dataframe = pd.DataFrame(splitDict)
dataframe.head()
# ## Correlation matrix
dataframe.corr().style.background_gradient(cmap="coolwarm")
isfPrediction = svm_clf_nonlinear.score(np.c_[dataframe[(dataframe["smod_code"] == 1) | (dataframe["smod_code"] == 3)].avg_rad.values], dataframe[(dataframe["smod_code"] == 1) | (dataframe["smod_code"] == 3)].smod_code.values)
# ## Checking the prediction score
isfPrediction
# # What if we use all the variables?
# ## Tehran
fusion = viirs2015.addBands(ghslImage2015).addBands(
fldas2015.select(["Qg_tavg", "RadT_tavg", "Rainf_f_tavg"]))
fusionArray = fusion.sampleRegions(collection=teh, scale=2000).getInfo()
mainList = fusionArray["features"]
splitDict = []
for innerDict in mainList:
splitDict.append(innerDict["properties"])
dataframe = pd.DataFrame(splitDict)
dataframe.head()
X = np.c_[dataframe.Qg_tavg.values,
dataframe.RadT_tavg.values, dataframe.Rainf_f_tavg.values, dataframe.avg_rad.values]
y = dataframe.smod_code.values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
svm_clf_nonlinear = Pipeline(
[
("standard", StandardScaler()),
("linear_svc", SVC(kernel="rbf", gamma=5, C=0.1))
])
svm_clf_nonlinear.fit(X_train, y_train)
svm_clf_nonlinear.score(X_test, y_test)
# ## Check For Isfahan
fusion = viirs2015.addBands(ghslImage2015).addBands(
fldas2015.select(["Qg_tavg", "RadT_tavg", "Rainf_f_tavg"]))
fusionArray = fusion.sampleRegions(collection=isf, scale=6000).getInfo()
mainList = fusionArray["features"]
splitDict = []
for innerDict in mainList:
splitDict.append(innerDict["properties"])
dataframe = pd.DataFrame(splitDict)
dataframe.head()
X_isfahan = np.c_[dataframe.Qg_tavg.values,
dataframe.RadT_tavg.values, dataframe.Rainf_f_tavg.values, dataframe.avg_rad.values]
y_isfahan = dataframe.smod_code.values
svm_clf_nonlinear.score(X_isfahan, y_isfahan)
# ## Check for Fars
fars = ee.FeatureCollection("users/amirhkiani1998/frs").geometry()
fusion = viirs2015.addBands(ghslImage2015).addBands(
fldas2015.select(["Qg_tavg", "RadT_tavg", "Rainf_f_tavg"]))
fusionArray = fusion.sampleRegions(collection=fars, scale=6000).getInfo()
mainList = fusionArray["features"]
splitDict = []
for innerDict in mainList:
splitDict.append(innerDict["properties"])
dataframe = pd.DataFrame(splitDict)
dataframe.head()
X_fars = np.c_[dataframe.Qg_tavg.values,
dataframe.RadT_tavg.values, dataframe.Rainf_f_tavg.values, dataframe.avg_rad.values]
y_fars = dataframe.smod_code.values
svm_clf_nonlinear.score(X_fars, y_fars)
| tehran-nighttime.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport
from sklearn.datasets import load_diabetes
diab_data = load_diabetes()
df=pd.DataFrame(data=diab_data.data,columns=diab_data.feature_names)
df.head()
df.columns
### Create a simple profiling report quickly
profile = ProfileReport(df, title='Pandas Profiling Report', explorative=True)
profile.to_widgets()
profile.to_file("output.html")
| data_visualization/Pandas-profiling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# # Demo: Audio-audio synchronization with chroma features and MrMsDTW
#
# In this notebook, we'll show a minimal example for the use of the SyncToolbox for music synchronization. We will take two recordings of the same musical piece (the first song of <NAME>'s "Winterreise"), compute chroma representations of both recordings and align them using classical dynamic time warping (DTW) and multi-resolution multi-scale DTW (MrMsDTW). We will also compare the runtimes of the two algorithms.
#
# For an explanation of chroma features and DTW, see [1].
# + pycharm={"name": "#%%\n"}
# Loading some modules and defining some constants used later
import time
import librosa.display
import matplotlib.pyplot as plt
import IPython.display as ipd
from libfmp.b.b_plot import plot_signal, plot_chromagram
from libfmp.c3.c3s2_dtw_plot import plot_matrix_with_points
from synctoolbox.dtw.core import compute_warping_path
from synctoolbox.dtw.cost import cosine_distance
from synctoolbox.dtw.mrmsdtw import sync_via_mrmsdtw
# %matplotlib inline
Fs = 22050
N = 2048
H = 1024
feature_rate = int(22050 / H)
figsize = (9, 3)
# -
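# A note on `feature_rate`: with hop size H = 1024 at Fs = 22050 Hz, chroma features arrive at about 21 frames per second, so timestamps convert to frame indices as follows (a small sketch restating the constants above; `seconds_to_frame` is a hypothetical helper, not part of the toolbox):

```python
Fs = 22050
H = 1024
feature_rate = int(Fs / H)  # ~21 feature frames per second

def seconds_to_frame(t_seconds, rate=feature_rate):
    """Convert a time in seconds to the corresponding feature-frame index."""
    return int(t_seconds * rate)

print(seconds_to_frame(30))  # 630, as used for the 30-second chroma excerpts below
```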
# ## Loading two recordings of the same piece
#
# Here, we take recordings of the song "Gute Nacht" by <NAME> from his song cycle "Winterreise" in two performances (versions). The first version is by <NAME> and <NAME> from 1933. The second version is by <NAME> and <NAME> from 2006.
#
# ### Version 1
# + pycharm={"name": "#%%\n"}
audio_1, _ = librosa.load('data_music/Schubert_D911-01_HU33.wav', sr=Fs)
plot_signal(audio_1, Fs=Fs, ylabel='Amplitude', title='Version 1', figsize=figsize)
ipd.display(ipd.Audio(audio_1, rate=Fs))
# -
# ### Version 2
# + pycharm={"name": "#%%\n"}
audio_2, _ = librosa.load('data_music/Schubert_D911-01_SC06.wav', sr=Fs)
plot_signal(audio_2, Fs=Fs, ylabel='Amplitude', title='Version 2', figsize=figsize)
ipd.display(ipd.Audio(audio_2, rate=Fs))
# -
# ## Obtaining chroma representations of the recordings using librosa
#
# For most Western classical and pop music, chroma features are highly useful for aligning different versions of the same piece. Here, we use librosa to calculate two very basic chroma representations, derived from STFTs. The plots illustrate the chroma representations of the first 30 seconds of each version.
# + pycharm={"name": "#%%\n"}
chroma_1 = librosa.feature.chroma_stft(y=audio_1, sr=Fs, n_fft=N, hop_length=H, norm=2.0)
plot_chromagram(chroma_1[:, :30 * feature_rate], Fs=feature_rate, title='Chroma representation for version 1', figsize=figsize)
plt.show()
chroma_2 = librosa.feature.chroma_stft(y=audio_2, sr=Fs, n_fft=N, hop_length=H, norm=2.0)
plot_chromagram(chroma_2[:, :30 * feature_rate], Fs=feature_rate, title='Chroma representation for version 2', figsize=figsize)
plt.show()
# -
# ## Aligning chroma representations using full DTW
#
# The chroma feature sequences in the last cell can be used for time warping. As both versions last around five minutes, an alignment can still be computed in reasonable time using classical, full DTW. In the next cell we use the SyncToolbox implementation of DTW to do this. Each feature sequence consists of around 7000 frames, meaning that the matrices computed during full DTW become quite huge - around 49 million entries each! - leading to high memory consumption.
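# As a back-of-the-envelope sketch (assuming the ~7000-frame figure quoted above and float64 storage), the footprint of one full cost matrix can be estimated directly:

```python
import numpy as np

n_frames = 7000                   # approximate frames per version
entries = n_frames * n_frames     # entries in the full cost matrix
gigabytes = entries * np.dtype(np.float64).itemsize / 1e9
print(entries)     # 49000000
print(gigabytes)   # ~0.392 GB per float64 matrix
```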
# + pycharm={"name": "#%%\n"}
C = cosine_distance(chroma_1, chroma_2)
_, _, wp_full = compute_warping_path(C=C)
# Equivalently, full DTW may be computed using librosa via:
# _, wp_librosa = librosa.sequence.dtw(C=C)
plot_matrix_with_points(C, wp_full.T, linestyle='-', marker='', aspect='equal',
title='Cost matrix and warping path computed using full DTW',
xlabel='Version 2 (frames)', ylabel='Version 1 (frames)', figsize=(9, 5))
plt.show()
# -
# ## Aligning chroma representations using SyncToolbox (MrMsDTW)
#
# We now compute an alignment between the two versions using MrMsDTW. This algorithm has a much lower memory footprint and will also be faster on long feature sequences. For more information, see [2].
# + pycharm={"name": "#%%\n"}
_ = sync_via_mrmsdtw(f_chroma1=chroma_1,
f_chroma2=chroma_2,
input_feature_rate=feature_rate,
verbose=True)
# -
# ## Runtime comparison
#
# We now compare the runtimes of both algorithms. On their first call, they may build function caches, so after running the previous cells we can now measure their raw performance.
# + pycharm={"name": "#%%\n"}
start_time = time.time()
C = cosine_distance(chroma_1, chroma_2)
compute_warping_path(C=C)
end_time = time.time()
print(f'Full DTW took {end_time - start_time}s')
start_time = time.time()
sync_via_mrmsdtw(f_chroma1=chroma_1,
f_chroma2=chroma_2,
input_feature_rate=feature_rate,
verbose=False)
end_time = time.time()
print(f'MrMsDTW took {end_time - start_time}s')
# -
# ## References
#
# [1] <NAME>: Fundamentals of Music Processing – Audio, Analysis, Algorithms, Applications, ISBN: 978-3-319-21944-8, Springer, 2015.
#
# [2] <NAME>, <NAME>, and <NAME>: Memory-Restricted Multiscale Dynamic Time Warping,
# In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP): 569–573, 2016.
| sync_audio_audio_simple.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.3 64-bit (''base'': conda)'
# name: python3
# ---
# # Final Project
#
# * DS 5001: Exploratory Text Analytics
# * Rehan Merchant rm2bt
# +
import pandas as pd
import numpy as np
from glob import glob  # read directory contents to import files
import re
import nltk
# %matplotlib inline
# -
OHCO = ['book_id', 'chap_num', 'para_num', 'sent_num', 'token_num']
# import file
epub_file = "pg42671.txt"
#epub_file = 'pg121.txt'
epub = open(epub_file, 'r', encoding='utf-8-sig').readlines()
df = pd.DataFrame(epub, columns=['line_str'])
df.index.name = 'line_num'
df.line_str = df.line_str.str.strip()
# extract book id
book_id = int(epub_file.split('-')[-1].split('.')[0].replace('pg',''))
print("BOOK ID", book_id)
df['book_id'] = book_id
# extract title
title = df.loc[0].line_str.replace('The Project Gutenberg eBook, ', '')
df['title'] = title
print(title)
df
# Remove Gutenberg's front and back matter
a = df.line_str.str.match(r"\*\*\**START OF THE PROJECT ")
b = df.line_str.str.match(r"\*\*\**END OF THE PROJECT ")
an = df.loc[a].index[0]
bn = df.loc[b].index[0]
df = df.loc[an + 1 : bn - 2]
df
# chunk by chapter
chap_lines = df.line_str.str.match(r"(chapter|letter)\s", case=False)
df.loc[chap_lines]
# Assign numbers to chapters
chap_nums = [i+1 for i in range(df.loc[chap_lines].shape[0])]
df.loc[chap_lines, 'chap_num'] = chap_nums
df.chap_num = df.chap_num.ffill()
# clean up
df = df.loc[~df.chap_num.isna()] # Remove everything before Chapter 1
df = df.loc[~chap_lines] # Remove the chapter heading lines themselves
df.chap_num = df.chap_num.astype('int') # Convert chap_num from float to int
df.sample(10)
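# The marker-then-ffill pattern above (number the heading lines, forward-fill, drop the headings) can be seen on a toy example (hypothetical data, not the real book):

```python
import pandas as pd

toy = pd.DataFrame({'line_str': ['CHAPTER 1', 'a', 'b', 'CHAPTER 2', 'c']})
marks = toy.line_str.str.match(r'chapter\s', case=False)

# Number the heading lines, forward-fill onto the body lines, then drop the headings
toy.loc[marks, 'chap_num'] = list(range(1, int(marks.sum()) + 1))
toy['chap_num'] = toy['chap_num'].ffill()
toy = toy.loc[~marks]
print(toy.chap_num.tolist())  # [1.0, 1.0, 2.0]
```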
# group lines by chapter num
dfc = df.groupby(OHCO[1:2]).line_str.apply(lambda x: '\n'.join(x)).to_frame() # Make big string
dfc.head(10)
# split into paragraphs
dfp = dfc['line_str'].str.split(r'\n\n+', expand=True).stack()\
.to_frame().rename(columns={0:'para_str'})
dfp.head(10)
dfp.index.names = OHCO[1:3]
dfp.head(10)
dfp['para_str'] = dfp['para_str'].str.replace(r'\n', ' ', regex=True).str.strip()
dfp = dfp[~dfp['para_str'].str.match(r'^\s*$')] # Remove empty paragraphs
dfp.head(10)
DOC = dfp
DOC['book_id'] = book_id
DOC = DOC.reset_index().set_index(OHCO[:3])
DOC
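# The split-and-stack pattern used above for paragraphs (split each chapter string on blank lines, then stack the pieces into rows) can be illustrated on a toy string (hypothetical data):

```python
import pandas as pd

toy = pd.DataFrame({'line_str': ['line 1a\nline 1b\n\nline 2a']}, index=[1])
# Splitting on blank lines expands into columns; stack turns them into rows
paras = toy['line_str'].str.split(r'\n\n+', expand=True).stack()
print(paras.tolist())  # ['line 1a\nline 1b', 'line 2a']
```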
# Tokenize
def tokenize(doc_df, remove_pos_tuple=False, OHCO=OHCO):
# Paragraphs to Sentences
df = doc_df.para_str\
.apply(lambda x: pd.Series(nltk.sent_tokenize(x)))\
.stack()\
.to_frame()\
.rename(columns={0:'sent_str'})
# Sentences to Tokens
# .apply(lambda x: pd.Series(nltk.pos_tag(nltk.word_tokenize(x))))\
df = df.sent_str\
.apply(lambda x: pd.Series(nltk.pos_tag(nltk.WhitespaceTokenizer().tokenize(x))))\
.stack()\
.to_frame()\
.rename(columns={0:'pos_tuple'})
# Grab info from tuple
df['pos'] = df.pos_tuple.apply(lambda x: x[1])
df['token_str'] = df.pos_tuple.apply(lambda x: x[0])
if remove_pos_tuple:
        df = df.drop(columns='pos_tuple')
# Add index
df.index.names = OHCO
return df
TOKEN = tokenize(DOC)
TOKEN
my_lib = []
author = 'austen'
my_lib.append((book_id, title,epub_file,author,title))
LIB = pd.DataFrame(my_lib, columns=['book_id', 'book_title','book_file','author','title']).set_index('book_id')
LIB
# Reduce
# +
TOKEN['term_str'] = TOKEN['token_str'].str.lower().str.replace(r'[\W_]', '', regex=True)
VOCAB = TOKEN.term_str.value_counts().to_frame().rename(columns={'index':'term_str', 'term_str':'n'})\
.sort_index().reset_index().rename(columns={'index':'term_str'})
VOCAB.index.name = 'term_id'
VOCAB['num'] = VOCAB.term_str.str.match(r"\d+").astype('int')
# -
VOCAB
# annotate vocab
sw = pd.DataFrame(nltk.corpus.stopwords.words('english'), columns=['term_str'])
sw = sw.reset_index().set_index('term_str')
sw.columns = ['dummy']
sw.dummy = 1
VOCAB['stop'] = VOCAB.term_str.map(sw.dummy)
VOCAB['stop'] = VOCAB['stop'].fillna(0).astype('int')
VOCAB[VOCAB.stop == 1].sample(10)
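# An equivalent and arguably simpler way to flag stop words is `Series.isin` (a sketch with toy terms; `stopset` stands in for NLTK's English stop-word list):

```python
import pandas as pd

terms = pd.Series(['the', 'winter', 'and', 'journey'])
stopset = {'the', 'and'}  # stand-in for nltk.corpus.stopwords.words('english')
flag = terms.isin(stopset).astype(int)
print(flag.tolist())  # [1, 0, 1, 0]
```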
# porter stems
# +
from nltk.stem.porter import PorterStemmer
stemmer = PorterStemmer()
VOCAB['p_stem'] = VOCAB.term_str.apply(stemmer.stem)
VOCAB.sample(10)
# -
TOKEN
token1 = TOKEN
pos_max = token1.groupby(['term_str',"pos"]).count().sort_values("token_str", ascending = False).groupby(level=0).head(1)\
.reset_index().set_index('term_str')
pos_max.sort_index().tail(200)
VOCAB['pos_max'] = VOCAB.term_str.map(pos_max.pos)
VOCAB
# save to csv
TOKEN.to_csv('TOKEN.csv')
DOC.to_csv('DOC.csv')
LIB.to_csv('LIB.csv')
VOCAB.to_csv('VOCAB.csv')
| nlp_proj/Doc Cleaning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## print_object.py
# %%writefile pgan_loop_module/trainer/print_object.py
def print_obj(function_name, object_name, object_value):
"""Prints enclosing function, object name, and object value.
Args:
function_name: str, name of function.
object_name: str, name of object.
object_value: object, value of passed object.
"""
# pass
print("{}: {} = {}".format(function_name, object_name, object_value))
# ## input.py
# +
# %%writefile pgan_loop_module/trainer/input.py
import tensorflow as tf
from .print_object import print_obj
def decode_example(protos, params):
"""Decodes TFRecord file into tensors.
Given protobufs, decode into image and label tensors.
Args:
protos: protobufs from TFRecord file.
params: dict, user passed parameters.
Returns:
Image and label tensors.
"""
# Create feature schema map for protos.
features = {
"image_raw": tf.FixedLenFeature(shape=[], dtype=tf.string),
"label": tf.FixedLenFeature(shape=[], dtype=tf.int64)
}
# Parse features from tf.Example.
parsed_features = tf.parse_single_example(
serialized=protos, features=features
)
print_obj("\ndecode_example", "features", features)
# Convert from a scalar string tensor (whose single string has
# length height * width * depth) to a uint8 tensor with shape
# [height * width * depth].
image = tf.decode_raw(
input_bytes=parsed_features["image_raw"], out_type=tf.uint8
)
print_obj("decode_example", "image", image)
# Reshape flattened image back into normal dimensions.
image = tf.reshape(
tensor=image,
shape=[params["height"], params["width"], params["depth"]]
)
print_obj("decode_example", "image", image)
# Convert from [0, 255] -> [-1.0, 1.0] floats.
image = tf.cast(x=image, dtype=tf.float32) * (2. / 255) - 1.0
print_obj("decode_example", "image", image)
# Convert label from a scalar uint8 tensor to an int32 scalar.
label = tf.cast(x=parsed_features["label"], dtype=tf.int32)
print_obj("decode_example", "label", label)
return {"image": image}, label
def read_dataset(filename, mode, batch_size, params):
"""Reads TF Record data using tf.data, doing necessary preprocessing.
Given filename, mode, batch size, and other parameters, read TF Record
dataset using Dataset API, apply necessary preprocessing, and return an
input function to the Estimator API.
Args:
filename: str, file pattern that to read into our tf.data dataset.
mode: The estimator ModeKeys. Can be TRAIN or EVAL.
batch_size: int, number of examples per batch.
params: dict, dictionary of user passed parameters.
Returns:
An input function.
"""
def _input_fn():
"""Wrapper input function used by Estimator API to get data tensors.
Returns:
Batched dataset object of dictionary of feature tensors and label
tensor.
"""
# Create list of files that match pattern.
file_list = tf.gfile.Glob(filename=filename)
# Create dataset from file list.
dataset = tf.data.TFRecordDataset(
filenames=file_list, num_parallel_reads=40
)
# Shuffle and repeat if training with fused op.
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.apply(
tf.contrib.data.shuffle_and_repeat(
buffer_size=50 * batch_size,
count=None # indefinitely
)
)
# Decode CSV file into a features dictionary of tensors, then batch.
dataset = dataset.apply(
tf.contrib.data.map_and_batch(
map_func=lambda x: decode_example(
protos=x,
params=params
),
batch_size=batch_size,
num_parallel_calls=4
)
)
# Prefetch data to improve latency.
dataset = dataset.prefetch(buffer_size=2)
# Create a iterator, then get batch of features from example queue.
batched_dataset = dataset.make_one_shot_iterator().get_next()
return batched_dataset
return _input_fn
# -
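# The [0, 255] -> [-1.0, 1.0] pixel scaling used inside `decode_example` can be checked with plain NumPy:

```python
import numpy as np

pixels = np.array([0, 127.5, 255], dtype=np.float32)
scaled = pixels * (2.0 / 255) - 1.0
# Endpoints map to -1 and 1, the midpoint to 0
print(np.allclose(scaled, [-1.0, 0.0, 1.0]))  # True
```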
# ## generator.py
# +
# %%writefile pgan_loop_module/trainer/generator.py
import tensorflow as tf
from . import regularization
from . import utils
from .print_object import print_obj
def create_generator_projection_layer(regularizer, params):
"""Creates generator projection from noise latent vector.
Args:
regularizer: `l1_l2_regularizer` object, regularizar for kernel
variables.
params: dict, user passed parameters.
Returns:
Latent vector projection `Dense` layer.
"""
# Project latent vectors.
projection_height = params["generator_projection_dims"][0]
projection_width = params["generator_projection_dims"][1]
projection_depth = params["generator_projection_dims"][2]
with tf.variable_scope(name_or_scope="generator", reuse=tf.AUTO_REUSE):
# shape = (
# cur_batch_size,
# projection_height * projection_width * projection_depth
# )
projection_layer = tf.layers.Dense(
units=projection_height * projection_width * projection_depth,
activation=tf.nn.leaky_relu,
kernel_initializer="he_normal",
kernel_regularizer=regularizer,
name="generator_projection_layer"
)
print_obj(
"create_generator_projection_layer",
"projection_layer",
projection_layer
)
return projection_layer
def build_generator_projection_layer(projection_layer, params):
"""Builds generator projection layer internals using call.
Args:
projection_layer: `Dense` layer for projection of noise into image.
params: dict, user passed parameters.
Returns:
Latent vector projection tensor.
"""
# Project latent vectors.
projection_height = params["generator_projection_dims"][0]
projection_width = params["generator_projection_dims"][1]
projection_depth = params["generator_projection_dims"][2]
with tf.variable_scope(name_or_scope="generator", reuse=tf.AUTO_REUSE):
# shape = (
# cur_batch_size,
# projection_height * projection_width * projection_depth
# )
projection_tensor = projection_layer(
inputs=tf.zeros(
shape=[1, params["latent_size"]], dtype=tf.float32
)
)
print_obj(
"\nbuild_generator_projection_layer",
"projection_tensor",
projection_tensor
)
return projection_tensor
def use_generator_projection_layer(Z, projection_layer, params):
"""Uses projection layer to convert random noise vector into an image.
Args:
Z: tensor, latent vectors of shape [cur_batch_size, latent_size].
projection_layer: `Dense` layer for projection of noise into image.
params: dict, user passed parameters.
Returns:
Latent vector projection tensor.
"""
# Project latent vectors.
projection_height = params["generator_projection_dims"][0]
projection_width = params["generator_projection_dims"][1]
projection_depth = params["generator_projection_dims"][2]
with tf.variable_scope(name_or_scope="generator", reuse=tf.AUTO_REUSE):
# shape = (
# cur_batch_size,
# projection_height * projection_width * projection_depth
# )
projection_tensor = projection_layer(inputs=Z)
print_obj(
"\nuse_generator_projection_layer", "projection_tensor", projection_tensor
)
# Reshape projection into "image".
# shape = (
# cur_batch_size,
# projection_height,
# projection_width,
# projection_depth
# )
projection_tensor_reshaped = tf.reshape(
tensor=projection_tensor,
shape=[-1, projection_height, projection_width, projection_depth],
name="generator_projection_reshaped"
)
print_obj(
"use_generator_projection_layer",
"projection_tensor_reshaped",
projection_tensor_reshaped
)
return projection_tensor_reshaped
def create_generator_base_conv_layer_block(regularizer, params):
"""Creates generator base conv layer block.
Args:
regularizer: `l1_l2_regularizer` object, regularizar for kernel
variables.
params: dict, user passed parameters.
Returns:
List of base conv layers.
"""
with tf.variable_scope(name_or_scope="generator", reuse=tf.AUTO_REUSE):
# Get conv block layer properties.
conv_block = params["generator_base_conv_blocks"][0]
# Create list of base conv layers.
base_conv_layers = [
tf.layers.Conv2D(
filters=conv_block[i][3],
kernel_size=conv_block[i][0:2],
strides=conv_block[i][4:6],
padding="same",
activation=tf.nn.leaky_relu,
kernel_initializer="he_normal",
kernel_regularizer=regularizer,
name="generator_base_layers_conv2d_{}_{}x{}_{}_{}".format(
i,
conv_block[i][0],
conv_block[i][1],
conv_block[i][2],
conv_block[i][3]
)
)
for i in range(len(conv_block))
]
print_obj(
"\ncreate_generator_base_conv_layer_block",
"base_conv_layers",
base_conv_layers
)
return base_conv_layers
def build_generator_base_conv_layer_block(base_conv_layers, params):
"""Builds generator base conv layer block internals using call.
Args:
base_conv_layers: list, the base block's conv layers.
params: dict, user passed parameters.
Returns:
List of base conv tensors.
"""
with tf.variable_scope(name_or_scope="generator", reuse=tf.AUTO_REUSE):
# Get conv block layer properties.
conv_block = params["generator_base_conv_blocks"][0]
# Create list of base conv layers.
base_conv_tensors = [
base_conv_layers[i](
inputs=tf.zeros(
shape=[1] + conv_block[i][0:3], dtype=tf.float32
)
)
for i in range(len(conv_block))
]
print_obj(
"\nbuild_generator_base_conv_layer_block",
"base_conv_tensors",
base_conv_tensors
)
return base_conv_tensors
def create_generator_growth_layer_block(regularizer, params, block_idx):
"""Creates generator growth block.
Args:
regularizer: `l1_l2_regularizer` object, regularizer for kernel
variables.
params: dict, user passed parameters.
block_idx: int, the current growth block's index.
Returns:
List of growth block layers.
"""
with tf.variable_scope(name_or_scope="generator", reuse=tf.AUTO_REUSE):
# Get conv block layer properties.
conv_block = params["generator_growth_conv_blocks"][block_idx]
# Create new inner convolutional layers.
conv_layers = [
tf.layers.Conv2D(
filters=conv_block[i][3],
kernel_size=conv_block[i][0:2],
strides=conv_block[i][4:6],
padding="same",
activation=tf.nn.leaky_relu,
kernel_initializer="he_normal",
kernel_regularizer=regularizer,
name="generator_growth_layers_conv2d_{}_{}_{}x{}_{}_{}".format(
block_idx,
i,
conv_block[i][0],
conv_block[i][1],
conv_block[i][2],
conv_block[i][3]
)
)
for i in range(len(conv_block))
]
print_obj(
"\ncreate_generator_growth_layer_block", "conv_layers", conv_layers
)
return conv_layers
def build_generator_growth_layer_block(conv_layers, params, block_idx):
"""Builds generator growth block internals through call.
Args:
conv_layers: list, the current growth block's conv layers.
params: dict, user passed parameters.
block_idx: int, the current growth block's index.
Returns:
List of growth block tensors.
"""
with tf.variable_scope(name_or_scope="generator", reuse=tf.AUTO_REUSE):
# Get conv block layer properties.
conv_block = params["generator_growth_conv_blocks"][block_idx]
# Create new inner convolutional layers.
conv_tensors = [
conv_layers[i](
inputs=tf.zeros(
shape=[1] + conv_block[i][0:3], dtype=tf.float32
)
)
for i in range(len(conv_block))
]
print_obj(
"\nbuild_generator_growth_layer_block",
"conv_tensors",
conv_tensors
)
return conv_tensors
def create_generator_to_rgb_layers(regularizer, params):
"""Creates generator toRGB layers of 1x1 convs.
Args:
regularizer: `l1_l2_regularizer` object, regularizer for kernel
variables.
params: dict, user passed parameters.
Returns:
List of toRGB 1x1 conv layers.
"""
with tf.variable_scope(name_or_scope="generator", reuse=tf.AUTO_REUSE):
# Get toRGB layer properties.
to_rgb = [
params["generator_to_rgb_layers"][i][0][:]
for i in range(len(params["generator_to_rgb_layers"]))
]
# Create list to hold toRGB 1x1 convs.
to_rgb_conv_layers = [
tf.layers.Conv2D(
filters=to_rgb[i][3],
kernel_size=to_rgb[i][0:2],
strides=to_rgb[i][4:6],
padding="same",
activation=tf.nn.leaky_relu,
kernel_initializer="he_normal",
kernel_regularizer=regularizer,
name="generator_to_rgb_layers_conv2d_{}_{}x{}_{}_{}".format(
i, to_rgb[i][0], to_rgb[i][1], to_rgb[i][2], to_rgb[i][3]
)
)
for i in range(len(to_rgb))
]
print_obj(
"\ncreate_generator_to_rgb_layers",
"to_rgb_conv_layers",
to_rgb_conv_layers
)
return to_rgb_conv_layers
def build_generator_to_rgb_layers(to_rgb_conv_layers, params):
"""Builds generator toRGB layers of 1x1 convs internals through call.
Args:
to_rgb_conv_layers: list, toRGB conv layers.
params: dict, user passed parameters.
Returns:
List of toRGB 1x1 conv tensors.
"""
with tf.variable_scope(name_or_scope="generator", reuse=tf.AUTO_REUSE):
# Get toRGB layer properties.
to_rgb = [
params["generator_to_rgb_layers"][i][0][:]
for i in range(len(params["generator_to_rgb_layers"]))
]
# Create list to hold toRGB 1x1 convs.
to_rgb_conv_tensors = [
to_rgb_conv_layers[i](
inputs=tf.zeros(shape=[1] + to_rgb[i][0:3], dtype=tf.float32))
for i in range(len(to_rgb))
]
print_obj(
"\nbuild_generator_to_rgb_layers",
"to_rgb_conv_tensors",
to_rgb_conv_tensors
)
return to_rgb_conv_tensors
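The `build_*` helpers above all follow the same pattern: call each layer once on a dummy zeros tensor so its variables get created eagerly (under the `control_dependencies` chains used later). A framework-free sketch of that "build by calling" idea, with a hypothetical `Layer` class standing in for `tf.layers` objects:

```python
# Sketch of the "build by dummy call" pattern used by the build_* helpers:
# calling a layer once forces variable creation. Layer here is a stand-in
# for a tf.layers object, not a real TF class.

class Layer:
    def __init__(self):
        self.built = False

    def __call__(self, inputs):
        if not self.built:
            self.built = True  # real layers would create variables here
        return inputs

layers = [Layer() for _ in range(3)]
outputs = [layer([0.0] * 4) for layer in layers]  # dummy calls build each
print(all(layer.built for layer in layers))  # True
```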
def upsample_generator_image(image, original_image_size, block_idx):
"""Upsamples generator image.
Args:
image: tensor, image created by generator conv block.
original_image_size: list, the height and width dimensions of the
original image before any growth.
block_idx: int, index of the current generator growth block.
Returns:
Upsampled image tensor.
"""
# Upsample from s X s to 2s X 2s image.
upsampled_image = tf.image.resize(
images=image,
size=tf.convert_to_tensor(
value=original_image_size,
dtype=tf.int32,
name="upsample_generator_image_original_image_size"
) * 2 ** block_idx,
method="nearest",
name="generator_growth_upsampled_image_{}_{}x{}_{}x{}".format(
block_idx,
original_image_size[0] * 2 ** (block_idx - 1),
original_image_size[1] * 2 ** (block_idx - 1),
original_image_size[0] * 2 ** block_idx,
original_image_size[1] * 2 ** block_idx
)
)
print_obj(
"\nupsample_generator_image",
"upsampled_image",
upsampled_image
)
return upsampled_image
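The resize target above is just the base resolution doubled once per growth block: `original_image_size * 2 ** block_idx`. A plain-Python check of that arithmetic:

```python
# Sketch of the resize-target arithmetic in upsample_generator_image:
# growth block k renders at original_size * 2**k (nearest-neighbor resize).

def upsampled_size(original_image_size, block_idx):
    return [dim * 2 ** block_idx for dim in original_image_size]

print(upsampled_size([4, 4], 1))  # [8, 8]
print(upsampled_size([4, 4], 3))  # [32, 32]
```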
def create_base_generator_network(
Z, projection_layer, to_rgb_conv_layers, blocks, params):
"""Creates base generator network.
Args:
Z: tensor, latent vectors of shape [cur_batch_size, latent_size].
projection_layer: `Dense` layer for projection of noise into image.
to_rgb_conv_layers: list, toRGB 1x1 conv layers.
blocks: list, lists of block layers for each block.
params: dict, user passed parameters.
Returns:
Final network block conv tensor.
"""
print_obj("\ncreate_base_generator_network", "Z", Z)
with tf.variable_scope(name_or_scope="generator", reuse=tf.AUTO_REUSE):
# Project latent noise vectors into image.
projection = use_generator_projection_layer(
Z=Z, projection_layer=projection_layer, params=params
)
print_obj("create_base_generator_network", "projection", projection)
# Only need the first block and toRGB conv layer for base network.
block_layers = blocks[0]
to_rgb_conv_layer = to_rgb_conv_layers[0]
# Pass inputs through layer chain.
block_conv = block_layers[0](inputs=projection)
print_obj("create_base_generator_network", "block_conv_0", block_conv)
for i in range(1, len(block_layers)):
block_conv = block_layers[i](inputs=block_conv)
print_obj(
"create_base_generator_network",
"block_conv_{}".format(i),
block_conv
)
to_rgb_conv = to_rgb_conv_layer(inputs=block_conv)
print_obj("create_base_generator_network", "to_rgb_conv", to_rgb_conv)
return to_rgb_conv
def create_growth_transition_generator_network(
Z,
projection_layer,
to_rgb_conv_layers,
blocks,
original_image_size,
alpha_var,
params,
trans_idx):
"""Creates base generator network.
Args:
Z: tensor, latent vectors of shape [cur_batch_size, latent_size].
projection_layer: `Dense` layer for projection of noise into image.
to_rgb_conv_layers: list, toRGB 1x1 conv layers.
blocks: list, lists of block layers for each block.
original_image_size: list, the height and width dimensions of the
original image before any growth.
alpha_var: variable, alpha for weighted sum of fade-in of layers.
params: dict, user passed parameters.
trans_idx: int, index of current growth transition.
Returns:
Final network block conv tensor.
"""
print_obj(
"\nEntered create_growth_transition_generator_network",
"trans_idx",
trans_idx
)
print_obj("create_growth_transition_generator_network", "Z", Z)
with tf.variable_scope(name_or_scope="generator", reuse=tf.AUTO_REUSE):
# Project latent noise vectors into image.
projection = use_generator_projection_layer(
Z=Z, projection_layer=projection_layer, params=params
)
print_obj(
"create_growth_transition_generator_network",
"projection",
projection
)
# Permanent blocks.
permanent_blocks = blocks[0:trans_idx + 1]
# Base block doesn't need any upsampling so it's handled differently.
base_block_conv_layers = permanent_blocks[0]
# Pass inputs through layer chain.
block_conv = base_block_conv_layers[0](inputs=projection)
print_obj(
"create_growth_transition_generator_network",
"base_block_conv_{}_0".format(trans_idx),
block_conv
)
for i in range(1, len(base_block_conv_layers)):
block_conv = base_block_conv_layers[i](inputs=block_conv)
print_obj(
"create_growth_transition_generator_network",
"base_block_conv_{}_{}".format(trans_idx, i),
block_conv
)
# Growth blocks require first the prev conv layer's image upsampled.
for i in range(1, len(permanent_blocks)):
# Upsample previous block's image.
block_conv = upsample_generator_image(
image=block_conv,
original_image_size=original_image_size,
block_idx=i
)
print_obj(
"create_growth_transition_generator_network",
"upsample_generator_image_block_conv_{}_{}".format(
trans_idx, i
),
block_conv
)
block_conv_layers = permanent_blocks[i]
for j in range(0, len(block_conv_layers)):
block_conv = block_conv_layers[j](inputs=block_conv)
print_obj(
"create_growth_transition_generator_network",
"block_conv_{}_{}_{}".format(trans_idx, i, j),
block_conv
)
# Upsample most recent block conv image for both side chains.
upsampled_block_conv = upsample_generator_image(
image=block_conv,
original_image_size=original_image_size,
block_idx=len(permanent_blocks)
)
print_obj(
"create_growth_transition_generator_network",
"upsampled_block_conv_{}".format(trans_idx),
upsampled_block_conv
)
# Growing side chain.
growing_block_layers = blocks[trans_idx + 1]
growing_to_rgb_conv_layer = to_rgb_conv_layers[trans_idx + 1]
# Pass inputs through layer chain.
block_conv = growing_block_layers[0](inputs=upsampled_block_conv)
print_obj(
"create_growth_transition_generator_network",
"growing_block_conv_{}_0".format(trans_idx),
block_conv
)
for i in range(1, len(growing_block_layers)):
block_conv = growing_block_layers[i](inputs=block_conv)
print_obj(
"create_growth_transition_generator_network",
"growing_block_conv_{}_{}".format(trans_idx, i),
block_conv
)
growing_to_rgb_conv = growing_to_rgb_conv_layer(inputs=block_conv)
print_obj(
"create_growth_transition_generator_network",
"growing_to_rgb_conv_{}".format(trans_idx),
growing_to_rgb_conv
)
# Shrinking side chain.
shrinking_to_rgb_conv_layer = to_rgb_conv_layers[trans_idx]
# Pass inputs through layer chain.
shrinking_to_rgb_conv = shrinking_to_rgb_conv_layer(
inputs=upsampled_block_conv
)
print_obj(
"create_growth_transition_generator_network",
"shrinking_to_rgb_conv_{}".format(trans_idx),
shrinking_to_rgb_conv
)
# Weighted sum.
weighted_sum = tf.add(
x=growing_to_rgb_conv * alpha_var,
y=shrinking_to_rgb_conv * (1.0 - alpha_var),
name="growth_transition_weighted_sum_{}".format(trans_idx)
)
print_obj(
"create_growth_transition_generator_network",
"weighted_sum_{}".format(trans_idx),
weighted_sum
)
return weighted_sum
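The fade-in above blends the growing and shrinking side chains with an alpha-weighted sum; as `alpha_var` ramps from 0 to 1 over the transition, output responsibility hands over from the old toRGB layer to the new one. A scalar sketch of the same blend:

```python
# Scalar sketch of the growth-transition fade-in: an alpha-weighted sum of
# the growing and shrinking side chains.

def fade_in(growing, shrinking, alpha):
    return growing * alpha + shrinking * (1.0 - alpha)

print(fade_in(10.0, 2.0, 0.0))   # 2.0  (all shrinking chain)
print(fade_in(10.0, 2.0, 0.25))  # 4.0  (partway through transition)
print(fade_in(10.0, 2.0, 1.0))   # 10.0 (all growing chain)
```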
def create_final_generator_network(
Z, projection_layer, to_rgb_conv_layers, blocks, original_image_size, params):
"""Creates base generator network.
Args:
Z: tensor, latent vectors of shape [cur_batch_size, latent_size].
projection_layer: `Dense` layer for projection of noise into image.
to_rgb_conv_layers: list, toRGB 1x1 conv layers.
blocks: list, lists of block layers for each block.
original_image_size: list, the height and width dimensions of the
original image before any growth.
params: dict, user passed parameters.
Returns:
Final network block conv tensor.
"""
print_obj("\ncreate_final_generator_network", "Z", Z)
with tf.variable_scope(name_or_scope="generator", reuse=tf.AUTO_REUSE):
# Project latent noise vectors into image.
projection = use_generator_projection_layer(
Z=Z, projection_layer=projection_layer, params=params
)
print_obj("create_final_generator_network", "projection", projection)
# Base block doesn't need any upsampling so it's handled differently.
base_block_conv_layers = blocks[0]
# Pass inputs through layer chain.
block_conv = base_block_conv_layers[0](inputs=projection)
print_obj(
"\ncreate_final_generator_network",
"base_block_conv",
block_conv
)
for i in range(1, len(base_block_conv_layers)):
block_conv = base_block_conv_layers[i](inputs=block_conv)
print_obj(
"create_final_generator_network",
"base_block_conv_{}".format(i),
block_conv
)
# Growth blocks require first the prev conv layer's image upsampled.
for i in range(1, len(blocks)):
# Upsample previous block's image.
block_conv = upsample_generator_image(
image=block_conv,
original_image_size=original_image_size,
block_idx=i
)
print_obj(
"create_final_generator_network",
"upsample_generator_image_block_conv_{}".format(i),
block_conv
)
block_conv_layers = blocks[i]
for j in range(0, len(block_conv_layers)):
block_conv = block_conv_layers[j](inputs=block_conv)
print_obj(
"create_final_generator_network",
"block_conv_{}_{}".format(i, j),
block_conv
)
# Only need the last toRGB conv layer.
to_rgb_conv_layer = to_rgb_conv_layers[-1]
# Pass inputs through layer chain.
to_rgb_conv = to_rgb_conv_layer(inputs=block_conv)
print_obj(
"create_final_generator_network", "to_rgb_conv", to_rgb_conv
)
return to_rgb_conv
def generator_network(Z, alpha_var, params):
"""Creates generator network and returns generated output.
Args:
Z: tensor, latent vectors of shape [cur_batch_size, latent_size].
alpha_var: variable, alpha for weighted sum of fade-in of layers.
params: dict, user passed parameters.
Returns:
Generated outputs tensor of shape
[cur_batch_size, height * width * depth].
"""
print_obj("\ngenerator_network", "Z", Z)
# Create regularizer for layer kernel weights.
regularizer = tf.contrib.layers.l1_l2_regularizer(
scale_l1=params["generator_l1_regularization_scale"],
scale_l2=params["generator_l2_regularization_scale"]
)
# Create projection dense layer to turn random noise vector into image.
projection_layer = create_generator_projection_layer(
regularizer=regularizer, params=params
)
# Build projection layer internals using call.
projection_tensor = build_generator_projection_layer(
projection_layer=projection_layer, params=params
)
with tf.control_dependencies(control_inputs=[projection_tensor]):
# Create empty lists to hold generator convolutional layer/tensor blocks.
block_layers = []
block_tensors = []
# Create base convolutional layers, for post-growth.
block_layers.append(
create_generator_base_conv_layer_block(
regularizer=regularizer, params=params
)
)
# Build base convolutional layer block's internals using call.
block_tensors.append(
build_generator_base_conv_layer_block(
base_conv_layers=block_layers[0], params=params
)
)
# Create growth block layers.
for block_idx in range(len(params["generator_growth_conv_blocks"])):
block_layers.append(
create_generator_growth_layer_block(
regularizer=regularizer, params=params, block_idx=block_idx
)
)
print_obj("generator_network", "block_layers", block_layers)
# Build growth block layer internals through call.
for block_idx in range(len(params["generator_growth_conv_blocks"])):
block_tensors.append(
build_generator_growth_layer_block(
conv_layers=block_layers[block_idx + 1],
params=params,
block_idx=block_idx
)
)
# Flatten block tensor lists of lists into list.
block_tensors = [item for sublist in block_tensors for item in sublist]
print_obj("generator_network", "block_tensors", block_tensors)
with tf.control_dependencies(control_inputs=block_tensors):
# Create toRGB 1x1 conv layers.
to_rgb_conv_layers = create_generator_to_rgb_layers(
regularizer=regularizer, params=params
)
print_obj(
"generator_network", "to_rgb_conv_layers", to_rgb_conv_layers
)
# Build toRGB 1x1 conv layer internals through call.
to_rgb_conv_tensors = build_generator_to_rgb_layers(
to_rgb_conv_layers=to_rgb_conv_layers, params=params
)
print_obj(
"generator_network", "to_rgb_conv_tensors", to_rgb_conv_tensors
)
with tf.control_dependencies(control_inputs=to_rgb_conv_tensors):
# Get original image size to use for setting image shape.
original_image_size = params["generator_projection_dims"][0:2]
# Create list of function calls for each training stage.
generated_outputs_list = utils.LazyList(
[
# 4x4
lambda: create_base_generator_network(
Z=Z,
projection_layer=projection_layer,
to_rgb_conv_layers=to_rgb_conv_layers,
blocks=block_layers,
params=params
),
# 8x8
lambda: create_growth_transition_generator_network(
Z=Z,
projection_layer=projection_layer,
to_rgb_conv_layers=to_rgb_conv_layers,
blocks=block_layers,
original_image_size=original_image_size,
alpha_var=alpha_var,
params=params,
trans_idx=0
),
# 16x16
lambda: create_growth_transition_generator_network(
Z=Z,
projection_layer=projection_layer,
to_rgb_conv_layers=to_rgb_conv_layers,
blocks=block_layers,
original_image_size=original_image_size,
alpha_var=alpha_var,
params=params,
trans_idx=1
),
# 32x32
lambda: create_growth_transition_generator_network(
Z=Z,
projection_layer=projection_layer,
to_rgb_conv_layers=to_rgb_conv_layers,
blocks=block_layers,
original_image_size=original_image_size,
alpha_var=alpha_var,
params=params,
trans_idx=2
),
# 64x64
lambda: create_growth_transition_generator_network(
Z=Z,
projection_layer=projection_layer,
to_rgb_conv_layers=to_rgb_conv_layers,
blocks=block_layers,
original_image_size=original_image_size,
alpha_var=alpha_var,
params=params,
trans_idx=3
),
# 128x128
lambda: create_growth_transition_generator_network(
Z=Z,
projection_layer=projection_layer,
to_rgb_conv_layers=to_rgb_conv_layers,
blocks=block_layers,
original_image_size=original_image_size,
alpha_var=alpha_var,
params=params,
trans_idx=4
),
# 256x256
lambda: create_growth_transition_generator_network(
Z=Z,
projection_layer=projection_layer,
to_rgb_conv_layers=to_rgb_conv_layers,
blocks=block_layers,
original_image_size=original_image_size,
alpha_var=alpha_var,
params=params,
trans_idx=5
),
# 512x512
lambda: create_growth_transition_generator_network(
Z=Z,
projection_layer=projection_layer,
to_rgb_conv_layers=to_rgb_conv_layers,
blocks=block_layers,
original_image_size=original_image_size,
alpha_var=alpha_var,
params=params,
trans_idx=6
),
# 1024x1024
lambda: create_growth_transition_generator_network(
Z=Z,
projection_layer=projection_layer,
to_rgb_conv_layers=to_rgb_conv_layers,
blocks=block_layers,
original_image_size=original_image_size,
alpha_var=alpha_var,
params=params,
trans_idx=7
),
# 1024x1024, stable (final network, no fade-in)
lambda: create_final_generator_network(
Z=Z,
projection_layer=projection_layer,
to_rgb_conv_layers=to_rgb_conv_layers,
blocks=block_layers,
original_image_size=original_image_size,
params=params
)
]
)
# Call function from list for generated outputs at growth index.
generated_outputs = generated_outputs_list[params["growth_index"]]
print_obj(
"generator_network", "generated_outputs", generated_outputs
)
return generated_outputs
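`utils.LazyList` is not shown in this section; the sketch below is an assumption about its behavior inferred from usage: it stores zero-argument lambdas and calls only the one being indexed, so just the selected growth stage's subgraph gets constructed.

```python
# Assumed behavior of utils.LazyList (not the real implementation): indexing
# invokes only the selected thunk, so only one growth stage is built.

class LazyList:
    def __init__(self, thunks):
        self._thunks = thunks

    def __getitem__(self, idx):
        # Build (call) only the requested element.
        return self._thunks[idx]()

calls = []
stages = LazyList([
    lambda: calls.append("4x4") or "4x4",
    lambda: calls.append("8x8") or "8x8",
])
print(stages[1])  # 8x8
print(calls)      # ['8x8'] -- the 4x4 thunk was never invoked
```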
def get_generator_loss(fake_logits, params):
"""Gets generator loss.
Args:
fake_logits: tensor, shape of [cur_batch_size, 1] that came from
discriminator having processed generator's output image.
params: dict, user passed parameters.
Returns:
Tensor of generator's total loss of shape [].
"""
# Calculate base generator loss.
generator_loss = -tf.reduce_mean(
input_tensor=fake_logits,
name="generator_loss"
)
print_obj("\nget_generator_loss", "generator_loss", generator_loss)
# Get generator regularization losses.
generator_reg_loss = regularization.get_regularization_loss(
params=params, scope="generator"
)
print_obj(
"get_generator_loss",
"generator_reg_loss",
generator_reg_loss
)
# Combine losses for total losses.
generator_total_loss = tf.math.add(
x=generator_loss,
y=generator_reg_loss,
name="generator_total_loss"
)
print_obj(
"get_generator_loss", "generator_total_loss", generator_total_loss
)
return generator_total_loss
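The base loss above is the WGAN generator loss: the negated mean of the critic's logits on generated samples (regularization is added on top). A plain-Python check of the formula:

```python
# Plain-Python check of the WGAN generator loss used above:
# loss = -mean(fake_logits).

def wgan_generator_loss(fake_logits):
    return -sum(fake_logits) / len(fake_logits)

print(wgan_generator_loss([1.0, 3.0, 2.0]))  # -2.0
print(wgan_generator_loss([-4.0, -2.0]))     # 3.0
```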
# -
# ## discriminator.py
# +
# %%writefile pgan_loop_module/trainer/discriminator.py
import tensorflow as tf
from . import regularization
from . import utils
from .print_object import print_obj
def create_discriminator_from_rgb_layers(regularizer, params):
"""Creates discriminator fromRGB layers of 1x1 convs.
Args:
regularizer: `l1_l2_regularizer` object, regularizer for kernel
variables.
params: dict, user passed parameters.
Returns:
List of fromRGB 1x1 conv layers.
"""
with tf.variable_scope(
name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
# Get fromRGB layer properties.
from_rgb = [
params["discriminator_from_rgb_layers"][i][0][:]
for i in range(len(params["discriminator_from_rgb_layers"]))
]
# Create list to hold fromRGB 1x1 convs.
from_rgb_conv_layers = [
tf.layers.Conv2D(
filters=from_rgb[i][3],
kernel_size=from_rgb[i][0:2],
strides=from_rgb[i][4:6],
padding="same",
activation=tf.nn.leaky_relu,
kernel_initializer="he_normal",
kernel_regularizer=regularizer,
name="discriminator_from_rgb_layers_conv2d_{}_{}x{}_{}_{}".format(
i,
from_rgb[i][0],
from_rgb[i][1],
from_rgb[i][2],
from_rgb[i][3]
)
)
for i in range(len(from_rgb))
]
print_obj(
"\ncreate_discriminator_from_rgb_layers",
"from_rgb_conv_layers",
from_rgb_conv_layers
)
return from_rgb_conv_layers
def build_discriminator_from_rgb_layers(from_rgb_conv_layers, params):
"""Creates discriminator fromRGB layers of 1x1 convs.
Args:
from_rgb_conv_layers: list, fromRGB conv layers.
params: dict, user passed parameters.
Returns:
List of fromRGB 1x1 conv tensors.
"""
with tf.variable_scope(
name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
# Get fromRGB layer properties.
from_rgb = [
params["discriminator_from_rgb_layers"][i][0][:]
for i in range(len(params["discriminator_from_rgb_layers"]))
]
# Create list to hold fromRGB 1x1 convs.
from_rgb_conv_tensors = [
from_rgb_conv_layers[i](
inputs=tf.zeros(
shape=[1] + from_rgb[i][0:3], dtype=tf.float32
)
)
for i in range(len(from_rgb))
]
print_obj(
"\nbuild_discriminator_from_rgb_layers",
"from_rgb_conv_tensors",
from_rgb_conv_tensors
)
return from_rgb_conv_tensors
def create_discriminator_base_conv_layer_block(regularizer, params):
"""Creates discriminator base conv layer block.
Args:
regularizer: `l1_l2_regularizer` object, regularizer for kernel
variables.
params: dict, user passed parameters.
Returns:
List of base conv layers.
"""
with tf.variable_scope(
name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
# Get conv block layer properties.
conv_block = params["discriminator_base_conv_blocks"][0]
# Create list of base conv layers.
base_conv_layers = [
tf.layers.Conv2D(
filters=conv_block[i][3],
kernel_size=conv_block[i][0:2],
strides=conv_block[i][4:6],
padding="same",
activation=tf.nn.leaky_relu,
kernel_initializer="he_normal",
kernel_regularizer=regularizer,
name="discriminator_base_layers_conv2d_{}_{}x{}_{}_{}".format(
i,
conv_block[i][0],
conv_block[i][1],
conv_block[i][2],
conv_block[i][3]
)
)
for i in range(len(conv_block) - 1)
]
# Have valid padding for layer just before flatten and logits.
base_conv_layers.append(
tf.layers.Conv2D(
filters=conv_block[-1][3],
kernel_size=conv_block[-1][0:2],
strides=conv_block[-1][4:6],
padding="valid",
activation=tf.nn.leaky_relu,
kernel_initializer="he_normal",
kernel_regularizer=regularizer,
name="discriminator_base_layers_conv2d_{}_{}x{}_{}_{}".format(
len(conv_block) - 1,
conv_block[-1][0],
conv_block[-1][1],
conv_block[-1][2],
conv_block[-1][3]
)
)
)
print_obj(
"\ncreate_discriminator_base_conv_layer_block",
"base_conv_layers",
base_conv_layers
)
return base_conv_layers
def build_discriminator_base_conv_layer_block(base_conv_layers, params):
"""Creates discriminator base conv layer block.
Args:
base_conv_layers: list, base conv block's layers.
params: dict, user passed parameters.
Returns:
List of base conv tensors.
"""
with tf.variable_scope(
name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
# Get conv block layer properties.
conv_block = params["discriminator_base_conv_blocks"][0]
# Create list of base conv layer tensors.
base_conv_tensors = [
base_conv_layers[i](
inputs=tf.zeros(
shape=[1] + conv_block[i][0:3], dtype=tf.float32
)
)
for i in range(len(conv_block))
]
print_obj(
"\nbase_conv_layers",
"base_conv_tensors",
base_conv_tensors
)
return base_conv_tensors
def create_discriminator_growth_layer_block(regularizer, params, block_idx):
"""Creates discriminator growth block.
Args:
regularizer: `l1_l2_regularizer` object, regularizer for kernel
variables.
params: dict, user passed parameters.
block_idx: int, the current growth block's index.
Returns:
List of growth block layers.
"""
with tf.variable_scope(
name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
# Get conv block layer properties.
conv_block = params["discriminator_growth_conv_blocks"][block_idx]
# Create new inner convolutional layers.
conv_layers = [
tf.layers.Conv2D(
filters=conv_block[i][3],
kernel_size=conv_block[i][0:2],
strides=conv_block[i][4:6],
padding="same",
activation=tf.nn.leaky_relu,
kernel_initializer="he_normal",
kernel_regularizer=regularizer,
name="discriminator_growth_layers_conv2d_{}_{}_{}x{}_{}_{}".format(
block_idx,
i,
conv_block[i][0],
conv_block[i][1],
conv_block[i][2],
conv_block[i][3]
)
)
for i in range(len(conv_block))
]
print_obj(
"\ncreate_discriminator_growth_layer_block",
"conv_layers",
conv_layers
)
# Down sample from 2s X 2s to s X s image.
downsampled_image_layer = tf.layers.AveragePooling2D(
pool_size=(2, 2),
strides=(2, 2),
name="discriminator_growth_downsampled_image_{}".format(
block_idx
)
)
print_obj(
"create_discriminator_growth_layer_block",
"downsampled_image_layer",
downsampled_image_layer
)
return conv_layers + [downsampled_image_layer]
def build_discriminator_growth_layer_block(conv_layers, params, block_idx):
"""Creates discriminator growth block.
Args:
conv_layers: list, the current growth block's conv layers.
params: dict, user passed parameters.
block_idx: int, the current growth block's index.
Returns:
List of growth block tensors.
"""
with tf.variable_scope(
name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
# Get conv block layer properties.
conv_block = params["discriminator_growth_conv_blocks"][block_idx]
# Create new inner convolutional layers.
conv_tensors = [
conv_layers[i](
inputs=tf.zeros(
shape=[1] + conv_block[i][0:3], dtype=tf.float32
)
)
for i in range(len(conv_block))
]
print_obj(
"\nbuild_discriminator_growth_layer_block",
"conv_tensors",
conv_tensors
)
# Down sample from 2s X 2s to s X s image.
downsampled_image_tensor = tf.layers.AveragePooling2D(
pool_size=(2, 2),
strides=(2, 2),
name="discriminator_growth_downsampled_image_{}".format(
block_idx
)
)(inputs=conv_tensors[-1])
print_obj(
"build_discriminator_growth_layer_block",
"downsampled_image_tensor",
downsampled_image_tensor
)
return conv_tensors + [downsampled_image_tensor]
def create_discriminator_growth_transition_downsample_layers(params):
"""Creates discriminator growth transition downsample layers.
Args:
params: dict, user passed parameters.
Returns:
List of growth transition downsample layers.
"""
with tf.variable_scope(
name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
# Down sample from 2s X 2s to s X s image.
downsample_layers = [
tf.layers.AveragePooling2D(
pool_size=(2, 2),
strides=(2, 2),
name="discriminator_growth_transition_downsample_layer_{}".format(
layer_idx
)
)
for layer_idx in range(
1 + len(params["discriminator_growth_conv_blocks"])
)
]
print_obj(
"\ncreate_discriminator_growth_transition_downsample_layers",
"downsample_layers",
downsample_layers
)
return downsample_layers
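Each 2x2, stride-2 average pool above halves height and width, taking the 2s x 2s image from the growing side back down to s x s for the existing layers. A pure-Python sketch of 2x2 average pooling on a single channel:

```python
# Pure-Python sketch of the 2x2/stride-2 average pooling used for the
# discriminator's downsample layers: each output pixel is the mean of a
# non-overlapping 2x2 patch, halving height and width.

def avg_pool_2x2(image):
    """image: list of rows with even height/width. Returns pooled rows."""
    return [
        [
            (image[r][c] + image[r][c + 1]
             + image[r + 1][c] + image[r + 1][c + 1]) / 4.0
            for c in range(0, len(image[0]), 2)
        ]
        for r in range(0, len(image), 2)
    ]

img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]
print(avg_pool_2x2(img))  # [[1.0, 2.0], [3.0, 4.0]]
```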
def create_discriminator_logits_layer(regularizer):
"""Creates discriminator flatten and logits layer.
Args:
regularizer: `l1_l2_regularizer` object, regularizer for kernel
variables.
Returns:
Flatten and logits layers of discriminator.
"""
with tf.variable_scope(
name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
# Flatten layer to get final block conv tensor ready for dense layer.
flatten_layer = tf.layers.Flatten(name="discriminator_flatten_layer")
print_obj(
"\ncreate_discriminator_logits_layer",
"flatten_layer",
flatten_layer
)
# Final linear layer for logits.
logits_layer = tf.layers.Dense(
units=1,
activation=None,
kernel_regularizer=regularizer,
name="discriminator_layers_dense_logits"
)
print_obj(
"create_growth_transition_discriminator_network",
"logits_layer",
logits_layer
)
return flatten_layer, logits_layer
def build_discriminator_logits_layer(flatten_layer, logits_layer, params):
"""Builds flatten and logits layer internals using call.
Args:
flatten_layer: `Flatten` layer.
logits_layer: `Dense` layer for logits.
params: dict, user passed parameters.
Returns:
Final logits tensor of discriminator.
"""
with tf.variable_scope(
name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
block_conv_size = params["discriminator_base_conv_blocks"][-1][-1][3]
# Flatten final block conv tensor.
block_conv_flat = flatten_layer(
inputs=tf.zeros(
shape=[1, 1, 1, block_conv_size],
dtype=tf.float32
)
)
print_obj(
"build_discriminator_logits_layer",
"block_conv_flat",
block_conv_flat
)
# Final linear layer for logits.
logits = logits_layer(inputs=block_conv_flat)
print_obj("build_discriminator_logits_layer", "logits", logits)
return logits
def use_discriminator_logits_layer(
block_conv, flatten_layer, logits_layer, params):
"""Uses flatten and logits layers to get logits tensor.
Args:
block_conv: tensor, output of last conv layer of discriminator.
flatten_layer: `Flatten` layer.
logits_layer: `Dense` layer for logits.
params: dict, user passed parameters.
Returns:
Final logits tensor of discriminator.
"""
print_obj("\nuse_discriminator_logits_layer", "block_conv", block_conv)
# Set shape to remove ambiguity for dense layer.
block_conv.set_shape(
[
block_conv.get_shape()[0],
params["generator_projection_dims"][0] / 4,
params["generator_projection_dims"][1] / 4,
block_conv.get_shape()[-1]]
)
print_obj("use_discriminator_logits_layer", "block_conv", block_conv)
with tf.variable_scope(name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
# Flatten final block conv tensor.
block_conv_flat = flatten_layer(inputs=block_conv)
print_obj(
"use_discriminator_logits_layer",
"block_conv_flat",
block_conv_flat
)
# Final linear layer for logits.
logits = logits_layer(inputs=block_conv_flat)
print_obj("use_discriminator_logits_layer", "logits", logits)
return logits
def create_base_discriminator_network(
X, from_rgb_conv_layers, blocks, flatten_layer, logits_layer, params):
"""Creates base discriminator network.
Args:
X: tensor, input image to discriminator.
from_rgb_conv_layers: list, fromRGB 1x1 conv layers.
blocks: list, lists of block layers for each block.
flatten_layer: `Flatten` layer.
logits_layer: `Dense` layer for logits.
params: dict, user passed parameters.
Returns:
Final logits tensor of discriminator.
"""
print_obj("\ncreate_base_discriminator_network", "X", X)
with tf.variable_scope(name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
# Only need the first fromRGB conv layer and block for base network.
from_rgb_conv_layer = from_rgb_conv_layers[0]
block_layers = blocks[0]
# Pass inputs through layer chain.
from_rgb_conv = from_rgb_conv_layer(inputs=X)
print_obj(
"create_base_discriminator_network",
"from_rgb_conv",
from_rgb_conv
)
block_conv = from_rgb_conv
for i in range(len(block_layers)):
block_conv = block_layers[i](inputs=block_conv)
print_obj(
"create_base_discriminator_network", "block_conv", block_conv
)
# Get logits now.
logits = use_discriminator_logits_layer(
block_conv=block_conv,
flatten_layer=flatten_layer,
logits_layer=logits_layer,
params=params
)
print_obj("create_base_discriminator_network", "logits", logits)
return logits
def create_growth_transition_discriminator_network(
X,
from_rgb_conv_layers,
blocks,
transition_downsample_layers,
flatten_layer,
logits_layer,
alpha_var,
params,
trans_idx):
"""Creates base discriminator network.
Args:
X: tensor, input image to discriminator.
from_rgb_conv_layers: list, fromRGB 1x1 conv layers.
blocks: list, lists of block layers for each block.
transition_downsample_layers: list, downsample layers for transition.
flatten_layer: `Flatten` layer.
logits_layer: `Dense` layer for logits.
alpha_var: variable, alpha for weighted sum of fade-in of layers.
params: dict, user passed parameters.
trans_idx: int, index of current growth transition.
Returns:
Final logits tensor of discriminator.
"""
print_obj(
"\nEntered create_growth_transition_discriminator_network",
"trans_idx",
trans_idx
)
print_obj("create_growth_transition_discriminator_network", "X", X)
with tf.variable_scope(
name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
# Growing side chain.
growing_from_rgb_conv_layer = from_rgb_conv_layers[trans_idx + 1]
growing_block_layers = blocks[trans_idx + 1]
# Pass inputs through layer chain.
growing_block_conv = growing_from_rgb_conv_layer(inputs=X)
print_obj(
"\ncreate_growth_transition_discriminator_network",
"growing_block_conv",
growing_block_conv
)
for i in range(len(growing_block_layers)):
growing_block_conv = growing_block_layers[i](
inputs=growing_block_conv
)
print_obj(
"create_growth_transition_discriminator_network",
"growing_block_conv",
growing_block_conv
)
# Shrinking side chain.
transition_downsample_layer = transition_downsample_layers[trans_idx]
shrinking_from_rgb_conv_layer = from_rgb_conv_layers[trans_idx]
# Pass inputs through layer chain.
transition_downsample = transition_downsample_layer(inputs=X)
print_obj(
"create_growth_transition_discriminator_network",
"transition_downsample",
transition_downsample
)
shrinking_from_rgb_conv = shrinking_from_rgb_conv_layer(
inputs=transition_downsample
)
print_obj(
"create_growth_transition_discriminator_network",
"shrinking_from_rgb_conv",
shrinking_from_rgb_conv
)
# Weighted sum.
weighted_sum = tf.add(
x=growing_block_conv * alpha_var,
y=shrinking_from_rgb_conv * (1.0 - alpha_var),
name="growth_transition_weighted_sum_{}".format(trans_idx)
)
print_obj(
"create_growth_transition_discriminator_network",
"weighted_sum",
weighted_sum
)
# Permanent blocks.
permanent_blocks = blocks[0:trans_idx + 1]
# Reverse order of blocks and flatten.
permanent_block_layers = [
item for sublist in permanent_blocks[::-1] for item in sublist
]
# Pass inputs through layer chain.
block_conv = weighted_sum
for i in range(len(permanent_block_layers)):
block_conv = permanent_block_layers[i](inputs=block_conv)
print_obj(
"create_growth_transition_discriminator_network",
"block_conv",
block_conv
)
# Get logits now.
logits = use_discriminator_logits_layer(
block_conv, flatten_layer, logits_layer, params
)
print_obj(
"create_growth_transition_discriminator_network", "logits", logits
)
return logits
def create_final_discriminator_network(
X, from_rgb_conv_layers, blocks, flatten_layer, logits_layer, params):
"""Creates base discriminator network.
Args:
X: tensor, input image to discriminator.
from_rgb_conv_layers: list, fromRGB 1x1 conv layers.
blocks: list, lists of block layers for each block.
flatten_layer: `Flatten` layer.
logits_layer: `Dense` layer for logits.
params: dict, user passed parameters.
Returns:
Final logits tensor of discriminator.
"""
print_obj("\ncreate_final_discriminator_network", "X", X)
with tf.variable_scope(
name_or_scope="discriminator", reuse=tf.AUTO_REUSE):
# Only need the last fromRGB conv layer.
from_rgb_conv_layer = from_rgb_conv_layers[-1]
# Reverse order of blocks and flatten.
block_layers = [item for sublist in blocks[::-1] for item in sublist]
# Pass inputs through layer chain.
block_conv = from_rgb_conv_layer(inputs=X)
print_obj(
"\ncreate_final_discriminator_network",
"block_conv",
block_conv
)
for i in range(len(block_layers)):
block_conv = block_layers[i](inputs=block_conv)
print_obj(
"create_final_discriminator_network", "block_conv", block_conv
)
# Get logits now.
logits = use_discriminator_logits_layer(
block_conv=block_conv,
flatten_layer=flatten_layer,
logits_layer=logits_layer,
params=params
)
print_obj("create_final_discriminator_network", "logits", logits)
return logits
def discriminator_network(X, alpha_var, params):
"""Creates discriminator network and returns logits.
Args:
X: tensor, image tensors of shape
[cur_batch_size, height, width, depth].
alpha_var: variable, alpha for weighted sum of fade-in of layers.
params: dict, user passed parameters.
Returns:
Logits tensor of shape [cur_batch_size, 1].
"""
print_obj("\ndiscriminator_network", "X", X)
# Create regularizer for layer kernel weights.
regularizer = tf.contrib.layers.l1_l2_regularizer(
scale_l1=params["discriminator_l1_regularization_scale"],
scale_l2=params["discriminator_l2_regularization_scale"]
)
# Create fromRGB 1x1 conv layers.
from_rgb_conv_layers = create_discriminator_from_rgb_layers(
regularizer=regularizer, params=params
)
print_obj(
"discriminator_network",
"from_rgb_conv_layers",
from_rgb_conv_layers
)
# Build fromRGB 1x1 conv layers internals through call.
from_rgb_conv_tensors = build_discriminator_from_rgb_layers(
from_rgb_conv_layers=from_rgb_conv_layers, params=params
)
print_obj(
"discriminator_network",
"from_rgb_conv_tensors",
from_rgb_conv_tensors
)
with tf.control_dependencies(control_inputs=from_rgb_conv_tensors):
# Create empty list to hold discriminator convolutional layer blocks.
block_layers = []
block_tensors = []
# Create base convolutional block's layers, for post-growth.
block_layers.append(
create_discriminator_base_conv_layer_block(
regularizer=regularizer, params=params
)
)
# Create base convolutional block's layer internals using call.
block_tensors.append(
build_discriminator_base_conv_layer_block(
base_conv_layers=block_layers[0], params=params
)
)
# Create growth layer blocks.
for block_idx in range(
len(params["discriminator_growth_conv_blocks"])):
block_layers.append(
create_discriminator_growth_layer_block(
regularizer=regularizer,
params=params,
block_idx=block_idx
)
)
print_obj("discriminator_network", "block_layers", block_layers)
# Build growth layer block internals through call.
for block_idx in range(
len(params["discriminator_growth_conv_blocks"])):
block_tensors.append(
build_discriminator_growth_layer_block(
conv_layers=block_layers[block_idx + 1],
params=params,
block_idx=block_idx
)
)
# Flatten block tensor lists of lists into list.
block_tensors = [item for sublist in block_tensors for item in sublist]
print_obj("discriminator_network", "block_tensors", block_tensors)
with tf.control_dependencies(control_inputs=block_tensors):
# Create list of transition downsample layers.
transition_downsample_layers = (
create_discriminator_growth_transition_downsample_layers(
params=params
)
)
print_obj(
"discriminator_network",
"transition_downsample_layers",
transition_downsample_layers
)
# Create flatten and logits layers.
flatten_layer, logits_layer = create_discriminator_logits_layer(
regularizer=regularizer
)
# Build logits layer internals using call.
logits_tensor = build_discriminator_logits_layer(
flatten_layer=flatten_layer,
logits_layer=logits_layer,
params=params
)
with tf.control_dependencies(control_inputs=[logits_tensor]):
# Create list of function calls for each training stage.
logits_list = utils.LazyList(
[
# 4x4
lambda: create_base_discriminator_network(
X=X,
from_rgb_conv_layers=from_rgb_conv_layers,
blocks=block_layers,
flatten_layer=flatten_layer,
logits_layer=logits_layer,
params=params
),
# 8x8
lambda: create_growth_transition_discriminator_network(
X=X,
from_rgb_conv_layers=from_rgb_conv_layers,
blocks=block_layers,
transition_downsample_layers=transition_downsample_layers,
flatten_layer=flatten_layer,
logits_layer=logits_layer,
alpha_var=alpha_var,
params=params,
trans_idx=0
),
# 16x16
lambda: create_growth_transition_discriminator_network(
X=X,
from_rgb_conv_layers=from_rgb_conv_layers,
blocks=block_layers,
transition_downsample_layers=transition_downsample_layers,
flatten_layer=flatten_layer,
logits_layer=logits_layer,
alpha_var=alpha_var,
params=params,
trans_idx=1
),
# 32x32
lambda: create_growth_transition_discriminator_network(
X=X,
from_rgb_conv_layers=from_rgb_conv_layers,
blocks=block_layers,
transition_downsample_layers=transition_downsample_layers,
flatten_layer=flatten_layer,
logits_layer=logits_layer,
alpha_var=alpha_var,
params=params,
trans_idx=2
),
# 64x64
lambda: create_growth_transition_discriminator_network(
X=X,
from_rgb_conv_layers=from_rgb_conv_layers,
blocks=block_layers,
transition_downsample_layers=transition_downsample_layers,
flatten_layer=flatten_layer,
logits_layer=logits_layer,
alpha_var=alpha_var,
params=params,
trans_idx=3
),
# 128x128
lambda: create_growth_transition_discriminator_network(
X=X,
from_rgb_conv_layers=from_rgb_conv_layers,
blocks=block_layers,
transition_downsample_layers=transition_downsample_layers,
flatten_layer=flatten_layer,
logits_layer=logits_layer,
alpha_var=alpha_var,
params=params,
trans_idx=4
),
# 256x256
lambda: create_growth_transition_discriminator_network(
X=X,
from_rgb_conv_layers=from_rgb_conv_layers,
blocks=block_layers,
transition_downsample_layers=transition_downsample_layers,
flatten_layer=flatten_layer,
logits_layer=logits_layer,
alpha_var=alpha_var,
params=params,
trans_idx=5
),
# 512x512
lambda: create_growth_transition_discriminator_network(
X=X,
from_rgb_conv_layers=from_rgb_conv_layers,
blocks=block_layers,
transition_downsample_layers=transition_downsample_layers,
flatten_layer=flatten_layer,
logits_layer=logits_layer,
alpha_var=alpha_var,
params=params,
trans_idx=6
),
# 1024x1024
lambda: create_growth_transition_discriminator_network(
X=X,
from_rgb_conv_layers=from_rgb_conv_layers,
blocks=block_layers,
transition_downsample_layers=transition_downsample_layers,
flatten_layer=flatten_layer,
logits_layer=logits_layer,
alpha_var=alpha_var,
params=params,
trans_idx=7
),
# 1024x1024 (stable, post-transition)
lambda: create_final_discriminator_network(
X=X,
from_rgb_conv_layers=from_rgb_conv_layers,
blocks=block_layers,
flatten_layer=flatten_layer,
logits_layer=logits_layer,
params=params
)
]
)
# Call function from list for logits at growth index.
logits = logits_list[params["growth_index"]]
return logits
def get_discriminator_loss(fake_logits, real_logits, params):
"""Gets discriminator loss.
Args:
fake_logits: tensor, shape of [cur_batch_size, 1].
real_logits: tensor, shape of [cur_batch_size, 1].
params: dict, user passed parameters.
Returns:
Tensor of discriminator's total loss of shape [].
"""
# Calculate base discriminator loss.
discriminator_real_loss = tf.reduce_mean(
input_tensor=real_logits,
name="discriminator_real_loss"
)
print_obj(
"\nget_discriminator_loss",
"discriminator_real_loss",
discriminator_real_loss
)
discriminator_generated_loss = tf.reduce_mean(
input_tensor=fake_logits,
name="discriminator_generated_loss"
)
print_obj(
"get_discriminator_loss",
"discriminator_generated_loss",
discriminator_generated_loss
)
discriminator_loss = tf.add(
x=discriminator_real_loss, y=-discriminator_generated_loss,
name="discriminator_loss"
)
print_obj(
"get_discriminator_loss",
"discriminator_loss",
discriminator_loss
)
# Get discriminator gradient penalty loss.
discriminator_gradients = tf.gradients(
ys=discriminator_loss,
xs=tf.trainable_variables(scope="discriminator"),
name="discriminator_gradients_for_penalty"
)
discriminator_gradient_penalty = tf.square(
x=tf.multiply(
x=params["discriminator_gradient_penalty_coefficient"],
y=tf.linalg.global_norm(
t_list=discriminator_gradients,
name="discriminator_gradients_global_norm"
) - 1.0
),
name="discriminator_gradient_penalty"
)
# Get discriminator Wasserstein GP loss.
discriminator_wasserstein_gp_loss = tf.add(
x=discriminator_loss,
y=discriminator_gradient_penalty,
name="discriminator_wasserstein_gp_loss"
)
# Get discriminator regularization losses.
discriminator_reg_loss = regularization.get_regularization_loss(
params=params, scope="discriminator"
)
print_obj(
"get_discriminator_loss",
"discriminator_reg_loss",
discriminator_reg_loss
)
# Combine losses for total losses.
discriminator_total_loss = tf.math.add(
x=discriminator_wasserstein_gp_loss,
y=discriminator_reg_loss,
name="discriminator_total_loss"
)
print_obj(
"get_discriminator_loss",
"discriminator_total_loss",
discriminator_total_loss
)
return discriminator_total_loss
# -
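The gradient-penalty term above squares `coefficient * (global_norm - 1)`, so a gradient norm of exactly 1 incurs no penalty and deviations grow quadratically. A minimal plain-Python sketch of that arithmetic as written in `get_discriminator_loss` (the coefficient and norm values below are illustrative, not taken from the trainer's params):

```python
def gradient_penalty(gp_coefficient, grad_norm):
    """Mirrors the penalty arithmetic above: (c * (||g|| - 1))^2."""
    return (gp_coefficient * (grad_norm - 1.0)) ** 2

# Unit norm incurs no penalty; deviations are penalized quadratically.
print(gradient_penalty(10.0, 1.0))  # 0.0
print(gradient_penalty(10.0, 1.5))  # 25.0
```

Note that this code (like the TF version above) places the coefficient inside the square, which differs from the usual WGAN-GP formulation where the coefficient scales the squared term from outside.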
# ## regularization.py
# +
# %%writefile pgan_loop_module/trainer/regularization.py
import tensorflow as tf
from .print_object import print_obj
def get_regularization_loss(params, scope=None):
"""Gets regularization losses from variables attached to a regularizer.
Args:
params: dict, user passed parameters.
scope: str, the name of the variable scope.
Returns:
Scalar regularization loss tensor.
"""
def sum_nd_tensor_list_to_scalar_tensor(t_list):
"""Sums different shape tensors into a scalar tensor.
Args:
t_list: list, tensors of varying shapes.
Returns:
Scalar tensor.
"""
# Sum list of tensors into a list of scalars.
t_reduce_sum_list = [
tf.reduce_sum(
# Remove the :0 from the end of the name.
input_tensor=t, name="{}_reduce_sum".format(t.name[:-2])
)
for t in t_list
]
print_obj(
"\nsum_nd_tensor_list_to_scalar_tensor",
"t_reduce_sum_list",
t_reduce_sum_list
)
# Add all scalars together into one scalar.
t_scalar_sum_tensor = tf.add_n(
inputs=t_reduce_sum_list,
name="{}_t_scalar_sum_tensor".format(scope)
)
print_obj(
"sum_nd_tensor_list_to_scalar_tensor",
"t_scalar_sum_tensor",
t_scalar_sum_tensor
)
return t_scalar_sum_tensor
print_obj("\nget_regularization_loss", "scope", scope)
lambda1 = params["discriminator_l1_regularization_scale"]
lambda2 = params["discriminator_l2_regularization_scale"]
if lambda1 <= 0. and lambda2 <= 0.:
# No regularization so return zero.
return tf.zeros(shape=[], dtype=tf.float32)
# Get list of trainable variables with a regularizer attached in scope.
trainable_reg_vars_list = tf.get_collection(
tf.GraphKeys.REGULARIZATION_LOSSES, scope=scope)
print_obj(
"get_regularization_loss",
"trainable_reg_vars_list",
trainable_reg_vars_list
)
for var in trainable_reg_vars_list:
print_obj(
"get_regularization_loss_{}".format(scope),
"{}".format(var.name),
var.graph
)
l1_loss = 0.
if lambda1 > 0.:
# For L1 regularization, take the absolute value element-wise of each.
trainable_reg_vars_abs_list = [
tf.abs(
x=var,
# Clean up regularizer scopes in variable names.
name="{}_abs".format(("/").join(var.name.split("/")[0:3]))
)
for var in trainable_reg_vars_list
]
# Get L1 loss
l1_loss = tf.multiply(
x=lambda1,
y=sum_nd_tensor_list_to_scalar_tensor(
t_list=trainable_reg_vars_abs_list
),
name="{}_l1_loss".format(scope)
)
l2_loss = 0.
if lambda2 > 0.:
# For L2 regularization, square all variables element-wise.
trainable_reg_vars_squared_list = [
tf.square(
x=var,
# Clean up regularizer scopes in variable names.
name="{}_squared".format(("/").join(var.name.split("/")[0:3]))
)
for var in trainable_reg_vars_list
]
print_obj(
"get_regularization_loss",
"trainable_reg_vars_squared_list",
trainable_reg_vars_squared_list
)
# Get L2 loss
l2_loss = tf.multiply(
x=lambda2,
y=sum_nd_tensor_list_to_scalar_tensor(
t_list=trainable_reg_vars_squared_list
),
name="{}_l2_loss".format(scope)
)
l1_l2_loss = tf.add(
x=l1_loss, y=l2_loss, name="{}_l1_l2_loss".format(scope)
)
return l1_l2_loss
# -
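Stripped of the TF graph plumbing, `get_regularization_loss` reduces to `lambda1 * sum(|w|) + lambda2 * sum(w^2)` over every regularized weight tensor. A NumPy sketch of that reduction (the weight values are made up for illustration):

```python
import numpy as np

def l1_l2_loss(weights, lambda1, lambda2):
    """Sums |w| and w^2 across tensors of varying shapes, then scales each sum."""
    l1 = lambda1 * sum(np.abs(w).sum() for w in weights) if lambda1 > 0.0 else 0.0
    l2 = lambda2 * sum(np.square(w).sum() for w in weights) if lambda2 > 0.0 else 0.0
    return l1 + l2

weights = [np.array([[1.0, -2.0]]), np.array([3.0])]
# sum|w| = 6, sum w^2 = 14, so loss = 0.01 * 6 + 0.001 * 14 = 0.074.
print(l1_l2_loss(weights, 0.01, 0.001))
```

As in the TF version, both scales at or below zero short-circuit to a zero loss.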
# ## pgan.py
# +
# %%writefile pgan_loop_module/trainer/pgan.py
import tensorflow as tf
from . import discriminator
from . import generator
from . import utils
from .print_object import print_obj
def train_network(loss, global_step, alpha_var, params, scope):
"""Trains network and returns loss and train op.
Args:
loss: tensor, shape of [].
global_step: tensor, the current training step or batch in the
training loop.
alpha_var: variable, alpha for weighted sum of fade-in of layers.
params: dict, user passed parameters.
scope: str, name of the variable scope whose variables to train.
Returns:
Loss tensor and training op.
"""
# Create optimizer map.
optimizers = {
"Adam": tf.train.AdamOptimizer,
"Adadelta": tf.train.AdadeltaOptimizer,
"AdagradDA": tf.train.AdagradDAOptimizer,
"Adagrad": tf.train.AdagradOptimizer,
"Ftrl": tf.train.FtrlOptimizer,
"GradientDescent": tf.train.GradientDescentOptimizer,
"Momentum": tf.train.MomentumOptimizer,
"ProximalAdagrad": tf.train.ProximalAdagradOptimizer,
"ProximalGradientDescent": tf.train.ProximalGradientDescentOptimizer,
"RMSProp": tf.train.RMSPropOptimizer
}
# Get gradients.
gradients = tf.gradients(
ys=loss,
xs=tf.trainable_variables(scope=scope),
name="{}_gradients".format(scope)
)
# Clip gradients.
if params["{}_clip_gradients".format(scope)]:
gradients, _ = tf.clip_by_global_norm(
t_list=gradients,
clip_norm=params["{}_clip_gradients".format(scope)],
name="{}_clip_by_global_norm_gradients".format(scope)
)
# Zip back together gradients and variables.
grads_and_vars = zip(gradients, tf.trainable_variables(scope=scope))
# Get optimizer and instantiate it.
optimizer = optimizers[params["{}_optimizer".format(scope)]](
learning_rate=params["{}_learning_rate".format(scope)]
)
# Create train op by applying gradients to variables and incrementing
# global step.
train_op = optimizer.apply_gradients(
grads_and_vars=grads_and_vars,
global_step=global_step,
name="{}_apply_gradients".format(scope)
)
# Update alpha variable to linearly scale from 0 to 1 based on steps.
alpha_var_update_op = tf.assign(
ref=alpha_var,
value=tf.divide(
x=tf.cast(
x=tf.mod(x=global_step, y=params["num_steps_until_growth"]),
dtype=tf.float32
),
y=params["num_steps_until_growth"]
)
)
# Ensure alpha variable gets updated.
with tf.control_dependencies(control_inputs=[alpha_var_update_op]):
loss = tf.identity(input=loss, name="train_network_loss_identity")
return loss, train_op
def resize_real_image(block_idx, image, params):
"""Resizes real images to match the GAN's current size.
Args:
block_idx: int, index of current block.
image: tensor, original image.
params: dict, user passed parameters.
Returns:
Resized image tensor.
"""
print_obj("\nresize_real_image", "block_idx", block_idx)
print_obj("resize_real_image", "image", image)
# Resize image to match GAN size at current block index.
resized_image = tf.image.resize(
images=image,
size=[
params["generator_projection_dims"][0] * (2 ** block_idx),
params["generator_projection_dims"][1] * (2 ** block_idx)
],
method="nearest",
name="resize_real_images_resized_image_{}".format(block_idx)
)
print_obj("resize_real_images", "resized_image", resized_image)
return resized_image
def resize_real_images(image, params):
"""Resizes real images to match the GAN's current size.
Args:
image: tensor, original image.
params: dict, user passed parameters.
Returns:
Resized image tensor.
"""
print_obj("\nresize_real_images", "image", image)
# Resize real image for each block.
# Create list of function calls for each training stage.
resized_image_list = utils.LazyList(
[
lambda: resize_real_image(0, image, params), # 4x4
lambda: resize_real_image(1, image, params), # 8x8
lambda: resize_real_image(2, image, params), # 16x16
lambda: resize_real_image(3, image, params), # 32x32
lambda: resize_real_image(4, image, params), # 64x64
lambda: resize_real_image(5, image, params), # 128x128
lambda: resize_real_image(6, image, params), # 256x256
lambda: resize_real_image(7, image, params), # 512x512
lambda: resize_real_image(8, image, params), # 1024x1024
]
)
# Calculate index to choose the correct resizing.
index = (
len(params["conv_num_filters"]) - 1
if params["growth_index"] == -1 else params["growth_index"]
)
# Call function from list for resized image at index.
resized_image = resized_image_list[index]
print_obj(
"resize_real_images", "selected resized_image", resized_image
)
return resized_image
def pgan_model(features, labels, mode, params):
"""Progressively Growing GAN custom Estimator model function.
Args:
features: dict, keys are feature names and values are feature tensors.
labels: tensor, label data.
mode: tf.estimator.ModeKeys with values of either TRAIN, EVAL, or
PREDICT.
params: dict, user passed parameters.
Returns:
Instance of `tf.estimator.EstimatorSpec` class.
"""
print_obj("\npgan_model", "features", features)
print_obj("pgan_model", "labels", labels)
print_obj("pgan_model", "mode", mode)
print_obj("pgan_model", "params", params)
# Loss function, training/eval ops, etc.
predictions_dict = None
loss = None
train_op = None
eval_metric_ops = None
export_outputs = None
# Create alpha variable to use for weighted sum for smooth fade-in.
alpha_var = tf.get_variable(
name="alpha_var",
dtype=tf.float32,
initializer=tf.zeros(shape=[], dtype=tf.float32),
trainable=False
)
print_obj("pgan_model", "alpha_var", alpha_var)
if mode == tf.estimator.ModeKeys.PREDICT:
# Extract given latent vectors from features dictionary.
Z = tf.cast(x=features["Z"], dtype=tf.float32)
# Get predictions from generator.
generated_images = generator.generator_network(
Z=Z, alpha_var=alpha_var, params=params
)
# Create predictions dictionary.
predictions_dict = {
"generated_images": generated_images
}
# Create export outputs.
export_outputs = {
"predict_export_outputs": tf.estimator.export.PredictOutput(
outputs=predictions_dict)
}
else:
# Extract image from features dictionary.
X = features["image"]
# Get dynamic batch size in case of partial batch.
cur_batch_size = tf.shape(
input=X,
out_type=tf.int32,
name="pgan_model_cur_batch_size"
)[0]
# Create random noise latent vector for each batch example.
Z = tf.random.normal(
shape=[cur_batch_size, params["latent_size"]],
mean=0.0,
stddev=1.0,
dtype=tf.float32
)
# Get generated image from generator network from gaussian noise.
print("\nCall generator with Z = {}.".format(Z))
generator_outputs = generator.generator_network(
Z=Z, alpha_var=alpha_var, params=params
)
# Get fake logits from discriminator using generator's output image.
print("\nCall discriminator with generator_outputs = {}.".format(
generator_outputs
))
fake_logits = discriminator.discriminator_network(
X=generator_outputs, alpha_var=alpha_var, params=params
)
# Resize real images based on the current size of the GAN.
real_image = resize_real_images(X, params)
# Get real logits from discriminator using real image.
print("\nCall discriminator with real_image = {}.".format(
real_image
))
real_logits = discriminator.discriminator_network(
X=real_image, alpha_var=alpha_var, params=params
)
# Get generator total loss.
generator_total_loss = generator.get_generator_loss(
fake_logits=fake_logits, params=params
)
# Get discriminator total loss.
discriminator_total_loss = discriminator.get_discriminator_loss(
fake_logits=fake_logits, real_logits=real_logits, params=params
)
if mode == tf.estimator.ModeKeys.TRAIN:
# Get global step.
global_step = tf.train.get_or_create_global_step()
# Determine if it is time to train generator or discriminator.
cycle_step = tf.mod(
x=global_step,
y=tf.cast(
x=tf.add(
x=params["generator_train_steps"],
y=params["discriminator_train_steps"]
),
dtype=tf.int64
),
name="pgan_model_cycle_step"
)
# Create choose generator condition.
condition = tf.less(
x=cycle_step, y=params["generator_train_steps"]
)
# Needed for batch normalization, but has no effect otherwise.
update_ops = tf.get_collection(key=tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(control_inputs=update_ops):
# Conditionally choose to train generator or discriminator.
loss, train_op = tf.cond(
pred=condition,
true_fn=lambda: train_network(
loss=generator_total_loss,
global_step=global_step,
alpha_var=alpha_var,
params=params,
scope="generator"
),
false_fn=lambda: train_network(
loss=discriminator_total_loss,
global_step=global_step,
alpha_var=alpha_var,
params=params,
scope="discriminator"
)
)
else:
loss = discriminator_total_loss
# Concatenate discriminator logits and labels.
discriminator_logits = tf.concat(
values=[real_logits, fake_logits],
axis=0,
name="discriminator_concat_logits"
)
discriminator_labels = tf.concat(
values=[
tf.ones_like(tensor=real_logits),
tf.zeros_like(tensor=fake_logits)
],
axis=0,
name="discriminator_concat_labels"
)
# Calculate discriminator probabilities.
discriminator_probabilities = tf.nn.sigmoid(
x=discriminator_logits, name="discriminator_probabilities"
)
# Create eval metric ops dictionary.
eval_metric_ops = {
"accuracy": tf.metrics.accuracy(
labels=discriminator_labels,
predictions=discriminator_probabilities,
name="pgan_model_accuracy"
),
"precision": tf.metrics.precision(
labels=discriminator_labels,
predictions=discriminator_probabilities,
name="pgan_model_precision"
),
"recall": tf.metrics.recall(
labels=discriminator_labels,
predictions=discriminator_probabilities,
name="pgan_model_recall"
),
"auc_roc": tf.metrics.auc(
labels=discriminator_labels,
predictions=discriminator_probabilities,
num_thresholds=200,
curve="ROC",
name="pgan_model_auc_roc"
),
"auc_pr": tf.metrics.auc(
labels=discriminator_labels,
predictions=discriminator_probabilities,
num_thresholds=200,
curve="PR",
name="pgan_model_auc_pr"
)
}
# Return EstimatorSpec
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops,
export_outputs=export_outputs
)
# -
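The alpha update op in `train_network` ramps alpha linearly from 0 toward 1 within each growth phase and resets it at every growth boundary: `alpha = (global_step mod num_steps_until_growth) / num_steps_until_growth`. A plain-Python sketch of that schedule:

```python
def fade_in_alpha(global_step, num_steps_until_growth):
    """Linear fade-in weight that resets to 0 at each growth boundary."""
    return (global_step % num_steps_until_growth) / num_steps_until_growth

# Over a 100-step phase, alpha climbs from 0.0 and resets at step 100.
print(fade_in_alpha(0, 100))    # 0.0
print(fade_in_alpha(50, 100))   # 0.5
print(fade_in_alpha(100, 100))  # 0.0 (new phase begins)
```

Because the modulus resets to zero exactly at the boundary, alpha never actually reaches 1.0; the weighted sum hands fully over to the grown layers only once the next stable stage begins.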
# ## serving.py
# +
# %%writefile pgan_loop_module/trainer/serving.py
import tensorflow as tf
from .print_object import print_obj
def serving_input_fn(params):
"""Serving input function.
Args:
params: dict, user passed parameters.
Returns:
ServingInputReceiver object containing features and receiver tensors.
"""
# Create placeholders to accept data sent to the model at serving time.
# shape = (batch_size,)
feature_placeholders = {
"Z": tf.placeholder(
dtype=tf.float32,
shape=[None, params["latent_size"]],
name="serving_input_placeholder_Z"
)
}
print_obj(
"\nserving_input_fn",
"feature_placeholders",
feature_placeholders
)
# Create clones of the feature placeholder tensors so that the SavedModel
# SignatureDef will point to the placeholder.
features = {
key: tf.identity(
input=value,
name="serving_input_fn_identity_placeholder_{}".format(key)
)
for key, value in feature_placeholders.items()
}
print_obj(
"serving_input_fn",
"features",
features
)
return tf.estimator.export.ServingInputReceiver(
features=features, receiver_tensors=feature_placeholders
)
# -
# ## model.py
# +
# %%writefile pgan_loop_module/trainer/model.py
import tensorflow as tf
from . import input
from . import serving
from . import pgan
def train_and_evaluate(args):
"""Trains and evaluates custom Estimator model.
Args:
args: dict, user passed parameters.
"""
# Set logging to be level of INFO.
tf.logging.set_verbosity(tf.logging.INFO)
# Create exporter to save out the complete model to disk.
exporter = tf.estimator.LatestExporter(
name="exporter",
serving_input_receiver_fn=lambda: serving.serving_input_fn(args)
)
# Create eval spec to read in our validation data and export our model.
eval_spec = tf.estimator.EvalSpec(
input_fn=input.read_dataset(
filename=args["eval_file_pattern"],
mode=tf.estimator.ModeKeys.EVAL,
batch_size=args["eval_batch_size"],
params=args
),
steps=args["eval_steps"],
start_delay_secs=args["start_delay_secs"],
throttle_secs=args["throttle_secs"],
exporters=exporter
)
# Determine number of training stages.
num_stages = min(
args["train_steps"] // args["num_steps_until_growth"] + 1,
len(args["conv_num_filters"])
)
# Train estimator for each stage.
for i in range(num_stages):
# Determine number of training steps from last checkpoint.
train_steps = args["num_steps_until_growth"] * (i + 1)
# Create train spec to read in our training data.
train_spec = tf.estimator.TrainSpec(
input_fn=input.read_dataset(
filename=args["train_file_pattern"],
mode=tf.estimator.ModeKeys.TRAIN,
batch_size=args["train_batch_size"],
params=args
),
max_steps=train_steps
)
# Set growth index.
args["growth_index"] = i
print(
"\n\n\nTRAINING MODEL FOR {} STEPS WITH GROWTH INDEX = {}".format(
args["num_steps_until_growth"], args["growth_index"]
)
)
# Instantiate estimator.
estimator = tf.estimator.Estimator(
model_fn=pgan.pgan_model,
model_dir=args["output_dir"],
params=args
)
# Create train and evaluate loop to train and evaluate our estimator.
tf.estimator.train_and_evaluate(
estimator=estimator, train_spec=train_spec, eval_spec=eval_spec)
# Determine if more training is needed using final stage block.
train_steps_completed = num_stages * args["num_steps_until_growth"]
train_steps_remaining = args["train_steps"] - train_steps_completed
# Train for any remaining steps.
if train_steps_remaining > 0:
# Create train spec to read in our training data.
train_spec = tf.estimator.TrainSpec(
input_fn=input.read_dataset(
filename=args["train_file_pattern"],
mode=tf.estimator.ModeKeys.TRAIN,
batch_size=args["train_batch_size"],
params=args
),
max_steps=args["train_steps"]
)
# Set growth index.
args["growth_index"] = -1 if len(args["conv_num_filters"]) > 1 else 0
print(
"\n\n\nTRAINING MODEL MORE USING FINAL BLOCK FOR REMAINING {} STEPS AT GROWTH INDEX {}".format(
train_steps_remaining, args["growth_index"]
)
)
# Instantiate estimator.
estimator = tf.estimator.Estimator(
model_fn=pgan.pgan_model,
model_dir=args["output_dir"],
params=args
)
# Create train and evaluate loop to train and evaluate our estimator.
tf.estimator.train_and_evaluate(
estimator=estimator, train_spec=train_spec, eval_spec=eval_spec)
# -
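The staged loop in `train_and_evaluate` derives the number of growth stages and any leftover steps from `train_steps` and `num_steps_until_growth`, capped by the number of conv blocks. The arithmetic in isolation (values illustrative; the clamp to 0 mirrors the `if train_steps_remaining > 0` guard above):

```python
def training_plan(train_steps, num_steps_until_growth, num_blocks):
    """Returns (num_stages, remaining_steps) per the staged-training arithmetic."""
    num_stages = min(train_steps // num_steps_until_growth + 1, num_blocks)
    remaining = train_steps - num_stages * num_steps_until_growth
    return num_stages, max(remaining, 0)

# 1000 steps, growth every 100 steps, 9 blocks: 9 stages, then 100 extra
# steps trained at the final (stable) growth index.
print(training_plan(1000, 100, 9))  # (9, 100)
```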
# ## task.py
# +
# %%writefile pgan_loop_module/trainer/task.py
import argparse
import json
import os
from . import model
def calc_generator_discriminator_conv_layer_properties(
conv_num_filters, conv_kernel_sizes, conv_strides, depth):
"""Calculates generator and discriminator conv layer properties.
Args:
conv_num_filters: list, nested list of ints of the number of filters
for each conv layer.
conv_kernel_sizes: list, nested list of ints of the kernel sizes for
each conv layer.
conv_strides: list, nested list of ints of the strides for each conv
layer.
depth: int, depth dimension of images.
Returns:
Nested lists of conv layer properties for both generator and
discriminator.
"""
def make_generator(num_filters, kernel_sizes, strides, depth):
"""Calculates generator conv layer properties.
Args:
num_filters: list, nested list of ints of the number of filters
for each conv layer.
kernel_sizes: list, nested list of ints of the kernel sizes for
each conv layer.
strides: list, nested list of ints of the strides for each conv
layer.
depth: int, depth dimension of images.
Returns:
Nested list of conv layer properties for generator.
"""
# Get the number of growths.
num_growths = len(num_filters) - 1
# Make base block.
in_out = num_filters[0]
base = [
[kernel_sizes[0][i]] * 2 + in_out + [strides[0][i]] * 2
for i in range(len(num_filters[0]))
]
blocks = [base]
# Add growth blocks.
for i in range(1, num_growths + 1):
in_out = [[blocks[i - 1][-1][-3], num_filters[i][0]]]
block = [[kernel_sizes[i][0]] * 2 + in_out[0] + [strides[i][0]] * 2]
for j in range(1, len(num_filters[i])):
in_out.append([block[-1][-3], num_filters[i][j]])
block.append(
[kernel_sizes[i][j]] * 2 + in_out[j] + [strides[i][j]] * 2
)
blocks.append(block)
# Add toRGB conv.
blocks[-1].append([1, 1, blocks[-1][-1][-3], depth] + [1] * 2)
return blocks
def make_discriminator(generator):
"""Calculates discriminator conv layer properties.
Args:
generator: list, nested list of conv layer properties for
generator.
Returns:
Nested list of conv layer properties for discriminator.
"""
# Reverse generator.
discriminator = generator[::-1]
# Reverse input and output shapes.
discriminator = [
[
conv[0:2] + conv[2:4][::-1] + conv[-2:]
for conv in block[::-1]
]
for block in discriminator
]
return discriminator
# Calculate conv layer properties for generator using args.
generator = make_generator(
conv_num_filters, conv_kernel_sizes, conv_strides, depth
)
# Calculate conv layer properties for discriminator using generator
# properties.
discriminator = make_discriminator(generator)
return generator, discriminator
def split_up_generator_conv_layer_properties(
generator, num_filters, strides, depth):
"""Splits up generator conv layer properties into lists.
Args:
generator: list, nested list of conv layer properties for
generator.
num_filters: list, nested list of ints of the number of filters
for each conv layer.
strides: list, nested list of ints of the strides for each conv
layer.
depth: int, depth dimension of images.
Returns:
Nested lists of conv layer properties for generator.
"""
generator_base_conv_blocks = [generator[0][0:len(num_filters[0])]]
generator_growth_conv_blocks = []
if len(num_filters) > 1:
generator_growth_conv_blocks = generator[1:-1] + [generator[-1][:-1]]
generator_to_rgb_layers = [
[[1] * 2 + [num_filters[i][0]] + [depth] + [strides[i][0]] * 2]
for i in range(len(num_filters))
]
return (generator_base_conv_blocks,
generator_growth_conv_blocks,
generator_to_rgb_layers)
def split_up_discriminator_conv_layer_properties(
discriminator, num_filters, strides, depth):
"""Splits up discriminator conv layer properties into lists.
Args:
discriminator: list, nested list of conv layer properties for
discriminator.
num_filters: list, nested list of ints of the number of filters
for each conv layer.
strides: list, nested list of ints of the strides for each conv
layer.
depth: int, depth dimension of images.
Returns:
Nested lists of conv layer properties for discriminator.
"""
discriminator_from_rgb_layers = [
[[1] * 2 + [depth] + [num_filters[i][0]] + [strides[i][0]] * 2]
for i in range(len(num_filters))
]
if len(num_filters) > 1:
discriminator_base_conv_blocks = [discriminator[-1]]
else:
discriminator_base_conv_blocks = [discriminator[-1][1:]]
discriminator_growth_conv_blocks = []
if len(num_filters) > 1:
discriminator_growth_conv_blocks = [discriminator[0][1:]] + discriminator[1:-1]
discriminator_growth_conv_blocks = discriminator_growth_conv_blocks[::-1]
return (discriminator_from_rgb_layers,
discriminator_base_conv_blocks,
discriminator_growth_conv_blocks)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
# File arguments.
parser.add_argument(
"--train_file_pattern",
help="GCS location to read training data.",
required=True
)
parser.add_argument(
"--eval_file_pattern",
help="GCS location to read evaluation data.",
required=True
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models.",
required=True
)
parser.add_argument(
"--job-dir",
help="This model ignores this field, but it is required by gcloud.",
default="junk"
)
# Training parameters.
parser.add_argument(
"--train_batch_size",
help="Number of examples in training batch.",
type=int,
default=32
)
parser.add_argument(
"--train_steps",
help="Number of steps to train for.",
type=int,
default=100
)
# Eval parameters.
parser.add_argument(
"--eval_batch_size",
help="Number of examples in evaluation batch.",
type=int,
default=32
)
parser.add_argument(
"--eval_steps",
help="Number of steps to evaluate for.",
type=str,
default="None"
)
parser.add_argument(
"--start_delay_secs",
help="Number of seconds to wait before first evaluation.",
type=int,
default=60
)
parser.add_argument(
"--throttle_secs",
help="Number of seconds to wait between evaluations.",
type=int,
default=120
)
# Image parameters.
parser.add_argument(
"--height",
help="Height of image.",
type=int,
default=32
)
parser.add_argument(
"--width",
help="Width of image.",
type=int,
default=32
)
parser.add_argument(
"--depth",
help="Depth of image.",
type=int,
default=3
)
# Shared parameters.
parser.add_argument(
"--num_steps_until_growth",
help="Number of steps until layer added to generator & discriminator.",
type=int,
default=100
)
parser.add_argument(
"--conv_num_filters",
help="Number of filters for growth conv layers.",
type=str,
default="512,512;512,512"
)
parser.add_argument(
"--conv_kernel_sizes",
help="Kernel sizes for growth conv layers.",
type=str,
default="3,3;3,3"
)
parser.add_argument(
"--conv_strides",
help="Strides for growth conv layers.",
type=str,
default="1,1;1,1"
)
# Generator parameters.
parser.add_argument(
"--latent_size",
help="The latent size of the noise vector.",
type=int,
default=3
)
parser.add_argument(
"--generator_projection_dims",
help="The 3D dimensions to project latent noise vector into.",
type=str,
default="8,8,256"
)
parser.add_argument(
"--generator_l1_regularization_scale",
help="Scale factor for L1 regularization for generator.",
type=float,
default=0.0
)
parser.add_argument(
"--generator_l2_regularization_scale",
help="Scale factor for L2 regularization for generator.",
type=float,
default=0.0
)
parser.add_argument(
"--generator_optimizer",
help="Name of optimizer to use for generator.",
type=str,
default="Adam"
)
parser.add_argument(
"--generator_learning_rate",
help="How quickly we train our model by scaling the gradient for generator.",
type=float,
default=0.1
)
parser.add_argument(
"--generator_clip_gradients",
    help="Global clipping to prevent the gradient norm from exceeding this value for generator.",
type=str,
default="None"
)
parser.add_argument(
"--generator_train_steps",
help="Number of steps to train generator for per cycle.",
type=int,
default=100
)
# Discriminator parameters.
parser.add_argument(
"--discriminator_l1_regularization_scale",
help="Scale factor for L1 regularization for discriminator.",
type=float,
default=0.0
)
parser.add_argument(
"--discriminator_l2_regularization_scale",
help="Scale factor for L2 regularization for discriminator.",
type=float,
default=0.0
)
parser.add_argument(
"--discriminator_optimizer",
help="Name of optimizer to use for discriminator.",
type=str,
default="Adam"
)
parser.add_argument(
"--discriminator_learning_rate",
help="How quickly we train our model by scaling the gradient for discriminator.",
type=float,
default=0.1
)
parser.add_argument(
"--discriminator_clip_gradients",
    help="Global clipping to prevent the gradient norm from exceeding this value for discriminator.",
type=str,
default="None"
)
parser.add_argument(
"--discriminator_gradient_penalty_coefficient",
help="Coefficient of gradient penalty for discriminator.",
type=float,
default=10.0
)
parser.add_argument(
"--discriminator_train_steps",
help="Number of steps to train discriminator for per cycle.",
type=int,
default=100
)
# Parse all arguments.
args = parser.parse_args()
arguments = args.__dict__
# Unused args provided by service.
arguments.pop("job_dir", None)
arguments.pop("job-dir", None)
# Fix eval steps.
if arguments["eval_steps"] == "None":
arguments["eval_steps"] = None
else:
arguments["eval_steps"] = int(arguments["eval_steps"])
# Fix generator_projection_dims.
arguments["generator_projection_dims"] = [
int(x)
for x in arguments["generator_projection_dims"].split(",")
]
# Fix conv layer property parameters.
arguments["conv_num_filters"] = [
[int(y) for y in x.split(",")]
for x in arguments["conv_num_filters"].split(";")
]
arguments["conv_kernel_sizes"] = [
[int(y) for y in x.split(",")]
for x in arguments["conv_kernel_sizes"].split(";")
]
arguments["conv_strides"] = [
[int(y) for y in x.split(",")]
for x in arguments["conv_strides"].split(";")
]
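The semicolon/comma string parsing above can be sketched as a standalone helper (the name `parse_nested_int_list` is mine, not part of the trainer):

```python
def parse_nested_int_list(s):
    """Parse e.g. "512,512;256,256" into [[512, 512], [256, 256]].

    Blocks are separated by ";"; values within a block by ",".
    """
    return [[int(v) for v in block.split(",")] for block in s.split(";")]

print(parse_nested_int_list("512,512;512,512"))  # [[512, 512], [512, 512]]
```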
# Make some assertions.
assert len(arguments["conv_num_filters"]) > 0
assert len(arguments["conv_num_filters"]) == len(arguments["conv_kernel_sizes"])
assert len(arguments["conv_num_filters"]) == len(arguments["conv_strides"])
    # Truncate lists to the first 10 blocks if over the current 1024x1024 limit.
if len(arguments["conv_num_filters"]) > 9:
arguments["conv_num_filters"] = arguments["conv_num_filters"][0:10]
arguments["conv_kernel_sizes"] = arguments["conv_kernel_sizes"][0:10]
arguments["conv_strides"] = arguments["conv_strides"][0:10]
# Get conv layer properties for generator and discriminator.
(generator,
discriminator) = calc_generator_discriminator_conv_layer_properties(
arguments["conv_num_filters"],
arguments["conv_kernel_sizes"],
arguments["conv_strides"],
arguments["depth"]
)
# Split up generator properties into separate lists.
(generator_base_conv_blocks,
generator_growth_conv_blocks,
generator_to_rgb_layers) = split_up_generator_conv_layer_properties(
generator,
arguments["conv_num_filters"],
arguments["conv_strides"],
arguments["depth"]
)
arguments["generator_base_conv_blocks"] = generator_base_conv_blocks
arguments["generator_growth_conv_blocks"] = generator_growth_conv_blocks
arguments["generator_to_rgb_layers"] = generator_to_rgb_layers
# Split up discriminator properties into separate lists.
(discriminator_from_rgb_layers,
discriminator_base_conv_blocks,
discriminator_growth_conv_blocks) = split_up_discriminator_conv_layer_properties(
discriminator,
arguments["conv_num_filters"],
arguments["conv_strides"],
arguments["depth"]
)
arguments["discriminator_from_rgb_layers"] = discriminator_from_rgb_layers
arguments["discriminator_base_conv_blocks"] = discriminator_base_conv_blocks
arguments["discriminator_growth_conv_blocks"] = discriminator_growth_conv_blocks
# Fix clip_gradients.
if arguments["generator_clip_gradients"] == "None":
arguments["generator_clip_gradients"] = None
else:
arguments["generator_clip_gradients"] = float(
arguments["generator_clip_gradients"]
)
if arguments["discriminator_clip_gradients"] == "None":
arguments["discriminator_clip_gradients"] = None
else:
arguments["discriminator_clip_gradients"] = float(
arguments["discriminator_clip_gradients"]
)
# Append trial_id to path if we are doing hptuning.
# This code can be removed if you are not using hyperparameter tuning.
arguments["output_dir"] = os.path.join(
arguments["output_dir"],
json.loads(
os.environ.get(
"TF_CONFIG", "{}"
)
).get("task", {}).get("trial", ""))
# Run the training job.
model.train_and_evaluate(arguments)
# -
# %%writefile pgan_loop_module/trainer/utils.py
class LazyList(list):
    """Like a list, but invokes callable items on access."""
def __getitem__(self, key):
item = super().__getitem__(key)
if callable(item):
item = item()
return item
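A quick sanity check of `LazyList` behavior (the class is repeated here so the snippet runs standalone):

```python
class LazyList(list):
    """Like a list, but invokes callable items on access."""
    def __getitem__(self, key):
        item = super().__getitem__(key)
        if callable(item):
            item = item()  # defer computation until the element is read
        return item

lazy = LazyList([1, lambda: 2 + 3, "x"])
print(lazy[0], lazy[1], lazy[2])  # 1 5 x
```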
| machine_learning/gan/pgan/tf_pgan/tf_pgan_loop_module.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import GLFinancial as glf
import GLTaxTools as gltt
import matplotlib.pyplot as plt
import numpy as np
# +
pretax_income = np.arange(0, 350 * 1000, 10 * 1000)
print('Pretax Income ranging from {} to {}'.format(pretax_income[0], pretax_income[-1]))
fig, ax = plt.subplots(figsize=(10,5), constrained_layout=True)
plt.plot(pretax_income, gltt.TaxTable(status='single', state=None).incometax(pretax_income), label='2019 Federal Tax Rates (Single Filer)')
plt.plot(pretax_income, gltt.TaxTable(status='single', state='CA').incometax(pretax_income), label='2019 Federal (Single) +CA Tax Rates')
plt.plot(pretax_income, gltt.TaxTable(status='single', state='MA').incometax(pretax_income), label='2019 Federal (Single) +MA Tax Rates')
plt.plot(pretax_income, gltt.TaxTable(status='married', state='CA').incometax(pretax_income), label='2019 Federal (Married) +CA Tax Rates')
plt.plot(pretax_income, gltt.TaxTable(status='married', state='MA').incometax(pretax_income), label='2019 Federal (Married) +MA Tax Rates')
plt.axis([0,pretax_income[-1],0,pretax_income[-1]])
ax.legend()
ax.set_xlabel('PreTax Income ($)')
ax.set_ylabel('Income Tax ($)')
ax.set_ylim(top=np.max(pretax_income)*0.45)
plt.grid()
plt.show()
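`GLTaxTools` is not a public package, so as a rough sketch of what an `incometax`-style calculation presumably does under the hood, here is a minimal progressive-bracket calculator; the bracket thresholds and rates below are toy numbers, not real tax tables:

```python
def bracket_tax(income, brackets):
    """Tax under progressive marginal brackets.

    brackets: list of (threshold, rate) pairs in ascending order; each
    rate applies to income above its threshold up to the next threshold.
    """
    tax = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lo:
            tax += (min(income, hi) - lo) * rate
    return tax

# Two toy brackets: 10% on the first $10k, 20% on income above that.
print(bracket_tax(15_000, [(0, 0.10), (10_000, 0.20)]))  # 2000.0
```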
# +
def get_sample_ca_fm():
fm = glf.FinancialModel('Sample Model (CA Renting)', location='CA')
fm.add_yearly(61372, 'Salary', year_end=30, agi_impacting=True)
fm.add_monthly(-1012, 'Rent', apr=1.02) # Average rent increase is 4-5% per year, reduced by inflation of ~2% because we do all math in current value dollars
# https://www.deptofnumbers.com/rent/us/ https://magazine.realtor/commercial/feature/article/2016/02/delicate-art-rent-increases
fm.add_yearly(-9576, 'Auto Costs', apr=1.01) # https://www.investopedia.com/articles/pf/08/cost-car-ownership.asp
fm.add_yearly(-5000, '401k Savings', yearly_nw_amount=5000 * 2, year_end=30, agi_impacting=True) # We assume a generous company match
fm.add_monthly(-800, 'Monthly Spend') # ~$200 per week
fm.add_yearly(-4000, 'Yearly Spend') # Vacations, medical bills, and other unexpected/intermittent costs
# You got married 15 years in!
fm.change_status('married', year_start=15)
# ... but it didn't work out
fm.change_status('single', year_start=21)
return fm
def get_sample_owning_fm():
fm = glf.FinancialModel('Sample Model (CA Rent until MA Home Buying)', location='CA')
fm.add_yearly(61372, 'Salary', year_end=30, agi_impacting=True)
# We buy a home in this model
home_price = 200 * 1000 # We assume a 20% down payment
home_purchase_year = 4
fm.add_monthly(-1012, 'Rent', apr=1.04, year_end=home_purchase_year) # Rent until you buy a house
fm.change_residence('MA', year_start=home_purchase_year)
fm.add_single(-0.2 * home_price , 'Home Down Payment', year=home_purchase_year)
fm.add_loan(0.8 * home_price, 'Mortgage', duration=30, year_start=home_purchase_year, apr=1.03) # 5% APR, 2% inflation
fm.add_yearly(-0.02 * home_price, 'Home Maintenance, Insurance, Etc', year_start=home_purchase_year)
# OPTIONAL: Some people include their home value in their net worth, and thus appreciation/etc. Since I don't plan to sell, I don't count it.
# fm.add_single(home_price, 'Home Value', year=home_purchase_year)
fm.add_monthly(-350, 'Auto Costs', apr=1.01) # A car payment, gas, etc
fm.add_yearly(-2500, 'Auto Insurance and Repairs')
fm.add_yearly(-5000, '401k Savings', yearly_nw_amount=5000 * 2, year_end=30, agi_impacting=True) # We assume a generous company match
fm.add_monthly(-800, 'Monthly Spend') # ~$200 per week
fm.add_yearly(-4000, 'Yearly Spend') # Vacations, medical bills, and other unexpected/intermittent costs
return fm
models = [get_sample_ca_fm(), get_sample_owning_fm()]
results = []
# These sims use the default args based on real stock market data over time for rate of return and variance on it
for fm in models:
fm.plot_cashflow(year_end=40, block=False)
fm_mt = fm.simmany(nruns=175, nyears=40)
fm.plot(fm_mt, block=False)
results.append(fm_mt)
glf.FinancialModel.plotmany(results, nyears=40, block=True)
# -
| glfinancial/Example GLFinancial Workbook.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cpp
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: C++17
// language: C++17
// name: xeus-cling-cpp17
// ---
// + [markdown] graffitiCellId="id_dwzadwu"
// ## Experiment with Jupyter Notebooks
// Press the `Compile & Run` button below to run the code in the terminal. The Notebook will save the code within the cell to `./code/main.cpp` and then compile and execute it.
//
// Try writing and running some code to see how it works!
// + graffitiCellId="id_u5bdxi7" graffitiConfig={"executeCellViaGraffiti": "hsznf5f_v797hj5"}
#include <iostream>
// Write a simple function to add two integers
int Addition(int a, int b)
{
return a + b;
}
// Define a main() function to test the Addition() function
int main()
{
int a = 2;
int b = 2;
int z = Addition(a, b);
std::cout << a << " + " << b << " = " << z << " Yay!\n";
}
// + [markdown] graffitiCellId="id_hsznf5f"
// <span class="graffiti-highlight graffiti-id_hsznf5f-id_v797hj5"><i></i><button>Compile & Run</button></span>
// <span class="graffiti-highlight graffiti-id_bhg57lw-id_n6h1zxb"><i></i><button>Explain</button></span>
// + [markdown] graffitiCellId="id_rjsor21" graffitiConfig={"rows": 6, "terminalId": "id_rjsor21", "type": "terminal"}
// <i>Loading terminal (id_rjsor21), please wait...</i>
| home/Example_Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
### Load Packages
import pyodbc
import pandas as pd
import numpy as np
import pickle
# +
### Open Python Format of SDH Taxonomy
with open('SDOH_codes_complete.p', 'rb') as fp:
SDOH_code_map = pickle.load(fp)
# +
### Create ICD-10 Codes Table
icd10_codes_1 = {}
for key in SDOH_code_map.keys():
icd10_codes_1[key] = (SDOH_code_map[key]['icd10'])
icd10_codes_2 = {}
for k,v in icd10_codes_1.items():
for x in v:
icd10_codes_2.setdefault(x,[]).append(k)
icd10table = pd.DataFrame(icd10_codes_2.items(), columns=['icd10', 'SDOH'])
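The inversion above (category → codes turned into code → categories) can be seen on a toy mapping; the categories and codes here are illustrative only:

```python
# Toy stand-in for SDOH_code_map: category -> list of ICD-10 codes.
sdoh_to_codes = {
    "housing": ["Z59.0", "Z59.1"],
    "food": ["Z59.4"],
}

# Invert to code -> list of categories (a code may map to several).
code_to_sdoh = {}
for category, codes in sdoh_to_codes.items():
    for code in codes:
        code_to_sdoh.setdefault(code, []).append(category)

print(code_to_sdoh)  # {'Z59.0': ['housing'], 'Z59.1': ['housing'], 'Z59.4': ['food']}
```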
# +
### Clean-Up Table
icd10table.icd10 = icd10table.icd10.astype(str)
icd10table.icd10 = icd10table.icd10.str.upper()
icd10table['SDOH'] = icd10table['SDOH'].astype(str).replace(r"\['", '', regex=True)
icd10table['SDOH'] = icd10table['SDOH'].astype(str).replace(r"\']", '', regex=True)
icd10table.head(3)
# +
### Create Codes List
icd10s = list(icd10table['icd10'])
# +
### Connect to Database and Run SQL Query
dsn_name = "DSN=DBS_4_Python"
conn = pyodbc.connect(dsn_name)
cur = conn.cursor()
sql_deid = """
SELECT * FROM dbs.schema.diagnosis_table WHERE UPPER(icd10) in {} ;
""".format('('+str(icd10s)[1:-1]+')')
icd10_data = pd.read_sql(sql_deid, conn)
### Note: this is just one example of connecting to an ODBC database system and pulling its data directly into
### a python environment. Please review documentation for the pyodbc package at
### https://github.com/mkleehammer/pyodbc and contact your DBMS administrator for instructions on how to access
### your specific database system
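Splicing `str(icd10s)` into the SQL string works, but parameterized queries avoid quoting and injection pitfalls. Here is a sketch using an in-memory `sqlite3` stand-in (the table and column contents are made up; `pyodbc` uses the same `?` placeholder style, though paramstyles vary by driver):

```python
import sqlite3

# In-memory stand-in for the diagnosis table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE diagnosis (patient_id INTEGER, icd10 TEXT)")
conn.executemany(
    "INSERT INTO diagnosis VALUES (?, ?)",
    [(1, "Z59.0"), (2, "E11.9"), (3, "Z59.4")],
)

codes = ["Z59.0", "Z59.4"]
placeholders = ",".join("?" * len(codes))  # one "?" per code
sql = f"SELECT patient_id, icd10 FROM diagnosis WHERE UPPER(icd10) IN ({placeholders})"
rows = conn.execute(sql, [c.upper() for c in codes]).fetchall()
print(sorted(rows))  # [(1, 'Z59.0'), (3, 'Z59.4')]
```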
# +
### Attach SDH Labels to the Extracted Data
labeled_icd10_data = pd.merge(icd10_data, icd10table, how='left', on=['icd10'])
# +
### Alternatively, you can save your python dataframe as a csv and upload it to your DBMS as a table. You can then run SQL
### queries and JOINs to extract and label SDH data.
icd10table.to_csv("icd10_table.csv")
### SQL Example (run this in your DBMS; commented out so this Python cell still executes)
# SELECT boo.*, coo.SDOH FROM
# (SELECT * FROM dbs.schema.diagnosis_table WHERE UPPER(icd10) IN
# (SELECT icd10 FROM sandbox.schema.icd10_table)) as boo
# LEFT JOIN (SELECT * FROM sandbox.schema.icd10_table) as coo
# ON boo.icd10 = coo.icd10
# -
| Python Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Fitting a straight line to data
#
# _Inspired by [Hogg et al. 2010](https://arxiv.org/abs/1008.4686) and [@jakevdp's notes](https://github.com/jakevdp/ESAC-stats-2014)_.
# + [markdown] deletable=true editable=true
# Python imports we'll need later...
# + deletable=true editable=true
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
rnd = np.random.RandomState(seed=42)
# + [markdown] deletable=true editable=true
# ---
#
# # Intro and choice of objective function
#
# I want to start with a problem that everyone is probably familiar with or has at least seen before. The problem is this: we observe $N$ independent data points $\boldsymbol{y}=\{y_1,y_2,...y_N\}$ with uncertainties $\boldsymbol{\sigma}=\{\sigma_1,\sigma_2,...\sigma_N\}$ at perfectly-measured values $\boldsymbol{x}=\{x_1,x_2,...x_N\}$. We have reason to believe that these data were generated by a process that is well-represented by a straight line, and the only reason the data deviate from this straight line is uncorrelated, Gaussian measurement noise in the $y$-direction. Let's first generate some data that meet these qualifications:
# + deletable=true editable=true
n_data = 16 # number of data points
a_true = 1.255 # randomly chosen truth
b_true = 4.507
# + [markdown] deletable=true editable=true
# ---
#
# ### Exercise 1:
#
# 1. Randomly generate an array of uniformly-distributed `x` values from the domain `(0,2)`.
# 2. Sort the values in ascending order.
# + deletable=true editable=true
# Fill in your solution here
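One possible solution (spoiler ahead) — written self-contained here, though in the notebook you would reuse the `rnd` and `n_data` defined above:

```python
import numpy as np

rnd = np.random.RandomState(seed=42)
n_data = 16

x = rnd.uniform(0, 2, size=n_data)  # uniform draws on the domain (0, 2)
x = np.sort(x)                      # ascending order
print((x >= 0).all() and (x <= 2).all())  # True
```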
# + [markdown] deletable=true editable=true
# Execute the code below and verify that it runs without errors:
# + deletable=true editable=true
# evaluate the true model at the given x values
y = a_true*x + b_true
# Heteroscedastic Gaussian uncertainties only in y direction
y_err = rnd.uniform(0.1, 0.2, size=n_data) # randomly generate uncertainty for each datum
y = y + rnd.normal(0, y_err) # add noise to y data
# + deletable=true editable=true
plt.errorbar(x, y, y_err, marker='o', linestyle='none')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.tight_layout()
# + [markdown] deletable=true editable=true
# ---
# + [markdown] deletable=true editable=true
# Now let's forget that we did that -- we know nothing about the model parameters, except that we think the true values of the data are well-described by a linear relation! We would like to measure the "best-fit" parameters of this model (for a straight line, the slope and intercept $(a,b)$) given the data above. In math, our model for the data $y$ is:
# $$
# \begin{align}
# y &= f(x \,;\, a, b) + {\rm noise}\\
# f(x \,;\, a, b) &= a\,x + b
# \end{align}
# $$
#
# For a given set of parameters, $(a,b)$, we can evaluate our model $f(x \,;\, a, b)$ at a given $x$ location to compute the value of $y$ that we would expect in the absence of noise. For example, for the $n$th datum and for a given set of parameter values $(a,b)$:
#
# $$
# \tilde{y}_n = f(x_n \,;\, a, b)
# $$
#
# Now, we somehow want to search through all possible values of $a,b$ to find the "best" values, given the data, with some definition of "best." When we say this word, we are implying that we want to _optimize_ (find the maximum or minimum) some _objective function_ (a function that takes our data, our model, and returns a quantification of "best", usually as a scalar). Numerically, this scalar objective function can be any function (though you probably want it to be convex) and you will see different choices in practice. You have some leeway in this choice depending on whether your goal is _prediction_, _discovery_, or _data compression_.
#
# However, for _inference_—the typical use-case for us as scientists—you don't have this freedom: one of the conclusions of this talk is going to be that __you have no choice about what "best" means__! Before we get there, though, let's explore what seem like reasonable choices.
#
# Here are a few desirable features we'd like any objective function to have:
#
# 1. For a given set of parameters, we should compare our predicted values to the measured values and base our objective function on the differences
# 2. The scalar value should be dimensionless (the value of the objective function shouldn't care if we use kilometers vs. parsecs)
# 3. Data points that have larger errors should contribute less to the objective function (if a datum has a large offset from the predicted value, it shouldn't matter _if_ the datum has a large uncertainty)
# 4. Convexity
#
# To meet the first three of these criteria, whatever objective function we choose should operate on the (dimensionless) quantities:
#
# $$
# \chi_n = \frac{y_n - \tilde{y}_n(x_n; a,b)}{\sigma_n}
# $$
#
# i.e. the difference between our predicted values $\tilde{y}$ and the observed $y$ values, weighted by the inverse uncertainties $\sigma$. The uncertainties have the same units as the data, so this is a dimensionless quantity. It also has the nice property that, as we wanted, points with large uncertainties are _downweighted_ relative to points with small uncertainties. Here are some ideas for objective functions based on this scalar:
#
# - __Weighted absolute deviation__: the sum of the absolute values
#
# $\sum_n^N \, \left|\chi_n\right|$
#
#
# - __Weighted squared deviation__: the sum of the squares
#
# $\sum_n^N \, \chi_n^2$
#
#
# - __Weighted absolute deviation to some power__ $p$:
#
# $\sum_n^N \, \left|\chi_n\right|^p $
#
#
# _(Note: don't show this to statisticians or they will get me fired. To a statistician, $\chi^2$ is a distribution not a statistic...but astronomers seem to use this terminology.)_
#
# For simplicity, let's just compare two of these: the absolute deviation and the squared deviation.
#
# ---
#
# ### Exercise 2:
#
# Implement the functions to compute the weighted deviations below
#
# + deletable=true editable=true
# FILL IN THESE FUNCTIONS:
def line_model(pars, x):
pass
def weighted_absolute_deviation(pars, x, y, y_err):
pass
def weighted_squared_deviation(pars, x, y, y_err):
pass
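One possible set of implementations (spoiler ahead), spot-checked against the same values the verification cell uses:

```python
import numpy as np

def line_model(pars, x):
    """Evaluate the straight line f(x; a, b) = a*x + b with pars = [a, b]."""
    a, b = pars
    return a * np.asarray(x) + b

def weighted_absolute_deviation(pars, x, y, y_err):
    """Sum over n of |chi_n| = |y_n - model_n| / sigma_n."""
    chi = (np.asarray(y) - line_model(pars, x)) / np.asarray(y_err)
    return np.sum(np.abs(chi))

def weighted_squared_deviation(pars, x, y, y_err):
    """Sum over n of chi_n^2."""
    chi = (np.asarray(y) - line_model(pars, x)) / np.asarray(y_err)
    return np.sum(chi ** 2)

_x = np.arange(16)
print(weighted_absolute_deviation([1., -10.], _x, _x, np.ones_like(_x)))  # 160.0
```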
# + [markdown] deletable=true editable=true
# Verify that you've correctly implemented your functions by executing the following cell:
# + deletable=true editable=true
_pars = [1., -10.]
_x = np.arange(16)
_y = _x
_yerr = np.ones_like(_x)
truth = np.array([-10., -9., -8., -7., -6., -5., -4., -3., -2., -1., 0., 1., 2., 3., 4., 5.])
assert np.allclose(line_model(_pars, _x), truth), 'Error in line_model() function!'
assert weighted_absolute_deviation(_pars, _x, _y, _yerr) == 160., 'Error in weighted_absolute_deviation() function!'
assert weighted_squared_deviation(_pars, _x, _y, _yerr) == 1600., 'Error in weighted_squared_deviation() function!'
# + [markdown] deletable=true editable=true
# ---
# + [markdown] deletable=true editable=true
# We can demonstrate that these are convex (over some domain) by computing the objective function values over a grid of parameter values (a grid in $a, b$):
# + deletable=true editable=true
# make a 256x256 grid of parameter values centered on the true values
a_grid = np.linspace(a_true-2., a_true+2, 256)
b_grid = np.linspace(b_true-2., b_true+2, 256)
a_grid,b_grid = np.meshgrid(a_grid, b_grid)
ab_grid = np.vstack((a_grid.ravel(), b_grid.ravel())).T
# + deletable=true editable=true
fig,axes = plt.subplots(1, 2, figsize=(9,5.1), sharex=True, sharey=True)
for i,func in enumerate([weighted_absolute_deviation, weighted_squared_deviation]):
func_vals = np.zeros(ab_grid.shape[0])
for j,pars in enumerate(ab_grid):
func_vals[j] = func(pars, x, y, y_err)
axes[i].pcolormesh(a_grid, b_grid, func_vals.reshape(a_grid.shape),
cmap='Blues', vmin=func_vals.min(), vmax=func_vals.min()+256) # arbitrary scale
axes[i].set_xlabel('$a$')
# plot the truth
axes[i].plot(a_true, b_true, marker='o', zorder=10, color='#de2d26')
axes[i].axis('tight')
axes[i].set_title(func.__name__, fontsize=14)
axes[0].set_ylabel('$b$')
fig.tight_layout()
# + [markdown] deletable=true editable=true
# There are minima in both cases near the true values of the parameters (good), but the functions clearly look different. Which one should we choose for finding the best parameters?
# + [markdown] deletable=true editable=true
# In order to pick between these two, or any of the arbitrary objective functions we could have chosen, we have to _justify_ using one function over the others. In what follows, we'll justify optimizing the sum of the squared deviations (so-called "least-squares fitting") by thinking about the problem _probabilistically_, rather than procedurally.
# + [markdown] deletable=true editable=true
# ### Least-squares fitting
#
# Let's review the assumptions we made above in generating our data:
#
# 1. The data were generated by a straight line
# 2. Uncorrelated, _known_ Gaussian uncertainties in $y$ cause deviations between the data and predictions
# 3. The data points are independent
# 4. The $x$ data are known perfectly, or at least their uncertainties are _far smaller_ than the uncertainties in $y$
#
# First off, these assumptions tell us that for each datum $(x_n, y_n)$ there is some true $y_{n,{\rm true}}$, and because of limitations in our observing process we can't observe the truth, but we know that the values we do observe will be Gaussian (Normal) distributed around the true value. _(Note: This assumption tends to be a good or at least a conservative approximation in practice, but there are certainly more complex situations when, e.g., you have asymmetric uncertainties, or error distributions with large tails!)_. In math:
#
# $$
# \begin{align}
# p(y \,|\, y_{\rm true}) &= \mathcal{N}(y \,|\, y_{\rm true}, \sigma^2) \\
# \mathcal{N}(y \,|\, y_{\rm true}, \sigma^2) &= (2\pi \sigma^2)^{-1/2} \, \exp\left(-\frac{1}{2} \frac{(y-y_{\rm true})^2}{\sigma^2} \right)
# \end{align}
# $$
#
# This is the likelihood of observing a particular $y$ given the true $y_{\rm true}$. Note that in our model, all of the $y_{\rm true}$'s must lie on a line. It is also interesting that the argument of the normal distribution looks a lot like $\chi^2$!
#
# What about considering two data points, $y_1$ and $y_2$? Now we need to write down the _joint_ probability
#
# $$
# p(y_1, y_2 \,|\, y_{1,{\rm true}}, \sigma_1, y_{2,{\rm true}}, \sigma_2)
# $$
#
# But, note that in assumption 3 above, we are assuming the data are independent. In that case, the random error in one point does not affect the random error in any other point, so the joint probability can be turned into a product:
#
# $$
# p(\{y_n\} \,|\, \{y_{n,{\rm true}}\}, \{\sigma_n\}) = \prod_n^N \, p(y_n \,|\, y_{n,{\rm true}}, \sigma_n)
# $$
#
# This is the full expression for the likelihood of the observed data given the true $y$ values. Recall that these true values, according to our assumptions, must lie on a line with some parameters, and we're trying to infer those parameters! We can compute a particular $y_{n,{\rm true}}$ using $x_n$ and a given set of model parameters $a, b$. With that in mind, we can write the likelihood instead as:
#
# $$
# p(\{y_n\} \,|\, a, b, \{x_n\}, \{\sigma_n\}) = \prod_n^N \, p(y_n \,|\, a, b, x_n, \sigma_n)
# $$
# + [markdown] deletable=true editable=true
# So what are the "best" values of the parameters $a, b$? They are the ones that _maximize_ this likelihood!
#
# The product on the right of the likelihood is a product over exponentials (well, Gaussians), which can be annoying to deal with. But, maximizing the likelihood is equivalent to maximizing the _log_-likelihood -- so we can get rid of the product and all of those exponentials by taking the log of both sides:
#
# $$
# \begin{align}
# \ln p(\{y_n\} \,|\, a, b, \{x_n\}, \{\sigma_n\}) &= \sum_n^N \, \ln\left[p(y_n \,|\, a, b, x_n, \sigma_n)\right] \\
# &= \sum_n^N \ln \left[(2\pi \sigma_n^2)^{-1/2} \,
# \exp\left(-\frac{1}{2} \frac{(y_n-(a\,x_n+b))^2}{\sigma_n^2} \right) \right] \\
# &= -\frac{N}{2}\ln(2\pi)
# - \frac{1}{2} \sum_n^N \left[\frac{(y_n-(a\,x_n+b))^2}{\sigma_n^2} + \ln{\sigma_n^2} \right]
# \end{align}
# $$
#
# In this case, the uncertainties are known and constant, so to maximize this expression we only care that (abbreviating the likelihood as $\mathcal{L}$):
#
# $$
# \begin{align}
# \ln \mathcal{L} &= - \frac{1}{2} \sum_n^N \left[\frac{(y_n-(a\,x_n+b))^2}{\sigma_n^2}\right] + {\rm const.} \\
# &= - \frac{1}{2} \sum_n^N \, \chi_n^2 + {\rm const.} \\
# \end{align}
# $$
#
# Apparently, _minimizing_ the sum of the weighted squared deviations is equivalent to _maximizing_ the (log) likelihood derived from thinking about the probability of the data! That is great because (a) it directly gives us the uncertainties on the inferred model parameters, and (b) it's an analytic way to solve this problem using linear algebra which is _really_ fast!
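A quick numeric check of this equivalence: hold $b$ fixed at its true value, scan $a$ over a grid, and confirm that the value minimizing $\chi^2$ is exactly the value maximizing $\ln\mathcal{L}$ (the data-generating numbers below mirror the setup earlier in the notebook):

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.sort(rng.uniform(0, 2, 16))
y_err = rng.uniform(0.1, 0.2, 16)
y = 1.255 * x + 4.507 + rng.normal(0, y_err)

# Scan the slope a with the intercept b held at its true value.
a_vals = np.linspace(0, 3, 201)
chi2 = np.array([np.sum(((y - (a * x + 4.507)) / y_err) ** 2) for a in a_vals])
lnL = -0.5 * chi2  # plus a constant that does not depend on a

# Minimizing chi^2 and maximizing ln L pick out the same parameter value.
print(a_vals[np.argmin(chi2)] == a_vals[np.argmax(lnL)])  # True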
# + [markdown] deletable=true editable=true
# ### Least-squares / maximum likelihood with matrix calculus
#
# Using linear algebra, we can simplify and generalize a lot of the expressions above. In what follows, all vectors are column vectors and are represented by lower-case bold symbols. Matrices are upper-case bold symbols.
#
#
# We'll start by writing our model as a matrix equation. To do that, we need a way to, for a given set of parameters, compute the set of predicted $y$'s. This is done by defining the parameter vector, $\boldsymbol{\theta}$, and a matrix typically called the _design matrix_, $\boldsymbol{X}$:
#
# $$
# \boldsymbol{\theta} = \begin{bmatrix} b \\ a \end{bmatrix} \quad
# \boldsymbol{X} = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_N \end{bmatrix}
# $$
#
# (note the order of the parameters!). With these definitions, the vector of predicted $y$ values is just
#
# $$
# \boldsymbol{y}_{\rm pred} = \boldsymbol{X} \, \boldsymbol{\theta}
# $$
#
# so the deviation vector between the prediction and the data is just $(\boldsymbol{y}-\boldsymbol{X} \, \boldsymbol{\theta})$ where
#
# $$
# \boldsymbol{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix}
# $$
#
# But how do we include the uncertainties? We'll pack the list of uncertainties (variances) onto the diagonal of a 2D, $N \times N$ matrix called the _covariance matrix_. Because we are assuming the uncertainties are independent, the off-diagonal terms are all zero:
#
# $$
# \boldsymbol{\Sigma} = \begin{bmatrix}
# \sigma_1^2 & 0 & \dots & 0 \\
# 0 & \sigma_2^2 & \dots & 0 \\
# \vdots & \vdots & \ddots & \vdots \\
# 0 & 0 & 0 & \sigma_N^2
# \end{bmatrix}
# $$
#
# With these matrices, we can write the expression for $\chi^2$ (and therefore the log-likelihood) very concisely:
#
# $$
# \begin{align}
# \chi^2 &= \left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)^\mathsf{T} \,
# \boldsymbol{\Sigma}^{-1} \,
# \left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right) \\
# \ln\mathcal{L} &= -\frac{1}{2}\left[N\,\ln(2\pi)
# + \ln|\boldsymbol{\Sigma}|
# + \left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)^\mathsf{T} \,
# \boldsymbol{\Sigma}^{-1} \,
# \left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)
# \right]
# \end{align}
# $$
#
# In this form, the terms in the $\chi^2$ have a nice geometric interpretation: this looks like a squared distance between the data and the model, computed with the metric $\boldsymbol{\Sigma}^{-1}$.
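# As a quick sanity check of the matrix expression, here is a small sketch (using made-up toy data and parameter values, not the notebook's dataset) showing that $\left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)^\mathsf{T} \boldsymbol{\Sigma}^{-1} \left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)$ reproduces the familiar weighted sum of squared deviations when $\boldsymbol{\Sigma}$ is diagonal:

```python
import numpy as np

# toy data with heteroscedastic Gaussian uncertainties (assumed values)
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2, 16))
y_err = rng.uniform(0.1, 0.2, size=16)
y = rng.normal(1.2 * x + 4.0, y_err)

theta = np.array([4.0, 1.2])                 # [b, a] -- note the order
X = np.vander(x, N=2, increasing=True)       # design matrix: columns [1, x]
Sigma = np.diag(y_err**2)                    # diagonal covariance matrix

# chi^2 in matrix form ...
r = y - X @ theta
chi2_matrix = r @ np.linalg.inv(Sigma) @ r

# ... agrees with the familiar weighted sum of squared deviations
chi2_sum = np.sum(r**2 / y_err**2)
print(chi2_matrix, chi2_sum)
```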
# + [markdown] deletable=true editable=true
# If you solve for the optimum of the log-likelihood function (take the derivative with respect to $\boldsymbol{\theta}$ and set equal to 0), you find that:
#
# $$
# \newcommand{\trpo}[1]{{#1}^{\mathsf{T}}}
# \newcommand{\bs}[1]{\boldsymbol{#1}}
# \bs{\theta}_{\rm best} = \left[\trpo{\bs{X}} \, \bs{\Sigma}^{-1} \, \bs{X}\right]^{-1} \,
# \trpo{\bs{X}} \, \bs{\Sigma}^{-1} \, \bs{y}
# $$
#
# Getting the best-fit parameters just requires a few simple linear algebra operations! As an added bonus, we also get the _uncertainties_ on the parameters: the $2\times2$ covariance matrix of the best-fit parameters is
#
# $$
# \newcommand{\trpo}[1]{{#1}^{\mathsf{T}}}
# \newcommand{\bs}[1]{\boldsymbol{#1}}
# C = \left[\trpo{\bs{X}} \, \bs{\Sigma}^{-1} \, \bs{X}\right]^{-1}
# $$
#
# That means we can just write out the linear algebra explicitly and use `numpy.linalg` to solve it for us!
#
# ### Exercise 3:
#
# Implement the necessary linear algebra to solve for the best-fit parameters and the parameter covariance matrix, defined above. Call these `best_pars` and `pars_Cov`, respectively.
# + deletable=true editable=true
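# One possible solution, sketched here with stand-in data (in the notebook, `x`, `y`, and `y_err` already exist from earlier cells, and the true parameter values below are made up for illustration):

```python
import numpy as np

# stand-in data; in the notebook, x, y, y_err come from earlier cells
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 2, 32))
y_err = rng.uniform(0.1, 0.2, size=32)
a_true, b_true = 1.255, 4.507                # hypothetical true parameters
y = rng.normal(a_true * x + b_true, y_err)

X = np.vander(x, N=2, increasing=True)       # design matrix: columns [1, x]
Cinv = np.diag(1 / y_err**2)                 # Sigma^{-1} for independent errors

# theta_best = (X^T Sigma^-1 X)^-1 X^T Sigma^-1 y, with covariance C
pars_Cov = np.linalg.inv(X.T @ Cinv @ X)
best_pars = pars_Cov @ (X.T @ Cinv @ y)      # [b, a] -- intercept first
print(best_pars)
```

# Note that `best_pars` comes back in the same order as $\boldsymbol{\theta}$, i.e. intercept first, slope second.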
# + [markdown] deletable=true editable=true
# Now let's look at the covariance matrix of the parameters (the uncertainty in the parameters) and plot the 1 and 2-sigma error ellipses:
# + deletable=true editable=true
# some tricks to get info we need to plot an ellipse, aligned with
# the eigenvectors of the covariance matrix
eigval,eigvec = np.linalg.eig(pars_Cov)
angle = np.degrees(np.arctan2(eigvec[1,0], eigvec[0,0]))
w,h = 2*np.sqrt(eigval)
# + deletable=true editable=true
from matplotlib.patches import Ellipse
fig,ax = plt.subplots(1, 1, figsize=(5,5))
for n in [1,2]:
ax.add_patch(Ellipse(best_pars, width=n*w, height=n*h, angle=angle,
fill=False, linewidth=3-n, edgecolor='#555555',
label=r'{}$\sigma$'.format(n)))
ax.plot(b_true, a_true, marker='o', zorder=10, label='truth')
ax.plot(best_pars[0], best_pars[1], marker='o', zorder=9, label='estimate')
ax.set_xlabel('$b$')
ax.set_ylabel('$a$')
ax.legend(loc='best')
fig.tight_layout()
# -
# There we have it! The best-fit parameters and their uncertainties for the straight-line fit, obtained by optimizing a justified objective function, directly from a few linear algebra calculations.
#
# This approach can be generalized somewhat (e.g. to account for correlated errors as off-diagonal elements in the covariance matrix $\Sigma$), but it only works **for models with linear parameters**, meaning that parameters enter linearly in the model function $f$.
# + [markdown] deletable=true editable=true
# ---
# # Bonus material (peruse at your leisure)
#
# ## The Bayesian approach
#
# Let's review what we did so far. We found that standard weighted least squares fitting is a justified approach to estimating the best-fit parameters because it optimizes the likelihood of the data under the assumptions of our model; it optimizes a _justified scalar objective function_. We then fit our straight-line model to the data and got back a point-estimate of the best parameters along with a covariance matrix describing the uncertainties in the parameters. This is the way of the _frequentist_. What we're going to do now is see what happens if we switch to a Bayesian methodology instead. While the two methods end up looking mathematically identical, there are fundamental philosophical differences that can lead to very different interpretations and implementations when models are more complex than the toy example we used above.
#
# As Bayesians, we aren't interested in a point-estimate of the best parameters, but rather we're interested in the inferred distribution of possible parameter values (the _posterior probability distribution function_ over parameters). So how do we write down or solve for this posterior pdf? Before we get to that, let's take a look at a fundamental equation of Bayesian statistics, [Bayes' theorem](https://en.wikipedia.org/wiki/Bayes'_theorem), which we'll derive using the joint probability of $A$ and $B$, which are conditional on some other information $I$ that, right now, we don't care about. For example, $A$ could be the time it takes to get from here to NYC, $B$ could be the amount of traffic on the road, and $I$ could include the information that we're driving a car and not walking. Bayes' theorem as expressed below is not controversial -- Bayesians and frequentists agree that this is just how joint and conditional probabilities work. We start by writing down the joint probability of $A$ and $B$, then factor it in two ways into conditional probabilities:
#
# $$
# p(A,B \,|\, I) = p(A\,|\,B, I)\,p(B\,|\, I) = p(B\,|\,A, I)\,p(A \,|\, I)
# $$
#
# Now we look at the right two expressions, and divide by one of the marginal probabilities to get:
#
# $$
# p(A\,|\,B, I) = \frac{p(B\,|\,A, I)\,p(A \,|\, I)}{p(B\,|\, I)}
# $$
#
# Ok, so that's all fine. Now let's replace $A$ and $B$ with variables that represent, from our example above, our data $D=(\{x_n\},\{y_n\},\{\sigma_n\})$ and our model parameters $\boldsymbol{\theta}$:
#
# $$
# p(\boldsymbol{\theta}\,|\,D, I) = \frac{p(D\,|\,\boldsymbol{\theta}, I)\,p(\boldsymbol{\theta} \,|\, I)}{p(D\,|\, I)}
# $$
#
# In just switching the meaning of the variables, this expression becomes controversial! Frequentists would object to the above for two main reasons:
#
# 1. The term on the left hand side is a probability over parameters given the data (the _posterior_ pdf) $p(\boldsymbol{\theta}\,|\,D, I)$. This is something that a frequentist would say cannot exist - there is only one true vector of parameters that we are trying to learn, not a distribution!
# 2. The right-most term in the numerator is a probability over parameters _with no dependence on the data_ (the _prior_ pdf). This encapsulates all of our prior knowledge about the parameters before we did the experiment and observed some data. This is perhaps the aspect of Bayesian inference that frequentists most disagree with.
#
# The differences above result from the fact that probability means something different to frequentists and Bayesians. Bayesians think of probability as representing a _degree of belief_ about something, whereas a frequentist thinks of a probability as related to _limiting frequencies of occurrence_ in repeated trials or observations. This is a rich topic and I highly recommend reading [this series of blog posts](http://jakevdp.github.io/blog/2014/03/11/frequentism-and-bayesianism-a-practical-intro/) by Jake VanderPlas to learn more. For now, let's put on Bayesian hats and take a look at the implications of the expression above.
#
# (_It's good to remember that we're all friends. The differences are philosophical and can lead to some heated discussions and debates, but we're all trying to do science -- we're on the same team!_)
# + [markdown] deletable=true editable=true
# ## Bayes' theorem and Bayesian inference
#
# Let's decompose Bayes' theorem (as applied to modeling and inference). The four terms in Bayes' theorem above have names that are good to be familiar with:
#
# - $p(\boldsymbol{\theta}\,|\,D, I)$ - __posterior probability__:
# This is the thing we are after when we do Bayesian inference or model fitting. We want to know what the distribution of possible parameter values is, given the data we observe and any prior information or assumptions $I$.
#
#
# - $p(D\,|\,\boldsymbol{\theta}, I)$ - __likelihood__:
# This is the likelihood of the data given a particular set of model parameters. We've already seen this object and used it above to find the best-fit model parameters by maximizing this function. In a Bayesian context, it can also be thought of as a distribution -- it's a distribution that generates new datasets given a model instance. For that reason, we typically refer to models that produce a likelihood as _generative models_ because they specify how to generate new data sets that look like the one you observe. As we saw above when we wrote the likelihood function for a straight line model and data with Gaussian errors, the likelihood usually contains a component that can be interpreted as the _noise model_.
#
#
# - $p(\boldsymbol{\theta} \,|\, I)$ - __prior probability__
# This contains any relevant information about our parameters that we know before observing the data. This can include physical constraints, previous measurements, or anything, really. This flexibility is what makes the prior a somewhat controversial object. In practice, the prior only really matters if it is much narrower than the likelihood function. If the prior is broad with respect to the likelihood, the information in the likelihood makes the prior almost irrelevant. However, there are several subtleties to choosing priors that need to be considered. As an example, one subtlety comes from the choice of coordinates for the model parameters: a prior that is broad and flat in a parameter $\alpha$ won't be broad and flat if you change variables to $\beta = \alpha^2$.
#
#
# - $p(D\,|\, I)$ - __evidence__ or __fully marginalized likelihood__ (FML)
# In many cases the evidence is simply a normalization constant and, for some of the most relevant algorithms used in inference, can be ignored. This term involves an integral over all of parameter space that can be very difficult to compute:
#
# $$
# p(D\,|\, I) = \int \,\mathrm{d}\boldsymbol{\theta} \, p(D\,|\,\boldsymbol{\theta}, I) \, p(\boldsymbol{\theta} \,|\, I)
# $$
#
# If you need to do Bayesian model selection (e.g., decide between models with different parameters), you unfortunately need to compute this quantity. But if you only _think_ you need the FML, beware!
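# To make the FML concrete, here is a hedged one-dimensional sketch -- toy Gaussian data with a known $\sigma$ and an unknown mean $\mu$, with a flat prior on $\mu$; none of it is the notebook's dataset -- that computes the evidence by brute-force quadrature and checks it against the analytic Gaussian integral:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 1.0
y = rng.normal(1.5, sigma, size=10)   # toy data: known sigma, unknown mean

# flat prior on mu over [-10, 10] -> constant density 1/20 inside that range
mu_grid = np.linspace(-10, 10, 100_001)
prior = 1.0 / 20.0

# log-likelihood of the full dataset at each grid value of mu
log_like = (-0.5 * np.sum((y[None, :] - mu_grid[:, None])**2, axis=1) / sigma**2
            - 0.5 * len(y) * np.log(2 * np.pi * sigma**2))

# evidence p(D|I) = integral of likelihood x prior over all of parameter space
evidence = np.trapz(np.exp(log_like) * prior, mu_grid)

# analytic comparison: complete the square and integrate the Gaussian in mu
S = np.sum((y - y.mean())**2)
analytic = (prior * (2 * np.pi * sigma**2)**(-len(y) / 2)
            * np.exp(-S / (2 * sigma**2))
            * np.sqrt(2 * np.pi * sigma**2 / len(y)))
print(evidence, analytic)
```

# Even in 1D this requires a dense grid; in many dimensions the integral quickly becomes intractable, which is why the FML is usually avoided unless model selection truly demands it.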
# + [markdown] deletable=true editable=true
# So how do we make use of all of this, in practice?
#
# Let's return to our example of fitting a line to data with the same data as above. In some sense, we are almost done once we write down an expression for the posterior pdf. If we ignore the FML, this amounts to multiplying a likelihood by a prior pdf. Well, we've already done the most important part: we already wrote down the likelihood function! This is often the hardest part and what we spend the most time doing as scientists (well, assuming you're not building the instrument to observe the data!). We now need to define a prior pdf over the model parameters. Here we have some flexibility. Two possibilities you can always consider:
#
# 1. A completely uninformative prior, based on dimensionality, symmetry, or entropy arguments (sometimes, this will mean using a _flat prior_ or _uniform prior_)
# 2. An empirical prior, based on previous _independent data_ that constrains this model (e.g., a previous measurement of the model parameters from an earlier dataset)
#
# For simplicity, we're going to assume a flat prior over both slope and intercept. Note that for this problem, this is [_not_ an uninformative prior](http://jakevdp.github.io/blog/2014/06/14/frequentism-and-bayesianism-4-bayesian-in-python/). For now, we'll assume that the data are informative enough that the small bias we introduce by using this prior is negligible. Let's now define the functions we'll need, and recall that
#
# $$
# \ln\mathcal{L} = -\frac{1}{2}\left[N\,\ln(2\pi)
# + \ln|\boldsymbol{\Sigma}|
# + \left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)^\mathsf{T} \,
# \boldsymbol{\Sigma}^{-1} \,
# \left(\boldsymbol{y} - \boldsymbol{X}\,\boldsymbol{\theta}\right)
# \right]
# $$
#
# ### Exercise 4:
#
# Implement the log-prior method (`ln_prior`) on the model class below.
#
# #### Solution:
# + deletable=true editable=true
class StraightLineModel(object):
def __init__(self, x, y, y_err):
"""
We store the data as attributes of the object so we don't have to
keep passing it in to the methods that compute the probabilities.
"""
self.x = np.asarray(x)
self.y = np.asarray(y)
self.y_err = np.asarray(y_err)
def ln_likelihood(self, pars):
"""
We don't need to pass in the data because we can access it from the
attributes. This is basically the same as the weighted squared
deviation function, but includes the constant normalizations for the
Gaussian likelihood.
"""
N = len(self.y)
dy = self.y - line_model(pars, self.x)
ivar = 1 / self.y_err**2 # inverse-variance
return -0.5 * (N*np.log(2*np.pi) + np.sum(2*np.log(self.y_err)) + np.sum(dy**2 * ivar))
def ln_prior(self, pars):
"""
The prior only depends on the parameters, so we don't need to touch
the data at all. We're going to implement a flat (uniform) prior
over the ranges:
a : [0, 100]
b : [-50, 50]
"""
a, b = pars # unpack parameters
ln_prior_val = 0. # we'll add to this
if a < 0 or a > 100.:
return -np.inf
else:
ln_prior_val += np.log(1E-2) # normalization, log(1/100)
if b < -50 or b > 50.:
return -np.inf
else:
ln_prior_val += np.log(1E-2) # normalization, log(1/100)
return ln_prior_val
def ln_posterior(self, pars):
"""
Up to a normalization constant, the log of the posterior pdf is just
the sum of the log likelihood plus the log prior.
"""
lnp = self.ln_prior(pars)
if np.isinf(lnp): # short-circuit if the prior is infinite (don't bother computing likelihood)
return lnp
lnL = self.ln_likelihood(pars)
lnprob = lnp + lnL
if np.isnan(lnprob):
return -np.inf
return lnprob
def __call__(self, pars):
return self.ln_posterior(pars)
# + deletable=true editable=true
model = StraightLineModel(x, y, y_err)
# + [markdown] deletable=true editable=true
# Now we'll repeat what we did above to map out the value of the log-posterior over a 2D grid of parameter values. Because we used a flat prior, you'll notice it looks identical to the visualization of the `weighted_squared_deviation` -- only the likelihood has any slope to it!
# + deletable=true editable=true
def evaluate_on_grid(func, a_grid, b_grid, args=()):
a_grid,b_grid = np.meshgrid(a_grid, b_grid)
ab_grid = np.vstack((a_grid.ravel(), b_grid.ravel())).T
func_vals = np.zeros(ab_grid.shape[0])
for j,pars in enumerate(ab_grid):
func_vals[j] = func(pars, *args)
return func_vals.reshape(a_grid.shape)
# + deletable=true editable=true
fig,axes = plt.subplots(1, 3, figsize=(14,5.1), sharex=True, sharey=True)
# make a 256x256 grid of parameter values centered on the true values
a_grid = np.linspace(a_true-5., a_true+5, 256)
b_grid = np.linspace(b_true-5., b_true+5, 256)
ln_prior_vals = evaluate_on_grid(model.ln_prior, a_grid, b_grid)
ln_like_vals = evaluate_on_grid(model.ln_likelihood, a_grid, b_grid)
ln_post_vals = evaluate_on_grid(model.ln_posterior, a_grid, b_grid)
for i,vals in enumerate([ln_prior_vals, ln_like_vals, ln_post_vals]):
axes[i].pcolormesh(a_grid, b_grid, vals,
cmap='Blues', vmin=vals.max()-1024, vmax=vals.max()) # arbitrary scale
axes[0].set_title('log-prior', fontsize=20)
axes[1].set_title('log-likelihood', fontsize=20)
axes[2].set_title('log-posterior', fontsize=20)
for ax in axes:
ax.set_xlabel('$a$')
# plot the truth
ax.plot(a_true, b_true, marker='o', zorder=10, color='#de2d26')
ax.axis('tight')
axes[0].set_ylabel('$b$')
fig.tight_layout()
# + [markdown] deletable=true editable=true
# ### Exercise 5:
#
# Subclass the `StraightLineModel` class and implement a new prior. Replace the flat prior above with an uncorrelated 2D Gaussian centered on $(\mu_a,\mu_b) = (3., 5.5)$ with root-variances $(\sigma_a,\sigma_b) = (0.05, 0.05)$. Compare the 2D grid plot with the flat prior to the one with the Gaussian prior.
#
# #### Solution:
# + deletable=true editable=true
class StraightLineModelGaussianPrior(StraightLineModel): # verbose names are a good thing!
def ln_prior(self, pars):
a, b = pars # unpack parameters
ln_prior_val = 0. # we'll add to this
# prior on a is a Gaussian with mean, stddev = (3, 0.05)
ln_prior_val += -0.5*(a - 3.)**2/0.05**2 # this is not normalized properly, but that's ok
# prior on b is a Gaussian with mean, stddev = (5.5, 0.05)
ln_prior_val += -0.5*(b - 5.5)**2/0.05**2 # this is not normalized properly, but that's ok
return ln_prior_val
# + deletable=true editable=true
model_Gprior = StraightLineModelGaussianPrior(x, y, y_err)
# + deletable=true editable=true
fig,axes = plt.subplots(1, 3, figsize=(14,5.1), sharex=True, sharey=True)
ln_prior_vals2 = evaluate_on_grid(model_Gprior.ln_prior, a_grid, b_grid)
ln_like_vals2 = evaluate_on_grid(model_Gprior.ln_likelihood, a_grid, b_grid)
ln_post_vals2 = evaluate_on_grid(model_Gprior.ln_posterior, a_grid, b_grid)
for i,vals in enumerate([ln_prior_vals2, ln_like_vals2, ln_post_vals2]):
axes[i].pcolormesh(a_grid, b_grid, vals,
cmap='Blues', vmin=vals.max()-1024, vmax=vals.max()) # arbitrary scale
axes[0].set_title('log-prior', fontsize=20)
axes[1].set_title('log-likelihood', fontsize=20)
axes[2].set_title('log-posterior', fontsize=20)
for ax in axes:
ax.set_xlabel('$a$')
# plot the truth
ax.plot(a_true, b_true, marker='o', zorder=10, color='#de2d26')
ax.axis('tight')
axes[0].set_ylabel('$b$')
fig.tight_layout()
# + [markdown] deletable=true editable=true
# ---
#
# Now what do we do? The answer depends a bit on your intentions. If you'd like to propagate the posterior pdf (as in, pass it on to other scientists to use your results), what do you do if the posterior pdf isn't analytic? And what numbers do you put in your abstract? One option is to draw samples from your posterior pdf and compute summary statistics (e.g., median and quantiles) using the samples. That's the approach we're going to take.
#
# ## MCMC
#
# One of the most common and powerful classes of methods people use for generating these samples is Markov Chain Monte Carlo (MCMC), but there are other options (e.g., brute-force or Monte Carlo rejection sampling). MCMC methods are useful because they scale reasonably to higher dimensions (well, at least better than brute-force). A disadvantage of these methods comes from the "Markov Chain" part of the name: there is always some correlation between nearby steps in a chain of samples, so you have to compute second-order statistics on the samples to verify whether they are truly random, fair samples from the target distribution (your posterior pdf).
#
# The simplest MCMC algorithm is known as Metropolis-Hastings. I'm not going to explain it in detail, but in pseudocode, it is:
#
# - Start from some position in parameter space, $\theta_0$, with posterior probability $\pi_0$
# - Iterate from 1 to $N_{\rm steps}$:
#     - Sample an offset $\delta\theta_0$ from some proposal distribution
#     - Compute a new parameter value using this offset, $\theta_{\rm new} = \theta_0 + \delta\theta_0$
#     - Evaluate the posterior probability at the new parameter vector, $\pi_{\rm new}$
#     - Sample a uniform random number, $r \sim \mathcal{U}(0,1)$
#     - if $\pi_{\rm new}/\pi_0 > 1$ or $\pi_{\rm new}/\pi_0 > r$:
#         - store $\theta_{\rm new}$
#         - replace $\theta_0,\pi_0$ with $\theta_{\rm new},\pi_{\rm new}$
#     - else:
#         - store $\theta_0$ again
#
# The proposal distribution has to be chosen and tuned by hand. We'll use a spherical / uncorrelated Gaussian distribution with root-variances set by hand:
# + deletable=true editable=true
def sample_proposal(*sigmas):
return np.random.normal(0., sigmas)
def run_metropolis_hastings(p0, n_steps, model, proposal_sigmas):
"""
Run a Metropolis-Hastings MCMC sampler to generate samples from the input
log-posterior function, starting from some initial parameter vector.
Parameters
----------
p0 : iterable
Initial parameter vector.
n_steps : int
Number of steps to run the sampler for.
model : StraightLineModel instance (or subclass)
A callable object that takes a parameter vector and computes
the log of the posterior pdf.
proposal_sigmas : list, array
A list of standard-deviations passed to the sample_proposal
function. These are like step sizes in each of the parameters.
"""
p0 = np.array(p0)
if len(proposal_sigmas) != len(p0):
raise ValueError("Proposal distribution should have same shape as parameter vector.")
# the objects we'll fill and return:
chain = np.zeros((n_steps, len(p0))) # parameter values at each step
ln_probs = np.zeros(n_steps) # log-probability values at each step
# we'll keep track of how many steps we accept to compute the acceptance fraction
n_accept = 0
# evaluate the log-posterior at the initial position and store starting position in chain
ln_probs[0] = model(p0)
chain[0] = p0
# loop through the number of steps requested and run MCMC
for i in range(1,n_steps):
# proposed new parameters
step = sample_proposal(*proposal_sigmas)
new_p = chain[i-1] + step
# compute log-posterior at new parameter values
new_ln_prob = model(new_p)
# log of the ratio of the new log-posterior to the previous log-posterior value
ln_prob_ratio = new_ln_prob - ln_probs[i-1]
if (ln_prob_ratio > 0) or (ln_prob_ratio > np.log(np.random.uniform())):
chain[i] = new_p
ln_probs[i] = new_ln_prob
n_accept += 1
else:
chain[i] = chain[i-1]
ln_probs[i] = ln_probs[i-1]
acc_frac = n_accept / n_steps
return chain, ln_probs, acc_frac
# + [markdown] deletable=true editable=true
# Now we'll run the sampler! Let's start from some arbitrary position allowed by our prior.
#
# ### Exercise 6:
#
# Choose a starting position, values for `a` and `b` to start the MCMC from. In general, a good way to do this is to sample from the prior pdf. Generate values for `a` and `b` by sampling from a uniform distribution over the domain we defined above. Then, run the MCMC sampler from this initial position for 8192 steps. Play around with ("tune" as they say) the `proposal_sigmas` until you get an acceptance fraction around ~40%.
#
# #### Solution:
# + deletable=true editable=true
p0 = [6.,6.]
chain,_,acc_frac = run_metropolis_hastings(p0, n_steps=8192, model=model,
proposal_sigmas=[0.05,0.05])
print("Acceptance fraction: {:.1%}".format(acc_frac))
# + [markdown] deletable=true editable=true
# Let's look at the chain returned, the parameter value positions throughout the sampler run:
# + deletable=true editable=true
fig,ax = plt.subplots(1, 1, figsize=(5,5))
ax.pcolormesh(a_grid, b_grid, ln_post_vals,
cmap='Blues', vmin=ln_post_vals.max()-128, vmax=ln_post_vals.max()) # arbitrary scale
ax.axis('tight')
fig.tight_layout()
ax.plot(a_true, b_true, marker='o', zorder=10, color='#de2d26')
ax.plot(chain[:512,0], chain[:512,1], marker='', color='k', linewidth=1.)
ax.set_xlabel('$a$')
ax.set_ylabel('$b$')
# + [markdown] deletable=true editable=true
# We can also look at the individual parameter traces, i.e. the 1D functions of parameter value vs. step number for each parameter separately:
# + deletable=true editable=true
fig,axes = plt.subplots(len(p0), 1, figsize=(5,7), sharex=True)
for i in range(len(p0)):
axes[i].plot(chain[:,i], marker='', drawstyle='steps')
axes[0].axhline(a_true, color='r', label='true')
axes[0].legend(loc='best')
axes[0].set_ylabel('$a$')
axes[1].axhline(b_true, color='r')
axes[1].set_ylabel('$b$')
fig.tight_layout()
# + [markdown] deletable=true editable=true
# From these trace plots, we can see by eye that it takes the sampler a few hundred steps to converge. When we look at the samples returned or compute our summary statistics, we don't want to include these parameter values! In addition, there is likely some correlation between nearby steps. We can attempt to remove some of the correlated steps by _thinning_ the chain, i.e. by downsampling. We can do both simultaneously using Python indexing tricks. Certainly by step 2000 the chains look converged, so from there on we'll keep only every 8th step:
# + deletable=true editable=true
good_samples = chain[2000::8]
good_samples.shape
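# The claim that nearby steps are correlated can also be checked quantitatively. Below is a rough sketch of an integrated autocorrelation-time estimate, demonstrated on a synthetic AR(1) chain with a known answer rather than the MCMC chain above; `autocorr_time` and its truncation rule are simplistic, hand-rolled choices here, not a standard library routine:

```python
import numpy as np

def autocorr_time(samples, max_lag=200):
    """Rough integrated autocorrelation time of a 1D chain (a common heuristic)."""
    x = samples - samples.mean()
    n = len(x)
    acf = np.array([np.dot(x[:n - k], x[k:]) / np.dot(x, x)
                    for k in range(max_lag)])
    # sum the ACF until it first drops below zero (a crude truncation rule)
    cut = np.argmax(acf < 0) if np.any(acf < 0) else max_lag
    return 1 + 2 * np.sum(acf[1:cut])

# stand-in chain: an AR(1) process with known correlation, mimicking MCMC output
rng = np.random.default_rng(3)
rho = 0.9
toy_chain = np.empty(20000)
toy_chain[0] = 0.0
for i in range(1, len(toy_chain)):
    toy_chain[i] = rho * toy_chain[i - 1] + rng.normal()

tau = autocorr_time(toy_chain)
print(tau)  # for an AR(1) process, the theoretical value is (1 + rho)/(1 - rho) = 19
```

# A chain thinned by roughly `tau` steps should behave like approximately independent draws; in practice dedicated tools provide more robust estimators than this sketch.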
# + [markdown] deletable=true editable=true
# We're left with 774 samples; we hope these are approximately uncorrelated, converged samples from the posterior pdf (there are other ways we can check, but these are out of scope for this workshop). Now you have to choose what summary statistics to report. You have some options, but a reasonable choice is to report the median, 16th, and 84th percentiles:
# + deletable=true editable=true
low,med,hi = np.percentile(good_samples, [16, 50, 84], axis=0)
upper, lower = hi-med, med-low
disp_str = ""
for i,name in enumerate(['a', 'b']):
fmt_str = '{name}={val:.2f}^{{+{plus:.2f}}}_{{-{minus:.2f}}}'
disp_str += fmt_str.format(name=name, val=med[i], plus=upper[i], minus=lower[i])
disp_str += r'\quad '
from IPython import display
disp_str = "${}$".format(disp_str)
display.Latex(data=disp_str)
# + [markdown] deletable=true editable=true
# Recall that the true values are:
# + deletable=true editable=true
a_true, b_true
# + [markdown] deletable=true editable=true
# We've now done this problem the Bayesian way as well! Now, instead of drawing the "best-fit" line over the data, we can take a handful of samples and plot a line for each of the samples, as a way to visualize the uncertainty we have in the model parameters:
# + deletable=true editable=true
plt.figure(figsize=(6,5))
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
x_grid = np.linspace(x.min()-0.1, x.max()+0.1, 128)
for pars in good_samples[:128]: # only plot 128 samples
plt.plot(x_grid, line_model(pars, x_grid),
marker='', linestyle='-', color='#3182bd', alpha=0.1, zorder=-10)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.tight_layout()
# + [markdown] deletable=true editable=true
# Or, we can plot the samples using a _corner plot_ to visualize the structure of the 2D and 1D (marginal) posteriors:
# + deletable=true editable=true
# uncomment and run this line if the import fails:
# # !source activate statsseminar; pip install corner
import corner
# + deletable=true editable=true
fig = corner.corner(chain[2000:], bins=32, labels=['$a$', '$b$'], truths=[a_true, b_true])
# + [markdown] deletable=true editable=true
# ---
#
# ## Finally, the problem you came here for: fitting a straight line to data with intrinsic scatter
#
# We made it! We're now ready to do the problem we set out to do. In the initial model, we assumed that we knew the uncertainties in our measurements exactly and that the data were drawn from a one-dimensional line. We're now going to relax that assumption and assume that either (a) the data uncertainties have been underestimated or (b) there is intrinsic scatter in the true model (in the absence of other information, these two ideas are degenerate). Let's first generate some data. We'll assume the latter of the two ideas, and we'll further assume that the model line is convolved with an additional Gaussian in the $y$ direction, with the new parameter being the intrinsic width of the relation expressed as a variance $V$:
# + deletable=true editable=true
V_true = 0.5**2
n_data = 42
# we'll keep the same parameters for the line as we used above
# + deletable=true editable=true
x = rnd.uniform(0, 2., n_data)
x.sort() # sort the values in place
y = a_true*x + b_true
# Heteroscedastic Gaussian uncertainties only in y direction
y_err = rnd.uniform(0.1, 0.2, size=n_data) # randomly generate uncertainty for each datum
# add Gaussian intrinsic width
y = rnd.normal(y, np.sqrt(y_err**2 + V_true)) # re-sample y data with noise and intrinsic scatter
# + deletable=true editable=true
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.tight_layout()
# + [markdown] deletable=true editable=true
# Let's first naively fit the data assuming no intrinsic scatter using least-squares:
# + deletable=true editable=true
X = np.vander(x, N=2, increasing=True)
Cov = np.diag(y_err**2)
Cinv = np.linalg.inv(Cov)
# + deletable=true editable=true
best_pars = np.linalg.inv(X.T @ Cinv @ X) @ (X.T @ Cinv @ y)
pars_Cov = np.linalg.inv(X.T @ Cinv @ X)
# + deletable=true editable=true
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
x_grid = np.linspace(x.min()-0.1, x.max()+0.1, 128)
plt.plot(x_grid, line_model(best_pars[::-1], x_grid), marker='', linestyle='-', label='best-fit line')
plt.plot(x_grid, line_model([a_true, b_true], x_grid), marker='', linestyle='-', label='true line')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.tight_layout()
# + [markdown] deletable=true editable=true
# The covariance matrix for the parameters is:
# + deletable=true editable=true
pars_Cov
# + [markdown] deletable=true editable=true
# We clearly get a biased result, and yet _very_ precise measurements of the parameters, when we don't take into account the intrinsic scatter. What we need to do now is modify our model to include the scatter as a free parameter. Unfortunately, it enters the model non-linearly, so there is no solution using linear algebra or least-squares. Instead, we just write a new likelihood function and optimize it numerically. One choice we'll make is to use the parameter $\ln{V}$ instead of $V$, for reasons I'll explain later. To implement the new model, we'll subclass our `StraightLineModel` class and define new likelihood and prior functions.
#
# ### Exercise 7:
#
# Subclass the `StraightLineModel` class and implement new prior and likelihood functions (`ln_prior` and `ln_likelihood`). Our model will now have 3 parameters: `a`, `b`, and `lnV`, the log of the intrinsic scatter variance. Use flat priors on all of these parameters. In fact, we'll be even lazier and drop the constant normalization terms: if the parameter vector is within the ranges below, return 0. (i.e., $\ln 1$); otherwise return $-\infty$:
#
# #### Solution:
# + deletable=true editable=true
class StraightLineIntrinsicScatterModel(StraightLineModel):
def ln_prior(self, pars):
""" The prior only depends on the parameters """
a, b, lnV = pars
# flat priors on a, b, lnV
if a < -10 or a > 10 or b < -100. or b > 100. or lnV < -10. or lnV > 10.:
return -np.inf
# this is only valid up to a numerical constant
return 0.
def ln_likelihood(self, pars):
""" The likelihood function evaluation requires a particular set of model parameters and the data """
a,b,lnV = pars
V = np.exp(lnV)
N = len(self.y)
dy = self.y - line_model([a,b], self.x)
ivar = 1 / (self.y_err**2 + V) # inverse-variance now includes intrinsic scatter
return -0.5 * (N*np.log(2*np.pi) - np.sum(np.log(ivar)) + np.sum(dy**2 * ivar))
# + deletable=true editable=true
scatter_model = StraightLineIntrinsicScatterModel(x, y, y_err)
# + deletable=true editable=true
x0 = [5., 5., 0.] # starting guess for the optimizer
# we have to minimize the negative log-likelihood to maximize the likelihood
result_ml_scatter = minimize(lambda *args: -scatter_model.ln_likelihood(*args),
x0=x0, method='BFGS')
result_ml_scatter
# + deletable=true editable=true
plt.errorbar(x, y, y_err, linestyle='none', marker='o', ecolor='#666666')
x_grid = np.linspace(x.min()-0.1, x.max()+0.1, 128)
plt.plot(x_grid, line_model(result_ml_scatter.x[:2], x_grid), marker='', linestyle='-', label='best-fit line')
plt.plot(x_grid, line_model([a_true, b_true], x_grid), marker='', linestyle='-', label='true line')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend(loc='best')
plt.tight_layout()
# + deletable=true editable=true
V_true, np.exp(result_ml_scatter.x[2])
# + [markdown] deletable=true editable=true
# It looks like the maximum likelihood estimate is a little better, and we get a reasonable measurement of the intrinsic scatter, but none of this gives us a handle on the uncertainty. How do we quantify the uncertainty in the (now three) parameters? We'll just run MCMC.
#
# ### Exercise 8:
#
# To quantify our uncertainty in the parameters, we'll run MCMC using the new model. Run MCMC for 65536 steps and visualize the resulting chain. Make sure the acceptance fraction is between ~25-50%.
#
# #### Solution:
# + deletable=true editable=true
p0 = [6., 6., -1.]
chain,_,acc_frac = run_metropolis_hastings(p0, n_steps=2**16, model=scatter_model,
proposal_sigmas=[0.15,0.15,0.2])
acc_frac
# + deletable=true editable=true
fig,axes = plt.subplots(len(p0), 1, figsize=(5,7), sharex=True)
for i in range(len(p0)):
axes[i].plot(chain[:,i], marker='', drawstyle='steps')
axes[0].axhline(a_true, color='r', label='true')
axes[0].legend(loc='best')
axes[0].set_ylabel('$a$')
axes[1].axhline(b_true, color='r')
axes[1].set_ylabel('$b$')
axes[2].axhline(np.log(V_true), color='r')
axes[2].set_ylabel(r'$\ln V$')
fig.tight_layout()
# + deletable=true editable=true
fig = corner.corner(chain[2000:], bins=32, labels=['$a$', '$b$', r'$\ln V$'],
truths=[a_true, b_true, np.log(V_true)])
# + [markdown] deletable=true editable=true
# Now we'll again compute the percentiles for the 1D marginal distributions:
# + deletable=true editable=true
good_samples = chain[2000::8]
good_samples.shape
# + deletable=true editable=true
low,med,hi = np.percentile(good_samples, [16, 50, 84], axis=0)
upper, lower = hi-med, med-low
disp_str = ""
for i,name in enumerate(['a', 'b', r'\ln V']):
fmt_str = '{name}={val:.2f}^{{+{plus:.2f}}}_{{-{minus:.2f}}}'
disp_str += fmt_str.format(name=name, val=med[i], plus=upper[i], minus=lower[i])
disp_str += r'\quad '
disp_str = "${}$".format(disp_str)
display.Latex(data=disp_str)
# + [markdown] deletable=true editable=true
# Compare this to the diagonal elements of the covariance matrix we got from ignoring the intrinsic scatter and doing least-squares fitting:
# + deletable=true editable=true
disp_str = ""
for i,name in zip([1,0], ['a', 'b']):
fmt_str = r'{name}={val:.2f} \pm {err:.2f}'
disp_str += fmt_str.format(name=name, val=best_pars[i], err=np.sqrt(pars_Cov[i,i]))
disp_str += r'\quad '
disp_str = "${}$".format(disp_str)
display.Latex(data=disp_str)
# + [markdown] deletable=true editable=true
# The parameter uncertainties estimated from the MCMC samples are much larger -- this reflects our uncertainty about the intrinsic scatter of the points. Precision is highly model dependent.
| day3/Fitting-a-line.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# # CKY parser with string copying
# This parser was written by Meaghan. It implements copying through a special category and placeholder string which has the effect of copying some amount of material from the end of the sentence so far and adding it to the end.
#
# This implementation allows copies to overlap, and does not require that copies be constituents.
#
# The grammar actually generates strings in Lex+"copy", where "copy" is a special string. There is then a second grammar that generates the real string from a string that includes instances of "copy" by copying some amount of material from before "copy" and replacing "copy" with it. The amount of material to be copied is determined by that second grammar. For the parser, we don't have to worry about it; we just need to find potential copies.
#
# The grammar may have one rule set with the LHS Copy and the RHS copy (i.e. Copy -> copy). If this rule set is present, the parser checks, for every cell, whether the string the cell represents is identical to the string of the same length immediately preceding it. If it is, the parser puts the Copy category in the cell and the "copy" lexical item in the backpointers. Otherwise, the parser proceeds like a normal CKY parser.
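# The cell-level check described above can be sketched as follows: a span of the input counts as a potential copy when it exactly repeats the material immediately before it. (This is an illustrative sketch, not the `cky` module's implementation.)

```python
def is_potential_copy(tokens, start, length):
    """True if tokens[start:start+length] exactly repeats the `length`
    tokens immediately preceding position `start`."""
    if start - length < 0:
        return False
    return tokens[start - length:start] == tokens[start:start + length]

s = "a a a a".split(" ")
print(is_potential_copy(s, 2, 2))  # True: tokens 2-3 repeat tokens 0-1
print(is_potential_copy(s, 1, 1))  # True: token 1 repeats token 0
print(is_potential_copy(s, 0, 1))  # False: nothing precedes position 0
```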
import cky
from IPython.display import Image
from IPython.display import display
# We make a toy grammar. None of these rules are copy rules.
grammar = [("S", [(["NP","VP"])]),
("VP", [(["VP","PP"]),
(["V","NP"]),
(["eats"])]),
("PP", [(["P","NP"])]),
("NP", [(["NP","PP"]),
(["Det","N"]),
(["she"])]),
("V" , [(["eats"])]),
("P" , [(["with"])]),
("N" , [(["fish"]),
(["fork"])]),
("Det",[(["a"])])]
# A second grammar that generates full binary trees
grammar_ambig = [("S",[(["S","S"]),(["a"])])]
grammar_ambig_probs = [("S",[(["S","S"],0.5),(["a"],0.5)])]
print(cky.grammar2string(grammar_ambig))
# A third grammar with copying
grammar_ambig_copy = [("S",[["S","S"],["a"],["S","Copy"],["Copy","S"]]),("Copy",[["copy"]])]
grammar_ambig_copy_probs = [("S",[(["S","S"],0.2),(["a"],0.3),(["S","Copy"],0.2),(["Copy","S"],0.3)]),("Copy",[(["copy"],1.)])]
print(cky.grammar2string(grammar_ambig_copy))
# ## Let's run some examples
s = "she eats a fish with a fork".split(" ")
g = grammar
# Parse s with g
chart,backpointers = cky.parse(s,g)
cky.print_chart(chart)
cky.pretty_print_backpointers(backpointers,g)
# Collect the trees
parses = cky.collect_trees("S",chart,backpointers,g,s)
# Print the trees to files
for i,parse in enumerate(parses):
cky.tree_to_png(parse,"parse_%i.png"%i)
x=Image(filename='parse_0.png')
y=Image(filename='parse_1.png')
display(x,y)
# Count the trees
cky.n_parses("S",chart,backpointers,g,s)
# Calculate probabilities
grammar_probs = [("S", [(["NP","VP"],1.)]),
("VP", [(["VP","PP"],0.3),
(["V","NP"],0.4),
(["eats"],0.3)]),
("PP", [(["P","NP"],1.)]),
("NP", [(["NP","PP"],0.5),
(["Det","N"],0.3),
(["she"],0.2)]),
("V" , [(["eats"],1.)]),
("P" , [(["with"],1.)]),
("N" , [(["fish"],0.6),
(["fork"],0.4)]),
("Det",[(["a"],1.)])]
g_probs = cky.make_rule_probs(grammar_probs)
print(cky.grammar2string_probs(grammar_probs))
(probs,s_prob)=cky.probability("S",chart,backpointers,g,s,g_probs)
# The log probability of the sentence is the second element
s_prob
# Let's un-log it just to get a look
cky.np.exp(s_prob)
# ## Full binary tree grammars
# ### Using the binary tree grammar without copying
g = grammar_ambig
s = ["a"]*3
chart,backpointers = cky.parse(s,g)
cky.print_chart(chart)
cky.pretty_print_backpointers(backpointers,g)
# cky.print_backpointers(backpointers) ## in case there are so many pointers in a cell that pretty_print won't show them
cky.n_parses("S",chart,backpointers,g,s)
parses = cky.collect_trees("S",chart,backpointers,g,s)
for i,parse in enumerate(parses):
cky.tree_to_png(parse,"parse_%i.png"%i)
x=Image(filename='parse_0.png')
y=Image(filename='parse_1.png')
display(x,y)
# Probability of sentence
g_probs = cky.make_rule_probs(grammar_ambig_probs)
(probs,s_prob)=cky.probability("S",chart,backpointers,g,s,g_probs)
# The log probability of the sentence is the second element
s_prob
# Let's un-log it just to get a look
cky.np.exp(s_prob)
# ### Computing Catalan numbers
# We're interested in Catalan numbers because they provide an easy check on the basic behaviour of the parser, sans copying. The number of binary parses of a sentence with n leaves is the (n-1)th Catalan number.
# +
def a_sent(n): return ["a"]*n
#catalan numbers are 1, 1, 2, 5, 14, 42, 132, 429, 1430,
#4862, 16796, 58786, 208012, 742900, 2674440, 9694845,
#35357670, 129644790, 477638700, 1767263190, 6564120420,
#24466267020, 91482563640, 343059613650, 1289904147324, 4861946401452,
def catalan(n):
"""calculates the first n catalan numbers (except 0th) extremely inefficiently"""
for i in range(1,n):
s=a_sent(i)
chart,backpoints = cky.parse(s,grammar_ambig)
print(cky.n_parses("S",chart,backpoints,grammar_ambig,s))
def catalan_probs(n):
"""calculates the probabilities of fully ambiguous trees, with both rules p=0.5"""
for i in range(1,n):
s=a_sent(i)
chart,backpoints = cky.parse(s,grammar_ambig)
print(cky.probability("S",chart,backpoints,grammar_ambig,s,g_probs)[1])
# -
catalan(7) ## parse counts for sentences of 1 to 6 a's: the first 6 Catalan numbers
catalan_probs(7) ## log probabilities of the same sentences, using the grammar with equal probability (0.5) for the 2 rules
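# The parser counts above can be cross-checked against the closed form for the Catalan numbers, $C_n = \binom{2n}{n}/(n+1)$; a sentence of $n$ a's has $C_{n-1}$ binary parses:

```python
from math import comb

def catalan_closed_form(n):
    """nth Catalan number, C_n = binom(2n, n) // (n + 1)."""
    return comb(2 * n, n) // (n + 1)

# counts for sentences of 1..6 a's, matching the parser output above
print([catalan_closed_form(i - 1) for i in range(1, 7)])  # [1, 1, 2, 5, 14, 42]
```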
# ### Using the binary tree grammar with copying
g = grammar_ambig_copy
s = ["a"]*3
chart,backpointers = cky.parse(s,g)
cky.print_chart(chart)
cky.pretty_print_backpointers(backpointers,g)
cky.n_parses("S",chart,backpointers,g,s)
parses = cky.collect_trees("S",chart,backpointers,g,s)
for i,parse in enumerate(parses):
cky.tree_to_png(parse,"parse_%i.png"%i)
# +
a=Image(filename='parse_0.png')
b=Image(filename='parse_1.png')
c=Image(filename='parse_2.png')
d=Image(filename='parse_3.png')
e=Image(filename='parse_4.png')
f=Image(filename='parse_5.png')
h=Image(filename='parse_6.png')
display(a,b,c,d,e,f,h)
# -
# Probability of sentence
g_probs = cky.make_rule_probs(grammar_ambig_copy_probs)
(probs,s_prob)=cky.probability("S",chart,backpointers,g,s,g_probs)
# The log probability of the sentence is the second element
s_prob
# Let's un-log it just to get a look
cky.np.exp(s_prob)
# Let's try with sentence aaaa
s = ["a"]*4
chart,backpointers = cky.parse(s,g)
cky.print_chart(chart)
cky.print_backpointers(backpointers)
cky.n_parses("S",chart,backpointers,g,s) ## 35 PARSES!
parses = cky.collect_trees("S",chart,backpointers,g,s)
for i,parse in enumerate(parses[:5]): ## let's just look at the first 5
cky.tree_to_png(parse,"parse_%i.png"%i)
# +
a=Image(filename='parse_0.png')
b=Image(filename='parse_1.png')
c=Image(filename='parse_2.png')
d=Image(filename='parse_3.png')
e=Image(filename='parse_4.png')
display(a,b,c,d,e)
# -
# Probability of sentence
g_probs = cky.make_rule_probs(grammar_ambig_copy_probs)
(probs,s_prob)=cky.probability("S",chart,backpointers,g,s,g_probs)
# The log probability of the sentence is the second element
s_prob
# Let's un-log it just to get a look
cky.np.exp(s_prob)
| ckypy/cky.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# ## `pyreaclib` examples
#
# This notebook illustrates some of the higher-level data structures in `pyreaclib`.
#
# Note: to run this notebook properly, you must have `pyreaclib/` in your `PYTHONPATH`
# + deletable=true editable=true
# %matplotlib inline
# + deletable=true editable=true
import pyreaclib as pyrl
# + [markdown] deletable=true editable=true
# ## Loading a single rate
#
# The `Rate` class holds a single reaction rate and takes a reaclib file as input. There are a lot of methods in the `Rate` class that allow you to explore the rate.
# + deletable=true editable=true
c13pg = pyrl.Rate("../pyreaclib/rates/c13-pg-n14-nacr")
# + [markdown] deletable=true editable=true
# ### the original reaclib source
# + deletable=true editable=true
print(c13pg.original_source)
# + [markdown] deletable=true editable=true
# ### evaluate the rate at a given temperature (in K)
# + deletable=true editable=true
c13pg.eval(1.e9)
# + [markdown] deletable=true editable=true
# ### a human readable string describing the rate, and the nuclei involved
# + deletable=true editable=true
print(c13pg)
# + [markdown] deletable=true editable=true
# The nuclei involved are all `Nucleus` objects. They have members `Z` and `N` that give the proton and neutron number
# + deletable=true editable=true
print(c13pg.reactants)
print(c13pg.products)
# + deletable=true editable=true
r2 = c13pg.reactants[1]
# + deletable=true editable=true
type(r2)
# + deletable=true editable=true
print(r2.Z, r2.N)
# + [markdown] deletable=true editable=true
# ### get the temperature sensitivity about some reference T
# + [markdown] deletable=true editable=true
# This is the exponent when we write the rate as $r = r_0 \left ( \frac{T}{T_0} \right )^\nu$. We can estimate this given a reference temperature, $T_0$
# + deletable=true editable=true
c13pg.get_rate_exponent(2.e7)
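# The same exponent can also be estimated by hand as the log-log slope of the rate around $T_0$. A small self-contained sketch (the centered-difference estimator here is an assumption about the method, not pyreaclib's code; it is checked on a pure power law, where the answer is exact):

```python
import numpy as np

def rate_exponent(rate_fn, T0, eps=1e-4):
    """Estimate nu = d ln r / d ln T at T0 with a centered difference."""
    r_lo, r_hi = rate_fn(T0 * (1.0 - eps)), rate_fn(T0 * (1.0 + eps))
    return (np.log(r_hi) - np.log(r_lo)) / (np.log(1.0 + eps) - np.log(1.0 - eps))

nu_true = 16.7
toy_rate = lambda T: 3.0 * (T / 2.0e7) ** nu_true  # pure power-law rate
print(round(rate_exponent(toy_rate, 2.0e7), 4))  # 16.7
```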
# + [markdown] deletable=true editable=true
# ### plot the rate's temperature dependence
#
# A reaction rate has a complex temperature dependence that is defined in the reaclib files. The `plot()` method will plot this for us
# + deletable=true editable=true
c13pg.plot()
# + [markdown] deletable=true editable=true
# A rate also knows its density dependence -- this is inferred from the reactants in the rate description and is used to construct the terms needed to write a reaction network. Note: since we want reaction rates per gram, this number is 1 less than the number of nuclei
# + deletable=true editable=true
c13pg.dens_exp
# + [markdown] deletable=true editable=true
# ## working with a group of rates
#
# A `RateCollection()` class allows us to work with a group of rates. This is used to explore their relationship. Other classes (introduced soon) are built on this and will allow us to output network code directly.
# + deletable=true editable=true
files = ["c12-pg-n13-ls09",
"c13-pg-n14-nacr",
"n13--c13-wc12",
"n13-pg-o14-lg06",
"n14-pg-o15-im05",
"n15-pa-c12-nacr",
"o14--n14-wc12",
"o15--n15-wc12"]
rc = pyrl.RateCollection(files)
# + [markdown] deletable=true editable=true
# ### print an overview of the network described by this rate collection
# + deletable=true editable=true
print(rc)
# + deletable=true editable=true
rc.print_network_overview()
# + [markdown] deletable=true editable=true
# ### show a network diagram
#
# At the moment, we rely on NetworkX to visualize the network
# + deletable=true editable=true
rc.plot()
# -
# ## Explore the network's rates
# To evaluate the rates, we need a composition
comp = pyrl.Composition(rc.get_nuclei())
comp.set_solar_like()
# Interactive exploration is enabled through the `Explorer` class, which takes a `RateCollection` and a `Composition`
re = pyrl.Explorer(rc, comp)
re.explore()
# + [markdown] deletable=true editable=true
# ## Integrating networks
#
# If we don't just want to explore the network interactively in a notebook, but instead want to output code to integrate the network, we need to create one of `PythonNetwork`, `BoxLibNetwork` or `SundialsNetwork`
# + deletable=true editable=true
pynet = pyrl.PythonNetwork(files)
# + [markdown] deletable=true editable=true
# A network knows how to express the terms that make up the function (in the right programming language). For instance, you can get the term for the ${}^{13}\mathrm{C} (p,\gamma) {}^{14}\mathrm{N}$ rate as:
# + deletable=true editable=true
print(pynet.ydot_string(c13pg))
# + [markdown] deletable=true editable=true
# and the code needed to evaluate that rate (the T-dependent part) as:
# + deletable=true editable=true
print(pynet.function_string(c13pg))
# + [markdown] deletable=true editable=true
# The `write_network()` method will output the python code needed to define the RHS of a network for integration with the SciPy integrators
# + deletable=true editable=true
pynet.write_network("test.py")
# + deletable=true editable=true
# # %load test.py
import numpy as np
from pyreaclib.rates import Tfactors
ip = 0
ihe4 = 1
ic12 = 2
ic13 = 3
in13 = 4
in14 = 5
in15 = 6
io14 = 7
io15 = 8
nnuc = 9
A = np.zeros((nnuc), dtype=np.int32)
A[ip] = 1
A[ihe4] = 4
A[ic12] = 12
A[ic13] = 13
A[in13] = 13
A[in14] = 14
A[in15] = 15
A[io14] = 14
A[io15] = 15
def c12_pg_n13(tf):
# c12 + p --> n13
rate = 0.0
# ls09n
rate += np.exp( 17.1482 + -13.692*tf.T913i + -0.230881*tf.T913
+ 4.44362*tf.T9 + -3.15898*tf.T953 + -0.666667*tf.lnT9)
# ls09r
rate += np.exp( 17.5428 + -3.77849*tf.T9i + -5.10735*tf.T913i + -2.24111*tf.T913
+ 0.148883*tf.T9 + -1.5*tf.lnT9)
return rate
def c13_pg_n14(tf):
# c13 + p --> n14
rate = 0.0
# nacrn
rate += np.exp( 18.5155 + -13.72*tf.T913i + -0.450018*tf.T913
+ 3.70823*tf.T9 + -1.70545*tf.T953 + -0.666667*tf.lnT9)
# nacrr
rate += np.exp( 13.9637 + -5.78147*tf.T9i + -0.196703*tf.T913
+ 0.142126*tf.T9 + -0.0238912*tf.T953 + -1.5*tf.lnT9)
# nacrr
rate += np.exp( 15.1825 + -13.5543*tf.T9i
+ -1.5*tf.lnT9)
return rate
def n13_c13(tf):
# n13 --> c13
rate = 0.0
# wc12w
rate += np.exp( -6.7601)
return rate
def n13_pg_o14(tf):
# n13 + p --> o14
rate = 0.0
# lg06n
rate += np.exp( 18.1356 + -15.1676*tf.T913i + 0.0955166*tf.T913
+ 3.0659*tf.T9 + -0.507339*tf.T953 + -0.666667*tf.lnT9)
# lg06r
rate += np.exp( 10.9971 + -6.12602*tf.T9i + 1.57122*tf.T913i
+ -1.5*tf.lnT9)
return rate
def n14_pg_o15(tf):
# n14 + p --> o15
rate = 0.0
# im05n
rate += np.exp( 17.01 + -15.193*tf.T913i + -0.161954*tf.T913
+ -7.52123*tf.T9 + -0.987565*tf.T953 + -0.666667*tf.lnT9)
# im05r
rate += np.exp( 6.73578 + -4.891*tf.T9i
+ 0.0682*tf.lnT9)
# im05r
rate += np.exp( 7.65444 + -2.998*tf.T9i
+ -1.5*tf.lnT9)
# im05n
rate += np.exp( 20.1169 + -15.193*tf.T913i + -4.63975*tf.T913
+ 9.73458*tf.T9 + -9.55051*tf.T953 + 0.333333*tf.lnT9)
return rate
def n15_pa_c12(tf):
# n15 + p --> he4 + c12
rate = 0.0
# nacrn
rate += np.exp( 27.4764 + -15.253*tf.T913i + 1.59318*tf.T913
+ 2.4479*tf.T9 + -2.19708*tf.T953 + -0.666667*tf.lnT9)
# nacrr
rate += np.exp( -6.57522 + -1.1638*tf.T9i + 22.7105*tf.T913
+ -2.90707*tf.T9 + 0.205754*tf.T953 + -1.5*tf.lnT9)
# nacrr
rate += np.exp( 20.8972 + -7.406*tf.T9i
+ -1.5*tf.lnT9)
# nacrr
rate += np.exp( -4.87347 + -2.02117*tf.T9i + 30.8497*tf.T913
+ -8.50433*tf.T9 + -1.54426*tf.T953 + -1.5*tf.lnT9)
return rate
def o14_n14(tf):
# o14 --> n14
rate = 0.0
# wc12w
rate += np.exp( -4.62354)
return rate
def o15_n15(tf):
# o15 --> n15
rate = 0.0
# wc12w
rate += np.exp( -5.17053)
return rate
def rhs(t, Y, rho, T):
tf = Tfactors(T)
lambda_c12_pg_n13 = c12_pg_n13(tf)
lambda_c13_pg_n14 = c13_pg_n14(tf)
lambda_n13_c13 = n13_c13(tf)
lambda_n13_pg_o14 = n13_pg_o14(tf)
lambda_n14_pg_o15 = n14_pg_o15(tf)
lambda_n15_pa_c12 = n15_pa_c12(tf)
lambda_o14_n14 = o14_n14(tf)
lambda_o15_n15 = o15_n15(tf)
dYdt = np.zeros((nnuc), dtype=np.float64)
dYdt[ip] = (
-rho*Y[ip]*Y[ic12]*lambda_c12_pg_n13
-rho*Y[ip]*Y[ic13]*lambda_c13_pg_n14
-rho*Y[ip]*Y[in13]*lambda_n13_pg_o14
-rho*Y[ip]*Y[in14]*lambda_n14_pg_o15
-rho*Y[ip]*Y[in15]*lambda_n15_pa_c12
)
dYdt[ihe4] = (
+rho*Y[ip]*Y[in15]*lambda_n15_pa_c12
)
dYdt[ic12] = (
-rho*Y[ip]*Y[ic12]*lambda_c12_pg_n13
+rho*Y[ip]*Y[in15]*lambda_n15_pa_c12
)
dYdt[ic13] = (
-rho*Y[ip]*Y[ic13]*lambda_c13_pg_n14
+Y[in13]*lambda_n13_c13
)
dYdt[in13] = (
-Y[in13]*lambda_n13_c13
-rho*Y[ip]*Y[in13]*lambda_n13_pg_o14
+rho*Y[ip]*Y[ic12]*lambda_c12_pg_n13
)
dYdt[in14] = (
-rho*Y[ip]*Y[in14]*lambda_n14_pg_o15
+rho*Y[ip]*Y[ic13]*lambda_c13_pg_n14
+Y[io14]*lambda_o14_n14
)
dYdt[in15] = (
-rho*Y[ip]*Y[in15]*lambda_n15_pa_c12
+Y[io15]*lambda_o15_n15
)
dYdt[io14] = (
-Y[io14]*lambda_o14_n14
+rho*Y[ip]*Y[in13]*lambda_n13_pg_o14
)
dYdt[io15] = (
-Y[io15]*lambda_o15_n15
+rho*Y[ip]*Y[in14]*lambda_n14_pg_o15
)
return dYdt
# + deletable=true editable=true
| examples/pyreaclib-examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import networkx as nx
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
# -
# ## Read graphml
graph = nx.read_graphml('graph.xml')
print(graph.edges())
print(graph.nodes())
print(list(graph.neighbors('n0')), list(graph.neighbors('n1')))
# ### Layered-Tree-Draw for binary trees
# +
def get_borders(block):
borders = {}
for v, (x, y) in block.items():
if y not in borders:
borders[y] = [x, x]
else:
borders[y][0] = min(borders[y][0], x)
borders[y][1] = max(borders[y][1], x)
return borders
def find_shift(bl, br):
shift = -10000
bl, br = get_borders(bl), get_borders(br)
for key in bl:
if key in br:
left = bl[key][1]
right = br[key][0]
shift = max(shift, left - right + 1)
return shift
def layered_tree(v, d):
neighbors = list(graph.neighbors(v))
if neighbors:
if len(neighbors) == 1:
block = layered_tree(neighbors[0], d + 1)
block.update({v: (0, d)})
return block
else:
left, right = neighbors
bleft, bright = layered_tree(left, d + 1), layered_tree(right, d + 1)
shift = find_shift(bleft, bright)
result = bleft
for w, (x, y) in bright.items():
result[w] = (x + shift, y)
result[v] = (0, d)
return result
else:
return {v: (0, d)}
# -
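# A quick standalone check of the shift computation: two single-chain blocks occupying the same column must be separated by one unit. (The two helpers are repeated here so this cell runs on its own.)

```python
def get_borders(block):
    # per-depth [leftmost, rightmost] x coordinates, as above
    borders = {}
    for v, (x, y) in block.items():
        if y not in borders:
            borders[y] = [x, x]
        else:
            borders[y][0] = min(borders[y][0], x)
            borders[y][1] = max(borders[y][1], x)
    return borders

def find_shift(bl, br):
    # smallest horizontal shift of the right block that avoids overlap, as above
    shift = -10000
    bl, br = get_borders(bl), get_borders(br)
    for key in bl:
        if key in br:
            shift = max(shift, bl[key][1] - br[key][0] + 1)
    return shift

left = {'a': (0, 0), 'b': (0, 1)}
right = {'c': (0, 0), 'd': (0, 1)}
print(find_shift(left, right))  # 1
```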
block = layered_tree('n0', 0)
print(block)
# ## drawing result
def draw(v):
plt.annotate(v, xy=(block[v][0], -block[v][1]), size=20)
for w in graph.neighbors(v):
plt.plot([block[v][0], block[w][0]],
[-block[v][1], -block[w][1]], '-o', color='blue')
draw(w)
plt.figure(figsize=(16, 10))
plt.title('Layered-Tree-Draw of binary tree', fontsize=17)
draw('n0')
plt.grid()
plt.savefig('draw.png')
plt.show()
| layered_tree_draw/solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <NAME>. <NAME>. 2019.
#
# # Z-factor
# Source: <NAME> et al. Calculation of Z factors for natural gases using equations of state // Journal of Canadian Petroleum Technology. – 1975. – Vol. 14. – No. 03.
#
# The compressibility (Z) factor accounts for the deviation of a real gas's behaviour from that of an ideal gas.
#
# <NAME> et al. obtained an equation of state for real gas by fitting it to the experimental data of Standing and Katz
#
# $$Z = 1 +
# (A_1 +\frac{A_2}{T_r} +\frac{A_3}{T_{r}^3} +\frac{A_4}{T_{r}^4} +\frac{A_5}{T_{r}^5})\rho_{r} +
# (A_6 +\frac{A_7}{T_{r}} +\frac{A_8}{T_{r}^2})\rho_{r}^2 -
# A_9(\frac{A_7}{T_{r}} +\frac{A_8}{T_{r}^2})\rho_{r}^5 +
# A_{10} (1 + A_{11} \rho_{r}^2) \frac{\rho_{r}^2}{T_{r}^3}\exp(-A_{11}\rho_{r}^{2})$$
# where:
# $$\rho_r = \frac{Z_c P_r}{Z T_r}$$
# with:
# $$Z_c = 0.27$$
# and the coefficients:
# $$A_1 =0.3265$$
# $$A_2 =-1.0700$$
# $$A_3 =-0.5339$$
# $$A_4 =0.01569$$
# $$A_5 =-0.05165$$
# $$A_6 =0.5475$$
# $$A_7 =-0.7361$$
# $$A_8 =0.1844$$
# $$A_{9} =0.1056$$
# $$A_{10} =0.6134$$
# $$A_{11} =0.7210$$
# and the reduced pressure and temperature are:
# $$P_{r} = \frac{P}{P_c}$$
# $$T_{r} = \frac{T}{T_c}$$
# where $P_c$ and $T_c$ are the critical pressure and temperature, respectively
# Import the required modules
import sys
sys.path.append('../')
import uniflocpy.uPVT.PVT_fluids as PVT
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import pandas as pd
import pylab
import uniflocpy.uPVT.PVT_correlations as PVTcorr
# %matplotlib inline
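# Because $\rho_r$ depends on $Z$, the Dranchuk and Abou-Kassem (DAK) equation is implicit in $Z$ and must be solved iteratively. A minimal self-contained sketch of such a solver using the published DAK coefficients (a damped fixed-point iteration; this is an illustration, not the `unf_zfactor_DAK_ppr` implementation used below):

```python
import numpy as np

A1, A2, A3, A4, A5 = 0.3265, -1.0700, -0.5339, 0.01569, -0.05165
A6, A7, A8, A9, A10, A11 = 0.5475, -0.7361, 0.1844, 0.1056, 0.6134, 0.7210

def z_dak(ppr, tpr, tol=1e-10, max_iter=500):
    """Solve the DAK equation for Z by damped fixed-point iteration."""
    z = 1.0
    for _ in range(max_iter):
        rho = 0.27 * ppr / (z * tpr)
        z_new = (1.0
                 + (A1 + A2/tpr + A3/tpr**3 + A4/tpr**4 + A5/tpr**5) * rho
                 + (A6 + A7/tpr + A8/tpr**2) * rho**2
                 - A9 * (A7/tpr + A8/tpr**2) * rho**5
                 + A10 * (1.0 + A11 * rho**2) * (rho**2 / tpr**3) * np.exp(-A11 * rho**2))
        if abs(z_new - z) < tol:
            return z_new
        z = 0.5 * (z + z_new)  # damping helps convergence near tpr ~ 1
    return z

print(round(z_dak(0.001, 2.0), 3))  # ~1.0: the ideal-gas limit at low pressure
```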
# Comparison of the experimental Standing-Katz chart with the Dranchuk and Abou-Kassem equation
# +
def get_z_curve_StandingKatz(tpr):
"""
Функция позволяет считать данные из нужного файла в зависимости от входного tpr и построить график
Допустимые значения tpr = 1.05, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2, 2.2, 2.4, 2.6, 2.8, 3
:param tpr: температура приведенная
:return: данные из графика Cтендинга для этой температуры
"""
data = pd.read_csv('data\Standing-Katz Chart Data\sk_tpr_{}.txt'.format(int(tpr*100)), sep=';')
ppr = np.array(pd.DataFrame(data)['x'])
z = np.array(pd.DataFrame(data)['y'])
return ppr, z
# Compare the calculated curve with the Standing-Katz chart
tpr = 1.05
ppr, z = get_z_curve_StandingKatz(tpr)
z_calc = []
pogr = []
i = 0
for p in ppr:
z_calc.append(PVTcorr.unf_zfactor_DAK_ppr(p, tpr))
pogr.append((z[i]-z_calc[i])/z[i] * 100)
i += 1
pylab.figure(figsize=(15,8))
pylab.subplot(211)
pylab.plot(ppr, z, label='Standing-Katz chart')
pylab.plot(ppr, z_calc, label='calculated')
pylab.title('Comparison of the curves for tpr = {}'.format(tpr))
pylab.grid()
pylab.legend()
pylab.figure(figsize=(15,8))
pylab.subplot(212)
pylab.plot(ppr, pogr, label='error, %')
pylab.grid()
pylab.legend()
pylab.show()
# -
# <NAME>
# plot all of the Standing-Katz curves at once
tpr = [1.05, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2, 2.2, 2.4, 2.6, 2.8, 3]
pylab.figure(figsize=(15,8))
for t in tpr:
ppr_standing, z_standing = get_z_curve_StandingKatz(t)
pylab.plot(ppr_standing, z_standing, label='tpr = {}'.format(t))
pylab.grid()
pylab.title('Standing-Katz curves for various tpr values')
pylab.legend()
pylab.xlabel('ppr')
pylab.ylabel('z')
pylab.show()
| notebooks/uPVT. Z-factor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### RDKit visualization example
from rdkit import Chem
from rdkit.Chem import Draw
import matplotlib.pyplot as plt
ritonavir_smiles_string = 'CC(C)C1=NC(=CS1)CN(C)C(=O)NC(C(C)C)C(=O)NC(CC2=CC=CC=C2)CC(C(CC3=CC=CC=C3)NC(=O)OCC4=CN=CS4)O'
ritonavir_molecule = Chem.MolFromSmiles(ritonavir_smiles_string)
fig = Draw.MolToMPL(ritonavir_molecule)
plt.title('Ritonavir Molecule')
plt.axis('off')
plt.show()
# ### Reading training data from tsv file
# +
import numpy as np
import pandas as pd
# train_p53 = pd.read_csv('sr-p53.smiles',
# sep='\t',
# names=['smiles', 'id', 'target'])
train_p53 = pd.read_csv('SR-p53_train.txt', sep=r'\s+', names=['smiles', 'target'])
# -
train_p53.head()
sml = train_p53.smiles[0]
example_smile = Chem.MolFromSmiles(sml)
fig = Draw.MolToMPL(example_smile)
plt.title('First Training Molecule')
plt.axis('off')
plt.show()
# ## Generating plots for blog post 1
#
# ***Want random examples, long and short examples, ring examples***
# five random smiles
random_smiles = train_p53.sample(n=5)
random_smiles.smiles
for sml in random_smiles.smiles:
print_smile = Chem.MolFromSmiles(sml)
fig = Draw.MolToMPL(print_smile)
plt.title(sml)
plt.axis('off')
# fig.savefig(sml+'_example.png', dpi=300)
plt.show()
# ## Creating directory of training SMILES images as tf tensors
# ***Part 2 of the project is going to be trying to apply a CNN to these images (and then step three will be combining the learned image embeddings with the RNN sequence embeddings from our baseline/something more sophisticated)***
glycine = Chem.MolFromSmiles('C(C(=O)O)N')
print(type(glycine))
# Draw.MolToFile(glycine, 'glycine.png')
train_p53.index
# +
# df of label mappings from indx -> target, smiles
df_labels = pd.DataFrame(columns=['file', 'target', 'smiles'])
for index, row in train_p53.iterrows():
sml = row['smiles']
trg = row['target']
chem = Chem.MolFromSmiles(sml)
# storing pngs with file name of the index in the training set
Draw.MolToFile(chem, f'SMILES_train_imgs/{index}.png')
df_labels = df_labels.append({'file': f'{index}.png', 'target': trg, 'smiles':sml}, ignore_index=True)
df_labels.to_csv('SMILES_train_imgs/label_mapping.csv')
# -
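# A note on the pattern above: `DataFrame.append` is deprecated in recent pandas (and removed in pandas 2.0). An equivalent pattern collects the rows in a list and builds the frame once (shown here with hypothetical toy rows):

```python
import pandas as pd

toy_rows = [(0, "C(C(=O)O)N", 1), (1, "CCO", 0)]  # (index, smiles, target)
rows = [{"file": f"{idx}.png", "target": trg, "smiles": sml}
        for idx, sml, trg in toy_rows]
df_labels = pd.DataFrame(rows, columns=["file", "target", "smiles"])
print(df_labels.shape)  # (2, 3)
```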
# ### ... now doing the same for the other two datasets (test and wholetrain)
test_p53 = pd.read_csv('SR-p53_test.txt', sep=r'\s+', names=['smiles', 'target'])
wholetrain_p53 = pd.read_csv('SR-p53_wholetraining.txt', sep=r'\s+', names=['smiles', 'target'])
test_p53
# ***saving images and label csv from test set***
# +
# df of label mappings from indx -> target, smiles
df_labels_test = pd.DataFrame(columns=['file', 'target', 'smiles'])
for index, row in test_p53.iterrows():
sml = row['smiles']
trg = row['target']
chem = Chem.MolFromSmiles(sml)
# storing pngs with file name of the index in the training set
Draw.MolToFile(chem, f'SMILES_test_imgs/{index}.png')
df_labels_test = df_labels_test.append({'file': f'{index}.png', 'target': trg, 'smiles':sml}, ignore_index=True)
df_labels_test.to_csv('SMILES_test_imgs/label_mapping.csv')
# -
# ***... and whatever the wholetrain set is ...***
# +
# df of label mappings from indx -> target, smiles
df_labels_wholetrain = pd.DataFrame(columns=['file', 'target', 'smiles'])
for index, row in wholetrain_p53.iterrows():
sml = row['smiles']
trg = row['target']
chem = Chem.MolFromSmiles(sml)
# storing pngs with file name of the index in the training set
Draw.MolToFile(chem, f'SMILES_wholetrain_imgs/{index}.png')
df_labels_wholetrain = df_labels_wholetrain.append({'file': f'{index}.png', 'target': trg, 'smiles':sml},
ignore_index=True)
df_labels_wholetrain.to_csv('SMILES_wholetrain_imgs/label_mapping.csv')
# -
| Colab_Notebooks/SMILES_visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Appendix: Tools for Deep Learning
# :label:`chap_appendix_tools`
#
# In this chapter, we will walk you through major tools for deep learning, from introducing Jupyter notebook in :numref:`sec_jupyter` to empowering you training models on Cloud such as Amazon SageMaker in :numref:`sec_sagemaker`, Amazon EC2 in :numref:`sec_aws` and Google Colab in :numref:`sec_colab`. Besides, if you would like to purchase your own GPUs, we also note down some practical suggestions in :numref:`sec_buy_gpu`. If you are interested in being a contributor of this book, you may follow the instructions in :numref:`sec_how_to_contribute`.
#
# :begin_tab:toc
# - [jupyter](jupyter.ipynb)
# - [sagemaker](sagemaker.ipynb)
# - [aws](aws.ipynb)
# - [colab](colab.ipynb)
# - [selecting-servers-gpus](selecting-servers-gpus.ipynb)
# - [contributing](contributing.ipynb)
# - [d2l](d2l.ipynb)
# :end_tab:
#
| d2l-en/mxnet/chapter_appendix-tools-for-deep-learning/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
first = input("Enter first name:")
name = "something"
print("Hello", name)
# variables
age = 100
name = 'Nick'
happy = True
age
age + 10
age = 34
age
type(age)
type(name)
x = 100
type(x)
x = "Mike is awesome"
type(x)
a = 10
b = 10
a+b
a = "10"
b = "10"
type(a)
a+b
type(b)
int(b) + int(a)
age = 34.5
int(age+0.5)
int(56.3)
float(10)
float("10.4")
int( float("10.5") )
int("three")
# enter your age and then let you know how old you will be next year
age = int( input("Enter your age: ") )
next_year = age + 1
print(f"This year you are {age}, but next year you will be {next_year}")
type(age)
dir()
salary = 45000.50
name = "mike"
age = 45
print("His name is %s and his age is %d" % (name,age) )
print("His name is %s and his age is %d" % name, age )
# this is what happens when you forget parentheses
print("%-20s|%20s" % (name,name))
import math
print(math.pi)
print("%.1f" % (math.pi))
print("%.2f" % (math.pi))
print("%.3f" % (math.pi))
print("%.4f" % (math.pi))
print("%.5f" % (math.pi))
print("Hello %s. you are %d years old" % (input("Enter your name"), int(input("Enter your age"))))
# Write a program to input hourly rate and hours worked then output total gross pay. Next input tax rate and output total tax and net pay.
#
# Input -> Process -> Output
#
# Input: hourly_rate, hours_worked, tax_rate
# Output: gross_pay, total_tax, net_pay
#
# algorithm
#
# 1. input hourly_rate
# 2. input hours_worked
# 3. gross_pay = hourly_rate * hours_worked
# 4. output gross_pay
# 5. input tax_rate
# 6. total_tax = tax_rate * gross_pay
# 7. net_pay = gross_pay - total_tax
# 8. print total_tax
# 9. print net_pay
#
hourly_rate = float(input("Enter Hourly Rate: "))
hours_worked = float(input("Hours Worked: "))
gross_pay = hourly_rate * hours_worked
print("Gross Pay $ %.2f " % gross_pay)
tax_rate = float(input("Enter Tax Rate: "))
total_tax = tax_rate * gross_pay
net_pay = gross_pay - total_tax
print("Total Tax $ %.2f " %total_tax)
print("Net Pay $ %.2f " % net_pay)
type (hours_worked)
| Lesson03-VariablesAndTypes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Introduction to GIS
# ## What is Information?
# - Information is data that can be understood, usually a sequence of symbols and/or signals with meaning.
# - Information, in a general sense, is processed, organised and structured data.
# - It provides context for data and enables decision making.
# - Examples: numbers, characters, symbols, musical notes, sounds, colors.
# ## What is Geographic Information?
# - Any information that contains a geographic location; e.g. where is the nearest restaurant?
# - 80% of information has a location component (Field in GIS Data by <NAME>, October 28, 2021)
# ## What is System?
# A collection of linked components that work together for a specific purpose.
# - Example 1: A car system is composed of engine, wheel, and brake components; these components are linked together and work for a specific purpose.
# - Example 2: Human body is composed of organs, tissues, bones and so on. All the components collaborate with each other to support our lives.
# ## What is GIS (Geographic Information System)?
# A GIS is a **computer-based system** to aid in the collection, maintenance, storage, analysis, display, and distribution of spatial data and information(Bolstad, Chapter 1)
# 
# ## What is GIScience?
# - GIScience: the science behind the GISystem (the tool).
# - The theoretical foundation on which GIS is based.
# - Addresses the fundamental issues arising from the use of geographic information.
# - The science is needed to keep the technology moving forward.
#
# - Analogy: GIScience is to GIS as a scientist is to statistical software packages, or as computer science is to computer systems.
# ## Research Questions in GIS
# - How can we represent our perception of geographic space in a computer environment?
# - What methods can we use to dig deeper into geographic information?
# - What are the best ways we can visualize the geospatial information to make it easier to understand?
| book/notebooks/00_IntroductionToGIS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py3p6
# language: python
# name: py3p6
# ---
# This notebook creates a log-normal density field and applies the DESI mask
# %matplotlib inline
#
import nbodykit.lab as nb
from nbodykit.cosmology import Planck15
from scipy.interpolate import UnivariateSpline
import healpy as hp
import numpy as np
# +
#
def pos2radec(pos, obs):
x, y, z= (pos - obs).T
r = (x**2+y**2+z**2) ** 0.5
dec = np.arcsin(z / r)
ra = np.arctan2(y, x)
return ra, dec, r
def r2z(cosmology, r1):
zgrid = np.linspace(0, 9.0, 10000)
rgrid = cosmology.comoving_distance(zgrid)
spl = UnivariateSpline(rgrid, zgrid)
return spl(r1)
class DESIFootprint:
def __init__(self, desi_map):
map = hp.read_map(desi_map)
self.map = map
def f(self, ra, dec):
pix = hp.ang2pix(nside=32, phi=ra, theta=np.pi / 2 - dec)
return self.map[pix]
# -
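As a sanity check on the vectorized `pos2radec` above, here is a scalar, pure-Python version of the same Cartesian-to-(RA, Dec, r) conversion (the sample point is illustrative):

```python
import math

def pos2radec_scalar(pos, obs=(0.0, 0.0, 0.0)):
    """Convert one Cartesian position to (ra, dec, r) relative to an observer."""
    x, y, z = (p - o for p, o in zip(pos, obs))
    r = math.sqrt(x * x + y * y + z * z)
    return math.atan2(y, x), math.asin(z / r), r  # ra, dec in radians

print(pos2radec_scalar((100.0, 0.0, 0.0)))  # (0.0, 0.0, 100.0): a point on the +x axis
```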
ftp = DESIFootprint('/project/projectdirs/desi/mocks/GaussianRandomField/v0.0.4/desi-map.fits')
redshift = 0.0
cosmo = nb.cosmology.Planck15
Plin = nb.cosmology.LinearPower(cosmo, redshift, transfer='CLASS')
b1 = 2.0
cat = nb.LogNormalCatalog(Plin=Plin, nbar=3e-2, BoxSize=1380., Nmesh=256, bias=b1, seed=42)
class Mock:
def __init__(self, pos, cosmo, ftp, obs):
ra, dec, r = pos2radec(pos, obs=obs)
z = r2z(cosmo, r)
f = ftp.f(ra, dec)
self.ra = ra
self.dec = dec
self.z = z
self.f = f
self.r = r
posrandom = np.random.uniform(size=(10*cat['Position'].shape[0],3))*1380
data = Mock(cat['Position'], Planck15, ftp, obs=[690, 690, 690])
random = Mock(posrandom, Planck15, ftp, obs=[690, 690, 690])
import matplotlib.pyplot as plt
import sys
sys.path.append('/global/homes/m/mehdi/github/DESILSS') # pretty old, huh?
from syslss import hpixsum
plt.hist(data.z)
m = (data.r < 690) & (data.f > 0.2)
n = (random.r < 690) & (random.f > 0.2)
datam = hpixsum(256, np.rad2deg(data.ra[m]), np.rad2deg(data.dec[m]))
randomm = hpixsum(256, np.rad2deg(random.ra[n]), np.rad2deg(random.dec[n]))
delta = np.zeros(datam.shape)
mask = randomm != 0.0
sf = datam[mask].sum()/randomm[mask].sum()
delta[mask] = datam[mask]/(randomm[mask]*sf) - 1.0
plt.figure(figsize=(20,25))
plt.subplots_adjust(wspace=0.0, hspace=0.1)
for i,(title, map_i) in enumerate([('data',datam), ('random',randomm), ('delta', delta)]):
map_m = hp.ma(map_i.astype('f8'))
map_m.mask = np.logical_not(mask)
plt.subplot(421+i)
hp.mollview(map_m.filled(), title=title, hold=True, coord=['C','G'])
plt.xlabel(r'$\delta$')
_=plt.hist(delta[mask], bins=80, range=(-1, 2.2), histtype='step')
from syslss import AngularClustering2D
randomm.max()
mock1 = AngularClustering2D(datam.astype('f8'), randomm.astype('f8')/89, hpmap=True, nside=256)
xicl = mock1.run()
xicl.keys()
xicl['attr']
xi = np.copy(xicl['xi'])
cl = np.copy(xicl['cl'])
plt.rc('font', size=18)
plt.rc('axes.spines', right=False, top=False)
plt.figure(figsize=(16,10))
plt.subplot(221)
plt.plot(xi[0], xi[1])
plt.xlim(0.1, 6)
plt.xscale('log')
plt.ylabel(r'$\omega(\theta)$')
plt.xlabel(r'$\theta$[deg]')
plt.subplot(222)
plt.scatter(cl[0], cl[1], 10.0, marker='.', color='b')
plt.ylim([-1.e-5, 0.0006])
plt.xscale('log')
plt.ylabel(r'C$_{l}$')
plt.xlim(xmin=1)
# plt.yscale('log')
| notebooks/make-log-normal-DESI-mask.ipynb |
% -*- coding: utf-8 -*-
% ---
% jupyter:
% jupytext:
% text_representation:
% extension: .m
% format_name: light
% format_version: '1.5'
% jupytext_version: 1.14.4
% kernelspec:
% display_name: Matlab
% language: matlab
% name: matlab
% ---
% ## Read the data
clc;clear;close;
result_TDNN = xlsread('../result/result_TDNN.xlsx');
% ### Column index reference
%
% 1: idx   2: number of hidden layers   3: delay time   4: training-set MSE
% 5: validation-set MSE   6: test-set MSE   7: full-set MSE
% 8: training-set corr.   9: validation-set corr.   10: test-set corr.   11: full-set corr.
% 12-20: region1 .. region9
% ### Statistics over the delay times
min_delay = min(result_TDNN(:, 3));
max_delay = max(result_TDNN(:, 3));
% disp(['Minimum delay time: ' num2str(min_delay)]);
% disp(['Maximum delay time: ' num2str(max_delay)]);
% Average the results for each delay time
%
% Iterate over every delay time present
for idx = min_delay : 20
result_time(idx, :) = mean(result_TDNN(result_TDNN(:, 3) == idx, :), 1);
end
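For reference, the group-by-delay averaging performed by the loop above can be sketched in Python (column layout follows the reference table; the sample rows are made up):

```python
def mean_by_delay(rows, delay_col=2):
    """Average all rows that share the same delay value (0-based delay column)."""
    groups = {}
    for row in rows:
        groups.setdefault(row[delay_col], []).append(row)
    return {d: [sum(col) / len(col) for col in zip(*g)] for d, g in groups.items()}

sample_rows = [
    [1, 2, 3, 0.10],   # idx, hidden layers, delay, MSE
    [2, 2, 3, 0.30],
    [3, 2, 4, 0.20],
]
means = mean_by_delay(sample_rows)
# means[3] averages the two delay-3 rows; means[4] is the single delay-4 row
```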
% ### Full-set MSE vs. delay time
plot(result_time(:, 7))
% ### Test-set MSE vs. delay time
plot(result_time(:, 6))
% ### Full-set correlation coefficient vs. delay time
plot(result_time(:, 11))
% ### Test-set correlation coefficient vs. delay time
plot(result_time(:, 10))
% ### Mean over the nine regions
plot(mean(result_time(:, 12:20)))
% ### Load the clustering image for comparison
img = imread('../img/K-means_Clustering_Results.png');
image(img)
| 6-ResultAnalysis/.ipynb_checkpoints/result_analysis-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ### from https://www.tensorflow.org/versions/r0.9/get_started/basic_usage.html
# ## Building the graph
import tensorflow as tf
# Create a Constant op that produces a 1x2 matrix. The op is
# added as a node to the default graph.
#
# The value returned by the constructor represents the output
# of the Constant op.
matrix1 = tf.constant([[3., 3.]])
# Create another Constant that produces a 2x1 matrix.
matrix2 = tf.constant([[2.],[2.]])
# Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The returned value, 'product', represents the result of the matrix
# multiplication.
product = tf.matmul(matrix1, matrix2)
# ## Launching the graph in a session
# Launch the default graph.
sess = tf.Session()
# To run the matmul op we call the session 'run()' method, passing 'product'
# which represents the output of the matmul op. This indicates to the call
# that we want to get the output of the matmul op back.
#
# All inputs needed by the op are run automatically by the session. They
# typically are run in parallel.
#
# The call 'run(product)' thus causes the execution of three ops in the
# graph: the two constants and matmul.
#
# The output of the op is returned in 'result' as a numpy `ndarray` object.
result = sess.run(product)
print(result)
# Close the Session when we're done to release resources.
sess.close()
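For reference, the 1x2 by 2x1 product that the graph computes can be verified with plain Python, no TensorFlow required:

```python
def matmul(a, b):
    """Multiply two matrices given as nested lists of numbers."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

print(matmul([[3., 3.]], [[2.], [2.]]))  # [[12.0]], matching sess.run(product)
```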
# ## Alternative session launch with "with"
with tf.Session() as sess:
result = sess.run([product])
print(result)
# +
# If you want to use more than GPU, you need to specify this explicitly,
# for which "with" comes in handy:
#with tf.Session() as sess:
# with tf.device("/gpu:1"): # zero-indexed, so this is the second GPU
# matrix1 = tf.constant([[3., 3.]])
# matrix2 = tf.constant([[2.],[2.]])
# product = tf.matmul(matrix1, matrix2)
# #etc.
# +
# "with" also comes in handy for launching the graph in a distributed session, e.g.:
#with tf.Session("http://example.org:2222") as sess:
# -
# ## Interactive Usage
# Great for use within IPython notebooks like this one :)
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.Variable([1., 2.])
a = tf.constant([3., 3.])
# Initialize x with run() method of initializer op.
x.initializer.run()
# Add an op to subtract 'a' from 'x'.
sub = tf.sub(x, a)
# Print result.
print(sub.eval())
sess.close()
# ## Variables
# Create a Variable, which will be initialized to the scalar zero.
state = tf.Variable(0, name="counter")
# Create an Op to add one to 'state'.
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)
# Initialize variables.
init_op = tf.initialize_all_variables()
# Launch the graph and run the ops.
with tf.Session() as sess:
# Run the 'init' op.
sess.run(init_op)
# Print the initial value of 'state'.
print(sess.run(state))
# Run the op that updates 'state' and print 'state'.
for _ in range(3):
sess.run(update)
print(sess.run(state))
# ## Fetches
# +
# To fetch op outputs, execute the graph with a run() call on the Session object
# and pass in the tensors to retrieve.
input1 = tf.constant([3.])
input2 = tf.constant([2.])
input3 = tf.constant([5.])
intermed = tf.add(input2, input3)
mul = tf.mul(input1, intermed)
with tf.Session() as sess:
result = sess.run([mul, intermed])
print(result)
# -
# ## Feeds
# +
# TensorFlow provides a feed mechanism for patching a tensor directly
# into any operation in the graph.
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.mul(input1, input2)
with tf.Session() as sess:
print(sess.run([output], feed_dict={input1:[7.], input2:[2.]}))
# -
| weekly-work/week1/basic_usage.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
class Node:
def __init__(self, info):
        self.info = info
        # Each node can have at most 2 children.
self.right = None
self.left = None
def __str__(self):
return str(self.info)
    def insert(self, data):
        # Standard BST ordering: smaller keys go left, larger keys go right.
        if data < self.info:
            if self.left is None:
                self.left = Node(data)
            else:
                # If the slot is taken, recurse to find the correct position.
                self.left.insert(data)
        elif data > self.info:
            if self.right is None:
                self.right = Node(data)
            else:
                self.right.insert(data)
    def pre_order_traverse(self, root):
        # Pre-order: visit the node first, then its subtrees.
        if root:
            print(root.info)
            self.pre_order_traverse(root.left)
            self.pre_order_traverse(root.right)
def post_order_traverse(self, root):
if root:
self.post_order_traverse(root.left)
self.post_order_traverse(root.right)
print(root.info)
    def in_order_traverse(self, root):
        # In-order: left subtree, node, right subtree.
        if root:
            self.in_order_traverse(root.left)
            print(root.info)
            self.in_order_traverse(root.right)
# -
root = Node(10)
root.insert(12)
root.insert(5)
root.insert(2)
root.insert(14)
# +
# In a binary tree, each node has at most 2 children.
# The root is at level 0.
# The maximum number of nodes a tree can have at level N is 2^N.
# Height is the number of edges on the longest path from a node down to a leaf.
# -
root.pre_order_traverse(root)
root.post_order_traverse(root)
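A minimal, self-contained check of what an in-order traversal should produce on a small search tree (independent of the class above; the dict-based nodes are illustrative):

```python
def in_order(node, visited):
    """In-order traversal of a BST visits keys in ascending order."""
    if node:
        in_order(node.get("left"), visited)
        visited.append(node["info"])
        in_order(node.get("right"), visited)

#        10
#       /  \
#      5    12
tiny_tree = {"info": 10,
             "left": {"info": 5, "left": None, "right": None},
             "right": {"info": 12, "left": None, "right": None}}
visited = []
in_order(tiny_tree, visited)
print(visited)  # [5, 10, 12]
```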
| Python/tree/binary_tree.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Car Features and MSRP Dataset
#
# - `Number of Instances`: **11914**
# - `Number of Attributes`: **16**
# - `Attributes` (descriptions compiled from discussions and Google, because they are not given on the https://www.kaggle.com/CooperUnion/cardataset page):
# - `Make`: Make of a car(BMW, Volkswagen and so on)
# - `Model`: Model of a car
# - `Year`: Year when the car was manufactured
# - `Engine Fuel Type`: Type of fuel the engine needs (diesel and so on)
# - `Engine HP`: Horsepower of engine
# - `Engine Cylinders`: Number of cylinders in engine
# - `Transmission Type`: Type of transmission(automatic or manual)
# - `Driven Wheels`: front, rear, all
# - `Number of Doors`: Number of doors a car has
# - `Market Category`: luxury, crossover and so on
# - `Vehicle Size`: compact, midsize, large
# - `Vehicle Style`: Style of vehicle(sedan, convertible and so on)
# - `Highway MPG`: miles per gallon(MPG) in highway
# - `City MPG`: miles per gallon(MPG) in city
# - `Popularity`: Number of times the car was mentioned in a Twitter stream
# - `MSRP`: Manufacturer's Suggested Retail Price
# # 1. Understand the Business Requirements
#
# **Problem statement:**
#
# `Cars dataset with features including make, model, year, engine, and other properties of the car used to predict its price.`
# # 2. Collecting Data
#
# `which was given in .csv file from Kaggle website(link was attached above).`
# +
#Python Libraries
import pandas as pd #Data Processing and CSV file I/o
import numpy as np #for numeric operations
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
# %matplotlib inline
#to make sure that plots rendered correctly in jupyter notebook
from sklearn.model_selection import train_test_split #split train and test dataset
# -
car_df = pd.read_csv('archive.zip') #reading the .csv file which is present in archive.zip file
car_df.head(8) #top 8 rows
#lowercasing all the column names and replacing space with underscores
car_df.columns = car_df.columns.str.lower().str.replace(' ', '_')
car_df.columns #columns name
car_df.dtypes #data type of every column
#similarly lowercasing all the rows and replacing spaces with underscores
string_columns = list(car_df.dtypes[car_df.dtypes == 'object'].index)
for col in string_columns:
car_df[col] = car_df[col].str.lower().str.replace(' ', '_')
car_df.sample(4)
print(f"The Numbers of Rows and Columns in this data set are: {car_df.shape[0]} rows and {car_df.shape[1]} columns.")
# # 3. Exploratory Data Analysis(EDA)
#Checking for Data type of columns
car_df.info()
car_df.sample(2)
# - `Everything looks fine for Data type of columns`.
#summary statistics for categorical columns
car_df.describe(include=['object'])
car_df['age'] = 2017 - car_df['year']
car_df.head()
car_df.drop(['year'], axis=1, inplace=True) #dropping the year column
car_df.head()
# ## Checking out the Correlation Matrix
#pairplots to get an intuition of potential correlations
sns.pairplot(car_df, diag_kind="kde");
#creating correlation matrix
corr = car_df.corr()
#plotting the correlation matrix
plt.figure(figsize=(16,12))
ax = sns.heatmap(corr, annot=True, square=True, fmt='.3f', linecolor='black')
ax.set_xticklabels(ax.get_xticklabels(), rotation=90)
ax.set_yticklabels(ax.get_yticklabels(), rotation=30)
plt.title('Correlation Heatmap')
plt.show()
corr_matrix = car_df.corr()
corr_matrix['msrp'].sort_values(ascending=False)
# ## Segregating the categorical and numerical features from dataframe
#segregating the categorical from the dataframe
cat_vars = ['object']
cat_df = car_df.select_dtypes(include=cat_vars)
cat_df.head()
#printing missing value and labels in each column
print(cat_df.isnull().sum())
print('-'*25)
for var in list(cat_df.columns):
print(var, 'has', len(cat_df[var].unique()), 'labels')
#segregating the numerical columns from the dataframe
numerics = ['int64', 'float64']
num_df = car_df.select_dtypes(include=numerics)
num_df.head()
#counting the missing values in numerical features
num_df.isnull().sum()
# ### Outlier Analysis
#summary statistics of all the columns
num_df.describe().T
# - `If we compare the mean of each column with the min/max value, we'll notice that engine_hp, highway_mpg, city_mpg and popularity might have outliers as there's a considerable difference between average value and max value`.
#Checking for outliers
plt.figure(figsize=(16,8)) #(width,height)
plt.subplot(2,2,1) #(row, column, plot_number)
#The figure has 1 row, 1 columns, and this plot is the first plot.
sns.boxplot(x='engine_hp', data=num_df);
plt.subplot(2,2,2)
#The figure has 1 row, 2 columns, and this plot is the second plot.
sns.boxplot(x='highway_mpg', data=num_df);
plt.subplot(2,2,3)
#The figure has 2 row, 1 columns, and this plot is the second plot.
sns.boxplot(x='city_mpg', data=num_df);
plt.subplot(2,2,4)
#The figure has 2 row, 2 columns, and this plot is the second plot.
sns.boxplot(x='popularity', data=num_df);
# # 4. Feature Engineering and Scaling
# segregating the target variable
X = car_df.drop(['msrp'], axis=1)
y = car_df['msrp'].copy()
print(f"Columns names of X:")
for i in range(len(X.columns)):
print(' ~>', X.columns[i])
print(f"Shape of X: {X.shape}")
print(f"Shape of y: {y.shape}")
#creating training and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(f"Shape of X_train: {X_train.shape}")
print(f"Shape of y_train: {y_train.shape}")
print('-'*25)
print(f"Shape of X_test: {X_test.shape}")
print(f"Shape of y_test: {y_test.shape}")
# ## Imputing Missing Values
#creating a list numerical and categorical columns
num_cols = list(X_train.select_dtypes(include=numerics).columns)
cat_cols = list(X_train.select_dtypes(include=cat_vars).columns)
# - `Since there are a lot of outliers in the data, we're going to use median strategy to impute rather than mean because median is not sensitive to outliers`
#imputing missing numerical values in both training and testing data
for df in [X_train, X_test]:
for col in num_cols:
col_median=X_train[col].median() # using the training-set median to impute
df[col].fillna(col_median, inplace=True)
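The median-imputation idea in plain Python (a sketch: compute the fill value from the training column, then apply it to any split):

```python
def median(values):
    """Median of a list, ignoring None (missing) entries."""
    present = sorted(v for v in values if v is not None)
    n = len(present)
    mid = n // 2
    return present[mid] if n % 2 else (present[mid - 1] + present[mid]) / 2

def impute(column, fill):
    """Replace missing (None) entries with the fill value."""
    return [fill if v is None else v for v in column]

train_col = [100.0, None, 300.0, 200.0]
fill = median(train_col)  # 200.0; the median is robust to extreme values
print(impute(train_col, fill))  # [100.0, 200.0, 300.0, 200.0]
```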
# - `Using the mode of each categorical column to impute the values`
#imputing missing categorical values in both training and testing data
for df in [X_train, X_test]:
for col in cat_cols:
col_mode=X_train[col].mode()[0] # using the training-set mode to impute
df[col].fillna(col_mode, inplace=True)
#checking missing values in X_train
X_train.isnull().sum()
#checking missing values in X_test
X_test.isnull().sum()
# ## Removing Outliers
#
# - Removing outliers from engine_hp, highway_mpg, city_mpg and popularity columns.
# - ref:
# - [Feature Engineering – How to Detect and Remove Outliers (with Python Code)](https://www.analyticsvidhya.com/blog/2021/05/feature-engineering-how-to-detect-and-remove-outliers-with-python-code/)
# - [Detecting and Treating Outliers | Treating the odd one out!](https://www.analyticsvidhya.com/blog/2021/05/detecting-and-treating-outliers-treating-the-odd-one-out/)
#before removing Outliers
plt.figure(figsize=(16,8))
plt.subplot(2,2,1)
sns.boxplot(x='engine_hp', data=X_train);
plt.subplot(2,2,2)
sns.boxplot(x='highway_mpg', data=X_train);
plt.subplot(2,2,3)
sns.boxplot(x='city_mpg', data=X_train);
plt.subplot(2,2,4)
sns.boxplot(x='popularity', data=X_train);
plt.figure(figsize=(16,8))
plt.subplot(2,2,1)
sns.distplot(X_train['engine_hp'])
plt.subplot(2,2,2)
sns.distplot(X_train['highway_mpg'])
plt.subplot(2,2,3)
sns.distplot(X_train['city_mpg'])
plt.subplot(2,2,4)
sns.distplot(X_train['popularity']);
#the skewed distributions tell us that capping-based filtering is appropriate (the next cell caps at mean +/- 3 std)
# +
def cap_min_max_values(df, var, min_value, max_value):
"""
We cap our outliers data and make the limit, i.e, above a particular value or less than that value,
all the values will be considered as outliers.
"""
return np.where(df[var]>max_value, max_value, np.where(df[var]<min_value, min_value, df[var]))
for df in [X_train, X_test]:
for index in ['engine_hp', 'highway_mpg', 'city_mpg']:
upper_limit = df[index].mean() + 3*df[index].std()
lower_limit = df[index].mean() - 3*df[index].std()
df[index] = cap_min_max_values(df, index, lower_limit, upper_limit)
# -
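The same 3-sigma capping rule sketched in plain Python (note: for simplicity this uses the population standard deviation, whereas pandas `.std()` defaults to the sample formula):

```python
def cap_outliers(values, n_sigmas=3):
    """Clip values to mean +/- n_sigmas standard deviations."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5  # population std
    lo, hi = mean - n_sigmas * std, mean + n_sigmas * std
    return [min(max(v, lo), hi) for v in values]

capped = cap_outliers([10.0] * 99 + [1000.0])  # one extreme value
# the 1000.0 is pulled down to mean + 3*std; the other values are unchanged
```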
#after removing outliers
plt.figure(figsize=(16,8))
plt.subplot(2,2,1)
sns.boxplot(x='engine_hp', data=X_train);
plt.subplot(2,2,2)
sns.boxplot(x='highway_mpg', data=X_train);
plt.subplot(2,2,3)
sns.boxplot(x='city_mpg', data=X_train);
plt.subplot(2,2,4)
sns.boxplot(x='popularity', data=X_train);
plt.figure(figsize=(16,8))
plt.subplot(2,2,1)
sns.distplot(X_train['engine_hp'])
plt.subplot(2,2,2)
sns.distplot(X_train['highway_mpg'])
plt.subplot(2,2,3)
sns.distplot(X_train['city_mpg']);
#much better now
# ## One-Hot Encoding the Categorical columns
#converting dataframes into dictionaries
train_dict = X_train[cat_cols + num_cols].to_dict(orient='records')
test_dict = X_test[cat_cols + num_cols].to_dict(orient='records')
# +
from sklearn.feature_extraction import DictVectorizer
dv = DictVectorizer(sparse=False)
dv.fit(train_dict)  # fit on the training data only; fitting again on test_dict would overwrite it and leak test categories
# -
X_train = dv.transform(train_dict)
X_test = dv.transform(test_dict)
X_train[0]
#feature names of the encoded variables
print(len(dv.get_feature_names()))
cols = dv.get_feature_names()
print(cols)
# - `After one hot encoding we have 860 columns in our training and testing set.`
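What `DictVectorizer` does can be sketched in plain Python: each string value becomes its own 0/1 indicator column named `key=value`, while numeric values pass through (a minimal sketch, not the library's actual implementation):

```python
def one_hot(records):
    """Minimal DictVectorizer-style encoding of a list of dicts."""
    # String values get a 'key=value' indicator column; numeric keys pass through.
    names = sorted({f"{k}={v}" if isinstance(v, str) else k
                    for rec in records for k, v in rec.items()})
    rows = []
    for rec in records:
        row = []
        for name in names:
            if "=" in name:
                k, v = name.split("=", 1)
                row.append(1.0 if rec.get(k) == v else 0.0)
            else:
                row.append(float(rec.get(name, 0.0)))
        rows.append(row)
    return names, rows

names, rows = one_hot([{"make": "bmw", "engine_hp": 300},
                       {"make": "audi", "engine_hp": 200}])
print(names)  # ['engine_hp', 'make=audi', 'make=bmw']
print(rows)   # [[300.0, 0.0, 1.0], [200.0, 1.0, 0.0]]
```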
# ## Feature Scaling
# +
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# -
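Min-max scaling maps each column onto [0, 1] using the training set's minimum and maximum; a one-column sketch:

```python
def minmax_fit(train):
    """Learn the column's min and max from training data only."""
    return min(train), max(train)

def minmax_transform(values, col_min, col_max):
    # The same per-column formula MinMaxScaler applies: (x - min) / (max - min)
    return [(v - col_min) / (col_max - col_min) for v in values]

col_min, col_max = minmax_fit([10.0, 20.0, 30.0])
print(minmax_transform([10.0, 20.0, 30.0], col_min, col_max))  # [0.0, 0.5, 1.0]
print(minmax_transform([40.0], col_min, col_max))              # [1.5]: test data can fall outside [0, 1]
```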
X_train = pd.DataFrame(X_train, columns=cols)
X_test = pd.DataFrame(X_test, columns=cols)
X_train.describe().T
# # 5. Selecting and Training Models
#
# 1. `Linear Regression`
# 2. `Decision Tree`
# 3. `Random Forest`
# 4. `SVM regressor`
print(f"In X_train dataset there are: {X_train.shape[0]} rows and {X_train.shape[1]} columns.")
print(f"In X_test dataset there are: {X_test.shape[0]} rows and {X_test.shape[1]} columns.")
print(f"The shape of y_train is: {y_train.shape}")
print(f"The shape of y_test is: {y_test.shape}")
# ### Mean Squared Error
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score
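RMSE is simply the square root of the mean squared error, which puts the metric back in the target's units; in plain Python:

```python
def rmse(y_true, y_pred):
    """Root mean squared error, in the same units as the target."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

print(rmse([100.0, 200.0], [110.0, 190.0]))  # 10.0
```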
# +
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)
price_predictions = lin_reg.predict(X_train)
lin_mse = mean_squared_error(y_train, price_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
# -
lin_reg_cv_scores = cross_val_score(lin_reg, X_train, y_train, scoring='neg_mean_squared_error', cv = 10)
lin_reg_rmse_scores = np.sqrt(-lin_reg_cv_scores)
lin_reg_rmse_scores.mean()
# +
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(X_train, y_train)
price_predictions = tree_reg.predict(X_train)
tree_mse = mean_squared_error(y_train, price_predictions)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
# -
tree_reg_cv_scores = cross_val_score(tree_reg, X_train, y_train, scoring='neg_mean_squared_error', cv = 10)
tree_reg_rmse_scores = np.sqrt(-tree_reg_cv_scores)
tree_reg_rmse_scores.mean()
# +
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(X_train, y_train)
forest_reg_cv_scores = cross_val_score(forest_reg,
X_train,
y_train,
scoring='neg_mean_squared_error',
cv = 10)
forest_reg_rmse_scores = np.sqrt(-forest_reg_cv_scores)
forest_reg_rmse_scores.mean()
# +
from sklearn.model_selection import GridSearchCV
param_grid = [
{'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},
{'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},
]
forest_reg = RandomForestRegressor()
grid_search = GridSearchCV(forest_reg, param_grid,
scoring='neg_mean_squared_error',
return_train_score=True,
cv=10,
)
grid_search.fit(X_train, y_train)
# +
# feature importances
feature_importances = grid_search.best_estimator_.feature_importances_
feature_importances
# -
final_model = grid_search.best_estimator_
final_predictions = final_model.predict(X_test)
final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
final_rmse
| Car Price Prediction (1st trial).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Each machine learning algorithm has strengths and weaknesses. A weakness of decision trees is that they are prone to overfitting on the training set. A way to mitigate this problem is to constrain how large a tree can grow. Bagged trees try to overcome this weakness by using bootstrapped data to grow multiple deep decision trees. The idea is that many trees protect each other from individual weaknesses.
# 
#
# In this video, I'll share with you how you can build a bagged tree model for regression.
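The bootstrap step behind bagging, drawing each tree's training set with replacement, can be sketched as:

```python
import random

def bootstrap_sample(population, rng):
    """Draw len(population) rows with replacement (the 'bootstrap' in bagging)."""
    return [rng.choice(population) for _ in population]

sample = bootstrap_sample(list(range(10)), random.Random(0))
# Typically some rows repeat and others are left out ("out-of-bag"); each tree
# in the ensemble trains on a different such sample and predictions are averaged.
```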
# ## Import Libraries
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
# Bagged Trees Regressor
from sklearn.ensemble import BaggingRegressor
# -
# ## Load the Dataset
# This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015. The code below loads the dataset. The goal is to predict price based on features like the number of bedrooms and bathrooms.
# +
df = pd.read_csv('data/kc_house_data.csv')
df.head()
# +
# This notebook only selects a couple features for simplicity
# However, I encourage you to play with adding and subtracting more features
features = ['bedrooms','bathrooms','sqft_living','sqft_lot','floors']
X = df.loc[:, features]
y = df.loc[:, 'price'].values
# -
# ## Splitting Data into Training and Test Sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Note: another benefit of bagged trees, like decision trees, is that you don't have to standardize your features, unlike other algorithms such as logistic regression and K-Nearest Neighbors.
# ## Bagged Trees
#
# <b>Step 1:</b> Import the model you want to use
#
# In sklearn, all machine learning models are implemented as Python classes
# +
# This was already imported earlier in the notebook so commenting out
#from sklearn.ensemble import BaggingRegressor
# -
# <b>Step 2:</b> Make an instance of the Model
#
# This is a place where we can tune the hyperparameters of a model.
reg = BaggingRegressor(n_estimators=100,
random_state = 0)
# <b>Step 3:</b> Training the model on the data, storing the information learned from the data
# Model is learning the relationship between X (features like number of bedrooms) and y (price)
reg.fit(X_train, y_train)
# <b>Step 4:</b> Make Predictions
#
# Uses the information the model learned during the model training process
# Returns a NumPy Array
# Predict for One Observation
reg.predict(X_test.iloc[0].values.reshape(1, -1))
# Predict for Multiple Observations at Once
reg.predict(X_test[0:10])
# ## Measuring Model Performance
# Unlike classification models where a common metric is accuracy, regression models use other metrics like R^2, the coefficient of determination to quantify your model's performance. The best possible score is 1.0. A constant model that always predicts the expected value of y, disregarding the input features, would get a R^2 score of 0.0.
score = reg.score(X_test, y_test)
print(score)
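The R^2 score compares the model's squared error against a constant predictor that always outputs the mean; a plain-Python sketch:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

print(r2_score([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0: perfect fit
print(r2_score([1.0, 2.0, 3.0], [2.0, 2.0, 2.0]))  # 0.0: no better than predicting the mean
```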
# ## Tuning n_estimators (Number of Decision Trees)
#
# A tuning parameter for bagged trees is **n_estimators**, which represents the number of trees that should be grown.
# +
# List of values to try for n_estimators:
estimator_range = [1] + list(range(10, 150, 20))
scores = []
for estimator in estimator_range:
reg = BaggingRegressor(n_estimators=estimator, random_state=0)
reg.fit(X_train, y_train)
scores.append(reg.score(X_test, y_test))
# +
plt.figure(figsize = (10,7))
plt.plot(estimator_range, scores);
plt.xlabel('n_estimators', fontsize =20);
plt.ylabel('Score', fontsize = 20);
plt.tick_params(labelsize = 18)
plt.grid()
# -
# Notice that the score stops improving after a certain number of estimators (decision trees). One way to get a better score would be to include more features in the features matrix. So that's it: I encourage you to try building a bagged tree model yourself.
| scikitlearn/Ex_Files_ML_SciKit_Learn/Exercise Files/02_09_Bagged_Trees.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: normal
# language: python
# name: normal
# ---
import pandas as pd
import numpy as np
# +
data = pd.read_excel("X:/Storage/Github/Data/Test_InitiateBuild/0311報名者資料_登錄出席.xls",
                     header=None)  # note: read_excel takes no 'encoding' argument; .xls files carry their own encoding
# print(data.head())
first_row_list = [str(j) for i, j in enumerate(data.iloc[0, :].tolist())]
activity_name = max(first_row_list, key=len)
print(activity_name)
# data = data.iloc[1:, :]
# data.columns = data.iloc[0, :]
# data = data.iloc[1:, :]
# print(data.columns)
# # print(data.head())
# data.to_csv("X:/Storage/Github/Data/Test_InitiateBuild/0311_testing_usage.csv",
# encoding='utf8', index=None, header=None)
| models/model_testing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python [conda env:PythonData]
# language: python
# name: conda-env-PythonData-py
# ---
# # Observations from Py City Schools Data
#
# ## Observation 1
#
# The first observation is that the amount spent per student is not an indicator of how well the students perform in math and reading. The data displayed under School Summary section shows that the top performing schools generally have lower cost per student than the lower performing schools. Specifically, Cabrera High School is the top performer based on Overall Passing Rate but has a lower cost per student than all of the bottom five performers. This is further illustrated by the table in section Scores by School Spending. That data shows that the lowest spending schools are producing the best overall passing scores. Based on this, more money does not produce better students.
#
# ## Observation 2
# The second observation is that school size has a major role in student scores. The top five performing schools have half the number of students than the bottom five performing schools. With a difference on Overall Passing Scores of approximately 22 points, the size of the school plays a significant role in student performance.
#
# ### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import pandas as pd
import numpy as np
# File to Load (Remember to Change These)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"
# Read School and Student Data File and store into Pandas Data Frames
school_data = pd.read_csv(school_data_to_load)
student_data = pd.read_csv(student_data_to_load)
# Combine the data into a single dataset
school_data_complete = pd.merge(student_data, school_data, how="left", on="school_name")
school_data_complete.head(3)
# -
# ## District Summary
# * Calculate the total number of schools
# * Calculate the total number of students
# * Calculate the total budget
# * Calculate the average math score
# * Calculate the average reading score
# * Calculate the overall passing rate (overall average score), i.e. (avg. math score + avg. reading score)/2
# * Calculate the percentage of students with a passing math score (70 or greater)
# * Calculate the percentage of students with a passing reading score (70 or greater)
# * Create a dataframe to hold the above results
# * Optional: give the displayed data cleaner formatting
# Calculate the total number of schools
unique_schools = school_data_complete["school_name"].unique()
unique_school_count = unique_schools.size
# Calculate the total number of students
unique_ids = school_data_complete["Student ID"].unique()
unique_id_count = unique_ids.size
# Calculate the total budget
school_data_complete.sort_values("school_name", inplace = True)
hs_no_dup = school_data_complete.drop_duplicates(subset ="school_name",
keep = "first", inplace = False)
total_budget = hs_no_dup["budget"].sum()
# Calculate the average math score
math_average = school_data_complete["math_score"].mean()
math_average
# Calculate the average reading score
reading_average = school_data_complete["reading_score"].mean()
reading_average
# Calculate the overall passing rate math & english average
overall_average = np.divide((math_average + reading_average), 2)
# +
# * Calculate the percentage of students with a passing math score (70 or greater)
math_filter = school_data_complete["math_score"] >= 70
passing_math = school_data_complete[math_filter]
passing_math_count = passing_math.shape[0]
math_passing_percent = (np.divide(passing_math_count, unique_id_count)) * 100
#Calculate the percentage of students with a passing reading score (70 or greater)
reading_filter = school_data_complete["reading_score"] >= 70
passing_reading = school_data_complete[reading_filter]
passing_reading_count = passing_reading.shape[0]
reading_passing_percent = (np.divide(passing_reading_count, unique_id_count)) * 100
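# As an aside, the same percentages can be computed without the intermediate
# filtered frames: the mean of a boolean Series is exactly the fraction of `True`
# values. A minimal sketch on toy scores (not the school data above):

```python
import pandas as pd

# Toy stand-in scores, just to show the idiom
scores = pd.DataFrame({"math_score": [90, 65, 70, 80],
                       "reading_score": [60, 95, 70, 85]})
pct_math = (scores["math_score"] >= 70).mean() * 100     # 3 of 4 pass -> 75.0
pct_reading = (scores["reading_score"] >= 70).mean() * 100
```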
# +
# Create the data frame with a dictionary and display the District Summary
summary = [{"Total Schools": unique_school_count,
"Total Students": unique_id_count,
"Total Budget": total_budget,
"Average Math Score": math_average,
"Students Above 70% (Math)": math_passing_percent,
"Average Reading Score": reading_average,
"Students Above 70% (Reading)": reading_passing_percent,
"Overall Average": overall_average}]
summary_df = pd.DataFrame(summary)
# Format the score averages to two decimal places and the passing rates as percentages.
for col in ["Average Math Score", "Average Reading Score", "Overall Average"]:
    summary_df[col] = summary_df[col].map("{:.2f}".format)
for col in ["Students Above 70% (Math)", "Students Above 70% (Reading)"]:
    summary_df[col] = summary_df[col].map("{:.2f}%".format)
summary_df["Total Budget"] = summary_df["Total Budget"].map("${:,.0f}".format)
summary_df["Total Students"] = summary_df["Total Students"].map("{:,.0f}".format)
# Don't show the numerical index.
summary_df.set_index(keys="Total Schools", inplace=True)
summary_df
# -
# ## School Summary
# * Create an overview table that summarizes key metrics about each school, including:
# * School Name
# * School Type
# * Total Students
# * Total School Budget
# * Per Student Budget
# * Average Math Score
# * Average Reading Score
# * % Passing Math
# * % Passing Reading
# * Overall Passing Rate (Average of the above two)
#
# * Create a dataframe to hold the above results
# +
# Create the dataframe for the summary by school. Then sort the table to get top and bottom performers.
school_summary_headings = ["School Name", "School Type", "Total Students", "Total School Budget",
"Per Student Budget", "Average Math Score", "Average Reading Score",
"% Passing Math", "% Passing Reading", "% Overall Passing Rate"]
school_summary_df = pd.DataFrame()
# Loop through the schools and extract the required information.
# unique_schools was set in one of the frames above.
for school_name in unique_schools:
    # Create the filter and get the data for one school.
    hs_filter = school_data_complete["school_name"] == school_name
    school_x_df = school_data_complete.loc[hs_filter]
    # All the type cells have the same value, so use the first one.
    school_type = school_x_df["type"].values[0]
    school_students = school_x_df["size"].values[0]
    school_budget = school_x_df["budget"].values[0]
    # Calculate the cost per student.
    school_budget_per_student = np.divide(school_budget, school_students)
    # Get the math and reading averages.
    school_ave_math = school_x_df["math_score"].mean()
    school_ave_reading = school_x_df["reading_score"].mean()
    # Filter the math scores >= 70, count them, and calculate the percent passing math.
    math_filter = school_x_df["math_score"] >= 70
    passing_math = school_x_df[math_filter].shape[0]
    passing_math_percent = np.divide(passing_math, school_students) * 100
    # Filter the reading scores >= 70, count them, and calculate the percent passing reading.
    reading_filter = school_x_df["reading_score"] >= 70
    passing_reading = school_x_df[reading_filter].shape[0]
    passing_reading_percent = np.divide(passing_reading, school_students) * 100
    total_passing_percent = np.divide(passing_math_percent + passing_reading_percent, 2)
    # Make a list of a list for the DataFrame() constructor.
    school_summary_data = [[school_name, school_type, school_students, school_budget,
                            school_budget_per_student, school_ave_math, school_ave_reading,
                            passing_math_percent, passing_reading_percent, total_passing_percent]]
    # Create the DataFrame and append it to school_summary_df.
    a_school_df = pd.DataFrame(school_summary_data, columns=school_summary_headings)
    if not school_summary_df.empty:
        school_summary_df = school_summary_df.append(a_school_df, ignore_index=True)
    else:
        school_summary_df = a_school_df
# -
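# The per-school loop above can also be collapsed into a single `groupby` with
# named aggregation. A sketch on toy data (hypothetical stand-in columns for
# `school_data_complete`), not the assignment's required approach:

```python
import pandas as pd

# Toy stand-in for school_data_complete
df = pd.DataFrame({
    "school_name": ["A", "A", "B", "B"],
    "math_score": [80, 60, 90, 75],
    "reading_score": [70, 65, 85, 95],
})
by_school = df.groupby("school_name").agg(
    avg_math=("math_score", "mean"),
    pct_math=("math_score", lambda s: (s >= 70).mean() * 100),
)
# by_school.loc["A", "pct_math"] == 50.0 (one of A's two students passes math)
```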
# ## Top Performing Schools (By Passing Rate)
# * Sort and display the top five schools in overall passing rate
# Set the index, sort to get the top five, and display table.
school_summary_df.set_index(keys="School Name", inplace=True)
school_summary_df.sort_values("% Overall Passing Rate", axis = 0, ascending = False, inplace = True)
school_summary_df.head(5)
# ## Bottom Performing Schools (By Passing Rate)
# * Sort and display the five worst-performing schools
# The index was already set in the Top Performing cell above.
# Sort to get the bottom five, and display table.
school_summary_df.sort_values("% Overall Passing Rate", axis = 0, ascending = True, inplace = True)
school_summary_df.head(5)
# ## Math and Reading Scores by Grade
# * Create a table that lists the average Reading Score for students of each grade level (9th, 10th, 11th, 12th) at each school.
#
# * Create a pandas series for each grade. Hint: use a conditional statement.
#
# * Group each series by school
#
# * Combine the series into a dataframe
#
# * Optional: give the displayed data cleaner formatting
# This function is used for both Math Scores by Grade and Reading Scores by Grade.
# school_list - the list of unique_schools, unique_schools is set in one of the top frames.
# subject_score - the heading for the desired subject. ("math_score" or "reading_score")
# returns - The completed dataframe with all the schools included.
def scoreBySchool(school_list, subject_score):
    subj_by_school_df = pd.DataFrame()  # create a DataFrame to keep the results.
    for aSchool in school_list:
        school_name = aSchool
        # Group by school and grade.
        group_series = school_data_complete.groupby(["school_name", "grade"])[subject_score].mean()
        group_df = pd.DataFrame(group_series)
        grade_series = group_df.loc[school_name, subject_score]
        # grade_series makes rows and not columns, so it needs to be transposed to be useful.
        grade_df = (pd.DataFrame(grade_series)).T
        grade_df["School Name"] = school_name
        # We have all the information we need. Put it in a data frame.
        if not subj_by_school_df.empty:
            subj_by_school_df = subj_by_school_df.append(grade_df, ignore_index=True)
        else:
            subj_by_school_df = grade_df
    # Set the index and sort.
    subj_by_school_df.set_index(keys="School Name", inplace=True)
    subj_by_school_df.sort_values("School Name", axis=0, ascending=True, inplace=True)
    # Some unwanted axis names got dragged along. Clear them.
    subj_by_school_df.columns.name = None
    subj_by_school_df = subj_by_school_df.rename_axis(None)
    # The default alphabetical ordering lists 9th grade last. Reorder the columns.
    headings = list(subj_by_school_df)
    ninth = headings.pop(3)
    headings.insert(0, ninth)
    # Get the columns reordered.
    subj_by_school_df = subj_by_school_df.loc[:, headings]
    # Format the columns (average scores, not percentages) to two decimal places.
    for grade in headings:
        subj_by_school_df[grade] = subj_by_school_df[grade].map("{:.2f}".format)
    subj_by_school_df.index.name = "School Name"
    return subj_by_school_df
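# The same school-by-grade table can be produced in one call with `pivot_table`.
# A sketch on toy data (hypothetical stand-in for `school_data_complete`):

```python
import pandas as pd

# Toy stand-in for school_data_complete
df = pd.DataFrame({
    "school_name": ["A", "A", "B", "B"],
    "grade": ["9th", "10th", "9th", "10th"],
    "math_score": [80, 60, 90, 70],
})
table = df.pivot_table(index="school_name", columns="grade",
                       values="math_score", aggfunc="mean")
table = table[["9th", "10th"]]  # put 9th grade first
```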
# ## Math Score by Grade
# Call the function scoreBySchool(,) passing in the school list and subject heading for math.
scoreBySchool(unique_schools, "math_score")
# ## Reading Score by Grade
# * Perform the same operations as above for reading scores
# Call the function scoreBySchool(,) passing in the school list and subject heading for reading.
scoreBySchool(unique_schools, "reading_score")
# ## Scores by School Spending
# * Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following:
# * Average Math Score
# * Average Reading Score
# * % Passing Math
# * % Passing Reading
# * Overall Passing Rate (Average of the above two)
# This function is used by Scores by School Spending, Scores by School Size, and Scores by School Type.
# grouped_df - a data frame that has already been grouped.
# group_names - The ranges used during the binning process.
# index_by - Used at the end to set the index column in the results data frame.
# heading_list - The list of column heading names for the results data frames.
# returns - The completed dataframe with all groups included.
def summaryByGroup(grouped_df, group_names, index_by, heading_list):
    results_df = pd.DataFrame()
    # Look at each group and extract the required information.
    for a_group in group_names:
        one_grp = grouped_df.get_group(a_group)
        one_grp_math_mean = one_grp["math_score"].mean()
        one_grp_reading_mean = one_grp["reading_score"].mean()
        # Get the number of students in the group.
        one_grp_count = len(one_grp["Student ID"].unique())
        # Filter and calculate the number and percentage of passing students for math.
        math_filter = one_grp["math_score"] >= 70
        passing_math = one_grp[math_filter].shape[0]
        passing_math_percent = np.divide(passing_math, one_grp_count) * 100
        # Filter and calculate the number and percentage of passing students for reading.
        reading_filter = one_grp["reading_score"] >= 70
        passing_reading = one_grp[reading_filter].shape[0]
        passing_reading_percent = np.divide(passing_reading, one_grp_count) * 100
        # Combined math and reading passing percentage.
        overall_passing = np.divide((passing_math_percent + passing_reading_percent), 2)
        # Make a list of a list for the DataFrame constructor.
        one_data = [[a_group, one_grp_math_mean, one_grp_reading_mean, passing_math_percent,
                     passing_reading_percent, overall_passing]]
        # Create a data frame for the one group.
        one_df = pd.DataFrame(one_data, columns=heading_list)
        # Append each group to the finished results table.
        if not results_df.empty:
            results_df = results_df.append(one_df, ignore_index=True)
        else:
            results_df = one_df
    # Set the index and return the completed results table.
    results_df.set_index(keys=index_by, inplace=True)
    return results_df
# Sample bins. Feel free to create your own bins.
spending_bins = [0, 585, 615, 645, 675]
group_names = ["<$585", "$585-615", "$615-645", "$645-675"]
# +
# We are adding a new column, so let's preserve the original data.
df = pd.DataFrame(school_data_complete)
# Add the column with the per-student spending.
df["PerStudent"] = (df["budget"] / df["size"])
# Add a column which is the result of the binning operation.
df["PerStudentRange"] = pd.cut(df["PerStudent"], spending_bins, labels=group_names)
# Group them by the binning results.
grp = df.groupby("PerStudentRange")
# Create the final data frame column headings.
heading_list = ["Spending Ranges (Per Student)", "Average Math Score",
"Average Reading Score", "% Passing Math",
"% Passing Reading", "% Overall Passing Rate"]
# -
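# For intuition, `pd.cut` assigns each value to the label of the bin it falls in
# (bins are open on the left, closed on the right). A tiny sketch with sample
# per-student amounts:

```python
import pandas as pd

spend = pd.Series([500, 600, 630, 660])
labels = ["<$585", "$585-615", "$615-645", "$645-675"]
binned = pd.cut(spend, [0, 585, 615, 645, 675], labels=labels).tolist()
# binned == ['<$585', '$585-615', '$615-645', '$645-675']
```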
# Call the function summaryByGroup(,,,) to get the resulting df for spending.
summaryByGroup(grp, group_names, "Spending Ranges (Per Student)", heading_list)
# ## Scores by School Size
# * Perform the same operations as above, based on school size.
# Sample bins. Feel free to create your own bins.
size_bins = [0, 1000, 2000, 5000]
group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
# +
# We are adding a new column, so let's preserve the original data.
df = pd.DataFrame(school_data_complete)
# Add the column with the binning results.
df["SizeRange"] = pd.cut(df["size"], size_bins, labels=group_names)
# Group them by the binning column.
grp = df.groupby("SizeRange")
# The column headings for the results df.
heading_list = ["School Size", "Average Math Score",
"Average Reading Score", "% Passing Math",
"% Passing Reading", "% Overall Passing Rate"]
# -
# Call the function summaryByGroup(,,,) to get the resulting df for school size.
summaryByGroup(grp, group_names, "School Size", heading_list)
# ## Scores by School Type
# * Perform the same operations as above, based on school type.
# School type is already categorical, so no binning is needed -- just the two group names.
group_names = ["Charter", "District"]
# The column headings for the results df.
heading_list = ["School Type", "Average Math Score",
"Average Reading Score", "% Passing Math",
"% Passing Reading", "% Overall Passing Rate"]
# Group them by type.
grp = school_data_complete.groupby("type")
# Call the function summaryByGroup(,,,) to get the resulting df for school type.
summaryByGroup(grp, group_names, "School Type", heading_list)
| PyCitySchools/PyCitySchools_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from data import DataLoader
from model import model_fn
from config import args
import tensorflow as tf
import numpy as np
import json
import pprint
def main():
    tf.logging.set_verbosity(tf.logging.INFO)
    print(json.dumps(args, indent=4))
    train_dl = DataLoader(
        path='../temp/qa5_three-arg-relations_train.txt',
        is_training=True)
    test_dl = DataLoader(
        path='../temp/qa5_three-arg-relations_test.txt',
        is_training=False, vocab=train_dl.vocab, params=train_dl.params)
    model = tf.estimator.Estimator(model_fn, params=train_dl.params)
    model.train(train_dl.input_fn())
    gen = model.predict(test_dl.input_fn())
    preds = np.concatenate(list(gen))
    preds = np.reshape(preds, [test_dl.data['size'], 2])
    print('Testing Accuracy:', (test_dl.data['val']['answers'][:, 0] == preds[:, 0]).mean())
    demo(test_dl.demo, test_dl.vocab['idx2word'], preds)
def demo(demo, idx2word, ids, demo_idx=3):
    demo_i, demo_q, demo_a = demo
    print()
    pprint.pprint(demo_i[demo_idx])
    print()
    print('Question:', demo_q[demo_idx])
    print()
    print('Prediction:', [idx2word[id] for id in ids[demo_idx]])

if __name__ == '__main__':
    main()
# -
| nlp-models/tensorflow/dmn/train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.5 64-bit (''base'': conda)'
# language: python
# name: python3
# ---
import optuna
# ## Key Points
# - The time needed for a search grows exponentially with the number of parameters being searched.
# - Search only over the parameters that matter.
# ## Pythonic Search Space
# A search space can be defined and explored as follows.
def objective(trial):
    # Categorical parameter
    optimizer = trial.suggest_categorical("optimizer", ["MomentumSGD", "Adam"])
    # Integer parameter
    num_layers = trial.suggest_int("num_layers", 1, 3)
    # Integer parameter (log)
    num_channels = trial.suggest_int("num_channels", 32, 512, log=True)
    # Integer parameter (discretized)
    num_units = trial.suggest_int("num_units", 10, 100, step=5)
    # Floating point parameter
    dropout_rate = trial.suggest_float("dropout_rate", 0.0, 1.0)
    # Floating point parameter (log)
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
    # Floating point parameter (discretized)
    drop_path_rate = trial.suggest_float("drop_path_rate", 0.0, 1.0, step=0.1)
# ## Defining Parameter Spaces
# +
# An if statement can be used to define the search space conditionally.
# This is known as using branches.
import sklearn.ensemble
import sklearn.svm
def objective(trial):
    classifier_name = trial.suggest_categorical('classifier', ['SVC', 'RandomForest'])  # choose between SVC and RandomForest
    if classifier_name == 'SVC':  # for SVC, sample its parameters inside the if branch
        svc_c = trial.suggest_float('svc_c', 1e-10, 1e10, log=True)  # sampled from 1e-10 to 1e10
        classifier_obj = sklearn.svm.SVC(C=svc_c)
    else:
        rf_max_depth = trial.suggest_int('rf_max_depth', 2, 32, log=True)
        classifier_obj = sklearn.ensemble.RandomForestClassifier(max_depth=rf_max_depth)
# +
import torch
import torch.nn as nn
def create_model(trial, in_size):
    '''
    A for loop can be used to give each layer its own parameters.
    '''
    n_layers = trial.suggest_int('n_layers', 1, 3)
    layers = []
    for i in range(n_layers):
        n_units = trial.suggest_int('n_units_{}'.format(i), 4, 128, log=True)
        layers.append(nn.Linear(in_size, n_units))
        layers.append(nn.ReLU())
        in_size = n_units
    layers.append(nn.Linear(in_size, 10))
    return nn.Sequential(*layers)
| optuna_tutorial/2_basic_search_space.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import sys
sys.path.append('../../code/')
import os
import json
from datetime import datetime
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as stats
import igraph as ig
from load_data import load_citation_network_igraph, case_info
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
data_dir = '../../data/'
court_name = 'scotus'
# +
# this will be a little slow the first time you run it
G = load_citation_network_igraph(data_dir, court_name)
print 'loaded %s network with %d cases and %d edges' % (court_name, len(G.vs), len(G.es))
# -
# # randomly sample edges that are not there
# +
desired_num_samples = 1000
all_indices = range(len(G.vs))
nonexistant_edge_list = []
start_time = time.time()
while len(nonexistant_edge_list) < desired_num_samples:
    # randomly select a pair of vertices
    rand_pair = np.random.choice(all_indices, size=2, replace=False)
    # check if there is currently an edge between the two vertices
    edge_check = G.es.select(_between=([rand_pair[0]], [rand_pair[1]]))
    # if the edge does not exist, add it to the list
    if len(edge_check) == 0:
        # order the vertices by time
        if G.vs[rand_pair[0]]['year'] <= G.vs[rand_pair[1]]['year']:
            ing_id = rand_pair[1]
            ed_id = rand_pair[0]
        else:
            ing_id = rand_pair[0]
            ed_id = rand_pair[1]
        nonexistant_edge_list.append((ing_id, ed_id))
total_runtime = time.time() - start_time
print 'mean time per sample %1.5f' % (total_runtime/desired_num_samples)
print 'len nonexistant_edge_list %d' % len(nonexistant_edge_list)
# -
print 'estimated time to get to 500000 samples: %1.5f min' % (((total_runtime/desired_num_samples) * 500000)/60)
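# One way to speed this up (a sketch with hypothetical stand-ins for `G.vs`/`G.es`,
# not the igraph objects above): precompute the set of existing edges once, so each
# candidate pair is checked in O(1) instead of calling `G.es.select()` in the loop.

```python
import random

edges = [(0, 1), (1, 2), (2, 3)]  # stand-in for (source, target) pairs from G.es
n_vertices = 5
existing = set(edges) | set((b, a) for a, b in edges)  # both orientations

samples = set()
while len(samples) < 3:
    pair = tuple(random.sample(range(n_vertices), 2))
    if pair not in existing:  # O(1) membership test replaces G.es.select()
        samples.add(pair)
```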
| explore/Iain/sample_complement.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Install Required Packages
# +
# install instaloader package
# !pip install instaloader
# +
# upgrade
# !pip install --upgrade instaloader
# -
# ### Adjust Working Directory
# +
# Import the os module
import os
# Get the current working directory
cwd = os.getcwd()
# Print the current working directory
print("Current working directory: {0}".format(cwd))
# Print the type of the returned object
print("os.getcwd() returns an object of type: {0}".format(type(cwd)))
# Change the current working directory
os.chdir('/notebooks/images/guinness/bad')
# Print the current working directory
print("Current working directory: {0}".format(os.getcwd()))
# -
# ### Import Modules
# +
from datetime import datetime
from itertools import dropwhile, takewhile
import instaloader
# -
# Define the Instaloader Method
L = instaloader.Instaloader()
# +
# get users posts
posts = instaloader.Profile.from_username(L.context, 'shitlondonguinness').get_posts()
# -
SINCE = datetime(2021, 1, 1)
UNTIL = datetime(2021, 6, 30)
# enter own username
username=input()
# enter password
L.interactive_login(username) # Asks for password in the terminal
# +
# for post in takewhile(lambda p: p.date > UNTIL, dropwhile(lambda p: p.date > SINCE, posts)):
# print(post.date)
# L.download_post(post, target=f"{index}")
# -
from itertools import islice
# +
# download 150 posts
limit = 150
# islice stops the iteration after `limit` posts instead of walking the whole profile
for index, post in enumerate(islice(posts, limit), 1):
    print(post.date_utc)
    L.download_post(post, target=f"{index}")
# -
# ### Identify, move and rename all image files
import shutil
# +
# Change the working directory to make manipulating the files easier
# Get the current working directory
cwd = os.getcwd()
# Print the current working directory
print("Current working directory: {0}".format(cwd))
# Print the type of the returned object
print("os.getcwd() returns an object of type: {0}".format(type(cwd)))
# Change the current working directory
os.chdir('/notebooks/images/guinness')
# Print the current working directory
print("Current working directory: {0}".format(os.getcwd()))
# +
def moveimages(loc):
    # iterate through the numbered source folders and move any images into the main image folder
    for count, filename in enumerate(os.listdir(loc)):
        try:
            source = loc + str(count) + '/'
            dest = loc
            files = os.listdir(source)
            for f in files:
                if os.path.splitext(f)[1] in (".jpg", ".gif", ".png"):
                    shutil.move(source + f, dest)
        except OSError as e:
            print("Error: %s - %s." % (e.filename, e.strerror))
        # Delete the existing folder once the images have been removed
        try:
            shutil.rmtree(loc + str(count) + '/')
        except OSError as e:
            print("Error: %s - %s." % (e.filename, e.strerror))
# +
# Function to rename multiple files
def main(loc):
    # rename the image files numerically
    for count, filename in enumerate(os.listdir(loc)):
        dst = str(count) + ".jpg"
        src = loc + filename
        dst = loc + dst
        # rename() renames each file in place
        os.rename(src, dst)
# +
# run the functions
# These were run twice, once for the 'good' and once for the 'bad'
moveimages('good/')
main('good/')
# -
# ### Delete the folders, if necessary
# +
# os.chdir('/notebooks/images/guinness')
# shutil.rmtree('bad/')
# shutil.rmtree('good/')
# -
| GuinnessImageCollector (1).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
# One of the most common tasks for pandas and python is to automate the process to aggregate data from multiple spreadsheets and files.
#
# This article will walk through the basic flow required to parse multiple excel files, combine some data, clean it up and analyze it.
#
# Please refer to [this post](http://pbpython.com/excel-file-combine.html) for the full post.
# # Collecting the Data
# Import pandas and numpy
import pandas as pd
import numpy as np
# Let's take a look at the files in our input directory, using the convenient shell commands in ipython.
# !ls ../data
# There are a lot of files, but we only want to look at the sales .xlsx files.
# !ls ../data/sales-*-2014.xlsx
# Use the python glob module to easily list out the files we need
import glob
glob.glob("../data/sales-*-2014.xlsx")
# This gives us what we need, let's import each of our files and combine them into one file.
#
# Panda's concat and append can do this for us. I'm going to use append in this example.
#
# The code snippet below will initialize a blank DataFrame then append all of the individual files into the all_data DataFrame.
all_data = pd.DataFrame()
for f in glob.glob("../data/sales-*-2014.xlsx"):
    df = pd.read_excel(f)
    print(df.shape)
    all_data = all_data.append(df, ignore_index=True)
all_data.shape
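# Note that `DataFrame.append` was removed in pandas 2.0. An equivalent sketch for
# newer versions collects the frames in a list and concatenates once (which is also
# faster than appending inside the loop):

```python
import glob
import pandas as pd

frames = [pd.read_excel(f) for f in glob.glob("../data/sales-*-2014.xlsx")]
# Guard against an empty match, since pd.concat([]) raises an error
all_data = pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()
```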
# Now we have all the data in our all_data DataFrame. You can use describe to look at it and make sure your data looks good.
all_data.describe()
# A lot of this data may not make much sense for this data set, but I'm most interested in the count row to make sure the number of data elements makes sense.
all_data.head(16)
all_data.tail()
# It is not critical in this example but the best practice is to convert the date column to a date time object.
all_data['date'] = pd.to_datetime(all_data['date'])
# # Combining Data
# Now that we have all of the data into one DataFrame, we can do any manipulations the DataFrame supports. In this case, the next thing we want to do is read in another file that contains the customer status by account. You can think of this as a company's customer segmentation strategy or some other mechanism for identifying their customers.
#
# First, we read in the data.
status = pd.read_excel("../data/customer-status.xlsx")
status
status.shape
# We want to merge this data with our concatenated data set of sales. We use panda's merge function and tell it to do a left join which is similar to Excel's vlookup function.
all_data_st = pd.merge(all_data, status, how='left')
all_data_st.head(20)
# This looks pretty good but let's look at a specific account.
all_data_st[all_data_st["account number"]==737550].head()
# This account number was not in our status file, so we have a bunch of NaN's. We can decide how we want to handle this situation. For this specific case, let's label all missing accounts as bronze. Use the fillna function to easily accomplish this on the status column.
all_data_st['status'].fillna('bronze',inplace=True)
all_data_st
# Check the data just to make sure we're all good.
all_data_st[all_data_st["account number"]==737550].head()
# Now we have all of the data along with the status column filled in. We can do our normal data manipulations using the full suite of pandas capability.
# # Using Categories
# One of the relatively new functions in pandas is support for categorical data. From the pandas, documentation -
#
# "Categoricals are a pandas data type, which correspond to categorical variables in statistics: a variable, which can take on only a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social class, blood types, country affiliations, observation time or ratings via Likert scales."
#
# For our purposes, the status field is a good candidate for a category type.
#
# You must make sure you have a recent version of pandas installed for this example to work.
pd.__version__
# First, we typecast it to a category using astype.
all_data_st["status"] = all_data_st["status"].astype("category")
# This doesn't immediately appear to change anything yet.
all_data_st.head()
# But you can see that it is a new data type.
all_data_st.dtypes
# Categories get more interesting when you assign order to the categories. Right now, if we call sort on the column, it will sort alphabetically.
all_data_st.sort_values(by=["status"]).head()
# We use set_categories to tell it the order we want to use for this category object. In this case, we use the Olympic medal ordering.
all_data_st["status"].cat.set_categories([ "gold","silver","bronze"],inplace=True)
# Now, we can sort it so that gold shows on top.
all_data_st.sort_values(by=["status"]).head()
all_data_st["status"].describe()
# For instance, if you want to take a quick look at how your top tier customers are performing compared to the bottom, use groupby to get the average of the values.
all_data_st.groupby(["status"])[["quantity", "unit price", "ext price"]].mean()
# Of course, you can run multiple aggregation functions on the data to get really useful information
all_data_st.groupby(["status"])[["quantity", "unit price", "ext price"]].agg([np.sum, np.mean, np.std])
# So, what does this tell you? Well, the data is completely random but my first observation is that we sell more units to our bronze customers than gold. Even when you look at the total dollar value associated with bronze vs. gold, it looks backwards.
#
# Maybe we should look at how many bronze customers we have and see what is going on.
#
# What I plan to do is filter out the unique accounts and see how many gold, silver and bronze customers there are.
#
# I'm purposely stringing a lot of commands together which is not necessarily best practice but does show how powerful pandas can be. Feel free to review my previous articles and play with this command yourself to understand what all these commands mean.
all_data_st.drop_duplicates(subset=["account number","name"]).iloc[:,[0,1,7]].groupby(["status"])["name"].count()
# Ok. This makes a little more sense. We see that we have 9 bronze customers and only 4 gold customers. That is probably why the volumes are so skewed towards our bronze customers.
| notebooks/01_Combining-Multiple-Excel-File-with-Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Acknowledgements
#
#
# This site was produced with funding from the [Department of Information Engineering and Computer Science (DISI)](https://www.disi.unitn.it) of the University of Trento
#
#
#
# **We also thank:**
#
# Professor [<NAME>](http://cricca.disi.unitn.it/montresor/) for making possible the creation of the material in the 2017/18 seminars
#
#
#
# The [Department of Information Engineering and Computer Science (DISI)](https://www.disi.unitn.it) of the University of Trento, [Hub Innovazione Trentino HIT](https://www.trentinoinnovation.eu) and the STAAR network for making possible the ICT Days Summer Camp challenges for high-school students ([2018 edition](https://webmagazine.unitn.it/evento/disi/39864/ict-days-summer-camp) and [2019 edition](https://webmagazine.unitn.it/evento/disi/66093/ict-days-summer-camp)), in which the site's material was used to tackle several [Data Science challenges](challenges.ipynb)
#   [dati.trentino.it](http://dati.trentino.it) for the data to analyze, and the assistance of <NAME> and <NAME>, who also promoted the [Turismo 3.0 2018](https://it.softpython.org/challenges/turismo-3.0/turismo-3.0-challenge.html) and [RiParco da Trento 2019](challenges/riparco-da-trento/riparco-da-trento-challenge.ipynb) challenges
#   [SpazioDati](http://spaziodati.eu), represented by <NAME> and <NAME>, for promoting the [Business Oriented 2018](challenges/business-oriented/business-oriented-challenge.ipynb) and [A Prova di Hacker 2019](challenges/a-prova-di-hacker/a-prova-di-hacker-challenge.ipynb) challenges, providing company data through the [Atoka.io](http://atoka.io) service along with access to the semantic services of [Dandelion](http://dandelion.eu)
#   [U-Hopper](https://u-hopper.com/) / [Thinkin](https://thinkin.io/), represented by <NAME>, <NAME> and <NAME>, for promoting the [Mondiali Russia 2018](challenges/mondiali-russia-2018/mondiali-russia-2018-challenge.ipynb) and [Real Time Transport 2019](challenges/real-time-transport/real-time-transport-challenge.ipynb) challenges
#
#
#
# The [Agenzia del Lavoro - Provincia Autonoma di Trento](https://www.agenzialavoro.tn.it/), represented by <NAME>, <NAME> and <NAME>, and [Engineering](https://www.eng.it), represented by <NAME>, <NAME>, <NAME> and <NAME>, for promoting the 2019 challenge [Lavoro 4.0](challenges/lavoro-4.0/lavoro-4.0-challenge.ipynb)
#
#    Professor <NAME> of the [Department of Information Engineering and Computer Science (DISI)](https://www.disi.unitn.it) of the University of Trento, on behalf of the degree programme in Information, Communications and Electronic Engineering (ICE), for promoting the [RiParco da Trento 2019](challenges/riparco-da-trento/riparco-da-trento-challenge.ipynb) challenge, and <NAME> for the precious help offered on the computer vision chapter.
#
#    The Raspberry Pi Foundation and the European Space Agency for the Astro Pi contest and the data used in the [Pandas tutorial](pandas/pandas-sol.ipynb)
#
#
#   Self-referentially, we also mention [CoderDojo Trento](http://coderdojotrento.it) for the HTML & UMap tutorials!
| thanks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Skorch introduction
# *`skorch`* is designed to maximize interoperability between `sklearn` and `pytorch`. The aim is to keep 99% of the flexibility of `pytorch` while being able to leverage most features of `sklearn`. Below, we show the basic usage of `skorch` and how it can be combined with `sklearn`.
#
# +
# from skorch documentation
# -
# ! [ ! -z "$COLAB_GPU" ] && pip install torch skorch
# +
import torch
from torch import nn
import torch.nn.functional as F
torch.manual_seed(0);
# -
# ## Training a classifier and making predictions
# ### A toy binary classification task
# We load a toy classification task from `sklearn`.
import numpy as np
from sklearn.datasets import make_classification
X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X = X.astype(np.float32)
X.shape, y.shape, y.mean()
# ### Definition of the `pytorch` classification `module`
# We define a vanilla neural network with two hidden layers. The output layer should have 2 output units since there are two classes. In addition, it should have a softmax nonlinearity, because later, when calling `predict_proba`, the output from the `forward` call will be used.
class ClassifierModule(nn.Module):
    def __init__(
        self,
        num_units=10,
        nonlin=F.relu,
        dropout=0.5,
    ):
        super().__init__()
        self.num_units = num_units
        self.nonlin = nonlin

        self.dense0 = nn.Linear(20, num_units)
        self.dropout = nn.Dropout(dropout)
        self.dense1 = nn.Linear(num_units, 10)
        self.output = nn.Linear(10, 2)

    def forward(self, X, **kwargs):
        X = self.nonlin(self.dense0(X))
        X = self.dropout(X)
        X = F.relu(self.dense1(X))
        X = F.softmax(self.output(X), dim=-1)
        return X
# ### Defining and training the neural net classifier
# We use `NeuralNetClassifier` because we're dealing with a classification task. The first argument should be the `pytorch module`. As additional arguments, we pass the number of epochs and the learning rate (`lr`), but those are optional.
#
# *Note*: To use the CUDA backend, pass `device='cuda'` as an additional argument.
from skorch import NeuralNetClassifier
net = NeuralNetClassifier(
ClassifierModule,
max_epochs=20,
lr=0.1,
# device='cuda', # uncomment this to train with CUDA
)
# As in `sklearn`, we call `fit` passing the input data `X` and the targets `y`. By default, `NeuralNetClassifier` makes a `StratifiedKFold` split on the data (80/20) to track the validation loss. This is shown, as well as the train loss and the accuracy on the validation set.
# %pdb on
net.fit(X, y)
# Also, as in `sklearn`, you may call `predict` or `predict_proba` on the fitted model.
# ### Making predictions, classification
y_pred = net.predict(X[:5])
y_pred
y_proba = net.predict_proba(X[:5])
y_proba
# ## Usage with sklearn `GridSearchCV`
# ### Special prefixes
# The `NeuralNet` class allows you to directly access parameters of the `pytorch module` by using the `module__` prefix. So e.g. if you defined the `module` to have a `num_units` parameter, you can set it via the `module__num_units` argument. This is exactly the same logic that gives you access to estimator parameters in `sklearn Pipeline`s and `FeatureUnion`s.
# This feature is useful in several ways. For one, it allows you to set those parameters in the model definition. Furthermore, it allows you to set parameters in an `sklearn GridSearchCV` as shown below.
# In addition to the parameters prefixed by `module__`, you may access a couple of other attributes, such as those of the optimizer, by using the `optimizer__` prefix (again, see below). All those special prefixes are stored in the `prefixes_` attribute:
print(', '.join(net.prefixes_))
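# The double-underscore routing above is the same convention sklearn itself uses for nested estimators. As a sanity check that needs no skorch at all, the sketch below (a hypothetical, minimal pipeline) routes a parameter into a `Pipeline` step with `set_params`:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# <step name>__<param name> addresses a nested parameter,
# exactly like skorch's module__ / optimizer__ prefixes.
pipe = Pipeline([('scale', StandardScaler()), ('clf', LogisticRegression())])
pipe.set_params(clf__C=0.5)
print(pipe.get_params()['clf__C'])  # 0.5
```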
# ### Performing a grid search
# Below we show how to perform a grid search over the learning rate (`lr`), the module's number of hidden units (`module__num_units`), the module's dropout rate (`module__dropout`), and whether the SGD optimizer should use Nesterov momentum or not (`optimizer__nesterov`).
from sklearn.model_selection import GridSearchCV
net = NeuralNetClassifier(
ClassifierModule,
max_epochs=20,
lr=0.1,
verbose=0,
optimizer__momentum=0.9,
)
params = {
    'lr': [0.05, 0.1],
    'module__num_units': [10, 20],  # illustrative range for the number of hidden units
    'module__dropout': [0, 0.5],    # illustrative range of possible dropout rates
    'optimizer__nesterov': [False, True],
}
gs = GridSearchCV(net, params, refit=False, cv=3, scoring='accuracy', verbose=2)
gs.fit(X, y)
print(gs.best_score_, gs.best_params_)
| 2-skorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# ## A simple Notebook to be posted on the Github website
#generate random numbers from Normal distribution
data = np.random.randn(100000)
fig,ax = plt.subplots()
fig.set_size_inches(14,6)
fig.suptitle('Normal Distribution Histogram')
ax.hist(data, bins=100, linewidth=1.2, edgecolor='black')
ax.set_xlabel("Random Variable")
ax.set_ylabel('Frequency')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.plot()
| example_folder/Sample_Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Importing the packages used for the analysis
import pandas as pd
import numpy as np
from sklearn import preprocessing, neighbors
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from numpy.linalg import *
import math
from datetime import datetime
from datetime import timedelta
# +
# Import the data and make dataframes out of it
df_deli = pd.read_csv('../Data/Delivery.csv')
df_cons = pd.read_csv('../Data/Consumption.csv')
df_info = pd.read_csv('../Data/Information.csv')
df_weather = pd.read_csv('../../Data/KNMI_Voorschoten_20170711_20190601.csv')
# Setting indexes
df_deli.set_index('ID-nummer',inplace=True)
df_deli.index = pd.to_datetime(df_deli.index)
df_deli.index.names = ['date']
df_cons.set_index('ID-nummer',inplace=True)
df_cons.index = pd.to_datetime(df_cons.index)
df_cons.index.names = ['date']
df_info.set_index('ID-nummer',inplace=True)
df_info.index.names = ['date']
df_weather.set_index('Date_and_time',inplace=True)
df_weather = df_weather.loc['2017-09-12':'2019-06-01 00:00:00']
df_weather.index = pd.to_datetime(df_weather.index)
df_weather.index.names = ['date']
df_weather = df_weather.apply(pd.to_numeric)
# Creating a dummy first row for the datasets
top_row = [0 for col in df_deli.columns]
df_top_row = pd.DataFrame(top_row).transpose()
s_top_row = pd.Series([pd.to_datetime('2017-09-12 00:00:00')])
df_top_row.set_index(s_top_row, inplace=True)
df_top_row.columns = df_deli.columns
# Adding first row to the datasets
df_deli = pd.concat([df_top_row, df_deli])
df_cons = pd.concat([df_top_row, df_cons])
# Joining datasets
df_deli = df_deli.join(df_weather)
df_cons = df_cons.join(df_weather)
# Filling NaN temperature values with the previous ones
df_deli.fillna(method='ffill', inplace=True)
df_cons.fillna(method='ffill', inplace=True)
# Deleting first row (dummy row)
df_deli = df_deli.iloc[1:]
df_cons = df_cons.iloc[1:]
display(df_deli.head(2))
display(df_deli.shape)
display(df_cons.head(2))
display(df_cons.shape)
# -
# ## Balancing the dataset out
df_info.transpose()['concept'].value_counts()
'''df_deli = df_deli[['H01','H02','H06','H15','H29','H03','H04','H11','H23','H27','H20','H22','H25','H28','H32','T','SQ','Q','N']]
df_cons = df_cons[['H01','H02','H06','H15','H29','H03','H04','H11','H23','H27','H20','H22','H25','H28','H32','T','SQ','Q','N']]
df_info = df_info[['H01','H02','H06','H15','H29','H03','H04','H11','H23','H27','H20','H22','H25','H28','H32']]'''
df_deli = df_deli.groupby([df_deli.index.year, df_deli.index.month, df_deli.index.week]).agg({'H01':'sum',
'H02':'sum',
'H03':'sum',
'H04':'sum',
'H06':'sum',
'H07':'sum',
'H08':'sum',
'H09':'sum',
'H11':'sum',
'H13':'sum',
'H15':'sum',
'H16':'sum',
'H17':'sum',
'H18':'sum',
'H19':'sum',
'H20':'sum',
'H21':'sum',
'H22':'sum',
'H23':'sum',
'H24':'sum',
'H25':'sum',
'H26':'sum',
'H27':'sum',
'H28':'sum',
'H29':'sum',
'H31':'sum',
'H32':'sum',
'H33':'sum',
'T':'mean',
'SQ':'mean',
'Q':'mean',
'N':'mean',
})
df_cons = df_cons.groupby([df_cons.index.year, df_cons.index.month, df_cons.index.week]).agg({'H01':'sum',
'H02':'sum',
'H03':'sum',
'H04':'sum',
'H06':'sum',
'H07':'sum',
'H08':'sum',
'H09':'sum',
'H11':'sum',
'H13':'sum',
'H15':'sum',
'H16':'sum',
'H17':'sum',
'H18':'sum',
'H19':'sum',
'H20':'sum',
'H21':'sum',
'H22':'sum',
'H23':'sum',
'H24':'sum',
'H25':'sum',
'H26':'sum',
'H27':'sum',
'H28':'sum',
'H29':'sum',
'H31':'sum',
'H32':'sum',
'H33':'sum',
'T':'mean',
'SQ':'mean',
'Q':'mean',
'N':'mean',
})
# ## Dummy Variables
# +
# Setting different columns (delivery, consumption, houses_info, dummy_variables)
df = pd.DataFrame(columns=['delivery','consumption', 'T', 'SQ', 'Q', 'N', 'heating_sys'])
num_houses_cols = len(df_deli.columns) - 4
# Preparing the delivery and consumption arrays
ar_deli = np.array([])
ar_cons = np.array([])
for col in range(num_houses_cols):
    ar_deli = np.append(ar_deli, df_deli.values[:, col])
    ar_cons = np.append(ar_cons, df_cons.values[:, col])
# Preparing the houses information arrays
ar_heatSystem = np.array([])
for col in df_info.columns:
    ar_heatSystem = np.append(ar_heatSystem, (df_info.loc[['concept'], [col]].values[0].tolist() * df_deli.shape[0]))
# Inserting the data to the dataframe
df['delivery'] = pd.Series(ar_deli)
df['consumption'] = pd.Series(ar_cons)
df['T'] = df_deli['T'].values.tolist() * num_houses_cols
df['SQ'] = df_deli['SQ'].values.tolist() * num_houses_cols
df['Q'] = df_deli['Q'].values.tolist() * num_houses_cols
df['N'] = df_deli['N'].values.tolist() * num_houses_cols
df['heating_sys'] = pd.Series(ar_heatSystem)
# Dummy variables
dummy_sys = pd.get_dummies(df['heating_sys'], prefix='sys')
df = pd.concat([df, dummy_sys], axis=1)
print(df['heating_sys'].unique())
print(df.shape, len(ar_heatSystem))
# Replacing heating system type by numbers (restricted to the heating_sys column)
df['heating_sys'] = df['heating_sys'].replace({'E': 1, 'WP': 2, 'Zon': 3})
display(df.head())
display(df.shape)
# +
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
cax = ax.matshow(df.corr())
fig.colorbar(cax)
plt.title('Correlation Matrix')
ax.set_xticks(range(len(df.columns)))
ax.set_yticks(range(len(df.columns)))
ax.set_xticklabels(labels=df.columns)
ax.set_yticklabels(labels=df.columns)
plt.show()
# -
# ## K Nearest Implementation (Scaler)
from sklearn.preprocessing import StandardScaler, LabelBinarizer
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_curve
# +
features = ['delivery', 'consumption', 'T', 'SQ', 'Q', 'N']
target_feature = 'heating_sys'
X = (df[features]-df[features].min())/(df[features].max()-df[features].min())
y = df[target_feature]
# +
steps = [('scaler', StandardScaler()),
('knn', neighbors.KNeighborsClassifier())]
pipeline = Pipeline(steps)
param_grid = {'knn__n_neighbors': np.arange(5, 15),
'knn__p': np.arange(1, 3),
'knn__weights':['uniform', 'distance']}
knn_reg = GridSearchCV(pipeline, param_grid, cv=5, return_train_score=True)
knn_reg.fit(X, y)
display(knn_reg.best_estimator_)
display('Best Params: ' + str(knn_reg.best_params_))
display('Best Score: ' + str(knn_reg.best_score_))
# +
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
knn_reg.fit(X_train, y_train)
y_pred = knn_reg.best_estimator_.predict(X_test)
print(classification_report(y_test, y_pred))
# -
| Supervised Machine Learning Algorithms/K-Nearest Neighbors/K-Nearest Neighbors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="2rFiJu_cvdUZ" colab_type="code" colab={}
import pandas as pd
import numpy as np
import plotly.express as px
# Instantiate all of our dfs
df2020 = pd.read_csv('2020_unemployment.csv', encoding='utf-8')
df2019 = pd.read_csv('2019_unemployment.csv', encoding='utf-8')
df2018 = pd.read_csv('2018_unemployment.csv', encoding='utf-8')
df2017 = pd.read_csv('2017_unemployment.csv', encoding='utf-8')
df2016 = pd.read_csv('2016_unemployment.csv', encoding='utf-8')
df2015 = pd.read_csv('2015_unemployment.csv', encoding='utf-8')
df2020
# + id="btWyaFFX77UN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="feb790d0-eb0a-48a7-c0b8-9e6da04c0177"
# Let's add a year column to each df
df2020['year'] = 2020
df2019['year'] = 2019
df2018['year'] = 2018
df2017['year'] = 2017
df2016['year'] = 2016
df2015['year'] = 2015
df2020
# + id="aVDdySLH9WlN" colab_type="code" colab={}
# Check how the city column is laid out through entire df
# pd.set_option('display.max_rows', None)
# df2020
# + id="llZWPb_h8-9f" colab_type="code" colab={}
# Clean the city column for every year: strip the metro-area suffixes,
# split into city/state, and explode hyphenated multi-city / multi-state
# entries into one row each. regex=False makes the "(1)" suffix match literally.
def clean_cities(df):
    df["city"] = (df["city"]
                  .str.replace("Metropolitan Statistical Area(1)", "", regex=False)
                  .str.replace("Metropolitan Statistical Area", "", regex=False)
                  .str.replace("Metropolitan NECTA", "", regex=False)
                  .str.replace("--", "-", regex=False)
                  .str.rstrip())
    df[['city', 'state']] = df['city'].str.split(',', expand=True)
    df.state = df.state.str.lstrip()
    df = df.assign(city=df.city.str.split("-")).explode('city')
    df = df.assign(state=df.state.str.split("-")).explode('state')
    df['city_state'] = df['city'] + ", " + df['state']
    return df

df2020 = clean_cities(df2020)
df2019 = clean_cities(df2019)
df2018 = clean_cities(df2018)
df2017 = clean_cities(df2017)
df2016 = clean_cities(df2016)
df2015 = clean_cities(df2015)
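# The split-then-explode pattern used above can be seen on a tiny hypothetical frame — one input row holding a hyphenated metro name becomes one row per city:

```python
import pandas as pd

# Split the hyphenated name into a list, then explode the list into rows.
df = pd.DataFrame({'city': ['Dallas-Fort Worth'], 'state': ['TX']})
df = df.assign(city=df.city.str.split('-')).explode('city')
print(df.city.tolist())  # ['Dallas', 'Fort Worth']
```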
# + id="tE_bx9tiCXyC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="1f2af1e2-d44c-41c3-b8a4-94491d13e629"
# Instantiate the df to merge with, creating the 100-city unemployment set
cities_df = pd.read_csv('cities.csv', encoding='utf-8')
cities_df.columns = ['city_state']
cities_df
# + id="PjNFfuceY3cJ" colab_type="code" colab={}
# Instantiate list of dfs to concat into 1 dataset
data_frames = [df2020, df2019, df2018, df2017, df2016, df2015]
# + id="lABsI_xRZMTv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="946d550a-ccf3-4d75-b3f2-c2379f44845a"
# 1 dataset with all the years
concat_df = pd.concat(data_frames)
concat_df
# + id="ccGr7ykNZk66" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="2e8cc989-1222-4654-ff5e-65cf5652d395"
# Re-arrange columns aesthetically
concat_df = concat_df[["city", "state", "year", "unemployment_rate", "rank", "city_state"]]
concat_df
# + id="U5r0stLwCrG0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="7bd9e5bb-0776-48ac-dbbd-1138fc56c7ba"
# pd.set_option('display.max_rows', None)
merged = pd.merge(df2018, cities_df, on=['city_state'], how='right')
merged
# + id="vqtQLak-wxgr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="238f19f7-2b35-4398-8a59-1291f40059eb"
# Add a city_id unique identifier
# Re-arrange columns
merged = merged.sort_values(["state", "city"], ascending = (True, True))
merged.insert(0, 'city_id', range(0, 0 + len(merged)))
merged = merged[["city_id", "city", "state", "year", "unemployment_rate",
"rank", "city_state"]]
merged
# + id="swXNf8C6xxIh" colab_type="code" colab={}
merged.to_csv('2018_city_unemployment.csv', sep=',', na_rep='NaN', index=False)
# + id="QcWmCp73xy0Q" colab_type="code" colab={}
concat_df.to_csv('6yr_city_unemployment_data.csv', sep=',', na_rep='NaN', index=False)
# + id="81fZG3GbyxvG" colab_type="code" colab={}
# Read the new csv's back in, start prepping endpoints
df = pd.read_csv('2018_city_unemployment.csv', encoding='utf-8')
df2 = pd.read_csv('6yr_city_unemployment_data.csv', encoding='utf-8')
# + id="ooYQ8H56zpdd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="7b4346ea-688f-49ff-e4a8-f798363698bb"
df
# + id="U92ttqiAzGtz" colab_type="code" colab={}
def unemployment_data(city_id):
    rt_dict = {}
    rt_data_dict = {}
    df = pd.read_csv('2018_city_unemployment.csv', encoding='utf-8')
    dataframe = df[df['city_id'] == city_id]
    rt_data = dataframe.to_numpy()
    rt_data_dict["unemployment_rate"] = rt_data[0][4]
    rt_data_dict["rank"] = rt_data[0][5]
    rt_data_dict["city_state"] = rt_data[0][6]
    rt_dict["data"] = rt_data_dict
    # rt_dict["viz"] = citypopviz(city=rt_data[0][1], state=rt_data[0][2])
    return rt_dict
# + id="yGdlytPx0mQ9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="feb59438-e3fc-47a7-9340-5bcd20ddd77d"
unemployment_data(97)
# + id="tTGk-HPf02jA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="e64f688c-f2a4-4c6c-86a1-883931f0becf"
city = 'Anchorage'
state = 'AK'
metric = 'unemployment_rate'
subset = df2[(df2.city == city) & (df2.state == state)]
fig = px.area(subset, x='year', y=metric, title=f'{metric} in {city},{state}')
fig.show()
| notebooks/Unemployment_Exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Usage guide for the maysics.constant module
#
# The constant module includes the following constants:
#
# |Name|Meaning|
# |---|---|
# |chaos_1|first Feigenbaum constant|
# |chaos_2|second Feigenbaum constant|
# |e|Euler's number|
# |gamma|Euler-Mascheroni constant|
# |golden|golden ratio|
# |K|Landau-Ramanujan constant|
# |K0|Khinchin's constant|
# |pi|pi|
# |AU|astronomical unit|
# |atm|standard atmosphere|
# |c|speed of light in vacuum|
# |c_e|elementary charge|
# |epsilon|vacuum permittivity|
# |g|gravitational acceleration|
# |G|gravitational constant|
# |h|Planck constant|
# |hr|reduced Planck constant|
# |k|Boltzmann constant|
# |lambdac|Compton wavelength|
# |ly|light year|
# |m_e|electron mass|
# |m_earth|Earth mass|
# |m_n|neutron mass|
# |m_p|proton mass|
# |m_s|solar mass|
# |miu|vacuum permeability|
# |NA|Avogadro constant|
# |pc|parsec|
# |Platonic_year|Platonic year|
# |R|ideal gas constant|
# |r_earth|mean radius of the Earth|
# |r_sun|mean radius of the Sun|
# |r_e_m|mean Earth-Moon distance|
# |SB|Stefan-Boltzmann constant|
# |v1|first cosmic velocity|
# |v2|second cosmic velocity|
# |v3|third cosmic velocity|
#
# The constant module includes the following functions:
#
# |Name|Meaning|
# |---|---|
# |lp|Legendre polynomial|
# |lpn|norm of the Legendre polynomial|
# |alp|associated Legendre polynomial|
# |alpn|norm of the associated Legendre polynomial|
# |hp|Hermite polynomial|
# |v_mean|mean speed under the Maxwell speed distribution|
# |v_p|most probable speed under the Maxwell speed distribution|
# |v_rms|root-mean-square speed under the Maxwell speed distribution|
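# As an independent cross-check of the polynomial functions listed above (assuming the standard normalization, and without importing maysics), numpy can evaluate a Legendre polynomial directly:

```python
import numpy as np
from numpy.polynomial import legendre

# P_2(x) = (3x^2 - 1) / 2; the coefficient vector [0, 0, 1] selects P_2.
p2_at_half = legendre.legval(0.5, [0, 0, 1])
print(p2_at_half)  # (3 * 0.25 - 1) / 2 = -0.125
```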
| maysics教程/constant说明.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8 - Tensorflow
# language: python
# name: azureml_py38_tensorflow
# ---
# # Run BERT-Large training workload
# + language="bash"
#
# # Download datasets, checkpoints and pre-trained model
# rm -rf ~/TF/bert-large
# mkdir -p ~/TF/bert-large/SQuAD-1.1
# cd ~/TF/bert-large/SQuAD-1.1
# wget https://github.com/oap-project/oap-project.github.io/raw/master/resources/ai/bert/dev-v1.1.json
# wget https://github.com/oap-project/oap-project.github.io/raw/master/resources/ai/bert/evaluate-v1.1.py
# wget https://github.com/oap-project/oap-project.github.io/raw/master/resources/ai/bert/train-v1.1.json
#
# cd ~/TF/bert-large
# wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_8/bert_large_checkpoints.zip
# unzip bert_large_checkpoints.zip
#
# cd ~/TF/bert-large
# wget https://storage.googleapis.com/bert_models/2019_05_30/wwm_uncased_L-24_H-1024_A-16.zip
# unzip wwm_uncased_L-24_H-1024_A-16.zip
# + language="bash"
#
# # BERT-Large training
# # Install necessary packages
# sudo apt-get install -y numactl
# sudo apt-get install -y libblacs-mpi-dev
# # Create ckpt directory
# rm -rf ~/TF/bert-large/training/*
# mkdir -p ~/TF/bert-large/training/BERT-Large-output
# # Download IntelAI benchmark
# cd ~/TF/bert-large/training
# wget https://github.com/IntelAI/models/archive/refs/tags/v1.8.1.zip
# unzip v1.8.1.zip
# wget https://github.com/oap-project/oap-tools/raw/master/integrations/ml/databricks/benchmark/IntelAI_models_bertlarge_inference_realtime_throughput.patch
# cd ./models-1.8.1/
# git apply ../IntelAI_models_bertlarge_inference_realtime_throughput.patch
# + language="bash"
#
# #Bert-Large training
# export SQUAD_DIR=~/TF/bert-large/SQuAD-1.1/
# export BERT_LARGE_OUTPUT=~/TF/bert-large/training/BERT-Large-output
# export BERT_LARGE_MODEL=~/TF/bert-large/wwm_uncased_L-24_H-1024_A-16
# export PYTHONPATH=~/TF/bert-large/training/models-1.8.1/benchmarks/
#
# cores_per_socket=$(lscpu | awk '/^Core\(s\) per socket/{ print $4 }')
# numa_nodes=$(lscpu | awk '/^NUMA node\(s\)/{ print $3 }')
#
# cd ~/TF/bert-large/training/models-1.8.1/benchmarks/
#
# function run_training_without_numabind() {
# /anaconda/envs/azureml_py38_tensorflow/bin/python launch_benchmark.py \
# --model-name=bert_large \
# --precision=fp32 \
# --mode=training \
# --framework=tensorflow \
# --batch-size=4 \
# --benchmark-only \
# --data-location=$BERT_LARGE_MODEL \
# -- train-option=SQuAD DEBIAN_FRONTEND=noninteractive config_file=$BERT_LARGE_MODEL/bert_config.json init_checkpoint=$BERT_LARGE_MODEL/bert_model.ckpt vocab_file=$BERT_LARGE_MODEL/vocab.txt train_file=$SQUAD_DIR/train-v1.1.json predict_file=$SQUAD_DIR/dev-v1.1.json do-train=True learning-rate=1.5e-5 max-seq-length=384 do_predict=True warmup-steps=0 num_train_epochs=0.1 doc_stride=128 do_lower_case=False experimental-gelu=False mpi_workers_sync_gradients=True
# }
#
# function run_training_with_numabind() {
# intra_thread=`expr $cores_per_socket - 2`
# /anaconda/envs/azureml_py38_tensorflow/bin/python launch_benchmark.py \
# --model-name=bert_large \
# --precision=fp32 \
# --mode=training \
# --framework=tensorflow \
# --batch-size=4 \
# --mpi_num_processes=$numa_nodes \
# --num-intra-threads=$intra_thread \
# --num-inter-threads=1 \
# --benchmark-only \
# --data-location=$BERT_LARGE_MODEL \
# --train-option=SQuAD DEBIAN_FRONTEND=noninteractive config_file=$BERT_LARGE_MODEL/bert_config.json init_checkpoint=$BERT_LARGE_MODEL/bert_model.ckpt vocab_file=$BERT_LARGE_MODEL/vocab.txt train_file=$SQUAD_DIR/train-v1.1.json predict_file=$SQUAD_DIR/dev-v1.1.json do-train=True learning-rate=1.5e-5 max-seq-length=384 do_predict=True warmup-steps=0 num_train_epochs=0.1 doc_stride=128 do_lower_case=False experimental-gelu=False mpi_workers_sync_gradients=True
# }
#
#
# if [ "$numa_nodes" = "1" ];then
# run_training_without_numabind
# else
# run_training_with_numabind
# fi
# +
# Print TensorFlow version, and check whether it is intel-optimized
import tensorflow
print("tensorflow version: " + tensorflow.__version__)
from packaging import version
if version.parse("2.5.0") <= version.parse(tensorflow.__version__):
    from tensorflow.python.util import _pywrap_util_port
    print(_pywrap_util_port.IsMklEnabled())
else:
    from tensorflow.python import _pywrap_util_port
    print(_pywrap_util_port.IsMklEnabled())
| integrations/ml/azure/benchmark/benchmark_tensorflow_bertlarge_training.ipynb |
# <a href="https://colab.research.google.com/github/036takashima/Pandas-Cookbook-Second-Edition/blob/master/Chapter01/c1_code.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# # Chapter 1: Pandas Foundations
import pandas as pd
import numpy as np
# ## Introduction
from google.colab import drive
drive.mount('/content/drive')
# ## Dissecting the anatomy of a DataFrame
pd.set_option('max_columns', 4, 'max_rows', 10)
# The /data directory is located at the following path:
#
# /content/drive/MyDrive/ColabNotebooks/Pandas-Cookbook-Second-Edition/data/
movies = pd.read_csv('/content/drive/MyDrive/ColabNotebooks/Pandas-Cookbook-Second-Edition/data/movie.csv')
movies.head()
# ### How it works...
# ## DataFrame Attributes
# ### How to do it... {#how-to-do-it-1}
movies = pd.read_csv('/content/drive/MyDrive/ColabNotebooks/Pandas-Cookbook-Second-Edition/data/movie.csv')
columns = movies.columns
index =movies.index
data = movies.values
columns
index
data
type(index)
type(columns)
type(data)
issubclass(pd.RangeIndex, pd.Index)
issubclass(columns.__class__, pd.Index)
# ### How it works...
# ### There's more
index.values
columns.values
# ## Understanding data types
# ### How to do it... {#how-to-do-it-2}
movies = pd.read_csv('/content/drive/MyDrive/ColabNotebooks/Pandas-Cookbook-Second-Edition/data/movie.csv')
movies.dtypes
movies.dtypes
movies.dtypes.value_counts()
movies.info()
# ### How it works...
pd.Series(['Paul', np.nan, 'George']).dtype
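# The `object` dtype above arises because `np.nan` forces pandas onto the generic fallback; a small sketch contrasting it with pandas' dedicated nullable string dtype:

```python
import numpy as np
import pandas as pd

s_obj = pd.Series(['Paul', np.nan, 'George'])                  # NaN forces object dtype
s_str = pd.Series(['Paul', np.nan, 'George'], dtype='string')  # nullable string dtype
print(s_obj.dtype, s_str.dtype)  # object string
```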
# ### There's more...
# ### See also
# ## Selecting a Column
# ### How to do it... {#how-to-do-it-3}
movies = pd.read_csv('/content/drive/MyDrive/ColabNotebooks/Pandas-Cookbook-Second-Edition/data/movie.csv')
movies['director_name']
movies.director_name
movies.loc[:, 'director_name']
movies.iloc[:, 1]
movies['director_name'].index
movies.director_name.index
movies['director_name'].dtype
movies.director_name.dtype
movies['director_name'].size
movies.director_name.size
movies['director_name'].name
movies.director_name.name
type(movies['director_name'])
type(movies.director_name)
movies['director_name'].apply(type).unique()
movies.director_name.apply(type).unique()
# ### How it works... 10/20
# ### There's more
# ### See also
# ## Calling Series Methods
s_attr_methods = set(dir(pd.Series))
len(s_attr_methods)
df_attr_methods = set(dir(pd.DataFrame))
len(df_attr_methods)
len(s_attr_methods & df_attr_methods)
# ### How to do it... {#how-to-do-it-4}
movies = pd.read_csv('/content/drive/MyDrive/ColabNotebooks/Pandas-Cookbook-Second-Edition/data/movie.csv')
director = movies['director_name']
fb_likes = movies['actor_1_facebook_likes']
director.dtype
fb_likes.dtype
director.head()
director.sample(n=5, random_state=42)
director.sample(n=5)
fb_likes.head()
fb_likes.sample(n=5)
director.value_counts()
fb_likes.value_counts()
director.size
fb_likes.size
director.shape
fb_likes.shape
len(director)
len(fb_likes)
director.unique()
fb_likes.unique()
director.count()
fb_likes.count()
fb_likes.quantile()
fb_likes.min()
fb_likes.max()
fb_likes.mean()
fb_likes.median()
fb_likes.std()
fb_likes.describe()
director.describe()
fb_likes.quantile(.2)
fb_likes.quantile(.25)
fb_likes.quantile([.1, .2, .3, .4, .5, .6, .7, .8, .9])
director.isna()
fb_likes_filled = fb_likes.fillna(0)
fb_likes_filled.count()
fb_likes_dropped = fb_likes.dropna()
fb_likes_dropped.size
# ### How it works...
# ### There's more...
director.value_counts(normalize=True)
director.hasnans
director.notna()
# ### See also
# ## Series Operations 10/24
5 + 9 # plus operator example. Adds 5 and 9
# ### How to do it... {#how-to-do-it-5}
movies = pd.read_csv('/content/drive/MyDrive/ColabNotebooks/Pandas-Cookbook-Second-Edition/data/movie.csv')
imdb_score = movies['imdb_score']
imdb_score
imdb_score = movies.imdb_score
imdb_score
imdb_score + 1
imdb_score * 2.5
imdb_score // 7
imdb_score > 7
director = movies['director_name']
director == '<NAME>'
director = movies.director_name
director == '<NAME>'
# ### How it works...
# ### There's more...
imdb_score.add(1) # imdb_score + 1
imdb_score.gt(7) # imdb_score > 7
# ### See also
# ## Chaining Series Methods 10/27
# ### How to do it... {#how-to-do-it-6}
movies = pd.read_csv('/content/drive/MyDrive/ColabNotebooks/Pandas-Cookbook-Second-Edition/data/movie.csv')
fb_likes = movies['actor_1_facebook_likes']
director = movies['director_name']
fb_likes_2 = movies.actor_1_facebook_likes
director_2 = movies.director_name
director.value_counts().head(3)
director_2.value_counts()
fb_likes.isna().sum()
fb_likes_2.isna()
fb_likes.dtype
(fb_likes.fillna(0)
.astype(int)
.head()
)
# ### How it works...10/28
# ### There's more...
(fb_likes.fillna(0)
#.astype(int)
#.head()
)
(fb_likes.fillna(0)
.astype(int)
#.head()
)
fb_likes.isna().mean()
fb_likes.fillna(0) \
.astype(int) \
.head()
def debug_df(df):
    print("BEFORE")
    print(df)
    print("AFTER")
    return df
(fb_likes.fillna(0)
.pipe(debug_df)
.astype(int)
.head()
)
intermediate = None
def get_intermediate(df):
    global intermediate
    intermediate = df
    return df
res = (fb_likes.fillna(0)
.pipe(get_intermediate)
.astype(int)
.head()
)
intermediate
# ## Renaming Column Names
# ### How to do it...
movies = pd.read_csv('data/movie.csv')
col_map = {'director_name':'Director Name',
'num_critic_for_reviews': 'Critical Reviews'}
movies.rename(columns=col_map).head()
# ### How it works... {#how-it-works-8}
# ### There's more {#theres-more-7}
idx_map = {'Avatar':'Ratava', 'Spectre': 'Ertceps',
"Pirates of the Caribbean: At World's End": 'POC'}
col_map = {'aspect_ratio': 'aspect',
"movie_facebook_likes": 'fblikes'}
(movies
.set_index('movie_title')
.rename(index=idx_map, columns=col_map)
.head(3)
)
movies = pd.read_csv('data/movie.csv', index_col='movie_title')
ids = movies.index.tolist()
columns = movies.columns.tolist()
# # rename the row and column labels with list assignments
ids[0] = 'Ratava'
ids[1] = 'POC'
ids[2] = 'Ertceps'
columns[1] = 'director'
columns[-2] = 'aspect'
columns[-1] = 'fblikes'
movies.index = ids
movies.columns = columns
movies.head(3)
def to_clean(val):
    return val.strip().lower().replace(' ', '_')
movies.rename(columns=to_clean).head(3)
cols = [col.strip().lower().replace(' ', '_')
for col in movies.columns]
movies.columns = cols
movies.head(3)
# ## Creating and Deleting columns
# ### How to do it... {#how-to-do-it-9}
movies = pd.read_csv('data/movie.csv')
movies['has_seen'] = 0
idx_map = {'Avatar':'Ratava', 'Spectre': 'Ertceps',
"Pirates of the Caribbean: At World's End": 'POC'}
col_map = {'aspect_ratio': 'aspect',
"movie_facebook_likes": 'fblikes'}
(movies
.rename(index=idx_map, columns=col_map)
.assign(has_seen=0)
)
total = (movies['actor_1_facebook_likes'] +
movies['actor_2_facebook_likes'] +
movies['actor_3_facebook_likes'] +
movies['director_facebook_likes'])
total.head(5)
cols = ['actor_1_facebook_likes','actor_2_facebook_likes',
'actor_3_facebook_likes','director_facebook_likes']
sum_col = movies[cols].sum(axis='columns')
sum_col.head(5)
movies.assign(total_likes=sum_col).head(5)
def sum_likes(df):
return df[[c for c in df.columns
if 'like' in c]].sum(axis=1)
movies.assign(total_likes=sum_likes).head(5)
(movies
.assign(total_likes=sum_col)
['total_likes']
.isna()
.sum()
)
(movies
.assign(total_likes=total)
['total_likes']
.isna()
.sum()
)
(movies
.assign(total_likes=total.fillna(0))
['total_likes']
.isna()
.sum()
)
def cast_like_gt_actor_director(df):
return df['cast_total_facebook_likes'] >= \
df['total_likes']
df2 = (movies
.assign(total_likes=total,
is_cast_likes_more = cast_like_gt_actor_director)
)
df2['is_cast_likes_more'].all()
df2 = df2.drop(columns='total_likes')
actor_sum = (movies
[[c for c in movies.columns if 'actor_' in c and '_likes' in c]]
.sum(axis='columns')
)
actor_sum.head(5)
movies['cast_total_facebook_likes'] >= actor_sum
movies['cast_total_facebook_likes'].ge(actor_sum)
movies['cast_total_facebook_likes'].ge(actor_sum).all()
pct_like = (actor_sum
.div(movies['cast_total_facebook_likes'])
)
pct_like.describe()
pd.Series(pct_like.values,
index=movies['movie_title'].values).head()
# ### How it works... {#how-it-works-9}
# ### There's more... {#theres-more-8}
profit_index = movies.columns.get_loc('gross') + 1
profit_index
movies.insert(loc=profit_index,
column='profit',
value=movies['gross'] - movies['budget'])
del movies['director_name']
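As an aside, `del` mutates the DataFrame in place; a hedged sketch (on a toy frame, not the movie data) of the related `drop` and `pop` alternatives:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})

dropped = df.drop(columns='c')  # returns a new DataFrame; df itself is unchanged
popped = df.pop('b')            # removes 'b' from df in place and returns it as a Series

print(list(df.columns))       # ['a', 'c']  (pop mutated df, drop did not)
print(list(dropped.columns))  # ['a', 'b']
print(popped.tolist())        # [3, 4]
```

`drop` is the usual choice in method chains because it leaves the original untouched.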
# ### See also
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.1 64-bit
# language: python
# name: python3
# ---
import pandas as pd
from aggregate.PUMS.count_PUMS_households import PUMSCountHouseholds
# %load_ext autoreload
# %autoreload 2
import sys
sys.path.append('../utils')
import wd_management
wd_management.set_wd_root()
# +
result = PUMSCountHouseholds(limited_PUMA=True, household=True)
df = result.PUMS
df.head()
# -
df.loc[(df.HINCP > 79597) & (df.HINCP < 119395) & (df.NPF == 4)].household_income_bands
# MI is the right classification for a household with income between 79597 and 119395 and 4 family members
# ## Also taking a look at the fraction calculation
# +
agg = result.aggregated
agg.head()
# +
#ib_ = ['ELI', 'VLI', "LI", 'MI', 'MIDI', 'HI']
agg['ELI-count'] / agg['ELI-fraction'] == agg['VLI-count'] / agg['VLI-fraction']
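Assuming each `-fraction` column was computed against the same household total, dividing any band's count by its fraction should recover that total for every band. A minimal sketch with made-up numbers, reusing the `-count`/`-fraction` column naming from above:

```python
import pandas as pd

total = 200.0  # made-up household total
agg = pd.DataFrame({
    'ELI-count': [50.0], 'ELI-fraction': [50.0 / total],
    'VLI-count': [25.0], 'VLI-fraction': [25.0 / total],
})

# count / fraction should recover the same total for every band
eli_total = agg['ELI-count'] / agg['ELI-fraction']
vli_total = agg['VLI-count'] / agg['VLI-fraction']
print((eli_total == vli_total).all())  # True
```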
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/erickaalgr/CpEN-21A-BSCpE-1-1/blob/main/Loop_Statement.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="qD-riicPjeH_"
# ## For Loop Statement
# + id="k6iPZD-4h2PV" colab={"base_uri": "https://localhost:8080/"} outputId="4c8403c8-e6e6-449e-9025-981ad583ca7c"
week=["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
for x in week:
print(x)
# + [markdown] id="7nBKVNHYwBfE"
# The Break Statement
# + colab={"base_uri": "https://localhost:8080/"} id="9Ctsr5XswK8F" outputId="d3726f62-c09b-4a75-b597-054617e0d2fc"
week=["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
for x in week:
print(x)
if x=="Thursday":
break
# + colab={"base_uri": "https://localhost:8080/"} id="UZZGJEnaxJGR" outputId="25c61406-a3c0-480e-8482-e3d26a230636"
week=["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
for x in week:
if x=="Thursday":
break
print(x)
# + [markdown] id="fjH8Ct4wxb28"
# Looping through a string
# + colab={"base_uri": "https://localhost:8080/"} id="Q_5XGKB1xfR-" outputId="87815029-3dba-407a-e64b-c7afb0572409"
for x in "week":
print(x)
# + [markdown] id="_qawELXvxpt0"
# The range() function
# + colab={"base_uri": "https://localhost:8080/"} id="8QcArx-WxxNu" outputId="6c7454d0-b035-4833-b278-cea679eacc3f"
for x in range(6):
print(x)
for x in range(2,6):
print(x)
# + colab={"base_uri": "https://localhost:8080/"} id="HRt0_fS2yRrO" outputId="82e9f491-a065-41ef-9ba1-d0108730a15a"
for x in range(6):
print("Example 1",x)
for x in range(2,6):
print("Example 2",x)
# + [markdown] id="54IWCdmFyoY1"
# Nested loops
# + colab={"base_uri": "https://localhost:8080/"} id="_1JvkWKCyqpv" outputId="93134784-91e8-4dcd-ba90-d0eecb797889"
adjective=["red","big","tasty"]
fruits=["apple","banana","cherry"]
for x in adjective:
for y in fruits:
print(x,y)
# + [markdown] id="evGoJBTuzXfj"
# ## While Loop Statement
# + colab={"base_uri": "https://localhost:8080/"} id="yAy4DCzKysfW" outputId="ff475417-c2bd-4550-da67-dc03388b0340"
i=1
while i<6:
print(i)
i+=1
# + [markdown] id="VvD4_EVZ0cyk"
# The break statement
# + colab={"base_uri": "https://localhost:8080/"} id="LgdnC26k0ZZB" outputId="aaf0847b-9879-4f34-d800-0c35e1576254"
i=1
while i<6:
print(i)
if i==3:
break
i+=1 #Assignment operator for addition
# + colab={"base_uri": "https://localhost:8080/"} id="wE9wwuMZ1gd2" outputId="57a5fed7-6ae3-43ee-d941-25c413b2b089"
i=1
while i<6:
i+=1 #Assignment operator for addition
if i==3:
continue
print(i)
# + colab={"base_uri": "https://localhost:8080/"} id="yPmHAo503CnO" outputId="090ca911-42fa-4620-8644-7c4ac7c25633"
i=1
while i<6:
i+=1 #Assignment operator for addition
if i==3:
continue
else:
print(i)
# + [markdown] id="aDK4LPCl3VvU"
# The else statement
# + colab={"base_uri": "https://localhost:8080/"} id="MCe9XHjm3X0A" outputId="a77497a4-b41d-4677-a1c5-87d82525c334"
i=1
while i<6:
print(i)
i+=1
else:
print("i is no longer less than 6")
# + [markdown] id="u7OIbM9A4Fur"
# ### Application 1
# Create a Python program that displays:
# Hello 0
# Hello 1
# Hello 2
# Hello 3
# Hello 4
# Hello 5
# Hello 6
# Hello 7
# Hello 8
# Hello 9
# Hello 10 (in vertical display)
# + colab={"base_uri": "https://localhost:8080/"} id="fxCEUVvy4mQk" outputId="b23b6cf6-d1b5-4c1b-8fd3-ec651178fadc"
hello=["Hello 0","Hello 1", "Hello 2", "Hello 3", "Hello 4", "Hello 5", "Hello 6","Hello 7", "Hello 8", "Hello 9", "Hello 10"]
for x in hello:
print(x)
# + colab={"base_uri": "https://localhost:8080/"} id="YdT-ZICU5F_y" outputId="a818ea1c-b3af-444c-ec97-65b39ceec542"
i=0
while i<=10:
print("Hello",i)
i+=1
# + [markdown] id="3iXyqvY25my_"
# ### Application 2
# Create a Python program that displays integers less than 10 but not less than 3.
# + colab={"base_uri": "https://localhost:8080/"} id="a4NyCU89C9oR" outputId="c22d9b8a-5a14-4574-bcd8-6cd67ac4ec51"
integers=[9,8,7,6,5,4,3]
for x in integers:
print(x)
# + colab={"base_uri": "https://localhost:8080/"} id="4x6ecpVT_q03" outputId="bf589c33-cb25-4307-8395-f6a6d5f31ad0"
for x in range(3,10):
print(x)
# + colab={"base_uri": "https://localhost:8080/"} id="jvGgYOTnBHuE" outputId="618f9f45-5d52-44eb-b763-98e50142f2ee"
i=3
while i<10:
print(i)
if i==10:
break
i+=1
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).
#
# Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
NAME = "<NAME>"
COLLABORATORS = ""
# ---
# + [markdown] deletable=false editable=false nbgrader={"checksum": "9c6aafba73c5d0229e2a9f1e316371d2", "grade": false, "grade_id": "cell-1cec5ee110f26162", "locked": true, "schema_version": 1, "solution": false}
# # Practical Exercise 6: Simple Linear Regression
#
# In this exercise we will analyze the ```Cereals``` dataset, which contains a list of cereal brands with nutritional data and consumer ratings. We will use simple linear regression to find out which of the nutritional factors best explains the ratings.
# + [markdown] deletable=false editable=false nbgrader={"checksum": "bfb8222296765cab858e1062be4c478f", "grade": false, "grade_id": "cell-7c2014d5328a1027", "locked": true, "schema_version": 1, "solution": false}
# ## Loading the data
#
# We will load the data with the ```pandas``` library. Don't worry if you are not familiar with the library, since our goal is only to extract the data matrix $X$. A description of the dataset follows, taken from [here](http://statweb.stanford.edu/~owen/courses/202/Cereals.txt).
#
# * Datafile Name: Cereals
# * Datafile Subjects: Food , Health
# * Story Names: Healthy Breakfast
# * Reference: Data available at many grocery stores
# * Authorization: free use
# * Description: Data on several variable of different brands of cereal.
#
# A value of -1 for nutrients indicates a missing observation.
# Number of cases: 77
# Variable Names:
#
# 1. Name: Name of cereal
# 2. mfr: Manufacturer of cereal where A = American Home Food Products; G =
# General Mills; K = Kelloggs; N = Nabisco; P = Post; Q = Quaker Oats; R
# = Ralston Purina
# 3. type: cold or hot
# 4. calories: calories per serving
# 5. protein: grams of protein
# 6. fat: grams of fat
# 7. sodium: milligrams of sodium
# 8. fiber: grams of dietary fiber
# 9. carbo: grams of complex carbohydrates
# 10. sugars: grams of sugars
# 11. potass: milligrams of potassium
# 12. vitamins: vitamins and minerals - 0, 25, or 100, indicating the typical percentage of FDA recommended
# 13. shelf: display shelf (1, 2, or 3, counting from the floor)
# 14. weight: weight in ounces of one serving
# 15. cups: number of cups in one serving
# 16. rating: a rating of the cereals
# + deletable=false editable=false nbgrader={"checksum": "ca61fccce8bfb24fe88241464ceb2729", "grade": false, "grade_id": "cell-1cef18acd2d00556", "locked": true, "schema_version": 1, "solution": false}
import pandas as pd
df = pd.read_table('cereal.txt',sep='\s+',index_col='name')
df
# + [markdown] deletable=false editable=false nbgrader={"checksum": "39d8fb90c721f30e8ee0c01c77cb8e88", "grade": false, "grade_id": "cell-b87a84ce6b9f7ac0", "locked": true, "schema_version": 1, "solution": false}
# Next we will remove the rows for cereals that have missing data, represented by the value -1.
# We will also remove the categorical columns 'mfr' and 'type', and the numerical columns 'shelf', 'weight' and 'cups'.
# + deletable=false editable=false nbgrader={"checksum": "287efe9551d4e5eb93eafed223d107b2", "grade": false, "grade_id": "cell-d3877019e42d35fa", "locked": true, "schema_version": 1, "solution": false}
import numpy as np
new_df = df.replace(-1,np.nan)
new_df = new_df.dropna()
new_df = new_df.drop(['mfr','type','shelf','weight','cups'],axis=1)
new_df
# + [markdown] deletable=false editable=false nbgrader={"checksum": "88bce48af0dfea035271e7bb3349286c", "grade": false, "grade_id": "cell-494d8a89eb2bbc25", "locked": true, "schema_version": 1, "solution": false}
# Finally, we will convert the numerical nutrition data in ```new_df``` to a matrix ```dados``` and the ratings to a vector $y$. The cereal names will be saved in a list ```cereral_names``` and the column names in a list ```col_names```.
# + deletable=false editable=false nbgrader={"checksum": "d401e318147be000a788cd18dde6c7dc", "grade": false, "grade_id": "cell-7d9673ae2b1f4679", "locked": true, "schema_version": 1, "solution": false}
cereral_names = list(new_df.index)
print('Cereals:',cereral_names)
col_names = list(new_df.columns)
print('Columns:',col_names)
dados = new_df.drop('rating', axis=1).values
print('The dimensions of dados are:',dados.shape)
y = new_df['rating'].values
print('The dimensions of y are:',y.shape)
# + [markdown] deletable=false editable=false nbgrader={"checksum": "5f812e5471050572889b800dd0b6ca55", "grade": false, "grade_id": "cell-6ffca3d9ce8879dc", "locked": true, "schema_version": 1, "solution": false}
# ## Estimating the parameters of the simple linear regression
#
# What is the relationship between the rating $y$ and the number of calories $x$ of a cereal? To answer this question, consider a simple linear regression
# $$
# y = \beta_0 + \beta_1 x.
# $$
# To find the coefficients $\beta_0$ and $\beta_1$ by the least squares method, we just solve the system
# $$
# \begin{bmatrix}
# n & \sum_i x^{(i)} \\
# \sum_i x^{(i)} & \sum_i (x^{(i)})^2
# \end{bmatrix}
# \begin{bmatrix}
# \beta_0 \\ \beta_1
# \end{bmatrix}
# =
# \begin{bmatrix}
# \sum_i y^{(i)} \\ \sum_i x^{(i)} y^{(i)}
# \end{bmatrix}
# $$
#
# Therefore, to find $\beta_0$ and $\beta_1$, you need to
# 1. Compute the matrix
# $$
# A = \begin{bmatrix}
# n & \sum_i x^{(i)} \\
# \sum_i x^{(i)} & \sum_i (x^{(i)})^2
# \end{bmatrix}
# $$
# and the vector
# $$
# c = \begin{bmatrix}
# \sum_i y^{(i)} \\ \sum_i x^{(i)} y^{(i)}
# \end{bmatrix}
# $$
# 2. Solve $A \beta = c$, where $\beta$ is the vector of coefficients.
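# Solving the $2 \times 2$ system by hand (Cramer's rule) gives the familiar closed form, with the same symbols as above:
# $$
# \beta_1 = \frac{n \sum_i x^{(i)} y^{(i)} - \sum_i x^{(i)} \sum_i y^{(i)}}{n \sum_i (x^{(i)})^2 - \left(\sum_i x^{(i)}\right)^2},
# \qquad
# \beta_0 = \frac{\sum_i y^{(i)} - \beta_1 \sum_i x^{(i)}}{n}.
# $$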
# + [markdown] deletable=false editable=false nbgrader={"checksum": "d6f6c4fdb0de37fff030bc15ace79a8c", "grade": false, "grade_id": "cell-ee05121a8c6a4721", "locked": true, "schema_version": 1, "solution": false}
# **Exercise:** Find the coefficients $\beta_0$ and $\beta_1$ when the independent variable is ```calories```. Hint: this variable is stored in the first column of the matrix ```dados```.
# + deletable=false nbgrader={"checksum": "6a1d7a3aec4f18331524b3129050e721", "grade": false, "grade_id": "cell-32c2324937d96046", "locked": false, "schema_version": 1, "solution": true}
### Step 0: obtain the vector of observations x (~1 line)
x = dados[:,0]
def regressaoLinearSimples(x, y):
    ### Step 1: obtain the number of observations n (~1 line)
n = len(x)
    ### Step 2: compute the matrix A and the vector c (~2 lines)
A = np.array([[n, x.sum()], [x.sum(), (x**2).sum()]])
c = np.array([[y.sum()], [(x*y).sum()]])
    ### Step 3: solve A beta = c (~1 line, using np.linalg.solve)
beta = np.transpose(np.linalg.solve(A,c))
return beta
beta = regressaoLinearSimples(x,y)
print('beta:',beta)
# + deletable=false editable=false nbgrader={"checksum": "0525ab4e7cdfb98e4959addba288e41d", "grade": true, "grade_id": "cell-de5415d6d60981ed", "locked": true, "points": 1, "schema_version": 1, "solution": false}
assert np.allclose(beta, np.array([ 94.88442777, -0.49064841]))
# + [markdown] deletable=false editable=false nbgrader={"checksum": "100fac0b74f261f8c2abd351aaaf860a", "grade": false, "grade_id": "cell-822f2e846945989f", "locked": true, "schema_version": 1, "solution": false}
# **Exercise:** Now we will evaluate the quality of the regression. Using the parameters obtained in the previous step, compute the deviation
#
# $$
# D = \sum_{i=1}^n (\hat y^{(i)} - y^{(i)})^2
# $$
# + deletable=false nbgrader={"checksum": "6000b486dd3b63b08f20575e9a64802d", "grade": false, "grade_id": "cell-7fdfbc967fc7ecd5", "locked": false, "schema_version": 1, "solution": true}
def calculaDesvio(x, y, beta):
    ### Step 1: obtain the number of observations n (~1 line)
n = len(x)
    ### Step 2: compute the vector of predictions yhat (~1 to 2 lines)
yhat = np.zeros(n)
yhat = np.transpose(beta)[0] + np.transpose(beta)[1]*x
    ### Step 3: compute the deviation (~1 to 3 lines)
desvio = 0.0
for i in range(0,n):
desvio += ((yhat[i]-y[i])**2)
return desvio
# + deletable=false editable=false nbgrader={"checksum": "9f48e557459534065bebf6baf813a81c", "grade": true, "grade_id": "cell-da578eb2b1f68812", "locked": true, "points": 1, "schema_version": 1, "solution": false}
assert np.round(calculaDesvio(x,y,beta),3) == 7456.811
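As an aside, the per-element loop in `calculaDesvio` can also be written in vectorized NumPy; a sketch assuming the same `(1, 2)` beta layout returned by `regressaoLinearSimples`:

```python
import numpy as np

def squared_deviation(x, y, beta):
    # beta has shape (1, 2): [[beta0, beta1]]
    b0, b1 = beta.ravel()
    yhat = b0 + b1 * x               # vectorized predictions
    return ((yhat - y) ** 2).sum()   # sum of squared residuals

# Tiny check on an exact fit: y = 1 + 2x gives zero deviation.
x = np.array([0.0, 1.0, 2.0])
y = 1.0 + 2.0 * x
print(squared_deviation(x, y, np.array([[1.0, 2.0]])))  # 0.0
```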
# + [markdown] deletable=false editable=false nbgrader={"checksum": "77c6b4d11cb9a608c66f32f52436f91b", "grade": false, "grade_id": "cell-ea42d8af3335d370", "locked": true, "schema_version": 1, "solution": false}
# **Exercise:** Finally, we will write a loop to evaluate which of the independent variables yields the **smallest** deviation when used in the simple linear regression.
# + deletable=false nbgrader={"checksum": "557e7bc82e9134845d21109858e7e291", "grade": false, "grade_id": "cell-9863e6bf0e08068f", "locked": false, "schema_version": 1, "solution": true}
# Step 1: obtain the number of columns (independent variables) we will iterate over (~1 line)
ncols = len(dados[0,:])
# initialize the variables min_desvio and melhor_coluna
min_desvio = np.inf
melhor_coluna = 0
for j in range(ncols):
    # Step 2: obtain x, compute the regression parameters beta, then compute the deviation (~3 lines)
x = dados[:,j]
beta = regressaoLinearSimples(x,y)
desvio = calculaDesvio(x,y,beta)
    print('Column: {}, Deviation: {}'.format(j, desvio))
    # Step 3: update the variables min_desvio and melhor_coluna (~3 lines)
if min_desvio > desvio:
min_desvio = desvio
melhor_coluna = j
# Step 4: print the name of the best column using the variable col_names (~1 line)
print('Best column:', col_names[melhor_coluna])
# + deletable=false editable=false nbgrader={"checksum": "a95b3349f99cc925b0617cfcb70840ac", "grade": true, "grade_id": "cell-63b1652ea55bc0ea", "locked": true, "points": 2, "schema_version": 1, "solution": false}
### hidden tests
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import lib:
import matplotlib.pyplot as plt
# The parameter cell is tagged with **parameters**:
# + tags=["parameters"]
# Pie chart, where the slices will be ordered and plotted counter-clockwise:
labels = 'Frogs', 'Hogs', 'Dogs', 'Logs'
sizes = [15, 30, 45, 10]
explode = (0, 0.1, 0, 0) # only "explode" the 2nd slice (i.e. 'Hogs')
# -
# Show chart:
# +
fig1, ax1 = plt.subplots()
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=90)
ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
plt.show()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.10 64-bit (''PythonData'': conda)'
# name: python3
# ---
# # Imports
# +
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
import tensorflow as tf
import helper
pd.set_option("display.max_columns", 200)
# -
# ## Load database tables into Pandas dataframe
#
# Using our helper script with db credentials/connection pre-defined
raw_df = helper.get_all().set_index('appid')
raw_df.head(2)
# ## Drop columns we won't be predicting on to shrink the dataframe
dropcolumns = ['required_age','supported_languages','developers','publishers','achievements','linux','mac','windows','date','owners','average_forever','average_2weeks','median_forever','median_2weeks','ccu','name']
trimmed_df = raw_df.drop(columns = dropcolumns)
trimmed_df.head(2)
# ## Filter out any rows 'coming soon'
out_now_df = trimmed_df[~trimmed_df['coming_soon']].drop(columns='coming_soon')
out_now_df.head(2)
# ## Filter out rows with not enough data to make an interesting prediction
# Filter on at least 100 total reviews
# +
total_reviews = out_now_df['positive'] + out_now_df['negative']
filter_reviews_df = out_now_df[total_reviews > 100]
filter_reviews_df.head(2)
# -
# ## Define values for 'good' or 'bad' user review ratios and make categorical bins from the result
# Investigate what the distribution of review ratios is
ratios = out_now_df['positive'].div(out_now_df['negative'])
print(ratios.describe())
quantiles = ratios.quantile([0.1, 0.25, 0.50, 0.75]).to_numpy()  # .get_values() was removed in pandas 1.0
print(quantiles)
# Create bins for the quantiles above
def bin_ratio(ratio):
if ratio < quantiles[0]:
return f'LT10%'
if ratio < quantiles[1]:
return f'LT25%'
if ratio < quantiles[2]:
return f'LT50%'
if ratio < quantiles[3]:
return f'LT75%'
else:
return f'GT75%'
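As an aside, the same quantile binning can be sketched with `pd.qcut`, which cuts on quantile edges directly. The labels below mirror the ones above; boundary handling may differ slightly from the strict `<` comparisons in `bin_ratio`:

```python
import numpy as np
import pandas as pd

ratios = pd.Series(np.arange(1, 101, dtype=float))  # stand-in review ratios

labels = ['LT10%', 'LT25%', 'LT50%', 'LT75%', 'GT75%']
# qcut computes the quantile edges itself and assigns a label per bin
binned = pd.qcut(ratios, q=[0, 0.10, 0.25, 0.50, 0.75, 1.0], labels=labels)
print(binned.value_counts().sort_index())
```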
# ## Apply the binning function and drop the original review count columns
binned_df = out_now_df.drop(columns = ['positive', 'negative'])
binned_df['quantiles'] = ratios.apply(bin_ratio)
binned_df.head(2)
# ## Convert categorical columns into binary dummy columns
#
# Pivot a column with lists as values into a series of columns for each distinct value in any of the contained lists.
#
# The resulting columns will have a '1' value if the value was in that rows list of values, making them appropriately dummied values for later ML processing
def explodelist(column_name, alias, dataframe):
df = dataframe[[column_name]]
test = df.join(pd.Series(df[column_name].apply(eval).apply(pd.Series).stack().reset_index(1, drop=True),
name=alias))
dummies_testdf = pd.get_dummies(test.drop(column_name, axis=1),
columns=[alias]).groupby(test.index).sum()
return dummies_testdf.join(dataframe).drop(columns= column_name)
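A rough alternative sketch: since pandas 0.25, `Series.explode` can replace the `apply(pd.Series).stack()` round trip, and `ast.literal_eval` is a safer stand-in for `eval` when parsing list-valued strings (toy data and a hypothetical column name below):

```python
import ast
import pandas as pd

# List-valued column stored as strings, as in scraped data
df = pd.DataFrame({'genres': ["['Action', 'Comedy']", "['Comedy']"]}, index=[10, 20])

exploded = df['genres'].apply(ast.literal_eval).explode()  # one row per list element
dummies = pd.get_dummies(exploded, prefix='genre').groupby(level=0).max()

print(list(dummies.columns))                 # ['genre_Action', 'genre_Comedy']
print(int(dummies.loc[20, 'genre_Comedy']))  # 1
```

`ast.literal_eval` only parses literals, so a malicious string in the data cannot execute code the way it could with `eval`.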
# Apply the newly created function to columns containing lists and dummy the remaining categorical columns
# +
list_cols = [('categories', 'category'), ('genres', 'genre')]
dummy_columns = ['quantiles']
dummy_df = binned_df
for col, alias in list_cols:
dummy_df = explodelist(col, alias, dummy_df)
dummy_df = pd.get_dummies(dummy_df, columns=dummy_columns, dtype=float)
dummy_df.shape
# -
# ## Split X and Y data
y_columns = [x for x in dummy_df.columns if x.startswith('quantiles')]
y_columns
X = dummy_df.drop(columns = y_columns)
Y = dummy_df[y_columns]
print(Y.columns)
X.head(2)
# ## Apply StandardScaler to any numeric columns
# First split out training and testing data
X_train, X_test, y_train, y_test = train_test_split(X, Y, random_state = 50)
# ## Use a ColumnTransformer so we don't have to pass the dummy columns into the scaler
# +
scale_cols = ['price']
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler
ct = ColumnTransformer([
('ct', StandardScaler(), scale_cols)
], remainder='passthrough')
X_train_scaled = ct.fit_transform(X_train)
X_test_scaled = ct.transform(X_test)  # transform only: reuse the scaler fitted on the training data
X_train_scaled
# -
# ## Define Neural Network architecture
# +
nn2 = tf.keras.models.Sequential()
nn2.add(tf.keras.layers.Dense(units=50, activation="relu",input_dim=X_train_scaled.shape[1]))
nn2.add(tf.keras.layers.Dense(units=40, activation="relu"))
nn2.add(tf.keras.layers.Dense(units=30, activation="relu"))
nn2.add(tf.keras.layers.Dense(units=20, activation="relu"))
nn2.add(tf.keras.layers.Dropout(0.3))
nn2.add(tf.keras.layers.Dense(units=len(y_columns), activation="sigmoid"))
nn2.summary()
# +
nn2.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
fit_model = nn2.fit(X_train_scaled, y_train, epochs=10)
# -
# ## Print out accuracy and loss based on test data
model_loss, model_accuracy = nn2.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: MindSpore-1.0.1
# language: python
# name: mindspore-1.0.1
# ---
# # <center/> Custom Debugging Walkthrough
# ## Overview
# This document uses the [quick start](https://gitee.com/mindspore/docs/blob/master/docs/sample_code/lenet/lenet.py) as an example. We build custom debugging helpers (`Callback`, `metrics`, the Print operator, log printing, the data Dump feature), add them to the code, and use the run results to show how to use the custom debugging capabilities MindSpore provides to debug a training network quickly.
# The walkthrough proceeds as follows:
# 1. Prepare the data.
# 2. Define the LeNet5 deep neural network.
# 3. Build a StopAtTime class with the Callback mechanism to stop training after a time limit.
# 4. Set the logging environment variables.
# 5. Enable the synchronous Dump feature.
# 6. Define the training network and run training.
# 7. Run testing.
# 8. Read and display the operator output data.
#
# > This walkthrough targets a GPU environment.
# ## Data Preparation
# ### Downloading the dataset
# Here we take a random image from the MNIST dataset and augment it into a data format suitable for the LeNet network (for details see [mindspore_quick_start.ipynb](https://gitee.com/mindspore/docs/blob/master/docs/mindspore/programming_guide/source_zh_cn/quick_start/quick_start.ipynb)).
# Run the following commands in Jupyter Notebook to fetch the dataset:
# !mkdir -p ./datasets/MNIST_Data/train ./datasets/MNIST_Data/test
# !wget -NP ./datasets/MNIST_Data/train https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte --no-check-certificate
# !wget -NP ./datasets/MNIST_Data/train https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte --no-check-certificate
# !wget -NP ./datasets/MNIST_Data/test https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte --no-check-certificate
# !wget -NP ./datasets/MNIST_Data/test https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte --no-check-certificate
# !tree ./datasets/MNIST_Data
# `custom_debugging_info.ipynb` is this document.
# ### Dataset augmentation
# The downloaded dataset needs to be processed with `mindspore.dataset` into data usable by the MindSpore framework, and then augmented with the tools the framework provides to meet the data-processing requirements of the LeNet network.
# +
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as CV
import mindspore.dataset.transforms.c_transforms as C
from mindspore.dataset.vision import Inter
from mindspore import dtype as mstype
def create_dataset(data_path, batch_size=32, repeat_size=1,
num_parallel_workers=1):
""" create dataset for train or test
Args:
data_path (str): Data path
batch_size (int): The number of data records in each group
repeat_size (int): The number of replicated data records
num_parallel_workers (int): The number of parallel workers
"""
# define dataset
mnist_ds = ds.MnistDataset(data_path)
# define operation parameters
resize_height, resize_width = 32, 32
rescale = 1.0 / 255.0
shift = 0.0
rescale_nml = 1 / 0.3081
shift_nml = -1 * 0.1307 / 0.3081
# define map operations
trans_image_op=[
CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR),
CV.Rescale(rescale_nml, shift_nml),
CV.Rescale(rescale, shift),
CV.HWC2CHW()
]
type_cast_op = C.TypeCast(mstype.int32)
# apply map operations on images
mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns="label", num_parallel_workers=num_parallel_workers)
mnist_ds = mnist_ds.map(operations=trans_image_op, input_columns="image", num_parallel_workers=num_parallel_workers)
# apply DatasetOps
buffer_size = 10000
mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size)
mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
mnist_ds = mnist_ds.repeat(repeat_size)
return mnist_ds
# -
# ## Defining the LeNet5 deep neural network
# For the MNIST dataset we use the LeNet5 network: first initialize the convolution and fully connected layers, then build the neural network in `construct`.
# +
from mindspore.common.initializer import Normal
import mindspore.nn as nn
class LeNet5(nn.Cell):
"""Lenet network structure."""
def __init__(self):
super(LeNet5, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5, pad_mode="valid")
self.conv2 = nn.Conv2d(6, 16, 5, pad_mode="valid")
self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02))
self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02))
self.fc3 = nn.Dense(84, 10)
self.relu = nn.ReLU()
self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
self.flatten = nn.Flatten()
def construct(self, x):
x = self.max_pool2d(self.relu(self.conv1(x)))
x = self.max_pool2d(self.relu(self.conv2(x)))
x = self.flatten(x)
x = self.relu(self.fc1(x))
x = self.relu(self.fc2(x))
x = self.fc3(x)
return x
# -
# ## Building the custom callback StopAtTime
# Using the Callback base class, we build a training timer `StopAtTime`. Its base class (located in the source at `/mindspore/nn/callback`) is:
# ```python
# class Callback():
# def begin(self, run_context):
# pass
# def epoch_begin(self, run_context):
# pass
# def epoch_end(self, run_context):
# pass
# def step_begin(self, run_context):
# pass
# def step_end(self, run_context):
# pass
# def end(self, run_context):
# pass
# ```
# - `begin`: executed when training starts.
# - `epoch_begin`: executed at the start of each epoch.
# - `epoch_end`: executed at the end of each epoch.
# - `step_begin`: executed at the start of each step.
# - `step_end`: executed at the end of each step.
# - `end`: executed when training ends.
#
# Besides the methods of the base class above, there is also a parameter `run_context`, a class that stores various parameters of the training run. Here we place `print(cb_params.list_callback)` in `end` (you could also use `print(cb_params)` to print all parameter information, but since there is a lot of it we pick only one parameter as an example). After training completes, we will use the printed information to briefly explain the meaning of the parameters in `run_context`. Now we build the training timer:
# +
from mindspore.train.callback import Callback
import time
class StopAtTime(Callback):
def __init__(self, run_time):
super(StopAtTime, self).__init__()
self.run_time = run_time*60
def begin(self, run_context):
cb_params = run_context.original_args()
cb_params.init_time = time.time()
def step_end(self, run_context):
cb_params = run_context.original_args()
epoch_num = cb_params.cur_epoch_num
step_num = cb_params.cur_step_num
loss = cb_params.net_outputs
cur_time = time.time()
if (cur_time - cb_params.init_time) > self.run_time:
print("epoch: ", epoch_num, " step: ", step_num, " loss: ", loss)
run_context.request_stop()
def end(self, run_context):
cb_params = run_context.original_args()
print(cb_params.list_callback)
# -
# ## Enabling the synchronous Dump feature
# This example uses the synchronous Dump feature to export the output data of the forward and backward propagation operators in each iteration. The exported data is useful when analyzing and tuning the training strategy; to export more data, see the [official tutorial](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#dump).
# +
import os
import json
abspath = os.getcwd()
data_dump = {
"common_dump_settings": {
"dump_mode": 0,
"path": abspath + "/data_dump",
"net_name": "LeNet5",
"iteration": 0,
"input_output": 2,
"kernels": ["Default/network-WithLossCell/_backbone-LeNet5/flatten-Flatten/Reshape-op118"],
"support_device": [0,1,2,3,4,5,6,7]
},
"e2e_dump_settings": {
"enable": True,
"trans_flag": False
}
}
with open("./data_dump.json", "w", encoding="GBK") as f:
json.dump(data_dump, f)
os.environ['MINDSPORE_DUMP_CONFIG'] = abspath + "/data_dump.json"
# -
# After running the commands above, a `data_dump.json` file is generated in the working directory, laid out as follows:
# ! tree .
# Things to note when enabling the synchronous Dump feature:
#
# - `path` must be an absolute path. For example `/usr/data_dump` works, while `./data_dump` does not.
# - `enable` in `e2e_dump_settings` must be set to `True`.
#
# - The generated `data_dump.json` file must be added to the system environment variables.
# ## Setting the logging environment variables
# MindSpore uses `glog` for log output; here we send the logs to the screen:
#
# `GLOG_v`: controls the log level. The default is 2, i.e. the WARNING level; the mapping is 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR. We set it to 1 here.
#
# `GLOG_logtostderr`: controls where the logs go. Set to `1`, logs are printed to the screen; set to `0`, logs are written to a file. With screen output, part of the log appears in red; with file output, a `mindspore.log` file is created under the `GLOG_log_dir` path.
#
# > For more settings see the official site: <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html>
# +
import os
from mindspore import log as logger
os.environ['GLOG_v'] = '1'
os.environ['GLOG_logtostderr'] = '1'
os.environ['GLOG_log_dir'] = 'D:/' if os.name=="nt" else '/var/log/mindspore'
os.environ['logger_maxBytes'] = '5242880'
os.environ['logger_backupCount'] = '10'
print(logger.get_log_config())
# -
# The printed `GLOG_v` level is `INFO`.
#
# The output mode `GLOG_logtostderr` is `1`, i.e. screen output.
# ## Defining the training network and running training
# ### Defining the training network
# In this step we first delete the previously generated `.ckpt` and `.meta` model files, then configure the parameters the model needs in `Model`.
# +
from mindspore import context, Model
from mindspore.nn import Accuracy
from mindspore.nn import SoftmaxCrossEntropyWithLogits
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor
# clean files
if os.name == "nt":
os.system('del/f/s/q *.ckpt *.meta')
else:
os.system('rm -f *.ckpt *.meta *.pb')
context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
lr = 0.01
momentum = 0.9
epoch_size = 3
train_data_path = "./datasets/MNIST_Data/train"
eval_data_path = "./datasets/MNIST_Data/test"
model_path = "./models/ckpt/custom_debugging_info/"
net_loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
repeat_size = 1
network = LeNet5()
metrics = {
'accuracy': nn.Accuracy(),
'loss': nn.Loss(),
'precision': nn.Precision(),
'recall': nn.Recall(),
'f1_score': nn.F1()
}
net_opt = nn.Momentum(network.trainable_params(), lr, momentum)
config_ck = CheckpointConfig(save_checkpoint_steps=1875, keep_checkpoint_max=10)
ckpoint_cb = ModelCheckpoint(prefix="checkpoint_lenet", directory=model_path, config=config_ck)
model = Model(network, net_loss, net_opt, metrics=metrics)
# -
# ### Running Training
# When building the training network we pass three callbacks to `model.train`: `ckpoint_cb`, `LossMonitor`, and `stop_cb`. They are:
#
# `ckpoint_cb`: the `ModelCheckpoint` callback that saves the model.
#
# `LossMonitor`: a loss monitor that prints the loss value at each step of training.
#
# `stop_cb`: the `StopAtTime` training timer constructed earlier.
# We set the training timer `StopAtTime` to 36 seconds, i.e. `run_time=0.6` (minutes).
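# `StopAtTime` itself is defined in an earlier cell of the full notebook. As a rough,
# framework-agnostic sketch of its timing logic (the real class subclasses MindSpore's
# `Callback` and calls `run_context.request_stop()` from its `step_end` hook; the class
# and method names below are illustrative only):

```python
import time

class StopAtTimeSketch:
    """Time-based stopping rule; run_time is given in minutes."""
    def __init__(self, run_time):
        self.run_time = run_time * 60  # minutes -> seconds
        self.start = None

    def begin(self):
        # in MindSpore, this logic would live in Callback.begin(run_context)
        self.start = time.time()

    def should_stop(self):
        # in MindSpore, step_end(run_context) would call run_context.request_stop()
        return time.time() - self.start > self.run_time

timer = StopAtTimeSketch(run_time=0.6)
timer.begin()
print(timer.run_time)  # 36.0 (seconds)
```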
print("============== Starting Training ==============")
ds_train = create_dataset(train_data_path, repeat_size = repeat_size)
stop_cb = StopAtTime(run_time=0.6)
model.train(epoch_size, ds_train, callbacks=[ckpoint_cb, LossMonitor(375), stop_cb], dataset_sink_mode=False)
# The printed output above falls into two parts:
# - Log messages:
#     - The `[INFO]` lines are the log output; since there are no warnings, they mainly record the key steps of training.
#
# - Callback output:
#     - `LossMonitor`: the loss value at each step.
#     - `StopAtTime`: at the end of each epoch, and when the time limit is reached, it prints the total training time of the current epoch (in milliseconds), the time spent per step, and the average loss. At the end of training it also prints `run_context.list_callback`, the list of callbacks used in this run. `run_context.original_args` additionally contains:
#         - `train_network`: the network and its parameters.
#         - `epoch_num`: the number of training epochs.
#         - `batch_num`: the number of steps per epoch.
#         - `mode`: the mode of the `Model`.
#         - `loss_fn`: the loss function in use.
#         - `optimizer`: the optimizer in use.
#         - `parallel_mode`: the parallelism mode.
#         - `device_number`: the number of training devices.
#         - `train_dataset`: the training dataset.
#         - `list_callback`: the callbacks in use.
#         - `train_dataset_element`: the data of the current batch.
#         - `cur_step_num`: the current training step.
#         - `cur_epoch_num`: the current epoch.
#         - `net_outputs`: the network outputs.
#
# Almost all of the important data produced during training can be obtained through callbacks, which makes `Callback` one of the most commonly used tools in custom debugging.
# ## Running the Test
# The custom `metrics` defined above are invoked inside `model.eval`; besides the model's prediction accuracy, the scores under other criteria such as `recall` and `f1_score` are printed as well:
print("============== Starting Testing ==============")
ds_eval = create_dataset(eval_data_path, repeat_size=repeat_size)
acc = model.eval(ds_eval,dataset_sink_mode = False)
print("============== Accuracy:{} ==============".format(acc))
# The `Accuracy` part of the output is controlled by the `metrics` setting: it shows the model's prediction accuracy together with the scores under the other criteria (over classes 0-9). For how each metric is computed, search for `mindspore.nn` in the official documentation; we will not go into the details here.
# ## Reading and Displaying the Operator Dump Data
# After the training above finishes, the dumped training data can be found in the `data_dump` folder. With the settings in this example's `data_dump.json`, the data of each iteration is stored under `data_dump/LeNet5/device_0/` in folders named `iteration_{iteration number}`. Each operator's output file has the `.bin` suffix and can be read with `numpy.fromfile`.
#
# Here we pick a random operator output file from iteration 400 and display it:
# +
import numpy as np
import random
dump_data_path = "./data_dump/LeNet5/device_0/iteration_400/"
ops_output_file = random.choice(os.listdir(dump_data_path))
print("ops name:", ops_output_file, "\n")
ops_dir = dump_data_path + ops_output_file
ops_output = np.fromfile(ops_dir)
print("ops output value:", ops_output, "\n")
print("the shape of ops output:", ops_output.shape)
# -
# ## Summary
#
# This example trained the LeNet5 network on the MNIST dataset with custom debugging functions woven into the code, demonstrating how to use them and part of what they can do, and used the dump mechanism to export the operator outputs we needed, giving a hands-on sense of how convenient these custom debugging tools are.
| docs/notebook/mindspore_custom_debugging_info.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="o5H3tWliUd2J" outputId="6414437a-f47e-4500-a642-f4a128585e22"
# !pip install neural_tangents
# + id="UcJ6sMI5Ud2N"
import numpy as np
import matplotlib.pyplot as plt
import jax.numpy as jnp
from jax import random
from jax.experimental import optimizers
from jax import jit, grad, vmap
from jax.experimental.ode import odeint
from jax.config import config
config.update("jax_enable_x64", True)
import neural_tangents as nt
from neural_tangents import stax
def ensure_dir(file_path):
import os
directory = os.path.dirname(file_path)
if not os.path.exists(directory):
os.makedirs(directory)
root_dir = "beneficial_detrimental/"
ensure_dir(root_dir)
# + [markdown] id="c3FvmmZJUd2O"
# ## Define Functions to Compute Theoretical Learning Curves
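# The functions below implement (schematically, reading directly off the code) the
# self-consistent equations for kernel-regression learning curves: with kernel
# eigenvalues $\eta_i$, target coefficients $\bar w_i$ (the `coeffs` array), training
# set size $P$, and ridge $\lambda$, the uniform-test-measure generalization error is

```latex
\kappa = \lambda + \kappa \sum_i \frac{\eta_i}{\eta_i P + \kappa}, \qquad
\gamma = P \sum_i \frac{\eta_i^2}{\left(\eta_i P + \kappa\right)^2}, \qquad
E_g = \frac{\kappa^2}{1-\gamma} \sum_i \frac{\bar w_i^2}{\left(\eta_i P + \kappa\right)^2}.
```

# `Eg_wrt_test` then reweights this expression by the learned test measure through the
# matrix `O` built from the softmax weights.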
# + id="CI01ZzJsUd2Q"
# softmax turns an unconstrained logit vector into a probability measure
softmax = jit(lambda a: jnp.exp(a) / jnp.exp(a).sum())
@jit
def func(kappa, t, params):
P, lamb, eigs = params
return lamb + kappa * jnp.sum( eigs / (eigs * P + kappa) ) - kappa
@jit
def solve_kappa(P, lamb, eigs):
return odeint(func, lamb, jnp.linspace(0.0,250.0, 25), (P,lamb, eigs))[-1]
@jit
def Eg_wrt_test(a, params):
eigs, phi0, y, P, lamb, train_dist = params
sm = softmax(a)
O = phi0.T @ jnp.diag(sm) @ phi0
coeffs = (jnp.diag(train_dist)@phi0).T @ y
kappa = solve_kappa(P, lamb, eigs)
gamma = P * jnp.sum(eigs**2/(eigs* P + kappa)**2)
Eg_old = kappa**2/(1-gamma) * jnp.sum(coeffs**2 / (eigs*P + kappa)**2)
f = jnp.sum(jnp.diag(O) * eigs**2/(eigs*P + kappa)**2)/jnp.sum(eigs**2/(eigs*P + kappa)**2)
G_mid = kappa / (kappa + eigs * P) * coeffs
Eg = Eg_old * f + jnp.dot(G_mid, (O - f*jnp.eye(O.shape[0])) @ G_mid)
return Eg
grad_wrt_test = grad(Eg_wrt_test)
def kr_expt_measure_test(Pvals, prob_dist, K, y, lamb, train_dist):
num_repeat = 35
errs = np.zeros((len(Pvals), num_repeat))
key = random.PRNGKey(0)
for n in range(num_repeat):
for i,P in enumerate(Pvals):
_, key = random.split(key)
inds = np.random.choice(K.shape[0], int(P), p=train_dist, replace = True)
Ki = K[inds,:]
Kii = Ki[:,inds]
yi = y[inds]
alpha = np.linalg.lstsq(Kii+lamb*np.eye(int(P)), yi, rcond=None)[0]
yhat = Ki.T @ alpha
errs[i,n] = np.sum(prob_dist * (yhat - y)**2)
return np.mean(errs, axis = 1), np.std(errs, axis = 1)
def predictor_uniform(Pvals, X_train, X_test, y, lamb, num_repeat = 50):
predictor = np.zeros((len(Pvals), num_repeat, len(y)))
key = random.PRNGKey(0)
K_tr = kernel_fn(X_train, X_train, 'ntk')
K_te = kernel_fn(X_train, X_test, 'ntk')
for n in range(num_repeat):
for i,P in enumerate(Pvals):
_, key = random.split(key)
            inds = np.random.choice(K_tr.shape[0], int(P), replace = True)
Ki = K_tr[inds,:]
Ki = Ki[:,inds]
yi = y[inds]
alpha = np.linalg.lstsq(Ki+lamb*np.eye(int(P)), yi, rcond=None)[0]
predictor[i,n] = K_te[inds].T @ alpha
return predictor.mean(axis = 1), predictor.std(axis = 1)
def mnist_binary(N_tr, a, b, shuffle = False):
from tensorflow import keras
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
inds_binary_train = [i for i in range(len(y_train)) if y_train[i]==a or y_train[i]==b]
inds_binary_test = [i for i in range(len(y_test)) if y_test[i]==a or y_test[i]==b]
if shuffle:
inds_binary_train = np.random.choice(inds_binary_train, size = N_tr, replace = False)
else:
inds_binary_train = inds_binary_train[:N_tr]
x_train = x_train[inds_binary_train]
y_train = y_train[inds_binary_train]
x_train = x_train.reshape(N_tr, x_train.shape[1]*x_train.shape[2])
x_train = x_train.T - np.mean(x_train, axis = 1)
x_train = (x_train / np.linalg.norm(x_train, axis = 0)).T
x_test = x_test[inds_binary_test]
y_test = y_test[inds_binary_test]
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1]*x_test.shape[2])
x_test = x_test.T - np.mean(x_test, axis = 1)
x_test = (x_test / np.linalg.norm(x_test, axis = 0)).T
y_train = 1.0*(y_train == a) - 1.0*(y_train == b)
y_test = 1.0*(y_test == a) - 1.0*(y_test == b)
return (x_train, y_train), (x_test, y_test)
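# As a NumPy-only sanity check on the ODE-based `solve_kappa` above, the same
# self-consistent equation can be solved by plain fixed-point iteration. The power-law
# spectrum below is synthetic and purely illustrative:

```python
import numpy as np

def solve_kappa_fixed_point(P, lamb, eigs, n_iter=500):
    """Iterate kappa <- lamb + kappa * sum(eigs / (eigs * P + kappa))."""
    kappa = lamb + eigs.sum()  # any positive starting value works here
    for _ in range(n_iter):
        kappa = lamb + kappa * np.sum(eigs / (eigs * P + kappa))
    return kappa

eigs = 1.0 / np.arange(1, 101) ** 2  # synthetic power-law spectrum
kappa = solve_kappa_fixed_point(P=30, lamb=1e-4, eigs=eigs)
# the result should satisfy the self-consistent equation to numerical precision
residual = abs(kappa - (1e-4 + kappa * np.sum(eigs / (eigs * 30 + kappa))))
print(kappa > 0, residual < 1e-10)
```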
# + [markdown] id="oYXWpFRQUd2U"
# ## Download and Preprocess Data
# + colab={"base_uri": "https://localhost:8080/"} id="IhBJAKzrUd2V" outputId="210d646d-bcb1-460d-8a51-e91c646959c5"
def fully_connected(width,depth):
layers = []
for l in range(depth):
layers += [stax.Dense(width), stax.Relu()]
layers += [stax.Dense(1)]
return stax.serial(*layers)
width=5000
depth = 2
# define a RELU neural tangent kernel
init_fn, apply_fn, kernel_fn = fully_connected(width, depth)
kernel_fn = jit(kernel_fn, static_argnums=(2,))
apply_fn = jit(apply_fn)
a = 9
b = 8
total_pts = 1000
(X, y_vec), (x_test, y_test) = mnist_binary(total_pts, a, b, shuffle = False)
X = jnp.array(X)
y_vec = jnp.array(y_vec)
lamb = 1e-4
print(y_vec.shape)
# + [markdown] id="rr_j17HOUd2V"
# # Perform Eigendecomposition to get Eigenvalues and Eigenvectors of the Kernel
#
# ### Run this code before optimizing: the kernel is scaled correctly here
# + id="dW41mIbrUd2W"
K = kernel_fn(X, None, 'ntk')
eig0, phi0 = jnp.linalg.eigh(1/K.shape[0] * K)
phi0 = jnp.sqrt(K.shape[0]) * phi0
eig0 = jnp.abs(eig0)
psi0 = phi0 @ jnp.diag(jnp.sqrt(eig0))
coeff_phi = 1/K.shape[0] * phi0.T @ y_vec
# + [markdown] id="CHCEUkuUUd2W"
# # Optimize Test Measure for Fixed Training Measure
# + [markdown] id="X9QR7JjJUd2X"
# ### Under gradient descent, find beneficial examples
# ### Check that these examples are classified correctly
# ### Add noise to them and see which ones get wrongly classified
# ### Replace these examples in the test set
# ### Run gradient descent again to see if they become adversarial examples
# + id="RCWJm2LZUd2X"
K = kernel_fn(X, None, 'ntk')
eig0, phi0 = jnp.linalg.eigh(1/K.shape[0] * K)
phi0 = jnp.sqrt(K.shape[0]) * phi0
eig0 = jnp.abs(eig0)
epoch_list = [90]
P_list = np.array([30.])
train_dist = jnp.ones(len(K))/len(K)
epoch_ben_dist = []
epoch_adv_dist = []
for epoch in epoch_list:
opt_init, opt_update, get_params = optimizers.adam(0.05)
all_final_ben = []
key = random.PRNGKey(1)
for i, P in enumerate(P_list):
opt_state = opt_init(jnp.zeros(K.shape[0]))
params = (eig0, phi0, y_vec, P, lamb, train_dist)
losses = []
for t in range(epoch):
losst = Eg_wrt_test(get_params(opt_state), params)
if t % (epoch//3) == 0:
print("loss %0.5f | participation ratio = %0.5f" % (losst, 1.0/(softmax(get_params(opt_state))**2 ).sum()))
g = grad_wrt_test(get_params(opt_state), params)
opt_state= opt_update(t, g, opt_state)
losses += [losst]
all_final_ben += [get_params(opt_state)]
opt_init, opt_update, get_params = optimizers.adam(0.05)
all_final_adv = []
key = random.PRNGKey(1)
for i, P in enumerate(P_list):
opt_state = opt_init(jnp.zeros(K.shape[0]))
params = (eig0, phi0, y_vec, P, lamb, train_dist)
losses = []
for t in range(epoch):
losst = Eg_wrt_test(get_params(opt_state), params)
if t % (epoch//3) == 0:
print("loss %0.5f | participation ratio = %0.5f" % (losst, 1.0/(softmax(get_params(opt_state))**2 ).sum()))
g = grad_wrt_test(get_params(opt_state), params)
opt_state= opt_update(t, -g, opt_state)
losses += [losst]
all_final_adv += [get_params(opt_state)]
epoch_ben_dist += [all_final_ben]
epoch_adv_dist += [all_final_adv]
epoch_ben_dist = np.array(epoch_ben_dist)
epoch_adv_dist = np.array(epoch_adv_dist)
# + id="IqT-Uv3sUd2Y"
e_idx = 0
P_idx = 0
idx_ben = jnp.argsort(softmax(epoch_ben_dist[e_idx][P_idx]))[::-1]
idx_adv = jnp.argsort(softmax(epoch_adv_dist[e_idx][P_idx]))[::-1]
epoch_ben_sort = []
epoch_adv_sort = []
epoch_uni_sort = []
for epoch_ben, epoch_adv in zip(epoch_ben_dist, epoch_adv_dist):
epoch_ben_sort += [softmax(epoch_ben[P_idx][idx_ben])]
epoch_adv_sort += [softmax(epoch_adv[P_idx][idx_ben])]
epoch_ben_sort = np.array(epoch_ben_sort)
epoch_adv_sort = np.array(epoch_adv_sort)
# + id="_uc7ees3Ud2Z" outputId="b67265d6-0ecf-498e-ba9a-d16ec7a37dbe"
num_samp = 3
fig, axs = plt.subplots(num_samp,2,figsize=(3.7,5.7))
axs = axs.T
X_ben = X[idx_ben[:num_samp]].reshape(num_samp,28,28)
X_adv = X[idx_adv[:num_samp]].reshape(num_samp,28,28)
for i, x_ben, x_adv in zip(range(num_samp),X_ben,X_adv):
axs[0,i].imshow(x_ben)
axs[1,i].imshow(x_adv)
axs[1,i].set_xticks([])
axs[0,i].set_xticks([])
axs[1,i].set_yticks([])
axs[0,i].set_yticks([])
axs[0,0].set_title('beneficial', fontsize=18)
axs[1,0].set_title('detrimental', fontsize=18)
plt.subplots_adjust(wspace=0, hspace=0)
plt.savefig(root_dir+'ben_adv_samples.png', dpi=200)
# + id="yOn2YzaUUd2Z" outputId="540d5d35-daca-41c7-f715-fec583a66fd9"
epoch_idx = [0]
fig, axs = plt.subplots(1,2,figsize=(12,5))
for i, ben_sorted,adv_sorted in zip(epoch_idx, epoch_ben_sort[epoch_idx], epoch_adv_sort[epoch_idx]):
uni_sorted = softmax(jnp.zeros(phi0.shape[0]))
if i == 0:
axs[0].plot(uni_sorted, '-',color = 'black', label = 'uniform', linewidth=1)
axs[1].plot(uni_sorted, '-',color = 'black', label = 'uniform', linewidth=1)
axs[0].semilogy(ben_sorted*(ben_sorted>1e-5), 'o', label = 'epoch=%d'%epoch_list[i], color='C%d'%i, markersize=5)
axs[1].semilogy(adv_sorted*(adv_sorted>1e-5), 'o', label = 'epoch=%d'%epoch_list[i], color='C%d'%i, markersize=5, linewidth=2)
axs[0].set_title(r'Beneficial',fontsize = 20)
axs[1].set_title(r'Detrimental',fontsize = 20)
axs[0].set_xlabel(r'Image Index',fontsize = 20)
axs[1].set_xlabel(r'Image Index',fontsize = 20)
axs[0].set_ylabel(r'$\tilde\mathbf{P}$',fontsize=25)
axs[0].tick_params(axis='x', labelsize= 18)
axs[1].tick_params(axis='x', labelsize= 18)
axs[0].tick_params(axis='y', labelsize= 18)
axs[1].legend(fontsize = 18, loc='best', bbox_to_anchor=(-0.24, 0.45, 0.5, 0.5))
plt.tight_layout()
axs[1].set_yticks([])
axs[0].set_xlim([-20, 1050])
axs[1].set_xlim([-20, 1050])
axs[0].set_ylim([1e-5, 1.5])
axs[1].set_ylim([1e-5, 1.5])
plt.subplots_adjust(wspace=0, hspace=0)
plt.savefig(root_dir + 'adv_vs_beneficial_measure.pdf')
plt.show()
# + id="Ogwo9X64Ud2a" outputId="b97dbf3f-6e5f-4052-8701-75cdc9696d81"
epoch_idx = [0]
fig, axs = plt.subplots(1,2,figsize=(12,5))
for i, idx, ben_sorted,adv_sorted in zip(range(len(epoch_idx)), epoch_idx, epoch_ben_sort[epoch_idx], epoch_adv_sort[epoch_idx]):
uni_sorted = softmax(jnp.zeros(phi0.shape[0]))
axs[i].plot(uni_sorted, '-',color = 'black', label = 'uniform', linewidth=1)
axs[i].semilogy(ben_sorted*(ben_sorted>1e-5), 'o', label = 'Beneficial', color='C0', markersize=5)
axs[i].semilogy(adv_sorted*(adv_sorted>1e-5), 'o', label = 'Detrimental', color='C1', markersize=5, linewidth=2)
axs[i].set_title(r'epoch=%d'%epoch_list[idx],fontsize = 20)
axs[0].set_xlabel(r'Image Index',fontsize = 20)
axs[1].set_xlabel(r'Image Index',fontsize = 20)
axs[0].set_ylabel(r'$\tilde\mathbf{P}$',fontsize=25)
axs[0].tick_params(axis='x', labelsize= 18)
axs[1].tick_params(axis='x', labelsize= 18)
axs[0].tick_params(axis='y', labelsize= 18)
axs[0].legend(fontsize = 18, loc='best')
plt.tight_layout()
axs[1].set_yticks([])
axs[0].set_xlim([-20, 1050])
axs[1].set_xlim([-20, 1050])
axs[0].set_ylim([1e-5, 1.5])
axs[1].set_ylim([1e-5, 1.5])
plt.subplots_adjust(wspace=0, hspace=0)
plt.savefig(root_dir + 'adv_vs_beneficial_measure.pdf')
plt.show()
# + id="JHpCV37WUd2a" outputId="b553ac2f-fc3f-47c2-8085-8d5176bf69ce"
uniform = softmax(jnp.zeros(K.shape[0]))
Pvals_th = jnp.linspace(1,100,100)
Pvals_expt = np.linspace(5, 100, 10)
print('Kernel Regression for Uniform')
err_uni, std_uni = kr_expt_measure_test(Pvals_expt, uniform, K, y_vec, lamb, train_dist)
err_ker_uni = err_uni
std_ker_uni = std_uni
theory_uni = jnp.array([Eg_wrt_test(jnp.zeros(y_vec.shape[0]), (eig0, phi0, y_vec, P, lamb, train_dist)) for P in Pvals_th])
err_ker_ben = []
std_ker_ben = []
theory_ben =[]
err_ker_adv = []
std_ker_adv = []
theory_adv =[]
for e_idx in range(len(epoch_list)):
theory_ben_P = []
theory_adv_P = []
err_ker_ben_P = []
std_ker_ben_P = []
err_ker_adv_P = []
std_ker_adv_P = []
for P_idx in range(epoch_ben_dist.shape[1]):
print(epoch_list[e_idx])
ben_dist = epoch_ben_dist[e_idx][P_idx]
adv_dist = epoch_adv_dist[e_idx][P_idx]
theory_ben_P += [jnp.array([Eg_wrt_test(ben_dist, (eig0, phi0, y_vec, P, lamb, train_dist)) for P in Pvals_th])]
theory_adv_P += [jnp.array([Eg_wrt_test(adv_dist, (eig0, phi0, y_vec, P, lamb, train_dist)) for P in Pvals_th])]
print('Kernel Regression for Beneficial')
err_ben, std_ben = kr_expt_measure_test(Pvals_expt, softmax(ben_dist), K, y_vec, lamb, train_dist)
err_ker_ben_P += [err_ben]
std_ker_ben_P += [std_ben]
print('Kernel Regression for Detrimental')
err_adv, std_adv = kr_expt_measure_test(Pvals_expt, softmax(adv_dist), K, y_vec, lamb, train_dist)
err_ker_adv_P += [err_adv]
std_ker_adv_P += [std_adv]
theory_ben += [theory_ben_P]
theory_adv += [theory_adv_P]
err_ker_ben += [err_ker_ben_P]
std_ker_ben += [std_ker_ben_P]
err_ker_adv += [err_ker_adv_P]
std_ker_adv += [std_ker_adv_P]
# + id="GWoyyVhwUd2b" outputId="cc0f2fba-4208-423b-e6c9-9fb92a81bca3"
plt.errorbar(Pvals_expt, err_ker_ben[e_idx][P_idx], std_ker_ben[e_idx][P_idx] , fmt='o', color = 'C%d'%0, label='epoch: %d'%epoch_list[e_idx])
plt.plot(Pvals_th, theory_ben[0][0])
plt.errorbar(Pvals_expt, err_ker_adv[e_idx][P_idx], std_ker_adv[e_idx][P_idx] , fmt='o', color = 'C%d'%1, label='epoch: %d'%epoch_list[e_idx])
plt.plot(Pvals_th, theory_adv[0][0])
plt.errorbar(Pvals_expt, err_ker_uni, std_ker_uni , fmt='o', color = 'C%d'%2, label='epoch: %d'%epoch_list[e_idx])
plt.plot(Pvals_th, theory_uni)
# + id="3bwCMfJdUd2b"
# + id="YqG_yrU3Ud2c"
# + id="jkDyf_XYUd2c"
# + id="IFLeRmqhUd2c"
# + id="ioo9gA72Ud2c" outputId="2f3d9a69-d3f2-43ef-df12-8b00f839e9a4"
max_noise = abs(np.min(X))
noise_list = np.linspace(0,max_noise,5)*4
noise_values = np.array([noise_std*np.random.standard_normal(X.shape) for noise_std in noise_list])
fig, axs = plt.subplots(len(noise_values),2, figsize=(3,8))
for i, noise in enumerate(noise_values):
x = X[i].reshape(28,28)
noise = noise[i].reshape(28,28)
axs[i,0].imshow(x)
axs[i,1].imshow(x+noise)
# + id="rXmSzinTUd2c"
y_hat_noise = []
for noise in noise_values:
y_hat, y_hat_std = predictor_uniform(P_list, X, X + noise, y_vec, lamb, num_repeat = 1000)
y_hat_noise += [y_hat]
y_hat_noise = np.array(y_hat_noise)
y_hat_noise_onehot = 2*np.heaviside(y_hat_noise,0)-1
# + id="oadqifZ1Ud2d" outputId="7bc0f04e-7c79-4f85-e33e-2c61fd2b182b"
## For each data budget P, find which of the original images are classified correctly
## Classification accuracy improves with larger data budgets
noise_idx = 0
corr_inds = np.array([(y == y_vec) for y in y_hat_noise_onehot[noise_idx]])
print(corr_inds.mean(1))
## Now, with noise added, find which samples are misclassified
noise_idx = -1
corr_inds_noise = np.array([(y == y_vec) for y in y_hat_noise_onehot[noise_idx]])
print(corr_inds_noise.mean())
# + id="2aE6rT3_Ud2d" outputId="83b8f41b-631b-4865-eaa6-2408871356bd"
P_idx = 0
inds_correct = corr_inds[P_idx] == True
## Ignore samples that were already misclassified without noise
corr_inds_noise[P_idx][np.logical_not(inds_correct)] = False
print(corr_inds[P_idx].sum()-corr_inds_noise[P_idx].sum())
## After noise addition, these are the misclassified images
miss_classified_inds = corr_inds[P_idx]^corr_inds_noise[P_idx]
#miss_classified_inds = corr_inds_noise[P_idx]
num_miss_class = miss_classified_inds.sum()
# + id="twPndS5ZUd2e" outputId="88da9073-18bc-4359-a55c-fa3167089362"
np.square(y_vec[miss_classified_inds] - y_hat_noise_onehot[-1,0][miss_classified_inds]).mean()
# + [markdown] id="FGZV_1XJUd2e"
# ## Now generate a new dataset whose last `miss_classified_inds.sum()` entries are the misclassified (noisy) images
# + id="zFt-jI3NUd2e"
X_new = np.append(X, X[miss_classified_inds] + noise_values[noise_idx][miss_classified_inds], axis = 0)
y_new = np.append(y_vec, y_vec[miss_classified_inds], axis = 0)
# + id="7wfgzQy-Ud2e" outputId="8be7fa2f-ad4f-4a27-92dd-50498deacc1f"
K_new = kernel_fn(X_new, None, 'ntk')
uniform = softmax(jnp.zeros(K_new.shape[0]))
eig0, phi0 = jnp.linalg.eigh(jnp.diag(jnp.sqrt(uniform)) @ K_new @ jnp.diag(jnp.sqrt(uniform)))
phi0 = jnp.linalg.pinv(jnp.diag(jnp.sqrt(uniform))) @ phi0
eig0 = jnp.abs(eig0)
tr_dist = np.append(softmax(jnp.zeros(X.shape[0])), [0 for _ in range(num_miss_class)])
# # Eigendecomposition of training kernel
# eig0, phi0 = jnp.linalg.eigh(jnp.diag(jnp.sqrt(tr_dist)) @ K_new @ jnp.diag(jnp.sqrt(tr_dist)))
# phi0 = jnp.linalg.pinv(jnp.diag(jnp.sqrt(tr_dist))) @ phi0
epoch_ben_dist = []
epoch_adv_dist = []
epoch_list = [100]
for epoch in epoch_list:
opt_init, opt_update, get_params = optimizers.adam(0.05)
all_final_ben = []
key = random.PRNGKey(1)
for i, P in enumerate(P_list):
opt_state = opt_init(jnp.zeros(K_new.shape[0]))
params = (eig0, phi0, y_new, P, lamb, tr_dist)
losses = []
for t in range(epoch):
losst = Eg_wrt_test(get_params(opt_state), params)
if t % (epoch//3) == 0:
print("loss %0.5f | participation ratio = %0.5f" % (losst, 1.0/(softmax(get_params(opt_state))**2).sum()))
g = grad_wrt_test(get_params(opt_state), params)
opt_state= opt_update(t, g, opt_state)
losses += [losst]
all_final_ben += [get_params(opt_state)]
opt_init, opt_update, get_params = optimizers.adam(0.05)
all_final_adv = []
key = random.PRNGKey(1)
for i, P in enumerate(P_list):
opt_state = opt_init(jnp.zeros(K_new.shape[0]))
params = (eig0, phi0, y_new, P, lamb, tr_dist)
losses = []
for t in range(epoch):
losst = Eg_wrt_test(get_params(opt_state), params)
if t % (epoch//3) == 0:
print("loss %0.5f | participation ratio = %0.5f" % (losst, 1.0/(softmax(get_params(opt_state))**2 ).sum()))
g = grad_wrt_test(get_params(opt_state), params)
opt_state= opt_update(t, -g, opt_state)
losses += [losst]
all_final_adv += [get_params(opt_state)]
epoch_ben_dist += [all_final_ben]
epoch_adv_dist += [all_final_adv]
epoch_ben_dist = np.array(epoch_ben_dist)
epoch_adv_dist = np.array(epoch_adv_dist)
# + id="yx9ZLNC_Ud2f"
P_idx = 0
idx_ben = np.argsort(np.array(softmax(epoch_ben_dist[0][P_idx])))[::-1]
idx_adv = np.argsort(np.array(softmax(epoch_adv_dist[0][P_idx])))[::-1]
epoch_ben_sort = []
epoch_adv_sort = []
epoch_uni_sort = []
for epoch_ben, epoch_adv in zip(epoch_ben_dist, epoch_adv_dist):
epoch_ben_sort += [softmax(epoch_ben[P_idx][idx_ben])]
epoch_adv_sort += [softmax(epoch_adv[P_idx][idx_ben])]
epoch_ben_sort = np.array(epoch_ben_sort)
epoch_adv_sort = np.array(epoch_adv_sort)
# + id="GDONrHGPUd2f" outputId="d099d561-439a-41bf-8cf2-ffbda7232e33"
(idx_adv>1000).sum()
# + id="uizhgUhiUd2f" outputId="0a114700-fb71-4b3c-f16b-33793de6ebc7"
epoch_ben_sort.shape
# + id="L4ow7QMxUd2f" outputId="7e98751f-8ff4-4982-c266-c19641895286"
plt.plot(idx_ben>1000,'o')
plt.plot(idx_adv>1000,'o')
plt.plot(np.ones_like(idx_adv))
idx_ben[::-1][:42]
# + id="ApWj6S7wUd2g" outputId="727c427c-b768-4e6f-fea0-d54053b46170"
softmax(epoch_ben_dist[0][P_idx])[idx_ben[idx_ben>1000]].sum()
# + id="z9NDxbYwUd2g" outputId="a71dccb6-94f3-4242-fdc3-c60a72731d6e"
plt.plot(softmax(epoch_ben_dist[0][P_idx])[idx_ben[idx_ben>1000]], 'o', label='Beneficial')
plt.plot(softmax(epoch_adv_dist[0][P_idx])[idx_adv[idx_adv>1000]], 'o', label='Detrimental')
plt.legend(fontsize=14)
plt.xlabel('Adversarial Image Index', fontsize=14)
plt.ylabel('Probability Mass', fontsize=14)
plt.savefig('adversarial_prob_dist.pdf')
# + id="_koMdtR0Ud2g"
# + id="aKNF1r_rUd2g" outputId="2e6015d0-dda3-42b9-c16c-b48c553ce3fb"
num_samp =num_miss_class+3
X_high = X_new[idx_ben[:num_samp]].reshape(num_samp,28,28)
X_low = X_new[idx_ben[::-1][:num_samp]].reshape(num_samp,28,28)
fig, axs = plt.subplots(7,8, figsize=(7,6.3))
for j, a in enumerate(axs.flatten()):
a.imshow(X_high[j].reshape((28,28)), cmap = 'gray')
a.axis('off')
fig.suptitle('High-probability Mass Images', fontsize=22)
plt.tight_layout()
plt.subplots_adjust(wspace =0,hspace=0)
plt.savefig('fixed_test_best_digits.pdf')
plt.show()
fig, axs = plt.subplots(7,8, figsize=(7,6.3))
for j, a in enumerate(axs.flatten()):
a.imshow(X_low[j].reshape((28,28)), cmap = 'gray')
a.axis('off')
fig.suptitle('Low-probability Mass Images', fontsize=22)
plt.tight_layout()
plt.subplots_adjust(wspace =0,hspace=0)
plt.savefig('fixed_test_worst_digits.pdf')
plt.show()
# + id="alI440lKUd2g" outputId="2d9d208e-e894-4936-b615-63b1f204079a"
epoch_idx = [0]
fig, axs = plt.subplots(1,2,figsize=(12,5))
for i, idx, ben_sorted,adv_sorted in zip(range(len(epoch_idx)), epoch_idx, epoch_ben_sort[epoch_idx], epoch_adv_sort[epoch_idx]):
uni_sorted = softmax(jnp.zeros(phi0.shape[0]))
axs[i].plot(uni_sorted, '-',color = 'black', label = 'uniform', linewidth=1)
axs[i].semilogy(ben_sorted*(ben_sorted>1e-7), 'o', label = 'Beneficial', color='C0', markersize=5)
axs[i].semilogy(adv_sorted*(adv_sorted>1e-7), 'o', label = 'Detrimental', color='C1', markersize=5, linewidth=2)
axs[i].set_title(r'epoch=%d'%epoch_list[idx],fontsize = 20)
axs[0].plot(idx_ben[idx_ben>1000],softmax(epoch_ben_dist[0][P_idx])[idx_ben[idx_ben>1000]], 'o', label='Beneficial')
axs[0].plot(idx_adv[idx_adv>1000],softmax(epoch_adv_dist[0][P_idx])[idx_adv[idx_adv>1000]], 'o', label='Adversarial')
axs[0].set_xlabel(r'Image Index',fontsize = 20)
axs[1].set_xlabel(r'Image Index',fontsize = 20)
axs[0].set_ylabel(r'$\tilde\mathbf{P}$',fontsize=25)
axs[0].tick_params(axis='x', labelsize= 18)
axs[1].tick_params(axis='x', labelsize= 18)
axs[0].tick_params(axis='y', labelsize= 18)
axs[0].legend(fontsize = 18, loc='best')
plt.tight_layout()
axs[1].set_yticks([])
axs[0].set_xlim([-20, 1050])
axs[1].set_xlim([-20, 1050])
axs[0].set_ylim([1e-7, 1.5])
axs[1].set_ylim([1e-7, 1.5])
plt.subplots_adjust(wspace=0, hspace=0)
plt.savefig(root_dir + 'adv_vs_beneficial_measure.pdf')
plt.show()
# + id="H-xuD9NDUd2h"
# + id="gcm_918oUd2h" outputId="1e48393a-0498-4377-b696-16964a91b072"
uniform = softmax(jnp.zeros(K_new.shape[0]))
Pvals_th = jnp.linspace(1,100,100)
Pvals_expt = np.linspace(5, 100, 10)
print('Kernel Regression for Uniform')
err_uni, std_uni = kr_expt_measure_test(Pvals_expt, uniform, K_new, y_new, lamb, tr_dist)
err_ker_uni_mod = err_uni
std_ker_uni_mod = std_uni
theory_uni_mod = jnp.array([Eg_wrt_test(uniform, (eig0, phi0, y_new, P, lamb, tr_dist)) for P in Pvals_th])
err_ker_ben_mod = []
std_ker_ben_mod = []
theory_ben_mod =[]
err_ker_adv_mod = []
std_ker_adv_mod = []
theory_adv_mod =[]
for e_idx in range(len(epoch_list)):
theory_ben_P = []
theory_adv_P = []
err_ker_ben_P = []
std_ker_ben_P = []
err_ker_adv_P = []
std_ker_adv_P = []
for P_idx in range(epoch_ben_dist.shape[1]):
print(epoch_list[e_idx])
b_dist = epoch_ben_dist[e_idx][P_idx]
a_dist = epoch_adv_dist[e_idx][P_idx]
theory_ben_P += [jnp.array([Eg_wrt_test(b_dist, (eig0, phi0, y_new, P, lamb, tr_dist)) for P in Pvals_th])]
theory_adv_P += [jnp.array([Eg_wrt_test(a_dist, (eig0, phi0, y_new, P, lamb, tr_dist)) for P in Pvals_th])]
print('Kernel Regression for Beneficial')
err_ben, std_ben = kr_expt_measure_test(Pvals_expt, softmax(b_dist), K_new, y_new, lamb, tr_dist)
err_ker_ben_P += [err_ben]
std_ker_ben_P += [std_ben]
print('Kernel Regression for Detrimental')
err_adv, std_adv = kr_expt_measure_test(Pvals_expt, softmax(a_dist), K_new, y_new, lamb, tr_dist)
err_ker_adv_P += [err_adv]
std_ker_adv_P += [std_adv]
theory_ben_mod += [theory_ben_P]
theory_adv_mod += [theory_adv_P]
err_ker_ben_mod += [err_ker_ben_P]
std_ker_ben_mod += [std_ker_ben_P]
err_ker_adv_mod += [err_ker_adv_P]
std_ker_adv_mod += [std_ker_adv_P]
# + id="I1bj6YSDUd2h"
# + id="8E00ljY9Ud2h" outputId="49a948ec-0bb8-4c40-f7a3-70cf99d2bc0f"
plt.errorbar(Pvals_expt, err_ker_ben_mod[e_idx][P_idx], std_ker_ben_mod[e_idx][P_idx] , fmt='o', color = 'C%d'%0, label='epoch: %d'%epoch_list[e_idx])
plt.plot(Pvals_th, theory_ben_mod[0][0])
plt.errorbar(Pvals_expt, err_ker_ben[e_idx][P_idx], std_ker_ben[e_idx][P_idx] , fmt='^', color = 'C%d'%0, label='epoch: %d'%epoch_list[e_idx])
#plt.plot(Pvals_th, theory_ben[0][0])
plt.errorbar(Pvals_expt, err_ker_adv_mod[e_idx][P_idx], std_ker_adv_mod[e_idx][P_idx] , fmt='o', color = 'C%d'%1, label='epoch: %d'%epoch_list[e_idx])
plt.plot(Pvals_th, theory_adv_mod[0][0])
plt.errorbar(Pvals_expt, err_ker_adv[e_idx][P_idx], std_ker_adv[e_idx][P_idx] , fmt='^', color = 'C%d'%1, label='epoch: %d'%epoch_list[e_idx])
# plt.plot(Pvals_th, theory_adv[0][0])
plt.errorbar(Pvals_expt, err_ker_uni_mod, std_ker_uni_mod , fmt='o', color = 'C%d'%2, label='epoch: %d'%epoch_list[e_idx])
plt.plot(Pvals_th, theory_uni_mod)
plt.errorbar(Pvals_expt, err_ker_uni, std_ker_uni , fmt='^', color = 'C%d'%2, label='epoch: %d'%epoch_list[e_idx])
# plt.plot(Pvals_th, theory_uni)
# + id="YKCj8XsoUd2h"
| figureSI2_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
acountbal = 50000
choice = input("Please enter 'b' to check balance, 'w' to withdraw or 'd' to deposit: ")
while choice != 'q':
if choice.lower() in ('w','b','d'):
if choice.lower() == 'b':
            print("Your balance is: %.2f" % acountbal)
            print("Anything else?")
            choice = input("Enter b for balance, w to withdraw, d to deposit or q to quit: ")
            print(choice.lower())
elif choice.lower() == 'd':
deposit = float(input("Enter amount to deposit: "))
            print("You have successfully made your deposit")
            acountbal = acountbal + deposit
            print("Your balance is: %.2f" % acountbal)
print("Anything else?")
choice = input("Enter b for balance, w to withdraw, d to deposit or q to quit: ")
elif choice.lower() == 'w' :
withdraw = float(input("Enter amount to withdraw: "))
if withdraw <= acountbal:
                print("Here is your %.2f" % withdraw)
acountbal = acountbal - withdraw
print("Anything else?")
choice = input("Enter b for balance, w to withdraw, d to deposit or q to quit: ")
#choice = 'q'
else:
print("You have insufficient funds: %.2f" % acountbal)
else:
print("Wrong choice!")
choice = input("Please enter 'b' to check balance, 'w' to withdraw or 'd' to deposit: ")
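# The balance/withdraw/deposit logic above can also be factored into a function that is
# driven by a pre-recorded list of commands instead of input(), which makes it easy to
# test (a sketch, not part of the original exercise):

```python
def run_atm(commands, balance=50000.0):
    """Process ('b'|'w'|'d', amount) commands; return the final balance and a log."""
    log = []
    for cmd, amount in commands:
        if cmd == 'b':
            log.append("balance=%.2f" % balance)
        elif cmd == 'd':
            balance += amount
            log.append("deposited %.2f" % amount)
        elif cmd == 'w':
            if amount <= balance:
                balance -= amount
                log.append("withdrew %.2f" % amount)
            else:
                log.append("insufficient funds")
        else:
            log.append("wrong choice")
    return balance, log

final, log = run_atm([('d', 500.0), ('w', 600.0), ('b', 0.0)])
print(final)  # 49900.0
```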
X = 0
while X <= 4:
    X += 1  # increment by 1
    print(X)
X = 0
while X <= 9:
    X += 1  # increment by 1
    if X == 5:
        continue  # skip printing 5
    print(X)
for i in range(11):
    if i == 4:
        print(i)
    elif i >= 4:
        print(i)
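# For contrast with continue above, break exits the loop entirely (collecting into a
# list here just to make the result easy to inspect):

```python
printed = []
for i in range(10):
    if i == 5:
        break  # stop the whole loop once i reaches 5
    printed.append(i)
print(printed)  # [0, 1, 2, 3, 4]
```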
| exercise_notebooks/NOTE5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div class="alert alert-success">
# <b>Author</b>:
#
# <NAME>
# <EMAIL>
#
# </div>
#
# # [Click here to see Output video](https://photos.app.goo.gl/gf3B3fd2Xvahw8Q77)
# +
from selenium.common.exceptions import NoSuchElementException, ElementClickInterceptedException
from selenium import webdriver
from selenium.webdriver.common.by import By
# -
# open webpage
options = webdriver.ChromeOptions()
driver = webdriver.Chrome(executable_path='chromedriver.exe', options=options)
driver.set_window_size(1120, 1000)
url = 'https://fs2.formsite.com/meherpavan/form2/index.html'
driver.get(url)
input_boxes = driver.find_elements(By.CLASS_NAME,'text_field')
print(len(input_boxes))
input_boxes[0].send_keys('Rashik')
input_boxes[1].send_keys('Rahman')
input_boxes[2].send_keys('0162*******')
input_boxes[3].send_keys('Bangladesh')
input_boxes[4].send_keys('Dhaka')
input_boxes[5].send_keys('17201012***********')
driver.find_element_by_xpath('//*[@id="RESULT_TextField-1"]').is_displayed()
driver.find_element_by_xpath('//*[@id="RESULT_TextField-2"]').is_displayed()
Male = driver.find_element_by_xpath('//*[@id="q26"]/table/tbody/tr[1]/td/label')
Female = driver.find_element_by_xpath('//*[@id="q26"]/table/tbody/tr[2]/td/label')
Sun = driver.find_element_by_xpath('//*[@id="q15"]/table/tbody/tr[1]/td/label')
Mon = driver.find_element_by_xpath('//*[@id="q15"]/table/tbody/tr[2]/td/label')
Tue = driver.find_element_by_xpath('//*[@id="q15"]/table/tbody/tr[3]/td/label')
Wed = driver.find_element_by_xpath('//*[@id="q15"]/table/tbody/tr[4]/td/label')
Thurs = driver.find_element_by_xpath('//*[@id="q15"]/table/tbody/tr[5]/td/label')
Fri = driver.find_element_by_xpath('//*[@id="q15"]/table/tbody/tr[6]/td/label')
Sat = driver.find_element_by_xpath('//*[@id="q15"]/table/tbody/tr[7]/td/label')
Sat.click()
Sat.is_selected()
#check if Sat is selected
var = driver.find_element_by_id("RESULT_CheckBox-8_6")
if var.get_attribute("checked") == "true":
print('Checked')
else:
print('Not Checked')
Male.click()
from selenium.webdriver.support.ui import Select
drop_element = driver.find_element_by_xpath('//*[@id="RESULT_RadioButton-9"]')
drp = Select(drop_element)
drp.select_by_visible_text('Morning')
drp.select_by_index(2)
ops = drp.options
print(len(ops))
for option in ops:
print(option.text)
links = driver.find_elements(By.CLASS_NAME,'link')
print(len(links))
links[0].click()
driver.back()
driver.find_element(By.LINK_TEXT,'Software Testing Tutorials').click()
print(driver.title)
driver.back()
print(driver.title)
# # That's all for this LAB!
| CSE_322_Software Engineering Lab/Lab_3_21.09.2020.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('sa')
# language: python
# name: python3
# ---
# # Datasets processing
# ## Import and preprocessing
# +
import pandas as pd
pd.set_option('display.max_colwidth', None)
import warnings
warnings.filterwarnings("ignore")
# INPS
ht_inps = pd.read_csv('data/raw/Enti/INPS/Hashtags.csv')
ht_inps['type'] = 'hashtag'
mn_inps = pd.read_csv('data/raw/Enti/INPS/Mentions.csv')
mn_inps['type'] = 'mention'
# INAIL
ht_inail = pd.read_csv('data/raw/Enti/INAIL/Hashtags.csv')
ht_inail['type'] = 'hashtag'
mn_inail = pd.read_csv('data/raw/Enti/INAIL/Mentions.csv')
mn_inail['type'] = 'mention'
# Protezione Civile
ht_pc = pd.read_csv('data/raw/Enti/Protezione Civile/Hashtags.csv')
ht_pc['type'] = 'hashtag'
mn_pc = pd.read_csv('data/raw/Enti/Protezione Civile/Mentions.csv')
mn_pc['type'] = 'mention'
# +
import numpy as np
#INPS
#concatenation of hashtags and mentions dataframes indicating the type of tweet in a column
frames_inps = [ht_inps, mn_inps]
df_inps = pd.concat(frames_inps)
#column definition to identify retweets
df_inps['retweet'] = np.where(df_inps['tweet'].str.contains('RT @'), True, False)
#INAIL
#concatenation of hashtags and mentions dataframes indicating the type of tweet in a column
frames_inail = [ht_inail, mn_inail]
df_inail = pd.concat(frames_inail)
#column definition to identify retweets
df_inail['retweet'] = np.where(df_inail['tweet'].str.contains('RT @'), True, False)
#Protezione Civile
#concatenation of hashtags and mentions dataframes indicating the type of tweet in a column
frames_pc = [ht_pc, mn_pc]
df_pc = pd.concat(frames_pc)
#column definition to identify retweets
df_pc['retweet'] = np.where(df_pc['tweet'].str.contains('RT @'), True, False)
# -
#Dataset infos
def get_stats(df):
print("Dataset Shape: ",df.shape)
print("\t Mentions - Hashtags")
print("#Mentions:",df.loc[df['type'] == 'mention'].shape)
print("#Hashtags:",df.loc[df['type'] == 'hashtag'].shape)
print(df['type'].value_counts(normalize=True) * 100)
if "retweet" in df:
print("\t Retweet")
print("#Retweet:",df.loc[df['retweet'] == True].shape)
print(df['retweet'].value_counts(normalize=True) * 100)
get_stats(df_inps)
df_inps.head()
# +
#Removing retweets and unnecessary columns
#INPS
df_inps=df_inps.loc[df_inps['retweet'] == False]
df_inps=df_inps[['created_at','tweet_id','tweet','type']]
#INAIL
df_inail=df_inail.loc[df_inail['retweet'] == False]
df_inail=df_inail[['created_at','tweet_id','tweet','type']]
#Protezione Civile
df_pc=df_pc.loc[df_pc['retweet'] == False]
df_pc=df_pc[['created_at','tweet_id','tweet','type']]
# -
get_stats(df_inps)
# # Silver labelling
#Emoji lists
positive_emoticons=["😀","😃","😄","😁","😆","🤣","😂","🙂","😊","😍","🥰","🤩","☺","🥳"]
negative_emoticons=["😒","😔","😟","🙁","☹","😥","😢","😭","😱","😞","😓","😩","😫","😡","😠","🤬"]
# +
#Definition of silver labels based on the presence / absence of emojis within the entire tweet
#INPS
pos_df_inps = df_inps.loc[df_inps['tweet'].str.contains('|'.join(positive_emoticons))]
neg_df_inps = df_inps.loc[df_inps['tweet'].str.contains('|'.join(negative_emoticons))]
neutral_df_inps = pd.concat([df_inps, pos_df_inps, neg_df_inps]).drop_duplicates(keep=False)
#INAIL
pos_df_inail = df_inail.loc[df_inail['tweet'].str.contains('|'.join(positive_emoticons))]
neg_df_inail = df_inail.loc[df_inail['tweet'].str.contains('|'.join(negative_emoticons))]
neutral_df_inail = pd.concat([df_inail, pos_df_inail, neg_df_inail]).drop_duplicates(keep=False)
#Protezione Civile
pos_df_pc = df_pc.loc[df_pc['tweet'].str.contains('|'.join(positive_emoticons))]
neg_df_pc = df_pc.loc[df_pc['tweet'].str.contains('|'.join(negative_emoticons))]
neutral_df_pc = pd.concat([df_pc, pos_df_pc, neg_df_pc]).drop_duplicates(keep=False)
# -
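# The `pd.concat([...]).drop_duplicates(keep=False)` pattern above is an anti-join idiom: rows that appear both in the full frame and in one of the labelled subsets occur twice after concatenation, so dropping all duplicated rows leaves only the rows that matched neither emoji list. A minimal sketch on toy data (column names here are illustrative, not from the real datasets):

```python
import pandas as pd

# Toy frame standing in for one of the tweet datasets
df = pd.DataFrame({"tweet_id": [1, 2, 3, 4],
                   "tweet": ["great 😀", "bad 😡", "meh", "ok"]})

pos = df[df["tweet"].str.contains("😀")]   # rows matching a positive emoji
neg = df[df["tweet"].str.contains("😡")]   # rows matching a negative emoji

# Rows in df but in neither pos nor neg: duplicated rows cancel out
neutral = pd.concat([df, pos, neg]).drop_duplicates(keep=False)
print(sorted(neutral["tweet_id"]))  # → [3, 4]
```

# Note this only works because the subset rows are exact copies of rows in the full frame; if any column had been modified in the subsets, the duplicates would no longer cancel.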
get_stats(neutral_df_pc)
#tweets containing both positive and negative emoticons
int_df_inps = pd.merge(pos_df_inps, neg_df_inps, how ='inner')
int_df_inail = pd.merge(pos_df_inail, neg_df_inail, how ='inner')
int_df_pc = pd.merge(pos_df_pc, neg_df_pc, how ='inner')
int_df_inps.shape
# +
#Sampling neutral datasets to balance classes
#INPS
sample_neutral_df_inps = neutral_df_inps.sample(frac=0.015)
#INAIL
sample_neutral_df_inail = neutral_df_inail.sample(frac=0.015)
#Protezione Civile
sample_neutral_df_pc = neutral_df_pc.sample(frac=0.015)
# -
neg_df_inail.shape
# +
#Added polarity and topic column
#INPS
pos_df_inps['polarity']='positive'
pos_df_inps['topic']='inps'
neg_df_inps['polarity']='negative'
neg_df_inps['topic']='inps'
sample_neutral_df_inps['polarity']='neutral'
sample_neutral_df_inps['topic']='inps'
#INAIL
pos_df_inail['polarity']='positive'
pos_df_inail['topic']='inail'
neg_df_inail['polarity']='negative'
neg_df_inail['topic']='inail'
sample_neutral_df_inail['polarity']='neutral'
sample_neutral_df_inail['topic']='inail'
#Protezione civile
pos_df_pc['polarity']='positive'
pos_df_pc['topic']='pc'
neg_df_pc['polarity']='negative'
neg_df_pc['topic']='pc'
sample_neutral_df_pc['polarity']='neutral'
sample_neutral_df_pc['topic']='pc'
# -
#concatenation of all dataframes
df_total = pd.concat([pos_df_inps, pos_df_inail, pos_df_pc,neg_df_inps,neg_df_inail,neg_df_pc,sample_neutral_df_inps,sample_neutral_df_inail,sample_neutral_df_pc])
df_total.head()
#Dataset shuffling
df_total_shuffle = df_total.sample(frac=1)
#Duplicate removal
df_total_shuffle = df_total_shuffle.drop_duplicates(keep='first',subset=['tweet_id'])
#First round annotation file
ann_1 = df_total_shuffle.sample(n=322,random_state=1)
ann_1.to_csv("Annotazione_1.csv")
#Second round annotation file
ann_2 = pd.concat([df_total_shuffle,ann_1]).drop_duplicates(keep=False,subset=['tweet_id'])
ann_2.to_csv("Annotazione_2.csv")
| notebooks/2.0 - Datasets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: pymedphys-master
# language: python
# name: pymedphys-master
# ---
# +
import functools
import pathlib
import json
import random
import logging
import numpy as np
import matplotlib.pyplot as plt
import shapely.geometry
import skimage.draw
import skimage.filters
import tensorflow as tf
import pydicom
import pymedphys
import pymedphys._dicom.structure as dcm_struct
from names import names_map
# +
# Put all of the DICOM data within a directory called 'dicom' in here:
data_path_root = pathlib.Path.home().joinpath('.data/dicom-ct-and-structures')
dcm_paths = list(data_path_root.rglob('dicom/**/*.dcm'))
# This will be the location of the numpy cache
npz_directory = data_path_root.joinpath('npz_cache')
npz_directory.mkdir(parents=True, exist_ok=True)
# This will be the location of the tfrecord cache
tfrecord_directory = data_path_root.joinpath('tfrecord_cache')
tfrecord_directory.mkdir(parents=True, exist_ok=True)
# -
# This will be the location of the DICOM header UID cache
uid_cache_path = data_path_root.joinpath("uid-cache.json")
def soft_surface_dice(reference, evaluation):
edge_reference = skimage.filters.scharr(reference)
edge_evaluation = skimage.filters.scharr(evaluation)
score = (
np.sum(np.abs(edge_evaluation - edge_reference)) /
np.sum(edge_evaluation + edge_reference)
)
return 1 - score
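# `soft_surface_dice` compares the edge maps of two masks: identical masks give an edge difference of zero and hence a score of 1, while mismatched boundaries drive the score toward 0. A numpy-only check of that boundary behaviour, substituting `np.gradient` magnitude for the Scharr filter (an illustrative assumption; the function above requires `skimage`):

```python
import numpy as np

def edge(mask):
    # Simple gradient-magnitude edge map standing in for skimage.filters.scharr
    gy, gx = np.gradient(mask.astype(float))
    return np.hypot(gx, gy)

def soft_surface_dice_np(reference, evaluation):
    e_ref, e_eval = edge(reference), edge(evaluation)
    score = np.sum(np.abs(e_eval - e_ref)) / np.sum(e_eval + e_ref)
    return 1 - score

mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1          # a small square "structure"

print(soft_surface_dice_np(mask, mask))                            # → 1.0
print(soft_surface_dice_np(mask, np.roll(mask, 3, axis=0)) < 1.0)  # → True
```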
def get_uid_cache(relative_paths):
relative_paths = [
str(path) for path in relative_paths
]
try:
with open(uid_cache_path) as f:
uid_cache = json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
uid_cache = {
"ct_image_paths": {},
"structure_set_paths": {},
"ct_uid_to_structure_uid": {},
"paths_when_run": []
}
if set(uid_cache["paths_when_run"]) == set(relative_paths):
return uid_cache
dcm_headers = []
for dcm_path in dcm_paths:
dcm_headers.append(pydicom.read_file(
dcm_path, force=True,
specific_tags=['SOPInstanceUID', 'SOPClassUID', 'StudyInstanceUID']))
ct_image_paths = {
str(header.SOPInstanceUID): str(path)
for header, path in zip(dcm_headers, relative_paths)
if header.SOPClassUID.name == "CT Image Storage"
}
structure_set_paths = {
str(header.SOPInstanceUID): str(path)
for header, path in zip(dcm_headers, relative_paths)
if header.SOPClassUID.name == "RT Structure Set Storage"
}
ct_uid_to_study_instance_uid = {
str(header.SOPInstanceUID): str(header.StudyInstanceUID)
for header in dcm_headers
if header.SOPClassUID.name == "CT Image Storage"
}
study_instance_uid_to_structure_uid = {
str(header.StudyInstanceUID): str(header.SOPInstanceUID)
for header in dcm_headers
if header.SOPClassUID.name == "RT Structure Set Storage"
}
ct_uid_to_structure_uid = {
ct_uid: study_instance_uid_to_structure_uid[study_uid]
for ct_uid, study_uid in ct_uid_to_study_instance_uid.items()
}
uid_cache["ct_image_paths"] = ct_image_paths
uid_cache["structure_set_paths"] = structure_set_paths
uid_cache["ct_uid_to_structure_uid"] = ct_uid_to_structure_uid
uid_cache["paths_when_run"] = relative_paths
with open(uid_cache_path, "w") as f:
json.dump(uid_cache, f)
return uid_cache
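# `get_uid_cache` follows a generic recompute-on-change caching pattern: load the JSON cache if it exists, compare the stored input list with the current one, and rebuild only on a mismatch. The pattern in isolation (the file name and `compute` callable here are illustrative):

```python
import json
import pathlib
import tempfile

def cached(cache_path, inputs, compute):
    """Return the cached result if `inputs` are unchanged, else recompute and store."""
    try:
        with open(cache_path) as f:
            cache = json.load(f)
        if set(cache["inputs"]) == set(inputs):
            return cache["result"]
    except (FileNotFoundError, json.JSONDecodeError):
        pass
    result = compute(inputs)
    with open(cache_path, "w") as f:
        json.dump({"inputs": list(inputs), "result": result}, f)
    return result

with tempfile.TemporaryDirectory() as d:
    path = pathlib.Path(d) / "cache.json"
    calls = []
    compute = lambda xs: (calls.append(1), sorted(xs))[1]
    print(cached(path, ["b", "a"], compute))  # computes → ['a', 'b']
    print(cached(path, ["a", "b"], compute))  # cache hit (same set) → ['a', 'b']
    print(len(calls))                         # compute ran only once → 1
```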
# +
relative_paths = [
path.relative_to(data_path_root)
for path in dcm_paths
]
uid_cache = get_uid_cache(relative_paths)
ct_image_paths = uid_cache["ct_image_paths"]
structure_set_paths = uid_cache["structure_set_paths"]
ct_uid_to_structure_uid = uid_cache["ct_uid_to_structure_uid"]
# -
structures_to_learn = list(set([item for key, item in names_map.items()]).difference({None}))
structures_to_learn = sorted(structures_to_learn)
structures_to_learn
def get_image_transformation_parameters(dcm_ct):
# From <NAME> work in ../old/data_generator.py
position = dcm_ct.ImagePositionPatient
spacing = [x for x in dcm_ct.PixelSpacing] + [dcm_ct.SliceThickness]
orientation = dcm_ct.ImageOrientationPatient
dx, dy, *_ = spacing
Cx, Cy, *_ = position
Ox, Oy = orientation[0], orientation[4]
return dx, dy, Cx, Cy, Ox, Oy
def reduce_expanded_mask(expanded_mask, img_size, expansion):
return np.mean(np.mean(
tf.reshape(expanded_mask, (img_size, expansion, img_size, expansion)),
axis=1), axis=2)
def calculate_aliased_mask(contours, dcm_ct, expansion=5):
dx, dy, Cx, Cy, Ox, Oy = get_image_transformation_parameters(dcm_ct)
ct_size = np.shape(dcm_ct.pixel_array)
x_grid = np.arange(Cx, Cx + ct_size[0]*dx*Ox, dx*Ox)
y_grid = np.arange(Cy, Cy + ct_size[1]*dy*Oy, dy*Oy)
new_ct_size = np.array(ct_size) * expansion
expanded_mask = np.zeros(new_ct_size)
for xyz in contours:
x = np.array(xyz[0::3])
y = np.array(xyz[1::3])
z = xyz[2::3]
assert len(set(z)) == 1
r = (((y - Cy) / dy * Oy)) * expansion + (expansion - 1) * 0.5
c = (((x - Cx) / dx * Ox)) * expansion + (expansion - 1) * 0.5
expanded_mask = np.logical_or(expanded_mask, skimage.draw.polygon2mask(new_ct_size, np.array(list(zip(r, c)))))
mask = reduce_expanded_mask(expanded_mask, ct_size[0], expansion)
mask = 2 * mask - 1
return x_grid, y_grid, mask
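# `calculate_aliased_mask` anti-aliases by rasterising the contour polygon on a grid `expansion` times finer, then block-averaging back down, so boundary pixels take fractional values instead of a hard 0/1 edge. The reduction step alone, mirroring `reduce_expanded_mask` but with `np.reshape` instead of `tf.reshape` (an illustrative sketch):

```python
import numpy as np

def block_average(expanded, img_size, expansion):
    # Average each (expansion x expansion) block of the fine-grid mask
    blocks = expanded.reshape(img_size, expansion, img_size, expansion)
    return blocks.mean(axis=(1, 3))

# Fine grid for a 2x2 image at expansion=4; a boundary falls mid-pixel
fine = np.zeros((8, 8))
fine[:, :3] = 1

coarse = block_average(fine, 2, 4)
print(coarse)  # left pixels are 3/4 covered (0.75), right pixels 0.0
```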
def get_contours_from_mask(x_grid, y_grid, mask):
cs = plt.contour(x_grid, y_grid, mask, [0]);
contours = [
path.vertices for path in cs.collections[0].get_paths()
]
plt.close()
return contours
def create_numpy_input_output(ct_uid):
structure_uid = ct_uid_to_structure_uid[ct_uid]
structure_set_path = data_path_root.joinpath(structure_set_paths[structure_uid])
structure_set = pydicom.read_file(
structure_set_path,
force=True,
specific_tags=['ROIContourSequence', 'StructureSetROISequence'])
number_to_name_map = {
roi_sequence_item.ROINumber: names_map[roi_sequence_item.ROIName]
for roi_sequence_item in structure_set.StructureSetROISequence
if names_map[roi_sequence_item.ROIName] is not None
}
contours_by_ct_uid = {}
for roi_contour_sequence_item in structure_set.ROIContourSequence:
try:
structure_name = number_to_name_map[roi_contour_sequence_item.ReferencedROINumber]
except KeyError:
continue
for contour_sequence_item in roi_contour_sequence_item.ContourSequence:
ct_uid = contour_sequence_item.ContourImageSequence[0].ReferencedSOPInstanceUID
try:
_ = contours_by_ct_uid[ct_uid]
except KeyError:
contours_by_ct_uid[ct_uid] = dict()
try:
contours_by_ct_uid[ct_uid][structure_name].append(contour_sequence_item.ContourData)
except KeyError:
contours_by_ct_uid[ct_uid][structure_name] = [contour_sequence_item.ContourData]
ct_path = data_path_root.joinpath(ct_image_paths[ct_uid])
dcm_ct = pydicom.read_file(ct_path, force=True)
dcm_ct.file_meta.TransferSyntaxUID = pydicom.uid.ImplicitVRLittleEndian
ct_size = np.shape(dcm_ct.pixel_array)
contours_on_this_slice = contours_by_ct_uid[ct_uid].keys()
masks = np.nan * np.ones((*ct_size, len(structures_to_learn)))
for i, structure in enumerate(structures_to_learn):
if not structure in contours_on_this_slice:
masks[:,:,i] = np.zeros(ct_size) - 1
continue
original_contours = contours_by_ct_uid[ct_uid][structure]
x_grid, y_grid, masks[:,:,i] = calculate_aliased_mask(original_contours, dcm_ct)
np.shape(masks)
assert np.sum(np.isnan(masks)) == 0
return dcm_ct.pixel_array, masks
def numpy_input_output_from_cache(ct_uid, structures_to_learn):
npz_path = npz_directory.joinpath(f'{ct_uid}.npz')
structures_to_learn_cache_path = npz_directory.joinpath('structures_to_learn.json')
try:
with open(structures_to_learn_cache_path) as f:
structures_to_learn_cache = json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
structures_to_learn_cache = []
if structures_to_learn_cache != structures_to_learn:
logging.warning("Structures to learn has changed. Dumping npz cache.")
for path in npz_directory.glob('*.npz'):
path.unlink()
with open(structures_to_learn_cache_path, "w") as f:
json.dump(structures_to_learn, f)
try:
data = np.load(npz_path)
input_array = data['input_array']
output_array = data['output_array']
except FileNotFoundError:
input_array, output_array = create_numpy_input_output(ct_uid)
np.savez(npz_path, input_array=input_array, output_array=output_array)
return input_array, output_array
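# `numpy_input_output_from_cache` uses the same compute-once pattern at the array level: try `np.load` on the per-UID `.npz` file, and on `FileNotFoundError` fall back to computing and `np.savez`-ing the arrays. The round trip in isolation (paths and shapes here are temporary, for illustration):

```python
import pathlib
import tempfile
import numpy as np

def arrays_from_cache(npz_path, compute):
    try:
        data = np.load(npz_path)
        return data["input_array"], data["output_array"]
    except FileNotFoundError:
        input_array, output_array = compute()
        np.savez(npz_path, input_array=input_array, output_array=output_array)
        return input_array, output_array

with tempfile.TemporaryDirectory() as d:
    path = pathlib.Path(d) / "slice.npz"
    compute = lambda: (np.ones((4, 4)), np.zeros((4, 4, 2)))
    a, b = arrays_from_cache(path, compute)    # computes and writes the cache
    a2, b2 = arrays_from_cache(path, compute)  # served from the .npz file
    print(np.array_equal(a, a2), b2.shape)     # → True (4, 4, 2)
```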
# +
all_ct_uids = list(ct_image_paths.keys())
random.shuffle(all_ct_uids)
def from_numpy_generator():
for ct_uid in all_ct_uids:
input_array, output_array = numpy_input_output_from_cache(
ct_uid, structures_to_learn)
input_array = input_array[:,:,None]
yield ct_uid, input_array, output_array
from_numpy_generator_params = (
(tf.string, tf.int32, tf.float64),
(tf.TensorShape(()), tf.TensorShape([512, 512, 1]), tf.TensorShape([512, 512, 41]))
)
from_numpy_dataset = tf.data.Dataset.from_generator(
from_numpy_generator, *from_numpy_generator_params)
# +
def _bytes_feature(value):
value = value.numpy()
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def serialise(ct_uid, input_array, output_array):
ct_uid = tf.io.serialize_tensor(ct_uid)
input_array = tf.io.serialize_tensor(input_array)
output_array = tf.io.serialize_tensor(output_array)
feature = {
'ct_uid': _bytes_feature(ct_uid),
'input_array': _bytes_feature(input_array),
'output_array': _bytes_feature(output_array),
}
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
for ct_uid, input_array, output_array in from_numpy_dataset.take(1):
serialise(ct_uid, input_array, output_array)
# -
# Details on this from https://www.tensorflow.org/tutorials/load_data/tfrecord
def tf_serialise(ct_uid, input_array, output_array):
tf_string = tf.py_function(
serialise,
(ct_uid, input_array, output_array),
tf.string
)
return tf.reshape(tf_string, ())
serialised_dataset = from_numpy_dataset.map(tf_serialise)
serialised_dataset
tfrecord_path = str(tfrecord_directory.joinpath('test.tfrecord'))
writer = tf.data.experimental.TFRecordWriter(tfrecord_path)
writer.write(serialised_dataset)
| prototyping/auto-segmentation/sb/00-stage-0-prototyping/08-create-tfrecord.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .scala
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Apache Toree - Scala
// language: scala
// name: apache_toree_scala
// ---
// ## LOAD vs. INSERT
//
// Let us compare and contrast the LOAD and INSERT commands. These are the main approaches by which we get data into Spark Metastore tables.
// + tags=["remove-cell"]
// %%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/DZSIVt0sG5k?rel=0&controls=1&showinfo=0" frameborder="0" allowfullscreen></iframe>
// -
// Let us start the Spark context for this notebook so that we can execute the code provided. You can sign up for our [10 node state of the art cluster/labs](https://labs.itversity.com/plans) to learn Spark SQL using our unique integrated LMS.
val username = System.getProperty("user.name")
// +
import org.apache.spark.sql.SparkSession
val username = System.getProperty("user.name")
val spark = SparkSession.
builder.
config("spark.ui.port", "0").
config("spark.sql.warehouse.dir", s"/user/${username}/warehouse").
enableHiveSupport.
appName(s"${username} | Spark SQL - Managing Tables - DML and Partitioning").
master("yarn").
getOrCreate
// -
// If you are going to use CLIs, you can access Spark SQL using one of the following three approaches.
//
// **Using Spark SQL**
//
// ```
// spark2-sql \
// --master yarn \
// --conf spark.ui.port=0 \
// --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
// ```
//
// **Using Scala**
//
// ```
// spark2-shell \
// --master yarn \
// --conf spark.ui.port=0 \
// --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
// ```
//
// **Using Pyspark**
//
// ```
// pyspark2 \
// --master yarn \
// --conf spark.ui.port=0 \
// --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
// ```
// * LOAD copies the files, dividing them into blocks.
// * LOAD is the fastest way of getting data into Spark Metastore tables. However, it performs only minimal validations at the file level.
// * There are no transformations or validations at the data level.
// * If any transformation is required while getting data into a Spark Metastore table, then we need to use the INSERT command.
// * Here are some of the usage scenarios of INSERT:
//   * Changing delimiters in the case of text file format
//   * Changing the file format
//   * Loading data into partitioned or bucketed tables (if bucketing is supported)
//   * Applying any other transformations at the data level (widely used)
// + language="sql"
//
// USE itversity_retail
// + language="sql"
//
// DROP TABLE IF EXISTS order_items
// + language="sql"
//
// CREATE TABLE order_items (
// order_item_id INT,
// order_item_order_id INT,
// order_item_product_id INT,
// order_item_quantity INT,
// order_item_subtotal FLOAT,
// order_item_product_price FLOAT
// ) STORED AS parquet
// + language="sql"
//
// LOAD DATA LOCAL INPATH '/data/retail_db/order_items'
// INTO TABLE order_items
// -
val username = System.getProperty("user.name")
// +
import sys.process._
s"hdfs dfs -ls /user/${username}/warehouse/${username}_retail.db/order_items" !
// + language="sql"
//
// SELECT * FROM order_items LIMIT 10
| 05_dml_and_partitioning/04_load_vs_insert.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# https://www.youtube.com/watch?v=YwieiqlhTgw&list=PLbFHh-ZjYFwErD9ICH42MPeT8PQuzIj56&index=12
# -
# # Python Standard Library
# +
#Python standard library: String capitalization methods
s = "abcd"
dir(s)
# -
s.upper()
s.lower()
dir(s)
a = "ABCDfgh"
a.swapcase()
a = "This is a word"
a.capitalize()
a = "This is a word"
a.title()
# # Frozen Sets
# +
#sets of sets
s1 = {1,2,3}
s2 = {4,5,6}
s3 = {7,8,9}
s = {s1, s2, s3} # raises TypeError: unhashable type: 'set'
# -
# lists and sets are unhashable, so we can't put them inside a set
# we can put a tuple, since tuples are immutable
d = {}
d[s1] = 10 # raises TypeError: unhashable type: 'set'
# +
# dictionary keys have the same hashability requirement as set elements
# +
# The solution is frozensets
# -
fs = frozenset([10,20,30])
d[fs] = 10
d[fs]
# 1. all set operations can be performed
# 2. we can't change anything: frozensets are immutable
#
#
#
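# The two properties above can be sketched directly: frozensets support the full set algebra and are hashable, but reject mutation. A minimal illustration:

```python
fs1 = frozenset([1, 2, 3])
fs2 = frozenset([3, 4, 5])

# 1. all set operations work
print(fs1 | fs2)   # union → frozenset({1, 2, 3, 4, 5})
print(fs1 & fs2)   # intersection → frozenset({3})

# hashable, so usable as dict keys and set elements
d = {fs1: "first"}
nested = {fs1, fs2}
print(len(nested))  # → 2

# 2. contents can't be changed: there is no add/remove
try:
    fs1.add(6)
except AttributeError as e:
    print("immutable:", e)
```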
# # Python Dictionary
d = {}
d = {'a':1}
d.keys()
d.values()
d
cd = {[1,2,3]:5} # raises TypeError: unhashable type: 'list'
mydata = [("a",1)]
d = dict(mydata)
d
dict(a=1,b=2,c=3)
# +
# the simplest way is d = {}
# +
# Adding and replacing to dictionary
# -
d = {"a":1, "b":2}
d['z'] = 100
d
d['z'] = 200
# +
# Dictionaries are mutable
# -
d
# +
# We want to add x=100 only if 'x' is not already present
d.setdefault('x', 100)
# -
d
d.setdefault('x', 567) # this won't change the dict, since 'x' already exists
d
k = {"c":5}
d.update(k)
d
# +
# values from the new dict take priority: colliding keys are overwritten, new keys are added
# -
# # Byte Arrays
x = b"heello"
x
# +
# when reading from a network, we may need the data to be mutable
# e.g. image data may need to be tweaked in place
# -
g = bytearray("helo", 'utf-8')
g[0]
g[1:2]
# all the usual string-style indexing and slicing applies here
g[0] = 65
g
| Python-Standard-Library.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stumpy Tutorial Dataset Backups
#
# This notebook copies the download process in active tutorials as part of the Stumpy docs. Then exports CSVs to a local directory.
#
# The CSVs are subsequently uploaded to the Stumpy community on [Zenodo](https://zenodo.org/communities/stumpy/?page=1&size=20).
# +
import pandas as pd
import urllib
import ssl
import io
import os
from zipfile import ZipFile
from urllib.request import urlopen
from scipy.io import loadmat
context = ssl.SSLContext() # Ignore SSL certificate verification for simplicity
# -
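# Each section below follows the same pattern: fetch raw bytes over HTTPS, wrap them in an in-memory `io.BytesIO` buffer, and hand that buffer to a parser (`pd.read_csv` or `scipy.io.loadmat`). The parsing half of that pattern, demonstrated on in-memory bytes so no network is needed (the helper name and column labels are illustrative):

```python
import io
import pandas as pd

def bytes_to_df(raw_bytes, **read_csv_kwargs):
    # Wrap raw bytes in a file-like buffer so pd.read_csv can parse them
    return pd.read_csv(io.BytesIO(raw_bytes), **read_csv_kwargs)

# Stand-in for urllib.request.urlopen(url, context=context).read()
raw = b"1.0  2.0\n3.0  4.0\n"
df = bytes_to_df(raw, header=None, sep=r"\s+")
df.columns = ["a", "b"]
print(df.shape)  # → (2, 2)
```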
# ## Steamgen
# +
colnames = ['drum pressure',
'excess oxygen',
'water level',
'steam flow'
]
url = 'https://www.cs.ucr.edu/~eamonn/iSAX/steamgen.dat'
raw_bytes = urllib.request.urlopen(url, context=context).read()
data = io.BytesIO(raw_bytes)
steam_df = pd.read_csv(data, header=None, sep="\\s+")
steam_df.columns = colnames
steam_df.to_csv('STUMPY_Basics_steamgen.csv', index=False)
# -
# ## Taxi
# +
# Ref - https://github.com/stanford-futuredata/ASAP
taxi_df = pd.read_csv("https://raw.githubusercontent.com/stanford-futuredata/ASAP/master/Taxi.csv", sep=',')
taxi_df.to_csv('STUMPY_Basics_Taxi.csv', index=False)
# -
# ## Kohls
# +
url = 'https://sites.google.com/site/timeserieschain/home/Kohls_data.mat?attredirects=0&revision=1'
raw_bytes = urllib.request.urlopen(url, context=context).read()
data = io.BytesIO(raw_bytes)
mat = loadmat(data)
mdata = mat['VarName1']
mdtype = mdata.dtype
df = pd.DataFrame(mdata, dtype=mdtype, columns=['volume'])
df.to_csv('Time_Series_Chains_Kohls_data.csv', index=False)
# -
# ## TiltABP
# +
url = 'https://sites.google.com/site/timeserieschain/home/TiltABP_210_25000.txt'
raw_bytes = urllib.request.urlopen(url, context=context).read()
data = io.BytesIO(raw_bytes)
df = pd.read_csv(data, header=None)
df = df.reset_index().rename({'index': 'time', 0: 'abp'}, axis='columns')
df.to_csv('Semantic_Segmentation_TiltABP.csv', index=False)
# -
# ## Robot Dog
# +
T_url = 'https://www.cs.unm.edu/~mueen/robot_dog.txt'
T_raw_bytes = urllib.request.urlopen(T_url, context=context).read()
T_data = io.BytesIO(T_raw_bytes)
T_df = pd.read_csv(T_data, header=None, sep=r'\s+', names=['Acceleration'])
T_df.to_csv('Fast_Pattern_Searching_robot_dog.csv', index=False)
# -
# ## Carpet query
# +
Q_url = 'https://www.cs.unm.edu/~mueen/carpet_query.txt'
Q_raw_bytes = urllib.request.urlopen(Q_url, context=context).read()
Q_data = io.BytesIO(Q_raw_bytes)
Q_df = pd.read_csv(Q_data, header=None, sep=r'\s+', names=['Acceleration'])
Q_df.to_csv('carpet_query.csv', index=False)
# -
# ## Gun Point Training Data
# +
fzip = ZipFile(io.BytesIO(urlopen("http://alumni.cs.ucr.edu/~lexiangy/Shapelet/gun.zip").read()))
# training set
train = fzip.extract("gun_train")
train_df = pd.read_csv(train, sep="\\s+", header=None)
os.remove(train)
train_df.to_csv("gun_point_train_data.csv", index=False)
# -
# ## Gun Point Test Data
# +
fzip = ZipFile(io.BytesIO(urlopen("http://alumni.cs.ucr.edu/~lexiangy/Shapelet/gun.zip").read()))
test = fzip.extract("gun_test")
test_df = pd.read_csv(test, sep="\\s+", header=None)
os.remove(test)
test_df.to_csv("gun_point_test_data.csv", index=False)
# -
# ## Vanilla Ice, Queen, and <NAME> Data
# +
fzip = ZipFile(io.BytesIO(urlopen("https://www.dropbox.com/s/ybzkw5v6h46bv22/figure9_10.zip?dl=1&sa=D&sntz=1&usg=AFQjCNEDp3G8OKGC-Zj5yucpSSCz7WRpRg").read()))
mat = fzip.extract("figure9_10/data.mat")
data = loadmat(mat)
queen_df = pd.DataFrame(data['mfcc_queen'][0], columns=['under_pressure'])
vanilla_ice_df = pd.DataFrame(data['mfcc_vanilla_ice'][0], columns=['ice_ice_baby'])
queen_df.to_csv("queen.csv", index=False)
vanilla_ice_df.to_csv("vanilla_ice.csv", index=False)
# -
# ## Mitochondrial DNA (mtDNA) Data
# +
T_url = 'https://sites.google.com/site/consensusmotifs/dna.zip?attredirects=0&d=1'
T_raw_bytes = urllib.request.urlopen(T_url, context=context).read()
T_data = io.BytesIO(T_raw_bytes)
T_zipfile = ZipFile(T_data)
animals = ['python', 'hippo', 'red_flying_fox', 'alpaca']
for animal in animals:
with T_zipfile.open(f'dna/data/{animal}.mat') as f:
data = loadmat(f)['ts'].flatten().astype(float)
df = pd.DataFrame(data)
df.to_csv(f"{animal}.csv", index=False)
# -
# ## Multi-dimensional Toy Data
# +
url = "https://github.com/mcyeh/mstamp/blob/master/Python/toy_data.mat?raw=true"
raw_bytes = urllib.request.urlopen(url, context=context).read()
data = io.BytesIO(raw_bytes)
mat = loadmat(data)
mdata = mat['data']
mdtype = mdata.dtype
df = pd.DataFrame(mdata, dtype=mdtype, columns=['T3', 'T2', 'T1'])
df.to_csv("toy.csv", index=False)
| docs/Zenodo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/94JuHo/study_for_deeplearning/blob/master/intro_to_pandas.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="copyright-notice"
# #### Copyright 2017 Google LLC.
# + colab_type="code" id="copyright-notice2" cellView="both" colab={}
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="rHLcriKWLRe4"
# # A Quick Introduction to Pandas
# + [markdown] colab_type="text" id="QvJBqX8_Bctk"
# **Learning objectives:**
# * Gain an introduction to the `DataFrame` and `Series` data structures of the *pandas* library
# * Access and manipulate data within a `DataFrame` and `Series`
# * Import CSV data into a *pandas* `DataFrame`
# * Reindex a `DataFrame` to shuffle data
# + [markdown] colab_type="text" id="TIFJ83ZTBctl"
# [*Pandas*](http://Pandas.pydata.org/) is a column-oriented data analysis API. It's a great tool for handling and analyzing input data, and many ML frameworks support *pandas* data structures as inputs.
# Although a comprehensive introduction to the *pandas* API would span many pages, the core concepts are fairly straightforward, and we present them below. For a more complete reference, the [*pandas* docs site](http://pandas.pydata.org/pandas-docs/stable/index.html) contains extensive documentation and many tutorials.
# + [markdown] colab_type="text" id="s_JOISVgmn9v"
# ## Basic Concepts
#
# The following line imports the *pandas* API and prints the API version:
# + colab_type="code" id="aSRYu62xUi3g" outputId="2681edac-3ff0-43a5-f23c-fbb69cc2cca6" colab={"base_uri": "https://localhost:8080/", "height": 34}
from __future__ import print_function
import pandas as pd
pd.__version__
# + [markdown] colab_type="text" id="daQreKXIUslr"
# The primary data structures in *pandas* are implemented as two classes:
#
# * **`DataFrame`**, which you can imagine as a relational data table, with rows and named columns.
# * **`Series`**, which is a single column. A `DataFrame` contains one or more `Series` and a name for each `Series`.
#
# The data frame is a commonly used abstraction for data manipulation. Similar implementations exist in [Spark](https://spark.apache.org/) and [R](https://www.r-project.org/about.html).
# + [markdown] colab_type="text" id="fjnAk1xcU0yc"
# One way to create a `Series` is to construct a `Series` object. For example:
# + colab_type="code" id="DFZ42Uq7UFDj" outputId="704255e3-840c-49e1-e05b-edde8dbe6fdd" colab={"base_uri": "https://localhost:8080/", "height": 85}
pd.Series(['San Francisco', 'San Jose', 'Sacramento'])
# + [markdown] colab_type="text" id="U5ouUp1cU6pC"
# `DataFrame` objects can be created by passing a `dict` mapping `string` column names to their respective `Series`. If the `Series` don't match in length, missing values are filled with special [NA/NaN](http://pandas.pydata.org/pandas-docs/stable/missing_data.html) values. For example:
# + colab_type="code" id="avgr6GfiUh8t" outputId="1d3bc273-62f3-4f2d-da75-9432a1d2d601" colab={"base_uri": "https://localhost:8080/", "height": 142}
city_names = pd.Series(['San Francisco', 'San Jose', 'Sacramento'])
population = pd.Series([852469, 1015785, 485199])
pd.DataFrame({ 'City name': city_names, 'Population': population })
# + [markdown] colab_type="text" id="oa5wfZT7VHJl"
# But most of the time, you load an entire file into a `DataFrame`. The following example loads a file with California housing data. Run the following cell to load the data and create feature definitions:
# + colab_type="code" id="av6RYOraVG1V" outputId="822865b2-4abc-4229-f9b7-fcafed3634fa" colab={"base_uri": "https://localhost:8080/", "height": 297}
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe.describe()
# + [markdown] colab_type="text" id="WrkBjfz5kEQu"
# The example above used `DataFrame.describe` to show interesting statistics about a `DataFrame`. Another useful function is `DataFrame.head`, which displays the first few records of a `DataFrame`:
# + colab_type="code" id="s3ND3bgOkB5k" outputId="b7db6801-a48d-4528-9efb-c2529907b9a7" colab={"base_uri": "https://localhost:8080/", "height": 204}
california_housing_dataframe.head()
# + [markdown] colab_type="text" id="w9-Es5Y6laGd"
# Another powerful feature of *pandas* is graphing. For example, `DataFrame.hist` lets you quickly study the distribution of values in a column:
# + colab_type="code" id="nqndFVXVlbPN" outputId="777a649f-a586-4b5a-d34c-8151aff28749" colab={"base_uri": "https://localhost:8080/", "height": 315}
california_housing_dataframe.hist('housing_median_age')
# + [markdown] colab_type="text" id="XtYZ7114n3b-"
# ## Accessing Data
#
# You can access `DataFrame` data using familiar Python dict/list operations:
# + colab_type="code" id="_TFm7-looBFF" outputId="679122eb-422f-4f8b-badb-9452b7c5e4c5" colab={"base_uri": "https://localhost:8080/", "height": 102}
cities = pd.DataFrame({ 'City name': city_names, 'Population': population })
print(type(cities['City name']))
cities['City name']
# + colab_type="code" id="V5L6xacLoxyv" outputId="e2baa486-df93-41e5-85cb-dc32d00f257f" colab={"base_uri": "https://localhost:8080/", "height": 51}
print(type(cities['City name'][1]))
cities['City name'][1]
# + colab_type="code" id="gcYX1tBPugZl" outputId="4f6c22fd-9eae-4647-ae70-b122993da198" colab={"base_uri": "https://localhost:8080/", "height": 128}
print(type(cities[0:2]))
cities[0:2]
# + [markdown] colab_type="text" id="65g1ZdGVjXsQ"
# In addition, *pandas* provides an extremely rich API for advanced [indexing and selection](http://Pandas.pydata.org/Pandas-docs/stable/indexing.html) that is too extensive to be covered here.
# + [markdown] colab_type="text" id="RM1iaD-ka3Y1"
# ## Manipulating Data
#
# You may apply Python's basic arithmetic operations to `Series`. For example:
# + colab_type="code" id="XWmyCFJ5bOv-" outputId="56b2a614-7a90-4881-9f41-5364f2edc268" colab={"base_uri": "https://localhost:8080/", "height": 85}
population / 1000.
# + [markdown] colab_type="text" id="TQzIVnbnmWGM"
# [NumPy](http://www.numpy.org/) is a popular toolkit for scientific computing. *pandas* `Series` can be used as arguments to most NumPy functions:
# + colab_type="code" id="ko6pLK6JmkYP" outputId="c64b701e-154b-4a8b-b3df-78430dc23167" colab={"base_uri": "https://localhost:8080/", "height": 85}
import numpy as np
np.log(population)
# + [markdown] colab_type="text" id="xmxFuQmurr6d"
# For more complex single-column transformations, you can use `Series.apply`. Like the Python [map function](https://docs.python.org/2/library/functions.html#map),
# `Series.apply` accepts as an argument a [lambda function](https://docs.python.org/2/tutorial/controlflow.html#lambda-expressions), which is applied to each value.
#
# The example below creates a new `Series` that indicates whether `population` is over one million:
# + colab_type="code" id="Fc1DvPAbstjI" outputId="459ed45c-ed84-4e04-8fd7-4a30533a92f5" colab={"base_uri": "https://localhost:8080/", "height": 85}
population.apply(lambda val: val > 1000000)
# + [markdown] colab_type="text" id="ZeYYLoV9b9fB"
#
# Modifying `DataFrames` is also straightforward. For example, the following code adds two `Series` to an existing `DataFrame`:
# + colab_type="code" id="0gCEX99Hb8LR" outputId="03bb44f7-b88b-4489-dbee-411716a805f1" colab={"base_uri": "https://localhost:8080/", "height": 142}
cities['Area square miles'] = pd.Series([46.87, 176.53, 97.92])
cities['Population density'] = cities['Population'] / cities['Area square miles']
cities
# + [markdown] colab_type="text" id="6qh63m-ayb-c"
# ## Exercise #1
#
# Modify the `cities` table by adding a new boolean column that is True if and only if *both* of the following are True:
#
# * The city is named after a saint.
# * The city has an area greater than 50 square miles.
#
# **Note:** Boolean `Series` are combined using the bitwise, rather than the traditional boolean, operators. For example, when performing *logical and*, use `&` instead of `and`.
#
# **Hint:** "San" in Spanish means "saint."
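# As a quick illustration of the note above (a toy example, not part of the exercise): `&` combines boolean `Series` element-wise, while the plain `and` keyword raises a `ValueError` because the truth value of a multi-element `Series` is ambiguous.

```python
import pandas as pd

a = pd.Series([True, False, True])
b = pd.Series([True, True, False])

# Element-wise logical and via the bitwise operator:
print((a & b).tolist())  # [True, False, False]

# The `and` keyword cannot operate on whole Series:
try:
    result = a and b
except ValueError as err:
    print("`and` raised a", type(err).__name__)
```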
# + colab_type="code" id="zCOn8ftSyddH" outputId="05e8180b-003b-4de6-ac3e-755702c1b990" colab={"base_uri": "https://localhost:8080/", "height": 142}
# Your code here
# Note: the 'Area square miles' column is in square miles, so the threshold is 50 (not 130, which is the equivalent in square kilometers)
cities['Is wide and has saint name'] = (cities['Area square miles'] > 50) & cities['City name'].apply(lambda name: name.startswith('San'))
cities
# + [markdown] colab_type="text" id="YHIWvc9Ms-Ll"
# ### Solution
#
# Click below for a solution.
# + colab_type="code" id="T5OlrqtdtCIb" outputId="117d32e5-779d-45b9-8313-f1169883c19c" colab={"base_uri": "https://localhost:8080/", "height": 142}
cities['Is wide and has saint name'] = (cities['Area square miles'] > 50) & cities['City name'].apply(lambda name: name.startswith('San'))
cities
# + [markdown] colab_type="text" id="f-xAOJeMiXFB"
# ## Indexes
# Both `Series` and `DataFrame` objects also define an `index` property that assigns an identifier value to each `Series` item or `DataFrame` row.
#
# By default, at construction, *pandas* assigns index values that reflect the ordering of the source data. Once created, the index values are stable; that is, they do not change when data is reordered.
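# A minimal sketch (not from the original notebook) showing that index labels travel with their rows when the data is reordered:

```python
import pandas as pd

s = pd.Series([30, 10, 20])
print(s.index.tolist())                # [0, 1, 2]
print(s.sort_values().index.tolist())  # [1, 2, 0] -- the labels follow the values
```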
# + colab_type="code" id="2684gsWNinq9" outputId="adce858f-0a45-440b-ba20-06387f74d924" colab={"base_uri": "https://localhost:8080/", "height": 34}
city_names.index
# + colab_type="code" id="F_qPe2TBjfWd" outputId="f035420b-e521-41e2-cd73-c6b6163af68c" colab={"base_uri": "https://localhost:8080/", "height": 34}
cities.index
# + [markdown] colab_type="text" id="hp2oWY9Slo_h"
# Call `DataFrame.reindex` to manually reorder the rows. For example, the following has the same effect as sorting by city name:
# + colab_type="code" id="sN0zUzSAj-U1" outputId="ff0aadc9-2983-441c-ccc0-692823b9d13a" colab={"base_uri": "https://localhost:8080/", "height": 142}
cities.reindex([2, 0, 1])
# + [markdown] colab_type="text" id="-GQFz8NZuS06"
# Reindexing is a great way to shuffle (randomize) a `DataFrame`. In the example below, we take the index, which is array-like, and pass it to NumPy's `random.permutation` function, which shuffles its values. Calling `reindex` with this shuffled array causes the `DataFrame` rows to be shuffled in the same way.
# Try running the following cell multiple times!
# + colab_type="code" id="mF8GC0k8uYhz" outputId="dfacdf40-e3c2-431e-db0a-fb41015e4b35" colab={"base_uri": "https://localhost:8080/", "height": 142}
cities.reindex(np.random.permutation(cities.index))
# + [markdown] colab_type="text" id="fSso35fQmGKb"
# For more information, see the [Index documentation](http://pandas.pydata.org/pandas-docs/stable/indexing.html#index-objects).
# + [markdown] colab_type="text" id="8UngIdVhz8C0"
# ## Exercise #2
#
# The `reindex` method allows index values that are not in the original `DataFrame`'s index. Try it and see what happens if you use such values! Why do you think this is allowed?
# + colab_type="code" id="PN55GrDX0jzO" outputId="29a1ab28-58a2-43e2-90fa-abea2fb6a96a" colab={"base_uri": "https://localhost:8080/", "height": 142}
# Your code here
cities.reindex([2, 6, 1])
# + id="AxJPhgsM7X0z" colab_type="code" outputId="f0169480-e4fd-435b-90a9-1e4f93efd1eb" colab={"base_uri": "https://localhost:8080/", "height": 142}
cities
# + [markdown] colab_type="text" id="TJffr5_Jwqvd"
# ### Solution
#
# Click below for a solution.
# + [markdown] colab_type="text" id="8oSvi2QWwuDH"
# If your `reindex` input array includes values not in the original `DataFrame` index, `reindex` will add new rows for these 'missing' indices and populate all corresponding columns with `NaN` values:
# + colab_type="code" id="yBdkucKCwy4x" colab={}
cities.reindex([0, 4, 5, 2])
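# The same behavior in miniature (a hypothetical two-row example, not the `cities` data): a label absent from the original index yields a row filled with `NaN`:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0], index=["a", "b"])
r = s.reindex(["a", "c"])
print(r["a"])            # 1.0 -- existing label kept
print(np.isnan(r["c"]))  # True -- 'missing' label filled with NaN
```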
# + [markdown] colab_type="text" id="2l82PhPbwz7g"
# 색인은 보통 실제 데이터에서 가져온 문자열이기 때문에 이 동작이 바람직합니다([*Pandas* 색인 재생성 문서](http://Pandas.pydata.org/Pandas-docs/stable/generated/Pandas.DataFrame.reindex.html)에서 색인 값이 브라우저 이름인 예제 참조).
#
# 이 경우 \'누락된\' 색인을 허용하면 외부 목록을 사용하여 쉽게 색인을 다시 생성할 수 있으므로, 입력 처리에 대해 걱정하지 않아도 됩니다.
| intro_to_pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="K8HAOWyDgPiH"
# # Shallow methods for supervised learning
# + [markdown] id="v6w1KA97gXao"
# In this notebook we will explore a very naive (yet powerful) approach to graph-based supervised machine learning. The idea relies on the classic machine learning approach of handcrafted feature extraction.
#
# In Chapter 1 you learned how local and global properties can be extracted from graphs. Those properties describe the graph itself and carry important information that can be useful for classification.
# + id="5k3sYIRJpMgb"
# !pip install stellargraph
# + [markdown] id="yWL_AuChPcYS"
# In this demo, we will be using the PROTEINS dataset, which is already integrated in StellarGraph.
# + colab={"base_uri": "https://localhost:8080/", "height": 51} id="gS5B47T2gWll" outputId="4020adc2-75b7-4aa5-b480-24d2693a8a74"
from stellargraph import datasets
from IPython.display import display, HTML
dataset = datasets.PROTEINS()
display(HTML(dataset.description))
graphs, graph_labels = dataset.load()
# + [markdown] id="YDlUMUUFLrjh"
# To compute the graph metrics, one way is to retrieve the adjacency matrix representation of each graph.
# + id="qsOw9zFwrxDe"
# convert graphs from StellarGraph format to numpy adj matrices
adjs = [graph.to_adjacency_matrix().A for graph in graphs]
# convert labels from pandas.Series to numpy array
labels = graph_labels.to_numpy(dtype=int)
# + id="6S5M5mL2t-ik"
import numpy as np
import networkx as nx
metrics = []
for adj in adjs:
G = nx.from_numpy_matrix(adj)
# basic properties
num_edges = G.number_of_edges()
# clustering measures
cc = nx.average_clustering(G)
# measure of efficiency
eff = nx.global_efficiency(G)
metrics.append([num_edges, cc, eff])
# + [markdown] id="1_a5CiZKL4vW"
# We can now use scikit-learn utilities to create the train and test sets. In our experiments, we will use 70% of the dataset as the training set and the remaining 30% as the test set.
# + id="NRrNPqOxu7eY"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(metrics, labels, test_size=0.3, random_state=42)
# + [markdown] id="TMIF1weiMO0F"
# As is commonly done in many machine learning workflows, we preprocess the features to have zero mean and unit standard deviation.
# + id="9qUjNhPru6ni"
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
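# Under the hood, this standardization applies z = (x - mean) / std per column; a NumPy-only sketch of the transform (toy data, not the PROTEINS features):

```python
import numpy as np

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
mu, sigma = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sigma

# Each column of Z now has zero mean and unit standard deviation:
print(np.allclose(Z.mean(axis=0), 0.0))  # True
print(np.allclose(Z.std(axis=0), 1.0))   # True
```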
# + [markdown] id="QqaZzejRMdmu"
# It's now time to train a proper algorithm. We choose a support vector machine for this task.
# + colab={"base_uri": "https://localhost:8080/"} id="L3A6_fh0OV9x" outputId="6297d8fe-3cc9-435b-e8fe-b50425aaee24"
from sklearn import svm
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
clf = svm.SVC()
clf.fit(X_train_scaled, y_train)
y_pred = clf.predict(X_test_scaled)
print('Accuracy', accuracy_score(y_test,y_pred))
print('Precision', precision_score(y_test,y_pred))
print('Recall', recall_score(y_test,y_pred))
print('F1-score', f1_score(y_test,y_pred))
# + [markdown] id="NBVKcDWHeGoR"
# # Supervised graph representation learning using Graph ConvNet
# + [markdown] id="lb6FvAQ3eUNs"
# In this notebook we will perform supervised graph representation learning using a Deep Graph ConvNet as the encoder.
#
# The model embeds a graph by using stacked Graph ConvNet layers.
# + [markdown] id="VHU1UGiHfw1e"
# In this demo, we will be using the PROTEINS dataset, which is already integrated in StellarGraph.
# + colab={"base_uri": "https://localhost:8080/", "height": 51} id="_8SDtHy1PfNx" outputId="aa1f8875-ab8d-42b6-eadc-8084f1796cc1"
import pandas as pd
from stellargraph import datasets
from IPython.display import display, HTML
dataset = datasets.PROTEINS()
display(HTML(dataset.description))
graphs, graph_labels = dataset.load()
labels = graph_labels.to_numpy(dtype=int)
# necessary for converting default string labels to int
graph_labels = pd.get_dummies(graph_labels, drop_first=True)
# + [markdown] id="_uEUzYBIM6cK"
# StellarGraph, which we are using to build the model, uses tf.Keras as its backend. According to its specifications, we need a data generator to feed the model. For supervised graph classification, we create an instance of StellarGraph's PaddedGraphGenerator class. This generator supplies the feature arrays and the adjacency matrices to a mini-batch Keras graph classification model. Differences in the number of nodes are resolved by padding each batch of features and adjacency matrices, and by supplying a boolean mask indicating which entries are valid and which are padding.
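# The padding idea described above can be sketched with plain NumPy (toy adjacency matrices, not StellarGraph's actual implementation): smaller graphs are zero-padded to the largest size in the batch, and a boolean mask marks which nodes are real:

```python
import numpy as np

# Two toy adjacency matrices of different sizes (hypothetical graphs).
A1 = np.array([[0, 1], [1, 0]])
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])

n_max = max(A1.shape[0], A2.shape[0])

def pad(adj, n):
    # Zero-pad a square adjacency matrix to size n x n.
    out = np.zeros((n, n), dtype=adj.dtype)
    out[:adj.shape[0], :adj.shape[0]] = adj
    return out

batch = np.stack([pad(A1, n_max), pad(A2, n_max)])
mask = np.array([[True] * A1.shape[0] + [False] * (n_max - A1.shape[0]),
                 [True] * A2.shape[0]])
print(batch.shape)    # (2, 3, 3)
print(mask.tolist())  # [[True, True, False], [True, True, True]]
```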
# + id="i34QgSA_P_sM"
from stellargraph.mapper import PaddedGraphGenerator
generator = PaddedGraphGenerator(graphs=graphs)
# + [markdown] id="YepZYuk_NWWT"
# Now we are ready to actually create the model. The GCN layers will be created and stacked together through StellarGraph's utility function. This _backbone_ will then be followed by 1D convolutional layers and fully connected layers built with tf.Keras.
# + id="qF8DHIalQuIW"
from stellargraph.layer import DeepGraphCNN
from tensorflow.keras import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Dense, Conv1D, MaxPool1D, Dropout, Flatten
from tensorflow.keras.losses import binary_crossentropy
import tensorflow as tf
nrows = 35 # the number of rows for the output tensor
layer_dims = [32, 32, 32, 1]
dgcnn_model = DeepGraphCNN(
layer_sizes=layer_dims,
activations=["tanh", "tanh", "tanh", "tanh"],
k=nrows,
bias=False,
generator=generator,
)
gnn_inp, gnn_out = dgcnn_model.in_out_tensors()
x_out = Conv1D(filters=16, kernel_size=sum(layer_dims), strides=sum(layer_dims))(gnn_out)
x_out = MaxPool1D(pool_size=2)(x_out)
x_out = Conv1D(filters=32, kernel_size=5, strides=1)(x_out)
x_out = Flatten()(x_out)
x_out = Dense(units=128, activation="relu")(x_out)
x_out = Dropout(rate=0.5)(x_out)
predictions = Dense(units=1, activation="sigmoid")(x_out)
# + [markdown] id="ZOj3TjPIN4ev"
# Let's now compile the model
# + id="clWqCmfLJjBF"
model = Model(inputs=gnn_inp, outputs=predictions)
model.compile(optimizer=Adam(lr=0.0001), loss=binary_crossentropy, metrics=["acc"])
# + [markdown] id="4ZnhaSMDN9ii"
# We use 70% of the dataset for training and the remaining 30% for testing.
# + id="j3Hr6_FyJ5m4"
from sklearn import model_selection
train_graphs, test_graphs = model_selection.train_test_split(
graph_labels, test_size=.3, stratify=labels,
)
# + id="p9_2ybPqJ-3B"
gen = PaddedGraphGenerator(graphs=graphs)
train_gen = gen.flow(
list(train_graphs.index - 1),
targets=train_graphs.values,
symmetric_normalization=False,
batch_size=50,
)
test_gen = gen.flow(
list(test_graphs.index - 1),
targets=test_graphs.values,
symmetric_normalization=False,
batch_size=1,
)
# + [markdown] id="wCNr8_IsOIbQ"
# It's now time for training!
# + colab={"base_uri": "https://localhost:8080/"} id="h3b2BNJUKKas" outputId="e3c8b8e7-0beb-4479-9829-793f781e1a03"
epochs = 100
history = model.fit(
train_gen, epochs=epochs, verbose=1, validation_data=test_gen, shuffle=True,
)
# + id="gdPBykJ4KPrV"
# https://stellargraph.readthedocs.io/en/stable/demos/graph-classification/index.html
# + [markdown] id="T1lM0v05_zJe"
# ## Supervised node representation learning using GraphSAGE
# + colab={"base_uri": "https://localhost:8080/", "height": 51} id="gERK1Zen_xL7" outputId="5959207e-3139-4262-fdbf-502d255bc826"
from stellargraph import datasets
from IPython.display import display, HTML
dataset = datasets.Cora()
display(HTML(dataset.description))
G, nodes = dataset.load()
# + [markdown] id="xrkhfxtenQ4i"
# Let's split the dataset into training and test sets
# + id="RZsS_u7v_5vc"
from sklearn.model_selection import train_test_split
train_nodes, test_nodes = train_test_split(
nodes, train_size=0.1, test_size=None, stratify=nodes
)
# + [markdown] id="mm4-Me5GnVce"
# Since we are performing categorical classification, it is useful to represent each categorical label as a one-hot encoding
# + id="dP-sXgekAFOY"
from sklearn import preprocessing, feature_extraction, model_selection
label_encoding = preprocessing.LabelBinarizer()
train_labels = label_encoding.fit_transform(train_nodes)
test_labels = label_encoding.transform(test_nodes)
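# One-hot encoding maps each class label to a binary indicator vector with a single 1 in the position of its class; a NumPy-only sketch (hypothetical labels, not the Cora subjects):

```python
import numpy as np

labels = np.array(["cat", "dog", "cat", "bird"])
classes = np.unique(labels)  # sorted unique classes
one_hot = (labels[:, None] == classes[None, :]).astype(int)
print(classes.tolist())  # ['bird', 'cat', 'dog']
print(one_hot.tolist())  # [[0, 1, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0]]
```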
# + [markdown] id="KJSpM4cnnfUN"
# It's now time to create the model. It will be composed of three stacked GraphSAGE layers followed by a Dense layer with a softmax activation for classification
# + id="BP5G49akANxy"
from stellargraph.mapper import GraphSAGENodeGenerator
batchsize = 50
n_samples = [10, 5, 7]
generator = GraphSAGENodeGenerator(G, batchsize, n_samples)
# + id="qkgF2VvWAlct"
from stellargraph.layer import GraphSAGE
from tensorflow.keras.layers import Dense
graphsage_model = GraphSAGE(
layer_sizes=[32, 32, 16], generator=generator, bias=True, dropout=0.6,
)
# + id="S_g_yOhjAovl"
gnn_inp, gnn_out = graphsage_model.in_out_tensors()
outputs = Dense(units=train_labels.shape[1], activation="softmax")(gnn_out)
# + id="HeMlOuDnA9B_"
from tensorflow.keras.losses import categorical_crossentropy
from keras.models import Model
from keras.optimizers import Adam
model = Model(inputs=gnn_inp, outputs=outputs)
model.compile(optimizer=Adam(lr=0.003), loss=categorical_crossentropy, metrics=["acc"],)
# + [markdown] id="MqF4EWFbnwU9"
# We will use the generator's `flow` function to feed the model with the training and test sets.
# + id="x-5xmzRqBDCX"
train_gen = generator.flow(train_nodes.index, train_labels, shuffle=True)
test_gen = generator.flow(test_nodes.index, test_labels)
# + [markdown] id="952-5V6Xn45o"
# Finally, let's train the model!
# + id="sS3vnQ_HBZxK"
history = model.fit(train_gen, epochs=20, validation_data=test_gen, verbose=2, shuffle=False)
# + id="fGJXHirZBcuL"
# + id="iFZJ2OJcoQgi"
| Chapter04/04_Graph_Neural_Networks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Use this file to zip or unzip files in the data folder
# +
import os
from os.path import expanduser
import glob
from tqdm.notebook import tqdm
from functions.file_tools.zip import recursive_zip, recursive_unzip, recursive_rm
# -
data_dir = "data"
# # Recursive Unzip
recursive_unzip(data_dir)
# # Zip Files
recursive_zip(data_dir)
# # Recursive Remove
# +
# Beware!
# recursive_rm(data_dir, 'json')
# -
| zip.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Discrete Choice Models Overview
import numpy as np
import statsmodels.api as sm
# ## Data
#
# Load data from Spector and Mazzeo (1980). Examples follow Greene's Econometric Analysis Ch. 21 (5th Edition).
spector_data = sm.datasets.spector.load(as_pandas=False)
spector_data.exog = sm.add_constant(spector_data.exog, prepend=False)
# Inspect the data:
print(spector_data.exog[:5,:])
print(spector_data.endog[:5])
# ## Linear Probability Model (OLS)
lpm_mod = sm.OLS(spector_data.endog, spector_data.exog)
lpm_res = lpm_mod.fit()
print('Parameters: ', lpm_res.params[:-1])
# ## Logit Model
logit_mod = sm.Logit(spector_data.endog, spector_data.exog)
logit_res = logit_mod.fit(disp=0)
print('Parameters: ', logit_res.params)
# Marginal Effects
margeff = logit_res.get_margeff()
print(margeff.summary())
# As in all the discrete data models presented below, we can print a nice summary of results:
print(logit_res.summary())
# ## Probit Model
probit_mod = sm.Probit(spector_data.endog, spector_data.exog)
probit_res = probit_mod.fit()
probit_margeff = probit_res.get_margeff()
print('Parameters: ', probit_res.params)
print('Marginal effects: ')
print(probit_margeff.summary())
# ## Multinomial Logit
# Load data from the American National Election Studies:
anes_data = sm.datasets.anes96.load(as_pandas=False)
anes_exog = anes_data.exog
anes_exog = sm.add_constant(anes_exog, prepend=False)
# Inspect the data:
print(anes_data.exog[:5,:])
print(anes_data.endog[:5])
# Fit MNL model:
mlogit_mod = sm.MNLogit(anes_data.endog, anes_exog)
mlogit_res = mlogit_mod.fit()
print(mlogit_res.params)
# ## Poisson
#
# Load the Rand data. Note that this example is similar to Cameron and Trivedi's `Microeconometrics` Table 20.5, but it is slightly different because of minor changes in the data.
rand_data = sm.datasets.randhie.load(as_pandas=False)
rand_exog = rand_data.exog.view(float).reshape(len(rand_data.exog), -1)
rand_exog = sm.add_constant(rand_exog, prepend=False)
# Fit Poisson model:
poisson_mod = sm.Poisson(rand_data.endog, rand_exog)
poisson_res = poisson_mod.fit(method="newton")
print(poisson_res.summary())
# ## Negative Binomial
#
# The negative binomial model gives slightly different results.
mod_nbin = sm.NegativeBinomial(rand_data.endog, rand_exog)
res_nbin = mod_nbin.fit(disp=False)
print(res_nbin.summary())
# ## Alternative solvers
#
# The default method for fitting discrete data MLE models is Newton-Raphson. You can use other solvers by using the ``method`` argument:
mlogit_res = mlogit_mod.fit(method='bfgs', maxiter=250)
print(mlogit_res.summary())
| examples/notebooks/discrete_choice_overview.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import time
import cv2
from pySINDy.sindypde import SINDyPDE
from pySINDy import SINDy
from pySINDy.sindybase import SINDyBase
from pySINDy.sindylibr import SINDyLibr
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
def plotUV(Un, Vn, du_dt, dv_dt,x):
##--SAMPLING--##
s=[78,56]
U_ds=np.zeros(s)
V_ds=np.zeros(s)
d_u=np.zeros(s)
d_v=np.zeros(s)
for i in range(1,s[0]):
for j in range(1,s[1]):
U_ds[i,j]=Un[10*i,10*j]
V_ds[i,j]=Vn[10*i,10*j]
d_u[i,j]=np.reshape(du_dt,file_dim)[10*i,10*j]
d_v[i,j]=np.reshape(dv_dt,file_dim)[10*i,10*j]
fig, ax = plt.subplots(figsize=(s[1],s[0]))
x_pos = np.arange(0,s[1],1)
y_pos = np.arange(0,s[0],1)
ax.quiver(x_pos,y_pos, U_ds[:,:], V_ds[:,:], width=0.001)
ax.set_title('Plotting motion vectors')
#plt.show()
U_ds+=10*d_u
V_ds+=10*d_v
fig, ax = plt.subplots(figsize=(s[1],s[0]))
x_pos = np.arange(0,s[1],1)
y_pos = np.arange(0,s[0],1)
ax.quiver(x_pos,y_pos, U_ds[:,:], V_ds[:,:], width=0.0005)
ax.set_title('Plotting motion vectors')
filename="%s/%i_%i_Im%i.png"%(savepath,pred_date,pred_times[x],x)
plt.savefig(filename)
plt.close()
def trainSINDy(U,V,dx,dy,dt):
model = SINDyPDE(name='SINDyPDE model for Reaction-Diffusion Eqn')
start_train=time.time()
#model.fit(self, data, poly_degree=2, cut_off=1e-3)
model.fit({'u': U[:,:,-1*train_size:], 'v': V[:,:,-1*train_size:]}, dt, [dx, dy], space_deriv_order=2, poly_degree=2, sample_rate=0.01, cut_off=0.01, deriv_acc=2)
print("\n--- Train time %s seconds ---\n" %(time.time() - start_train))
#print("\n--- Active terms ---\n" )
size=np.shape(model.coefficients)
cnt=0
for i in range(size[0]):
for j in range(size[1]):
if (model.coefficients[i,j])!=0:
#print(model.coefficients[i,j],"--",model.descriptions[i])
cnt+=1
print("Train Success..")
print("--- Active terms %s ---\n" %cnt)
return U, V, model.coefficients
def testSINDy(U,V,dx,dy,dt,coeff,x):
model2 = SINDyLibr(name='Derived module from sindybase.py for libr computation')
libx=model2.libr({'u': U[:,:,-1*train_size:], 'v': V[:,:,-1*train_size:]}, dt, [dx,dy], space_deriv_order=2, poly_degree=2, sample_rate=0.01, cut_off=0.01, deriv_acc=2)
duv_dt=np.matmul(libx,coeff)
du_dt=duv_dt[:,0]
dv_dt=duv_dt[:,1]
U_nxt=np.reshape(U,file_dim)+np.reshape(du_dt,file_dim)
V_nxt=np.reshape(V,file_dim)+np.reshape(dv_dt,file_dim)
Uname="%s/%i_U%i.csv"%(savepath,pred_date,x)
Vname="%s/%i_V%i.csv"%(savepath,pred_date,x)
np.savetxt(Uname,np.reshape(U,file_dim),delimiter=',')
np.savetxt(Vname,np.reshape(V,file_dim),delimiter=',')
print("Test Success..")
#plotUV(U_nxt,V_nxt,du_dt,dv_dt,x)
return U_nxt, V_nxt
def loadUV(loadpath):
start_load = time.time()
U_mat=np.zeros(file_dim)
V_mat=np.zeros(file_dim)
print("\n--- Loading UV-Data.. ---\n")
for i in range (len(inp_times)-1):
#print("%i_%i00_%i_%i00_u.csv"%(pred_date,pred_times[i],pred_date,pred_times[i+1]))
#print("%i_%i00_%i_%i00_v.csv"%(pred_date,pred_times[i],pred_date,pred_times[i+1]))
# fixed: the format string has five placeholders, so all five arguments must be supplied (mirroring the V line below)
U_mat = np.dstack([U_mat,pd.read_csv("%s/%i_%s_%i_%s_u.csv"%(loadpath,pred_date,inp_times[i],pred_date,inp_times[i+1]),sep=',',header=None).values])
V_mat = np.dstack([V_mat,pd.read_csv("%s/%i_%s_%i_%s_v.csv"%(loadpath,pred_date,inp_times[i],pred_date,inp_times[i+1]),sep=',',header=None).values])
print("\n--- Load complete.. Time: %s seconds ---\n" %(time.time() - start_load))
return U_mat[:,:,-1*train_size:], V_mat[:,:,-1*train_size:]
def loadUV_a(loadpath):
start_load = time.time()
U_mat=np.zeros(file_dim)
V_mat=np.zeros(file_dim)
print("\n--- Loading UV-Data.. ---\n")
for i in range (len(fnames)-1):
print(fnames[i])
U_mat = np.dstack([U_mat,pd.read_csv("%s/%s_u.csv"%(loadpath,fnames[i]),sep=',',header=None).values])
V_mat = np.dstack([V_mat,pd.read_csv("%s/%s_v.csv"%(loadpath,fnames[i]),sep=',',header=None).values])
print("\n--- Load complete.. Time: %s seconds ---\n" %(time.time() - start_load))
return U_mat[:,:,-1*train_size:], V_mat[:,:,-1*train_size:]
def gettimes(pred_time):
# All 10-minute timestamps of a day as a 6x24 array:
# row m holds minute m*10, column h holds hour h ("HHM0").
times = np.array([["%02d%d0" % (h, m) for h in range(24)] for m in range(6)])
for i in range(24):
#print(times[5,i-1])
if pred_time==times[5,i-1]:
pred_times=np.hstack([times[:,i],times[:,i+1],times[:,i+2]]) #next 18 frames
#print("PredTimes:",pred_times)
inp_times=np.hstack([times[:,i-2],times[:,i-1]]) #prev $train_size frames
#print("InpTimes:",inp_times[-1*(train_size+1):])
inp_times=inp_times[-1*(train_size+1):]
#print("Get Time Success..")
return inp_times, pred_times
# +
fnames = ['201605291725_88_201605291735_90','201605291825_100_201605291835_102',
'201605291735_90_201605291745_92','201605291835_102_201605291845_104',
'201605291745_92_201605291755_94','201605291845_104_201605291855_106',
'201605291755_94_201605291805_96','201605291855_106_201605291905_108',
'201605291805_96_201605291815_98','201605291905_108_201605291915_110',
'201605291815_98_201605291825_100']
fnames.sort()
##Data Params##
pred_date = 20160529 #YYYYMMDD
#pred_time = '1915' #HHMM(HH50)
file_dim = [900,900]
train_size = 10
start_prog=time.time() #tic
#inp_times, pred_times = gettimes(pred_time)
##I/O Paths##
loadpath = '../../../Datasets/sindy/For29052016_1915'
savepath = "Outputs/%i"%(pred_date)
os.makedirs(savepath, exist_ok=True)
U,V = loadUV_a(loadpath)
# -
np.shape(U)
np.shape(V)
np.isnan(U).any()
np.isnan(V).any()
# +
#Spatio-temporal resolution
x_step = 1
y_step = 1
t_step = 10.0
for i in range(18):
print("\n--- Sequence %i ---\n" %i)
#if(i==0):
U, V, coeff = trainSINDy(U,V,x_step,y_step,t_step)
U_nxt, V_nxt = testSINDy(U[:,:,-1:],V[:,:,-1:],x_step,y_step,t_step, coeff, i)
U=np.dstack([U,U_nxt])
V=np.dstack([V,V_nxt])
print("\n--- Exec time %s seconds ---\n" %(time.time() - start_prog)) #tac
print("SINDy complete. Files saved to %s" %(savepath))
| examples/TestScript2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from os.path import join
plt.style.use(["seaborn", "thesis"])
# -
# # Fetch Dataset
# +
from SCFInitialGuess.utilities.dataset import extract_triu_batch, AbstractDataset
from sklearn.model_selection import train_test_split
from pyscf.scf import hf
data_path = "../../dataset/EthenT/"
postfix = "EthenT"
dim = 72
N_ELECTRONS = 16
basis = "6-311++g**"
#data_path = "../../../cc2ai/ethen/"
#postfix = "_ethen_6-31g**"
#dim = 48
#N_ELECTRONS = 16
#basis = "6-31g**"
#data_path = "../../dataset/TSmall_sto3g"
#postfix = "TSmall_sto3g"
#dim = 26
#N_ELECTRONS = 30
#basis = "sto-3g"
#data_path = "../../../butadien/data/"
#postfix = ""
#dim = 26
def split(x, y, ind):
return x[:ind], y[:ind], x[ind:], y[ind:]
#S, P = np.load(join(data_path, "dataset" + postfix + ".npy"))
S = np.load(join(data_path, "S" + postfix + ".npy")).reshape(-1, dim, dim)
P = np.load(join(data_path, "P" + postfix + ".npy")).reshape(-1, dim, dim)
#index = np.load(join(data_path, "index" + postfix + ".npy"))
ind = int(0.8 * len(S))
molecules = np.load(join(data_path, "molecules" + postfix + ".npy"))[ind:]
#molecules = (molecules[:ind], molecules[ind:])
s_test = S[ind:].reshape(-1, dim, dim)
p_test = P[ind:].reshape(-1, dim, dim)
#H = [hf.get_hcore(mol.get_pyscf_molecule()) for mol in molecules]
# -
# # Energies
# ## Calculate Energies
# +
from SCFInitialGuess.utilities.analysis import measure_hf_energy
energies = measure_hf_energy(p_test, molecules)
# -
# ## See distribution
# +
n_bins = 50
#offset = np.min(E)
hist, edges = np.histogram(energies, bins=n_bins, density=True)
centers = (edges[:-1] + edges[1:]) / 2
width = np.mean(np.diff(centers)) * 0.8
plt.bar(centers, hist, width=width)
plt.ylabel("Relative Frequency / 1")
plt.xlabel("HF Energy / Hartree")
#plt.savefig(figure_save_path + "EnergyDistributionDataset.pdf")
plt.show()
# -
# # Distances
# ## Calculate
# +
import scipy.spatial as sp
def distance_sum(mol):
m = sp.distance_matrix(mol.positions, mol.positions)
return np.sum(m.flatten())
distances = []
for mol in molecules:
distances.append(distance_sum(mol))
#np.array(distances)
# -
# ## Distribution
# +
n_bins = 50
#offset = np.min(E)
hist, edges = np.histogram(distances, bins=n_bins, density=True)
centers = (edges[:-1] + edges[1:]) / 2
width = np.mean(np.diff(centers)) * 0.8
plt.bar(centers, hist, width=width)
plt.ylabel("Relative Frequency / 1")
plt.xlabel("Sum of Atomic distances / Bohr")
#plt.savefig(figure_save_path + "EnergyDistributionDataset.pdf")
plt.show()
# -
# # Iterations
# +
fpath = "../../dataset/EthenT/EmbeddedBlocks/"
#f_conv = np.load(fpath + "f_conv.npy")
f_gwh = np.load(fpath + "f_gwh.npy")
f_embedded_gwh = np.load(fpath + "f_embedded_gwh.npy")
f_sad = np.load(fpath + "f_sad.npy")
f_embedded_sad = np.load(fpath + "f_embedded_sad.npy")
# +
from SCFInitialGuess.utilities.dataset import density_from_fock_batch
g_gwh = density_from_fock_batch(f_gwh, s_test, molecules)
g_embedded_gwh = density_from_fock_batch(f_embedded_gwh, s_test, molecules)
g_sad = density_from_fock_batch(f_sad, s_test, molecules)
g_embedded_sad = density_from_fock_batch(f_embedded_sad, s_test, molecules)
# +
from SCFInitialGuess.utilities.analysis import measure_iterations, mf_initializer
from SCFInitialGuess.utilities.usermessages import Messenger as msg
msg.print_level = 0
it_gwh = measure_iterations(mf_initializer, g_gwh, molecules)
it_embedded_gwh = measure_iterations(mf_initializer, g_embedded_gwh, molecules)
it_sad = measure_iterations(mf_initializer, g_sad, molecules)
it_embedded_sad = measure_iterations(mf_initializer, g_embedded_sad, molecules)
# -
# ## Vs. Energies
# +
plt.scatter(energies, it_gwh, label="GWH")
plt.scatter(energies, it_embedded_gwh, label="GWH+")
plt.scatter(energies, it_sad, label="SAD")
plt.scatter(energies, it_embedded_sad, label="SAD+")
plt.legend()
plt.show()
# -
# ## Vs Distances
# +
plt.scatter(distances, it_gwh, label="GWH")
plt.scatter(distances, it_embedded_gwh, label="GWH+")
#plt.scatter(distances, it_sad, label="SAD")
#plt.scatter(distances, it_embedded_sad, label="SAD+")
plt.legend()
plt.show()
# -
| thesis/notebooks/AtomicBlocks/Fock/DistanceFromEquilibrium.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## This Notebook - Goals - FOR EDINA
#
# **What?:** <br>
# - Introduction/tutorial to <code>ObsPy</code>, a Python framework for processing seismological data
# - Illustrates how to access, visualize and process seismic data from a large number of networks
#
# **Who?:** <br>
# - Academics in geosciences and specifically seismology
# - Geophysical Data Science course
# - Users interested in seismic data analysis
#
# **Why?:** <br>
# - Tutorial/guide to introduce digital seismic data analysis
#
# **Noteable features to exploit:** <br>
# - Use of pre-installed libraries
#
# **How?:** <br>
# - Effective use of core libraries
# - Comprehensive list of methods for seismic data analysis
# - Interactive and clear visualisations - concise explanations
# <hr>
# # Accessing seismic data using Obspy
#
# <code>ObsPy</code> is a Python package aimed at processing seismological data. It includes parsers for common file formats, clients to access data centres and seismological signal processing routines. The different clients hold station, instrument and event information, as well as waveform data.
#
# Station information encompasses an inventory of network metadata (code, description, ...), station metadata (code, latitude, longitude, elevation, start- and endtime of recordings, ...) and channel metadata (location code, latitude, longitude, elevation, dip, azimuth, instrument response, sample rate, ...). Amongst other things, it can be used to plot the location of the stations within a network or to plot the instrument response of a seismograph at a particular station.
#
# Event information is stored in a catalogue of seismic events, such as natural or human-induced earthquakes. Its metadata includes the origin (location, depth, time), magnitudes (magnitude types), picks and focal mechanisms.
#
# Waveform data is stored in a Stream object containing multiple Trace objects, each holding the actual waveform data as a time series together with the metadata of the station information (network, station, channel, location, sampling rate, ...).
#
# Available clients included in <code>ObsPy</code>:
# - [fdsn](https://www.fdsn.org/webservices/)
# - [arclink](https://docs.obspy.org/packages/obspy.clients.arclink.html)
# - [iris](https://service.iris.edu/irisws/)
# - [neic](https://github.com/usgs/edgecwb/wiki)
# - [filesystem](https://docs.obspy.org/packages/obspy.clients.filesystem.html)
# - [seedlink](https://docs.obspy.org/packages/obspy.clients.seedlink.html#module-obspy.clients.seedlink)
# - [nrl](http://ds.iris.edu/NRL/)
# - [seishub](https://docs.obspy.org/master/packages/obspy.clients.seishub.html)
# - [syngine](https://ds.iris.edu/ds/products/syngine/)
# - [earthworm](http://www.earthwormcentral.org/)
#
#
# This Notebook gives an introduction to the wide range of uses of <code>ObsPy</code>. The documentation for <code>ObsPy</code> can be found [here](https://docs.obspy.org/_downloads/ObsPyTutorial.pdf). Further links for seismologists on special use cases aimed at processing seismic data can be found [here](http://seismo-live.org/).
#
# **Notebook contents:**
# - Importing the necessary libraries
# - <a href='#cli'>List of providers included in the FDSN client</a>
# - <a href='#cat'>Accessing the earthquake catalogue</a>
# - <a href='#sta'>Accessing the station metadata in a network</a>
# - <a href='#sei'>Plotting waveform data as seismograms (using widgets)</a>
# - <a href='#fil'>Filtering seismograms (using widgets)</a>
# - <a href='#ins'>Plotting the instrument response of a seismograph at a specific station</a>
# - <a href='#arr'>Retrieving phase arrivals</a>
# - <a href='#tra'>Plotting travel time curves</a>
# - <a href='#ray'>Plotting ray paths within the Earth</a>
# +
# Import general libraries
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# Import modules from obspy
import obspy
from obspy.clients.fdsn import RoutingClient, Client
from obspy.clients.fdsn.header import URL_MAPPINGS
from obspy import read, read_inventory, UTCDateTime
from obspy.taup import TauPyModel, plot_travel_times
from obspy.taup.tau import plot_ray_paths
# Import widget library
from ipywidgets import widgets, interact, interactive
from IPython.display import display
# Hide warning messages
import warnings
warnings.filterwarnings('ignore')
# -
# <a id='cli'></a>
# ## Providers within the FDSN client
# The International Federation of Digital Seismograph Networks encompasses a large number of data centres such as USGS, NOA and IRIS. This section lists all the available data centres. In further sections, only USGS and IRIS are used, but the method of accessing the other data centres is the same, so the code can be adapted to fit them. The data centres do not necessarily offer all three types of data (waveform, station and event information), so if you are unsure which provider to use, you can use a routing client instead of explicitly specifying the client.
# +
# Print all the providers included in the FDSN client
for key in sorted(URL_MAPPINGS.keys()):
print("{0:<11} {1}".format(key, URL_MAPPINGS[key]))
# Use routing client
routing_client = RoutingClient("iris-federator")
# -
# <a id='cat'></a>
# ## Access to the earthquake catalogue
# One of the FDSN providers that offers event metadata is USGS. Its earthquake catalogue is accessible through <code>ObsPy</code>, and events can be filtered by time, location and magnitude using the <code>client.get_events()</code> method. The following example searches the catalogue for any earthquakes with a magnitude of at least 8 in December 2004. Two earthquakes are found, the locations of which can be plotted on a world map using <code>catalogue.plot()</code>. The size and colour of the data points are scaled by the magnitude and the origin depth respectively.
# +
#ONLY WORKS IN BETA - DELETE ONCE LIVE IS UPDATED
# Specify the webserver within the FDSN client
client = Client('USGS')
# Specify the time frame to search for earthquake events in
starttime = UTCDateTime('2004-12-01')
endtime = UTCDateTime('2004-12-31')
# Find earthquake in catalogue with a chosen minimum magnitude
cat = client.get_events(starttime=starttime, endtime=endtime, minmagnitude=8)
print(cat)
# Note: spatial constraints can also be specified within client.get_events()
# Plot the location of the earthquakes in the catalogue
cat.plot(title='Earthquakes in December 2004', method='cartopy')
plt.show()
# -
# <a id='sta'></a>
# ## Access to the station metadata within a seismic network
# One of the FDSN providers that offers station metadata is IRIS. Its station data is accessible through <code>ObsPy</code>, and stations can be filtered by time, location and network using the <code>client.get_stations()</code> method. IRIS will be used in all further sections where a client is called. There are many different networks available within IRIS; a full list can be found [here](http://www.fdsn.org/networks/). In this section, the Global Seismograph Network of IRIS/USGS (network code: 'IU') is used as it has a large number of stations across the world. IU is one of the most commonly used networks, but other local networks might be more suitable depending on the kind of data you are interested in. The individual networks have a collection of stations from which they acquire their data. The IU network has a list of stations; their station codes can be found [here](http://www.fdsn.org/networks/detail/IU/).
#
# The selected stations are stored in an inventory, which can be plotted to show the location of the stations around the world using <code>inventory.plot()</code>.
# +
#ONLY WORKS IN BETA - DELETE ONCE LIVE IS UPDATED
# Specify the webserver within the FDSN client
client2 = Client('IRIS')
# Specify the time frame to search for station data in
starttime = UTCDateTime('2004-12-26T00:00:00')
endtime = UTCDateTime('2004-12-27T12:00:00')
# Retrieve station data for all stations within the network 'Global Seismograph Network - IRIS/USGS' abbreviated 'IU'
ins2 = client2.get_stations(starttime=starttime, endtime=endtime,
network='IU')
# Plot the location of all the stations in the IU network
ins2.plot(method='cartopy')
# Retrieve station data for a station 'BILL' in Bilibino, Russia within the IU network
ins = client2.get_stations(starttime=starttime, endtime=endtime,
network='IU', station='BILL')
# Plot the location of the BILL station in the IU network
ins.plot(method='cartopy')
print(ins, ins2)
plt.show()
# -
# <a id='sei'></a>
# ## Plotting waveform data - seismograms
# As explained above, there are different seismic networks which are available to use and they all have a list of stations within the network. To access the actual waveform data, however, it is not enough to specify the network and the station. You also need to define which channel and location code you are interested in. The channels and location codes for the individual stations within IU can be found [here](http://service.iris.edu/irisws/fedcatalog/1/query?net=IU&format=text&includeoverlaps=true&nodata=404).
#
# Channel codes are usually made up of a set of three letters. The first letter refers to the general sampling rate and the response band of the instrument such as S for short-period instrument. The second letter specifies the type of instrument that is used, such as H for a high-gain seismometer. The third letter indicates the orientation of the sensor measurement and is an axis of a 3D orthogonal coordinate system, such as Z,N,E for Vertical, North-South, East-West. The full list of abbreviations for the channel code can be found [here](https://ds.iris.edu/ds/nodes/dmc/data/formats/seed-channel-naming/).
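# The decoding just described can be sketched as a small lookup. Note that the letter tables below are an abbreviated, illustrative subset of the SEED naming convention linked above, not the full standard:
#
```python
# Abbreviated letter tables for SEED channel codes (illustrative subset only)
BAND = {'S': 'short-period', 'L': 'long-period', 'B': 'broadband',
        'E': 'extremely short-period', 'H': 'high broadband'}
INSTRUMENT = {'H': 'high-gain seismometer', 'L': 'low-gain seismometer',
              'N': 'accelerometer'}
ORIENTATION = {'Z': 'vertical', 'N': 'north-south', 'E': 'east-west'}

def describe_channel(code):
    """Decode a three-letter channel code such as 'LHZ'."""
    band, instrument, orientation = code
    return (BAND.get(band, 'unknown band'),
            INSTRUMENT.get(instrument, 'unknown instrument'),
            ORIENTATION.get(orientation, 'unknown orientation'))

print(describe_channel('LHZ'))  # ('long-period', 'high-gain seismometer', 'vertical')
```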
#
# The location code is a 2 character code used to uniquely identify different data streams at a single station. These IDs are commonly used to logically separate multiple instruments or sensor sets at a single station.
#
# The hierarchy of station data is: network $\rightarrow$ station $\rightarrow$ location code and channel.
#
# In this section the waveform data from one of two stations, Bilibino, Russia (BILL) or Cathedral Cave, Missouri, USA (CCM), is extracted at the time of the 26.12.2004 Sumatra earthquake (from the catalogue in the previous section of this Notebook). Data for all three orientations (ZNE) from both short-period (S) and long-period (L) high-gain seismometers are retrieved and plotted as seismograms.
#
# The arrivals of different phases (waves originating from the earthquake) at the selected station can be identified on the seismogram if the reader knows the characteristics of the different elastic waves and phases (such as their polarisation). The most straightforward one to identify is the P-wave, as it is the first arrival on the seismogram and is often most visible in the vertical component. All you need is to identify the time of the first real motion that is not background noise. For BILL, this happens at about 01:10:57, indicating that the P-wave took roughly 12 minutes to travel from the earthquake origin to the station. This knowledge is often used to locate an earthquake when data is available from at least 3 stations (in this case, however, we already knew where it happened).
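# The travel-time arithmetic above can be checked with the standard library (the pick time of 01:10:57 is read off the seismogram by eye, so it is approximate):
#
```python
from datetime import datetime

origin = datetime(2004, 12, 26, 0, 58, 53, 450000)  # catalogue origin time
p_pick = datetime(2004, 12, 26, 1, 10, 57)          # approximate P arrival at BILL
travel_time = (p_pick - origin).total_seconds()
print(travel_time)  # 723.55 seconds, i.e. just over 12 minutes
```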
#
# A widget is used to make this template more concise; the station can be chosen from the dropdown list.
# +
# Specify the reference time
t = UTCDateTime('2004-12-26T00:58:53.450')
# Define function to plot the seismograms
def waveforms_for_station(station_select):
if station_select == 'Bilibino, Russia (BILL)':
# Retrieve waveform data for all three orientations (ZNE) for a short-period high-gain seismometer (SH)
s_one = client2.get_waveforms('IU', # Network
'BILL', # Station
'10', # Location
'SH?', # Channel - S=short-period, H=high-gain, ?=all directions (ZNE)
t+720, # Starttime
t + 780) # Endtime
# Retrieve waveform data for all three orientations (ZNE) for a long-period high-gain seismometer (LH)
l_one = client2.get_waveforms('IU', 'BILL', '00', 'LH?', t+690, t + 1200)
# Plot seismograms
s_one.plot(show=True, size=(900, 350))
l_one.plot(show=True, size=(900, 350))
else:
# Retrieve waveform data for all three orientations (ZNE) for a short-period high-gain seismometer (SH)
s_two = client2.get_waveforms('IU', 'CCM', '10', 'SH?', t+960, t + 1020)
# Retrieve waveform data for all three orientations (ZNE) for a long-period high-gain seismometer (LH)
l_two = client2.get_waveforms('IU', 'CCM', '00', 'LH?', t+960, t + 1230)
# Plot seismograms
s_two.plot(show=True, size=(900, 350))
l_two.plot(show=True, size=(900, 350))
# Create dropdown list widget to choose which station to retrieve data from
stations = ['Bilibino, Russia (BILL)', 'Cathedral Cave, Missouri, USA (CCM)']
interactive_plot = interactive(waveforms_for_station,
station_select=widgets.Dropdown(options=stations,
value='Bilibino, Russia (BILL)',
description='Choose station:',
style={'description_width': 'initial'},
disabled=False))
interactive_plot.layout.height = '750px'
interactive_plot
# -
# <a id='fil'></a>
# ## Filtering seismograms
# The seismograms illustrated above can also be filtered using one of the four filtering options available in <code>ObsPy</code>. Certain frequency ranges can be filtered out in order to improve the signal-to-noise ratio. Of course, for the result to be sensible, you need more insight into filtering, but this section shows how the filters can be created in a Notebook. A widget was added where the type of filter and the corresponding frequency range can be selected. The first cell filters the data from the short-period instrument, whereas the second cell filters the data from the long-period instrument.
#
# There are 4 filtering options:
# - <code>highpass</code> - filters out frequencies below a certain corner frequency (given by the maximum frequency in the widget)
# - <code>lowpass</code> - filters out frequencies above a certain corner frequency (given by the minimum frequency in the widget)
# - <code>bandpass</code> - filters out frequencies outside of a specified frequency range (given by both minimum and maximum frequency in widget)
# - <code>bandstop</code> - filters out frequencies within a specified frequency range (given by both minimum and maximum frequency in widget)
#
# More information and more advanced examples of filtering can be found [here](https://krischer.github.io/seismo_live_build/html/Signal%20Processing/filter_basics_solution_wrapper.html) and [here](https://docs.obspy.org/packages/autogen/obspy.signal.filter.html#module-obspy.signal.filter).
#
# *Note: If bandpass or bandstop is selected, make sure the maximum and minimum frequencies are not equal. If this happens with the former, it filters out all the data; if it happens with the latter, it essentially does not filter the data at all.*
# +
# Define function to filter the seismograms (short-period instrument)
def filtering(filter_select, max_freq, min_freq):
station = ['BILL', 'CCM']
shortperiod = ['s_one', 's_two']
starttime1 = [720, 960]
endtime1 = [780, 1020]
for i in range(0,2):
shortperiod[i] = client2.get_waveforms('IU', station[i], '10', 'SH?', t+starttime1[i], t + endtime1[i])
st_filt = shortperiod[i].copy()
        # Cut out frequencies below the corner frequency if highpass is selected
if filter_select == 'highpass':
st_filt.filter(filter_select, freq=max_freq,
corners=2, zerophase=True).plot(show=True, size=(900, 350))
        # Cut out frequencies above the corner frequency if lowpass is selected
elif filter_select == 'lowpass':
st_filt.filter(filter_select, freq=min_freq,
corners=2, zerophase=True).plot(show=True, size=(900, 350))
# Cut out frequencies outside or within a frequency range if bandpass or bandstop is selected respectively
else:
st_filt.filter(filter_select, freqmin=min_freq, freqmax=max_freq,
corners=2, zerophase=True).plot(show=True, size=(900, 350))
# Create dropdown list widget to choose which option to use
filters = ['highpass', 'lowpass', 'bandpass', 'bandstop']
interactive_plot = interactive(filtering,
# Dropdown widget for the filter option
filter_select = widgets.Dropdown(options=filters, value='highpass',
description='Type of filter',
style={'description_width': 'initial'},
disabled=False),
# Slider widget for maximum frequency
max_freq = widgets.FloatSlider(value=1, min=0.2, max=10, step=0.2,
description='Max frequency',
style={'description_width': 'initial'},
disabled=False),
# Slider widget for minimum frequency
min_freq = widgets.FloatSlider(value=1, min=0.2, max=10, step=0.2,
description='Min frequency',
style={'description_width': 'initial'},
disabled=False))
interactive_plot.layout.height = '800px'
interactive_plot
# +
# Define function to filter the seismograms (long-period instrument)
def filtering(filter_select, max_freq, min_freq):
station = ['BILL', 'CCM']
longperiod = ['l_one', 'l_two']
starttime2 = [690, 960]
endtime2 = [1200, 1230]
for i in range(0,2):
longperiod[i] = client2.get_waveforms('IU', station[i], '00', 'LH?', t+starttime2[i], t + endtime2[i])
lt_filt = longperiod[i].copy()
        # Cut out frequencies below the corner frequency if highpass is selected
if filter_select == 'highpass':
lt_filt.filter(filter_select, freq=max_freq,
corners=2, zerophase=True).plot(show=True, size=(900, 350))
        # Cut out frequencies above the corner frequency if lowpass is selected
elif filter_select == 'lowpass':
lt_filt.filter(filter_select, freq=min_freq,
corners=2, zerophase=True).plot(show=True, size=(900, 350))
# Cut out frequencies outside or within a frequency range if bandpass or bandstop is selected respectively
else:
lt_filt.filter(filter_select, freqmin=min_freq, freqmax=max_freq,
corners=2, zerophase=True).plot(show=True, size=(900, 350))
# Create dropdown list widget to choose which option to use
filters = ['highpass', 'lowpass', 'bandpass', 'bandstop']
interactive_plot = interactive(filtering,
# Dropdown widget for the filter option
filter_select=widgets.Dropdown(options=filters, value='highpass',
description='Type of filter',
style={'description_width': 'initial'},
disabled=False),
# Slider widget for maximum frequency
max_freq = widgets.FloatSlider(value=0.1, min=0.001, max=0.4, step=0.001,
description='Max frequency',
style={'description_width': 'initial'},
disabled=False),
# Slider widget for minimum frequency
min_freq = widgets.FloatSlider(value=0.1, min=0.001, max=0.4, step=0.001,
description='Min frequency',
style={'description_width': 'initial'},
disabled=False))
interactive_plot.layout.height = '800px'
interactive_plot
# -
# <a id='ins'></a>
# ## Instrument response
# The instrument response describes how incoming data (in whatever quantity the instrument measures) is transformed into what is finally stored in a file. Different types of seismometers respond differently to waves of a certain frequency. Short-period seismometers are more sensitive to high-frequency waves, so the amplitude seen on the seismograms will be considerably larger and more pronounced for high-frequency waves (such as P-waves) than for low-frequency waves (such as surface waves). Long-period seismometers pick up low-frequency waves (such as surface waves) more than high-frequency waves (such as P-waves). This is all due to the physics of simple harmonic motion and resonance. Therefore, it is often useful to plot the instrument response (amplitude and phase delay) against the frequency spectrum.
#
# The Nyquist frequency $f_N=1/(2\Delta t)$, where $\Delta t$ is the sampling interval of the instrument, is shown as the dashed vertical line. The plot does not show frequencies higher than the Nyquist frequency, as these frequencies do not exist in the data and thus have no physical meaning.
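# The quoted relation between sampling interval and Nyquist frequency is simple to verify numerically:
#
```python
def nyquist_frequency(sampling_interval):
    """Return f_N = 1 / (2 * dt) for a sampling interval dt in seconds."""
    return 1.0 / (2.0 * sampling_interval)

# A 200 Hz sampling rate (dt = 0.005 s) gives a 100 Hz Nyquist frequency
print(nyquist_frequency(0.005))  # 100.0
```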
#
# This section shows the instrument response of a vertically oriented, extremely short-period high-gain seismometer from a station in Jochberg, Germany in the BayernNetz network.
# +
# Set figure size and axes
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(10,10))
# Customize plot
fig.suptitle('Instrument response')
ax1.set_ylabel('Amplitude')
ax2.set_ylabel('Phase (deg)')
ax2.set_xlabel('Frequency (Hz)')
# Plot instrument response
inv = read_inventory()
print(inv[1], '\n', inv[1][0][0].response)
inv.plot_response(0.001, station='RJOB', channel='EHZ', output='VEL', plot_degrees=True, axes=[ax1, ax2])
plt.show()
# -
# <a id='arr'></a>
# ## Phase arrivals and travel time curves
# After an earthquake occurs, stations around the world pick up the seismic signals after a certain time delay. This delay depends on how far away the station is located from the earthquake hypocentre (origin within the Earth) and on the type of wave (phase) it is.
#
# There are two kinds of waves: body and surface waves. The most basic body waves are P- and S-waves, and the most basic surface waves are Love and Rayleigh waves. They all have characteristic wave speeds based on the material they are propagating in; however, their relative speeds in the same medium stay the same, with P-waves being the fastest, then S-waves, followed by Love and finally Rayleigh waves. However, due to internal diffractions, reflections and refractions within the Earth, these aren't the only phases seen on the seismogram. There are all kinds of combinations of different wave types, such as 'PS' (a P-wave that converted into an S-wave upon reflection), reflected phases like 'PP' (a P-wave reflected once off the surface), or refracted phases such as 'PKIKP' (a P-wave refracted through the inner and outer core). This leads to a very complex order of phase arrivals. These can be visualised in travel time curves, where the travel time is plotted against the source-receiver distance using the <code>plot_travel_times()</code> method of the <code>obspy.taup</code> module. The module encompasses several different models of the Earth's structure, so the travel time mastercurve can be plotted for each model. These plots are often used to help determine the location of an earthquake, since the travel time differences between phases are unique to the source-receiver distance. The travel times for selected phases originating from a pre-determined source depth and source-receiver distance can also be printed explicitly using <code>get_travel_times()</code>.
#
# The full list of 1D velocity models within the <code>taup</code> module can be found [here](http://docs.obspy.org/packages/obspy.taup.html#module-obspy.taup).
# +
# Select velocity model
model = TauPyModel(model='iasp91')
# Retrieve travel times for a list of phases - 2 options illustrated
arrivals = model.get_travel_times(source_depth_in_km = 30, # Depth the earthquake originated from
                                  distance_in_degree = 30, # Distance from the epicentre of the earthquake
phase_list = ['P', 'PS', 'S', 'PP', 'SS']) # List of phases
arrivals2 = model.get_ray_paths(source_depth_in_km = 30,
distance_in_degree = 30,
phase_list = ['P', 'PS', 'S', 'PP', 'SS'])
# Print arrival times of the phases
print(arrivals)
print('\n', arrivals2)
# Print ray characteristics (ray parameter, arrival time, angle of incidence)
arr = arrivals[0]
print('\nRay parameter:', arr.ray_param, '\nArrival time:', arr.time, '\nAngle of incidence:', arr.incident_angle)
# -
# <a id='tra'></a>
# ## Travel time curves
# +
# Define function to plot travel time curves for a list of phases against distance (default model = 'iasp91')
def travel_times_mastercurve(model_select):
# Set figure size and axes
fig, ax = plt.subplots(figsize=(10, 10))
# Customize plot
ax.grid()
ax.set_title('Phase travel time curves through the Earth')
# Plot travel time curves for a list of phases against distance (default model = 'iasp91')
plot_travel_times(source_depth=20, npoints = 80, model = model_select,
phase_list=['P', 'S', 'PP', 'SS', 'Pdiff', 'PKIKP', 'PcP'],
ax=ax, fig=fig, legend=True, plot_all=False)
# Save figure
#fig.savefig('PhaseTravelTimeCurves.png')
models = ['iasp91', '1066a', '1066b', 'ak135', 'ak135f_no_mud', 'herrin', 'jb', 'prem', 'pwdk', 'sp6']
# Create interactive widget to choose model type
interactive_plot = interactive(travel_times_mastercurve,
# Dropdown widget for the model option
model_select= widgets.Dropdown(options=models, value='iasp91',
description='Type of model',
style={'description_width': 'initial'},
disabled=False))
interactive_plot.layout.height = '700px'
interactive_plot
# -
# <a id='ray'></a>
# ## Ray paths
# As mentioned above, there are many different phases due to internal reflections, refractions and diffractions in the interior of the Earth. On a global scale, the ray paths of these phases can be visualized using <code>plot_ray_paths()</code> from the <code>obspy.taup.tau</code> module, or <code>plot_rays()</code> after specifying the model, source depth and source-receiver distance.
# The first cell shows the expected ray paths to 12 equally spaced receivers (13th receiver is located over the first one) from an earthquake at 100km depth. The second cell shows the ray paths to a receiver 120° away from the source which is located at a depth of 700km. Both use the same list of phases ('P', 'S', 'Pdiff', 'PKIKP', 'PcP').
# +
# Select velocity model
model = TauPyModel(model='iasp91')
# Set figure size and axes
fig, ax = plt.subplots(subplot_kw=dict(polar=True), figsize=(10,10))
# Customize plot
ax.set_title('Ray paths originating from a source at a depth of 100km \nthrough the inside of the Earth')
ax.text(0, 0, 'Solid\ninner\ncore',
horizontalalignment='center', verticalalignment='center',
bbox=dict(facecolor='white', edgecolor='none', alpha=0.7))
ocr = (model.model.radius_of_planet -
(model.model.s_mod.v_mod.iocb_depth +
model.model.s_mod.v_mod.cmb_depth) / 2)
ax.text(np.deg2rad(180), ocr, 'Fluid outer core',
horizontalalignment='center',
bbox=dict(facecolor='white', edgecolor='none', alpha=0.7))
mr = model.model.radius_of_planet - model.model.s_mod.v_mod.cmb_depth / 2
ax.text(np.deg2rad(180), mr, 'Solid mantle',
horizontalalignment='center',
bbox=dict(facecolor='white', edgecolor='none', alpha=0.7))
# Plot ray paths in a spherical coordinate system
ax = plot_ray_paths(source_depth = 100, phase_list = ['P', 'S', 'Pdiff', 'PKIKP', 'PcP'],
npoints =13, legend = True, fig = fig, ax = ax)
# +
# Set figure size and axes
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1,1,1, polar=True)
# Customize plot
ax.set_title('Ray paths originating from a source at a depth of 700km \nto a receiver at a distance of 120° \nthrough the inside of the Earth')
ax.text(0, 0, 'Solid\ninner\ncore',
horizontalalignment='center', verticalalignment='center',
bbox=dict(facecolor='white', edgecolor='none', alpha=0.7))
ocr = (model.model.radius_of_planet -
(model.model.s_mod.v_mod.iocb_depth +
model.model.s_mod.v_mod.cmb_depth) / 2)
ax.text(np.deg2rad(180), ocr, 'Fluid outer core',
horizontalalignment='center',
bbox=dict(facecolor='white', edgecolor='none', alpha=0.7))
mr = model.model.radius_of_planet - model.model.s_mod.v_mod.cmb_depth / 2
ax.text(np.deg2rad(180), mr, 'Solid mantle',
horizontalalignment='center',
bbox=dict(facecolor='white', edgecolor='none', alpha=0.7))
# Plot ray paths in a spherical coordinate system
arrivals = model.get_ray_paths(source_depth_in_km=700, distance_in_degree=120,
phase_list = ['P', 'S', 'Pdiff', 'PKIKP', 'PcP'])
arrivals.plot_rays(legend=True, fig=fig, ax=ax)
plt.show()
| GeneralExemplars/GeoExemplars/ObsPy_Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (herschelhelp_internal)
# language: python
# name: helpint
# ---
# # SSDF master catalogue
# ## Preparation of VHS data
#
# VISTA telescope/VHS catalogue: the catalogue comes from `dmu0_VHS`.
#
# In the catalogue, we keep:
#
# - The identifier (it's unique in the catalogue);
# - The position;
# - The stellarity;
# - The magnitude for each band;
# - The Petrosian magnitude, to be used as total magnitude (no “auto” magnitude is provided).
#
# We don't know when the maps have been observed. We will use the year of the reference paper.
#
# *Note: on SSDF, the VHS catalogue does not contain Y data.*
from herschelhelp_internal import git_version
print("This notebook was run with herschelhelp_internal version: \n{}".format(git_version()))
# +
# %matplotlib inline
# #%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
from collections import OrderedDict
import os
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Column, Table
import numpy as np
from herschelhelp_internal.flagging import gaia_flag_column
from herschelhelp_internal.masterlist import nb_astcor_diag_plot, remove_duplicates
from herschelhelp_internal.utils import astrometric_correction, mag_to_flux
# +
OUT_DIR = os.environ.get('TMP_DIR', "./data_tmp")
try:
os.makedirs(OUT_DIR)
except FileExistsError:
pass
RA_COL = "vhs_ra"
DEC_COL = "vhs_dec"
# -
# ## I - Column selection
# +
# Bands: J, H, Ks
imported_columns = OrderedDict({
'SOURCEID': "vhs_id",
'ra': "vhs_ra",
'dec': "vhs_dec",
'PSTAR': "vhs_stellarity",
'JPETROMAG': "m_vista_j",
'JPETROMAGERR': "merr_vista_j",
'JAPERMAG3': "m_ap_vista_j",
'JAPERMAG3ERR': "merr_ap_vista_j",
'HPETROMAG': "m_vista_h",
'HPETROMAGERR': "merr_vista_h",
'HAPERMAG3': "m_ap_vista_h",
'HAPERMAG3ERR': "merr_ap_vista_h",
'KSPETROMAG': "m_vista_k",
'KSPETROMAGERR': "merr_vista_k",
'KSAPERMAG3': "m_ap_vista_k",
'KSAPERMAG3ERR': "merr_ap_vista_k",
})
catalogue = Table.read("../../dmu0/dmu0_VISTA-VHS/data/VHS_SSDF.fits")[list(imported_columns)]
for column in imported_columns:
catalogue[column].name = imported_columns[column]
epoch = 2011
# Clean table metadata
catalogue.meta = None
# -
# Conversion from Vega magnitudes to AB is done using values from
# http://casu.ast.cam.ac.uk/surveys-projects/vista/technical/filter-set
vega_to_ab = {
"z": 0.521,
"y": 0.618,
"j": 0.937,
"h": 1.384,
"k": 1.839
}
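# For example, applying the J-band offset from the table above to a Vega magnitude of 15.000:
```python
vega_to_ab_j = 0.937   # J-band Vega-to-AB offset from the table above
m_vega_j = 15.000
m_ab_j = m_vega_j + vega_to_ab_j  # ≈ 15.937 in the AB system
print(m_ab_j)
```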
# +
# Converting from Vega to AB and adding flux and band-flag columns
for col in catalogue.colnames:
if col.startswith('m_'):
errcol = "merr{}".format(col[1:])
# Some object have a magnitude to 0, we suppose this means missing value
catalogue[col][catalogue[col] <= 0] = np.nan
catalogue[errcol][catalogue[errcol] <= 0] = np.nan
# Convert magnitude from Vega to AB
catalogue[col] += vega_to_ab[col[-1]]
flux, error = mag_to_flux(np.array(catalogue[col]), np.array(catalogue[errcol]))
# Fluxes are added in µJy
catalogue.add_column(Column(flux * 1.e6, name="f{}".format(col[1:])))
catalogue.add_column(Column(error * 1.e6, name="f{}".format(errcol[1:])))
# Band-flag column
if "ap" not in col:
catalogue.add_column(Column(np.zeros(len(catalogue), dtype=bool), name="flag{}".format(col[1:])))
# TODO: Set to True the flag columns for fluxes that should not be used for SED fitting.
# -
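# The <code>mag_to_flux</code> helper from <code>herschelhelp_internal</code> is not shown in this notebook; assuming the standard AB zero point, it should be equivalent to the sketch below, which returns flux densities in Jy (hence the <code>1.e6</code> factor used above to obtain µJy):
```python
def ab_mag_to_flux_jy(mag):
    # AB convention: a magnitude of 8.9 corresponds to a flux density of 1 Jy
    return 10 ** (-0.4 * (mag - 8.9))

print(ab_mag_to_flux_jy(8.9))   # ~1 Jy
print(ab_mag_to_flux_jy(23.9))  # ~1e-6 Jy, i.e. 1 µJy
```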
for col in catalogue.colnames:
if '_vista_k' in col:
catalogue.rename_column(col, col.replace('_vista_k', '_vista_ks'))
catalogue[:10].show_in_notebook()
# ## II - Removal of duplicated sources
# We remove duplicated objects from the input catalogues.
# +
SORT_COLS = ['merr_ap_vista_h', 'merr_ap_vista_j', 'merr_ap_vista_ks']
FLAG_NAME = 'vhs_flag_cleaned'
nb_orig_sources = len(catalogue)
catalogue = remove_duplicates(catalogue, RA_COL, DEC_COL, sort_col=SORT_COLS,flag_name=FLAG_NAME)
nb_sources = len(catalogue)
print("The initial catalogue had {} sources.".format(nb_orig_sources))
print("The cleaned catalogue has {} sources ({} removed).".format(nb_sources, nb_orig_sources - nb_sources))
print("The cleaned catalogue has {} sources flagged as having been cleaned".format(np.sum(catalogue[FLAG_NAME])))
# -
# ## III - Astrometry correction
#
# We match the astrometry to that of Gaia. We limit the Gaia catalogue to sources with a g-band flux between the 30th and the 70th percentile. Some quick tests show that this gives the lowest dispersion in the results.
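# The percentile cut itself is presumably applied inside the helper functions; on hypothetical flux values it amounts to:
```python
import numpy as np

# Hypothetical g-band flux values; the real cut is applied to the Gaia catalogue
g_flux = np.array([0.1, 0.5, 1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0])
lo, hi = np.percentile(g_flux, [30, 70])
mask = (g_flux >= lo) & (g_flux <= hi)
print(g_flux[mask])  # [2. 3. 5. 8.]
```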
gaia = Table.read("../../dmu0/dmu0_GAIA/data/GAIA_SSDF.fits")
gaia_coords = SkyCoord(gaia['ra'], gaia['dec'])
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec, near_ra0=True)
# +
delta_ra, delta_dec = astrometric_correction(
SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]),
gaia_coords, near_ra0=True
)
print("RA correction: {}".format(delta_ra))
print("Dec correction: {}".format(delta_dec))
# -
catalogue[RA_COL] += delta_ra.to(u.deg)
catalogue[DEC_COL] += delta_dec.to(u.deg)
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec, near_ra0=True)
# ## IV - Flagging Gaia objects
catalogue.add_column(
gaia_flag_column(SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]), epoch, gaia)
)
# +
GAIA_FLAG_NAME = "vhs_flag_gaia"
catalogue['flag_gaia'].name = GAIA_FLAG_NAME
print("{} sources flagged.".format(np.sum(catalogue[GAIA_FLAG_NAME] > 0)))
# -
# ## V - Flagging objects near bright stars
# ## VI - Saving to disk
catalogue.write("{}/VISTA-VHS.fits".format(OUT_DIR), overwrite=True)
| dmu1/dmu1_ml_SSDF/1.2_VISTA-VHS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from plot_utils import read_Noise2Seg_results, fraction_to_abs, cm2inch
from matplotlib import pyplot as plt
plt.rc('text', usetex=True)
# -
# # Flywing n20: AP scores on validation data
alpha0_5_n20 = read_Noise2Seg_results('alpha0.5', 'flywing_n20', measure='AP', runs=[1,2,3,4,5],
fractions=[0.125, 0.25, 0.5, 1.0, 2.0], score_type = '',
path_str='/home/tibuch/Noise2Seg/experiments/{}_{}_run{}/fraction_{}/{}scores.csv')
# +
baseline_flywing_n20 = read_Noise2Seg_results('fin', 'flywing_n20', measure='AP', runs=[1,2,3,4,5],
fractions=[0.125, 0.25, 0.5, 1.0, 2.0], score_type = '',
path_str='/home/tibuch/Noise2Seg/experiments/{}_{}_run{}/fraction_{}/{}scores.csv')
sequential_flywing_n20 = read_Noise2Seg_results('finSeq', 'flywing_n20', measure='AP', runs=[1,2,3,4,5],
fractions=[0.125, 0.25, 0.5, 1.0, 2.0], score_type = '',
path_str='/home/tibuch/Noise2Seg/experiments/{}_{}_run{}/fraction_{}/{}scores.csv')
# -
plt.rc('font', family = 'serif', size = 16)
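# <code>cm2inch</code> comes from the local <code>plot_utils</code> module; a plausible minimal implementation (an assumption, since the module is not shown here) simply divides by 2.54 cm per inch:
```python
def cm2inch(*sizes_cm):
    """Convert one or more lengths in centimetres to inches."""
    return tuple(s / 2.54 for s in sizes_cm)

print(cm2inch(12.2 / 2, 3))  # ≈ (2.40, 1.18) inches
```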
# +
fig = plt.figure(figsize=cm2inch(12.2/2,3)) # 12.2cm is the text-widht of the MICCAI template
plt.rcParams['axes.axisbelow'] = True
plt.plot(fraction_to_abs(alpha0_5_n20[:, 0], max_num_imgs = 1428),
alpha0_5_n20[:, 1],
color = '#8F89B4', alpha = 1, linewidth=2, label = r'\textsc{DenoiSeg} ($\alpha = 0.5$)')
plt.fill_between(fraction_to_abs(alpha0_5_n20[:, 0], max_num_imgs = 1428),
y1 = alpha0_5_n20[:, 1] + alpha0_5_n20[:, 2],
y2 = alpha0_5_n20[:, 1] - alpha0_5_n20[:, 2],
color = '#8F89B4', alpha = 0.5)
plt.plot(fraction_to_abs(sequential_flywing_n20[:, 0], max_num_imgs = 1428),
sequential_flywing_n20[:, 1],
color = '#526B34', alpha = 1, linewidth=2, label = r'Sequential Baseline')
plt.fill_between(fraction_to_abs(sequential_flywing_n20[:, 0], max_num_imgs = 1428),
y1 = sequential_flywing_n20[:, 1] + sequential_flywing_n20[:, 2],
y2 = sequential_flywing_n20[:, 1] - sequential_flywing_n20[:, 2],
color = '#526B34', alpha = 0.5)
plt.plot(fraction_to_abs(baseline_flywing_n20[:, 0], max_num_imgs = 1428),
baseline_flywing_n20[:, 1],
color = '#6D3B2B', alpha = 1, linewidth=2, label = r'Baseline ($\alpha = 0$)')
plt.fill_between(fraction_to_abs(baseline_flywing_n20[:, 0], max_num_imgs = 1428),
y1 = baseline_flywing_n20[:, 1] + baseline_flywing_n20[:, 2],
y2 = baseline_flywing_n20[:, 1] - baseline_flywing_n20[:, 2],
color = '#6D3B2B', alpha = 0.25)
plt.semilogx()
leg = plt.legend(loc = 'lower right')
for legobj in leg.legendHandles:
legobj.set_linewidth(3.0)
plt.ylabel(r'\textbf{AP}')
plt.xlabel(r'\textbf{Number of Annotated Training Images}')
plt.grid(axis='y')
plt.xticks(ticks=fraction_to_abs(baseline_flywing_n20[:, 0], max_num_imgs = 1428),
labels=fraction_to_abs(baseline_flywing_n20[:, 0], max_num_imgs = 1428).astype(int),
rotation=45)
plt.minorticks_off()
plt.yticks(rotation=45)
plt.xlim([1.86, 31])
plt.tight_layout();
plt.savefig('Flywing_AP_n20_area.pdf', pad_inches=0.0);
plt.savefig('Flywing_AP_n20_area.svg', pad_inches=0.0);
| scripts/reproducibility/figures/Supplement-Figure Flywing AP n20.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### This notebook gives a 30 second introduction to the Vortexa SDK
# First let's import our requirements
from datetime import datetime
import vortexasdk as v
# Now let's load a DataFrame containing a breakdown of vessels in congestion.
# You'll need to enter your Vortexa API key when prompted.
df = v.VoyagesCongestionBreakdown()\
.search(
time_min=datetime(2021, 8, 1, 0),
time_max=datetime(2021, 8, 1, 23))\
.to_df()
df.head()
# That's it! You've successfully loaded data using the Vortexa SDK. Check out https://vortechsa.github.io/python-sdk/ for more examples
| docs/examples/try_me_out/voyages_congestion_breakdown.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="p8-I7pnopY9L"
from sklearn.datasets import load_iris, fetch_openml
from sklearn.preprocessing import MinMaxScaler, normalize
from sklearn.model_selection import train_test_split
from scipy.spatial.distance import minkowski, cosine
from sklearn.metrics import accuracy_score
from collections import Counter
import numpy as np
import math
import random
# + colab={} colab_type="code" id="MUEMnD1lpY9Z"
X, Y = load_iris(return_X_y=True)
X = MinMaxScaler().fit_transform(X)
# + colab={} colab_type="code" id="9qcHMOROpY9i"
class Neuron:
def __init__(self, size, x, y):
self.weight = np.array([random.uniform(-1, 1) for i in range(size)]).reshape(1,-1)
self.x = x
self.y = y
self.label = None
self.wins = Counter()
self.active = True
def predict(self, data):
return cosine(data, self.weight)
class SOM:
def __init__(self, rows, columns, size):
self.network = list()
for i in range(rows):
for j in range(columns):
self.network.append(Neuron(size=size, x=i, y=j))
def fit(self, X, epochs, radius, alpha0):
alpha = alpha0
for t in range(epochs):
D = np.copy(X)
np.random.shuffle(D)
for data in D:
l = [neuron.predict(data) for neuron in self.network]
# predict returns a cosine distance, so the closest neuron wins
winner = self.network[np.argmin(l)]
for neuron in self.network:
if winner.x-radius < neuron.x < winner.x+radius and winner.y-radius < neuron.y < winner.y+radius:
#p = neuron.weight+alpha*data
#neuron.weight = p/np.linalg.norm(p)
#neuron.weight += normalize(alpha*(data-neuron.weight), norm="max")
neuron.weight += alpha*(data-neuron.weight)
radius -= 1
if radius < 0:
radius = 0
alpha = alpha0 / (1+(t/len(D)))
def neuron_labeling(self, X, Y):
for neuron in self.network:
l = [neuron.predict(x) for x in X]
# label each neuron with the class of its closest training instance
neuron.label = Y[np.argmin(l)]
def mode_labeling(self, X, Y):
for i, instance in enumerate(X):
active_neurons = [n for n in self.network if n.active]
l = [n.predict(instance) for n in active_neurons]
# index into the filtered list, not self.network, and take the closest neuron
winner = active_neurons[np.argmin(l)]
winner.wins[Y[i]] += 1
winner.label = winner.wins.most_common()[0][0]
if len(winner.wins.keys()) > 1:
# deactivate neurons that win for more than one class
winner.active = False
def predict(self, X):
output = np.empty(X.shape[0], dtype=object) # labels may be strings (e.g. MNIST)
for i, instance in enumerate(X):
active_neurons = [n for n in self.network if n.active]
l = [n.predict(instance) for n in active_neurons]
output[i] = active_neurons[np.argmin(l)].label
return output
# + colab={} colab_type="code" id="FVTVAH-mpY9p"
X_train, X_test, Y_train, Y_test= train_test_split(X, Y, test_size=0.33, random_state=0, stratify=Y)
# + colab={} colab_type="code" id="_Y4kuA-2pY9u"
som = SOM(12, 8, 4)
som.fit(X_train, 100, 4, 0.5)
som.mode_labeling(X_train, Y_train)
Y_predict = som.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="I6Jh_xncpY90" outputId="971becf7-c26b-4160-af65-196b29909963"
np.sum(Y_predict == Y_test)/Y_test.shape[0]
# + colab={} colab_type="code" id="HbQeUhYlpY97"
# MNIST
X, Y = fetch_openml("mnist_784", return_X_y=True)
X = MinMaxScaler().fit_transform(X)
# + colab={} colab_type="code" id="QOX8zP7kpY9_"
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=10000, random_state=0, stratify=Y)
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="Iu3Z-a5cpY-F" outputId="f3542f3a-ff29-43e5-d37a-72cfc0a036f1"
som = SOM(12, 8, 784)
som.fit(X_train, 10, 4, 0.5)
som.mode_labeling(X_train, Y_train)
Y_predict = som.predict(X_test)
print(accuracy_score(Y_predict, Y_test, normalize=True))
som = SOM(12, 8, 784)
som.fit(X_train, 10, 4, 0.5)
som.neuron_labeling(X_train, Y_train)
Y_predict = som.predict(X_test)
print(accuracy_score(Y_predict, Y_test, normalize=True))
# + [markdown] colab_type="text" id="UVzQz6rbQGEN"
# The results with Iris give 25% accuracy
| SOM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from math import *
π = np.pi
import scipy.special as ss
import scipy.integrate as sint
import mpmath
import matplotlib.pyplot as plt
import pandas as pd
import os
from common import *
# %config InlineBackend.figure_formats = ['svg']
# +
plt.figure(figsize=(6,5))
c = np.logspace(log2(0.1), log2(10), 200, base=2)
for a in [0.6, 0.5, 0.4, 0.1]:
plt.plot(c, fpt_2d_poisson_tau(b=np.inf, c=c, a=a), label="$a={}$".format(a))
plt.ylim((0,40))
plt.grid()
plt.legend()
plt.xlabel("$c$")
plt.ylabel(r"$W_\operatorname{poisson}(a,c)$")
plt.savefig("th-curves/fpt_2d_poisson_tau_binf.pdf")
# +
plt.figure(figsize=(6,5))
c = np.logspace(log2(0.1), log2(10), 200, base=2)
for a in [0.6, 0.5, 0.4, 0.1]:
plt.plot(c, fpt_2d_poisson_tau(b=3, c=c, a=a), label="$a={}$".format(a))
plt.ylim((0,20))
plt.grid()
plt.legend()
plt.xlabel("$c$")
plt.ylabel(r"$W_\operatorname{poisson}(a,b=3,c)$")
plt.savefig("th-curves/fpt_2d_poisson_tau_b3.pdf")
# +
plt.figure(figsize=(6,5))
c = np.logspace(log2(0.1), log2(25), 200, base=2)
for b in [7, 6, 5.2561, 5, 4]:
plt.plot(c, fpt_2d_poisson_tau(b, c, a=0.5), label="$b={}$".format(b))
plt.ylim((0,5))
plt.grid()
plt.legend()
plt.xlabel("$c$")
plt.ylabel(r"$W_\operatorname{poisson}(a=0.5,b,c)$")
plt.savefig("th-curves/fpt_2d_poisson_tau_a05.pdf")
# +
plt.figure(figsize=(6,5))
c = np.logspace(log2(0.08), log2(5), 200, base=2)
for a in [0.6, 0.5, 0.4, 0.1]:
plt.plot(c, fpt_2d_periodical_tau(b=np.inf, c=c, a=a), label="$a={}$".format(a))
plt.ylim((0,30))
plt.grid()
plt.legend()
plt.xlabel("$c$")
plt.ylabel(r"$W_\operatorname{per}(a,c)$")
plt.savefig("th-curves/fpt_2d_per_tau_binf.pdf")
# +
plt.figure(figsize=(6,5))
c = np.logspace(log2(0.1), log2(10), 200, base=2)
for a in [0.3, 0.5, 0.9]:
plt.plot(c, fpt_2d_periodical_tau(b=4, c=c, a=a, use_cache="th-cache-2d-periodical/"), label="$a={}$".format(a))
plt.ylim((0,10))
plt.grid()
plt.legend()
plt.xlabel("$c$")
plt.ylabel(r"$W_\operatorname{per}(a,b=4,c)$")
plt.savefig("th-curves/fpt_2d_per_tau_b4.pdf")
# +
plt.figure(figsize=(6,5))
c = np.logspace(log2(0.1), log2(20), 200, base=2)
for b in [8, 6, 5, 4]:
plt.plot(c, fpt_2d_periodical_tau(b, c, a=0.5, use_cache="th-cache-2d-periodical/"), label="$b={}$".format(b))
plt.ylim((0,4))
plt.grid()
plt.legend()
plt.xlabel("$c$")
plt.ylabel(r"$W_\operatorname{per}(a=0.5,b,c)$")
plt.savefig("th-curves/fpt_2d_per_tau_a05.pdf")
# -
# ## Generation of MFPT curves with 2D target and periodical resetting
# +
plt.figure(figsize=(10,8))
a = 0.9
c = np.logspace(-8, 3, 100, base=2)
plt.semilogy(c, fpt_2d_periodical_tau(np.inf,c,a), label="full integral (46), num and denom. regularized, with cutoffs")
c = np.linspace(2, 8, 100)
tau_large_c = np.sqrt(π/a)*(1-a)/c * np.exp(c**2*(1-a)**2)
plt.semilogy(c, tau_large_c, '--', label=r"high $c$ behavior ($\exp(c^2)/c$)")
c = np.logspace(-8, 0, 100, base=2)
tau_small_c = log(1/a) / c**2 / np.log(1/a/c)
plt.semilogy(c, tau_small_c, '--', label=r"small $c$ behavior ($1/\ln(1/ac)$)")
plt.legend()
plt.ylabel(r"$\tau$")
plt.xlabel(r"$c$")
plt.title(r"Theoretical MFPT, 2D periodical resetting, $b=\infty$, $a={}$".format(a))
plt.grid()
plt.savefig("th-curves/w_per_binf_a{}.pdf".format(a), bbox_inches='tight')
# +
plt.figure(figsize=(10,8))
a = 0.5
b = 5
path = "th-cache-2d-periodical/a{:.4f}_b{:.4f}".format(a,b)
if os.path.exists(path):
df = pd.read_csv(path, sep=',')
c = df['c']
tau_full_reg = df['tau']
else:
c = np.logspace(-5, 5, 60, base=2)
tau_full_reg = fpt_2d_periodical_tau(b,c,a)
df = pd.DataFrame({'c':c,'tau':tau_full_reg})
df.to_csv(path, sep=',', index=False, float_format='%.6e')
plt.plot(c, tau_full_reg, label="full integral (62), denom.+num. regularized", color='red')
c = np.linspace(2, 42, 100)
gm1 = sint.quad( lambda z: z * exp(-b**2/2-z**2/2) * ss.i0(b*z), 0, a*b, epsrel=1e-8, limit=1000 )[0]
tau_large_c = (1-gm1)/gm1 / c**2
plt.plot(c, tau_large_c, '--', label=r"large $c$ behavior ($\propto 1/c^2$)")
c = np.logspace(-6, 0, 100, base=2)
A1 = sint.quad( lambda z: z * exp(-b**2/2-z**2/2) * ss.i0(b*z) * log(z/a/b), a*b, 10+b, epsrel=1e-8, limit=1000 )[0]
tau_small_c = A1 / c**2 / np.log(1/a/c)
plt.plot(c, tau_small_c, '--', label=r"small $c$ behavior ($\propto 1/c^2\ln(1/ac)$)")
plt.yscale('log')
plt.xscale('log')
plt.ylabel(r"$\tau$")
plt.xlabel(r"$c$")
plt.title("Theoretical MFPT, 2D periodical resetting, $b={}$, $a={}$".format(b,a))
plt.legend()
plt.grid()
plt.savefig("th-curves/w_per_b{}_a{}.pdf".format(b,a), bbox_inches='tight')
| langevin-mfpt/th-curves.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import gym
env = gym.make('CartPole-v0')
env.reset()
for _ in range(1000):
env.render()
print(env.step(env.action_space.sample()))
# +
from collections import namedtuple
import random
Transition = namedtuple('Transition',
('state', 'action', 'next_state', 'reward', 'done'))
class ReplayMemory(object):
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.position = 0
def push(self, *args):
"""Saves a transition."""
if len(self.memory) < self.capacity:
self.memory.append(None)
args = [torch.tensor(x) for x in args]
self.memory[self.position] = Transition(*args)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
# +
import torch.nn as nn
import torch.nn.functional as F
import torch
class DQN(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(4, 4)
self.fc2 = nn.Linear(4, 2)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
# +
from tqdm import tqdm
import numpy as np
import torch.optim as optim
memory_length = 1000
policy_net = DQN()
target_net = DQN()
optimizer = optim.Adam(policy_net.parameters(), lr=0.01)
# Copy the policy network's weights into the target network
target_net.load_state_dict(policy_net.state_dict())
memory = ReplayMemory(10000)
n_episode = 1000
render_step = 1
BATCH_SIZE = 128
GAMMA = 0.99
EPS_START = 0.9
EPS_END = 0.1
EPS_DECAY = 20000
TARGET_UPDATE = 10
def get_eps(step):
# linearly decay epsilon from EPS_START down to EPS_END
return max(EPS_END, EPS_START - step * (EPS_START - EPS_END) / EPS_DECAY)
global_step = 0  # persists across episodes so epsilon keeps decaying
for _ in tqdm(range(n_episode)):
done = False
old_obs = env.reset()
old_obs = torch.from_numpy(old_obs).float()
step = 0
while not done:
eps = get_eps(global_step)
if np.random.uniform() < eps:
action = env.action_space.sample()
else:
with torch.no_grad():
action = policy_net(old_obs).argmax().item()
obs, reward, done, info = env.step(action)
obs = torch.from_numpy(obs).float()
memory.push(old_obs, action, obs, reward, done)
old_obs = obs
step += 1
global_step += 1
if step % render_step == 0:
env.render()
# Training
if len(memory) >= BATCH_SIZE:
batch = Transition(*zip(*memory.sample(BATCH_SIZE)))
batch_state = torch.cat([x.unsqueeze(0) for x in batch.state])
batch_action = torch.cat([x.unsqueeze(0) for x in batch.action])
batch_reward = torch.cat([x.unsqueeze(0) for x in batch.reward])
batch_done = torch.cat([x.unsqueeze(0) for x in batch.done])
batch_next_state = torch.cat([x.unsqueeze(0) for x in batch.next_state])
# Build State Action Value
batch_output = policy_net(batch_state)
state_action_values = batch_output.gather(1, batch_action.unsqueeze(1)).squeeze(1)
# Build Target
batch_target = target_net(batch_next_state)
next_state_values = torch.zeros(BATCH_SIZE)
batch_max = batch_target.max(1)[0].detach()
next_state_values[~batch_done] = batch_max[~batch_done]  # bootstrap only from non-terminal states
target = batch_reward + GAMMA * next_state_values
# Optimization
loss = F.smooth_l1_loss(state_action_values, target)
optimizer.zero_grad()
loss.backward()
for param in policy_net.parameters():
param.grad.data.clamp_(-1, 1)
optimizer.step()
if global_step % TARGET_UPDATE == 0:
target_net.load_state_dict(policy_net.state_dict())
# -
torch.cat([torch.ones(3).unsqueeze(0), torch.ones(3).unsqueeze(0)]).shape
help(F.smooth_l1_loss)
help(torch.Tensor.detach)
torch.randn(3).max(0)[0]
obs_.float()
transitions = memory.sample(3)
x = Transition(*zip(*transitions))
for i in x:
print(i)
break
list(x)
x
x = torch.ones(3, dtype=torch.long).unsqueeze(1)
y = torch.randn((3, 5))
y.gather(1, x)
x.requires_grad
torch.eye(x)
help(torch.eye)
(1.1 ** 5 - 1) / 0.1 * 35746
1.1 ** 6
np.log1p
help(np.log1p)
np.exp(np.log1p(3))
| examples/miscellaneous/DQN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2.7 with Spark
# language: python
# name: python2
# ---
# + [markdown] colab_type="text" id="9U1dSOGK5RML"
# ### Setup spark
# + colab={"base_uri": "https://localhost:8080/", "height": 73} colab_type="code" id="GkeVMe5n5ebJ" outputId="7d1df72c-8a84-4813-ed13-113089f5e1c7"
# !apt-get install openjdk-8-jdk-headless -qq > /dev/null
# !java -version
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="GtQ30mmN5O1r" outputId="7228fa61-24ea-4e40-cb71-b168ee982d53"
# !wget http://www-eu.apache.org/dist/spark/spark-2.3.3/spark-2.3.3-bin-hadoop2.6.tgz
# !tar -xvf spark-2.3.3-bin-hadoop2.6.tgz
# !pip install findspark
# !pip install systemml
# + colab={} colab_type="code" id="kWmGeSjp5gXy"
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.3.3-bin-hadoop2.6"
# + [markdown] colab_type="text" id="MfmJFEUM5PQb"
# ### DL4j code
# + colab={"base_uri": "https://localhost:8080/", "height": 223} colab_type="code" id="-s7ztOTa2EV-" outputId="90f060e8-089f-411c-8c6a-0c78d3c749c1"
# !wget https://raw.githubusercontent.com/maxpumperla/dl4j_coursera/master/iris.txt
# + colab={"base_uri": "https://localhost:8080/", "height": 765} colab_type="code" id="EoI5erWZ2NA1" outputId="d87f45f1-6813-4c0f-f437-fc6429a493ce"
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.preprocessing import LabelEncoder
seed = 10
numpy.random.seed(seed)
dataframe = pandas.read_csv('iris.txt',header=None)
dataset = dataframe.values
x = dataset[:,0:4].astype(float)
y = dataset[:,4]
encoder = LabelEncoder()
encoder.fit(y)
encoded_Y = encoder.transform(y)
dummy_y = np_utils.to_categorical(encoded_Y)
model = Sequential()
model.add(Dense(4,input_dim = 4, kernel_initializer = 'normal', activation = 'relu'))
model.add(Dense(3, kernel_initializer = 'normal', activation = 'sigmoid'))
model.compile(loss = 'categorical_crossentropy',optimizer = 'adam', metrics = ['accuracy'])
model.fit(x, dummy_y, epochs = 20, batch_size = 5)
model.save('my_model.h5')
# + colab={"base_uri": "https://localhost:8080/", "height": 336} colab_type="code" id="4A_G71h92vMq" outputId="cfe47169-f646-4e1d-c367-2922e5d809ff"
# !wget https://github.com/maxpumperla/dl4j_coursera/releases/download/v0.4/dl4j-snapshot.jar
# + colab={} colab_type="code" id="5sgc-gvd-CNc"
#case "META-INF/services/org.nd4j.linalg.factory.Nd4jBackend" => MergeStrategy.first
#case "META-INF/services/org.nd4j.linalg.compression.NDArrayCompressor" => MergeStrategy.first
# + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="xTZGrvvz5Am6" outputId="30d059c0-987d-408f-a8c4-e76ff3b6a805"
# !ls $SPARK_HOME
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="o4JuFUwe4zFc" outputId="5e00a6b8-c323-430a-be5d-bf6a3c738a9a"
#/content/spark-2.3.3-bin-hadoop2.6.tgz \
# !$SPARK_HOME/bin/spark-submit \
# --class skymind.dsx.KerasImportCSVSparkRunner \
# --files iris.txt,my_model.h5 \
# --master local[*] \
# dl4j-snapshot.jar \
# -batchSizePerWorker 15 \
# -indexLabel 4 \
# -train false \
# -numClasses 3 \
# -modelFileName my_model.h5 \
# -dataFileName iris.txt \
# -useSparkLocal true
#--master $MASTER \
| Applied AI Study Group #1 - July 2019/week3/DL4J Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Modules
#
# Having now introduced the idea that objects can have attributes and associated methods, as we saw with `date` and `datetime` type objects, let's briefly discuss modules, which provide users access to functions and objects for carrying out particular tasks before we see how to define our own object types in the next chapter.
# ## modules & imports
#
# **Modules** are files that store python code that can be `import`ed into your Namespace for use. Specifically, in the last chapter we `import`ed from the `datetime` module without really discussing what that meant. We'll pause here to ensure that we understand what modules are, what modules are available to you, and how `import`s work.
#
# <div class="alert alert-success">
# A <b>module</b> is a set of Python code with functions, classes, etc. available in it. A Python <b>package</b> is a directory of modules.
# </div>
#
# In the last chapter, when we specified `from datetime import date`, we indicated that we wanted to `import` (meaning make available in our current Namespace) the `date` object. However, we need to indicate to Python *where* to find the `class` definition for that object type. In this statement we indicated that it could be found in the `datetime` module. Upon executing this `import` statement, one can then use the `date` type object.
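# For example (a quick sketch revisiting that statement), after the `import` the `date` object can be used directly:

```python
from datetime import date

# date is now available in the current Namespace
my_date = date(2024, 1, 15)
print(my_date.year)   # 2024
```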
# ### Why `import`?
#
# <div class="alert alert-success">
# <code>import</code> is a keyword to import external code into the local namespace.
# </div>
#
# There is *a lot* of Python code out in the world, with different people using Python for vastly different tasks. For example, the functions/classes used by a neuroscientist analyzing brain imaging data would be vastly different than the functions/classes being used by a software engineer developing an app. Therefore, you wouldn't want Python to make *all* code ever written available on installation. First, that would take a long time, making python take a long time to open up. Second, there are only so many different function names. Certain modules/packages could have functions/classes with the same names. If they were all installed at once, it would be hard to figure out exactly which function/class had been imported most recently, and thus would break programs.
#
# To avoid these issues, Python only front-loads those functions and object types that are commonly used across all Python programmers (i.e. `range`, `len`, `type`, `list`, `int`, etc.), requiring any additional functionality to be `import`ed prior to use.
# ### What's available?
#
# Some modules are available with every single python installation. These tend to be general use modules, or things that many different Python programmers will find useful, regardless of their specific field. These modules and the functionality within them can be `import`ed at any time so long as Python has been installed. These modules are said to be part of the Python **standard library**. All of these modules are listed [here](https://docs.python.org/3/library/); however, we'll note a few of the most-commonly used here and describe when they're helpful:
#
# - `random` - to "generate pseudo-random numbers" and carry out tasks with randomness (e.g. selecting a value at random from a collection)
# - `collections` - for common operations with collections (dict, tuple, list, etc.)
# - `datetime` - for operating with dates
# - `math` - for carrying out mathematical operations
# - `os` - for interfacing with your operating system (including file system)
# - `re` - for operating on regular expressions
#
# These, and all modules that are part of the standard library can be `import`ed, after which their functionality is available in your current Namespace.
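# To illustrate, two of the modules listed above can be imported and used side by side (a minimal sketch):

```python
import math
import random

print(math.factorial(5))       # 120
print(random.randint(1, 10))   # an integer between 1 and 10, chosen at random
```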
# ### How to `import`?
#
# Prior to `import`ing a module, its functionality will not be available in your namespace. For example, prior to importing the `math` module if you ask what type of object `math` is, an error will be returned, as `math` is not yet in your current Namespace:
# we haven't imported the module yet
# this code fails if you haven't yet imported math
type(math)
# To access the `math` module, it must first be imported into your current Namespace. Most simply this can be done with the general syntax: `import module_name`. For example, to import the `math` module, you could use the following:
import math
# The above imports the `math` module into your current namespace, such that if you, at this point, check for the `type` of math, an error is no longer returned, as `math` has been `import`ed.
# Check the type of math
type(math)
# By importing, we now have access to the functionality within this module. For example, the `math` module includes a function `sqrt` for calculating the square root of a number.
#
# To execute a function from a module, you must first specify the module name, followed by a `.` and then the function to be executed:
math.sqrt(9)
# ### What functionality is available?
#
# While it can be intimidating at first, much of the functionality in these commonly-used modules will become familiar over time. However, you can always look up what's available within a module using the `dir()` function, which returns all available attributes, functions, and methods of a given module/function/class.
#
# For example, for the `math` module, we see a handful of different functions/attributes, including `sqrt()`, which we used above. (We'll ignore the "dunder"s (double underscores) for now.)
dir(math)
# ### `import`s: `from` & `as`
#
# Above we specified `import module_name` and then provided access to all functionality within the module specified. However, sometimes you know there's only a single function or single object type that you want to use in your code. In those cases, there's no need to `import` the *entire* module. Instead, we can use `from` to *only* `import` the objects we intend to use in our code.
#
# <div class="alert alert-success">
# <code>from</code> allows us to decide exactly what objects to import into our Namespace
# </div>
# For example, if the *only* thing we wanted to use from the `random` module was the `choice` function, rather than using the `import` statement `import random`, which would provide access to the entire `random` module in our Namespace, we can instead use the general syntax `from module_name import function`:
from random import choice
# When we specify specifically what it is we want to import, when we go to execute (use) this function, we no longer have to first specify the module name. Rather, we can call the function directly:
choice(['a', 'b', 'c', 'd', 'e'])
# Note that the above function chooses a value at random from the list. Thus, if you're following along, you likely won't see the same specific output value; however, the value you see will be a character between 'a' and 'e' as those are the values from which `choice` is choosing one at random.
# Similarly, `as` allows for users to specify what name should be used in your code for an object that is imported. This can be helpful when 1) the names of an object to be used is really long and 2) when there are multiple objects with the same name, avoiding confusion.
#
# <div class="alert alert-success">
# <code>as</code> allows us to decide exactly what name to use for a module's functionality in our Namespace.
# </div>
#
# For example, `collections` is a bit long. If you wanted to `import` all the functionality from that module into your Namespace, but didn't want to have to type `collections` every time you wanted to use an object from that module, you could instead `import` the module as follows:
import collections as cols
# The above would import all the functionality from the `collections` module into your Namespace; however, it would use the name `cols`, instead of `collections`. Thus, if you attempted to access the functionality by referring to `collections`, the following would error (as `collections` has not been imported by that name):
# this will produce an error
dir(collections)
# However, the following would provide you with information about the functionality in the `collections` module, since it was imported using the short name `cols`:
dir(cols)
# Note that `from` `import` and `as` can be used in concert. For example,
from string import punctuation as punc
# the above statement `import`s the `punctuation` object from the `string` module, giving it the short name `punc` in our current Namespace.
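# As a quick check, the short name works anywhere the full `string.punctuation` object would:

```python
from string import punctuation as punc

# punc is the very same object as string.punctuation, under a shorter name
print('!' in punc)   # True
```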
#
# <div class="alert alert-danger">
# Warning: The order will always be `from` -> `import` -> `as`. <code>import ascii_letters from string</code>, for example, will produce an error, as the <code>from</code> statement comes <em>after</em> the <code>import</code> statement.
# </div>
# ### `*` `imports`
#
# Occasionally you will see `from module_name import *`, where the `*` acts as a wildcard, meaning "everything". The statement `from module_name import *` should be read as "import everything from module_name"; however, this is generally bad practice. The reason for this is because later in your code when you refer to and use an object from the module, it will be unclear *where* that function came from, given the use of the wildcard.
#
#
# <div class="alert alert-danger">
# Warning: <code>*</code> <code>imports</code> are to be avoided.
# </div>
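# To see why, note that both `math` and `cmath` (both in the standard library) define a function named `sqrt`; with `*` imports, the second import silently replaces the first:

```python
from math import *
print(sqrt(9))    # 3.0 -- math's sqrt

from cmath import *
print(sqrt(9))    # (3+0j) -- now cmath's sqrt, with no visible warning
```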
import random
dir(random)
# ## Exercises
#
# Q1. **Which of the following is NOT a valid Python `import` statement?**
#
# - A) `import collections as col`
# - B) `from statistics import mean as average`
# - C) `from os import path`
# - D) `from random import choice, choices`
# - E) `import ascii_letters from string`
#
# Q2. **If the statement `import random` were used, how would one specify use of the `sample` function from that module?**
#
# - A) `import sample`
# - B) `random.sample()`
# - C) `sample()`
# - D) `rand.sample()`
# - E) `from random sample()`
#
# Q3. **If the statement `from math import sqrt` were used, how would one specify use of the `sqrt` function from that module?**
#
# - A) `from math sqrt()`
# - B) `import sqrt`
# - C) `m.sqrt()`
# - D) `math.sqrt()`
# - E) `sqrt()`
| content/05-classes/modules.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/shyam1234/AIML_PythonTutorials/blob/master/Bridgingo_Sentiment_Analysis_.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="DuJRkwuSdklx" colab_type="text"
# # **Bridgingo Sentiment Analysis**
# This is for demonstrating the Sentiment analysis based on the live twitter feed.
# ###### Author: <NAME> (AIML Energetic)
# + [markdown] id="9hGbUjL7dwel" colab_type="text"
# ##**Agenda**
#
#
# 1. NLP (Natural Language Processing)
# 2. Sentiment Analysis
# 3. Connect Twitter with Collab
# 4. Tweets Preprocessing and Cleaning
# 5. Visualization from Tweets
#
#
# + [markdown] id="SZA7MziOkfEK" colab_type="text"
# ###1. **Natural Language Processing**
# Simple definition: NLP is a field of machine learning concerned with a computer's ability to understand, analyze, manipulate, and potentially generate human language.
#
# ####**NLP in Real Life**
#
#
# * Information Retrieval (Google finds relevant and similar results).
# * Information Extraction (Gmail structures events from emails).
# * Machine Translation (Google Translate translates from one language to another).
# * Text Simplification (Rewordify simplifies the meaning of sentences); Shashi Tharoor tweets could be used (pun intended).
# * Sentiment Analysis (Hater News gives us the sentiment of the user).
# * Text Summarization (Smmry or Reddit's autotldr gives a summary of sentences).
# * Spam Filter (Gmail filters spam emails separately).
# * Auto-Predict (Google Search predicts user search results).
# * Auto-Correct (Google Keyboard and Grammarly correct words otherwise spelled wrong).
# * Speech Recognition (Google WebSpeech or Vocalware).
# * Question Answering (IBM Watson's answers to a query).
# * Natural Language Generation (generation of text from image or video data).
#
#
#
# + [markdown] id="yRXfHA4Klk2_" colab_type="text"
# NLTK (Natural Language Toolkit): NLTK is a popular open-source package in Python. Rather than building all tools from scratch, NLTK provides all common NLP tasks.
#
#
# Concept of Tokenization, Stemming, and Lemmatization
#
#
# **Stemming**
# While working with words, we come across a lot of variations due to grammatical reasons. The concept of variations here means that we have to deal with different forms of the same words like democracy, democratic, and democratization. It is very necessary for machines to understand that these different words have the same base form. In this way, it would be useful to extract the base forms of the words while we are analyzing the text.
#
# **Lemmatization**
#
# We can also extract the base form of words by lemmatization. It basically does this task with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only. This kind of base form of any word is called lemma.
#
# **('walk', 'walked', 'walks', 'walking')**
#
#
# **Model:**
#
# Bag of words, TF-IDF, and N-grams
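# The idea of reducing ('walk', 'walked', 'walks', 'walking') to one base form can be illustrated with a minimal suffix-stripping sketch (a toy rule of our own, not the Porter stemmer that NLTK implements):

```python
def simple_stem(word, suffixes=("ing", "ed", "s")):
    """Strip one common suffix when the remaining stem is long enough."""
    for suf in suffixes:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[:-len(suf)]
    return word

print([simple_stem(w) for w in ("walk", "walked", "walks", "walking")])
# ['walk', 'walk', 'walk', 'walk']
```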
# + [markdown] id="WFhUPXYWd6i0" colab_type="text"
# ###2. **Sentiment Analysis**
# the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer’s attitude towards a particular topic, product, etc. is positive, negative, or neutral. **-Oxford dictionary**
#
# + [markdown] id="I0-EgmKjeQWq" colab_type="text"
# There are many methods and algorithms to implement sentiment analysis systems, which can be classified as:
#
#
# * **Rule-based** systems that perform sentiment analysis based on a set of manually crafted rules.
# * **Automatic systems** that rely on machine learning techniques to learn from data.
# * **Hybrid systems** that combine both rule-based and automatic approaches.
#
#
# + [markdown] id="7IOttatkrVcF" colab_type="text"
# There are mainly two approaches for performing sentiment analysis (as defined by the University of Victoria):
#
# 1. **Lexicon based**: count the number of positive and negative words in the given text; the larger count determines the sentiment of the text.
# 2. **Machine learning based**: train a classification model on a dataset pre-labeled as positive, negative, or neutral.
#
#
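The lexicon-based approach can be sketched in a few lines. The positive/negative word lists below are invented for illustration, not a real sentiment lexicon:

```python
# Tiny hand-crafted lexicons — illustrative only
POSITIVE = {'good', 'great', 'love', 'thank', 'support', 'win'}
NEGATIVE = {'bad', 'hate', 'enemy', 'terrible', 'lose'}

def lexicon_sentiment(text):
    # Count lexicon hits; the sign of the difference decides the label
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return 'Positive'
    if score < 0:
        return 'Negative'
    return 'Neutral'

print(lexicon_sentiment("I love this great product"))
```

Real lexicon systems add negation handling ("not good"), intensifiers, and per-word weights, but the core idea is exactly this counting.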
# + [markdown] id="9FNBxC2qe3PE" colab_type="text"
# ### Create the Twitter application for getting live tweet
# + [markdown] id="4hOYfZrpfPa1" colab_type="text"
# A. Go to "https://developer.twitter.com" to create the Twitter application
# B. Then go to "Keys and tokens" and copy the **API key**, **API secret key**, **Access token**, and **Access token secret**
#
# + [markdown] id="m4dwcYTxhhDm" colab_type="text"
# ### Let's code now
# + [markdown] id="YYFjA9VpmaB9" colab_type="text"
# ###3.**Connect Twitter with Collab**
# + id="8Eim5VO53iJz" colab_type="code" colab={}
#Load the twitter application credential
from google.colab import files
uploaded = files.upload()
# + id="62r4SPIc3poD" colab_type="code" colab={}
# Get the data
log = pd.read_csv('login.csv')
# + id="fIDKN7D23uaZ" colab_type="code" colab={}
log
# + id="TCEyKM5Ekr-X" colab_type="code" colab={}
# Twitter API credentials
consumerKey = log['value'][0]
consumerSecret = log['value'][1]
accessToken = log['value'][2]
accessTokenSecret = log['value'][3]
# + id="oEuImkK93xoL" colab_type="code" colab={}
#Import the libraries
import tweepy
# + [markdown] id="fvbnKL0j35eC" colab_type="text"
# #### **What is tweepy?**
# An easy-to-use Python library for accessing the Twitter API. It is great for simple automation and for creating Twitter bots.
#
# [Documentation](http://docs.tweepy.org/en/v3.5.0/index.html)
# + id="RprNiXiilKAa" colab_type="code" colab={}
# Create the authentication object. Into this we pass our consumer token and secret
authenticate = tweepy.OAuthHandler(consumerKey, consumerSecret)
# Set the access token and access token secret
authenticate.set_access_token(accessToken, accessTokenSecret)
# Create the API object while passing in the auth information (Twitter API wrapper)
api = tweepy.API(authenticate, wait_on_rate_limit=True)
# + id="EbKY2gVRoyYJ" colab_type="code" colab={}
# Sanity-check the stored access token (avoid displaying this in shared notebooks)
authenticate.access_token
# + id="IbLlw2iBnGLG" colab_type="code" colab={}
# Extract 100 tweets from the twitter user (getting time line tweet : user_timeline)
posts = api.user_timeline(screen_name="realDonaldTrump", count = 100, lang= "en", tweet_mode= "extended")
# Print the last five tweets from the account
print("/* Show the last five recent tweets */ \n")
for i, tweet in enumerate(posts[0:5], start=1):
    print(str(i) + ')' + tweet.full_text + '\n')
# + id="eM-NFAHN6D5M" colab_type="code" colab={}
import pandas as pd
# + [markdown] id="QkAMx6-y53fk" colab_type="text"
# #### **What is Pandas?**
# Pandas is one of the most widely used Python libraries in data science. It provides high-performance, easy-to-use data structures and data-analysis tools; its primary structure for data manipulation is the DataFrame.
# + id="LWIhY2aXqChU" colab_type="code" colab={}
# Create the dataframe with a column called Tweets
df = pd.DataFrame([tweet.full_text for tweet in posts], columns=['Tweets'])
# Show the first 5 rows of data
df.head()
# + [markdown] id="HASzbRCfmxww" colab_type="text"
# ###4. **Tweets Preprocessing and Cleaning**
# + id="hAlPorpp6IBl" colab_type="code" colab={}
import re
# + id="8Vbbi63Tqpel" colab_type="code" colab={}
# Clean the tweets: remove unwanted artifacts (mentions, hashtags, retweet markers, links)
# Create the function to clean the tweets
def cleanTxt(text):
    text = re.sub(r'@[A-Za-z0-9]+', '', text)  # Remove @mentions
    text = re.sub(r'#', '', text)              # Remove the '#' symbol
    text = re.sub(r'RT[\s]+', '', text)        # Remove the RT marker
    text = re.sub(r'http\S+', '', text)        # Remove hyperlinks (http and https)
    return text
#Clean the text
df['Tweets'] = df['Tweets'].apply(cleanTxt)
#Show the cleaned text
df
# + [markdown] id="eSSQKLZH6LJi" colab_type="text"
# #### **What is Polarity?**
# It quantifies the **emotion** expressed in a sentence on a scale from **-1 to 1**.
#
# 1. a negative value indicates negative emotion
# 2. a value of 0 indicates neutral
# 3. a positive value indicates positive emotion
#
# Example
# 1. I want to thank our Indians for supporting us in the fight against coronavirus. (Positive)
# 2. We should fight with enemies. (Negative)
# 3. We will think about supporting other countries some other time. (Seems Neutral)
#
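One simple way to picture how a score lands on the [-1, 1] scale is to normalize the difference between positive and negative word hits. Note that TextBlob, used later in this notebook, computes polarity differently (by averaging per-word lexicon scores); this is only a conceptual sketch:

```python
def polarity_from_counts(pos_hits, neg_hits):
    # Map positive/negative hit counts onto the [-1, 1] polarity scale
    total = pos_hits + neg_hits
    return 0.0 if total == 0 else (pos_hits - neg_hits) / total

print(polarity_from_counts(3, 1))  # more positive than negative words
print(polarity_from_counts(0, 2))  # purely negative
print(polarity_from_counts(0, 0))  # no opinion words at all -> neutral
```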
# + [markdown] id="UxgtPEyZ6VI8" colab_type="text"
# #### **What is Subjectivity?**
# It measures how **subjective** or **objective** a sentence is, on a scale from **0 to 1**.
#
# Subjective sentences generally refer to personal opinion, emotion or judgment whereas objective refers to factual information.
#
# Example
# 1. THANK YOU TEXAS (Subjective)
# 2. I am pleased to announce that <NAME> will become White House Chief of Staff (Objective)
# + [markdown] id="tQ65OpSqYPqw" colab_type="text"
# The sentiment function of textblob returns two properties, **polarity**, and **subjectivity**.
#
# **Polarity** is a float in the range [-1, 1], where 1 means a positive statement and -1 means a negative statement.
#
# **Subjective** sentences generally refer to personal opinion, emotion or judgment, whereas objective sentences refer to factual information. Subjectivity is also a float, in the range [0, 1].
#
# + [markdown] id="DOP57-Oa6inv" colab_type="text"
# ##### **What is TextBlob?**
# TextBlob is a Python library for processing textual data. It offers a simple API for common natural language processing (NLP) tasks such as part-of-speech tagging, noun phrase extraction, sentiment analysis, classification, translation, and more.
# + id="kgp337mD8KYs" colab_type="code" colab={}
from textblob import TextBlob
# + id="wD_VxVMislm9" colab_type="code" colab={}
# Create the function to get the subjectivity
def getSubjectivity(text):
return TextBlob(text).sentiment.subjectivity
# Create the function to get the polarity
def getPolarity(text):
return TextBlob(text).sentiment.polarity
# Create two new columns
df['Subjectivity'] = df['Tweets'].apply(getSubjectivity)
df['Polarity'] = df['Tweets'].apply(getPolarity)
#Show the new dataframe with new added column
df
# + [markdown] id="FlLm8nja6rU8" colab_type="text"
# #### **What is WordCloud?**
# Word Cloud is a data visualization technique for representing text data, in which the size of each word indicates its frequency or importance. Significant textual data points can be highlighted using a word cloud.
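Under the hood, the word sizes come from plain frequency counts, which can be previewed with `collections.Counter` before rendering (the example text is made up):

```python
from collections import Counter

text = "great jobs great economy jobs jobs"
freq = Counter(text.split())

# A word cloud would draw 'jobs' largest, then 'great', then the rest
print(freq.most_common(3))
```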
# + id="sHxR9O-4658Q" colab_type="code" colab={}
from wordcloud import WordCloud
# + id="y3d55HQj8s6E" colab_type="code" colab={}
# Plot Word Cloud
allWords = ' '.join([twts for twts in df['Tweets']])
wordCloud = WordCloud(width=500, height=300, random_state=21, max_font_size=119).generate(allWords)
# + [markdown] id="zyl0PZ9j7LE3" colab_type="text"
# #### **What is Matplotlib?**
# In short: Matplotlib is used for visualizing the data.
#
# Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy.
# + id="mYeH1sMS8p2j" colab_type="code" colab={}
import matplotlib.pyplot as plt
# + id="-gNduxmVtoH_" colab_type="code" colab={}
plt.imshow(wordCloud, interpolation = 'bilinear')
plt.axis('off')
plt.show()
# + id="SeohSG5lveUc" colab_type="code" colab={}
# Create the function to compute the negative, neutral and positive analysis
def getAnalysis(score):
if score < 0:
return 'Negative'
elif score == 0:
return 'Neutral'
else:
return 'Positive'
df['Analysis'] = df['Polarity'].apply(getAnalysis)
#Show the dataframe
df
# + id="U1gVOOi5w6_Q" colab_type="code" colab={}
# Print all the positive tweets (use .iloc so rows follow the sorted order)
j = 1
sortedDf = df.sort_values(by=['Polarity'])
for i in range(0, sortedDf.shape[0]):
    if sortedDf['Analysis'].iloc[i] == 'Positive':
        print(str(j) + ')' + sortedDf['Tweets'].iloc[i])
        print()
        j = j + 1
# + id="uzdHLq_syLES" colab_type="code" colab={}
# Let's print the negative tweets (use .iloc so rows follow the sorted order)
j = 1
sortedDf = df.sort_values(by=['Polarity'], ascending=False)
for i in range(0, sortedDf.shape[0]):
    if sortedDf['Analysis'].iloc[i] == 'Negative':
        print(str(j) + ')' + sortedDf['Tweets'].iloc[i])
        print()
        j = j + 1
# + [markdown] id="6x5PYb6-nDpV" colab_type="text"
# ###5. **Visualization from Tweets**
# + id="iijKadXGzTe7" colab_type="code" colab={}
# Plot the polarity and subjectivity
plt.figure(figsize=(8,6))
for i in range(0, df.shape[0]):
    plt.scatter(df['Polarity'][i], df['Subjectivity'][i], color='Blue')
plt.title('Sentiment Analysis')
plt.xlabel('Polarity')
plt.ylabel('Subjectivity')
plt.show()
# + id="DAu11EYb0WXu" colab_type="code" colab={}
# Get the percentage of positive tweets
ptweets = df[df.Analysis == 'Positive']
ptweets = ptweets['Tweets']
round((ptweets.shape[0]/df.shape[0])*100,1)
# + id="6AF8WFQM1BpU" colab_type="code" colab={}
# Get the percentage of negative tweets
ntweets = df[df.Analysis == 'Negative']
ntweets = ntweets['Tweets']
round((ntweets.shape[0]/df.shape[0])*100, 1)
# + id="-pmcskWE1avX" colab_type="code" colab={}
# Show the value counts
df['Analysis'].value_counts()
#plot and visualize the count
plt.title('Sentiment Analysis')
plt.xlabel('Sentiment')
plt.ylabel('Count')
df['Analysis'].value_counts().plot(kind="bar")
plt.show()
# + [markdown] id="69r18PKozmKu" colab_type="text"
# For reference:
# a. https://www.analyticsvidhya.com/blog/2018/02/natural-language-processing-for-beginners-using-textblob/
| Bridgingo_Sentiment_Analysis_.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 (tabula-muris-env)
# language: python
# name: tabula-muris-env
# ---
# +
import glob
import os

import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
# Editable text and proper LaTeX fonts in illustrator
matplotlib.rcParams['ps.useafm'] = True
# matplotlib.rcParams['pdf.use14corefonts'] = True
# Editable fonts. 42 is the magic number
matplotlib.rcParams['pdf.fonttype'] = 42
# Use "Computer Modern" (LaTeX font) for math numbers
matplotlib.rcParams['mathtext.fontset'] = 'cm'
# %matplotlib inline
sns.set(style='whitegrid', context='paper')
data_ingest_folder = os.path.join('..', '00_data_ingest' )
folder = os.path.join(data_ingest_folder, '13_ngenes_ncells_facs')
palette_folder = os.path.join(data_ingest_folder, '15_color_palette')
colors = pd.read_csv(os.path.join(palette_folder, 'tissue_colors.csv'), index_col=0, squeeze=True)
colors
# +
globber = os.path.join(folder, '*_nreads_ngenes.csv')
dfs = []
for filename in glob.iglob(globber):
df = pd.read_csv(filename, index_col=0)
df['tissue'] = os.path.basename(filename).split('_nreads_ngenes.csv')[0]
dfs.append(df)
nreads_ngenes = pd.concat(dfs)
print(nreads_ngenes.shape)
print(len(nreads_ngenes.groupby('tissue')))
nreads_ngenes.head()
# -
nreads_ngenes['log10 nReads'] = np.log10(nreads_ngenes['nReads'])
# Replace underscores with spaces for LaTeX happiness
nreads_ngenes['tissue'] = nreads_ngenes['tissue'].str.replace('_', ' ')
colors.index = colors.index.str.replace('_', ' ')
tissues = sorted(nreads_ngenes['tissue'].unique())
tissues
n_cells_per_tissue = nreads_ngenes.groupby('tissue').size().reset_index()
n_cells_per_tissue = n_cells_per_tissue.rename(columns={0: 'n_cells'})
n_cells_per_tissue
fig, ax = plt.subplots()
sns.barplot(x='n_cells', y='tissue', data=n_cells_per_tissue, palette=colors, order=tissues)
ax.set(xlabel='Number of cells')
fig.tight_layout()
fig.savefig('figure1b_barplot_n_cells_per_tissue.pdf')
| 01_figure1/figure1b.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # HTTP Web Requests
import sys
import os
import json
import requests
# ## The simplest request example
url = 'https://www.baidu.com'
req = requests.get(url)
print("status=%d" %(req.status_code))
if req.status_code == 200:
print("data:\n")
print(req.text)
# ## Mutual-TLS (two-way certificate) request example
# ### Set up a server of your own that requires client-certificate verification ('https://xxxx')
# ### Accessing it directly will produce an SSL error
# +
url = 'https://cvm.hansong.io/index.html'
try:
req = requests.get(url)
print("status=%d" %(req.status_code))
if req.status_code == 200:
print("data:\n")
print(req.text)
except Exception:
print("exception:%s||%s" %(sys.exc_info()[0], sys.exc_info()[1]))
# -
# ### Even with verify=False, a server requiring mutual (two-way) authentication still returns a 400 error!
# +
url = 'https://cvm.hansong.io/index.html'
try:
    req = requests.get(url, verify=False)
print("status=%d" %(req.status_code))
if req.status_code == 200:
print("data:\n")
print(req.text)
except Exception:
print("exception:%s||%s" %(sys.exc_info()[0], sys.exc_info()[1]))
# -
# ### With the client certificate:
requests.get('https://kennethreitz.org', cert=('/path/client.cert', '/path/client.key'))
| jupyter_samples/code/learn_http.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
# Previously, we've noticed that there is quite some redundant information between the notes and annotations fields. Generally, it is more accepted to have the information stored in the annotations.
# Therefore, in this notebook I will move information from the notes field into the annotations and remove what is duplicated. Afterwards the notes section should be empty (except for some comments).
import cameo
import pandas as pd
import cobra.io
model = cobra.io.read_sbml_model('../model/p-thermo.xml')
model_e_coli = cameo.load_model('iML1515')
# # Metabolites
# If metabolites already have information for that field in the annotations, we will let that one predominate. Alternatively, we will then fill it up with the information from notes.
#
# Notes contain:
# - KEGG: should be in annotation
# - NAME: Leave for now. We've already gone through many of these by hand because, due to an issue with the automatic conversion, the ends of some names were cut off. Also, as this field contains some alternate names for the metabolite, it can be useful to keep.
# - ChEBI: should be in annotation
#
#check KEGG info
for met in model.metabolites:
try:
kegg_anno = met.annotation['kegg.compound'] #try to find if there is already info in the annotation
if len(kegg_anno) > 0:
try:
del met.notes['KEGG'] #delete the notes if the metabolite has it
except KeyError:
continue #if it doesn't have a KEGG in the notes but has the annotation, just continue
else:
continue
except KeyError:
try:
kegg_note = met.notes['KEGG'] #lift KEGG ID from notes
met.annotation['kegg.compound'] = kegg_note #move KEGG ID to annotation
del met.notes['KEGG'] #delete the info in notes
except KeyError:
print (met.id)
# So there are 9 metabolites without KEGG in the annotation or notes. For the Biomass, this makes sense. For the others I will check and fix this.
model.metabolites.focytB561_c.annotation['kegg.compound'] = 'C05183'
model.metabolites.sbt__D_e.annotation = model.metabolites.sbt__D_c.annotation
model.metabolites.melib_e.annotation = model.metabolites.melib_c.annotation
# Tagatose 1-phosphate doesn't have a KEGG number, but does have a MetaNetX ID, so I will add that instead.
model.metabolites.tag1p__D_c.annotation['metanetx.chemical'] = 'MNXM11293'
model.metabolites.amylose_e.annotation = model.metabolites.amylose_c.annotation
model.metabolites.pyr_e.annotation = model.metabolites.pyr_c.annotation
model.metabolites.for_e.annotation = model.metabolites.for_c.annotation
#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
# ### Chebi
#check Chebi info
for met in model.metabolites:
try:
chebi_anno = met.annotation['chebi'] #try to find if there is already info in the annotation
if len(chebi_anno) > 0:
try:
del met.notes['ChEBI'] #delete the notes if the metabolite has it
except KeyError:
continue #if it doesn't have a KEGG in the notes but has the annotation, just continue
else:
continue
except KeyError:
try:
kegg_note = met.notes['ChEBI'] #lift KEGG ID from notes
met.annotation['chebi'] = kegg_note #move KEGG ID to annotation
del met.notes['ChEBI'] #delete the info in notes
except KeyError:
print (met.id)
#save & commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
# Quite some metabolites do not have a ChEBI ID associated with them. However this is fine, as they will contain at least a KEGG or MetaNetX ID in the annotation. Now I will check that every metabolite only has the name left in the notes field.
# Remove the remaining cross-reference keys from the notes field in a single pass
for key in ['PubChem', 'EXACT_MASS', 'MODULE', 'KNApSAcK', '3DMET',
            'NIKKAJI', 'PDB-CCD', 'LIPIDMAPS', 'CAS']:
    for met in model.metabolites:
        try:
            del met.notes[key]
        except KeyError:
            continue
for met in model.metabolites:
if len (met.notes) == 0: #metabolites with no notes
continue
elif len(met.notes) == 1: #metabolites with just one note
try:
name = met.notes['NAME']
except KeyError:
print (met.id)
else:
print (met.id)
# So some metabolites still have notes left. I will inspect what they are on a case-by-case basis and decide what to do with them.
model.metabolites.atp_c.notes = {}
del model.metabolites.actn__R_c.notes['KEGG_COMMENT']
del model.metabolites.actn_c.notes['KEGG_COMMENT']
del model.metabolites.hacc_c.notes['KEGG_COMMENT']
model.metabolites.mdgp_e.notes = {}
model.metabolites.mdgp_c.notes={}
#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
# Check that every metabolite has either Chebi or Kegg annotation
for met in model.metabolites:
if len(met.annotation) == 0:
print(met.id)
else:
try:
met.annotation['kegg.compound']
except KeyError:
try:
met.annotation['chebi']
except KeyError:
print(met.id)
# So only one metabolite lacks a KEGG or ChEBI annotation, tag1p__D_c. But we gave it a MetaNetX annotation, so the metabolite can still be traced back.
model.metabolites.tag1p__D_c.annotation
# # Reactions
# The annotation field can contain: ec-code, kegg.reaction, metanetx.reaction, rhea and sbo. These fields may also be present in the notes and so should be removed from notes to prevent duplicate information.
#
# So from the notes:
# - Subsystem: I will remove this, as it has been moved to the group field and modified in notebook 34.
# - Gene_association: Remove, this is stored in the GPR field also.
# - KEGG ID: move to kegg.reaction annotation and delete
# - Enzyme: move to ec-code and remove
# - Name: Contains some alternate names for reactions, can be left.
# - Definition: This describes the stoichiometry of the reaction. It can be found by looking at the reaction in the model itself, and it may become outdated (and thus wrong) if the reaction is later changed, so it will be removed.
#check KEGG info
for rct in model.reactions:
    if rct.id.startswith('EX'):
continue
elif rct.id[-1:] in ['t', 's', 'c']: #get rid of transports, we will tackle those later
continue
else:
try:
kegg_anno = rct.annotation['kegg.reaction'] #try to find if there is already info in the annotation
if len(kegg_anno) > 0:
try:
del rct.notes['KEGG ID'] #delete the notes if the metabolite has it
except KeyError:
continue #if it doesn't have a KEGG in the notes but has the annotation, just continue
else:
continue
except KeyError:
try:
kegg_note = rct.notes['KEGG ID'] #lift KEGG ID from notes
rct.annotation['kegg.reaction'] = kegg_note #move KEGG ID to annotation
del rct.notes['KEGG ID'] #delete the info in notes
except KeyError:
try:
kegg_note_wrong = rct.notes['ID'] #When Kegg ID is stored wrongly in the ID field instead
rct.annotation['kegg.reaction'] = kegg_note_wrong #move to annotation
del rct.notes['ID'] #delete note
except KeyError:
print (rct.id)
# The reactions that did not have the KEGG ID in either notes or annotations will be inspected one by one now.
model.reactions.LALDO2x.annotation = model_e_coli.reactions.LALDO2x.annotation
model.reactions.PGL.annotation = model_e_coli.reactions.PGL.annotation
model.reactions.KDG2R.annotation['kegg.reaction'] = 'R01739'
model.reactions.KDG2R.annotation['ec-code'] = '1.1.1.215'
model.reactions.KDG2R.annotation['rhea']= '16656'
model.reactions.KDG2R.annotation['sbo'] ='SBO:0000176'
model.reactions.KDG2R.annotation['metanetx.reaction']='MNXR94784'
model.metabolites.glcn__D_c.name = 'D-Gluconate'
model.reactions.DGLCN5R.annotation['ec-code'] = '1.1.1.69'
model.reactions.DGLCN5R.annotation['kegg.reaction'] = 'R01740'
model.reactions.DGLCN5R.annotation['rhea'] = '23939'
model.reactions.DGLCN5R.annotation['sbo'] = 'SBO:0000176'
model.reactions.DGLCN5R.annotation['metanetx.reaction'] = 'MNXR107148'
model.reactions.BGAL.annotation['sbo'] = 'SBO:0000176'
model.reactions.ALCD1.annotation['kegg.reaction']= 'R00605'
model.reactions.ALCD1.annotation['ec-code'] = '1.1.1.244'
model.reactions.ALCD1.annotation['sbo'] = 'SBO:0000176'
model.reactions.ALCD1.notes = {}
model.reactions.ALCD1.annotation['metanetx.reaction'] = 'MNXR95708'
model.reactions.TAG1PK.annotation['sbo'] = 'SBO:0000176'
model.reactions.TAG1PK.annotation['ec-code'] = '2.7.1.B6'
model.reactions.TGBPA.name = 'Tagatose-bisphosphate aldolase'
model.reactions.TGBPA.annotation = model_e_coli.reactions.TGBPA.annotation
model.reactions.SUCD5.annotation['sbo'] = 'SBO:0000176'
model.reactions.SUCD5.annotation['kegg.reaction'] = 'R02164'
model.reactions.SUCD5.annotation['ec-code'] = '1.3.5.1'
model.reactions.SUCD5.annotation['metanetx.reaction'] = 'MNXR107340'
model.reactions.LTHRK.annotation['sbo'] = 'SBO:0000176'
model.reactions.LTHRK.annotation['kegg.reaction'] = 'R06531'
model.reactions.LTHRK.annotation['ec-code'] = '2.7.1.177'
model.reactions.LTHRK.annotation['metanetx.reaction'] = 'MNXR101251'
model.reactions.ATPS4r.annotation['sbo'] = 'SBO:0000185'
model.reactions.ATPS4r.annotation['ec-code'] = '7.1.2.2'
model.reactions.ATPS4r.annotation['metanetx.reaction'] = 'MNXR96136'
model.reactions.ATPS4r.annotation['seed.reaction'] = 'rxn10042'
model.reactions.Kt2.annotation['sbo'] = 'SBO:0000185'
model.reactions.Kt2.annotation['metanetx.reaction'] = 'MNXR100951'
model.reactions.Kt2.annotation['seed.reaction'] = 'rxn05595'
model.reactions.CYTBO3.annotation['sbo'] = 'SBO:0000185'
model.reactions.CYTBO3.annotation['metanetx.reaction'] = 'MNXR97035'
model.reactions.CYTBO3.annotation['seed.reaction'] = 'rxn10113'
#save & commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
# ### EC info
#check EC info
for rct in model.reactions:
    if rct.id.startswith('EX'):
continue
elif rct.id[-1:] in ['t', 's', 'c']: #get rid of transports, we will tackle those later
continue
else:
try:
ec_anno = rct.annotation['ec-code'] #try to find if there is already info in the annotation
if len(ec_anno) > 0:
try:
del rct.notes['ENZYME'] #delete the notes if the metabolite has it
except KeyError:
continue #if it doesn't have an EC in the notes but has the annotation, just continue
else:
continue
except KeyError:
try:
ec_note = rct.notes['ENZYME'] #lift KEGG ID from notes
rct.annotation['ec-code'] = ec_note #move KEGG ID to annotation
del rct.notes['ENZYME'] #delete the info in notes
except KeyError:
print (rct.id)
# Most of these reactions are fine without an EC code because they contain enough other information to trace them back. If not, they are changed below.
del model.reactions.MALHYDRO.annotation['metanetx.reaction']
model.reactions.AMACT.annotation['ec-code'] = '2.3.1.180'
#remove all gene associations and definitions
for rct in model.reactions:
try:
del rct.notes['DEFINITION']
except KeyError:
continue
try:
del rct.notes['GENE_ASSOCIATION']
except KeyError:
continue
#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
# ## Transport reactions
# The transport reactions in our model don't seem to have any annotations or notes. MetaNetX does have IDs for some of these reactions, so those should be added. These reactions don't have E.C. numbers or KEGG IDs, so I will add the MetaNetX IDs and, where possible, the seed.reaction IDs to the annotation field, and make sure their notes are empty.
#
# __Export reactions__
# The export reactions all already contain the SBO annotation. As these are not biochemical reactions but are added purely from a modelling perspective, I will not add any additional annotations here.
#
#check which miss sbo term: should be none as Ben did this already.
for rct in model.reactions:
if rct.id[-1:] in ['t', 's', 'c']:
try:
sbo = rct.annotation['sbo']
except KeyError:
print (rct.id)
else:
continue
#check which miss metanetx term: probably almost all of them
for rct in model.reactions:
if rct.id[-1:] in ['t', 's', 'c']:
try:
meta = rct.annotation['metanetx.reaction']
except KeyError:
print (rct.id)
else:
continue
# Reactions with no Metanetx: biomass, QH2t (should be removed later), THMt, GTHRDt (should be removed later), BIOMASSt, THMTPt, NADPt(should be removed later), GTBIt, TURAt, KDG2t, MDGPt, TAGtpts.
#these two exchange reactions should be removed
model.remove_reactions([model.reactions.EX_succcoa_c])
model.remove_reactions([model.reactions.EX_dlglu_c])
model.reactions.NH4t.annotation['metanetx.reaction'] = 'MNXR101950'
model.reactions.NH4t.annotation['seed.reaction'] = 'rxn05466'
model.reactions.PIt.annotation['metanetx.reaction'] = 'MNXR102872'
model.reactions.PIt.annotation['seed.reaction'] = 'rxn05312'
model.reactions.FORt.annotation['metanetx.reaction'] = 'MNXR99621'
model.reactions.FORt.annotation['seed.reaction'] = 'rxn05559'
model.reactions.PYRt.annotation['metanetx.reaction'] = 'MNXR103385'
model.reactions.PYRt.annotation['seed.reaction'] = 'rxn05469'
model.reactions.AMYt.annotation['metanetx.reaction'] = 'MNXR142927'
model.reactions.Kt.annotation['metanetx.reaction'] = 'MNXR100951'
model.reactions.Kt.annotation['seed.reaction'] = 'rxn05595'
model.reactions.SO4t.annotation['metanetx.reaction'] = 'MNXR104466'
model.reactions.SO4t.annotation['seed.reaction'] = 'rxn05651'
model.reactions.GLC__Dtpts.annotation['metanetx.reaction'] = 'MNXR100237'
model.reactions.GLC__Dtpts.annotation['seed.reaction'] = 'rxn05226'
model.reactions.FRUtpts.annotation['metanetx.reaction'] = 'MNXR99662'
model.reactions.ARAB__Ltabc.annotation['metanetx.reaction'] = 'MNXR126447'
model.reactions.ARAB__Ltabc.annotation['seed.reaction'] = 'rxn05173'
model.reactions.XYL__Dtabc.annotation['metanetx.reaction'] = 'MNXR105268'
model.reactions.XYL__Dtabc.annotation['seed.reaction'] = 'rxn05167'
model.reactions.GALtabc.annotation['metanetx.reaction'] = 'MNXR100023'
model.reactions.GALtabc.annotation['seed.reaction'] = 'rxn05162'
model.reactions.SUCCt.annotation['metanetx.reaction'] = 'MNXR104619'
model.reactions.SUCCt.annotation['seed.reaction'] = 'rxn10952'
model.reactions.LAC__Lt.annotation['metanetx.reaction'] = 'MNXR100999'
model.reactions.LAC__Lt.annotation['seed.reaction'] = 'rxn11016'
model.reactions.ASN__Lt.annotation['metanetx.reaction'] = 'MNXR96066'
model.reactions.ASN__Lt.annotation['seed.reaction'] = ['rxn05508','rxn11321']
model.metabolites.pydx5p_e.name = 'Pyridoxal phosphate'
model.reactions.PYDX5Pt.annotation['metanetx.reaction'] = 'MNXR103359'
model.metabolites.qh2_e.name = 'Ubiquinol'
model.reactions.GLC__Dtabc.annotation['metanetx.reaction'] = 'MNXR100236'
model.reactions.GLC__Dtabc.annotation['seed.reaction'] = 'rxn05147'
model.reactions.FEtabc.annotation['metanetx.reaction'] = 'MNXR99504'
model.reactions.H2Ot.annotation['metanetx.reaction'] = 'MNXR98641'
model.reactions.O2t.annotation['metanetx.reaction'] = 'MNXR102090'
model.reactions.CO2t.annotation['metanetx.reaction'] = 'MNXR97980'
model.reactions.COBALT2t.id = 'COBALT2tabc'
model.reactions.ETOHt.annotation['metanetx.reaction'] = 'MNXR96810'
model.metabolites.ac_e.name = 'Acetate'
model.reactions.ACt.annotation['metanetx.reaction'] ='MNXR95431'
model.reactions.ACt.annotation['seed.reaction'] = 'rxn10904'
model.reactions.COBALT2tabc.annotation['metanetx.reaction'] = 'MNXR96819'
model.reactions.COBALT2tabc.annotation['seed.reaction'] = 'rxn08239'
model.reactions.LAC__Dt.annotation['metanetx.reaction'] = 'MNXR101277'
model.reactions.LAC__Dt.annotation['seed.reaction'] = 'rxn05602'
model.metabolites.cl_e.name = 'Chloride'
model.reactions.CLt.annotation['metanetx.reaction'] = 'MNXR96797'
model.reactions.CLt.annotation['seed.reaction'] = 'rxn10473'
model.reactions.MNLtpts.annotation['metanetx.reaction'] = 'MNXR101677'
model.reactions.MNLtpts.annotation['seed.reaction'] = 'rxn05617'
model.reactions.CELLBtpts.annotation['metanetx.reaction'] = 'MNXR96586'
model.reactions.SUCRtpts.annotation['metanetx.reaction'] = 'MNXR104643'
model.reactions.SUCRtpts.annotation['seed.reaction'] = 'rxn05655'
model.reactions.GLYCt.annotation['metanetx.reaction'] = 'MNXR100343'
model.reactions.GLYCt.annotation['seed.reaction'] = 'rxn05581'
model.reactions.RIB__Dtabc.annotation['metanetx.reaction'] = 'MNXR104034'
model.reactions.RIB__Dtabc.annotation['seed.reaction'] = 'rxn05160'
model.reactions.MANtpts.annotation['metanetx.reaction'] = 'MNXR101401'
model.reactions.MANtpts.annotation['seed.reaction'] = 'rxn05610'
model.reactions.SBTtpts.annotation['metanetx.reaction'] = 'MNXR104290'
model.reactions.SBTtpts.annotation['seed.reaction'] = 'rxn10184'
model.reactions.ACGAMtpts.annotation['metanetx.reaction'] = 'MNXR95253'
model.reactions.ACGAMtpts.annotation['seed.reaction'] = 'rxn05485'
model.reactions.ARBTtpts.annotation['metanetx.reaction'] = 'MNXR95932'
model.reactions.ARBTtpts.annotation['seed.reaction'] = 'rxn05501'
model.reactions.SALCNtpts.annotation['metanetx.reaction'] = 'MNXR104266'
model.reactions.SALCNtpts.annotation['seed.reaction'] = 'rxn05647'
model.reactions.MALTtabc.annotation['metanetx.reaction'] = 'MNXR101362'
model.reactions.MALTtabc.annotation['seed.reaction'] = 'rxn05170'
model.reactions.MALTtpts.annotation['metanetx.reaction'] = 'MNXR101363'
model.reactions.MALTtpts.annotation['seed.reaction'] = 'rxn05607'
model.reactions.TREtpts.annotation['metanetx.reaction'] = 'MNXR104931'
model.reactions.TREtpts.annotation['seed.reaction'] = 'rxn02005'
model.metabolites.tre6p_c.name='Trehalose 6-phosphate'
model.reactions.RMNt.annotation['metanetx.reaction'] = 'MNXR104041'
model.reactions.RMNt.annotation['seed.reaction'] = 'rxn05646'
model.reactions.MELIBt.annotation['metanetx.reaction'] = 'MNXR138587'
model.metabolites.tura_c.annotation['metanetx.chemical'] = 'MNXM161984'
model.metabolites.tura_e.annotation['metanetx.chemical'] = 'MNXM161984'
model.reactions.TURAt.name = 'Turanose transport'
model.metabolites.kdg2_c.annotation['metanetx.chemical'] = 'MNXM480329'
model.metabolites.kdg2_e.annotation['metanetx.chemical'] = 'MNXM480329'
model.metabolites.dglcn5_e.annotation['metanetx.chemical'] ='MNXM963'
model.metabolites.dglcn5_c.annotation['metanetx.chemical'] ='MNXM963'
model.reactions.DGLCN5t.annotation['metanetx.reaction'] = 'MNXR95067'
model.metabolites.mdgp_e.annotation['metanetx.chemical'] = 'MNXM61754'
model.metabolites.mdgp_c.annotation['metanetx.chemical'] = 'MNXM61754'
model.metabolites.tag__D_e.annotation['metanetx.chemical'] = 'MNXM83257'
#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
# Now I will check that no reaction has any notes field left other than 'NAME' or a genuine 'NOTES' comment. I will also check that all reactions have at least some annotation (at minimum an SBO term).
#remove remaining note fields that should be gone; dict.pop with a default
#replaces the repeated try/except KeyError pattern for each key
obsolete_note_keys = [
    'GENE_ASSOCIATION', 'ORTHOLOGY', 'PATHWAY', 'EQUATION', 'RPAIR',
    'original_bigg_ids', 'KEGG ID', 'ENZYME', 'ENTRY',
    'EQUATION_USED_HEREIN', 'KEGG_COMMENT', 'MODULE',
]
for rct in model.reactions:
    for key in obsolete_note_keys:
        rct.notes.pop(key, None)
#convert field comments to field 'NOTES'
for rct in model.reactions:
try:
comment = rct.notes['COMMENTS']
rct.notes['NOTES'] = comment
del rct.notes['COMMENTS']
except KeyError:
try:
comment = rct.notes['COMMENT']
rct.notes['NOTES'] = comment
del rct.notes['COMMENT']
except KeyError:
continue
for rct in model.reactions:
if len (rct.notes) == 0: #rcts with no notes
continue
elif len(rct.notes) == 1: #rcts with just one note
try:
name = rct.notes['NAME']
except KeyError:
try:
rct.notes['NOTES']
except:
print (rct.id)
else:
print (rct.id)
del model.reactions.IG3PS.notes['ECNumbers']
del model.reactions.PGL.notes['KEGG']
#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
#which reactions have no annotation
for rct in model.reactions:
if len(rct.annotation) == 0:
print(rct.id)
else:
continue
# This list should be empty, as every reaction should have at least an SBO term.
#
# Below I check how many reactions have only the SBO annotation, excluding the exchange reactions.
for rct in model.reactions:
    if rct.id.startswith('EX'):
continue
else:
if len(rct.annotation) <= 1:
print(rct.id)
else:
continue
# This list matches what we've already checked above. So I can leave it like it is.
# # Missing names
# Some reactions and metabolites don't have names; this was introduced automatically in commit 4fc25cc, so it is fixed here.
model.metabolites.lald__L_c.name = 'L-Lactaldehyde'
model.metabolites.gtspmd_c.name = 'Glutathionylspermidine'
model.metabolites.cdpdag_c.name = 'CDP-1,2-diacylglycerol'
model.metabolites.actn__R_c.name = '(R)-Acetoin'
model.reactions.HOXPRm.name = '(R)-Glycerate:NAD+ oxidoreductase'
model.reactions.MOX.name = '(S)-Malate:oxygen oxidoreductase'
model.reactions.TRPS3.name = 'Tryptophan synthase (indoleglycerol phosphate)'
model.reactions.SUCOAACTr.name = 'succinyl-CoA:acetate CoA-transferase'
model.reactions.ACTD.name = '(R)-Acetoin:NAD+ oxidoreductase'
model.reactions.ACOAD1.name = 'butanoyl-CoA:NAD+ trans-2-oxidoreductase'
model.reactions.ECOAH1.name = '(S)-3-hydroxybutanoyl-CoA hydro-lyase'
model.reactions.MANtpts.name = 'Mannose transport via PTS'
model.reactions.SALCNtpts.name = 'Salicin transport via PTS'
model.reactions.TGBPA.name = 'Tagatose-bisphosphate aldolase'
#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
# __E. coli genes__
# In a previous commit, Ben saw that there were accidentally three E. coli genes present in the model (see Issue #55: fix: 3 genes from E. coli).
#
# Here I check that they are not associated with any reaction. The xml file also shows they are not associated with any reaction, so they can be removed.
e_coli_genes = ['b3903', 'b3904', 'b3902']
for rct in model.reactions:
if rct.gene_reaction_rule in e_coli_genes:
print (rct.id)
else:
continue
genes = [model.genes.get_by_id('b3903'), model.genes.get_by_id('b3904'), model.genes.get_by_id('b3902')]
genes
cobra.manipulation.delete.remove_genes(model, genes, remove_reactions=True)
#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
| notebooks/33. Removing redundant information.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
# %load_ext autoreload
# %autoreload 2
# + [markdown] pycharm={"name": "#%% md\n"}
# # Rating models
# + tags=[]
import matplotlib.pyplot as plt
from movie_ratings import config
from taskchain import MultiChain
# + [markdown] pycharm={"name": "#%% md\n"}
# ## All features
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
features_dir = 'all_features'
chains = MultiChain.from_dir(config.TASKS_DIR, config.CONFIGS_DIR / 'rating_model' / features_dir, global_vars=config)
chains
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
for name, chain in chains.items():
print(f'{name:>20}: {chain.test_metrics.value["RMSE"]:.3f} {chain.test_metrics.value["MAE"]:.3f}')
# -
# ## Basic features
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
features_dir = 'basic_features'
chains = MultiChain.from_dir(config.TASKS_DIR, config.CONFIGS_DIR / 'rating_model' / features_dir, global_vars=config)
print(chains)
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
for name, chain in chains.items():
print(f'{name:>20}: {chain.test_metrics.value["RMSE"]:.3f} {chain.test_metrics.value["MAE"]:.3f}')
| example/scripts/rating_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
print('hello world')
print('new')
a=5.4
type(a)
print(a)
# print(a.runtimetype)  # AttributeError: float objects have no 'runtimetype' attribute; use type(a) instead
my_pay = 100
myTaxrate = 10/100
my_taxes = my_pay * myTaxrate
myTaxrate
my_taxes
mystr = 'hello world'
print(mystr[-1])
mystr[5]
mystr[3:]
mystr[3:-2]
mystr[:2]
mystr[0:]
mystr[::2]
mystr[::3]
mystr[::-1]
mystr[::-2]
greet = "Good morning"
comment =" Hello world"
greet + comment
name = 'Tom'
last_letters = name[1:]
last_letters
newName='M'+ last_letters
newName
newName*10
newName.upper()
comment.split()
print('Talk about the {2}, {1} and {0}'.format('first',"second", 'third'))
score = 100/47780
print('The precise score is {s}'.format(s=score))
print('The precise score is {s:1.4f}'.format(s=score))
print(f'The precise score {score:1.4f}')
# newName.len()  # AttributeError: strings have no .len() method; use the built-in len(newName) instead
len(newName)
myList = ['string', 4, 39.4]
myList
hisList = ['ana','mom',4,5.6]
newList = myList + hisList
newList
len(newList)
newList[-2]='bro'
newList
newList.append('yes')
newList
| myfirstnotebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Research (conda)
# language: python
# name: research
# ---
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
# %matplotlib inline
mpl.style.use('ggplot')
mpl.rcParams.update({'font.size': 12})
mpl.rcParams.update({'xtick.labelsize': 'x-large'})
mpl.rcParams.update({'xtick.major.size': '0'})
mpl.rcParams.update({'ytick.labelsize': 'x-large'})
mpl.rcParams.update({'ytick.major.size': '0'})
# https://stackoverflow.com/questions/18386210/annotating-ranges-of-data-in-matplotlib
def draw_brace(ax, xspan, yoffset, text):
"""Draws an annotated brace on the axes."""
xmin, xmax = xspan
xspan = xmax - xmin
ax_xmin, ax_xmax = ax.get_xlim()
xax_span = ax_xmax - ax_xmin
ymin, ymax = ax.get_ylim()
yspan = ymax - ymin
resolution = int(xspan/xax_span*100)*2+1 # guaranteed uneven
beta = 300./xax_span # the higher this is, the smaller the radius
x = np.linspace(xmin, xmax, resolution)
x_half = x[:int(resolution/2+1)]
y_half_brace = (1/(1.+np.exp(-beta*(x_half-x_half[0])))
+ 1/(1.+np.exp(-beta*(x_half-x_half[-1]))))
y = np.concatenate((y_half_brace, y_half_brace[-2::-1]))
y = yoffset + (.05*y - .02)*yspan # adjust vertical position
ax.autoscale(False)
ax.plot(x, y, color='black', lw=1)
ax.text((xmax+xmin)/2., yoffset+.07*yspan, text, ha='center', va='bottom', fontdict={'family': 'monospace'})
# +
def wall_potential(x, y, sphere_radius=5.0, inside=True):
r = np.sqrt(x ** 2 + y ** 2)
if inside:
r = sphere_radius - r
else:
r -= sphere_radius
def gauss_potential(r, epsilon=1.0, sigma=1.0):
arg = -0.5 * ((r / sigma) ** 2)
return epsilon * np.exp(arg)
return np.where(r >= 0, gauss_potential(r), 0)
spacing = np.linspace(-5, 5, 1000)
x, y = np.meshgrid(spacing, spacing)
fig = plt.Figure(figsize=(10, 6.18), dpi=100)
ax = fig.add_subplot()
img = ax.imshow(wall_potential(x, y).reshape((-1, 1000)), extent=[-5, 5, -5, 5])
ax.set_xlabel('x')
ax.set_ylabel('y')
divider = make_axes_locatable(ax)
cb = fig.colorbar(img, ax=ax, orientation="vertical", fraction=0.05, pad=-0.3)
cb.set_label("Potential Energy")
fig
# -
fig.savefig('../md-wall-potential.svg', bbox_inches='tight', facecolor=(1, 1, 1, 0))
# +
spacing = np.linspace(-7.5, 7.5, 2000)
x, y = np.meshgrid(spacing, spacing)
fig = plt.Figure(figsize=(10, 6.18), dpi=100)
ax = fig.add_subplot()
img = ax.imshow(wall_potential(x, y, inside=False).reshape((-1, 2000)), extent=[-7.5, 7.5, -7.5, 7.5])
ax.set_xlabel('x')
ax.set_ylabel('y')
divider = make_axes_locatable(ax)
cb = fig.colorbar(img, ax=ax, orientation="vertical", fraction=0.05, pad=-0.3)
cb.set_label("Potential Energy")
fig
# -
fig.savefig('../md-wall-potential-outside.svg', bbox_inches='tight', facecolor=(1, 1, 1, 0))
def lj_wall_energy(r, r_extrap, r_cut, sigma=1.0, epsilon=1.0):
def lj(r):
a = (sigma / r) ** 6
return 4 * epsilon * (a ** 2 - a)
def lj_force(r):
lj1 = 4 * epsilon * (sigma ** 12)
lj2 = 4 * epsilon * (sigma ** 6)
r2_inv = 1 / (r * r)
r6_inv = r2_inv * r2_inv * r2_inv
return r2_inv * r6_inv * (12.0 * lj1 * r6_inv - 6.0 * lj2)
if r_extrap == 0:
return lj(r)
v_extrap = lj(r_extrap)
f_extrap = lj_force(r_extrap)
return np.where(r <= r_extrap, v_extrap + (r_extrap - r) * f_extrap, lj(r))
# +
r = np.linspace(-1, 3, 1000)
r_without = np.linspace(0.89, 3, 1000)
energy = lj_wall_energy(r, 1.1, 3.0)
energy_base = lj_wall_energy(r_without, 0.0, 3.0)
fig = plt.Figure(figsize=(10, 6.18), dpi=100)
ax = fig.add_subplot()
ax.plot(r, energy, label=r"Extrapolated $r_{extrap} = 1.1$")
ax.plot(r_without, energy_base, label="Standard")
ax.set_xlabel("$r$")
ax.set_ylabel("$V$")
ax.legend()
fig
# -
fig.savefig('../md-wall-extrapolate.svg', bbox_inches='tight', facecolor=(1, 1, 1, 0))
| sphinx-doc/figures/md-walls.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Implement the following operations of a queue using stacks.
#
# - push(x) -- Push element x to the back of queue.
# - pop() -- Removes the element from in front of queue.
# - peek() -- Get the front element.
# - empty() -- Return whether the queue is empty.
#
# Example:
# ```
# MyQueue queue = new MyQueue();
#
# queue.push(1);
# queue.push(2);
# queue.peek(); // returns 1
# queue.pop(); // returns 1
# queue.empty(); // returns false
# ```
#
# Notes:
#
# - You must use only standard operations of a stack -- which means only push to top, peek/pop from top, size, and is empty operations are valid.
# - Depending on your language, stack may not be supported natively. You may simulate a stack by using a list or deque (double-ended queue), as long as you use only standard operations of a stack.
# - You may assume that all operations are valid (for example, no pop or peek operations will be called on an empty queue).
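# Before the class-based implementation, the core trick can be sketched with two plain lists: pushes go onto an inbox stack, and a one-time reversal into an outbox stack exposes elements in FIFO order. Each element moves between stacks at most once, so pop/peek are amortized O(1). This is a minimal illustration with made-up values, not the solution itself.

```python
# Two-stack queue sketch: inbox receives pushes, outbox serves pops
inbox, outbox = [], []
for x in (1, 2, 3):
    inbox.append(x)              # push to the back of the queue
while inbox:                     # one transfer reverses LIFO into FIFO order
    outbox.append(inbox.pop())
print(outbox.pop())              # -> 1 (front of the queue)
print(outbox.pop())              # -> 2
```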
# +
class MyQueue:
def __init__(self):
"""
Initialize your data structure here.
"""
self.stack1 = []
self.stack2 = []
def push(self, x: int) -> None:
"""
Push element x to the back of queue.
"""
self.stack1.append(x)
def pop(self) -> int:
"""
Removes the element from in front of queue and returns that element.
"""
if not self.stack2:
while self.stack1:
self.stack2.append(self.stack1.pop())
return self.stack2.pop()
def peek(self) -> int:
"""
Get the front element.
"""
if not self.stack2:
while self.stack1:
self.stack2.append(self.stack1.pop())
return self.stack2[-1]
def empty(self) -> bool:
"""
Returns whether the queue is empty.
"""
        return not self.stack1 and not self.stack2
# Your MyQueue object will be instantiated and called as such:
obj = MyQueue()
obj.push(1)
obj.push(2)
param_2 = obj.peek()
param_3 = obj.pop()
param_4 = obj.empty()
print(param_2)
print(param_3)
print(param_4)
| DSA/stack/MyQueue.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .fs
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: .NET (F#)
// language: F#
// name: .net-fsharp
// ---
// # Calendar View
//
// Given a department code and a term, creates a calendar view of scheduled courses in Banner for the purpose of visualizing course scheduling conflicts.
//
// The primary use case for this is determining where to schedule a new IIS class so as to minimize conflicts with other departments, but it would probably be equally useful for a department to visualize its own schedule.
//
// Note that you will need to [install and reference NuGet packages](https://docs.microsoft.com/en-us/nuget/tools/nuget-exe-cli-reference) as appropriate on your system.
//
// ## Common code
//
// The code below describes:
//
// - A basic Banner POST for all courses offered by a department
// - A basic Banner POST for a specific class
// - XPath oriented methods for processing the html (from HtmlAgilityPack)
// - Html autorepair methods for fixing bad html that breaks XPath (from AngleSharp)
// - General functions for extracting calendar data and ICS format
//
// Since AngleSharp does not support XPath, we use two libraries for html manipulation.
// Refactoring to remove XPath/HtmlAgilityPack would probably be sensible at some point.
// +
#r "/z/aolney/repos/FSharp.Data.2.3.2/lib/net40/FSharp.Data.dll"
#r "/z/aolney/repos/HtmlAgilityPack.1.4.9.5/lib/Net40/HtmlAgilityPack.dll"
#r "/z/aolney/repos/AngleSharp.0.9.10/lib/net45/AngleSharp.dll"
open FSharp.Data
open HtmlAgilityPack
open System
let QueryClass (dept:string) (number:string) (term:string) =
let result =
Http.RequestString(
//"https://banssbprod.memphis.edu/pls/PROD/bwckschd.p_get_crse_unsec",
"https://ssb.bannerprod.memphis.edu/prod/bwckschd.p_get_crse_unsec",
query=[
"term_in", term.Trim();
"sel_subj","dummy";
"sel_day","dummy";
"sel_schd","dummy";
"sel_insm","dummy";
"sel_camp","dummy";
"sel_levl","dummy";
"sel_sess","dummy";
"sel_instr","dummy";
"sel_ptrm","dummy";
"sel_attr","dummy";
"sel_subj",dept.Trim();
"sel_crse",number.Trim();
"sel_title","";
"sel_insm","%";
"sel_from_cred","";
"sel_to_cred","";
"sel_camp","%";
"sel_levl","%";
"sel_ptrm","%";
"sel_instr","%";
"sel_attr","%";
"begin_hh","0";
"begin_mi","0";
"begin_ap","a";
"end_hh","0";
"end_mi","0";
"end_ap","a" ],
httpMethod="POST")
let classFound = result.Contains("No classes were found that meet your search criteria") |> not
//
classFound,result
let QueryDepartment (dept:string) (term:string) =
QueryClass dept "" term |> snd
//Creates a doc from html string
let createDoc html =
let doc = new HtmlDocument()
doc.LoadHtml html
doc.DocumentNode
let descendants (name : string) (node : HtmlNode) =
node.Descendants name
let selectNodes (query : string) (node : HtmlNode) =
node.SelectNodes query
let inline innerText (node : HtmlNode) =
node.InnerText
let inline attr name (node : HtmlNode) =
node.GetAttributeValue(name, "")
let inline hasAttr name value node =
attr name node = value
//Use AngleSharp to auto-fix broken documents; converts to HTML5 standard
let AutoFixHtml (html : string) =
let parser = new AngleSharp.Parser.Html.HtmlParser();
let document = parser.Parse( html );
document.DocumentElement.OuterHtml
let createDocAutoFix html =
let doc = new HtmlDocument()
html |> AutoFixHtml |> doc.LoadHtml
doc.DocumentNode
//Extraction data from HTML
type Class =
{
Title: string
Type: string
Time: string
StartDateTime: DateTime //hackish: date and time components have slightly different semantics
EndDateTime: DateTime //hackish: date and time components have slightly different semantics
Days: string[]
Where: string
DateRange: string
ScheduleType: string
Instructors: string
}
let GetDays dayString =
if dayString = " " then
[||]
else
let result = new ResizeArray<string>()
for c in dayString do
match c with
| 'M' -> result.Add( "MO" )
| 'T' -> result.Add( "TU" )
| 'W' -> result.Add( "WE" )
| 'R' -> result.Add( "TH" )
| 'F' -> result.Add( "FR" )
| 'S' -> result.Add( "SA" )
| _ -> ()
//
result.ToArray()
let GetStartEndDateTime (dateNode:HtmlNode) (timeNode:HtmlNode) =
let dateSplit = dateNode.InnerHtml.Split('-')
let timeSplit = timeNode.InnerHtml.Split('-')
match dateSplit.Length,timeSplit.Length with
//we have valid dates and times
| 2,2 ->
let s = dateSplit.[0].Trim() + " " + timeSplit.[0].Trim() |> DateTime.Parse
let e = dateSplit.[1].Trim() + " " + timeSplit.[1].Trim() |> DateTime.Parse
s,e
//all other cases we can't put on calendar; we use MinValue instead of options
| 2,_ -> DateTime.MinValue,DateTime.MinValue
| _,2 -> DateTime.MinValue,DateTime.MinValue
| _,_ -> DateTime.MinValue,DateTime.MinValue
//ICS related
let GetVEvents (c : Class) =
let s = c.StartDateTime
let e = c.EndDateTime
//Fix the semantics: combine the time component of End with the date component of start to get the end time on a given day.
let dayEndTime = new DateTime(s.Year,s.Month,s.Day,e.Hour,e.Minute,e.Second)
seq {
for day in c.Days do
yield """BEGIN:VEVENT
DTSTART;TZID=America/Chicago:#START#
DTEND;TZID=America/Chicago:#END#
RRULE:FREQ=WEEKLY;UNTIL=#UNTILZ#;BYDAY=#DAY#
DTSTAMP:#NOWZ#
CREATED:#NOWZ#
DESCRIPTION:
LAST-MODIFIED:#NOWZ#
LOCATION:#WHERE#
SEQUENCE:0
STATUS:CONFIRMED
SUMMARY:#NAME#
TRANSP:OPAQUE
END:VEVENT""".Replace("#START#",c.StartDateTime.ToString("yyyyMMddTHHmmss")).Replace("#END#",dayEndTime.ToString("yyyyMMddTHHmmss")).Replace("#UNTILZ#",c.EndDateTime.ToUniversalTime().ToString("yyyyMMddTHHmmssZ")).Replace("#DAY#",day).Replace("#NOWZ#",DateTime.Now.ToUniversalTime().ToString("yyyyMMddTHHmmssZ")).Replace("#NAME#",c.Title).Replace("#WHERE#",c.Where)
}
// Looked at https://github.com/UHCalendarTeam/ANtICalendar but it wasn't good
// Instead made a calendar template on Google and reverse engineered it
let WriteVEventsToFile filename vEvents =
let prefix = """BEGIN:VCALENDAR
PRODID:-//Google Inc//Google Calendar 70.9054//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:#NAME#
X-WR-TIMEZONE:America/Chicago
BEGIN:VTIMEZONE
TZID:America/Chicago
X-LIC-LOCATION:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE""".Replace("#NAME#",filename).Replace(" ","")
let icsString = prefix + "\n" + (vEvents |> String.concat "\n") + "\n" + "END:VCALENDAR"
System.IO.File.WriteAllText(filename + ".ics", icsString)
// -
// ## Department Calendar View
//
// Run the query to get all courses in a department, extract the time/date information for each course, then export this information in ICS format.
// ICS can then be loaded into your favorite calendar app, e.g. Google Calendar.
// It is recommended to load each department ICS as its own calendar (rather than merging ICS into a single calendar) to retain color coding for departments.
// +
//Banner Term, e.g. 201680 for Fall 2016; 201610 for Spring
let term = "202010"
let depts = [|"POLS";"PSYC";"COMP";"PHIL";"LAW";"ENGL"|]
let GetClassEvents dept term =
let rawNodes =
QueryDepartment dept term
|> createDocAutoFix
//Auto fix changes source html to HTML5 standard, so XPath must use e.g. Chrome inspector structure and not source structure
|> selectNodes "html/body/div/table/tbody/tr/th/a | html/body/div/table/tbody/tr/td/table"
|> Seq.toArray
//Because some pages have degenerate structure, we cannot assume that class titles are always followed by info tables
    //so we need to get the raw nodes that purportedly pair these and filter out bad cases (e.g. some dissertations in philosophy)
seq {
for i = 0 to rawNodes.Length - 2 do
if rawNodes.[i].Name = "a" && rawNodes.[i+1].Name = "table" then
yield rawNodes.[i]
yield rawNodes.[i+1]
}
//|> Seq.map( fun x -> x.InnerHtml) //for debug
|> Seq.chunkBySize 2 //first is title; second is table of time/location/etc
|> Seq.collect( fun courseChunk ->
//Need distinction b/w course and class chunks b/c Psychology has GenEd with variable classes across the week; see previous commit for simpler approach
courseChunk.[1]
|> selectNodes "descendant::td"
|> Seq.chunkBySize 7
|> Seq.map( fun classChunk ->
let days = GetDays classChunk.[2].InnerHtml
let start,stop = GetStartEndDateTime classChunk.[4] classChunk.[1]
{
Title=courseChunk.[0].InnerHtml;
Type=classChunk.[0].InnerHtml;
Time=classChunk.[1].InnerHtml;
StartDateTime = start
EndDateTime = stop
Days=days;
Where=classChunk.[3].InnerHtml;
DateRange=classChunk.[4].InnerHtml;
ScheduleType=classChunk.[5].InnerHtml;
Instructors=classChunk.[6].InnerText;
}
)
)
//Purge events without times, e.g. dissertation and independent study
|> Seq.filter (fun x -> x.StartDateTime <> DateTime.MinValue)
|> Seq.collect GetVEvents
|> Seq.toList
//Traverse list of departments, writing an ICS for each
depts
|> Array.iter( fun dept ->
term
|> GetClassEvents dept
|> WriteVEventsToFile dept
)
"DONE"
// -
// ## Class Calendar View
//
// Given a list of courses, extract the time/date information for each course, then export this information in ICS format.
//
// ICS can then be loaded into your favorite calendar app, e.g. Google Calendar.
//
// ## Course List
//
// The list of courses below was generated in a spreadsheet saved as tab delimited.
let courseList = """degree department number
MINOR PSYC 1030
MINOR PSYC 3303
MINOR PSYC 3530
MINOR COMP 1900
MINOR PHIL 3460
MINOR PHIL 4421
MINOR PHIL 4661
MINOR ECON 4512
MINOR COMP 2700
MINOR COMP 4720
MINOR MATH 2120
MINOR MATH 4083
MINOR ENGL 3511"""
// +
let term = "201910"
let calendarName = "minor"
let classEvents =
courseList.Split("\n")
|> Seq.skip 1 //skip header
|> Seq.choose( fun x ->
let s = x.Split('\t')
let degree = s.[0]
let dept = s.[1]
let id = s.[2]
//handle queries that fail
match QueryClass dept id term with
| true, result -> Some( result )
| false, _ -> None
)
|> Seq.map createDocAutoFix
//|> Seq.map( fun x -> x.InnerHtml) //for debug
|> Seq.collect( fun html ->
html
//Auto fix changes source html to HTML5 standard, so XPath must use e.g. Chrome inspector structure and not source structure
|> selectNodes "html/body/div/table/tbody/tr/th/a | html/body/div/table/tbody/tr/td/table"
//|> Seq.map( fun x -> x.InnerHtml) //for debug
|> Seq.chunkBySize 2 //first is title; second is table of time/location/etc
|> Seq.collect( fun courseChunk ->
//Need distinction b/w course and class chunks b/c Psychology has GenEd with variable classes across the week; see previous commit for simpler approach
courseChunk.[1]
|> selectNodes "descendant::td"
|> Seq.chunkBySize 7
|> Seq.map( fun classChunk ->
let days = GetDays classChunk.[2].InnerHtml
let start,stop = GetStartEndDateTime classChunk.[4] classChunk.[1]
{
Title=courseChunk.[0].InnerHtml;
Type=classChunk.[0].InnerHtml;
Time=classChunk.[1].InnerHtml;
StartDateTime = start
EndDateTime = stop
Days=days;
Where=classChunk.[3].InnerHtml;
DateRange=classChunk.[4].InnerHtml;
ScheduleType=classChunk.[5].InnerHtml;
Instructors=classChunk.[6].InnerText;
}
)
)
)
//Purge events without times, e.g. dissertation and independent study
|> Seq.filter (fun x -> x.StartDateTime <> DateTime.MinValue)
|> Seq.collect GetVEvents
|> Seq.toList
WriteVEventsToFile calendarName classEvents
"DONE"
// -
| CalendarView.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
# Create a column containing the sum of the normalized columns; each column has equal weight.
def create_summary_metric(columns_to_include,df_in):
nom_col = []
    df_in['Total Crimes'] = df_in['Total Crimes']*-1  # invert the crime counts so that fewer crimes raises the summary score
#create a new column with normalized values
for i in columns_to_include:
        df_in[i+"_nom"] = round(df_in[i]/abs(df_in[i].mean()), 2)
nom_col.append(i+"_nom")
df_in['summary_metric'] = df_in[nom_col].sum(axis=1)
return df_in
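To make the equal weighting concrete, here is a small worked example with made-up numbers for two neighborhoods (the function is restated so the snippet runs on its own):

```python
import pandas as pd

# Restated from above so this example is self-contained.
def create_summary_metric(columns_to_include, df_in):
    nom_col = []
    # Invert crime counts so that fewer crimes means a higher score.
    df_in['Total Crimes'] = df_in['Total Crimes'] * -1
    for i in columns_to_include:
        # Normalize each column by the absolute value of its mean.
        df_in[i + "_nom"] = round(df_in[i] / abs(df_in[i].mean()), 2)
        nom_col.append(i + "_nom")
    df_in['summary_metric'] = df_in[nom_col].sum(axis=1)
    return df_in

# Made-up data: neighborhood 0 has lower job growth but far fewer crimes.
df = pd.DataFrame({'job_growth_per': [2.0, 4.0], 'Total Crimes': [100, 300]})
out = create_summary_metric(['job_growth_per', 'Total Crimes'], df)
# Neighborhood 0 ends up with the higher summary metric.
print(out['summary_metric'].tolist())
```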
# +
data_in_path = '../data/final_investment_criteria.csv'
data_out_path = '../data/final_investment_criteria_out.csv'
df = pd.read_csv(data_in_path)
# -
df.keys()
# +
summary_columns = ['job_growth_per','Normalized Ratio','Total Crimes','School Rating']
df_out = create_summary_metric(summary_columns,df)
df_out.to_csv(data_out_path)
# -
| src/get_real_estate_summary_metric.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="_PV0lkjlWFXN"
# **Python Exercise 038: Write a program that reads two integers and compares them, showing one of these messages on screen:
#
# – The first value is larger
#
# – The second value is larger
#
# – There is no larger value; the two are equal**
# + id="L0Ah83TKWGto" outputId="4c9f7cbb-70a6-4b6d-b8de-641336cfd162" colab={"base_uri": "https://localhost:8080/"}
n1 = int(input('Enter the first number: '))
n2 = int(input('Enter the second number: '))
if n1 > n2:
    print('The first number is larger')
elif n2 > n1:
    print('The second number is larger')
else:
    print('The two numbers are equal')
# -
| Exercício 38 – Comparando números.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.3.1
# language: julia
# name: julia-1.3
# ---
# # Quantum Ising Phase Transition
# ### <NAME>, <NAME>
# ## Introduction
# In this tutorial we will consider a simple quantum mechanical system of spins sitting on a chain. Here, *quantum mechanical*, despite its pompous sound, simply means that our Hamiltonian matrix will have a non-trivial (i.e. non-diagonal) matrix structure.
#
# We will then ask a couple of basic questions,
#
# * What is the ground state of the system?
# * What happens if we turn on a transverse magnetic field?
# * Are there any phase transitions?
#
# To get answers to the questions, we will solve the time-independent Schrödinger equation
#
# $$H|\psi\rangle = E |\psi\rangle$$
#
# in Julia by means of exact diagonalization of the Hamiltonian.
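As a warm-up, exact diagonalization of a single-spin "Hamiltonian" is a one-liner with Julia's `eigen` (a minimal sketch; the Pauli matrices are introduced properly in the next section):

```julia
using LinearAlgebra

# A single spin in a transverse field: H = -h σˣ with h = 1.
H1 = -[0 1; 1 0]
vals, vecs = eigen(H1)
vals  # the two eigenenergies, -1 and +1
```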
# ## Transverse field quantum Ising chain
# Let's start out by defining our system. The Hamiltonian is given by
# $$\mathcal{H} = -\sum_{\langle i, j \rangle} \hat{\sigma}_i^z \otimes \hat{\sigma}_j^z - h\sum_i \hat{\sigma}_i^x$$
# Here, $\hat{\sigma}^z$ and $\hat{\sigma}^x$ are two of the three [Pauli matrices](https://en.wikipedia.org/wiki/Pauli_matrices), representing our quantum spins, $\langle i, j \rangle$ indicates that only neighboring spins talk to each other, and $h$ is the amplitude of the magnetic field.
σᶻ = [1 0; 0 -1] # \sigma <TAB> followed by \^z <TAB>
σˣ = [0 1; 1 0]
# Labeling the eigenstates of $\sigma^z$ as $|\downarrow\rangle$ and $|\uparrow\rangle$, we interpret them as a spin pointing down or up (in $z$-direction), respectively.
#
# Since $\sigma^x$ is purely off-diagonal, its effect on such a single spin is to flip it:
#
# $$\hat{\sigma}^x\left| \downarrow \right\rangle = \left| \uparrow \right\rangle$$
#
# $$\hat{\sigma}^x\left| \uparrow \right\rangle = \left| \downarrow \right\rangle$$
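We can check this flipping action numerically; here we take $|\downarrow\rangle = [1, 0]$ and $|\uparrow\rangle = [0, 1]$ (one possible convention, consistent with the all-down first basis state used later):

```julia
σˣ = [0 1; 1 0]
down = [1, 0]   # |↓⟩ in this convention
up   = [0, 1]   # |↑⟩
σˣ * down == up, σˣ * up == down  # (true, true)
```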
# The idea behind the Hamiltonian above is as follows:
#
# * The first term is diagonal in the $\sigma^z$ eigenbasis. If there is no magnetic field, $h=0$, our quantum model reduces to the well-known classical [Ising model](https://en.wikipedia.org/wiki/Ising_model) (diagonal = trivial matrix structure -> classical). In this case, we have a **finite temperature phase transition** from a paramagnetic ($T>T_c$) phase, where the spins are **disordered by thermal fluctuations**, to a ferromagnetic phase ($T<T_c$), where they all point into the $z$ direction and, consequently, a ferromagnetic ground state at $T=0$.
#
# * Since this would be boring, we want to add quantum complications to this picture by making $H$ non-diagonal. To this end, we expose the quantum spins to a transverse magnetic field $h$ in $x$ direction in the second term. Now, since $\sigma^z$ and $\sigma^x$ do not commute (check `σˣ*σᶻ - σᶻ*σˣ` yourself), there is no common eigenbasis of the first and the second term and our Hamiltonian has a non-trivial matrix structure (It's quantum!). If there was *only the second term* the system would, again, be trivial, as it would be diagonal in the eigenbasis of $\sigma^x$: the quantum spins want to be in an eigenstate of $\sigma^x$, i.e. align to the magnetic field.
#
# * We can see that if we have both terms we have a competition between the spins wanting to point in the $z$ direction (first term) and at the same time being disturbed by the transverse magnetic field. We say that the magnetic field term adds **quantum fluctuations** to the system.
#
# Let us explore the physics of this interplay.
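The non-commutativity mentioned above is easy to verify directly; this just carries out the check suggested in the text:

```julia
σᶻ = [1 0; 0 -1]
σˣ = [0 1; 1 0]
# A non-zero commutator means there is no common eigenbasis.
σˣ*σᶻ - σᶻ*σˣ  # [0 -2; 2 0]
```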
# ## Building the Hamiltonian matrix
# We will choose the $\sigma^z$ eigenbasis as our computation basis.
#
# To build up our Hamiltonian matrix we need to take the Kronecker product (tensor product) of spin matrices. Fortunately, Julia has a built-in function for this.
kron(σᶻ,σᶻ) # this is the matrix of the tensor product σᶻᵢ⊗ σᶻⱼ (⊗ = \otimes <TAB>)
# Let's be fancy (cause we can!) and make this look a bit cooler.
⊗(x,y) = kron(x,y)
σᶻ ⊗ σᶻ
# ### Explicit 4-site Hamiltonian
# Imagine our spin chain consists of four sites. Writing out identity matrices (which were left implicit in $H$ above) explicitly, our Hamiltonian reads
# $$\mathcal{H}_4 = -\hat{\sigma}_1^z \hat{\sigma}_2^z \hat{I}_3 \hat{I}_4 - \hat{I}_1 \hat{\sigma}_2^z \hat{\sigma}_3^z \hat{I}_4 - \hat{I}_1 \hat{I}_2 \hat{\sigma}_3^z \hat{\sigma}_4^z - h\left(\hat{\sigma}_1^x\hat{I}_2 \hat{I}_3\hat{I}_4 + \hat{I}_1 \hat{\sigma}_2^x \hat{I}_3\hat{I}_4 +\hat{I}_1 \hat{I}_2 \hat{\sigma}_3^x\hat{I}_4 + \hat{I}_1 \hat{I}_2 \hat{I}_3 \hat{\sigma}_4^x\right)$$
# (Note that we are considering *open* boundary conditions here - the spin on site 4 doesn't interact with the one on the first site. For *periodic* boundary conditions we'd have to add a term $- \hat{\sigma}^z_1 \hat{I}_2 \hat{I}_3 \hat{\sigma}_4^z$.)
#
# Translating this expression to Julia is super easy. After defining the identity matrix
id = [1 0; 0 1] # identity matrix
# we can simply write
h = 1
H = - σᶻ⊗σᶻ⊗id⊗id - id⊗σᶻ⊗σᶻ⊗id - id⊗id⊗σᶻ⊗σᶻ
H -= h*(σˣ⊗id⊗id⊗id + id⊗σˣ⊗id⊗id + id⊗id⊗σˣ⊗id + id⊗id⊗id⊗σˣ)
# There it is.
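As an aside, the wrap-around bond for *periodic* boundary conditions mentioned above can be built the same way. A sketch (definitions restated so the cell stands alone; the open chain used here simply omits this term):

```julia
⊗(x, y) = kron(x, y)                  # restated from above
id = [1 0; 0 1]; σᶻ = [1 0; 0 -1]
pbc_term = - σᶻ ⊗ id ⊗ id ⊗ σᶻ       # bond between site 4 and site 1
size(pbc_term)  # (16, 16), same as H
```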
#
# As nice as it is to write those tensor products explicitly, we certainly wouldn't want to write out all the terms for, say, 100 sites.
#
# Let's define a function that iteratively does the job for us.
function TransverseFieldIsing(;N,h)
id = [1 0; 0 1]
σˣ = [0 1; 1 0]
σᶻ = [1 0; 0 -1]
# vector of operators: [σᶻ, σᶻ, id, ...]
first_term_ops = fill(id, N)
first_term_ops[1] = σᶻ
first_term_ops[2] = σᶻ
# vector of operators: [σˣ, id, ...]
second_term_ops = fill(id, N)
second_term_ops[1] = σˣ
H = zeros(Int, 2^N, 2^N)
for i in 1:N-1
# tensor multiply all operators
H -= foldl(⊗, first_term_ops)
# cyclic shift the operators
first_term_ops = circshift(first_term_ops,1)
end
for i in 1:N
H -= h*foldl(⊗, second_term_ops)
second_term_ops = circshift(second_term_ops,1)
end
H
end
TransverseFieldIsing(N=8, h=1)
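A quick sanity check: for $N=4$ the iterative construction should reproduce the hand-built Hamiltonian from the previous section exactly. (Everything is restated below so the check stands alone.)

```julia
⊗(x, y) = kron(x, y)
id = [1 0; 0 1]; σˣ = [0 1; 1 0]; σᶻ = [1 0; 0 -1]

function TransverseFieldIsing(;N, h)  # restated from above
    first_term_ops = fill(id, N); first_term_ops[1] = σᶻ; first_term_ops[2] = σᶻ
    second_term_ops = fill(id, N); second_term_ops[1] = σˣ
    H = zeros(Int, 2^N, 2^N)
    for _ in 1:N-1
        H -= foldl(⊗, first_term_ops)               # -σᶻᵢ σᶻᵢ₊₁ bond terms
        first_term_ops = circshift(first_term_ops, 1)
    end
    for _ in 1:N
        H -= h*foldl(⊗, second_term_ops)            # -h σˣᵢ field terms
        second_term_ops = circshift(second_term_ops, 1)
    end
    H
end

h = 1
H4 = - σᶻ⊗σᶻ⊗id⊗id - id⊗σᶻ⊗σᶻ⊗id - id⊗id⊗σᶻ⊗σᶻ -
     h*(σˣ⊗id⊗id⊗id + id⊗σˣ⊗id⊗id + id⊗id⊗σˣ⊗id + id⊗id⊗id⊗σˣ)
TransverseFieldIsing(N=4, h=1) == H4  # true
```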
# ### Many-particle basis
#
# Beyond a single spin, we have to think how to encode our basis states.
#
# We make the arbitrary choice:
# $0 = \text{false} = \downarrow$ and $1 = \text{true} = \uparrow$
#
# This way, our many-spin basis states have nice binary representations and we can efficiently store them in a Julia `BitArray`.
#
# Example: $|0010\rangle = |\text{false},\text{false},\text{true},\text{false}\rangle = |\downarrow\downarrow\uparrow\downarrow\rangle$ is a basis state of a 4-site system
#
# We construct the full basis by binary counting.
# +
"""
Binary `BitArray` representation of the given integer `num`, padded to length `N`.
"""
bit_rep(num::Integer, N::Integer) = BitArray(parse(Bool, i) for i in string(num, base=2, pad=N))
"""
generate_basis(N::Integer) -> basis
Generates a basis (`Vector{BitArray}`) spanning the Hilbert space of `N` spins.
"""
function generate_basis(N::Integer)
nstates = 2^N
basis = Vector{BitArray{1}}(undef, nstates)
for i in 0:nstates-1
basis[i+1] = bit_rep(i, N)
end
return basis
end
# -
generate_basis(4)
# ### Side remark: Iterative construction of $H$
# It might not be obvious that this basis is indeed the basis underlying the Hamiltonian matrix constructed in `TransverseFieldIsing`. To convince ourselves that this is indeed the case, let's calculate the matrix elements of our Hamiltonian, $\langle \psi_1 | H | \psi_2 \rangle$, explicitly by applying $H$ to our basis states and utilizing their orthonormality, $\langle \psi_i | \psi_j \rangle = \delta_{i,j}$.
# +
using LinearAlgebra
function TransverseFieldIsing_explicit(; N::Integer, h::T=0) where T<:Real
basis = generate_basis(N)
H = zeros(T, 2^N, 2^N)
bonds = zip(collect(1:N-1), collect(2:N))
for (i, bstate) in enumerate(basis)
# diagonal part
diag_term = 0.
for (site_i, site_j) in bonds
if bstate[site_i] == bstate[site_j]
diag_term -= 1
else
diag_term += 1
end
end
H[i, i] = diag_term
# off diagonal part
for site in 1:N
new_bstate = copy(bstate)
# flip the bit on the site (that's what σˣ does)
new_bstate[site] = !new_bstate[site]
# find corresponding single basis state with unity overlap (orthonormality)
new_i = findfirst(isequal(new_bstate), basis)
H[i, new_i] = -h
end
end
return H
end
# -
TransverseFieldIsing_explicit(N=4, h=1) ≈ TransverseFieldIsing(N=4, h=1)
# ### Full exact diagonalization
# Alright. Let's solve the Schrödinger equation by diagonalizing $H$ for a system with $N=8$ and $h=1$.
basis = generate_basis(8)
H = TransverseFieldIsing(N=8, h=1)
vals, vecs = eigen(H)
# That's it. Here is our groundstate.
groundstate = vecs[:,1];
# The absolute square of this wave function is the probability of finding the system in a particular basis state.
abs2.(groundstate)
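As a sanity check, eigenvectors returned by `eigen` are normalized, so these probabilities sum to one. A self-contained illustration with a tiny $N=2$ system:

```julia
using LinearAlgebra
σᶻ = [1 0; 0 -1]; σˣ = [0 1; 1 0]; id = [1 0; 0 1]
# N = 2, h = 1 version of the Hamiltonian above.
H2 = -kron(σᶻ, σᶻ) - (kron(σˣ, id) + kron(id, σˣ))
vals, vecs = eigen(H2)
sum(abs2.(vecs[:, 1]))  # ≈ 1.0
```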
# It's instructive to look at the extremal cases $h=0$ and $h \gg 1$.
H = TransverseFieldIsing(N=8, h=0)
vals, vecs = eigen(H)
groundstate = vecs[:,1]
abs2.(groundstate)
# As we can see, for $h=0$ the system is (with probability one) in the first basis state, where all spins point in $-z$ direction.
basis[1]
# On the other hand, for $h=100$, the system occupies all basis states with approximately equal probability (maximal superposition) - corresponding to eigenstates of $\sigma^x$, i.e. alignment to the magnetic field.
H = TransverseFieldIsing(N=8, h=100)
vals, vecs = eigen(H)
groundstate = vecs[:,1]
abs2.(groundstate)
# # Are you a magnet or what?
# Let's vary $h$ and see what happens. Since we're looking at quantum magnets we will compute the overall magnetization, defined by
#
# $$M = \frac{1}{N}\sum_{i} \sigma^z_i$$
# where $\sigma^z_i$ is the value of the spin on site $i$ when we measure.
function magnetization(state, basis)
M = 0.
for (i, bstate) in enumerate(basis)
bstate_M = 0.
for spin in bstate
bstate_M += (state[i]^2 * (spin ? 1 : -1))/length(bstate)
end
@assert abs(bstate_M) <= 1
M += abs(bstate_M)
end
return M
end
magnetization(groundstate, basis)
# Now we would like to examine the effects of $h$. We will:
#
# 1. Find a variety of $h$ to look at.
# 2. For each, compute the lowest energy eigenvector (groundstate) of the corresponding Hamiltonian.
# 3. For each groundstate, compute the overall magnetization $M$.
# 4. Plot $M(h)$ for a variety of system sizes, and see if anything cool happens.
using Plots
hs = 10 .^ range(-2., stop=2., length=10)
Ns = 2:10
p = plot()
for N in Ns
M = zeros(length(hs))
for (i,h) in enumerate(hs)
basis = generate_basis(N)
H = TransverseFieldIsing(N=N, h=h)
vals, vecs = eigen(H)
groundstate = vecs[:,1]
M[i] = magnetization(groundstate, basis)
end
plot!(p, hs, M, xscale=:log10, marker=:circle, label="N = $N",
xlab="h", ylab="M(h)")
println(M)
end
p
# **This looks like a phase transition!**
#
# For small $h$, the magnetization is unity, corresponding to a ferromagnetic state. By increasing the magnetic field $h$ we have a competition between the two terms in the Hamiltonian and eventually the system becomes paramagnetic with $M\approx0$. Our plot suggests that this change of state happens around $h\sim1$, which is in good agreement with the exact solution $h=1$.
#
# It is crucial to realize, that in our calculation we are inspecting the ground state of the system. Since $T=0$, it is purely quantum fluctuations that drive the transition: a **quantum phase transition**! This is to be compared to increasing temperature in the classical Ising model, where it's thermal fluctuations that cause a classical phase transition from a ferromagnetic to a paramagnetic state. For this reason, the state that we observe at high magnetic field strengths is called a **quantum paramagnet**.
# ## Hilbert space is a big space
# So far, we have only inspected chains of length $N\leq10$. As we see in our plot above, there are rather strong finite-size effects on the magnetization. To extract a numerical estimate for the critical magnetic field strength $h_c$ of the transition we would have to consider much larger systems until we observe convergence as a function of $N$. Although this is clearly beyond the scope of this tutorial, let us at least pave the way.
#
# Our calculation, in its current form, doesn't scale. The reason for this is simple, **Hilbert space is a big place!**
#
# The number of basis states, and therefore the number of dimensions, grows **exponentially** with system size.
plot(N -> 2^N, 1, 20, legend=false, color=:black, xlab="N", ylab="# Hilbert space dimensions")
# Our Hamiltonian matrix therefore will become huge(!) and is not going to fit into memory (apart from the fact that diagonalization would take forever).
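A quick back-of-the-envelope calculation shows why: a dense matrix for $N=20$ stores $2^{20} \times 2^{20}$ entries at 8 bytes each.

```julia
dim = 2^20           # Hilbert space dimension for N = 20
bytes = dim^2 * 8    # 8 bytes per Int64 entry
bytes / 2^40         # = 8.0, i.e. 8 TiB for the dense matrix
```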
using Test
@test_throws OutOfMemoryError TransverseFieldIsing(N=20, h=1)
# So, what can we do about it? The answer is, **sparsity**.
#
# Let's inspect the Hamiltonian a bit more closely.
H = TransverseFieldIsing(N=10, h=1)
# Noticeably, there are a lot of zeros. How does this depend on $N$?
#
# Let's plot the sparsity, i.e. ratio of zero entries.
# +
sparsity(x) = count(isequal(0), x)/length(x)
Ns = 2:12
sparsities = Float64[]
for N in Ns
H = TransverseFieldIsing(N=N, h=1)
push!(sparsities, sparsity(H))
end
plot(Ns, sparsities, legend=false, xlab="chain length N", ylab="Hamiltonian sparsity", marker=:circle)
# -
# For $N\gtrsim10$ almost all entries are zero! We should get rid of those and store $H$ as a sparse matrix.
# ### Building the sparse Hamiltonian
# Generally, we can bring a dense matrix into a sparse matrix format using the function `sparse`.
using SparseArrays
H = TransverseFieldIsing(N=4,h=1)
H |> sparse
# Note that in this format, only the 80 non-zero entries are stored (rather than 256 elements).
#
# So, how do we have to modify our function `TransverseFieldIsing` to only keep track of non-zero elements during the Hamiltonian construction?
#
# It turns out it is as simple as initializing our Hamiltonian, identity, and pauli matrices as sparse matrices!
function TransverseFieldIsing_sparse(;N,h)
id = [1 0; 0 1] |> sparse
σˣ = [0 1; 1 0] |> sparse
σᶻ = [1 0; 0 -1] |> sparse
first_term_ops = fill(id, N)
first_term_ops[1] = σᶻ
first_term_ops[2] = σᶻ
second_term_ops = fill(id, N)
second_term_ops[1] = σˣ
H = spzeros(Int, 2^N, 2^N) # note the spzeros instead of zeros here
for i in 1:N-1
H -= foldl(⊗, first_term_ops)
first_term_ops = circshift(first_term_ops,1)
end
for i in 1:N
H -= h*foldl(⊗, second_term_ops)
second_term_ops = circshift(second_term_ops,1)
end
H
end
# We should check that apart from the new type `SparseMatrixCSC` this is still the same Hamiltonian.
H = TransverseFieldIsing_sparse(N=10, h=1);
H_dense = TransverseFieldIsing(N=10, h=1)
H ≈ H_dense
# Great. But is it really faster?
@time TransverseFieldIsing(N=10,h=1);
@time TransverseFieldIsing_sparse(N=10,h=1);
# It is *a lot* faster!
# Alright, let's try to go to larger $N$. While `TransverseFieldIsing` threw an `OutOfMemoryError` for `N=20`, our new function is more efficient:
@time H = TransverseFieldIsing_sparse(N=20,h=1)
# Note that this matrix, formally, has **1,099,511,627,776** entries!
# ### Diagonalizing sparse matrices
# We have taken the first hurdle of constructing our large-system Hamiltonian as a sparse matrix. Unfortunately, if we try to diagonalize $H$, we realize that Julia's built-in eigensolver `eigen` doesn't support sparse matrices.
#
# ```
# eigen(A) not supported for sparse matrices. Use for example eigs(A) from the Arpack package instead.
# ```
# Gladly it suggests a solution: [ARPACK.jl](https://github.com/JuliaLinearAlgebra/Arpack.jl). It provides a wrapper to the Fortran library [ARPACK](https://www.caam.rice.edu/software/ARPACK/) which implements iterative eigenvalue and singular value solvers for sparse matrices.
#
# There are also a bunch of pure Julia implementations available in
#
# * [ArnoldiMethod.jl](https://github.com/haampie/ArnoldiMethod.jl)
# * [KrylovKit.jl](https://github.com/Jutho/KrylovKit.jl)
# * [IterativeSolvers.jl](https://github.com/JuliaMath/IterativeSolvers.jl)
#
# Let us use the ArnoldiMethod.jl package.
# +
using ArnoldiMethod
function eigen_sparse(x)
decomp, history = partialschur(x, nev=1, which=SR()); # only solve for the ground state
vals, vecs = partialeigen(decomp);
return vals, vecs
end
# -
# Solving for the ground state takes less than a minute on an i5 desktop machine.
@time vals, vecs = eigen_sparse(H)
# Voila. There we have the ground state energy and the ground state wave function for a $N=20$ chain of quantum spins!
groundstate = vecs[:,1]
# ### Magnetization once again
# To measure the magnetization, we could use our function `magnetization(state, basis)` from above. However, the way we wrote it above, it depends on an explicit list of basis states which we do not want to construct for a large system explicitly.
#
# Let's rewrite the function slightly such that bit representations of our basis states are calculated on the fly.
function magnetization(state)
N = Int(log2(length(state)))
M = 0.
for i in 1:length(state)
bstate = bit_rep(i-1,N)
bstate_M = 0.
for spin in bstate
bstate_M += (state[i]^2 * (spin ? 1 : -1))/N
end
@assert abs(bstate_M) <= 1
M += abs(bstate_M)
end
return M
end
magnetization(groundstate)
# We are now able to recreate our magnetization vs. magnetic field strength plot, including larger systems (takes about 3 minutes on this i5 desktop machine).
using Plots
hs = 10 .^ range(-2., stop=2., length=10)
Ns = 2:2:20
p = plot()
@time for N in Ns
M = zeros(length(hs))
for (i,h) in enumerate(hs)
H = TransverseFieldIsing_sparse(N=N, h=h)
vals, vecs = eigen_sparse(H)
groundstate = @view vecs[:,1]
M[i] = magnetization(groundstate)
end
plot!(p, hs, M, xscale=:log10, marker=:circle, label="N = $N",
xlab="h", ylab="M(h)")
println(M)
end
p
| Day2/4_linear_algebra/2_ed_quantum_ising.ipynb |