# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="header.png" align="left"/>
# # Exercise: Classification of MNIST (10 points)
#
#
# The goal of this exercise is to create a simple image classification network, to improve the model, and to learn how to debug and check the training data. We start with a simple CNN model for digit classification on the MNIST dataset [1]. This dataset contains 60,000 scans of digits for training and 10,000 scans of digits for validation. A sample consists of 28x28 features with values between 0 and 255. Note that the features are inverted: real digits are usually dark on a light background, whereas MNIST digits are light on a dark background.
# This example is partly based on a tutorial by <NAME> [2].
# Please follow the instructions in the notebook.
#
#
# ```
# [1] http://yann.lecun.com/exdb/mnist/
# [2] https://machinelearningmastery.com/how-to-develop-a-convolutional-neural-network-from-scratch-for-mnist-handwritten-digit-classification/
# ```
#
# **NOTE**
#
# Document your results by simply adding a markdown cell or a python cell (as comment) and writing your statements into this cell. For some tasks the result cell is already available.
#
#
#
# + tags=[]
#
# Import some modules
#
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Dropout
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import model_from_json
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.metrics import confusion_matrix
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#
# Turn off errors and warnings (does not work sometimes)
#
from warnings import simplefilter
# ignore all future warnings
simplefilter(action='ignore', category=FutureWarning)
simplefilter(action='ignore', category=Warning)
#
# Diagram size
#
plt.rcParams['figure.figsize'] = [16, 9]
#
# nasty hack for macos
#
os.environ['KMP_DUPLICATE_LIB_OK']='True'
#
# check version
#
print('starting notebook with tensorflow version {}'.format(tf.version.VERSION))
# -
# # Load and prepare data
#
# Loading of the data (very simplified) with split into train and test data (fixed split)
#
(x_train, y_train), (x_test, y_test) = mnist.load_data()
#
# Check some data
#
x_train[0][10]
# + tags=[]
#
# Print shapes of data
#
print('training data: X=%s, y=%s' % (x_train.shape, y_train.shape))
print('test data: X=%s, y=%s' % (x_test.shape, y_test.shape))
# -
#
# Display some examples of the data
#
for i in range(15):
    plt.subplot(4, 4, 1 + i)
    plt.imshow(x_train[i], cmap=plt.get_cmap('gray'))
plt.show()
# + tags=[]
#
# Display labels of some data
#
for i in range(15):
    print('label {}'.format(y_train[i]))
# -
# # Task: Plot a histogram of the classes of the training data (1 point)
#
# After plotting, give a short estimation if this distribution is OK for use in a classification situation.
# +
#
# Histogram of class counts (digits)
#
# Task: plot the histogram as array or as plot
#
plt.rcParams['figure.figsize'] = [13, 6]
plt.hist(y_train)
# Although there is a slight variance in the class distribution, it should be fine for classification; all ten classes are well represented.
# -
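# A quick way to see the exact class counts (rather than plt.hist's binned view) is np.unique with return_counts. A minimal sketch on a hypothetical label array; in the notebook, y_train would be passed directly (before the one-hot encoding below).

```python
import numpy as np

# Hypothetical stand-in for y_train (pass the real label vector instead).
labels = np.array([0, 1, 1, 2, 2, 2, 9])

# return_counts=True yields one exact count per class, with no binning artifacts.
classes, counts = np.unique(labels, return_counts=True)
for c, n in zip(classes, counts):
    print('class {}: {} samples'.format(c, n))
```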
# # Prepare data for classification (1 point)
x_train.shape
#
# Change shape of data for model
#
x_train = x_train.reshape((x_train.shape[0], 28, 28, 1))
x_test = x_test.reshape((x_test.shape[0], 28, 28, 1))
#
# Task: find out why this unusual shape of the input data is required. Why is (-1, 28, 28) not sufficient? (1 point)
# give a short description here in the comment.
# Hint: check the tensorflow keras documentation about 2D cnn layer.
#
# Answer: Image batches are 4D tensors of shape (samples, height, width, channels) or (samples, channels, height, width). The former is the default configuration; the latter can be selected via image_data_format. By specifying the channel dimension as 1, we state that the images are monochromatic (grayscale).
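# The same channel axis can also be added without hard-coding the spatial size; a sketch with a zero-filled toy batch (for MNIST, the reshape above is equivalent):

```python
import numpy as np

# Toy batch of two 28x28 grayscale images.
batch = np.zeros((2, 28, 28))

# Conv2D expects (samples, height, width, channels); appending a channel
# axis of size 1 marks the images as single-channel.
with_channel = np.expand_dims(batch, axis=-1)
print(with_channel.shape)
```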
x_train.shape
#
# Scale pixel values into range of 0 to 1
#
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train = x_train / 255.0
x_test = x_test / 255.0
# check one transformed sample
x_train[0]
#
# One-hot encoding for classes
#
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
# check the one-hot encoding
y_train
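# One-hot encoding is simply row selection from an identity matrix; this numpy sketch mirrors what to_categorical(labels, num_classes=10) produces, on a hypothetical label array:

```python
import numpy as np

labels = np.array([3, 0, 9])
# Row i of the 10x10 identity matrix is the one-hot vector for class i.
one_hot = np.eye(10, dtype='float32')[labels]
print(one_hot)
```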
# # Build the first model (2 points)
#
# Simple CNN model
#
# Task: complete the code for a simple CNN network with one CNN layer.
# Hint: look for examples in the internet or in the slides.
#
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(10, activation='softmax'))
# compile model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# + tags=[]
# get a short summary of the model
model.summary()
# + tags=[]
# train model
history = model.fit(x_train, y_train, batch_size=128, epochs=5 )
# -
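# EarlyStopping is imported at the top of this notebook but never used. In Keras you would pass something like EarlyStopping(monitor='val_loss', patience=2) via callbacks=[...] to model.fit together with validation_data. The underlying logic, as a pure-Python sketch: stop once the validation loss has not improved for `patience` consecutive epochs.

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the epoch index at which training would stop."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0  # improvement: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch  # no improvement for `patience` epochs
    return len(val_losses) - 1  # ran out of epochs without triggering

print(early_stop_epoch([1.0, 0.8, 0.9, 0.95], patience=2))
```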
# # First prediction with model (1 point)
# predict on data (one sample)
#
# Task: describe the meaning of the numbers returned from the prediction. (1 point)
# write your findings here in the comments
# Hint: look at the definition of the output layer (last layer) in the model.
#
model.predict(x_train[:1])
# +
# The softmax function returns a normalised probability distribution where each element is proportional to the exponential of the corresponding input. This means the elements must sum to 1. If the printed sum deviates slightly from 1, this is due to floating-point rounding in float32.
# + tags=[]
prediction = model.predict(x_train[:1])[0]
sum(prediction)
# -
# compare with expected result
y_train[:1]
# + tags=[]
#
# Measure the accuracy
#
_, acc = model.evaluate(x_test, y_test, verbose=0)
print('accuracy {:.5f}'.format(acc))
# + tags=[]
#
# Estimate the number of false classifications in production use
#
print('with {} samples there are about {:.0f} false classifications to expect.'.format( x_test.shape[0], (x_test.shape[0]*(1-acc))))
# -
# # Print out training progress
#
# Plot loss and accuracy
#
def summarize_diagnostics(history, modelname):
    plt.subplot(211)
    plt.title('Cross Entropy Loss')
    plt.plot(history.history['loss'], color='blue', label='train')
    plt.subplot(212)
    plt.title('Classification Accuracy')
    plt.plot(history.history['accuracy'], color='green', label='train')
    plt.subplots_adjust(hspace=0.5)
    plt.savefig('results/' + modelname + '_plot.png')
    plt.show()
    plt.close()
summarize_diagnostics(history,'03_model1')
# # Task: Improve the model significantly (2 points)
#
# Your customer requires fewer than 1% wrong classifications. Build a better model with significantly fewer than 100 wrong classifications on the 10,000 test samples. Research the internet for an optimal model setup for MNIST classification and try to replicate this model here. Make sure to document the sources where you found the hints for the improvement (links to sources).
# +
#
# Setup new model
#
def create_model_2():
    model = Sequential()
    # a convolutional layer producing 32 output feature maps
    model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', input_shape=(28, 28, 1)))
    # downsample the feature maps by taking the maximum in each window, reducing the number of unneeded parameters
    model.add(MaxPooling2D((2, 2)))
    # additional convolutional layers
    model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform'))
    model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform'))
    model.add(Flatten())
    model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
    model.add(Dense(10, activation='softmax'))
    return model
# -
# instantiate model
model2 = create_model_2()
# compile
model2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# + tags=[]
# train with history
history = model2.fit(x_train, y_train, batch_size=128, epochs=15 )
# + tags=[]
#
# Measure the accuracy
#
_, acc = model2.evaluate(x_test, y_test, verbose=0)
print('Accuracy {:.5f}'.format(acc))
# + tags=[]
#
# Estimate the number of false classifications in production use
#
print('with {} samples there are about {:.0f} false classifications to expect.'.format( x_test.shape[0], (x_test.shape[0]*(1-acc))))
# +
# Result: (describe where you found the hints for improvement and how much it improved)
# We can now say that the model misclassifies fewer than 1% of the test samples.
# Fundamentally we introduced two additional Conv2D layers, which increases how many details/features get extracted.
# While the total number of parameters is relatively similar, we now have more trainable parameters.
# + tags=[]
model2.summary()
# -
summarize_diagnostics(history,'03_model2')
# # Save the model
# + tags=[]
#
# Save a model for later use
#
prefix = 'results/03_'
modelName = prefix + "model.json"
weightName = prefix + "model.h5"
# set to True if the model should be saved
save_model = True
if save_model:
    model_json = model2.to_json()
    with open(modelName, "w") as json_file:
        json_file.write(model_json)
    # serialize weights to HDF5
    model2.save_weights(weightName)
    print("saved model to disk as {} {}".format(modelName, weightName))
else:
    # load model (has to be saved before, model is not part of git)
    with open(modelName, 'r') as json_file:
        loaded_model_json = json_file.read()
    model2 = model_from_json(loaded_model_json)
    # load weights into new model
    model2.load_weights(weightName)
    print("loaded model from disk")
# -
# # Task: Find characteristics in the errors of the model (1 point)
#
# There are still too many false classifications using the model. Evaluate all test data and plot examples of failed classifications to get a better understanding of what goes wrong. Plot a confusion matrix to get better insight.
y_test_predictions = model2.predict(x_test)
# + tags=[]
#
# generate confusion matrix
# Task: find a suitable function for generating a confusion matrix as array
#
confusion = confusion_matrix(np.argmax(y_test,axis=1), np.argmax(y_test_predictions,axis=1))
# -
# make a nice plot of the confusion matrix
import itertools
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    # normalize before plotting so the colors match the printed values
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
# + tags=[]
# plot confusion matrix
plot_confusion_matrix(confusion,['0','1','2','3','4','5','6','7','8','9'] )
# -
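# The plotting helper above accepts normalize=True; what that does is divide each row by its sum, turning counts into per-class recall. A sketch on a hypothetical 3-class matrix:

```python
import numpy as np

# Toy confusion matrix (rows = true class, columns = predicted class).
cm = np.array([[8, 1, 1],
               [2, 6, 2],
               [0, 0, 10]])

# Row-normalizing converts counts to fractions of each true class,
# which makes classes with different support directly comparable.
cm_norm = cm.astype('float') / cm.sum(axis=1, keepdims=True)
print(np.round(cm_norm, 2))
```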
# # Task: Update your training strategy (2 points)
#
# Besides many other options, there are two straightforward ways to improve your model:
#
# 1. Add more data for those classes which are poorly classified
# 2. Add augmentation to the training data
#
# Implement the augmentation strategy and test if there is an improvement.
#
#
# ## Augmentation
#
# Task: Search the internet for the ImageDataGenerator class of the Keras framework and implement such a generator for the training of the model. Select suitable augmentation which fits to the use-case.
# Document the resulting accuracy.
# +
# Augmentation solution
# Since we are dealing with digits, only mild transformations are viable: small shifts, rotations, shears and zooms. Stronger transformations such as flips would change the meaning of the data; the digits would no longer be digits.
# Sources:
# https://machinelearningmastery.com/image-augmentation-deep-learning-keras/
# https://www.pyimagesearch.com/2019/07/08/keras-imagedatagenerator-and-data-augmentation/
# +
generator = ImageDataGenerator(rotation_range=8,
                               width_shift_range=0.08,
                               height_shift_range=0.08,
                               shear_range=0.3,
                               zoom_range=0.08)
# batch size 128 to match the steps_per_epoch computation below
it_train = generator.flow(x_train, y_train, batch_size=128)
# -
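# A width shift like the one configured above amounts to translating the image and filling the vacated columns; a minimal numpy sketch (assuming zero fill; ImageDataGenerator's fill_mode offers several strategies):

```python
import numpy as np

def shift_image(img, dx):
    """Shift a 2D image dx columns to the right (negative dx shifts left), zero-filling."""
    out = np.zeros_like(img)
    if dx > 0:
        out[:, dx:] = img[:, :-dx]
    elif dx < 0:
        out[:, :dx] = img[:, -dx:]
    else:
        out[:] = img
    return out

img = np.arange(9).reshape(3, 3)
print(shift_image(img, 1))
```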
# instantiate model
model3 = create_model_2()
# + tags=[]
steps = int(x_train.shape[0] / 128)
model3.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# capture the history so the diagnostics below plot this model's training
# (fit_generator is deprecated; model.fit accepts the generator directly)
history = model3.fit(it_train, steps_per_epoch=steps, epochs=15, validation_data=(x_test, y_test))
# + tags=[]
#
# Evaluation
#
_, acc = model3.evaluate(x_test, y_test, verbose=0)
print('accuracy {:.3f} '.format(acc) )
# -
summarize_diagnostics(history,'03_model3')
y_test_predictions = model3.predict(x_test)
# generate confusion matrix
confusion = confusion_matrix(np.argmax(y_test,axis=1), np.argmax(y_test_predictions,axis=1))
# plot confusion matrix
plot_confusion_matrix(confusion,['0','1','2','3','4','5','6','7','8','9'] )
# +
# After applying augmentation the model's accuracy increased slightly, from .991 to .995.
# We can also see in the confusion matrix that some of the errors have been reduced, e.g. between eight and nine, or
# six and zero. But we have also introduced a few new mistakes, specifically between seven and three.
| 03 Exercise Classification MNIST.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Find mentions of the EU in articles
#
# This notebook searches the news articles to extract the paragraphs that mention EU institutions and gathers these in a pandas DataFrame.
# +
import pandas as pd
import json
import re
settings_file = 'D:/thesis/settings - nl.json'
# +
#Preparation
#Read settings
settings = json.loads(open(settings_file).read())["settings"]
#Read data
df = pd.read_json(settings['data_json'], compression = 'gzip')
df.sort_index(inplace = True)
# +
#Choose search terms to denote EU
terms = ['Europese Unie','EU']
terms = sorted(terms, key = len, reverse = True) #Sort by length to fix overlapping patterns.
#Not strictly necessary here, but to register the exact matches this is a nice trick.
EU_terms = re.compile('|'.join(terms))
terms = ['[^a-zA-Z]'+word+'[^a-zA-Z]' for word in terms]
EU_terms_pattern = re.compile('|'.join(terms))
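# A caveat with the [^a-zA-Z] guards: they require a character on each side, so a term at the very start or end of a paragraph is missed, and the guards become part of the match. Zero-width lookarounds avoid both issues; a sketch:

```python
import re

terms = ['Europese Unie', 'EU']  # longest first, as above
# (?<![a-zA-Z]) and (?![a-zA-Z]) assert a non-letter (or the string edge)
# on each side without consuming it, so the match is exactly the term.
pattern = re.compile('|'.join('(?<![a-zA-Z])' + re.escape(t) + '(?![a-zA-Z])'
                              for t in terms))
print(pattern.findall('EU-beleid: de Europese Unie en de EU.'))
```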
# +
#Create df with relevant pieces of text
EU_snippets = []
for row in df.index:
    paragraph_no = 0  # Start index at 0!
    for text in df.loc[row, 'TEXT']:
        if re.search(EU_terms_pattern, text) is not None:
            snippet = {
                'TEXT': text,
                'PARAGRAPH_NO': paragraph_no,
                'MEDIUM': df.loc[row, 'MEDIUM'],
                'HEADLINE': df.loc[row, 'HEADLINE'],
                'DATE_dt': df.loc[row, 'DATE_dt'],
                'MATCHES': re.findall(EU_terms_pattern, text)
            }
            snippet['MATCHES'] = [re.search(EU_terms, match)[0] for match in snippet['MATCHES'] if re.search(EU_terms, match) is not None]
            EU_snippets.append(snippet)
        paragraph_no += 1
EU_snippets = pd.DataFrame(EU_snippets)
# -
EU_snippets.to_json(settings['mentions_json'], compression = 'gzip')
| EU evaluation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from pyGDM2 import (structures, materials, core,
linear, fields, propagators,
tools)
def get_spectrum(geometry, step):
    """Obtain a simulated absorption spectrum for a gold nanostructure.

    geometry -- array of mesh points describing the structure
    step -- discretization step size of the mesh
    """
    #geometry = structures.sphere(step, R=r, mesh='hex')
    material = materials.gold()
    struct = structures.struct(step, geometry, material, verbose=False)
    struct = structures.center_struct(struct)
    field_generator = fields.plane_wave
    wavelengths = np.linspace(400, 800, 81)
    kwargs = dict(theta=0, inc_angle=180)
    efield = fields.efield(field_generator,
                           wavelengths=wavelengths, kwargs=kwargs)
    dyads = propagators.DyadsQuasistatic123(n1=1.33, n2=1.33, n3=1.33)
    sim = core.simulation(struct, efield, dyads)
    sim.scatter(verbose=False)
    field_kwargs = tools.get_possible_field_params_spectra(sim)
    config_idx = 0
    wl, spectrum = tools.calculate_spectrum(sim,
                                            field_kwargs[config_idx], linear.extinct)
    abs_ = spectrum.T[2] / np.max(spectrum.T[2])
    return abs_, geometry
# ## Specify the Step Size and Radius
# The simulator computes the spectrum of 7 equally spaced spheres with Gaussian-distributed radii of mean "radius_mean" and standard deviation "radius_std".
step = 3
np.random.seed(5)
n_spheres = 7
radius_list = []
for i in range(n_spheres):
    # Normal distribution parameters for the sphere radius
    radius_mean = 6
    radius_std = 3
    r = (np.random.randn(1)[0] * radius_std + radius_mean) / step
    radius_list.append(r)
    geometry = structures.sphere(step, R=r, mesh='cube')
    loc_array = np.array([[0, 0, 0], [0, 0, 1], [0, 0, -1], [1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]])
    sphere = np.hstack((geometry[:, 0].reshape(-1, 1) + 20 * loc_array[i, 0] * radius_mean,
                        geometry[:, 1].reshape(-1, 1) + 20 * loc_array[i, 1] * radius_mean,
                        geometry[:, 2].reshape(-1, 1) + 20 * loc_array[i, 2] * radius_mean))
    if i == 0:
        sample = sphere
    else:
        sample = np.vstack((sample, sphere))
# ## Plot the Cluster Geometry
#geometry = structures.sphere(step, R=5, mesh='cube')
fig, ax = plt.subplots(figsize= (7,10))
ax = plt.axes(projection='3d')
geometry = sample
# Data for three-dimensional scattered points
xdata = geometry[:,0]
ydata = geometry[:,1]
zdata = geometry[:,2]
ax.scatter3D(xdata, ydata, zdata, cmap='Greens');
#ax.set_xlim([0, 120])
#ax.set_ylim([0, 120])
#ax.set_zlim([0, 120])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
#ax.view_init(elev=30, azim=90)
# ## Run the Simulation
I, g = get_spectrum(geometry, step)
wavelength = np.linspace(400,800,81)
#Radius of 6, Polydispersity of 3
plt.plot(wavelength, I)
plt.xlabel('wavelength nm')
plt.ylabel('Intensity')
array = np.hstack((wavelength.reshape(-1,1), I.reshape(-1,1)))
array
df = pd.DataFrame(array, columns=['Wavelength', 'Intensity'])
df.to_csv('rad_6_poly_3.csv')
from spectroscopy import obtain_spectra
step = 4
radius_mean = 5
r
array = obtain_spectra()
| Emulators/Spectroscopy/.ipynb_checkpoints/Spectroscopy_Example-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Observation Analysis
#
# * The bar graph and summary statistics show that the treatments with Capomulin and Ramicane performed best at reducing the tumors.
#
# * The study had an almost equal number of female and male mice, with 125 males and 124 females.
#
# * There was a strong positive correlation between tumor volume and weight at 0.84: the larger the mouse, the larger the tumor tended to be.
#
# ## Observations and Insights
#
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
data_comb_df = pd.merge(mouse_metadata, study_results , how='outer', on = 'Mouse ID')
# Display the data table for preview
data_comb_df.head()
# -
# Checking the number of mice.
number_of_mice = data_comb_df["Mouse ID"].count()
number_of_mice
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_rows = data_comb_df[data_comb_df.duplicated(['Mouse ID', 'Timepoint'])]
duplicate_rows
# Optional: Get all the data for the duplicate mouse ID
all_duplicate_data = data_comb_df[data_comb_df.duplicated(["Mouse ID",])]
all_duplicate_data
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# Note: drop_duplicates("Mouse ID") keeps only the first timepoint per mouse;
# to drop just the duplicated mouse, filter the IDs found above instead, e.g.
# data_comb_df[~data_comb_df["Mouse ID"].isin(duplicate_rows["Mouse ID"])]
clean_comb_data = data_comb_df.drop_duplicates("Mouse ID")
clean_comb_data
# Checking the number of mice in the clean DataFrame.
new_number_of_mice = clean_comb_data["Mouse ID"].count()
new_number_of_mice
# ## Summary Statistics
# +
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
mean = data_comb_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()
median = data_comb_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].median()
variance = data_comb_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].var()
standard_deviation = data_comb_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].std()
SEM = data_comb_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].sem()
# Assemble the resulting series into a single summary dataframe.
Summary_Stats_df = pd.DataFrame({"Mean" : mean,
"Median" : median,
"Variance" : variance,
"Standard Deviation" : standard_deviation,
"SEM" : SEM })
Summary_Stats_df
# -
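# The five separate groupby calls above can be collapsed into a single .agg call; a sketch on a tiny hypothetical stand-in for data_comb_df:

```python
import pandas as pd

toy = pd.DataFrame({'Drug Regimen': ['A', 'A', 'B', 'B'],
                    'Tumor Volume (mm3)': [40.0, 50.0, 60.0, 70.0]})

# One pass over the grouped column yields all five statistics at once.
summary = (toy.groupby('Drug Regimen')['Tumor Volume (mm3)']
              .agg(['mean', 'median', 'var', 'std', 'sem']))
print(summary)
```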
# ## Bar and Pie Charts
# +
#Generate a bar plot showing the total number of unique mice tested on each drug regimen using pandas.
Drug_Regimen_data = pd.DataFrame(data_comb_df.groupby(["Drug Regimen"]).count()).reset_index()
Drug_Regimen_df = Drug_Regimen_data[["Drug Regimen", "Mouse ID"]]
Drug_Regimen_df = Drug_Regimen_df.set_index("Drug Regimen")
#Creating the bar chart
Drug_Regimen_df.plot(kind="bar", figsize=(10,3))
plt.title("Drug Treatment Count")
plt.show()
plt.tight_layout()
# +
# Generate a pie plot showing the distribution of female versus male mice using pyplot
mice_gender_df = data_comb_df["Sex"].value_counts()
colors = ["red", "blue"]
explode = (0.1,0)
plt.figure()
plt.pie(mice_gender_df.values, explode=explode,
labels=mice_gender_df.index.values, colors=colors,
autopct="%1.1f%%", shadow=True, startangle=140)
plt.axis("equal")
plt.title("Distribution of Female versus Male Mice")
plt.tight_layout()
plt.show()
# -
#
# # Quartiles, Outliers and Boxplots
# +
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
Capomulin_df = data_comb_df.loc[data_comb_df["Drug Regimen"] == "Capomulin"]
Ramicane_df = data_comb_df.loc[data_comb_df["Drug Regimen"] == "Ramicane"]
Infubinol_df = data_comb_df.loc[data_comb_df["Drug Regimen"] == "Infubinol"]
Ceftamin_df = data_comb_df.loc[data_comb_df["Drug Regimen"] == "Ceftamin"]
# -
#Capomulin
Capomulin_greatest = Capomulin_df.groupby("Mouse ID").max()["Timepoint"]
Capomulin_vol = pd.DataFrame(Capomulin_greatest)
Capomulin_merge =pd.merge(Capomulin_vol, data_comb_df,
on = ("Mouse ID","Timepoint"), how = "left")
Capomulin_merge.head()
# +
# Calculate the IQR and quantitatively determine if there are any potential outliers.
#Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
Capomulin_tumors = Capomulin_merge["Tumor Volume (mm3)"]
Capomulin_quartiles =Capomulin_tumors.quantile([.25,.5,.75])
Capomulin_lowerq = Capomulin_quartiles[0.25]
Capomulin_upperq = Capomulin_quartiles[0.75]
Capomulin_iqr = Capomulin_upperq-Capomulin_lowerq
print(f"The lower quartile of tumor volumes is: {Capomulin_lowerq}")
print(f"The upper quartile of tumor volumes is: {Capomulin_upperq}")
print(f"The interquartile range of tumor volumes is: {Capomulin_iqr}")
Capomulin_lower_bound = Capomulin_lowerq - (1.5*Capomulin_iqr)
Capomulin_upper_bound = Capomulin_upperq + (1.5*Capomulin_iqr)
print(f"Values below {Capomulin_lower_bound} could be outliers.")
print(f"Values above {Capomulin_upper_bound} could be outliers.")
# -
#Ramicane
Ramicane_greatest = Ramicane_df.groupby("Mouse ID").max()["Timepoint"]
Ramicane_vol = pd.DataFrame(Ramicane_greatest)
Ramicane_merge =pd.merge(Ramicane_vol, data_comb_df,
on = ("Mouse ID","Timepoint"), how = "left")
Ramicane_merge.head()
# +
# Calculate the IQR and quantitatively determine if there are any potential outliers.
#Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
Ramicane_tumors = Ramicane_merge["Tumor Volume (mm3)"]
Ramicane_quartiles =Ramicane_tumors.quantile([.25,.5,.75])
Ramicane_lowerq = Ramicane_quartiles[0.25]
Ramicane_upperq = Ramicane_quartiles[0.75]
Ramicane_iqr = Ramicane_upperq-Ramicane_lowerq
print(f"The lower quartile of tumor volumes is: {Ramicane_lowerq}")
print(f"The upper quartile of tumor volumes is: {Ramicane_upperq}")
print(f"The interquartile range of tumor volumes is: {Ramicane_iqr}")
Ramicane_lower_bound = Ramicane_lowerq - (1.5*Ramicane_iqr)
Ramicane_upper_bound = Ramicane_upperq + (1.5*Ramicane_iqr)
print(f"Values below {Ramicane_lower_bound} could be outliers.")
print(f"Values above {Ramicane_upper_bound} could be outliers.")
# -
#Infubinol
Infubinol_greatest = Infubinol_df.groupby("Mouse ID").max()["Timepoint"]
Infubinol_vol = pd.DataFrame(Infubinol_greatest)
Infubinol_merge =pd.merge(Infubinol_vol, data_comb_df,
on = ("Mouse ID","Timepoint"), how = "left")
Infubinol_merge.head()
# +
# Calculate the IQR and quantitatively determine if there are any potential outliers.
#Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
Infubinol_tumors = Infubinol_merge["Tumor Volume (mm3)"]
Infubinol_quartiles = Infubinol_tumors.quantile([.25,.5,.75])
Infubinol_lowerq = Infubinol_quartiles[0.25]
Infubinol_upperq = Infubinol_quartiles[0.75]
Infubinol_iqr = Infubinol_upperq-Infubinol_lowerq
print(f"The lower quartile of tumor volumes is: {Infubinol_lowerq}")
print(f"The upper quartile of tumor volumes is: {Infubinol_upperq}")
print(f"The interquartile range of tumor volumes is: {Infubinol_iqr}")
Infubinol_lower_bound = Infubinol_lowerq - (1.5*Infubinol_iqr)
Infubinol_upper_bound = Infubinol_upperq + (1.5*Infubinol_iqr)
print(f"Values below {Infubinol_lower_bound} could be outliers.")
print(f"Values above {Infubinol_upper_bound} could be outliers.")
# -
#Ceftamin
Ceftamin_greatest = Ceftamin_df.groupby("Mouse ID").max()["Timepoint"]
Ceftamin_vol = pd.DataFrame(Ceftamin_greatest)
Ceftamin_merge =pd.merge(Ceftamin_vol, data_comb_df,
on = ("Mouse ID","Timepoint"), how = "left")
Ceftamin_merge.head()
# +
# Calculate the IQR and quantitatively determine if there are any potential outliers.
#Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
Ceftamin_tumors = Ceftamin_merge["Tumor Volume (mm3)"]
Ceftamin_quartiles = Ceftamin_tumors.quantile([.25,.5,.75])
Ceftamin_lowerq = Ceftamin_quartiles[0.25]
Ceftamin_upperq = Ceftamin_quartiles[0.75]
Ceftamin_iqr = Ceftamin_upperq-Ceftamin_lowerq
print(f"The lower quartile of tumor volumes is: {Ceftamin_lowerq}")
print(f"The upper quartile of tumor volumes is: {Ceftamin_upperq}")
print(f"The interquartile range of tumor volumes is: {Ceftamin_iqr}")
Ceftamin_lower_bound = Ceftamin_lowerq - (1.5*Ceftamin_iqr)
Ceftamin_upper_bound = Ceftamin_upperq + (1.5*Ceftamin_iqr)
print(f"Values below {Ceftamin_lower_bound} could be outliers.")
print(f"Values above {Ceftamin_upper_bound} could be outliers.")
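# The four near-identical IQR cells above are easy to get out of sync (copy-paste slips between regimen names). A single helper computes the Tukey fences once; a sketch:

```python
import numpy as np

def iqr_bounds(values):
    """Return (lower, upper) Tukey fences: Q1 - 1.5*IQR and Q3 + 1.5*IQR."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Values outside the fences are potential outliers.
lower, upper = iqr_bounds([1, 2, 3, 4, 100])
print(lower, upper)
```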
# +
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
plot_data = [Capomulin_tumors, Ramicane_tumors, Infubinol_tumors, Ceftamin_tumors]
Drug_Regimen = ['Capomulin', 'Ramicane', 'Infubinol','Ceftamin']
fig1, ax1 = plt.subplots()
ax1.set_title("Final Tumor Volume by Drug Regimen")
ax1.set_ylabel("Final Tumor Vol(mm3)")
ax1.set_xlabel("Drug Regimen")
ax1.boxplot(plot_data, labels=Drug_Regimen, widths = 0.4, vert=True)
plt.show()
# -
# ## Line and Scatter Plots
# +
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
Capomulin_line = Capomulin_df.loc[Capomulin_df["Mouse ID"] == "g316",:]
Capomulin_line.head()
x_axis = Capomulin_line["Timepoint"]
tumsiz = Capomulin_line ["Tumor Volume (mm3)"]
fig1,ax1 = plt.subplots()
plt.title("Capomulin Treatment for Mouse g316")
plt.plot(x_axis, tumsiz, linewidth=2, markersize=12)
plt.xlabel('Timepoint (Days)')
plt.ylabel('Tumor Volume (mm3)')
# +
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
Capomulin_avg = Capomulin_df.groupby(["Mouse ID"]).mean()
plt.scatter(Capomulin_avg["Weight (g)"], Capomulin_avg["Tumor Volume (mm3)"])
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.savefig('scatterplot')
plt.show()
# -
# ## Correlation and Regression
# +
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
corr = round(st.pearsonr(Capomulin_avg["Weight (g)"],
Capomulin_avg ["Tumor Volume (mm3)"])[0],2)
print(f"The correlation between mouse weight and average tumor volume is {corr}")
model = st.linregress(Capomulin_avg["Weight (g)"], Capomulin_avg ["Tumor Volume (mm3)"])
model
# +
from scipy.stats import linregress
(slope, intercept,rvalue, pvalue, stderr)= linregress(Capomulin_avg["Weight (g)"],Capomulin_avg["Tumor Volume (mm3)"])
regress_values=Capomulin_avg["Weight (g)"]* slope + intercept
line_eq= f"y = {round(slope, 2)} x + {round(intercept, 2)}"
Capomulin_avg = Capomulin_df.groupby(["Mouse ID"]).mean()
plt.scatter(Capomulin_avg["Weight (g)"], Capomulin_avg["Tumor Volume (mm3)"])
plt.plot(Capomulin_avg["Weight (g)"], regress_values, color='red')
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.title("Weight vs Tumor Volume for Capomulin")
plt.show()
| Pymaceuticals/pymaceuticals_analysis .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Observations and Insights
#
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# -
# Read the mouse data
mouse_metadata = pd.read_csv(mouse_metadata_path)
mouse_metadata.head()
# Read the study results data
study_results = pd.read_csv(study_results_path)
study_results.head()
# +
# Combine the data into a single dataset
merged_results = pd.merge(study_results, mouse_metadata, how='left', on="Mouse ID")
# Display the data table for preview
merged_results.head()
# -
# Checking the number of mice.
len(merged_results['Mouse ID'].unique())
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_rows = merged_results.loc[merged_results.duplicated(subset=['Mouse ID', 'Timepoint']), 'Mouse ID'].unique()
duplicate_rows
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_data = merged_results[merged_results['Mouse ID'].isin(duplicate_rows) == False]
clean_data.head()
# Checking the number of mice in the clean DataFrame.
len(clean_data['Mouse ID'].unique())
# ## Summary Statistics
# +
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Group by drug regimen
regimen_group = clean_data.groupby('Drug Regimen')
# Calculate mean, median, variance, standard deviation, and SEM of the tumor volume
tumor_mean = regimen_group['Tumor Volume (mm3)'].mean()
tumor_median = regimen_group['Tumor Volume (mm3)'].median()
tumor_variance = regimen_group['Tumor Volume (mm3)'].var()
tumor_stdev = regimen_group['Tumor Volume (mm3)'].std()
tumor_sem = regimen_group['Tumor Volume (mm3)'].sem()
# Assemble the resulting series into a single summary dataframe.
summary = pd.DataFrame({'Tumor Volume Mean': tumor_mean,
'Tumor Volume Median': tumor_median,
'Tumor Volume Variance': tumor_variance,
'Tumor Volume Std Dev': tumor_stdev,
'Tumor Volume SEM': tumor_sem})
summary
# -
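# As a design alternative, the five separate series above can be collapsed into a single `groupby().agg()` call. A minimal sketch on toy data (the frame and its values are illustrative, not study data):

```python
import pandas as pd

# Toy stand-in for the cleaned study data
toy = pd.DataFrame({
    'Drug Regimen': ['A', 'A', 'B', 'B', 'B'],
    'Tumor Volume (mm3)': [40.0, 44.0, 50.0, 52.0, 57.0],
})

# One agg() call yields the same five statistics as the
# separate mean/median/var/std/sem series built above
summary_alt = toy.groupby('Drug Regimen')['Tumor Volume (mm3)'].agg(
    ['mean', 'median', 'var', 'std', 'sem'])
print(summary_alt)
```

The `agg` list accepts the same statistic names used one-by-one above, so both approaches produce identical numbers.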
# ## Bar and Pie Charts
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
regimen_count = clean_data['Drug Regimen'].value_counts()
regimen_count.plot(kind='bar')
plt.xlabel('Drug Regimen')
plt.ylabel('Number of Measurements')
plt.title('Total # of Measurements Taken for each Regimen')
plt.show()
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
regimen_groups = list(regimen_count.index)
plt.bar(regimen_groups, regimen_count)
plt.xticks(rotation='vertical')
plt.xlabel('Drug Regimen')
plt.ylabel('Number of Measurements')
plt.title('Total Number of Measurements Taken By Drug Regimen')
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_count = clean_data.groupby('Sex')['Mouse ID'].count()
gender_chart = gender_count.plot(kind='pie', title='Distribution of Female and Male Mice', autopct='%1.1f%%')
gender_chart.set_ylabel('Sex')
# Generate a pie plot showing the distribution of female versus male mice using pyplot
gender_labels = list(gender_count.index)
plt.pie(gender_count, labels= gender_labels, autopct='%1.1f%%')
plt.ylabel('Sex')
plt.title('Distribution of Female and Male Mice')
plt.show()
# ## Quartiles, Outliers and Boxplots
# +
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
last_time = clean_data.groupby(['Mouse ID'])['Timepoint'].max()
last_time = last_time.reset_index()
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
last_time_merge = last_time.merge(clean_data, on=['Mouse ID', 'Timepoint'], how='left')
last_time_merge.head()
# +
# Put treatments into a list for for loop (and later for plot labels)
drug_list = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
# Create empty list to fill with tumor vol data (for plotting)
tumor_volume = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
for drug in drug_list:
# Locate the rows which contain mice on each drug and get the tumor volumes
tumor_vol = last_time_merge.loc[last_time_merge['Drug Regimen'] == drug, 'Tumor Volume (mm3)']
# add subset
tumor_volume.append(tumor_vol)
# Determine outliers using upper and lower bounds
quartiles = tumor_vol.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
upperBound = upperq + 1.5 * iqr
lowerBound = lowerq - 1.5 * iqr
outliers = tumor_vol.loc[(tumor_vol > upperBound) | (tumor_vol < lowerBound)]
print(f'The potential outliers for {drug} could be {outliers}.')
# -
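# The quartile/IQR bounds used in the loop above can be sanity-checked on a small hand-computable sample (the values below are illustrative):

```python
import pandas as pd

# Four clustered values and one obvious outlier
demo_vals = pd.Series([30.0, 35.0, 40.0, 45.0, 100.0])
demo_q = demo_vals.quantile([.25, .5, .75])
demo_iqr = demo_q[0.75] - demo_q[0.25]          # 45 - 35 = 10
demo_upper = demo_q[0.75] + 1.5 * demo_iqr      # 60
demo_lower = demo_q[0.25] - 1.5 * demo_iqr      # 20
demo_outliers = demo_vals[(demo_vals > demo_upper) | (demo_vals < demo_lower)]
print(demo_lower, demo_upper, demo_outliers.tolist())
```

Only the 100.0 value falls outside the 1.5·IQR fences, exactly as the same arithmetic flags outliers per regimen above.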
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
plt.boxplot(tumor_volume, labels=drug_list)
plt.ylabel('Tumor Volume (mm3)')
plt.xlabel('Drug Regimen')
plt.title('Final Tumor Volume of Each Mouse')
plt.show()
# ## Line and Scatter Plots
# +
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
mouse_capomulin = 'b128'
mouse_df = clean_data[clean_data['Mouse ID'] == mouse_capomulin]
mouse_df.head()
# Select columns by name rather than by position
mouse_tumor_volume = mouse_df['Tumor Volume (mm3)']
mouse_tumor_time = mouse_df['Timepoint']
plt.plot(mouse_tumor_time, mouse_tumor_volume, color="green")
plt.xlabel('Timepoint of Tumor Measurement')
plt.ylabel('Tumor Volume (mm3)')
plt.title('A Look at Tumor Volume Across Timepoints for Mouse b128')
# +
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
capomulin_df = clean_data[clean_data['Drug Regimen'] == 'Capomulin']
capomulin_df = capomulin_df.groupby('Mouse ID').mean()
plt.scatter(capomulin_df['Weight (g)'], capomulin_df['Tumor Volume (mm3)'])
plt.xlabel('Mouse Weight (g)')
plt.ylabel('Tumor Volume (mm3)')
plt.title('Tumor Volume and Mouse Weight: Capomulin Regimen Analysis')
# -
# ## Correlation and Regression
# +
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
cap_weight = capomulin_df['Weight (g)']
cap_avg_tumor = capomulin_df['Tumor Volume (mm3)']
correlation = st.pearsonr(cap_weight, cap_avg_tumor)
print(f"The correlation between mouse weight and average tumor volume is {round(correlation[0],2)}.")
x_values = capomulin_df['Weight (g)']
y_values = capomulin_df['Tumor Volume (mm3)']
from scipy.stats import linregress
slope, intercept, rvalue, pvalue, stderr = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
plt.scatter(x_values, y_values)
plt.plot(x_values, regress_values, "r-")
plt.annotate(line_eq, (18, 37), fontsize=15, color="red")  # position chosen inside the data range
plt.xlabel('Mouse Weight (g)')
plt.ylabel('Tumor Volume (mm3)')
plt.title('Mouse Weight and Average Tumor Volume: A Linear Regression')
plt.show()
# -
| Pymaceuticals/pymaceuticals.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
df = pd.read_csv('ConstructionTimeSeriesDataV2.csv')
#date library
date_data = {'Jan' : ('01', '31'), 'Feb' : ('02', '28'), 'Mar' : ('03', '31'), 'Apr' : ('04', '30'), 'May' : ('05', '31'), 'Jun' : ('06', '30'), 'Jul' : ('07', '31'), 'Aug' : ('08', '31'), 'Sep' : ('09', '30'), 'Oct' : ('10', '31'), 'Nov' : ('11', '30'), 'Dec' : ('12', '31')}
#date index code provided by professor Bradley
def to_date_range(date_str):
    # month number and last day of the month come from the lookup tuple
    month, last_day = date_data[date_str[:3]]
    return '20' + date_str[-2:] + '-' + month + '-' + last_day
df['Date'] = df['Month-Year'].apply(to_date_range)
df.index = pd.date_range(start=df['Date'].iloc[0], periods = df['Date'].shape[0], freq='M')
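# The Month-Year conversion can be sketched stand-alone (assuming entries like 'Jan-19'; only a few months are shown for brevity). Note the end-of-month day comes from the second tuple element:

```python
# Subset of the month lookup: (month number, last day of month)
month_info = {'Jan': ('01', '31'), 'Feb': ('02', '28'), 'Dec': ('12', '31')}

def month_year_to_iso(s):
    # 'Jan-19' -> '2019-01-31': prefix the 2-digit year with '20',
    # then append month number and last day of that month
    month, last_day = month_info[s[:3]]
    return '20' + s[-2:] + '-' + month + '-' + last_day

print(month_year_to_iso('Jan-19'))  # '2019-01-31'
```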
#create graph
fig,ax = plt.subplots()
ax.plot(df['Public Construction'],label='Public Construction')
ax.plot(df['Private Construction'],label='Private Construction')
ax.set_xlabel('Year')
ax.set_ylabel('Units of Expenditure ($)')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
fig.legend()
# -
| timeseries/ConstructionTimeSeries.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/usm.jpg" width="480" height="240" align="left"/>
# # MAT281 - Laboratorio N°06
#
# ## Class objectives
#
# * Reinforce the basic concepts of E.D.A.
# ## <NAME>
# ## 201510509-K
# ## Contents
#
# * [Problem 01](#p1)
#
# ## Problem 01
# <img src="./images/logo_iris.jpg" width="360" height="360" align="center"/>
# The **Iris dataset** contains samples of three Iris species (Iris setosa, Iris virginica and Iris versicolor). Four traits were measured on each sample: the length and width of the sepal and petal, in centimeters.
#
# First, load the dataset and look at its first rows:
# +
# libraries
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option('display.max_columns', 500) # show more dataframe columns
# Render matplotlib charts inline in jupyter notebook/lab
# %matplotlib inline
# +
# load data
df = pd.read_csv(os.path.join("data","iris_contaminados.csv"))
df.columns = ['sepalLength',
'sepalWidth',
'petalLength',
'petalWidth',
'species']
df.head()
# -
df
# ### Experiment basics
#
# First, identify the variables involved in the study and their nature.
#
# * **species**:
#     * Description: name of the Iris species.
#     * Data type: *string*.
#     * Constraints: only three types exist (setosa, virginica and versicolor).
# * **sepalLength**:
#     * Description: sepal length.
#     * Data type: *float*.
#     * Constraints: values lie between 4.0 and 7.0 cm.
# * **sepalWidth**:
#     * Description: sepal width.
#     * Data type: *float*.
#     * Constraints: values lie between 2.0 and 4.5 cm.
# * **petalLength**:
#     * Description: petal length.
#     * Data type: *float*.
#     * Constraints: values lie between 1.0 and 7.0 cm.
# * **petalWidth**:
#     * Description: petal width.
#     * Data type: *float*.
#     * Constraints: values lie between 0.1 and 2.5 cm.
# Your goal is to carry out a proper **E.D.A.**; to do so, follow these instructions:
# 1. Count the elements of the **species** column and correct them at your discretion. Replace nan values with "default".
# +
# First, for convenience, add an "index" column to the df:
df = df.reset_index()
# Replace nan values in species with "default"
df['species'] = df['species'].fillna('default')
# Normalize the labels so that e.g. "Setosa", "SETOSA" and " setosa " are all
# recognised as the same value: lowercase and strip surrounding whitespace
df['species'] = df['species'].str.lower().str.strip()
# Finally, show a count of the elements of the "species" column
print('The element count of the species column is:')
df.groupby('species')['index'].count()
# -
# For now, the df looks like this:
df.head()
# Drop the leftover helper column from reset_index, if present
df = df.drop(columns=['level_0'], errors='ignore')
df
# 2. Make a box plot of the petal and sepal lengths and widths. Replace nan values with **0**.
# +
# Replace the remaining nan values with 0
df = df.fillna(0)
# Now draw the requested plot
graf_df = df.drop(['species', 'index'], axis=1)
sns.boxplot(data=graf_df)
# -
# 3. A valid range of values for petal and sepal length and width was defined above. Add a column named **label** that identifies which of these values fall outside the valid range.
# +
# Add the 'label' column from the valid ranges for petal and sepal length and width:
# 1 marks rows whose four measurements are all in range, 0 marks the rest
df["label"] = df.apply(lambda x: 1 if float(x["sepalLength"]) >= 4.0 and float(x["sepalLength"]) <= 7.0
                       and float(x["sepalWidth"]) >= 2.0 and float(x["sepalWidth"]) <= 4.5
                       and float(x["petalLength"]) >= 1.0 and float(x["petalLength"]) <= 7.0
                       and float(x["petalWidth"]) >= 0.1 and float(x["petalWidth"]) <= 2.5 else 0, axis=1)
df.head(20)
# +
# Alternative mask-based approach (this also helps with question 5)
mask_sepalLen_inf = df['sepalLength'] >= 4.0
mask_sepalLen_sup = df['sepalLength'] <= 7.0
mask_sepalL = mask_sepalLen_inf & mask_sepalLen_sup
mask_sepalWid_inf = df['sepalWidth'] >= 2.0
mask_sepalWid_sup = df['sepalWidth'] <= 4.5
mask_sepalW = mask_sepalWid_inf & mask_sepalWid_sup
mask_petalLen_inf = df['petalLength'] >= 1.0
mask_petalLen_sup = df['petalLength'] <= 7.0
mask_petalL = mask_petalLen_inf & mask_petalLen_sup
mask_petalWid_inf = df['petalWidth'] >= 0.1
mask_petalWid_sup = df['petalWidth'] <= 2.5
mask_petalW = mask_petalWid_inf & mask_petalWid_sup
mask_label = mask_sepalL & mask_sepalW & mask_petalL & mask_petalW
# Label rows as valid/invalid directly from the combined mask
df['label'] = ''
df.loc[mask_label, 'label'] = 'valido'
df.loc[~mask_label, 'label'] = 'invalido'
# Keep the valid rows in a separate dataframe (used again in question 5)
df_valido = df[mask_label].copy()
df
# -
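# The range checks above can also be written in one vectorized step with `Series.between` and `numpy.where`. A minimal sketch on toy data (bounds as in the statement; the toy rows are illustrative):

```python
import numpy as np
import pandas as pd

# One in-range row and one row with an out-of-range sepalLength
toy = pd.DataFrame({
    'sepalLength': [5.0, 9.0],
    'sepalWidth':  [3.0, 3.0],
    'petalLength': [1.5, 1.5],
    'petalWidth':  [0.5, 0.5],
})
valid = (toy['sepalLength'].between(4.0, 7.0)
         & toy['sepalWidth'].between(2.0, 4.5)
         & toy['petalLength'].between(1.0, 7.0)
         & toy['petalWidth'].between(0.1, 2.5))
toy['label'] = np.where(valid, 'valido', 'invalido')
print(toy['label'].tolist())  # ['valido', 'invalido']
```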
# 4. Plot sepalLength vs petalLength and sepalWidth vs petalWidth, categorized by the label column. Draw conclusions from your results.
#
# +
# First plot sepalLength vs petalLength
palette = sns.color_palette("hls", 2)
sns.lineplot(
x='sepalLength',
y='petalLength',
hue='label',
data=df,
ci = None,
palette=palette
)
# +
# Then plot sepalWidth vs petalWidth
palette = sns.color_palette("hls", 2)
sns.lineplot(
x='sepalWidth',
y='petalWidth',
hue='label',
data=df,
ci = None,
palette=palette
)
# -
# 5. Filter the valid data and plot *sepalLength* vs *petalLength* categorized by the **species** label.
# +
# The filtered valid data are already in df_valido (question 3), so:
df = df_valido
palette = sns.color_palette("hls", 3)
sns.lineplot(
x = 'sepalLength',
y = 'petalLength',
hue = 'species',
data = df,
ci = None,
palette = palette
)
# -
| labs/01_python/laboratorio_06.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# default_exp solution
# -
#hide
import sys
[sys.path.append(i) for i in ['.', '..']]
# +
#hide
# %load_ext autoreload
# %autoreload 2
from nbdev.showdoc import *
# -
# # solution
#
# > run the physical calculations of the model
#
#export
from nbdev_qq_test.classes import *
import numpy as np
import pandas as pd
from numba import njit
#export
@njit()
def growing_degree_day(GDDmethod,Tupp,Tbase,Tmax,Tmin):
"""
Function to calculate number of growing degree days on current day
<a href="../pdfs/ac_ref_man_3.pdf#page=28" target="_blank">Reference manual: growing degree day calculations</a> (pg. 19-20)
*Arguments:*
`GDDmethod`: `int` : GDD calculation method
`Tupp`: `float` : Upper temperature (degC) above which crop development no longer increases
`Tbase`: `float` : Base temperature (degC) below which growth does not progress
`Tmax`: `float` : Maximum temperature on current day (degC)
`Tmin`: `float` : Minimum temperature on current day (degC)
*Returns:*
`GDD`: `float` : Growing degree days for current day
"""
## Calculate GDDs ##
if GDDmethod == 1:
# Method 1
Tmean = (Tmax+Tmin)/2
Tmean = min(Tmean,Tupp)
Tmean = max(Tmean,Tbase)
GDD = Tmean-Tbase
elif GDDmethod == 2:
# Method 2
Tmax = min(Tmax,Tupp)
Tmax = max(Tmax,Tbase)
Tmin = min(Tmin,Tupp)
Tmin = max(Tmin,Tbase)
Tmean = (Tmax+Tmin)/2
GDD = Tmean-Tbase
elif GDDmethod == 3:
# Method 3
Tmax = min(Tmax,Tupp)
Tmax = max(Tmax,Tbase)
Tmin = min(Tmin,Tupp)
Tmean = (Tmax+Tmin)/2
Tmean = max(Tmean,Tbase)
GDD = Tmean-Tbase
return GDD
#hide
show_doc(growing_degree_day)
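# Method 2's clamp-then-average logic can be checked with a plain-Python restatement (no numba needed; the temperature values below are illustrative):

```python
def gdd_method2(Tmax, Tmin, Tbase, Tupp):
    # Clamp both daily extremes into [Tbase, Tupp] before averaging
    Tmax = min(max(Tmax, Tbase), Tupp)
    Tmin = min(max(Tmin, Tbase), Tupp)
    return (Tmax + Tmin) / 2 - Tbase

# Tmax is clipped to Tupp=26, Tmin is raised to Tbase=8:
# mean = (26 + 8) / 2 = 17, so GDD = 17 - 8 = 9
print(gdd_method2(Tmax=30.0, Tmin=6.0, Tbase=8.0, Tupp=26.0))  # 9.0
```

Clamping before averaging is what distinguishes method 2 from method 1, where only the mean is clamped.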
#export
@njit()
def root_zone_water(prof,InitCond_Zroot,InitCond_th,Soil_zTop,Crop_Zmin,Crop_Aer):
"""
Function to calculate actual and total available water in the rootzone at current time step
<a href="../pdfs/ac_ref_man_3.pdf#page=14" target="_blank">Reference Manual: root-zone water calculations</a> (pg. 5-8)
*Arguments:*
`prof`: `SoilProfileClass` : jit class object containing soil parameters
`InitCond_Zroot`: `float` : Initial rooting depth
`InitCond_th`: `np.array` : Initial water content
`Soil_zTop`: `float` : Top soil depth
`Crop_Zmin`: `float` : crop minimum rooting depth
`Crop_Aer`: `float` : vol (%) below saturation at which aeration stress begins
*Returns:*
`WrAct`: `float` : Actual rootzone water content
`Dr`: `DrClass` : Depletion object containing rootzone and topsoil depletion
`TAW`: `TAWClass` : `TAWClass` containing rootzone and topsoil total available water
`thRZ`: `thRZClass` : thRZ object containing rootzone water content parameters
"""
## Calculate root zone water content and available water ##
# Compartments covered by the root zone
rootdepth = round(np.maximum(InitCond_Zroot,Crop_Zmin),2)
comp_sto = np.argwhere(prof.dzsum>=rootdepth).flatten()[0]
# Initialise counters
WrAct = 0
WrS = 0
WrFC = 0
WrWP = 0
WrDry = 0
WrAer = 0
for ii in range(comp_sto+1):
# Fraction of compartment covered by root zone
if prof.dzsum[ii] > rootdepth:
factor = 1-((prof.dzsum[ii]-rootdepth)/prof.dz[ii])
else:
factor = 1
# Actual water storage in root zone (mm)
WrAct = WrAct+round(factor*1000*InitCond_th[ii]*prof.dz[ii],2)
# Water storage in root zone at saturation (mm)
WrS = WrS+round(factor*1000*prof.th_s[ii]*prof.dz[ii],2)
# Water storage in root zone at field capacity (mm)
WrFC = WrFC+round(factor*1000*prof.th_fc[ii]*prof.dz[ii],2)
# Water storage in root zone at permanent wilting point (mm)
WrWP = WrWP+round(factor*1000*prof.th_wp[ii]*prof.dz[ii],2)
# Water storage in root zone at air dry (mm)
WrDry = WrDry+round(factor*1000*prof.th_dry[ii]*prof.dz[ii],2)
# Water storage in root zone at aeration stress threshold (mm)
WrAer = WrAer+round(factor*1000*(prof.th_s[ii]-(Crop_Aer/100))*prof.dz[ii],2)
if WrAct < 0:
WrAct = 0
# define total available water, depletion, root zone water content
TAW = TAWClass()
Dr = DrClass()
thRZ = thRZClass()
# Calculate total available water (m3/m3)
TAW.Rz = max(WrFC-WrWP,0.)
# Calculate soil water depletion (mm)
Dr.Rz = min(WrFC-WrAct,TAW.Rz)
# Actual root zone water content (m3/m3)
thRZ.Act = WrAct/(rootdepth*1000)
# Root zone water content at saturation (m3/m3)
thRZ.S = WrS/(rootdepth*1000)
# Root zone water content at field capacity (m3/m3)
thRZ.FC = WrFC/(rootdepth*1000)
# Root zone water content at permanent wilting point (m3/m3)
thRZ.WP = WrWP/(rootdepth*1000)
# Root zone water content at air dry (m3/m3)
thRZ.Dry = WrDry/(rootdepth*1000)
# Root zone water content at aeration stress threshold (m3/m3)
thRZ.Aer = WrAer/(rootdepth*1000)
## Calculate top soil water content and available water ##
if rootdepth > Soil_zTop:
# Determine compartments covered by the top soil
ztopdepth = round(Soil_zTop,2)
comp_sto = np.sum(prof.dzsum<=ztopdepth)
# Initialise counters
WrAct_Zt = 0
WrFC_Zt = 0
WrWP_Zt = 0
# Calculate water storage in top soil
assert comp_sto > 0
for ii in range(comp_sto):
# Fraction of compartment covered by root zone
if prof.dzsum[ii] > ztopdepth:
factor = 1-((prof.dzsum[ii]-ztopdepth)/prof.dz[ii])
else:
factor = 1
# Actual water storage in top soil (mm)
WrAct_Zt = WrAct_Zt+(factor*1000*InitCond_th[ii]*prof.dz[ii])
# Water storage in top soil at field capacity (mm)
WrFC_Zt = WrFC_Zt+(factor*1000*prof.th_fc[ii]*prof.dz[ii])
# Water storage in top soil at permanent wilting point (mm)
WrWP_Zt = WrWP_Zt+(factor*1000*prof.th_wp[ii]*prof.dz[ii])
# Ensure available water in top soil is not less than zero
if WrAct_Zt < 0:
WrAct_Zt = 0
# Calculate total available water in top soil (m3/m3)
TAW.Zt = max(WrFC_Zt-WrWP_Zt,0)
# Calculate depletion in top soil (mm)
Dr.Zt = min(WrFC_Zt-WrAct_Zt,TAW.Zt)
else:
# Set top soil depletions and TAW to root zone values
Dr.Zt = Dr.Rz
TAW.Zt = TAW.Rz
return WrAct,Dr,TAW,thRZ
#hide
show_doc(root_zone_water)
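# The compartment-fraction weighting used in the storage loop can be illustrated stand-alone: only the fraction of the last compartment that lies inside the root zone contributes (the numbers below are illustrative, not model values):

```python
# Two 0.1 m compartments, root depth 0.15 m: the second compartment
# contributes only half of its thickness to root-zone storage
dz = [0.10, 0.10]        # compartment thicknesses (m)
dzsum = [0.10, 0.20]     # cumulative depths (m)
th = [0.30, 0.20]        # water contents (m3/m3)
rootdepth = 0.15

WrAct = 0.0
for i in range(2):
    if dzsum[i] > rootdepth:
        # fraction of this compartment inside the root zone
        factor = 1 - (dzsum[i] - rootdepth) / dz[i]
    else:
        factor = 1.0
    WrAct += factor * 1000 * th[i] * dz[i]  # storage in mm

print(WrAct)  # 30 mm from the first compartment + 10 mm from half of the second
```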
#export
@njit()
def check_groundwater_table(ClockStruct_TimeStepCounter,prof,NewCond,
ParamStruct_WaterTable,zGW):
"""
Function to check for presence of a groundwater table, and, if present,
to adjust compartment water contents and field capacities where necessary
<a href="../pdfs/ac_ref_man_3.pdf#page=61" target="_blank">Reference manual: water table adjustment equations</a> (pg. 52-57)
*Arguments:*
`ClockStruct`: `ClockStructClass` : ClockStruct object containing clock parameters
`Soil`: `SoilClass` : Soil object containing soil parameters
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`ParamStruct`: `ParamStructClass` : model parameters
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
`Soil`: `SoilClass` : Soil object containing updated soil parameters
"""
## Perform calculations (if variable water table is present) ##
if (ParamStruct_WaterTable == 1):
# Update groundwater conditions for current day
NewCond.zGW = zGW
# Find compartment mid-points
zMid = prof.zMid
# Check if water table is within modelled soil profile
if NewCond.zGW >= 0:
if len(zMid[zMid>=NewCond.zGW]) == 0:
NewCond.WTinSoil = False
else:
NewCond.WTinSoil = True
#If water table is in soil profile, adjust water contents
if NewCond.WTinSoil == True:
idx = np.argwhere(zMid >= NewCond.zGW).flatten()[0]
for ii in range(idx,len(prof.Comp)):
NewCond.th[ii] = prof.th_s[ii]
# Adjust compartment field capacity
compi = len(prof.Comp) - 1
thfcAdj = np.zeros(compi+1)
# Find thFCadj for all compartments
while compi >= 0:
if prof.th_fc[compi] <= 0.1:
Xmax = 1
else:
if prof.th_fc[compi] >= 0.3:
Xmax = 2
else:
pF = 2+0.3*(prof.th_fc[compi]-0.1)/0.2
Xmax = (np.exp(pF*np.log(10)))/100
if (NewCond.zGW < 0) or ((NewCond.zGW-zMid[compi]) >= Xmax):
for ii in range(compi):
thfcAdj[ii] = prof.th_fc[ii]
compi = -1
else:
if prof.th_fc[compi] >= prof.th_s[compi]:
thfcAdj[compi] = prof.th_fc[compi]
else:
if zMid[compi] >= NewCond.zGW:
thfcAdj[compi] = prof.th_s[compi]
else:
dV = prof.th_s[compi]-prof.th_fc[compi]
dFC = (dV/(Xmax*Xmax))*((zMid[compi]-(NewCond.zGW-Xmax))**2)
thfcAdj[compi] = prof.th_fc[compi]+dFC
compi = compi-1
# Store adjusted field capacity values
NewCond.th_fc_Adj = thfcAdj
prof.th_fc_Adj = thfcAdj
return NewCond, prof
#hide
show_doc(check_groundwater_table)
#export
@njit()
def root_development(Crop,prof,InitCond,GDD,GrowingSeason,ParamStruct_WaterTable):
"""
Function to calculate root zone expansion
<a href="../pdfs/ac_ref_man_3.pdf#page=46" target="_blank">Reference Manual: root development equations</a> (pg. 37-41)
*Arguments:*
`Crop`: `CropStruct` : jit class object containing Crop parameters
`prof`: `SoilProfileClass` : jit class object containing soil parameters
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`GDD`: `float` : Growing degree days on current day
`GrowingSeason`: `bool` : is growing season (True or False)
`ParamStruct_WaterTable`: `int` : water table present (True=1 or False=0)
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
"""
# Store initial conditions for updating
NewCond = InitCond
# save initial zroot
Zroot_init = float(InitCond.Zroot)*1.
Soil_nLayer = np.unique(prof.Layer).shape[0]
# Calculate root expansion (if in growing season)
if GrowingSeason == True:
# If today is first day of season, root depth is equal to minimum depth
if NewCond.DAP == 1:
NewCond.Zroot = float(Crop.Zmin)*1.
Zroot_init = float(Crop.Zmin)*1.
# Adjust time for any delayed development
if Crop.CalendarType == 1:
tAdj = NewCond.DAP-NewCond.DelayedCDs
elif Crop.CalendarType == 2:
tAdj = NewCond.GDDcum-NewCond.DelayedGDDs
# Calculate root expansion #
Zini = Crop.Zmin*(Crop.PctZmin/100)
t0 = round((Crop.Emergence/2))
tmax = Crop.MaxRooting
if Crop.CalendarType == 1:
tOld = tAdj-1
elif Crop.CalendarType == 2:
tOld = tAdj-GDD
# Potential root depth on previous day
if tOld >= tmax:
ZrOld = Crop.Zmax
elif tOld <= t0:
ZrOld = Zini
else:
X = (tOld-t0)/(tmax-t0)
ZrOld = Zini+(Crop.Zmax-Zini)*np.power(X,1/Crop.fshape_r)
if ZrOld < Crop.Zmin:
ZrOld = Crop.Zmin
# Potential root depth on current day
if tAdj >= tmax:
Zr = Crop.Zmax
elif tAdj <= t0:
Zr = Zini
else:
X = (tAdj-t0)/(tmax-t0)
Zr = Zini+(Crop.Zmax-Zini)*np.power(X,1/Crop.fshape_r)
if Zr < Crop.Zmin:
Zr = Crop.Zmin
# Store Zr as potential value
ZrPot = Zr
# Determine rate of change
dZr = Zr-ZrOld
# Adjust expansion rate for presence of restrictive soil horizons
if Zr > Crop.Zmin:
layeri = 1
l_idx = np.argwhere(prof.Layer==layeri).flatten()
Zsoil = prof.dz[l_idx].sum()
while (round(Zsoil,2) <= Crop.Zmin) and (layeri < Soil_nLayer):
layeri = layeri+1
l_idx = np.argwhere(prof.Layer==layeri).flatten()
Zsoil = Zsoil+prof.dz[l_idx].sum()
soil_layer_dz = prof.dz[l_idx].sum()
layer_comp = l_idx[0]
#soil_layer = prof.Layer[layeri]
ZrAdj = Crop.Zmin
ZrRemain = Zr-Crop.Zmin
deltaZ = Zsoil-Crop.Zmin
EndProf = False
while EndProf == False:
ZrTest = ZrAdj+(ZrRemain*(prof.Penetrability[layer_comp]/100))
if (layeri == Soil_nLayer) or \
(prof.Penetrability[layer_comp]==0) or \
(ZrTest<=Zsoil):
ZrOUT = ZrTest
EndProf = True
else:
ZrAdj = Zsoil
ZrRemain = ZrRemain-(deltaZ/(prof.Penetrability[layer_comp]/100))
layeri = layeri+1
l_idx = np.argwhere(prof.Layer==layeri).flatten()
layer_comp = l_idx[0]
soil_layer_dz = prof.dz[l_idx].sum()
Zsoil = Zsoil+soil_layer_dz
deltaZ = soil_layer_dz
# Correct Zr and dZr for effects of restrictive horizons
Zr = ZrOUT
dZr = Zr-ZrOld
# Adjust rate of expansion for any stomatal water stress
if NewCond.TrRatio < 0.9999:
if Crop.fshape_ex >= 0:
dZr = dZr*NewCond.TrRatio
else:
fAdj = (np.exp(NewCond.TrRatio*Crop.fshape_ex)-1)/(np.exp(Crop.fshape_ex)-1)
dZr = dZr*fAdj
#print(NewCond.DAP,NewCond.th)
# Adjust rate of root expansion for dry soil at expansion front
if dZr > 0.001:
# Define water stress threshold for inhibition of root expansion
pZexp = Crop.p_up[1]+((1-Crop.p_up[1])/2)
# Define potential new root depth
ZiTmp = float(Zroot_init+dZr)
# Find compartment that root zone will expand in to
#compi_index = prof.dzsum[prof.dzsum>=ZiTmp].index[0] # have changed to index
idx = np.argwhere(prof.dzsum>=ZiTmp).flatten()[0]
prof = prof
# Get TAW in compartment
layeri = prof.Layer[idx]
TAWprof = (prof.th_fc[idx]-prof.th_wp[idx])
# Define stress threshold
thThr = prof.th_fc[idx]-(pZexp*TAWprof)
# Check for stress conditions
if NewCond.th[idx] < thThr:
# Root expansion limited by water content at expansion front
if NewCond.th[idx] <= prof.th_wp[idx]:
# Expansion fully inhibited
dZr = 0
else:
# Expansion partially inhibited
Wrel = (prof.th_fc[idx]-NewCond.th[idx])/TAWprof
Drel = 1-((1-Wrel)/(1-pZexp))
Ks = 1-((np.exp(Drel*Crop.fshape_w[1])-1)/(np.exp(Crop.fshape_w[1])-1))
dZr = dZr*Ks
# Adjust for early senescence
if (NewCond.CC <= 0) and (NewCond.CC_NS > 0.5):
dZr = 0
# Adjust root expansion for failure to germinate (roots cannot expand
# if crop has not germinated)
if NewCond.Germination == False:
dZr = 0
# Get new rooting depth
NewCond.Zroot = float(Zroot_init+dZr)
# Adjust root density if deepening is restricted due to dry subsoil
# and/or restrictive layers
if NewCond.Zroot < ZrPot:
NewCond.rCor = (2*(ZrPot/NewCond.Zroot)*((Crop.SxTop+Crop.SxBot)/2)- \
Crop.SxTop)/Crop.SxBot
if NewCond.Tpot > 0:
NewCond.rCor = NewCond.rCor*NewCond.TrRatio
if NewCond.rCor < 1:
NewCond.rCor = 1
else:
NewCond.rCor = 1
# Limit rooting depth if groundwater table is present (roots cannot
# develop below the water table)
if (ParamStruct_WaterTable == 1) and (NewCond.zGW > 0):
if NewCond.Zroot > NewCond.zGW:
NewCond.Zroot = float(NewCond.zGW)
if NewCond.Zroot < Crop.Zmin:
NewCond.Zroot = float(Crop.Zmin)
else:
# No root system outside of the growing season
NewCond.Zroot = 0
return NewCond
#hide
show_doc(root_development)
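# The potential-depth curve used in the function, Zr = Zini + (Zmax - Zini) * X**(1/fshape_r) with X = (tAdj - t0)/(tmax - t0), can be sketched on its own (the parameter values below are illustrative):

```python
import numpy as np

def potential_root_depth(t, t0, tmax, Zini, Zmax, fshape_r):
    # Clamp outside the expansion window, power curve inside it
    if t <= t0:
        return Zini
    if t >= tmax:
        return Zmax
    X = (t - t0) / (tmax - t0)
    return Zini + (Zmax - Zini) * np.power(X, 1 / fshape_r)

# fshape_r = 1 gives linear growth between t0 and tmax:
# halfway through the window the depth is halfway between Zini and Zmax
print(potential_root_depth(t=50, t0=0, tmax=100, Zini=0.3, Zmax=1.3, fshape_r=1.0))
```

Larger `fshape_r` values front-load the expansion, which is why the function applies the `1/fshape_r` exponent.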
# +
#export
@njit()
def pre_irrigation(prof,Crop,InitCond,GrowingSeason,IrrMngt):
"""
Function to calculate pre-irrigation when in net irrigation mode
<a href="../pdfs/ac_ref_man_1.pdf#page=40" target="_blank">Reference Manual: Net irrigation description</a> (pg. 31)
*Arguments:*
`prof`: `SoilProfileClass` : Soil object containing soil parameters
`Crop`: `CropStruct` : Crop object containing Crop parameters
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`GrowingSeason`: `bool` : is growing season (True or False)
`IrrMngt`: `IrrMngtStruct` : object containing irrigation management parameters
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
`PreIrr`: `float` : Pre-irrigation applied on current day (mm)
"""
# Store initial conditions for updating ##
NewCond = InitCond
## Calculate pre-irrigation needs ##
if GrowingSeason == True:
if (IrrMngt.IrrMethod != 4) or (NewCond.DAP != 1):
# No pre-irrigation as not in net irrigation mode or not on first day
# of the growing season
PreIrr = 0
else:
# Determine compartments covered by the root zone
rootdepth = round(max(NewCond.Zroot,Crop.Zmin),2)
compRz = np.argwhere(prof.dzsum>=rootdepth).flatten()[0]
PreIrr=0
for ii in range(int(compRz)):
# Determine critical water content threshold
thCrit = prof.th_wp[ii]+((IrrMngt.NetIrrSMT/100)*(prof.th_fc[ii]-prof.th_wp[ii]))
# Check if pre-irrigation is required
if NewCond.th[ii] < thCrit:
PreIrr = PreIrr+((thCrit-NewCond.th[ii])*1000*prof.dz[ii])
NewCond.th[ii] = thCrit
else:
PreIrr = 0
return NewCond, PreIrr
# -
#hide
show_doc(pre_irrigation)
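# The per-compartment threshold used above, thCrit = th_wp + (NetIrrSMT/100)*(th_fc - th_wp), and the resulting depth of water to add can be checked with one hand calculation (the values below are illustrative):

```python
th_wp, th_fc = 0.10, 0.30   # wilting point / field capacity (m3/m3)
NetIrrSMT = 50.0            # net irrigation soil-moisture target (%)
th, dz = 0.15, 0.10         # current water content, compartment depth (m)

# Threshold halfway between wilting point and field capacity
thCrit = th_wp + (NetIrrSMT / 100) * (th_fc - th_wp)   # 0.20 m3/m3
# Depth of water (mm) needed to raise this compartment to the threshold
PreIrr = max(thCrit - th, 0) * 1000 * dz               # 5.0 mm

print(thCrit, PreIrr)
```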
# +
#export
@njit()
def drainage(prof,th_init,th_fc_Adj_init):
"""
Function to redistribute stored soil water
<a href="../pdfs/ac_ref_man_3.pdf#page=51" target="_blank">Reference Manual: drainage calculations</a> (pg. 42-65)
*Arguments:*
`prof`: `SoilProfileClass` : jit class object containing soil parameters
`th_init`: `np.array` : initial water content
`th_fc_Adj_init`: `np.array` : adjusted water content at field capacity
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
`DeepPerc`: `float` : Total deep percolation
`FluxOut`: `array-like` : Flux of water out of each compartment
"""
# Store initial conditions in new structure for updating %%
# NewCond = InitCond
# th_init = InitCond.th
# th_fc_Adj_init = InitCond.th_fc_Adj
# Preallocate arrays %%
thnew = np.zeros(th_init.shape[0])
FluxOut = np.zeros(th_init.shape[0])
# Initialise counters and states %%
drainsum = 0
# Calculate drainage and updated water contents %%
for ii in range(th_init.shape[0]):
# Specify layer for compartment
cth_fc = prof.th_fc[ii]
cth_s = prof.th_s[ii]
ctau = prof.tau[ii]
cdz = prof.dz[ii]
cdzsum = prof.dzsum[ii]
cKsat = prof.Ksat[ii]
# Calculate drainage ability of compartment ii
if th_init[ii] <= th_fc_Adj_init[ii]:
dthdt = 0
elif th_init[ii] >= cth_s:
dthdt = ctau*(cth_s - cth_fc)
if (th_init[ii]-dthdt) < th_fc_Adj_init[ii]:
dthdt = th_init[ii]-th_fc_Adj_init[ii]
else:
dthdt = ctau*(cth_s-cth_fc)*((np.exp(th_init[ii] - cth_fc)-1)/(np.exp(cth_s-cth_fc)-1))
if (th_init[ii]-dthdt) < th_fc_Adj_init[ii]:
dthdt = th_init[ii]-th_fc_Adj_init[ii]
# Drainage from compartment ii (mm)
draincomp = dthdt*cdz*1000
# Check drainage ability of compartment ii against cumulative drainage
# from compartments above
excess = 0
prethick = cdzsum-cdz
drainmax = dthdt*1000*prethick
if drainsum <= drainmax:
drainability = True
else:
drainability = False
# Drain compartment ii
if drainability == True:
# No storage needed. Update water content in compartment ii
thnew[ii] = th_init[ii]-dthdt
# Update cumulative drainage (mm)
drainsum = drainsum+draincomp
# Restrict cumulative drainage to saturated hydraulic
# conductivity and adjust excess drainage flow
if drainsum > cKsat:
excess = excess+drainsum-cKsat
drainsum = cKsat
elif drainability == False:
# Storage is needed
dthdt = drainsum/(1000*prethick)
# Calculate value of theta (thX) needed to provide a
# drainage ability equal to cumulative drainage
if dthdt <= 0 :
thX = th_fc_Adj_init[ii]
elif ctau > 0:
A = 1+((dthdt*(np.exp(cth_s-cth_fc)-1)) \
/(ctau*(cth_s-cth_fc)))
thX = cth_fc+np.log(A)
if thX < th_fc_Adj_init[ii]:
thX = th_fc_Adj_init[ii]
else:
thX = cth_s+0.01
# Check thX against hydraulic properties of current soil layer
if thX <= cth_s:
# Increase compartment ii water content with cumulative
# drainage
thnew[ii] = th_init[ii]+(drainsum/(1000*cdz))
# Check updated water content against thX
if thnew[ii] > thX:
# Cumulative drainage is the drainage difference
# between theta_x and new theta plus drainage ability
# at theta_x.
drainsum = (thnew[ii]-thX)*1000*cdz
# Calculate drainage ability for thX
if thX <= th_fc_Adj_init[ii]:
dthdt = 0
elif thX >= cth_s:
dthdt = ctau*(cth_s-cth_fc)
if (thX-dthdt) < th_fc_Adj_init[ii]:
dthdt = thX-th_fc_Adj_init[ii]
else:
dthdt = ctau*(cth_s-cth_fc)* \
((np.exp(thX-cth_fc)-1)/(np.exp(cth_s-cth_fc)-1))
if (thX-dthdt) < th_fc_Adj_init[ii]:
dthdt = thX-th_fc_Adj_init[ii]
# Update drainage total
drainsum = drainsum+(dthdt*1000*cdz)
# Restrict cumulative drainage to saturated hydraulic
# conductivity and adjust excess drainage flow
if drainsum > cKsat:
excess = excess+drainsum-cKsat
drainsum = cKsat
# Update water content
thnew[ii] = thX-dthdt
elif thnew[ii] > th_fc_Adj_init[ii]:
# Calculate drainage ability for updated water content
if thnew[ii] <= th_fc_Adj_init[ii]:
dthdt = 0
elif thnew[ii] >= cth_s:
dthdt = ctau*(cth_s-cth_fc)
if (thnew[ii]-dthdt) < th_fc_Adj_init[ii]:
dthdt = thnew[ii]-th_fc_Adj_init[ii]
else:
dthdt = ctau*(cth_s-cth_fc)* \
((np.exp(thnew[ii]-cth_fc)-1)/ \
(np.exp(cth_s-cth_fc)-1))
if (thnew[ii]-dthdt) < th_fc_Adj_init[ii]:
dthdt = thnew[ii]-th_fc_Adj_init[ii]
# Update water content in compartment ii
thnew[ii] = thnew[ii]-dthdt
# Update cumulative drainage
drainsum = dthdt*1000*cdz
# Restrict cumulative drainage to saturated hydraulic
# conductivity and adjust excess drainage flow
if drainsum > cKsat:
excess = excess+drainsum-cKsat
drainsum = cKsat
else:
# Drainage and cumulative drainage are zero as water
# content has not risen above field capacity in
# compartment ii.
drainsum = 0
elif thX > cth_s:
# Increase water content in compartment ii with cumulative
# drainage from above
thnew[ii] = th_init[ii]+(drainsum/(1000*cdz))
# Check new water content against hydraulic properties of soil
# layer
if thnew[ii] <= cth_s:
if thnew[ii] > th_fc_Adj_init[ii]:
# Calculate new drainage ability
if thnew[ii] <= th_fc_Adj_init[ii]:
dthdt = 0
elif thnew[ii] >= cth_s:
dthdt = ctau*(cth_s-cth_fc)
if (thnew[ii]-dthdt) < th_fc_Adj_init[ii]:
dthdt = thnew[ii]-th_fc_Adj_init[ii]
else:
dthdt = ctau*(cth_s-cth_fc)* \
((np.exp(thnew[ii]-cth_fc)-1)/ \
(np.exp(cth_s-cth_fc)-1))
if (thnew[ii]-dthdt) < th_fc_Adj_init[ii]:
dthdt = thnew[ii]-th_fc_Adj_init[ii]
# Update water content in compartment ii
thnew[ii] = thnew[ii]-dthdt
# Update cumulative drainage
drainsum = dthdt*1000*cdz
# Restrict cumulative drainage to saturated hydraulic
# conductivity and adjust excess drainage flow
if drainsum > cKsat:
excess = excess+drainsum-cKsat
drainsum = cKsat
else:
drainsum = 0
elif thnew[ii] > cth_s:
# Calculate excess drainage above saturation
excess = (thnew[ii]-cth_s)*1000*cdz
# Calculate drainage ability for updated water content
if thnew[ii] <= th_fc_Adj_init[ii]:
dthdt = 0
elif thnew[ii] >= cth_s:
dthdt = ctau*(cth_s-cth_fc)
if (thnew[ii]-dthdt) < th_fc_Adj_init[ii]:
dthdt = thnew[ii]-th_fc_Adj_init[ii]
else:
dthdt = ctau*(cth_s-cth_fc)* \
((np.exp(thnew[ii]-cth_fc)-1)/ \
(np.exp(cth_s-cth_fc)-1))
if (thnew[ii]-dthdt) < th_fc_Adj_init[ii]:
dthdt = thnew[ii]-th_fc_Adj_init[ii]
# Update water content in compartment ii
thnew[ii] = cth_s-dthdt
# Update drainage from compartment ii
draincomp = dthdt*1000*cdz
# Update maximum drainage
drainmax = dthdt*1000*prethick
# Update excess drainage
if drainmax > excess:
drainmax = excess
excess = excess-drainmax
# Update drainsum and restrict to saturated hydraulic
# conductivity of soil layer
drainsum = draincomp+drainmax
if drainsum > cKsat:
excess = excess+drainsum-cKsat
drainsum = cKsat
# Store output flux from compartment ii
FluxOut[ii] = drainsum
# Redistribute excess in compartment above
if excess > 0:
precomp = ii+1
while (excess>0) and (precomp!=0):
# Update compartment counter
precomp = precomp-1
# Update layer counter
#precompdf = Soil.Profile.Comp[precomp]
# Update flux from compartment
if precomp < ii:
FluxOut[precomp] = FluxOut[precomp]-excess
# Increase water content to store excess
thnew[precomp] = thnew[precomp]+(excess/(1000*prof.dz[precomp]))
# Limit water content to saturation and adjust excess counter
if thnew[precomp] > prof.th_s[precomp]:
excess = (thnew[precomp]-prof.th_s[precomp])*1000*prof.dz[precomp]
thnew[precomp] = prof.th_s[precomp]
else:
excess = 0
## Update conditions and outputs ##
# Total deep percolation (mm)
DeepPerc = drainsum
# Water contents
#NewCond.th = thnew
return thnew,DeepPerc,FluxOut
# -
#hide
show_doc(drainage)
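# The drainage loop above repeatedly evaluates a compartment's drainage
# ability: zero at or below field capacity, the maximum `tau*(th_s-th_fc)`
# at saturation, and an exponential interpolation in between. A minimal
# standalone sketch of that curve (illustrative loam-like hydraulic values,
# and ignoring the water-table-adjusted field capacity `th_fc_Adj`):

```python
import numpy as np

def drainage_ability(th, th_fc, th_s, tau):
    """Drainage ability dtheta/dt, as evaluated inside the drainage loop above."""
    if th <= th_fc:
        return 0.0                        # at/below field capacity: no drainage
    if th >= th_s:
        return tau * (th_s - th_fc)       # at/above saturation: maximum rate
    # exponential interpolation between field capacity and saturation
    return tau * (th_s - th_fc) * ((np.exp(th - th_fc) - 1) / (np.exp(th_s - th_fc) - 1))

# illustrative hydraulic properties (assumed, not a calibrated profile)
th_fc, th_s, tau = 0.29, 0.46, 0.5
dthdt_mid = drainage_ability(0.38, th_fc, th_s, tau)
```

# The drainage depth in mm then follows as `dthdt*dz*1000`, exactly as in
# `draincomp` above.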
#export
@njit()
def rainfall_partition(P,InitCond,
FieldMngt,
Soil_CN, Soil_AdjCN, Soil_zCN, Soil_nComp,prof):
"""
Function to partition rainfall into surface runoff and infiltration using the curve number approach
<a href="../pdfs/ac_ref_man_3.pdf#page=57" target="_blank">Reference Manual: rainfall partition calculations</a> (pg. 48-51)
*Arguments:*
`P`: `float` : Precipitation on current day
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`FieldMngt`: `FieldMngtStruct` : field management params
`Soil_CN`: `float` : curve number
`Soil_AdjCN`: `float` : adjusted curve number
`Soil_zCN`: `float` : depth of top soil used to adjust curve number
`Soil_nComp`: `float` : number of compartments
`prof`: `SoilProfileClass` : Soil object
*Returns:*
`Runoff`: `float` : Total Surface Runoff
`Infl`: `float` : Total Infiltration
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
"""
# NOTE: this could be made faster by returning early when P == 0
## Store initial conditions for updating ##
NewCond = InitCond
## Calculate runoff ##
if (FieldMngt.SRinhb == False) and ((FieldMngt.Bunds == False) or (FieldMngt.zBund < 0.001)):
# Surface runoff is not inhibited and no soil bunds are on field
# Reset submerged days
NewCond.DaySubmerged = 0
# Adjust curve number for field management practices
CN = Soil_CN*(1+(FieldMngt.CNadjPct/100))
if Soil_AdjCN == 1: # Adjust CN for antecedent moisture
# Calculate upper and lower curve number bounds
CNbot = round(1.4*(np.exp(-14*np.log(10)))+(0.507*CN)-(0.00374*CN**2)+(0.0000867*CN**3))
CNtop = round(5.6*(np.exp(-14*np.log(10)))+(2.33*CN)-(0.0209*CN**2)+(0.000076*CN**3))
# Check which compartment cover depth of top soil used to adjust
# curve number
comp_sto_array = prof.dzsum[prof.dzsum>=Soil_zCN]
if comp_sto_array.shape[0]==0:
comp_sto = int(Soil_nComp)
else:
comp_sto = int(Soil_nComp - comp_sto_array.shape[0])
# Calculate weighting factors by compartment
xx = 0
wrel = np.zeros(comp_sto)
for ii in range(comp_sto):
if prof.dzsum[ii] > Soil_zCN:
prof.dzsum[ii] = Soil_zCN
wx = 1.016*(1-np.exp(-4.16*(prof.dzsum[ii]/Soil_zCN)))
wrel[ii] = wx-xx
if wrel[ii] < 0:
wrel[ii] = 0
elif wrel[ii] > 1:
wrel[ii] = 1
xx = wx
# Calculate relative wetness of top soil
wet_top = 0
for ii in range(comp_sto):
th = max(prof.th_wp[ii],InitCond.th[ii])
wet_top = wet_top+(wrel[ii]*((th-prof.th_wp[ii])/(prof.th_fc[ii]-prof.th_wp[ii])))
# Calculate adjusted curve number
if wet_top > 1:
wet_top = 1
elif wet_top < 0:
wet_top = 0
CN = round(CNbot+(CNtop-CNbot)*wet_top)
# Partition rainfall into runoff and infiltration (mm)
S = (25400/CN)-254
term = P-((5/100)*S)
if term <= 0:
Runoff = 0
Infl = P
else:
Runoff = (term**2)/(P+(1-(5/100))*S)
Infl = P-Runoff
else:
# Bunds on field, therefore no surface runoff
Runoff = 0
Infl = P
return Runoff,Infl,NewCond
#hide
show_doc(rainfall_partition)
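# The runoff partition above is the SCS curve-number method with an initial
# abstraction of 5% of the potential retention `S`. A self-contained sketch of
# just that partition step (P and CN values below are illustrative):

```python
def cn_runoff(P, CN):
    """SCS curve-number rainfall partition, as used in rainfall_partition above."""
    S = (25400.0 / CN) - 254.0      # potential maximum retention (mm)
    term = P - 0.05 * S             # rainfall minus 5% initial abstraction
    if term <= 0:
        return 0.0, P               # all rainfall infiltrates
    runoff = term ** 2 / (P + 0.95 * S)
    return runoff, P - runoff       # (surface runoff, infiltration), both mm

runoff, infl = cn_runoff(P=40.0, CN=75)
```

# Runoff and infiltration always sum back to P, and small rainfall events
# below the initial abstraction produce no runoff at all.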
# +
#export
@njit()
def irrigation(InitCond,IrrMngt,Crop,prof,Soil_zTop,GrowingSeason,Rain,Runoff):
"""
Function to get irrigation depth for current day
<a href="../pdfs/ac_ref_man_1.pdf#page=40" target="_blank">Reference Manual: irrigation description</a> (pg. 31-32)
*Arguments:*
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`IrrMngt`: `IrrMngtStruct`: jit class object containing irrigation management parameters
`Crop`: `CropClass` : Crop object containing Crop parameters
`prof`: `SoilProfileClass` : Soil profile object containing soil parameters
`Soil_zTop`: `float` : top soil depth
`GrowingSeason`: `bool` : is growing season (True or False)
`Rain`: `float` : daily precipitation mm
`Runoff`: `float` : surface runoff on current day
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
`Irr`: `float` : Irrigation applied on current day mm
"""
## Store initial conditions for updating ##
NewCond = InitCond
## Determine irrigation depth (mm/day) to be applied ##
if GrowingSeason == True:
# Calculate root zone water content and depletion
WrAct,Dr_,TAW_,thRZ = root_zone_water(prof,float(NewCond.Zroot),NewCond.th,Soil_zTop,float(Crop.Zmin),Crop.Aer)
# Use root zone depletions and TAW only for triggering irrigation
Dr = Dr_.Rz
TAW = TAW_.Rz
# Determine adjustment for inflows and outflows on current day #
if thRZ.Act > thRZ.FC:
rootdepth = max(InitCond.Zroot,Crop.Zmin)
AbvFc = (thRZ.Act-thRZ.FC)*1000*rootdepth
else:
AbvFc = 0
WCadj = InitCond.Tpot+InitCond.Epot-Rain+Runoff-AbvFc
NewCond.Depletion = Dr+WCadj
NewCond.TAW = TAW
# Update growth stage if it is first day of a growing season
if NewCond.DAP == 1:
NewCond.GrowthStage = 1
if IrrMngt.IrrMethod ==0:
Irr = 0
elif IrrMngt.IrrMethod ==1:
Dr = NewCond.Depletion / NewCond.TAW
index = int(NewCond.GrowthStage)-1
if Dr > 1-IrrMngt.SMT[index]/100:
# Irrigation occurs
IrrReq = max(0,NewCond.Depletion)
# Adjust irrigation requirements for application efficiency
EffAdj = ((100-IrrMngt.AppEff)+100)/100
IrrReq = IrrReq*EffAdj
# Limit irrigation to maximum depth
Irr = min(IrrMngt.MaxIrr,IrrReq)
else:
Irr = 0
elif IrrMngt.IrrMethod ==2: # Irrigation - fixed interval
Dr = NewCond.Depletion
# Get number of days in growing season so far (subtract 1 so that
# always irrigate first on day 1 of each growing season)
nDays = NewCond.DAP-1
if nDays % IrrMngt.IrrInterval == 0:
# Irrigation occurs
IrrReq = max(0,Dr)
# Adjust irrigation requirements for application efficiency
EffAdj = ((100-IrrMngt.AppEff)+100)/100
IrrReq = IrrReq*EffAdj
# Limit irrigation to maximum depth
Irr = min(IrrMngt.MaxIrr,IrrReq)
else:
# No irrigation
Irr = 0
elif IrrMngt.IrrMethod == 3: # Irrigation - pre-defined schedule
# Get current date
idx = NewCond.TimeStepCounter
# Find irrigation value corresponding to current date
Irr = IrrMngt.Schedule[idx]
assert Irr>=0
Irr = min(IrrMngt.MaxIrr,Irr)
elif IrrMngt.IrrMethod ==4: # Irrigation - net irrigation
# Net irrigation calculation performed after transpiration, so
# irrigation is zero here
Irr = 0
elif IrrMngt.IrrMethod ==5: # depth applied each day (usually specified outside of model)
Irr = min(IrrMngt.MaxIrr,IrrMngt.depth)
# else:
# assert 1 ==2, f'somethings gone wrong in irrigation method:{IrrMngt.IrrMethod}'
Irr = max(0,Irr)
elif GrowingSeason == False:
# No irrigation outside growing season
Irr = 0
NewCond.IrrCum = 0
if NewCond.IrrCum+Irr > IrrMngt.MaxIrrSeason:
Irr = max(0,IrrMngt.MaxIrrSeason-NewCond.IrrCum)
# Update cumulative irrigation counter for growing season
NewCond.IrrCum = NewCond.IrrCum+Irr
return NewCond,Irr
# -
#hide
show_doc(irrigation)
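# The soil-moisture-threshold rule (`IrrMethod == 1`) above triggers irrigation
# when relative depletion `Dr/TAW` exceeds `1 - SMT/100`, inflates the
# requirement for application losses, and caps it at the maximum depth. A
# standalone sketch of just that rule (all numeric values are illustrative):

```python
def smt_irrigation(depletion, taw, smt_pct, app_eff, max_irr):
    """Soil-moisture-threshold irrigation rule, as in irrigation above (IrrMethod == 1)."""
    if depletion / taw > 1 - smt_pct / 100:
        irr_req = max(0.0, depletion)
        # inflate requirement for application efficiency, as in the code above
        eff_adj = ((100 - app_eff) + 100) / 100
        return min(max_irr, irr_req * eff_adj)   # cap at maximum daily depth
    return 0.0

# depletion 30 mm of a 100 mm TAW, 80% threshold, 90% efficiency, 25 mm cap
irr = smt_irrigation(depletion=30.0, taw=100.0, smt_pct=80.0, app_eff=90.0, max_irr=25.0)
```

# With 30% relative depletion against a 20% trigger, irrigation fires and is
# capped at the 25 mm maximum; a 10 mm depletion would trigger nothing.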
#export
@njit()
def infiltration(prof,InitCond,Infl,Irr,IrrMngt_AppEff,
FieldMngt,
FluxOut,DeepPerc0,Runoff0,GrowingSeason):
"""
Function to infiltrate incoming water (rainfall and irrigation)
<a href="../pdfs/ac_ref_man_3.pdf#page=51" target="_blank">Reference Manual: drainage calculations</a> (pg. 42-65)
*Arguments:*
`prof`: `SoilProfileClass` : Soil object containing soil parameters
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`Infl`: `float` : Infiltration so far
`Irr`: `float` : Irrigation on current day
`IrrMngt_AppEff`: `float`: irrigation application efficiency
`FieldMngt`: `FieldMngtStruct` : field management params
`FluxOut`: `np.array` : flux of water out of each compartment
`DeepPerc0`: `float` : initial Deep Percolation
`Runoff0`: `float` : initial Surface Runoff
`GrowingSeason`: `bool` : is growing season (True or False)
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
`DeepPerc`: `float` : Total Deep Percolation
`RunoffTot`: `float` : Total surface Runoff
`Infl`: `float` : Infiltration on current day
`FluxOut`: `np.array` : flux of water out of each compartment
"""
## Store initial conditions in new structure for updating ##
NewCond = InitCond
InitCond_SurfaceStorage = InitCond.SurfaceStorage
InitCond_th_fc_Adj = InitCond.th_fc_Adj
InitCond_th = InitCond.th
thnew = NewCond.th.copy()
Soil_nComp = thnew.shape[0]
## Update infiltration rate for irrigation ##
# Note: irrigation amount adjusted for specified application efficiency
if GrowingSeason == True:
Infl = Infl+(Irr*(IrrMngt_AppEff/100))
assert Infl >= 0
## Determine surface storage (if bunds are present) ##
if FieldMngt.Bunds:
# Bunds on field
if FieldMngt.zBund > 0.001:
# Bund height is large enough to be considered
InflTot = Infl+NewCond.SurfaceStorage
if InflTot > 0:
# Update surface storage and infiltration storage
if InflTot > prof.Ksat[0]:
# Infiltration limited by saturated hydraulic conductivity
# of surface soil layer
ToStore = prof.Ksat[0]
# Additional water ponds on surface
NewCond.SurfaceStorage = InflTot-prof.Ksat[0]
else:
# All water infiltrates
ToStore = InflTot
# Reset surface storage depth to zero
NewCond.SurfaceStorage = 0
# Calculate additional runoff
if NewCond.SurfaceStorage > (FieldMngt.zBund*1000):
# Water overtops bunds and runs off
RunoffIni = NewCond.SurfaceStorage - (FieldMngt.zBund*1000)
# Surface storage equal to bund height
NewCond.SurfaceStorage = FieldMngt.zBund*1000
else:
# No overtopping of bunds
RunoffIni = 0
else:
# No storage or runoff
ToStore = 0
RunoffIni = 0
elif FieldMngt.Bunds == False:
# No bunds on field
if Infl > prof.Ksat[0]:
# Infiltration limited by saturated hydraulic conductivity of top
# soil layer
ToStore = prof.Ksat[0]
# Additional water runs off
RunoffIni = Infl-prof.Ksat[0]
else:
# All water infiltrates
ToStore = Infl
RunoffIni = 0
# Update surface storage
NewCond.SurfaceStorage = 0
# Add any water remaining behind bunds to surface runoff (needed for
# days when bunds are removed to maintain water balance)
RunoffIni = RunoffIni+InitCond_SurfaceStorage
## Initialise counters
ii = -1
Runoff = 0
## Infiltrate incoming water ##
if ToStore > 0:
while (ToStore > 0) and (ii < Soil_nComp-1):
# Update compartment counter
ii = ii+1
# Get soil layer
# Calculate saturated drainage ability
dthdtS = prof.tau[ii]*(prof.th_s[ii]-prof.th_fc[ii])
# Calculate drainage factor
factor = prof.Ksat[ii]/(dthdtS*1000*prof.dz[ii])
# Calculate drainage ability required
dthdt0 = ToStore/(1000*prof.dz[ii])
# Check drainage ability
if dthdt0 < dthdtS:
# Calculate water content, thX, needed to meet drainage dthdt0
if dthdt0 <= 0:
theta0 = InitCond_th_fc_Adj[ii]
else:
A = 1+((dthdt0*(np.exp(prof.th_s[ii]-prof.th_fc[ii])-1)) \
/(prof.tau[ii]*(prof.th_s[ii]-prof.th_fc[ii])))
theta0 = prof.th_fc[ii]+np.log(A)
# Limit thX to between saturation and field capacity
if theta0 > prof.th_s[ii]:
theta0 = prof.th_s[ii]
elif theta0 <= InitCond_th_fc_Adj[ii]:
theta0 = InitCond_th_fc_Adj[ii]
dthdt0 = 0
else:
# Limit water content and drainage to saturation
theta0 = prof.th_s[ii]
dthdt0 = dthdtS
# Calculate maximum water flow through compartment ii
drainmax = factor*dthdt0*1000*prof.dz[ii]
# Calculate total drainage from compartment ii
drainage = drainmax+FluxOut[ii]
# Limit drainage to saturated hydraulic conductivity
if drainage > prof.Ksat[ii]:
drainmax = prof.Ksat[ii]-FluxOut[ii]
# Calculate difference between threshold and current water contents
diff = theta0-InitCond_th[ii]
if diff > 0:
# Increase water content of compartment ii
thnew[ii] = thnew[ii]+(ToStore/(1000*prof.dz[ii]))
if thnew[ii] > theta0:
# Water remaining that can infiltrate to compartments below
ToStore = (thnew[ii]-theta0)*1000*prof.dz[ii]
thnew[ii] = theta0
else:
# All infiltrating water has been stored
ToStore = 0
# Update outflow from current compartment (drainage + infiltration
# flows)
FluxOut[ii] = FluxOut[ii]+ToStore
# Calculate back-up of water into compartments above
excess = ToStore-drainmax
if excess < 0 :
excess = 0
# Update water to store
ToStore = ToStore-excess
# Redistribute excess to compartments above
if excess > 0:
precomp = ii+1
while (excess>0) and (precomp!=0):
# Keep storing in compartments above until soil surface is
# reached
# Update compartment counter
precomp = precomp-1
# Update layer number
# Update outflow from compartment
FluxOut[precomp] = FluxOut[precomp]-excess
# Update water content
thnew[precomp] = thnew[precomp]+(excess/(prof.dz[precomp]*1000))
# Limit water content to saturation
if thnew[precomp] > prof.th_s[precomp]:
# Update excess to store
excess = (thnew[precomp]-prof.th_s[precomp])*1000*prof.dz[precomp]
# Set water content to saturation
thnew[precomp] = prof.th_s[precomp]
else:
# All excess stored
excess = 0
if excess > 0:
# Any leftover water not stored becomes runoff
Runoff = Runoff+excess
# Infiltration left to store after bottom compartment becomes deep
# percolation (mm)
DeepPerc = ToStore
else:
# No infiltration
DeepPerc = 0
Runoff = 0
## Update total runoff ##
Runoff = Runoff+RunoffIni
## Update surface storage (if bunds are present) ##
if Runoff > RunoffIni:
if FieldMngt.Bunds:
if FieldMngt.zBund > 0.001:
# Increase surface storage
NewCond.SurfaceStorage = NewCond.SurfaceStorage+(Runoff-RunoffIni)
# Limit surface storage to bund height
if NewCond.SurfaceStorage > (FieldMngt.zBund*1000):
# Additional water above top of bunds becomes runoff
Runoff = RunoffIni+(NewCond.SurfaceStorage-(FieldMngt.zBund*1000))
# Set surface storage to bund height
NewCond.SurfaceStorage = FieldMngt.zBund*1000
else:
# No additional overtopping of bunds
Runoff = RunoffIni
## Store updated water contents ##
NewCond.th = thnew
## Update deep percolation, surface runoff, and infiltration values ##
DeepPerc = DeepPerc+DeepPerc0
Infl = Infl-Runoff
RunoffTot = Runoff+Runoff0
return NewCond,DeepPerc,RunoffTot,Infl,FluxOut
#hide
show_doc(infiltration)
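# The infiltration loop above inverts the drainage-ability curve: given a
# required drainage rate `dthdt0`, it solves for the water content `theta0`
# that would produce it. A round-trip check that the inversion and the forward
# curve agree (hydraulic values are illustrative):

```python
import numpy as np

def theta_for_drainage(dthdt0, th_fc, th_s, tau):
    """Water content giving drainage ability dthdt0, as solved for in infiltration above."""
    A = 1 + (dthdt0 * (np.exp(th_s - th_fc) - 1)) / (tau * (th_s - th_fc))
    return th_fc + np.log(A)

def drainage_ability(th, th_fc, th_s, tau):
    """Forward drainage-ability curve (mid-range branch)."""
    return tau * (th_s - th_fc) * ((np.exp(th - th_fc) - 1) / (np.exp(th_s - th_fc) - 1))

# illustrative values (assumed); target rate below the saturated maximum
th_fc, th_s, tau = 0.29, 0.46, 0.5
target = 0.03
th0 = theta_for_drainage(target, th_fc, th_s, tau)
```

# Plugging `th0` back into the forward curve recovers `target` exactly, which
# is why the code above can use `theta0` as a storage threshold.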
# +
#export
@njit()
def capillary_rise(prof,Soil_nLayer,Soil_fshape_cr,NewCond,FluxOut,ParamStruct_WaterTable):
"""
Function to calculate capillary rise from a shallow groundwater table
<a href="../pdfs/ac_ref_man_3.pdf#page=61" target="_blank">Reference Manual: capillary rise calculations</a> (pg. 52-61)
*Arguments:*
`prof`: `SoilProfileClass` : Soil profile object
`Soil_nLayer`: `int` : number of soil layers
`Soil_fshape_cr`: `float` : capillary rise shape factor
`NewCond`: `InitCondClass` : InitCond object containing model parameters
`FluxOut`: `np.array` : Flux of water out of each soil compartment
`ParamStruct_WaterTable`: `int` : WaterTable present (1:yes, 0:no)
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
`CrTot`: `float` : Total Capillary rise
"""
## Get groundwater table elevation on current day ##
zGW = NewCond.zGW
## Calculate capillary rise ##
if ParamStruct_WaterTable == 0: # No water table present
# Capillary rise is zero
CrTot = 0
elif ParamStruct_WaterTable == 1: # Water table present
# Get maximum capillary rise for bottom compartment
zBot = prof.dzsum[-1]
zBotMid = prof.zMid[-1]
if (prof.Ksat[-1] > 0) and (zGW > 0) and ((zGW-zBotMid) < 4):
if zBotMid >= zGW:
MaxCR = 99
else:
MaxCR = np.exp((np.log(zGW-zBotMid)-prof.bCR[-1])/prof.aCR[-1])
if MaxCR > 99:
MaxCR = 99
else:
MaxCR = 0
######################### this needs fixing, will currently break####################
# # Find top of next soil layer that is not within modelled soil profile
# zTopLayer = 0
# for layeri in np.sort(np.unique(prof.Layer)):
# # Calculate layer thickness
# l_idx = np.argwhere(prof.Layer==layeri).flatten()
# LayThk = prof.dz[l_idx].sum()
# zTopLayer = zTopLayer+LayThk
# # Check for restrictions on upward flow caused by properties of
# # compartments that are not modelled in the soil water balance
# layeri = prof.Layer[-1]
# assert layeri == Soil_nLayer
# while (zTopLayer < zGW) and (layeri < Soil_nLayer):
# # this needs fixing, will currently break
# layeri = layeri+1
# compdf = prof.Layer[layeri]
# if (compdf.Ksat > 0) and (zGW > 0) and ((zGW-zTopLayer) < 4):
# if zTopLayer >= zGW:
# LimCR = 99
# else:
# LimCR = np.exp((np.log(zGW-zTopLayer)-compdf.bCR)/compdf.aCR)
# if LimCR > 99:
# LimCR = 99
# else:
# LimCR = 0
# if MaxCR > LimCR:
# MaxCR = LimCR
# zTopLayer = zTopLayer+compdf.dz
#####################################################################################
# Calculate capillary rise
compi = len(prof.Comp)-1 # Start at bottom of root zone
WCr = 0 # Capillary rise counter
while (round(MaxCR*1000)>0) and (compi > -1) and (round(FluxOut[compi]*1000) == 0):
# Proceed upwards until maximum capillary rise occurs, soil surface
# is reached, or encounter a compartment where downward
# drainage/infiltration has already occurred on current day
# Find layer of current compartment
# Calculate driving force
if (NewCond.th[compi] >= prof.th_wp[compi]) and (Soil_fshape_cr > 0):
Df = 1-(((NewCond.th[compi]-prof.th_wp[compi])/ \
(NewCond.th_fc_Adj[compi]-prof.th_wp[compi]))**Soil_fshape_cr)
if Df > 1:
Df = 1
elif Df < 0:
Df = 0
else:
Df = 1
# Calculate relative hydraulic conductivity
thThr = (prof.th_wp[compi]+prof.th_fc[compi])/2
if NewCond.th[compi] < thThr:
if (NewCond.th[compi] <= prof.th_wp[compi]) or (thThr <= prof.th_wp[compi]):
Krel = 0
else:
Krel = (NewCond.th[compi]-prof.th_wp[compi])/ (thThr-prof.th_wp[compi])
else:
Krel = 1
# Check if room is available to store water from capillary rise
dth = NewCond.th_fc_Adj[compi]-NewCond.th[compi]
# Store water if room is available
if (dth > 0) and ((zBot-prof.dz[compi]/2) < zGW):
dthMax = Krel*Df*MaxCR/(1000*prof.dz[compi])
if dth >= dthMax:
NewCond.th[compi] = NewCond.th[compi]+dthMax
CRcomp = dthMax*1000*prof.dz[compi]
MaxCR = 0
else:
NewCond.th[compi] = NewCond.th_fc_Adj[compi]
CRcomp = dth*1000*prof.dz[compi]
MaxCR = (Krel*MaxCR)-CRcomp
WCr = WCr+CRcomp
# Update bottom elevation of compartment
zBot = zBot-prof.dz[compi]
# Update compartment and layer counters
compi = compi-1
# Update restriction on maximum capillary rise
if compi > -1:
zBotMid = zBot-(prof.dz[compi]/2)
if (prof.Ksat[compi] > 0) and (zGW > 0) and ((zGW-zBotMid) < 4):
if zBotMid >= zGW:
LimCR = 99
else:
LimCR = np.exp((np.log(zGW-zBotMid)-prof.bCR[compi])/prof.aCR[compi])
if LimCR > 99:
LimCR = 99
else:
LimCR = 0
if MaxCR > LimCR:
MaxCR = LimCR
# Store total depth of capillary rise
CrTot = WCr
return NewCond,CrTot
# -
#hide
show_doc(capillary_rise)
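# The maximum capillary rise above is zero when the water table is absent or
# more than 4 m below the compartment midpoint, 99 mm/day when the midpoint is
# below the water table, and otherwise follows the fitted `aCR`/`bCR` curve. A
# standalone sketch (`aCR`/`bCR` are soil-specific fitted parameters; the
# values below are purely illustrative):

```python
import numpy as np

def max_capillary_rise(zGW, zBotMid, aCR, bCR, Ksat):
    """Maximum capillary rise (mm/day) for one compartment, as in capillary_rise above."""
    if Ksat <= 0 or zGW <= 0 or (zGW - zBotMid) >= 4:
        return 0.0                  # no water table influence
    if zBotMid >= zGW:
        return 99.0                 # compartment midpoint below the water table
    return min(99.0, np.exp((np.log(zGW - zBotMid) - bCR) / aCR))

# water table at 2.0 m, compartment midpoint at 1.2 m; aCR/bCR assumed
cr = max_capillary_rise(zGW=2.0, zBotMid=1.2, aCR=-0.4536, bCR=0.7063, Ksat=100.0)
```

# In the loop above this value is then damped by the driving force `Df` and
# relative conductivity `Krel` before any water is actually stored.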
#export
@njit()
def germination(InitCond,Soil_zGerm,prof,Crop_GermThr,Crop_PlantMethod,GDD,GrowingSeason):
"""
Function to check if crop has germinated
<a href="../pdfs/ac_ref_man_3.pdf#page=32" target="_blank">Reference Manual: germination condition</a> (pg. 23)
*Arguments:*
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`Soil_zGerm`: `float` : Soil depth affecting germination
`prof`: `SoilProfileClass` : Soil object containing soil parameters
`Crop_GermThr`: `float` : Crop germination threshold
`Crop_PlantMethod`: `bool` : sown as seedling (True or False)
`GDD`: `float` : Number of Growing Degree Days on current day
`GrowingSeason`: `bool` : is growing season (True or False)
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
"""
## Store initial conditions in new structure for updating ##
NewCond = InitCond
## Check for germination (if in growing season) ##
if GrowingSeason == True:
# Find compartments covered by top soil layer affecting germination
comp_sto = np.argwhere(prof.dzsum>=Soil_zGerm).flatten()[0]
# Calculate water content in top soil layer
Wr = 0
WrFC = 0
WrWP = 0
for ii in range(comp_sto+1):
# Get soil layer
# Determine fraction of compartment covered by top soil layer
if prof.dzsum[ii] > Soil_zGerm:
factor = 1-((prof.dzsum[ii]-Soil_zGerm)/prof.dz[ii])
else:
factor = 1
# Increment actual water storage (mm)
Wr = Wr+round(factor*1000*InitCond.th[ii]*prof.dz[ii],3)
# Increment water storage at field capacity (mm)
WrFC = WrFC+round(factor*1000*prof.th_fc[ii]*prof.dz[ii],3)
# Increment water storage at permanent wilting point (mm)
WrWP = WrWP+round(factor*1000*prof.th_wp[ii]*prof.dz[ii],3)
# Limit actual water storage to not be less than zero
if Wr < 0:
Wr = 0
# Calculate proportional water content
WcProp = 1-((WrFC-Wr)/(WrFC-WrWP))
# Check if water content is above germination threshold
if (WcProp >= Crop_GermThr) and (NewCond.Germination == False):
# Crop has germinated
NewCond.Germination = True
# If crop sown as seedling, turn on seedling protection
if Crop_PlantMethod == True:
NewCond.ProtectedSeed = True
else:
# Crop is transplanted so no protection
NewCond.ProtectedSeed = False
# Increment delayed growth time counters if germination is yet to
# occur, and also set seed protection to False if yet to germinate
if NewCond.Germination == False:
NewCond.DelayedCDs = InitCond.DelayedCDs+1
NewCond.DelayedGDDs = InitCond.DelayedGDDs+GDD
NewCond.ProtectedSeed = False
else:
# Not in growing season so no germination calculation is performed.
NewCond.Germination = False
NewCond.ProtectedSeed = False
NewCond.DelayedCDs = 0
NewCond.DelayedGDDs = 0
return NewCond
#hide
show_doc(germination)
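# The germination trigger above compares a proportional water content of the
# top soil — 0 at wilting point, 1 at field capacity — against the crop's
# germination threshold. The scaling on its own (storages in mm, values
# illustrative):

```python
def proportional_water_content(Wr, WrFC, WrWP):
    """Proportional water content of the germination zone, as in germination above."""
    # 0 when actual storage Wr is at wilting point, 1 at field capacity
    return 1 - ((WrFC - Wr) / (WrFC - WrWP))

# 28 mm stored between a 12 mm wilting-point and 32 mm field-capacity storage
wc = proportional_water_content(Wr=28.0, WrFC=32.0, WrWP=12.0)
```

# Germination occurs once this value reaches `Crop_GermThr`; the code above
# additionally clips `Wr` at zero before applying the formula.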
#export
@njit()
def growth_stage(Crop,InitCond,GrowingSeason):
"""
Function to determine current growth stage of crop
(used only for irrigation soil moisture thresholds)
*Arguments:*
`Crop`: `CropClass` : Crop object containing Crop parameters
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`GrowingSeason`: `bool` : is growing season (True or False)
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
"""
## Store initial conditions in new structure for updating ##
NewCond = InitCond
## Get growth stage (if in growing season) ##
if GrowingSeason == True:
# Adjust time for any delayed growth
if Crop.CalendarType == 1:
tAdj = NewCond.DAP-NewCond.DelayedCDs
elif Crop.CalendarType == 2:
tAdj = NewCond.GDDcum-NewCond.DelayedGDDs
# Update growth stage
if tAdj <= Crop.Canopy10Pct:
NewCond.GrowthStage = 1
elif tAdj <= Crop.MaxCanopy:
NewCond.GrowthStage = 2
elif tAdj <= Crop.Senescence:
NewCond.GrowthStage = 3
elif tAdj > Crop.Senescence:
NewCond.GrowthStage = 4
else:
# Not in growing season so growth stage is set to dummy value
NewCond.GrowthStage = 0
return NewCond
#hide
show_doc(growth_stage)
#export
@njit()
def water_stress(Crop,InitCond,Dr,TAW,Et0,beta):
"""
Function to calculate water stress coefficients
<a href="../pdfs/ac_ref_man_3.pdf#page=18" target="_blank">Reference Manual: water stress equations</a> (pg. 9-13)
*Arguments:*
`Crop`: `CropClass` : Crop Object
`InitCond`: `InitCondClass` : InitCond object
`Dr`: `DrClass` : Depletion object (contains rootzone and top soil depletion totals)
`TAW`: `TAWClass` : TAW object (contains rootzone and top soil total available water)
`Et0`: `float` : Reference Evapotranspiration
`beta`: `bool` : whether to adjust the senescence threshold if early senescence is triggered
*Returns:*
`Ksw`: `KswClass` : Ksw object containing water stress coefficients
"""
## Calculate relative root zone water depletion for each stress type ##
# Number of stress variables
nstress = len(Crop.p_up)
# Store stress thresholds
p_up = np.ones(nstress)*Crop.p_up
p_lo = np.ones(nstress)*Crop.p_lo
if Crop.ETadj == 1:
# Adjust stress thresholds for Et0 on current day (don't do this for
# pollination water stress coefficient)
for ii in range(3):
p_up[ii] = p_up[ii]+(0.04*(5-Et0))*(np.log10(10-9*p_up[ii]))
p_lo[ii] = p_lo[ii]+(0.04*(5-Et0))*(np.log10(10-9*p_lo[ii]))
# Adjust senescence threshold if early senescence is triggered
if (beta == True) and (InitCond.tEarlySen > 0):
p_up[2] = p_up[2]*(1-Crop.beta/100)
# Limit values
p_up = np.maximum(p_up,np.zeros(nstress))
p_lo = np.maximum(p_lo,np.zeros(nstress))
p_up = np.minimum(p_up,np.ones(nstress))
p_lo = np.minimum(p_lo,np.ones(nstress))
# Calculate relative depletion
Drel = np.zeros(nstress)
for ii in range(nstress):
if Dr <= (p_up[ii]*TAW):
# No water stress
Drel[ii] = 0
elif (Dr > (p_up[ii]*TAW)) and (Dr < (p_lo[ii]*TAW)):
# Partial water stress
Drel[ii] = 1-((p_lo[ii]-(Dr/TAW))/(p_lo[ii]-p_up[ii]))
elif Dr >= (p_lo[ii]*TAW):
# Full water stress
Drel[ii] = 1
## Calculate root zone water stress coefficients ##
Ks = np.ones(3)
for ii in range(3):
Ks[ii] = 1-((np.exp(Drel[ii]*Crop.fshape_w[ii])-1) \
/(np.exp(Crop.fshape_w[ii])-1))
Ksw = KswClass()
# Water stress coefficient for leaf expansion
Ksw.Exp = Ks[0]
# Water stress coefficient for stomatal closure
Ksw.Sto = Ks[1]
# Water stress coefficient for senescence
Ksw.Sen = Ks[2]
# Water stress coefficient for pollination failure
Ksw.Pol = 1-Drel[3]
# Mean water stress coefficient for stomatal closure
Ksw.StoLin = 1-Drel[1]
return Ksw
#hide
show_doc(water_stress)
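# The root zone stress coefficients above map relative depletion `Drel` in
# [0, 1] through a convex exponential curve shaped by `fshape_w`: Ks = 1 under
# no stress, Ks = 0 under full stress, and above-linear in between. The curve
# in isolation (shape factor value is illustrative):

```python
import numpy as np

def stress_coefficient(Drel, fshape_w):
    """Water stress coefficient Ks, as computed per stress type in water_stress above."""
    # convex response: mild depletion costs little, severe depletion costs a lot
    return 1 - ((np.exp(Drel * fshape_w) - 1) / (np.exp(fshape_w) - 1))

ks_none = stress_coefficient(0.0, 3.0)   # no depletion beyond the threshold
ks_full = stress_coefficient(1.0, 3.0)   # fully depleted
ks_half = stress_coefficient(0.5, 3.0)   # convexity keeps this above 0.5
```

# Pollination (`Ksw.Pol`) and the linear stomatal coefficient (`Ksw.StoLin`)
# skip this curve and use `1 - Drel` directly, as in the code above.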
#export
@njit()
def cc_development(CCo,CCx,CGC,CDC,dt,Mode,CCx0):
"""
Function to calculate canopy cover development by end of the current simulation day
<a href="../pdfs/ac_ref_man_3.pdf#page=30" target="_blank">Reference Manual: CC development</a> (pg. 21-24)
*Arguments:*
`CCo`: `float` : Fractional canopy cover size at emergence
`CCx`: `float` : Maximum canopy cover (fraction of soil cover)
`CGC`: `float` : Canopy growth coefficient (fraction per GDD)
`CDC`: `float` : Canopy decline coefficient (fraction per GDD/calendar day)
`dt`: `float` : Time delta of canopy growth (1 calendar day or ... GDD)
`Mode`: `str` : Stage of Canopy developement (Growth or Decline)
`CCx0`: `float` : Unadjusted maximum canopy cover (fraction of soil cover), used to scale the decline rate
*Returns:*
`CC`: `float` : Canopy Cover
"""
## Initialise output ##
## Calculate new canopy cover ##
if Mode == 'Growth':
# Calculate canopy growth
# Exponential growth stage
CC = CCo*np.exp(CGC*dt)
if CC > (CCx/2):
# Exponential decay stage
CC = CCx-0.25*(CCx/CCo)*CCx*np.exp(-CGC*dt)
# Limit CC to CCx
if CC > CCx:
CC = CCx
elif Mode=='Decline':
# Calculate canopy decline
if CCx < 0.001:
CC = 0
else:
CC = CCx*(1-0.05*(np.exp(dt*CDC*3.33*((CCx+2.29)/(CCx0+2.29))/(CCx+2.29))-1))
## Limit canopy cover to between 0 and 1 ##
if CC > 1:
CC = 1
elif CC < 0:
CC = 0
return CC
#hide
show_doc(cc_development)
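# The growth branch above switches from exponential growth to exponential
# decay toward `CCx` once canopy cover passes `CCx/2`. A quick check that the
# two pieces meet at exactly that point, and that the decay branch saturates
# at `CCx` (parameter values are illustrative):

```python
import numpy as np

def cc_growth(CCo, CCx, CGC, t):
    """Growth branch of cc_development above: exponential growth, then decay toward CCx."""
    CC = CCo * np.exp(CGC * t)
    if CC > CCx / 2:
        # exponential decay stage
        CC = CCx - 0.25 * (CCx / CCo) * CCx * np.exp(-CGC * t)
    return min(CC, CCx)

# illustrative parameters (assumed)
CCo, CCx, CGC = 0.05, 0.95, 0.1
t_switch = np.log(CCx / (2 * CCo)) / CGC          # time where CCo*exp(CGC*t) = CCx/2
below = CCo * np.exp(CGC * t_switch)              # growth branch at the switch
decay = CCx - 0.25 * (CCx / CCo) * CCx * np.exp(-CGC * t_switch)  # decay branch there
```

# Both branches evaluate to `CCx/2` at `t_switch`, so the piecewise curve is
# continuous, and for large `t` the decay branch approaches `CCx` from below.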
#export
@njit()
def cc_required_time(CCprev,CCo,CCx,CGC,CDC,Mode):
"""
Function to find time required to reach CC at end of previous day, given current CGC or CDC
<a href="../pdfs/ac_ref_man_3.pdf#page=30" target="_blank">Reference Manual: CC development</a> (pg. 21-24)
*Arguments:*
`CCprev`: `float` : Canopy Cover at previous timestep.
`CCo`: `float` : Fractional canopy cover size at emergence
`CCx`: `float` : Maximum canopy cover (fraction of soil cover)
`CGC`: `float` : Canopy growth coefficient (fraction per GDD)
`CDC`: `float` : Canopy decline coefficient (fraction per GDD/calendar day)
`Mode`: `str` : Which coefficient to invert for: 'CGC' (growth) or 'CDC' (decline)
*Returns:*
`tReq`: `float` : time required to reach CC at end of previous day
"""
## Get CGC and/or time (GDD or CD) required to reach CC on previous day ##
if Mode=='CGC':
if CCprev <= (CCx/2):
#print(CCprev,CCo,(tSum-dt),tSum,dt)
CGCx = (np.log(CCprev/CCo))
#print(np.log(CCprev/CCo),(tSum-dt),CGCx)
else:
#print(CCx,CCo,CCprev)
CGCx = (np.log((0.25*CCx*CCx/CCo)/(CCx-CCprev)))
tReq = (CGCx/CGC)
elif Mode=='CDC':
tReq = (np.log(1+(1-CCprev/CCx)/0.05))/(CDC/CCx)
return tReq
#hide
show_doc(cc_required_time)
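# In CGC mode, for `CCprev <= CCx/2`, the function above simply inverts the
# exponential growth curve: `tReq = log(CCprev/CCo)/CGC`. A round-trip check
# that plugging `tReq` back into the growth curve recovers `CCprev`
# (parameter values are illustrative):

```python
import numpy as np

# illustrative values (assumed); CCprev is below CCx/2 so the
# exponential-growth branch of the inversion applies
CCo, CCx, CGC = 0.05, 0.95, 0.1
CCprev = 0.30

tReq = np.log(CCprev / CCo) / CGC       # time to reach CCprev at rate CGC
recovered = CCo * np.exp(CGC * tReq)    # forward growth curve at tReq
```

# The same inversion logic, with the decay-stage formula, handles
# `CCprev > CCx/2` and the 'CDC' mode in the function above.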
#export
@njit()
def adjust_CCx(CCprev,CCo,CCx,CGC,CDC,dt,tSum,Crop_CanopyDevEnd, Crop_CCx):
"""
Function to adjust CCx value for changes in CGC due to water stress during the growing season
<a href="../pdfs/ac_ref_man_3.pdf#page=36" target="_blank">Reference Manual: CC stress response</a> (pg. 27-33)
*Arguments:*
`CCprev`: `float` : Canopy Cover at previous timestep.
`CCo`: `float` : Fractional canopy cover size at emergence
`CCx`: `float` : Maximum canopy cover (fraction of soil cover)
`CGC`: `float` : Canopy growth coefficient (fraction per GDD)
`CDC`: `float` : Canopy decline coefficient (fraction per GDD/calendar day)
`dt`: `float` : Time delta of canopy growth (1 calendar day or ... GDD)
`tSum`: `float` : time since germination (CD or GDD)
    `Crop_CanopyDevEnd`: `float` : time at which canopy development ends
`Crop_CCx`: `float` : Maximum canopy cover (fraction of soil cover)
*Returns:*
`CCxAdj`: `float` : Adjusted CCx
"""
## Get time required to reach CC on previous day ##
tCCtmp = cc_required_time(CCprev,CCo,CCx,CGC,CDC,'CGC')
## Determine CCx adjusted ##
if tCCtmp > 0:
tCCtmp = tCCtmp+(Crop_CanopyDevEnd-tSum)+dt
CCxAdj = cc_development(CCo,CCx,CGC,CDC,tCCtmp,'Growth',Crop_CCx)
else:
CCxAdj = 0
return CCxAdj
#hide
show_doc(adjust_CCx)
#export
@njit()
def update_CCx_CDC(CCprev,CDC,CCx,dt):
"""
    Function to update CCx and CDC parameter values for rewatering in late season of an early declining canopy
<a href="../pdfs/ac_ref_man_3.pdf#page=36" target="_blank">Reference Manual: CC stress response</a> (pg. 27-33)
*Arguments:*
`CCprev`: `float` : Canopy Cover at previous timestep.
`CDC`: `float` : Canopy decline coefficient (fraction per GDD/calendar day)
`CCx`: `float` : Maximum canopy cover (fraction of soil cover)
    `dt`: `float` : Time delta of canopy growth (1 calendar day or ... GDD)
*Returns:*
`CCxAdj`: `float` : updated CCxAdj
`CDCadj`: `float` : updated CDCadj
"""
## Get adjusted CCx ##
CCXadj = CCprev/(1-0.05*(np.exp(dt*((CDC*3.33)/(CCx+2.29)))-1))
## Get adjusted CDC ##
CDCadj = CDC*((CCXadj+2.29)/(CCx+2.29))
return CCXadj,CDCadj
#hide
show_doc(update_CCx_CDC)
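The two formulas in `update_CCx_CDC` are self-contained, so the adjustment can be sketched without numba. The inputs below (a canopy that has declined to 0.60 from a potential `CCx` of 0.96) are illustrative values, not calibrated parameters.

```python
import numpy as np

def update_CCx_CDC_sketch(CCprev, CDC, CCx, dt):
    """Plain-Python version of the rewatering adjustment."""
    # Back-calculate the CCx a decline curve would need so that it passes
    # through CCprev after dt time units
    CCXadj = CCprev / (1 - 0.05 * (np.exp(dt * ((CDC * 3.33) / (CCx + 2.29))) - 1))
    # Scale the decline coefficient by the ratio of adjusted to potential CCx
    CDCadj = CDC * ((CCXadj + 2.29) / (CCx + 2.29))
    return CCXadj, CDCadj

CCXadj, CDCadj = update_CCx_CDC_sketch(0.60, 0.004, 0.96, 10.0)
print(round(CCXadj, 4))  # → 0.6013
```

The adjusted decline coefficient comes out smaller than the input `CDC`, since the effective `CCx` after decline is below the potential one.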
#export
@njit()
def canopy_cover(Crop,Soil_Profile,Soil_zTop,InitCond,GDD,Et0,GrowingSeason):
"""
Function to simulate canopy growth/decline
<a href="../pdfs/ac_ref_man_3.pdf#page=30" target="_blank">Reference Manual: CC equations</a> (pg. 21-33)
*Arguments:*
`Crop`: `CropClass` : Crop object
`Soil_Profile`: `SoilProfileClass` : Soil object
`Soil_zTop`: `float` : top soil depth
`InitCond`: `InitCondClass` : InitCond object
`GDD`: `float` : Growing Degree Days
`Et0`: `float` : reference evapotranspiration
    `GrowingSeason`: `bool` : is it currently within the growing season (True or False)
*Returns:*
`NewCond`: `InitCondClass` : updated InitCond object
"""
# Function to simulate canopy growth/decline
InitCond_CC_NS = InitCond.CC_NS
InitCond_CC = InitCond.CC
InitCond_ProtectedSeed = InitCond.ProtectedSeed
InitCond_CCxAct = InitCond.CCxAct
InitCond_CropDead = InitCond.CropDead
InitCond_tEarlySen = InitCond.tEarlySen
InitCond_CCxW = InitCond.CCxW
## Store initial conditions in a new structure for updating ##
NewCond = InitCond
NewCond.CCprev = InitCond.CC
## Calculate canopy development (if in growing season) ##
if GrowingSeason == True:
# Calculate root zone water content
_,Dr,TAW,_ = root_zone_water(Soil_Profile,float(NewCond.Zroot),NewCond.th,Soil_zTop,float(Crop.Zmin),Crop.Aer)
# Check whether to use root zone or top soil depletions for calculating
# water stress
if (Dr.Rz/TAW.Rz) <= (Dr.Zt/TAW.Zt):
# Root zone is wetter than top soil, so use root zone value
Dr = Dr.Rz
TAW = TAW.Rz
else:
# Top soil is wetter than root zone, so use top soil values
Dr = Dr.Zt
TAW = TAW.Zt
# Determine if water stress is occurring
beta = True
Ksw = water_stress(Crop,NewCond,Dr,TAW,Et0,beta)
# Get canopy cover growth time
if Crop.CalendarType == 1:
dtCC = 1
tCCadj = NewCond.DAP-NewCond.DelayedCDs
elif Crop.CalendarType == 2:
dtCC = GDD
tCCadj = NewCond.GDDcum-NewCond.DelayedGDDs
## Canopy development (potential) ##
if (tCCadj < Crop.Emergence) or (round(tCCadj) > Crop.Maturity):
# No canopy development before emergence/germination or after
# maturity
NewCond.CC_NS = 0
elif tCCadj < Crop.CanopyDevEnd:
# Canopy growth can occur
if InitCond_CC_NS <= Crop.CC0:
# Very small initial CC.
NewCond.CC_NS = Crop.CC0*np.exp(Crop.CGC*dtCC)
else:
# Canopy growing
tmp_tCC = tCCadj-Crop.Emergence
NewCond.CC_NS = cc_development(Crop.CC0,0.98*Crop.CCx,\
Crop.CGC,Crop.CDC,tmp_tCC,'Growth',Crop.CCx)
# Update maximum canopy cover size in growing season
NewCond.CCxAct_NS = NewCond.CC_NS
elif tCCadj > Crop.CanopyDevEnd:
# No more canopy growth is possible or canopy in decline
# Set CCx for calculation of withered canopy effects
NewCond.CCxW_NS = NewCond.CCxAct_NS
if tCCadj < Crop.Senescence:
# Mid-season stage - no canopy growth
NewCond.CC_NS = InitCond_CC_NS
# Update maximum canopy cover size in growing season
NewCond.CCxAct_NS = NewCond.CC_NS
else:
# Late-season stage - canopy decline
tmp_tCC = tCCadj-Crop.Senescence
NewCond.CC_NS = cc_development(Crop.CC0,NewCond.CCxAct_NS, \
Crop.CGC,Crop.CDC,tmp_tCC,'Decline',NewCond.CCxAct_NS)
## Canopy development (actual) ##
if (tCCadj < Crop.Emergence) or (round(tCCadj) > Crop.Maturity):
# No canopy development before emergence/germination or after
# maturity
NewCond.CC = 0
NewCond.CC0adj = Crop.CC0
elif tCCadj < Crop.CanopyDevEnd:
# Canopy growth can occur
if InitCond_CC <= NewCond.CC0adj or \
((InitCond_ProtectedSeed==True) and (InitCond_CC<=(1.25*NewCond.CC0adj))) :
# Very small initial CC or seedling in protected phase of
# growth. In this case, assume no leaf water expansion stress
if InitCond_ProtectedSeed == True:
tmp_tCC = tCCadj-Crop.Emergence
NewCond.CC = cc_development(Crop.CC0,Crop.CCx, \
Crop.CGC,Crop.CDC,tmp_tCC,'Growth',Crop.CCx)
# Check if seed protection should be turned off
if NewCond.CC > (1.25*NewCond.CC0adj):
                        # Turn off seed protection - leaf expansion stress can
# occur on future time steps.
NewCond.ProtectedSeed = False
else:
NewCond.CC = NewCond.CC0adj*np.exp(Crop.CGC*dtCC)
else:
# Canopy growing
if (InitCond_CC < (0.9799*Crop.CCx)):
# Adjust canopy growth coefficient for leaf expansion water
# stress effects
CGCadj = Crop.CGC*Ksw.Exp
if CGCadj > 0:
# Adjust CCx for change in CGC
CCXadj = adjust_CCx(InitCond_CC,NewCond.CC0adj,Crop.CCx, \
CGCadj,Crop.CDC,dtCC,tCCadj,Crop.CanopyDevEnd, Crop.CCx)
if CCXadj < 0:
NewCond.CC = InitCond_CC
elif abs(InitCond_CC-(0.9799*Crop.CCx)) < 0.001:
# Approaching maximum canopy cover size
tmp_tCC = tCCadj-Crop.Emergence
NewCond.CC = cc_development(Crop.CC0,Crop.CCx, \
Crop.CGC,Crop.CDC,tmp_tCC,'Growth',Crop.CCx)
else:
                            # Determine time required to reach CC on previous
                            # day, given CGCadj value
tReq = cc_required_time(InitCond_CC,NewCond.CC0adj, \
CCXadj,CGCadj,Crop.CDC,'CGC')
if tReq > 0:
                                # Calculate GDD's for canopy growth
tmp_tCC = tReq+dtCC
# Determine new canopy size
NewCond.CC = cc_development(NewCond.CC0adj,CCXadj, \
CGCadj,Crop.CDC,tmp_tCC,'Growth',Crop.CCx)
else:
# No canopy growth
NewCond.CC = InitCond_CC
else:
# No canopy growth
NewCond.CC = InitCond_CC
# Update CC0
if NewCond.CC > NewCond.CC0adj:
NewCond.CC0adj = Crop.CC0
else:
NewCond.CC0adj = NewCond.CC
else:
# Canopy approaching maximum size
tmp_tCC = tCCadj-Crop.Emergence
NewCond.CC = cc_development(Crop.CC0,Crop.CCx, \
Crop.CGC,Crop.CDC,tmp_tCC,'Growth',Crop.CCx)
NewCond.CC0adj = Crop.CC0
if NewCond.CC > InitCond_CCxAct:
# Update actual maximum canopy cover size during growing season
NewCond.CCxAct = NewCond.CC
elif tCCadj > Crop.CanopyDevEnd:
# No more canopy growth is possible or canopy is in decline
if tCCadj < Crop.Senescence:
# Mid-season stage - no canopy growth
NewCond.CC = InitCond_CC
if NewCond.CC > InitCond_CCxAct:
# Update actual maximum canopy cover size during growing
# season
NewCond.CCxAct = NewCond.CC
else:
# Late-season stage - canopy decline
# Adjust canopy decline coefficient for difference between actual
# and potential CCx
CDCadj = Crop.CDC*((NewCond.CCxAct+2.29)/(Crop.CCx+2.29))
# Determine new canopy size
tmp_tCC = tCCadj-Crop.Senescence
NewCond.CC = cc_development(NewCond.CC0adj,NewCond.CCxAct, \
Crop.CGC,CDCadj,tmp_tCC,'Decline',NewCond.CCxAct)
# Check for crop growth termination
if (NewCond.CC < 0.001) and (InitCond_CropDead == False):
# Crop has died
NewCond.CC = 0
NewCond.CropDead = True
## Canopy senescence due to water stress (actual) ##
if tCCadj >= Crop.Emergence:
if (tCCadj < Crop.Senescence) or (InitCond_tEarlySen > 0):
# Check for early canopy senescence due to severe water
# stress.
if (Ksw.Sen < 1) and (InitCond_ProtectedSeed==False):
# Early canopy senescence
NewCond.PrematSenes = True
if InitCond_tEarlySen == 0:
# No prior early senescence
NewCond.CCxEarlySen = InitCond_CC
# Increment early senescence GDD counter
NewCond.tEarlySen = InitCond_tEarlySen+dtCC
# Adjust canopy decline coefficient for water stress
beta = False
Ksw = water_stress(Crop,NewCond,Dr,TAW,Et0,beta)
if Ksw.Sen > 0.99999:
CDCadj = 0.0001
else:
CDCadj = (1-(Ksw.Sen**8))*Crop.CDC
                    # Get new canopy cover size after senescence
if NewCond.CCxEarlySen < 0.001:
CCsen = 0
else:
# Get time required to reach CC at end of previous day, given
# CDCadj
tReq = (np.log(1+(1-InitCond_CC/NewCond.CCxEarlySen)/0.05)) \
/((CDCadj*3.33)/(NewCond.CCxEarlySen+2.29))
# Calculate GDD's for canopy decline
tmp_tCC = tReq+dtCC
# Determine new canopy size
CCsen = NewCond.CCxEarlySen* \
(1-0.05*(np.exp(tmp_tCC*((CDCadj*3.33) \
/(NewCond.CCxEarlySen+2.29)))-1))
if CCsen < 0:
CCsen = 0
# Update canopy cover size
if tCCadj < Crop.Senescence:
# Limit CC to CCx
if CCsen > Crop.CCx:
CCsen = Crop.CCx
# CC cannot be greater than value on previous day
NewCond.CC = CCsen
if NewCond.CC > InitCond_CC:
NewCond.CC = InitCond_CC
# Update maximum canopy cover size during growing
# season
NewCond.CCxAct = NewCond.CC
# Update CC0 if current CC is less than initial canopy
# cover size at planting
if NewCond.CC < Crop.CC0:
NewCond.CC0adj = NewCond.CC
else:
NewCond.CC0adj = Crop.CC0
else:
# Update CC to account for canopy cover senescence due
# to water stress
if CCsen < NewCond.CC:
NewCond.CC = CCsen
# Check for crop growth termination
if (NewCond.CC < 0.001) and (InitCond_CropDead == False):
# Crop has died
NewCond.CC = 0
NewCond.CropDead = True
else:
# No water stress
NewCond.PrematSenes = False
if (tCCadj > Crop.Senescence) and (InitCond_tEarlySen > 0):
# Rewatering of canopy in late season
# Get new values for CCx and CDC
tmp_tCC = tCCadj-dtCC-Crop.Senescence
CCXadj,CDCadj = update_CCx_CDC(InitCond_CC, \
Crop.CDC,Crop.CCx,tmp_tCC)
NewCond.CCxAct = CCXadj
# Get new CC value for end of current day
tmp_tCC = tCCadj-Crop.Senescence
NewCond.CC = cc_development(NewCond.CC0adj,CCXadj, \
Crop.CGC,CDCadj,tmp_tCC,'Decline',CCXadj)
# Check for crop growth termination
if (NewCond.CC < 0.001) and (InitCond_CropDead == False):
NewCond.CC = 0
NewCond.CropDead = True
# Reset early senescence counter
NewCond.tEarlySen = 0
# Adjust CCx for effects of withered canopy
if NewCond.CC > InitCond_CCxW:
NewCond.CCxW = NewCond.CC
## Calculate canopy size adjusted for micro-advective effects ##
# Check to ensure potential CC is not slightly lower than actual
if NewCond.CC_NS < NewCond.CC:
NewCond.CC_NS = NewCond.CC
if tCCadj < Crop.CanopyDevEnd:
NewCond.CCxAct_NS = NewCond.CC_NS
# Actual (with water stress)
NewCond.CCadj = (1.72*NewCond.CC)-(NewCond.CC**2)+(0.3*(NewCond.CC**3))
# Potential (without water stress)
NewCond.CCadj_NS = (1.72*NewCond.CC_NS)-(NewCond.CC_NS**2) \
+(0.3*(NewCond.CC_NS**3))
else:
# No canopy outside growing season - set various values to zero
NewCond.CC = 0
NewCond.CCadj = 0
NewCond.CC_NS = 0
NewCond.CCadj_NS = 0
NewCond.CCxW = 0
NewCond.CCxAct = 0
NewCond.CCxW_NS = 0
NewCond.CCxAct_NS = 0
return NewCond
#hide
show_doc(canopy_cover)
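One self-contained piece of `canopy_cover` worth isolating is the micro-advective adjustment applied to `CCadj` and `CCadj_NS` at the end of the growing-season branch; as a sketch:

```python
def cc_adjusted(CC):
    """Canopy cover adjusted for micro-advective effects (the same cubic
    polynomial used for NewCond.CCadj and NewCond.CCadj_NS)."""
    return (1.72 * CC) - CC**2 + 0.3 * CC**3

# The adjustment inflates effective cover at intermediate canopy sizes
print(round(cc_adjusted(0.5), 4))  # → 0.6475
```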
#export
@njit()
def evap_layer_water_content(InitCond_th,InitCond_EvapZ,prof):
"""
Function to get water contents in the evaporation layer
<a href="../pdfs/ac_ref_man_3.pdf#page=82" target="_blank">Reference Manual: evaporation equations</a> (pg. 73-81)
*Arguments:*
`InitCond_th`: `np.array` : Initial water content
`InitCond_EvapZ`: `float` : evaporation depth
    `prof`: `SoilProfileClass` : Soil object containing soil parameters
*Returns:*
`Wevap_Sat`: `float` : Water storage in evaporation layer at saturation (mm)
`Wevap_Fc`: `float` : Water storage in evaporation layer at field capacity (mm)
`Wevap_Wp`: `float` : Water storage in evaporation layer at permanent wilting point (mm)
`Wevap_Dry`: `float` : Water storage in evaporation layer at air dry (mm)
`Wevap_Act`: `float` : Actual water storage in evaporation layer (mm)
"""
# Find soil compartments covered by evaporation layer
comp_sto = np.sum(prof.dzsum<InitCond_EvapZ)+1
Wevap_Sat=0
Wevap_Fc=0
Wevap_Wp=0
Wevap_Dry=0
Wevap_Act=0
for ii in range(int(comp_sto)):
# Determine fraction of soil compartment covered by evaporation layer
if prof.dzsum[ii] > InitCond_EvapZ:
factor = 1-((prof.dzsum[ii]-InitCond_EvapZ)/prof.dz[ii])
else:
factor = 1
# Actual water storage in evaporation layer (mm)
Wevap_Act += factor*1000*InitCond_th[ii]*prof.dz[ii]
# Water storage in evaporation layer at saturation (mm)
Wevap_Sat += factor*1000*prof.th_s[ii]*prof.dz[ii]
# Water storage in evaporation layer at field capacity (mm)
Wevap_Fc += factor*1000*prof.th_fc[ii]*prof.dz[ii]
# Water storage in evaporation layer at permanent wilting point (mm)
Wevap_Wp += factor*1000*prof.th_wp[ii]*prof.dz[ii]
# Water storage in evaporation layer at air dry (mm)
Wevap_Dry += factor*1000*prof.th_dry[ii]*prof.dz[ii]
if Wevap_Act < 0:
Wevap_Act = 0
return Wevap_Sat,Wevap_Fc,Wevap_Wp,Wevap_Dry,Wevap_Act
#hide
show_doc(evap_layer_water_content)
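A sketch of the compartment-weighting arithmetic in `evap_layer_water_content`, using a hypothetical two-compartment profile (a `namedtuple` stands in for `SoilProfileClass`; all values are illustrative):

```python
import numpy as np
from collections import namedtuple

# Hypothetical stand-in for SoilProfileClass: two 10 cm compartments
Prof = namedtuple('Prof', 'dz dzsum th_s th_fc th_wp th_dry')
prof = Prof(dz=np.array([0.1, 0.1]), dzsum=np.array([0.1, 0.2]),
            th_s=np.array([0.45, 0.45]), th_fc=np.array([0.30, 0.30]),
            th_wp=np.array([0.15, 0.15]), th_dry=np.array([0.07, 0.07]))

th = np.array([0.25, 0.25])  # current water contents (m3/m3)
EvapZ = 0.15                 # evaporation layer ends halfway into compartment 2

# Same arithmetic as the actual-storage term of evap_layer_water_content
comp_sto = np.sum(prof.dzsum < EvapZ) + 1
Wevap_Act = 0.0
for ii in range(int(comp_sto)):
    # Fraction of the compartment that lies inside the evaporation layer
    if prof.dzsum[ii] > EvapZ:
        factor = 1 - ((prof.dzsum[ii] - EvapZ) / prof.dz[ii])
    else:
        factor = 1
    Wevap_Act += factor * 1000 * th[ii] * prof.dz[ii]

print(round(Wevap_Act, 1))  # → 37.5 (mm: all of compartment 1 + half of 2)
```

The saturation, field capacity, wilting point, and air-dry storages follow the identical loop with `th_s`, `th_fc`, `th_wp`, and `th_dry` in place of `th`.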
#export
@njit()
def soil_evaporation(ClockStruct_EvapTimeSteps,ClockStruct_SimOffSeason,ClockStruct_TimeStepCounter,
Soil_EvapZmin,Soil_EvapZmax,Soil_Profile,Soil_REW,Soil_Kex,Soil_fwcc,Soil_fWrelExp,Soil_fevap,
Crop_CalendarType,Crop_Senescence,
IrrMngt_IrrMethod,IrrMngt_WetSurf,
FieldMngt,
InitCond,Et0,Infl,Rain,Irr,GrowingSeason):
"""
Function to calculate daily soil evaporation
<a href="../pdfs/ac_ref_man_3.pdf#page=82" target="_blank">Reference Manual: evaporation equations</a> (pg. 73-81)
*Arguments:*
    `Clock params`: `bool, int` : clock parameters (evaporation time steps, off-season flag, time step counter)
    `Soil parameters`: `float` : soil parameters
    `Crop params`: `float` : crop parameters
    `IrrMngt params`: `int, float`: irrigation management parameters
    `FieldMngt`: `FieldMngtStruct` : field management parameters
    `InitCond`: `InitCondClass` : InitCond object containing model parameters
`Et0`: `float` : daily reference evapotranspiration
`Infl`: `float` : Infiltration on current day
`Rain`: `float` : daily precipitation mm
`Irr`: `float` : Irrigation applied on current day
    `GrowingSeason`: `bool` : is it currently within the growing season (True or False)
    *Returns:*
    `NewCond`: `InitCondClass` : InitCond object containing updated model parameters
    `EsAct`: `float` : actual surface evaporation on current day
    `EsPot`: `float` : potential surface evaporation on current day
"""
Wevap = WevapClass()
## Store initial conditions in new structure that will be updated ##
NewCond = InitCond
## Prepare stage 2 evaporation (REW gone) ##
# Only do this if it is first day of simulation, or if it is first day of
# growing season and not simulating off-season
if (ClockStruct_TimeStepCounter == 0) or ((NewCond.DAP == 1) and \
(ClockStruct_SimOffSeason==False)):
# Reset storage in surface soil layer to zero
NewCond.Wsurf = 0
# Set evaporation depth to minimum
NewCond.EvapZ = Soil_EvapZmin
# Trigger stage 2 evaporation
NewCond.Stage2 = True
# Get relative water content for start of stage 2 evaporation
Wevap.Sat,Wevap.Fc,Wevap.Wp,Wevap.Dry,Wevap.Act = evap_layer_water_content(NewCond.th,NewCond.EvapZ,Soil_Profile)
NewCond.Wstage2 = round((Wevap.Act-(Wevap.Fc-Soil_REW))/(Wevap.Sat-(Wevap.Fc-Soil_REW)),2)
if NewCond.Wstage2 < 0:
NewCond.Wstage2 = 0
## Prepare soil evaporation stage 1 ##
# Adjust water in surface evaporation layer for any infiltration
if (Rain > 0) or ((Irr > 0) and (IrrMngt_IrrMethod!=4)):
# Only prepare stage one when rainfall occurs, or when irrigation is
        # triggered (not in net irrigation mode)
if Infl > 0:
# Update storage in surface evaporation layer for incoming
# infiltration
NewCond.Wsurf = Infl
# Water stored in surface evaporation layer cannot exceed REW
if NewCond.Wsurf > Soil_REW:
NewCond.Wsurf = Soil_REW
# Reset variables
NewCond.Wstage2 = 0
NewCond.EvapZ = Soil_EvapZmin
NewCond.Stage2 = False
## Calculate potential soil evaporation rate (mm/day) ##
if GrowingSeason == True:
# Adjust time for any delayed development
if Crop_CalendarType == 1:
tAdj = NewCond.DAP-NewCond.DelayedCDs
elif Crop_CalendarType == 2:
tAdj = NewCond.GDDcum-NewCond.DelayedGDDs
# Calculate maximum potential soil evaporation
EsPotMax = Soil_Kex*Et0*(1-NewCond.CCxW*(Soil_fwcc/100))
# Calculate potential soil evaporation (given current canopy cover
# size)
EsPot = Soil_Kex*(1-NewCond.CCadj)*Et0
# Adjust potential soil evaporation for effects of withered canopy
if (tAdj > Crop_Senescence) and (NewCond.CCxAct > 0):
if NewCond.CC > (NewCond.CCxAct/2):
if NewCond.CC > NewCond.CCxAct:
mult = 0
else:
mult = (NewCond.CCxAct-NewCond.CC)/(NewCond.CCxAct/2)
else:
mult = 1
EsPot = EsPot*(1-NewCond.CCxAct*(Soil_fwcc/100)*mult)
CCxActAdj = (1.72*NewCond.CCxAct)-(NewCond.CCxAct**2)+0.3*(NewCond.CCxAct**3)
EsPotMin = Soil_Kex*(1-CCxActAdj)*Et0
if EsPotMin < 0:
EsPotMin = 0
if EsPot < EsPotMin:
EsPot = EsPotMin
elif EsPot > EsPotMax:
EsPot = EsPotMax
if NewCond.PrematSenes == True:
if EsPot > EsPotMax:
EsPot = EsPotMax
else:
# No canopy cover outside of growing season so potential soil
# evaporation only depends on reference evapotranspiration
EsPot = Soil_Kex*Et0
## Adjust potential soil evaporation for mulches and/or partial wetting ##
# Mulches
if NewCond.SurfaceStorage < 0.000001:
if not FieldMngt.Mulches:
# No mulches present
EsPotMul = EsPot
elif FieldMngt.Mulches:
# Mulches present
EsPotMul = EsPot*(1-FieldMngt.fMulch*(FieldMngt.MulchPct/100))
else:
# Surface is flooded - no adjustment of potential soil evaporation for
# mulches
EsPotMul = EsPot
# Partial surface wetting by irrigation
if (Irr > 0) and (IrrMngt_IrrMethod!=4):
# Only apply adjustment if irrigation occurs and not in net irrigation
# mode
if (Rain > 1) or (NewCond.SurfaceStorage > 0):
# No adjustment for partial wetting - assume surface is fully wet
EsPotIrr = EsPot
else:
            # Adjust for proportion of surface area wetted by irrigation
EsPotIrr = EsPot*(IrrMngt_WetSurf/100)
else:
# No adjustment for partial surface wetting
EsPotIrr = EsPot
# Assign minimum value (mulches and partial wetting don't combine)
EsPot = min(EsPotIrr,EsPotMul)
## Surface evaporation ##
# Initialise actual evaporation counter
EsAct = 0
# Evaporate surface storage
if NewCond.SurfaceStorage > 0:
if NewCond.SurfaceStorage > EsPot:
# All potential soil evaporation can be supplied by surface storage
EsAct = EsPot
# Update surface storage
NewCond.SurfaceStorage = NewCond.SurfaceStorage-EsAct
else:
# Surface storage is not sufficient to meet all potential soil
# evaporation
EsAct = NewCond.SurfaceStorage
# Update surface storage, evaporation layer depth, stage
NewCond.SurfaceStorage = 0
NewCond.Wsurf = Soil_REW
NewCond.Wstage2 = 0
NewCond.EvapZ = Soil_EvapZmin
NewCond.Stage2 = False
## Stage 1 evaporation ##
# Determine total water to be extracted
ToExtract = EsPot-EsAct
# Determine total water to be extracted in stage one (limited by surface
# layer water storage)
ExtractPotStg1 = min(ToExtract,NewCond.Wsurf)
# Extract water
if (ExtractPotStg1 > 0):
# Find soil compartments covered by evaporation layer
comp_sto = np.sum(Soil_Profile.dzsum<Soil_EvapZmin)+1
comp = -1
prof=Soil_Profile
while (ExtractPotStg1 > 0) and (comp < comp_sto):
# Increment compartment counter
comp = comp+1
# Specify layer number
# Determine proportion of compartment in evaporation layer
if prof.dzsum[comp] > Soil_EvapZmin:
factor = 1-((prof.dzsum[comp]-Soil_EvapZmin)/ prof.dz[comp])
else:
factor = 1
# Water storage (mm) at air dry
Wdry = 1000* prof.th_dry[comp]* prof.dz[comp]
# Available water (mm)
W = 1000*NewCond.th[comp]* prof.dz[comp]
# Water available in compartment for extraction (mm)
AvW = (W-Wdry)*factor
if AvW < 0:
AvW = 0
if AvW >= ExtractPotStg1:
# Update actual evaporation
EsAct = EsAct+ExtractPotStg1
# Update depth of water in current compartment
W = W-ExtractPotStg1
# Update total water to be extracted
ToExtract = ToExtract-ExtractPotStg1
# Update water to be extracted from surface layer (stage 1)
ExtractPotStg1 = 0
else:
# Update actual evaporation
EsAct = EsAct+AvW
# Update water to be extracted from surface layer (stage 1)
ExtractPotStg1 = ExtractPotStg1-AvW
# Update total water to be extracted
ToExtract = ToExtract-AvW
# Update depth of water in current compartment
W = W-AvW
# Update water content
NewCond.th[comp] = W/(1000* prof.dz[comp])
# Update surface evaporation layer water balance
NewCond.Wsurf = NewCond.Wsurf-EsAct
if (NewCond.Wsurf < 0) or (ExtractPotStg1 > 0.0001):
NewCond.Wsurf = 0
# If surface storage completely depleted, prepare stage 2
if NewCond.Wsurf < 0.0001:
# Get water contents (mm)
Wevap.Sat,Wevap.Fc,Wevap.Wp,Wevap.Dry,Wevap.Act = evap_layer_water_content(NewCond.th,NewCond.EvapZ,Soil_Profile)
# Proportional water storage for start of stage two evaporation
NewCond.Wstage2 = round((Wevap.Act-(Wevap.Fc-Soil_REW))/(Wevap.Sat-(Wevap.Fc-Soil_REW)),2)
if NewCond.Wstage2 < 0:
NewCond.Wstage2 = 0
## Stage 2 evaporation ##
# Extract water
if ToExtract > 0:
# Start stage 2
NewCond.Stage2 = True
# Get sub-daily evaporative demand
Edt = ToExtract/ClockStruct_EvapTimeSteps
# Loop sub-daily steps
for jj in range(int(ClockStruct_EvapTimeSteps)):
# Get current water storage (mm)
Wevap.Sat,Wevap.Fc,Wevap.Wp,Wevap.Dry,Wevap.Act = evap_layer_water_content(NewCond.th,NewCond.EvapZ,Soil_Profile)
# Get water storage (mm) at start of stage 2 evaporation
Wupper = NewCond.Wstage2*(Wevap.Sat-(Wevap.Fc-Soil_REW))+(Wevap.Fc-Soil_REW)
# Get water storage (mm) when there is no evaporation
Wlower = Wevap.Dry
# Get relative depletion of evaporation storage in stage 2
Wrel = (Wevap.Act-Wlower)/(Wupper-Wlower)
# Check if need to expand evaporation layer
if Soil_EvapZmax > Soil_EvapZmin:
Wcheck = Soil_fWrelExp*((Soil_EvapZmax-NewCond.EvapZ)/(Soil_EvapZmax-Soil_EvapZmin))
while (Wrel < Wcheck) and (NewCond.EvapZ < Soil_EvapZmax):
# Expand evaporation layer by 1 mm
NewCond.EvapZ = NewCond.EvapZ+0.001
# Update water storage (mm) in evaporation layer
Wevap.Sat,Wevap.Fc,Wevap.Wp,Wevap.Dry,Wevap.Act = evap_layer_water_content(NewCond.th,NewCond.EvapZ,Soil_Profile)
Wupper = NewCond.Wstage2*(Wevap.Sat-(Wevap.Fc-Soil_REW))+(Wevap.Fc-Soil_REW)
Wlower = Wevap.Dry
# Update relative depletion of evaporation storage
Wrel = (Wevap.Act-Wlower)/(Wupper-Wlower)
Wcheck = Soil_fWrelExp*((Soil_EvapZmax-NewCond.EvapZ)/(Soil_EvapZmax-Soil_EvapZmin))
# Get stage 2 evaporation reduction coefficient
Kr = (np.exp(Soil_fevap*Wrel)-1)/(np.exp(Soil_fevap)-1)
if Kr > 1:
Kr = 1
# Get water to extract (mm)
ToExtractStg2 = Kr*Edt
# Extract water from compartments
comp_sto = np.sum(Soil_Profile.dzsum<NewCond.EvapZ)+1
comp = -1
prof=Soil_Profile
while (ToExtractStg2 > 0) and (comp < comp_sto):
# Increment compartment counter
comp = comp+1
# Specify layer number
# Determine proportion of compartment in evaporation layer
if prof.dzsum[comp] > NewCond.EvapZ:
factor = 1-((prof.dzsum[comp]-NewCond.EvapZ)/prof.dz[comp])
else:
factor = 1
# Water storage (mm) at air dry
Wdry = 1000*prof.th_dry[comp]*prof.dz[comp]
# Available water (mm)
W = 1000*NewCond.th[comp]*prof.dz[comp]
# Water available in compartment for extraction (mm)
AvW = (W-Wdry)*factor
if AvW >= ToExtractStg2:
# Update actual evaporation
EsAct = EsAct+ToExtractStg2
# Update depth of water in current compartment
W = W-ToExtractStg2
# Update total water to be extracted
ToExtract = ToExtract-ToExtractStg2
                    # Update water remaining to be extracted in stage 2
ToExtractStg2 = 0
else:
# Update actual evaporation
EsAct = EsAct+AvW
# Update depth of water in current compartment
W = W-AvW
                    # Update water remaining to be extracted in stage 2
ToExtractStg2 = ToExtractStg2-AvW
# Update total water to be extracted
ToExtract = ToExtract-AvW
# Update water content
NewCond.th[comp] = W/(1000*prof.dz[comp])
## Store potential evaporation for irrigation calculations on next day ##
NewCond.Epot = EsPot
return NewCond,EsAct,EsPot
#hide
show_doc(soil_evaporation)
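The stage 2 reduction coefficient `Kr` used inside `soil_evaporation` is worth seeing on its own: it maps the relative depletion `Wrel` of the evaporation layer onto an exponential reduction of the evaporative demand. `fevap = 4` below is an assumed illustrative value for `Soil_fevap`, not a model default.

```python
import numpy as np

def stage2_Kr(Wrel, fevap=4.0):
    """Stage 2 evaporation reduction coefficient, capped at 1."""
    Kr = (np.exp(fevap * Wrel) - 1) / (np.exp(fevap) - 1)
    return min(Kr, 1.0)

# Kr falls from 1 (layer at its stage-2 starting storage) to 0 (air dry)
print(round(stage2_Kr(1.0), 3), round(stage2_Kr(0.5), 3), round(stage2_Kr(0.0), 3))
# → 1.0 0.119 0.0
```

The larger `fevap`, the faster evaporation shuts down as the layer dries.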
#export
@njit()
def aeration_stress(NewCond_AerDays, Crop_LagAer,thRZ):
"""
Function to calculate aeration stress coefficient
<a href="../pdfs/ac_ref_man_3.pdf#page=90" target="_blank">Reference Manual: aeration stress</a> (pg. 89-90)
*Arguments:*
    `NewCond_AerDays`: `int` : number of aeration stress days
`Crop_LagAer`: `int` : lag days before aeration stress
`thRZ`: `thRZClass` : object that contains information on the total water in the root zone
*Returns:*
`Ksa_Aer`: `float` : aeration stress coefficient
    `NewCond_AerDays`: `float` : updated aeration stress days counter
"""
## Determine aeration stress (root zone) ##
if thRZ.Act > thRZ.Aer:
# Calculate aeration stress coefficient
if NewCond_AerDays < Crop_LagAer:
stress = 1-((thRZ.S-thRZ.Act)/(thRZ.S-thRZ.Aer))
Ksa_Aer = 1-((NewCond_AerDays/3)*stress)
elif NewCond_AerDays >= Crop_LagAer:
Ksa_Aer = (thRZ.S-thRZ.Act)/(thRZ.S-thRZ.Aer)
# Increment aeration days counter
NewCond_AerDays = NewCond_AerDays+1
if NewCond_AerDays > Crop_LagAer:
NewCond_AerDays = Crop_LagAer
else:
# Set aeration stress coefficient to one (no stress value)
Ksa_Aer = 1
# Reset aeration days counter
NewCond_AerDays = 0
return Ksa_Aer,NewCond_AerDays
#hide
show_doc(aeration_stress)
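A plain-Python sketch of `aeration_stress`, with a `namedtuple` standing in for `thRZClass` (the field names `Act`, `S`, `Aer` follow the attributes the function reads; the water contents below are illustrative):

```python
from collections import namedtuple

ThRZ = namedtuple('ThRZ', 'Act S Aer')  # hypothetical stand-in for thRZClass

def aeration_stress_sketch(AerDays, LagAer, thRZ):
    """Aeration stress coefficient and updated stress-day counter."""
    if thRZ.Act > thRZ.Aer:
        # Root zone wetter than the aeration threshold: stress builds up
        if AerDays < LagAer:
            stress = 1 - ((thRZ.S - thRZ.Act) / (thRZ.S - thRZ.Aer))
            Ksa_Aer = 1 - ((AerDays / 3) * stress)
        else:
            Ksa_Aer = (thRZ.S - thRZ.Act) / (thRZ.S - thRZ.Aer)
        AerDays = min(AerDays + 1, LagAer)
    else:
        # No aeration stress: reset the counter
        Ksa_Aer = 1
        AerDays = 0
    return Ksa_Aer, AerDays

# Day 0 of waterlogging: no stress yet, but the day counter starts ticking
Ksa, days = aeration_stress_sketch(0, 3, ThRZ(Act=0.42, S=0.45, Aer=0.40))
print(Ksa, days)  # → 1.0 1
```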
#export
@njit()
def transpiration(Soil_Profile,Soil_nComp,Soil_zTop,
Crop,
IrrMngt_IrrMethod,IrrMngt_NetIrrSMT,
InitCond,Et0,CO2,GrowingSeason,GDD):
"""
Function to calculate crop transpiration on current day
<a href="../pdfs/ac_ref_man_3.pdf#page=91" target="_blank">Reference Manual: transpiration equations</a> (pg. 82-91)
*Arguments:*
`Soil`: `SoilClass` : Soil object
`Crop`: `CropClass` : Crop object
`IrrMngt`: `IrrMngt`: object containing irrigation management params
`InitCond`: `InitCondClass` : InitCond object
`Et0`: `float` : reference evapotranspiration
`CO2`: `CO2class` : CO2
`GDD`: `float` : Growing Degree Days
    `GrowingSeason`: `bool` : is it currently within the growing season (True or False)
*Returns:*
`TrAct`: `float` : Actual Transpiration on current day
`TrPot_NS`: `float` : Potential Transpiration on current day with no water stress
`TrPot0`: `float` : Potential Transpiration on current day
`NewCond`: `InitCondClass` : updated InitCond object
`IrrNet`: `float` : Net Irrigation (if required)
"""
## Store initial conditions ##
NewCond = InitCond
InitCond_th = InitCond.th
prof = Soil_Profile
## Calculate transpiration (if in growing season) ##
if GrowingSeason == True:
## Calculate potential transpiration ##
# 1. No prior water stress
# Update ageing days counter
DAPadj = NewCond.DAP-NewCond.DelayedCDs
if DAPadj > Crop.MaxCanopyCD:
NewCond.AgeDays_NS = DAPadj-Crop.MaxCanopyCD
# Update crop coefficient for ageing of canopy
if NewCond.AgeDays_NS > 5:
Kcb_NS = Crop.Kcb-((NewCond.AgeDays_NS-5)*(Crop.fage/100))*NewCond.CCxW_NS
else:
Kcb_NS = Crop.Kcb
# Update crop coefficient for CO2 concentration
CO2CurrentConc = CO2.CurrentConc
CO2RefConc = CO2.RefConc
if CO2CurrentConc > CO2RefConc:
Kcb_NS = Kcb_NS*(1-0.05*((CO2CurrentConc- CO2RefConc)/(550- CO2RefConc)))
# Determine potential transpiration rate (no water stress)
TrPot_NS = Kcb_NS*(NewCond.CCadj_NS)*Et0
# Correct potential transpiration for dying green canopy effects
if NewCond.CC_NS < NewCond.CCxW_NS:
if (NewCond.CCxW_NS > 0.001) and (NewCond.CC_NS > 0.001):
TrPot_NS = TrPot_NS*((NewCond.CC_NS/NewCond.CCxW_NS)**Crop.a_Tr)
# 2. Potential prior water stress and/or delayed development
# Update ageing days counter
DAPadj = NewCond.DAP-NewCond.DelayedCDs
if DAPadj > Crop.MaxCanopyCD:
NewCond.AgeDays = DAPadj-Crop.MaxCanopyCD
# Update crop coefficient for ageing of canopy
if NewCond.AgeDays > 5:
Kcb = Crop.Kcb-((NewCond.AgeDays-5)*(Crop.fage/100))*NewCond.CCxW
else:
Kcb = Crop.Kcb
# Update crop coefficient for CO2 concentration
if CO2CurrentConc > CO2RefConc:
Kcb = Kcb*(1-0.05*((CO2CurrentConc-CO2RefConc)/(550-CO2RefConc)))
# Determine potential transpiration rate
TrPot0 = Kcb*(NewCond.CCadj)*Et0
# Correct potential transpiration for dying green canopy effects
if NewCond.CC < NewCond.CCxW:
if (NewCond.CCxW > 0.001) and (NewCond.CC > 0.001):
TrPot0 = TrPot0*((NewCond.CC/NewCond.CCxW)**Crop.a_Tr)
# 3. Adjust potential transpiration for cold stress effects
# Check if cold stress occurs on current day
if Crop.TrColdStress == 0:
# Cold temperature stress does not affect transpiration
KsCold = 1
elif Crop.TrColdStress == 1:
# Transpiration can be affected by cold temperature stress
if GDD >= Crop.GDD_up:
# No cold temperature stress
KsCold = 1
elif GDD <= Crop.GDD_lo:
# Transpiration fully inhibited by cold temperature stress
KsCold = 0
else:
# Transpiration partially inhibited by cold temperature stress
# Get parameters for logistic curve
KsTr_up = 1
KsTr_lo = 0.02
fshapeb = (-1)*(np.log(((KsTr_lo*KsTr_up)-0.98*KsTr_lo) \
/(0.98*(KsTr_up-KsTr_lo))))
# Calculate cold stress level
GDDrel = (GDD-Crop.GDD_lo)/(Crop.GDD_up-Crop.GDD_lo)
KsCold = (KsTr_up*KsTr_lo)/(KsTr_lo+(KsTr_up-KsTr_lo) \
*np.exp(-fshapeb*GDDrel))
KsCold = KsCold-KsTr_lo*(1-GDDrel)
# Correct potential transpiration rate (mm/day)
TrPot0 = TrPot0*KsCold
TrPot_NS = TrPot_NS*KsCold
## Calculate surface layer transpiration ##
if (NewCond.SurfaceStorage > 0) and (NewCond.DaySubmerged < Crop.LagAer):
# Update submergence days counter
NewCond.DaySubmerged = NewCond.DaySubmerged+1
            # Update anaerobic conditions counter for each compartment
for ii in range(int(Soil_nComp)):
# Increment aeration days counter for compartment ii
NewCond.AerDaysComp[ii] = NewCond.AerDaysComp[ii]+1
if NewCond.AerDaysComp[ii] > Crop.LagAer:
NewCond.AerDaysComp[ii] = Crop.LagAer
# Reduce actual transpiration that is possible to account for
# aeration stress due to extended submergence
fSub = 1-(NewCond.DaySubmerged/Crop.LagAer)
if NewCond.SurfaceStorage > (fSub*TrPot0):
# Transpiration occurs from surface storage
NewCond.SurfaceStorage = NewCond.SurfaceStorage-(fSub*TrPot0)
TrAct0 = fSub*TrPot0
else:
# No transpiration from surface storage
TrAct0 = 0
if TrAct0 < (fSub*TrPot0):
# More water can be extracted from soil profile for transpiration
TrPot = (fSub*TrPot0)-TrAct0
else:
# No more transpiration possible on current day
TrPot = 0
else:
# No surface transpiration occurs
TrPot = TrPot0
TrAct0 = 0
## Update potential root zone transpiration for water stress ##
# Determine root zone and top soil depletion, and root zone water
# content
_,Dr,TAW,thRZ = root_zone_water(Soil_Profile,float(NewCond.Zroot),NewCond.th,Soil_zTop,float(Crop.Zmin),Crop.Aer)
# Check whether to use root zone or top soil depletions for calculating
# water stress
if (Dr.Rz/TAW.Rz) <= (Dr.Zt/TAW.Zt):
# Root zone is wetter than top soil, so use root zone value
Dr = Dr.Rz
TAW = TAW.Rz
else:
# Top soil is wetter than root zone, so use top soil values
Dr = Dr.Zt
TAW = TAW.Zt
# Calculate water stress coefficients
beta = True
Ksw = water_stress(Crop,NewCond,Dr,TAW,Et0,beta)
# Calculate aeration stress coefficients
Ksa_Aer,NewCond.AerDays = aeration_stress(NewCond.AerDays,Crop.LagAer,thRZ)
# Maximum stress effect
Ks = min(Ksw.StoLin,Ksa_Aer)
# Update potential transpiration in root zone
if IrrMngt_IrrMethod != 4:
# No adjustment to TrPot for water stress when in net irrigation mode
TrPot = TrPot*Ks
## Determine compartments covered by root zone ##
# Compartments covered by the root zone
rootdepth = round(max(float(NewCond.Zroot),float(Crop.Zmin)),2)
comp_sto = min(np.sum(Soil_Profile.dzsum<rootdepth)+1,int(Soil_nComp))
RootFact = np.zeros(int(Soil_nComp))
# Determine fraction of each compartment covered by root zone
for ii in range(comp_sto):
if Soil_Profile.dzsum[ii] > rootdepth:
RootFact[ii] = 1-((Soil_Profile.dzsum[ii]-rootdepth)/Soil_Profile.dz[ii])
else:
RootFact[ii] = 1
## Determine maximum sink term for each compartment ##
SxComp = np.zeros(int(Soil_nComp))
if IrrMngt_IrrMethod == 4:
# Net irrigation mode
for ii in range(comp_sto):
SxComp[ii] = (Crop.SxTop+Crop.SxBot)/2
else:
# Maximum sink term declines linearly with depth
SxCompBot = Crop.SxTop
for ii in range(comp_sto):
SxCompTop = SxCompBot
if Soil_Profile.dzsum[ii] <= rootdepth:
SxCompBot = Crop.SxBot*NewCond.rCor+ \
((Crop.SxTop-Crop.SxBot* \
NewCond.rCor)*((rootdepth-Soil_Profile.dzsum[ii])/rootdepth))
else:
SxCompBot = Crop.SxBot*NewCond.rCor
SxComp[ii] = (SxCompTop+SxCompBot)/2
#print(TrPot,NewCond.DAP)
## Extract water ##
ToExtract = TrPot
comp = -1
TrAct = 0
while (ToExtract > 0) and (comp < comp_sto-1):
# Increment compartment
comp = comp+1
# Specify layer number
# Determine TAW (m3/m3) for compartment
thTAW = prof.th_fc[comp]-prof.th_wp[comp]
if Crop.ETadj == 1:
# Adjust stomatal stress threshold for Et0 on current day
p_up_sto = Crop.p_up[1]+(0.04*(5-Et0))*(np.log10(10-9*Crop.p_up[1]))
# Determine critical water content at which stomatal closure will
# occur in compartment
thCrit = prof.th_fc[comp]-(thTAW*p_up_sto)
# Check for soil water stress
if NewCond.th[comp] >= thCrit:
# No water stress effects on transpiration
KsComp = 1
elif NewCond.th[comp] > prof.th_wp[comp]:
# Transpiration from compartment is affected by water stress
Wrel = (prof.th_fc[comp]-NewCond.th[comp])/ \
(prof.th_fc[comp]-prof.th_wp[comp])
pRel = (Wrel-Crop.p_up[1])/(Crop.p_lo[1]-Crop.p_up[1])
if pRel <= 0:
KsComp = 1
elif pRel >= 1:
KsComp = 0
else:
KsComp = 1-((np.exp(pRel*Crop.fshape_w[1])-1)/(np.exp(Crop.fshape_w[1])-1))
if KsComp > 1:
KsComp = 1
elif KsComp < 0:
KsComp = 0
else:
# No transpiration is possible from compartment as water
# content does not exceed wilting point
KsComp = 0
# Adjust compartment stress factor for aeration stress
if NewCond.DaySubmerged >= Crop.LagAer:
# Full aeration stress - no transpiration possible from
# compartment
AerComp = 0
elif NewCond.th[comp] > (prof.th_s[comp]-(Crop.Aer/100)):
# Increment aeration stress days counter
NewCond.AerDaysComp[comp] = NewCond.AerDaysComp[comp]+1
if NewCond.AerDaysComp[comp] >= Crop.LagAer:
NewCond.AerDaysComp[comp] = Crop.LagAer
fAer = 0
else:
fAer = 1
# Calculate aeration stress factor
AerComp = (prof.th_s[comp]-NewCond.th[comp])/ \
(prof.th_s[comp]-(prof.th_s[comp]-(Crop.Aer/100)))
if AerComp < 0:
AerComp = 0
AerComp = (fAer+(NewCond.AerDaysComp[comp]-1)*AerComp)/ \
(fAer+NewCond.AerDaysComp[comp]-1)
else:
# No aeration stress as number of submerged days does not
# exceed threshold for initiation of aeration stress
AerComp = 1
NewCond.AerDaysComp[comp] = 0
# Extract water
ThToExtract = (ToExtract/1000)/Soil_Profile.dz[comp]
if IrrMngt_IrrMethod == 4:
# Don't reduce compartment sink for stomatal water stress if in
# net irrigation mode. Stress only occurs due to deficient
# aeration conditions
Sink = AerComp*SxComp[comp]*RootFact[comp]
else:
# Reduce compartment sink for greatest of stomatal and aeration
# stress
if KsComp == AerComp:
Sink = KsComp*SxComp[comp]*RootFact[comp]
else:
Sink = min(KsComp,AerComp)*SxComp[comp]*RootFact[comp]
# Limit extraction to demand
if ThToExtract < Sink:
Sink = ThToExtract
# Limit extraction to avoid compartment water content dropping
# below air dry
if (InitCond_th[comp]-Sink) < prof.th_dry[comp]:
Sink = InitCond_th[comp]-prof.th_dry[comp]
if Sink < 0:
Sink = 0
# Update water content in compartment
NewCond.th[comp] = InitCond_th[comp]-Sink
# Update amount of water to extract
ToExtract = ToExtract-(Sink*1000*prof.dz[comp])
# Update actual transpiration
TrAct = TrAct+(Sink*1000*prof.dz[comp])
## Add net irrigation water requirement (if this mode is specified) ##
if (IrrMngt_IrrMethod == 4) and (TrPot > 0):
# Initialise net irrigation counter
IrrNet = 0
# Get root zone water content
_,_Dr,_TAW,thRZ = root_zone_water(Soil_Profile,float(NewCond.Zroot),NewCond.th,Soil_zTop,float(Crop.Zmin),Crop.Aer)
NewCond.Depletion = _Dr.Rz
NewCond.TAW = _TAW.Rz
# Determine critical water content for net irrigation
thCrit = thRZ.WP+((IrrMngt_NetIrrSMT/100)*(thRZ.FC-thRZ.WP))
# Check if root zone water content is below net irrigation trigger
if thRZ.Act < thCrit:
# Initialise layer counter
prelayer = 0
for ii in range(comp_sto):
# Get soil layer
layeri = Soil_Profile.Layer[ii]
if layeri > prelayer:
# If in new layer, update critical water content for
# net irrigation
thCrit = prof.th_wp[ii]+((IrrMngt_NetIrrSMT/100)* \
(prof.th_fc[ii]-prof.th_wp[ii]))
# Update layer counter
prelayer = layeri
# Determine necessary change in water content in
# compartments to reach critical water content
dWC = RootFact[ii]*(thCrit-NewCond.th[ii])*1000*prof.dz[ii]
# Update water content
NewCond.th[ii] = NewCond.th[ii]+(dWC/(1000*prof.dz[ii]))
# Update net irrigation counter
IrrNet = IrrNet+dWC
# Update net irrigation counter for the growing season
NewCond.IrrNetCum = NewCond.IrrNetCum+IrrNet
elif (IrrMngt_IrrMethod == 4) and (TrPot <= 0):
# No net irrigation as potential transpiration is zero
IrrNet = 0
else:
# No net irrigation as not in net irrigation mode
IrrNet = 0
NewCond.IrrNetCum = 0
## Add any surface transpiration to root zone total ##
TrAct = TrAct+TrAct0
## Feedback with canopy cover development ##
# If actual transpiration is zero then no canopy cover growth can occur
if ((NewCond.CC-NewCond.CCprev) > 0.005) and (TrAct == 0):
NewCond.CC = NewCond.CCprev
## Update transpiration ratio ##
if TrPot0 > 0:
if TrAct < TrPot0:
NewCond.TrRatio = TrAct/TrPot0
else:
NewCond.TrRatio = 1
else:
NewCond.TrRatio = 1
if NewCond.TrRatio < 0:
NewCond.TrRatio = 0
elif NewCond.TrRatio > 1:
NewCond.TrRatio = 1
else:
# No transpiration if not in growing season
TrAct = 0
TrPot0 = 0
TrPot_NS = 0
# No irrigation if not in growing season
IrrNet = 0
NewCond.IrrNetCum = 0
## Store potential transpiration for irrigation calculations on next day ##
NewCond.Tpot = TrPot0
return TrAct,TrPot_NS,TrPot0,NewCond,IrrNet
#hide
show_doc(transpiration)
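The extraction loop in `transpiration` scales each compartment's sink term by a stomatal stress coefficient that declines exponentially between an upper depletion threshold and the wilting point. A minimal standalone sketch of that coefficient (the soil and shape values below are hypothetical, for illustration only):

```python
from math import exp

def ks_comp(th, th_fc, th_wp, p_up, p_lo, fshape_w):
    """Stomatal stress coefficient for one compartment: 1 above the
    upper depletion threshold, 0 at or below wilting point, and an
    exponential decline in between (mirrors the extraction loop)."""
    if th >= th_fc - (th_fc - th_wp) * p_up:
        return 1.0  # no water stress
    if th <= th_wp:
        return 0.0  # at/below wilting point: no transpiration possible
    w_rel = (th_fc - th) / (th_fc - th_wp)   # relative depletion
    p_rel = (w_rel - p_up) / (p_lo - p_up)
    if p_rel <= 0:
        return 1.0
    if p_rel >= 1:
        return 0.0
    return 1 - (exp(p_rel * fshape_w) - 1) / (exp(fshape_w) - 1)

# hypothetical values: th_fc=0.30, th_wp=0.10, mild depletion
print(round(ks_comp(0.25, 0.30, 0.10, 0.2, 0.6, 3.0), 3))
```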
#export
@njit()
def groundwater_inflow(prof,NewCond):
"""
Function to calculate capillary rise in the presence of a shallow groundwater table
<a href="../pdfs/ac_ref_man_3.pdf#page=61" target="_blank">Reference Manual: capillary rise calculations</a> (pg. 52-61)
*Arguments:*
`Soil`: `SoilClass` : Soil object containing soil parameters
`InitCond`: `InitCondClass` : InitCond object containing model parameters
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
`GwIn`: `float` : Groundwater inflow
"""
## Store initial conditions for updating ##
GwIn = 0
## Perform calculations ##
if NewCond.WTinSoil == True:
# Water table in soil profile. Calculate horizontal inflow.
# Get groundwater table elevation on current day
zGW = NewCond.zGW
# Find compartment mid-points
zMid = prof.zMid
# For compartments below water table, set to saturation #
idx = np.argwhere(zMid >= zGW).flatten()[0]
for ii in range(idx,len(prof.Comp)):
# Get soil layer
if NewCond.th[ii] < prof.th_s[ii]:
# Update water content
dth = prof.th_s[ii]-NewCond.th[ii]
NewCond.th[ii] = prof.th_s[ii]
# Update groundwater inflow
GwIn = GwIn+(dth*1000*prof.dz[ii])
return NewCond,GwIn
#hide
show_doc(groundwater_inflow)
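The saturation-filling step in `groundwater_inflow` can be sketched in isolation; the three-compartment profile below is made up for illustration:

```python
def groundwater_inflow_sketch(th, th_s, dz, z_mid, z_gw):
    """Set compartments whose mid-point is at or below the water table
    to saturation and accumulate the implied inflow in mm."""
    gw_in = 0.0
    for ii, zm in enumerate(z_mid):
        if zm >= z_gw and th[ii] < th_s[ii]:
            gw_in += (th_s[ii] - th[ii]) * 1000 * dz[ii]  # m3/m3 -> mm
            th[ii] = th_s[ii]
    return th, gw_in

# water table at 0.25 m: only the deepest compartment gets filled
th, gw_in = groundwater_inflow_sketch(
    th=[0.20, 0.25, 0.30], th_s=[0.45, 0.45, 0.45],
    dz=[0.1, 0.1, 0.1], z_mid=[0.05, 0.15, 0.25], z_gw=0.25)
print(th, round(gw_in, 2))
```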
#export
@njit()
def HIref_current_day(InitCond,Crop,GrowingSeason):
"""
Function to calculate reference (no adjustment for stress effects)
harvest index on current day
<a href="../pdfs/ac_ref_man_3.pdf#page=119" target="_blank">Reference Manual: harvest index calculations</a> (pg. 110-126)
*Arguments:*
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`Crop`: `CropClass` : Crop object containing crop parameters
`GrowingSeason`: `bool` : is growing season (True or False)
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
"""
## Store initial conditions for updating ##
NewCond = InitCond
InitCond_HIref = InitCond.HIref
#NewCond.HIref = 0.
## Calculate reference harvest index (if in growing season) ##
if GrowingSeason == True:
# Check if in yield formation period
tAdj = NewCond.DAP-NewCond.DelayedCDs
if tAdj > Crop.HIstartCD:
NewCond.YieldForm = True
else:
NewCond.YieldForm = False
# Get time for harvest index calculation
HIt = NewCond.DAP-NewCond.DelayedCDs-Crop.HIstartCD-1
if HIt <= 0:
# Yet to reach time for HI build-up
NewCond.HIref = 0
NewCond.PctLagPhase = 0
else:
if NewCond.CCprev <= (Crop.CCmin*Crop.CCx):
# HI cannot develop further as canopy cover is too small
NewCond.HIref = InitCond_HIref
else:
# Check crop type
if (Crop.CropType == 1) or (Crop.CropType == 2):
# If crop type is leafy vegetable or root/tuber, then proceed with
# logistic growth (i.e. no linear switch)
NewCond.PctLagPhase = 100 # No lag phase
# Calculate reference harvest index for current day
NewCond.HIref = (Crop.HIini*Crop.HI0)/(Crop.HIini+ (Crop.HI0-Crop.HIini)*np.exp(-Crop.HIGC*HIt))
# Harvest index approaching maximum limit
if NewCond.HIref >= (0.9799*Crop.HI0):
NewCond.HIref = Crop.HI0
elif Crop.CropType == 3:
# If crop type is fruit/grain producing, check for linear switch
if HIt < Crop.tLinSwitch:
# Not yet reached linear switch point, therefore proceed with
# logistic build-up
NewCond.PctLagPhase = 100*(HIt/Crop.tLinSwitch)
# Calculate reference harvest index for current day
# (logistic build-up)
NewCond.HIref = (Crop.HIini*Crop.HI0)/(Crop.HIini+(Crop.HI0-Crop.HIini)*np.exp(-Crop.HIGC*HIt))
else:
# Linear switch point has been reached
NewCond.PctLagPhase = 100
# Calculate reference harvest index for current day
# (logistic portion)
NewCond.HIref = (Crop.HIini*Crop.HI0)/(Crop.HIini+(Crop.HI0-Crop.HIini)*np.exp(-Crop.HIGC*Crop.tLinSwitch))
# Calculate reference harvest index for current day
# (total - logistic portion + linear portion)
NewCond.HIref = NewCond.HIref+(Crop.dHILinear*(HIt-Crop.tLinSwitch))
# Limit HIref and round off computed value
if NewCond.HIref > Crop.HI0:
NewCond.HIref = Crop.HI0
elif NewCond.HIref <= (Crop.HIini+0.004):
NewCond.HIref = 0
elif ((Crop.HI0-NewCond.HIref)<0.004):
NewCond.HIref = Crop.HI0
else:
# Reference harvest index is zero outside of growing season
NewCond.HIref = 0
return NewCond
#hide
show_doc(HIref_current_day)
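For leafy vegetable and root/tuber crops, `HIref_current_day` uses a pure logistic build-up that snaps to `HI0` once it reaches 97.99% of the maximum. A small sketch with made-up crop parameters:

```python
from math import exp

def hi_ref_logistic(hi_ini, hi0, higc, t):
    """Reference harvest index after t days of build-up (logistic form,
    as used above), capped at hi0 once it reaches 97.99% of hi0."""
    hi = (hi_ini * hi0) / (hi_ini + (hi0 - hi_ini) * exp(-higc * t))
    return hi0 if hi >= 0.9799 * hi0 else hi

# hypothetical crop parameters for illustration only
print(hi_ref_logistic(0.01, 0.48, 0.1, 0))    # starts near HIini
print(hi_ref_logistic(0.01, 0.48, 0.1, 120))  # saturates at HI0
```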
#export
@njit()
def biomass_accumulation(Crop,InitCond,Tr,TrPot,Et0,GrowingSeason):
"""
Function to calculate biomass accumulation
<a href="../pdfs/ac_ref_man_3.pdf#page=107" target="_blank">Reference Manual: biomass accumulation</a> (pg. 98-108)
*Arguments:*
`Crop`: `CropClass` : Crop object
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`Tr`: `float` : Daily transpiration
`TrPot`: `float` : Daily potential transpiration
`Et0`: `float` : Daily reference evapotranspiration
`GrowingSeason`: `bool` : is growing season (True or False)
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
"""
## Store initial conditions in a new structure for updating ##
NewCond = InitCond
## Calculate biomass accumulation (if in growing season) ##
if GrowingSeason == True:
# Get time for harvest index build-up
HIt = NewCond.DAP-NewCond.DelayedCDs-Crop.HIstartCD-1
if ((Crop.CropType == 2) or (Crop.CropType == 3)) and (NewCond.HIref > 0):
# Adjust WP for reproductive stage
if Crop.Determinant == 1:
fswitch = NewCond.PctLagPhase/100
else:
if HIt < (Crop.YldFormCD/3):
fswitch = HIt/(Crop.YldFormCD/3)
else:
fswitch = 1
WPadj = Crop.WP*(1-(1-Crop.WPy/100)*fswitch)
else:
WPadj = Crop.WP
#print(WPadj)
# Adjust WP for CO2 effects
WPadj = WPadj*Crop.fCO2
#print(WPadj)
# Calculate biomass accumulation on current day
# No water stress
dB_NS = WPadj*(TrPot/Et0)
# With water stress
dB = WPadj*(Tr/Et0)
if np.isnan(dB) == True:
dB = 0
# Update biomass accumulation
NewCond.B = NewCond.B+dB
NewCond.B_NS = NewCond.B_NS+dB_NS
else:
# No biomass accumulation outside of growing season
NewCond.B = 0
NewCond.B_NS = 0
return NewCond
#hide
show_doc(biomass_accumulation)
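The core of `biomass_accumulation` is normalized water productivity: the daily biomass increment is the adjusted water productivity times the ratio of transpiration to reference evapotranspiration. A sketch of that increment with illustrative, uncalibrated numbers:

```python
def daily_biomass(wp, tr, et0, f_co2=1.0, wpy=100, fswitch=0.0):
    """Daily biomass increment dB = WPadj * (Tr / Et0), with the
    reproductive-stage and CO2 adjustments used above."""
    wp_adj = wp * (1 - (1 - wpy / 100) * fswitch)  # yield-formation adjustment
    wp_adj *= f_co2                                # CO2 adjustment
    return wp_adj * (tr / et0)

# illustrative values only: WP=33.7, Tr=4 mm, Et0=5 mm
print(round(daily_biomass(33.7, 4.0, 5.0), 2))
```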
#export
@njit()
def temperature_stress(Crop,Tmax,Tmin):
"""
Function to calculate temperature stress coefficients
<a href="../pdfs/ac_ref_man_3.pdf#page=23" target="_blank">Reference Manual: temperature stress</a> (pg. 14)
*Arguments:*
`Crop`: `CropClass` : Crop object containing crop parameters
`Tmax`: `float` : maximum temperature on current day (Celsius)
`Tmin`: `float` : minimum temperature on current day (Celsius)
*Returns:*
`Kst`: `KstClass` : Kst object containing temperature stress parameters
"""
## Calculate temperature stress coefficients affecting crop pollination ##
# Get parameters for logistic curve
KsPol_up = 1
KsPol_lo = 0.001
Kst = KstClass()
# Calculate effects of heat stress on pollination
if Crop.PolHeatStress == 0:
# No heat stress effects on pollination
Kst.PolH = 1
elif Crop.PolHeatStress == 1:
# Pollination affected by heat stress
if Tmax <= Crop.Tmax_lo:
Kst.PolH = 1
elif Tmax >= Crop.Tmax_up:
Kst.PolH = 0
else:
Trel = (Tmax-Crop.Tmax_lo)/(Crop.Tmax_up-Crop.Tmax_lo)
Kst.PolH = (KsPol_up*KsPol_lo)/(KsPol_lo+(KsPol_up-KsPol_lo) \
*np.exp(-Crop.fshape_b*(1-Trel)))
# Calculate effects of cold stress on pollination
if Crop.PolColdStress == 0:
# No cold stress effects on pollination
Kst.PolC = 1
elif Crop.PolColdStress == 1:
# Pollination affected by cold stress
if Tmin >= Crop.Tmin_up:
Kst.PolC = 1
elif Tmin <= Crop.Tmin_lo:
Kst.PolC = 0
else:
Trel = (Crop.Tmin_up-Tmin)/(Crop.Tmin_up-Crop.Tmin_lo)
Kst.PolC = (KsPol_up*KsPol_lo)/(KsPol_lo+(KsPol_up-KsPol_lo) \
*np.exp(-Crop.fshape_b*(1-Trel)))
return Kst
#hide
show_doc(temperature_stress)
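The heat-stress branch of `temperature_stress` uses a logistic decline between the lower and upper `Tmax` thresholds. A standalone sketch (the 35–40 °C thresholds and the shape factor 13.8 are assumed values, not taken from a specific crop file):

```python
from math import exp

def ks_pol_heat(tmax, tmax_lo, tmax_up, fshape_b):
    """Heat-stress coefficient for pollination: 1 at or below the lower
    threshold, 0 at or above the upper one, logistic decline between."""
    ks_up, ks_lo = 1.0, 0.001
    if tmax <= tmax_lo:
        return 1.0
    if tmax >= tmax_up:
        return 0.0
    t_rel = (tmax - tmax_lo) / (tmax_up - tmax_lo)
    return (ks_up * ks_lo) / (ks_lo + (ks_up - ks_lo) * exp(-fshape_b * (1 - t_rel)))

# assumed thresholds 35-40 C and shape factor 13.8, for illustration
for t in (30, 37, 42):
    print(t, round(ks_pol_heat(t, 35, 40, 13.8), 3))
```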
#export
@njit()
def HIadj_pre_anthesis(InitCond,Crop_dHI_pre):
"""
Function to calculate adjustment to harvest index for pre-anthesis water
stress
<a href="../pdfs/ac_ref_man_3.pdf#page=119" target="_blank">Reference Manual: harvest index calculations</a> (pg. 110-126)
*Arguments:*
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`Crop_dHI_pre`: `float` : possible increase in harvest index due to pre-anthesis water stress (%)
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
"""
## Store initial conditions in structure for updating ##
NewCond = InitCond
# check that there is an adjustment to be made
if Crop_dHI_pre > 0:
## Calculate adjustment ##
# Get parameters
Br = InitCond.B/InitCond.B_NS
Br_range = np.log(Crop_dHI_pre)/5.62
Br_upp = 1
Br_low = 1-Br_range
Br_top = Br_upp-(Br_range/3)
# Get biomass ratios
ratio_low = (Br-Br_low)/(Br_top-Br_low)
ratio_upp = (Br-Br_top)/(Br_upp-Br_top)
# Calculate adjustment factor
if (Br >= Br_low) and (Br < Br_top):
NewCond.Fpre = 1+(((1+np.sin((1.5-ratio_low)*np.pi))/2)*(Crop_dHI_pre/100))
elif (Br > Br_top) and (Br <= Br_upp):
NewCond.Fpre = 1+(((1+np.sin((0.5+ratio_upp)*np.pi))/2)*(Crop_dHI_pre/100))
else:
NewCond.Fpre = 1
else:
NewCond.Fpre = 1
if NewCond.CC <= 0.01:
# No green canopy cover left at start of flowering so no harvestable
# crop will develop
NewCond.Fpre = 0
return NewCond
#hide
show_doc(HIadj_pre_anthesis)
#export
@njit()
def HIadj_pollination(InitCond_CC,InitCond_Fpol,
Crop_FloweringCD,Crop_CCmin,Crop_exc,
Ksw,Kst,HIt):
"""
Function to calculate adjustment to harvest index for failure of
pollination due to water or temperature stress
<a href="../pdfs/ac_ref_man_3.pdf#page=119" target="_blank">Reference Manual: harvest index calculations</a> (pg. 110-126)
*Arguments:*
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`Crop`: `CropClass` : Crop object containing crop parameters
`Ksw`: `KswClass` : Ksw object containing water stress parameters
`Kst`: `KstClass` : Kst object containing temperature stress parameters
`HIt`: `float` : time for harvest index build-up (calendar days)
*Returns:*
`NewCond_Fpol`: `float` : updated pollination adjustment factor
"""
## Calculate harvest index adjustment for pollination ##
# Get fractional flowering
if HIt == 0:
# No flowering yet
FracFlow = 0
elif HIt > 0:
# Fractional flowering on previous day
t1 = HIt-1
if t1 == 0:
F1 = 0
else:
t1Pct = 100*(t1/Crop_FloweringCD)
if t1Pct > 100:
t1Pct = 100
F1 = 0.00558*np.exp(0.63*np.log(t1Pct))-(0.000969*t1Pct)-0.00383
if F1 < 0:
F1 = 0
# Fractional flowering on current day
t2 = HIt
if t2 == 0:
F2 = 0
else:
t2Pct = 100*(t2/Crop_FloweringCD)
if t2Pct > 100:
t2Pct = 100
F2 = 0.00558*np.exp(0.63*np.log(t2Pct))-(0.000969*t2Pct)-0.00383
if F2 < 0:
F2 = 0
# Weight values
if abs(F1-F2) < 0.0000001:
F = 0
else:
F = 100*((F1+F2)/2)/Crop_FloweringCD
FracFlow = F
# Calculate pollination adjustment for current day
if InitCond_CC < Crop_CCmin:
# No pollination can occur as canopy cover is smaller than minimum
# threshold
dFpol = 0
else:
Ks = min([Ksw.Pol,Kst.PolC,Kst.PolH])
dFpol = Ks*FracFlow*(1+(Crop_exc/100))
# Calculate pollination adjustment to date
NewCond_Fpol = InitCond_Fpol+dFpol
if NewCond_Fpol > 1:
# Crop has fully pollinated
NewCond_Fpol = 1
return NewCond_Fpol
#hide
show_doc(HIadj_pollination)
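The fractional-flowering values `F1`/`F2` in `HIadj_pollination` come from an empirical power curve in the percentage of the flowering period elapsed. A helper reproducing that curve (the 10-day flowering period in the example call is hypothetical):

```python
from math import exp, log

def frac_flowering_curve(t, flowering_cd):
    """Empirical flowering curve F(t) evaluated at day t of the
    flowering period, as used for F1 and F2 above (clipped at 0)."""
    if t <= 0:
        return 0.0
    t_pct = min(100.0, 100.0 * t / flowering_cd)
    f = 0.00558 * exp(0.63 * log(t_pct)) - 0.000969 * t_pct - 0.00383
    return max(0.0, f)

# hypothetical 10-day flowering period, evaluated at its mid-point
print(round(frac_flowering_curve(5, 10), 5))
```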
#export
@njit()
def HIadj_post_anthesis(InitCond,Crop,Ksw):
"""
Function to calculate adjustment to harvest index for post-anthesis water
stress
<a href="../pdfs/ac_ref_man_3.pdf#page=119" target="_blank">Reference Manual: harvest index calculations</a> (pg. 110-126)
*Arguments:*
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`Crop`: `CropClass` : Crop object containing crop parameters
`Ksw`: `KswClass` : Ksw object containing water stress parameters
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
"""
## Store initial conditions in a structure for updating ##
NewCond = InitCond
InitCond_DelayedCDs = InitCond.DelayedCDs
InitCond_sCor1 = InitCond.sCor1
InitCond_sCor2 = InitCond.sCor2
## Calculate harvest index adjustment ##
# 1. Adjustment for leaf expansion
tmax1 = Crop.CanopyDevEndCD-Crop.HIstartCD
DAP = NewCond.DAP-InitCond_DelayedCDs
if (DAP <= (Crop.CanopyDevEndCD+1)) \
and (tmax1 > 0) \
and (NewCond.Fpre > 0.99) \
and (NewCond.CC > 0.001) \
and (Crop.a_HI > 0):
dCor = (1+(1-Ksw.Exp)/Crop.a_HI)
NewCond.sCor1 = InitCond_sCor1+(dCor/tmax1)
DayCor = DAP-1-Crop.HIstartCD
NewCond.fpost_upp = (tmax1/DayCor)*NewCond.sCor1
# 2. Adjustment for stomatal closure
tmax2 = Crop.YldFormCD
DAP = NewCond.DAP-InitCond_DelayedCDs
if (DAP <= (Crop.HIendCD+1)) \
and (tmax2 > 0) \
and (NewCond.Fpre > 0.99) \
and (NewCond.CC > 0.001) \
and (Crop.b_HI > 0):
#print(Ksw.Sto)
dCor = np.power(Ksw.Sto,0.1)*(1-(1-Ksw.Sto)/Crop.b_HI)
NewCond.sCor2 = InitCond_sCor2+(dCor/tmax2)
DayCor = DAP-1-Crop.HIstartCD
NewCond.fpost_dwn = (tmax2/DayCor)*NewCond.sCor2
# Determine total multiplier
if (tmax1 == 0) and (tmax2 == 0):
NewCond.Fpost = 1
else:
if tmax2 == 0:
NewCond.Fpost = NewCond.fpost_upp
else:
if tmax1 == 0:
NewCond.Fpost = NewCond.fpost_dwn
elif tmax1 <= tmax2:
NewCond.Fpost = NewCond.fpost_dwn*(((tmax1*NewCond.fpost_upp)+(tmax2-tmax1))/tmax2)
else:
NewCond.Fpost = NewCond.fpost_upp*(((tmax2*NewCond.fpost_dwn)+(tmax1-tmax2))/tmax1)
return NewCond
#hide
show_doc(HIadj_post_anthesis)
#export
@njit()
def harvest_index(Soil_Profile,Soil_zTop,Crop,InitCond,Et0,Tmax,Tmin,GrowingSeason):
"""
Function to simulate build up of harvest index
<a href="../pdfs/ac_ref_man_3.pdf#page=119" target="_blank">Reference Manual: harvest index calculations</a> (pg. 110-126)
*Arguments:*
`Soil`: `SoilClass` : Soil object containing soil parameters
`Crop`: `CropClass` : Crop object containing crop parameters
`InitCond`: `InitCondClass` : InitCond object containing model parameters
`Et0`: `float` : reference evapotranspiration on current day
`Tmax`: `float` : maximum temperature on current day (Celsius)
`Tmin`: `float` : minimum temperature on current day (Celsius)
`GrowingSeason`: `bool` : is growing season (True or False)
*Returns:*
`NewCond`: `InitCondClass` : InitCond object containing updated model parameters
"""
## Store initial conditions for updating ##
NewCond = InitCond
InitCond_HI = InitCond.HI
InitCond_HIadj = InitCond.HIadj
InitCond_PreAdj = InitCond.PreAdj
## Calculate harvest index build up (if in growing season) ##
if GrowingSeason == True:
# Calculate root zone water content
_,Dr,TAW,_ = root_zone_water(Soil_Profile,float(NewCond.Zroot),NewCond.th,Soil_zTop,float(Crop.Zmin),Crop.Aer)
# Check whether to use root zone or top soil depletions for calculating
# water stress
if (Dr.Rz/TAW.Rz) <= (Dr.Zt/TAW.Zt):
# Root zone is wetter than top soil, so use root zone value
Dr = Dr.Rz
TAW = TAW.Rz
else:
# Top soil is wetter than root zone, so use top soil values
Dr = Dr.Zt
TAW = TAW.Zt
# Calculate water stress
beta = True
Ksw = water_stress(Crop,NewCond,Dr,TAW,Et0,beta)
# Calculate temperature stress
Kst = temperature_stress(Crop,Tmax,Tmin)
# Get reference harvest index on current day
HIi = NewCond.HIref
# Get time for harvest index build-up
HIt = NewCond.DAP-NewCond.DelayedCDs-Crop.HIstartCD-1
# Calculate harvest index
if (NewCond.YieldForm == True) and (HIt >= 0):
#print(NewCond.DAP)
# Root/tuber or fruit/grain crops
if (Crop.CropType == 2) or (Crop.CropType == 3):
# Determine adjustment for water stress before anthesis
if InitCond_PreAdj == False:
InitCond.PreAdj = True
NewCond = HIadj_pre_anthesis(NewCond,Crop.dHI_pre)
# Determine adjustment for crop pollination failure
if Crop.CropType == 3: # Adjustment only for fruit/grain crops
if (HIt > 0) and (HIt <= Crop.FloweringCD):
NewCond.Fpol = HIadj_pollination(InitCond.CC,InitCond.Fpol,
Crop.FloweringCD,Crop.CCmin,Crop.exc,
Ksw,Kst,HIt)
HImax = NewCond.Fpol*Crop.HI0
else:
# No pollination adjustment for root/tuber crops
HImax = Crop.HI0
# Determine adjustments for post-anthesis water stress
if HIt > 0:
NewCond = HIadj_post_anthesis(NewCond,
Crop,Ksw)
# Limit HI to maximum allowable increase due to pre- and
# post-anthesis water stress combinations
HImult = NewCond.Fpre*NewCond.Fpost
if HImult > 1+(Crop.dHI0/100):
HImult = 1+(Crop.dHI0/100)
# Determine harvest index on current day, adjusted for stress
# effects
if HImax >= HIi:
HIadj = HImult*HIi
else:
HIadj = HImult*HImax
elif Crop.CropType == 1:
# Leafy vegetable crops - no adjustment, harvest index equal to
# reference value for current day
HIadj = HIi
else:
# No build-up of harvest index if outside yield formation period
HIi = InitCond_HI
HIadj = InitCond_HIadj
# Store final values for current time step
NewCond.HI = HIi
NewCond.HIadj = HIadj
else:
# No harvestable crop outside of a growing season
NewCond.HI = 0
NewCond.HIadj = 0
#print([NewCond.DAP , Crop.YldFormCD])
return NewCond
#hide
show_doc(harvest_index)
#hide
from nbdev.export import notebook2script
notebook2script()
| nbs/03_solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Enhancing performance of Pandas
#
# This demo is adapted from the example in [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/enhancingperf.html). We will investigate how to speed up certain functions operating on pandas DataFrames using three different techniques: Cython, Numba, Pythran (via `transonic`).
# +
import numpy as np
import pandas as pd
df = pd.DataFrame({'a': np.random.randn(1000),
'b': np.random.randn(1000),
'N': np.random.randint(100, 1000, (1000)),
'x': 'x'})
# -
# Here's the function in pure Python:
# +
from transonic import jit
def f(x):
return x * (x - 1)
def integrate_f(a, b, N):
s = 0
dx = (b - a) / N
for i in range(N):
s += f(a + i * dx)
return s * dx
# JIT functions for later use
# Note: f(x) will be automatically included in the modules
integrate_f_cython = jit(backend="cython")(integrate_f)
integrate_f_numba = jit(backend="numba")(integrate_f)
integrate_f_pythran = jit(backend="pythran")(integrate_f)
# -
# We achieve our result by using apply (row-wise):
# %timeit df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1)
# Let's see where the time is spent during this operation (limited to the four most time-consuming calls) using the `%prun` IPython magic:
# %prun -l 4 df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1)
# ## Enter: `transonic`
# +
from transonic import wait_for_all_extensions
from transonic.util import print_versions, timeit_verbose
print_versions()
# -
# ## Cython + transonic
# +
# warmup
df.apply(lambda x: integrate_f_cython(x['a'], x['b'], x['N']), axis=1)
wait_for_all_extensions()
# benchmark
# %timeit df.apply(lambda x: integrate_f_cython(x['a'], x['b'], x['N']), axis=1)
# -
# ## Numba + transonic
# +
# warmup
df.apply(lambda x: integrate_f_numba(x['a'], x['b'], x['N']), axis=1)
wait_for_all_extensions()
# benchmark
# %timeit df.apply(lambda x: integrate_f_numba(x['a'], x['b'], x['N']), axis=1)
# -
# ## Pythran + transonic
# +
# warmup
df.apply(lambda x: integrate_f_pythran(x['a'], x['b'], x['N']), axis=1)
wait_for_all_extensions()
# benchmark
# %timeit df.apply(lambda x: integrate_f_pythran(x['a'], x['b'], x['N']), axis=1)
# -
# ## Cython + types + transonic
# +
# %%file _pandas_cython_boost.py
from transonic import boost
@boost(backend="cython", inline=True)
def f_typed(x: float):
return x * (x - 1)
@boost(backend="cython")
def integrate_f_typed(a: float, b: float, N: int):
i: int
s: float
dx: float = (b - a) / N
s = 0
for i in range(N):
s += f_typed(a + i * dx)
return s * dx
# -
# !transonic -b cython _pandas_cython_boost.py
# +
from transonic import set_compile_at_import, wait_for_all_extensions
set_compile_at_import(True)
# +
from _pandas_cython_boost import integrate_f_typed
wait_for_all_extensions()
# benchmark
# %timeit df.apply(lambda x: integrate_f_typed(x['a'], x['b'], x['N']), axis=1)
| pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### 15.1 What is a database?
# ### 15.2 Database concepts
# ### 15.3 Database Browser for SQLite
# http://sqlitebrowser.org/
# ### 15.4 Creating a database table
# +
import sqlite3
conn = sqlite3.connect('database/music.sqlite')
cur = conn.cursor()
cur.execute('DROP TABLE IF EXISTS TRACKS')
cur.execute('CREATE TABLE Tracks(title TEXT, plays INTEGER)')
conn.close()
# +
import sqlite3
conn = sqlite3.connect('database/music.sqlite')
cur = conn.cursor()
cur.execute('INSERT INTO Tracks (title, plays) VALUES (?, ?)',
('Thunderstruck', 20))
cur.execute('INSERT INTO Tracks (title, plays) VALUES (?, ?)',
('My Way', 15))
conn.commit()
print('Tracks:')
cur.execute('SELECT title, plays FROM Tracks')
for row in cur:
print(row)
cur.execute('DELETE FROM Tracks WHERE plays < 100')
conn.commit()
cur.close()
# -
# ### 15.5 Structured Query Language summary
# - Create a table
# ```sql
# CREATE TABLE Users(
# name VARCHAR(128),
# email VARCHAR(128)
# )
# ```
# - Inserts a row into a table
# ```sql
# INSERT INTO Users (name, email) VALUES ('Kristin', '<EMAIL>')
# ```
# - Deletes a row from a table based on a selection criterion
# ```sql
# DELETE FROM Users WHERE email='<EMAIL>'
# ```
# - Updates a field, with a WHERE clause
# ```sql
# UPDATE Users SET name='Charles' WHERE email='<EMAIL>'
# ```
# - Retrieves a group of records
# ```sql
# SELECT * FROM Users
# SELECT * FROM Users WHERE email='<EMAIL>'
# ```
# - Sorting with ORDER BY
# ```sql
# SELECT * FROM Users ORDER BY email
# ```
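The statements above can be exercised end-to-end with Python's built-in `sqlite3` module on an in-memory database (the names and e-mail addresses below are made-up sample data):

```python
import sqlite3

# in-memory database; sample rows are invented for illustration
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE Users (name VARCHAR(128), email VARCHAR(128))')
cur.executemany('INSERT INTO Users (name, email) VALUES (?, ?)',
                [('Kristin', 'kristin@example.com'),
                 ('Chuck', 'chuck@example.com')])
# UPDATE with a WHERE clause, then DELETE by selection criterion
cur.execute("UPDATE Users SET name = 'Charles' WHERE email = ?",
            ('chuck@example.com',))
cur.execute('DELETE FROM Users WHERE email = ?', ('kristin@example.com',))
rows = list(cur.execute('SELECT name, email FROM Users ORDER BY email'))
print(rows)
```

Note the `?` placeholders: passing values as a tuple lets `sqlite3` do the quoting, instead of building SQL strings by hand.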
# ### 15.6 Spidering Twitter using a database (Pass)
# ### 15.7 Basic data modeling
#
# - Basic Rule: Don't put the same string data in twice - use a relationship instead
#
# ### 15.8 Programming with multiple tables
# ### 15.8.1 Constraints in database tables
# ```sql
# CREATE TABLE "Artist" (
# "id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
# "name" TEXT
# );
#
# CREATE TABLE "Genre" (
# "id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
# "name" TEXT
# );
#
# CREATE TABLE "Album" (
# "id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
# "artist_id" INTEGER,
# "title" TEXT
# );
#
# CREATE TABLE "Track" (
# "id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
# "title" TEXT,
# "album_id" INTEGER,
# "genre_id" INTEGER,
# "len" INTEGER,
# "rating" INTEGER,
# "count" INTEGER
# );
# ```
# ### 15.8.2 Retrieve and/or insert a record
# ```sql
# INSERT INTO Artist (name) VALUES ('<NAME>');
# INSERT INTO Artist (name) VALUES ('AC/DC');
# INSERT INTO Genre (name) VALUES ('Rock');
# INSERT INTO Genre (name) VALUES ('Metal');
# INSERT INTO Album (title,artist_id) VALUES ('Who Made Who',2);
# INSERT INTO Album (title,artist_id) VALUES ('IV',1);
# INSERT INTO Track (title, rating, len, count, album_id, genre_id)
# VALUES ('Black Dog', 5, 297, 0, 2, 1) ;
# INSERT INTO Track (title, rating, len, count, album_id, genre_id)
# VALUES ('Stairway', 5, 482, 0, 2, 1) ;
# INSERT INTO Track (title, rating, len, count, album_id, genre_id)
# VALUES ('About to Rock', 5, 313, 0, 1, 2) ;
# INSERT INTO Track (title, rating, len, count, album_id, genre_id)
# VALUES ('Who Made Who', 5, 207, 0, 1, 2) ;
# ```
# ### 15.8.3 Storing the friend relationship
# ```sql
# SELECT Album.title, Artist.name FROM Album JOIN Artist
# ON Album.artist_id = Artist.id
#
# SELECT Album.title, Album.artist_id, Artist.id, Artist.name
# FROM Album JOIN Artist ON Album.artist_id = Artist.id;
#
# SELECT Track.title, Track.genre_id, Genre.id, Genre.name
# FROM Track JOIN Genre
#
# SELECT Track.title, Genre.name FROM Track JOIN Genre
# ON Track.genre_id = Genre.id
#
# SELECT Track.title, Artist.name, Album.title, Genre.name
# FROM Track JOIN Genre JOIN Album JOIN Artist
# ON Track.genre_id = Genre.id AND Track.album_id = Album.id
# AND Album.artist_id = Artist.id
# ```
# ### 15.9 Three kinds of keys
#
# - Primary Key : an integer `id` used internally to reference a row
# - Foreign Key : a column in a child table (e.g. `artist_id`) that points to the `id` of a parent table
# - Logical Key : what the outside world uses to look up a row (e.g. name, email)
#
# ### 15.10 Using JOIN to retrieve data
# ### 15.11 Summary
# ### 15.12 Debugging
# ### 15.13 Glossary
# ### 15.14 Many-to-Many Relationships
# ### 15.14.1 Create Table
# ```sql
# CREATE TABLE User (
# id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
# name TEXT UNIQUE,
# email TEXT
# ) ;
#
# CREATE TABLE Course (
# id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
# title TEXT UNIQUE
# ) ;
#
# CREATE TABLE Member (
# user_id INTEGER,
# course_id INTEGER,
# role INTEGER,
# PRIMARY KEY (user_id, course_id)
# ) ;
#
# ```
# ### 15.14.2 Insert contents
# ```sql
# INSERT INTO User (name, email) VALUES ('Jane', '<EMAIL>');
# INSERT INTO User (name, email) VALUES ('Ed', '<EMAIL>');
# INSERT INTO User (name, email) VALUES ('Sue', '<EMAIL>');
#
# INSERT INTO Course (title) VALUES ('Python');
# INSERT INTO Course (title) VALUES ('SQL');
# INSERT INTO Course (title) VALUES ('PHP');
#
# INSERT INTO Member (user_id, course_id, role) VALUES (1, 1, 1);
# INSERT INTO Member (user_id, course_id, role) VALUES (2, 1, 0);
# INSERT INTO Member (user_id, course_id, role) VALUES (3, 1, 0);
#
# INSERT INTO Member (user_id, course_id, role) VALUES (1, 2, 0);
# INSERT INTO Member (user_id, course_id, role) VALUES (2, 2, 1);
#
# INSERT INTO Member (user_id, course_id, role) VALUES (2, 3, 1);
# INSERT INTO Member (user_id, course_id, role) VALUES (3, 3, 0);
# ```
# ### 15.14.3 Build Relationship
# ```sql
# SELECT User.name, Member.role, Course.title
# FROM User JOIN Member JOIN Course
# ON Member.user_id = User.id AND Member.course_id = Course.id
# ORDER BY Course.title, Member.role DESC, User.name
# ```
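The three-table membership join can be run directly with `sqlite3` on an in-memory database; the schema follows 15.14.1 (without AUTOINCREMENT), and the users and e-mails are made-up sample data:

```python
import sqlite3

# build a minimal many-to-many schema and run the membership join
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.executescript('''
CREATE TABLE User (id INTEGER PRIMARY KEY, name TEXT UNIQUE, email TEXT);
CREATE TABLE Course (id INTEGER PRIMARY KEY, title TEXT UNIQUE);
CREATE TABLE Member (user_id INTEGER, course_id INTEGER, role INTEGER,
                     PRIMARY KEY (user_id, course_id));
INSERT INTO User (name, email) VALUES ('Jane', 'jane@example.com');
INSERT INTO User (name, email) VALUES ('Ed', 'ed@example.com');
INSERT INTO Course (title) VALUES ('Python');
INSERT INTO Member (user_id, course_id, role) VALUES (1, 1, 1);
INSERT INTO Member (user_id, course_id, role) VALUES (2, 1, 0);
''')
rows = list(cur.execute('''
    SELECT User.name, Member.role, Course.title
    FROM User JOIN Member JOIN Course
    ON Member.user_id = User.id AND Member.course_id = Course.id
    ORDER BY Course.title, Member.role DESC, User.name'''))
print(rows)
```

With `role DESC`, the instructor (role 1) sorts before the student (role 0) within each course.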
# ### Exercise
# +
# web2
import sqlite3
conn = sqlite3.connect('database/emaildb2.sqlite')
cur = conn.cursor()
cur.execute('DROP TABLE IF EXISTS Counts')
cur.execute('''
CREATE TABLE Counts (org TEXT, count INTEGER)''')
fname = input('Enter file name: ')
if (len(fname) < 1): fname = 'files/mbox.txt'
fh = open(fname)
for line in fh:
if not line.startswith('From: '): continue
pieces = line.rstrip().split('@')
org = pieces[1]
cur.execute('SELECT count FROM Counts WHERE org = ? ', (org,))
row = cur.fetchone()
if row is None:
cur.execute('''INSERT INTO Counts (org, count)
VALUES (?, 1)''', (org,))
else:
cur.execute('UPDATE Counts SET count = count + 1 WHERE org = ?',
(org,))
conn.commit()
# https://www.sqlite.org/lang_select.html
sqlstr = 'SELECT org, count FROM Counts ORDER BY count DESC LIMIT 10'
for row in cur.execute(sqlstr):
print(str(row[0]), row[1])
cur.close()
# +
# web3
import xml.etree.ElementTree as ET
import sqlite3
conn = sqlite3.connect('database/trackdb.sqlite')
cur = conn.cursor()
# Make some fresh tables using executescript()
cur.executescript('''
DROP TABLE IF EXISTS Artist;
DROP TABLE IF EXISTS Genre;
DROP TABLE IF EXISTS Album;
DROP TABLE IF EXISTS Track;
CREATE TABLE Artist (
id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
name TEXT UNIQUE
);
CREATE TABLE Genre (
id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
name TEXT UNIQUE
);
CREATE TABLE Album (
id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
artist_id INTEGER,
title TEXT UNIQUE
);
CREATE TABLE Track (
id INTEGER NOT NULL PRIMARY KEY
AUTOINCREMENT UNIQUE,
title TEXT UNIQUE,
album_id INTEGER,
genre_id INTEGER,
len INTEGER, rating INTEGER, count INTEGER
);
''')
fname = input('Enter file name: ')
if ( len(fname) < 1 ) : fname = 'files/Library.xml'
# <key>Track ID</key><integer>369</integer>
# <key>Name</key><string>Another One Bites The Dust</string>
# <key>Artist</key><string>Queen</string>
def lookup(d, key):
found = False
for child in d:
if found : return child.text
if child.tag == 'key' and child.text == key :
found = True
return None
stuff = ET.parse(fname)
all = stuff.findall('dict/dict/dict')
print('Dict count:', len(all))
for entry in all:
if ( lookup(entry, 'Track ID') is None ) : continue
name = lookup(entry, 'Name')
artist = lookup(entry, 'Artist')
album = lookup(entry, 'Album')
genre = lookup(entry,'Genre')
count = lookup(entry, 'Play Count')
rating = lookup(entry, 'Rating')
length = lookup(entry, 'Total Time')
if name is None or artist is None or album is None or genre is None:
continue
#print(name, artist, album, genre, count, rating, length)
cur.execute('''INSERT OR IGNORE INTO Artist (name)
VALUES ( ? )''', ( artist, ) )
cur.execute('SELECT id FROM Artist WHERE name = ? ', (artist, ))
artist_id = cur.fetchone()[0]
cur.execute('''INSERT OR IGNORE INTO Genre (name)
VALUES ( ? )''', ( genre, ) )
cur.execute('SELECT id FROM Genre WHERE name = ? ', (genre, ))
genre_id = cur.fetchone()[0]
cur.execute('''INSERT OR IGNORE INTO Album (title, artist_id)
VALUES ( ?, ? )''', ( album, artist_id ) )
cur.execute('SELECT id FROM Album WHERE title = ? ', (album, ))
album_id = cur.fetchone()[0]
cur.execute('''INSERT OR REPLACE INTO Track
(title, album_id, genre_id, len, rating, count)
VALUES ( ?, ?, ?, ?, ?,? )''',
( name, album_id, genre_id, length, rating, count,) )
conn.commit()
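# The `lookup()` helper above walks the flat key/value children of a plist-style `<dict>`. A self-contained sketch on a tiny hand-written fragment (not the real Library.xml):

```python
import xml.etree.ElementTree as ET

# Tiny plist-style fragment like the one lookup() walks through.
xml = '''<dict>
  <key>Track ID</key><integer>369</integer>
  <key>Name</key><string>Another One Bites The Dust</string>
  <key>Artist</key><string>Queen</string>
</dict>'''

def lookup(d, key):
    # Return the text of the element that directly follows <key>key</key>.
    found = False
    for child in d:
        if found:
            return child.text
        if child.tag == 'key' and child.text == key:
            found = True
    return None

entry = ET.fromstring(xml)
print(lookup(entry, 'Artist'))   # Queen
print(lookup(entry, 'Album'))    # None
```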
# +
# web4
import json
import sqlite3
conn = sqlite3.connect('database/rosterdb.sqlite')
cur = conn.cursor()
# Do some setup
cur.executescript('''
DROP TABLE IF EXISTS User;
DROP TABLE IF EXISTS Member;
DROP TABLE IF EXISTS Course;
CREATE TABLE User (
id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
name TEXT UNIQUE
);
CREATE TABLE Course (
id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
title TEXT UNIQUE
);
CREATE TABLE Member (
user_id INTEGER,
course_id INTEGER,
role INTEGER,
PRIMARY KEY (user_id, course_id)
)
''')
fname = input('Enter file name: ')
if len(fname) < 1:
fname = 'files/roster_data.json'
# [
# [ "Charley", "si110", 1 ],
# [ "Mea", "si110", 0 ],
str_data = open(fname).read()
json_data = json.loads(str_data)
for entry in json_data:
name = entry[0]
title = entry[1]
role = entry[2]
#print((name, title))
cur.execute('''INSERT OR IGNORE INTO User (name)
VALUES ( ? )''', ( name, ) )
cur.execute('SELECT id FROM User WHERE name = ? ', (name, ))
user_id = cur.fetchone()[0]
cur.execute('''INSERT OR IGNORE INTO Course (title)
VALUES ( ? )''', ( title, ) )
cur.execute('SELECT id FROM Course WHERE title = ? ', (title, ))
course_id = cur.fetchone()[0]
cur.execute('''INSERT OR REPLACE INTO Member
(user_id, course_id, role) VALUES ( ?, ?, ?)''',
( user_id, course_id,role ) )
conn.commit()
| Python4Every1/Chapter_15_Using_database_and_SQL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
## Classification Algorithms
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import LabelEncoder, StandardScaler, OneHotEncoder
import pandas as pd
import numpy as np
# importing ploting libraries
import matplotlib.pyplot as plt
# To enable plotting graphs in Jupyter notebook
# %matplotlib inline
#importing seaborn for statistical plots
import seaborn as sns
# Libraries for constructing Pipelines
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.model_selection import train_test_split, GridSearchCV, KFold, cross_val_score
# Library for Normalization of Numerical Data
from scipy.stats import zscore
# calculate accuracy measures and confusion matrix
from sklearn import metrics
from sklearn.metrics import confusion_matrix
# Suppress warnings
import warnings
warnings.filterwarnings("ignore")
# -
Data = 'german_credit_data.csv'
credit = pd.read_csv(Data, header = 0, names = ['Index', 'Age', 'Sex', 'Job', 'Housing', 'Saving accounts',
'Checking account', 'Credit amount', 'Duration', 'Purpose', 'default'])
credit.head()
credit['default'].value_counts()
credit.info()
credit['Saving accounts'] = credit['Saving accounts'].fillna(value = 'NA')
print(credit['Saving accounts'].value_counts())
credit['Checking account'] = credit['Checking account'].fillna(value = 'NA')
credit['Checking account'].value_counts()
sns.countplot(x= 'Sex', data = credit, hue= 'default')
sns.countplot(x= 'Job', data = credit, hue= 'default')
sns.countplot(x= 'Housing', data = credit, hue= 'default')
sns.countplot(x= 'Saving accounts', data = credit, hue= 'default')
print("\nLittle savings :\n",credit[credit['Saving accounts'] == 'little']['default'].value_counts().to_frame())
print("\nModerate savings :\n", credit[credit['Saving accounts'] == 'moderate']['default'].value_counts().to_frame())
sns.countplot(x= 'Checking account', data = credit, hue= 'default')
print("\nLittle checking balance :\n",credit[credit['Checking account'] == 'little']['default'].value_counts().to_frame())
print("\nModerate checking balance :\n",credit[credit['Checking account'] == 'moderate']['default'].value_counts().to_frame())
# +
# 'Saving Account'
credit['Saving accounts']= credit['Saving accounts'].map({'little': 'little', 'moderate': 'moderate', 'quite rich':'other','rich':'other', 'NA':'other' })
# +
# 'Checking Account'
credit['Checking account']= credit['Checking account'].map({'little': 'little', 'moderate': 'moderate','rich':'other', 'NA':'other' })
# -
## Label-encoding the Purpose column
le = LabelEncoder()
credit['Purpose'] = le.fit_transform(credit['Purpose'])
print("The various purposes are: ", le.classes_.tolist(), "\nAnd the label-encoded numbers for the same are", credit['Purpose'].unique().tolist())
credit['default'] = credit['default'].map({'no':0, 'yes': 1})
credit['default'].value_counts()
from sklearn.utils import resample
credit_majority = credit[credit.default == 0]
credit_minority = credit[credit.default == 1]
# +
credit_minority_upsampled = resample(credit_minority, replace = True, n_samples = 600, random_state = 666)
## Combine classes
credit_upscaled = pd.concat([credit_majority, credit_minority_upsampled])
# -
credit_upscaled.default.value_counts()
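# Upsampling the minority class simply means sampling it with replacement until both classes are the same size; `sklearn.utils.resample` used above is roughly equivalent to `DataFrame.sample` with `replace=True`. A sketch on a made-up toy frame:

```python
import pandas as pd

# Toy imbalanced frame: 6 majority rows (default=0), 2 minority rows (default=1).
toy = pd.DataFrame({'default': [0] * 6 + [1] * 2, 'x': range(8)})

majority = toy[toy.default == 0]
minority = toy[toy.default == 1]

# Sample the minority class with replacement up to the majority size.
minority_up = minority.sample(n=len(majority), replace=True, random_state=666)
balanced = pd.concat([majority, minority_up])

# Both classes now have 6 rows each.
print(balanced['default'].value_counts())
```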
x= credit_upscaled[['Age', 'Sex', 'Job', 'Housing', 'Saving accounts',
'Checking account', 'Credit amount', 'Duration', 'Purpose']]
y = credit_upscaled['default']
# Creating a copy to avoid corruption of Data.
x1 = x.copy()
# +
# List to store Categorical Columns
cat_cols = list(x1.columns[x1.dtypes == 'object'])
print("Categorical Columns: ",cat_cols)
# List to store Numerical Columns
num_cols = list(x1.columns[x1.dtypes != 'object'])
print("\nNumerical Columns:" ,num_cols)
## One-Hot Encoding Categorical Columns
x1_dummy = pd.get_dummies(x1[cat_cols], drop_first=True)
## Joining New dummified and Numerical columns
x_new = pd.concat([x1_dummy, x1[num_cols]], axis=1, join='inner')
#### Normalizing the Dataset
ss = StandardScaler()
x_normal = ss.fit_transform(x_new)
# -
SEED = 666
x_int, x_test, y_int, y_test = train_test_split(x_normal, y, test_size=100, stratify=y, random_state = SEED)
x_train,x_val,y_train,y_val = train_test_split(x_int, y_int, test_size=100, stratify = y_int, random_state = SEED)
# print proportions
print('train: {}% | Validation: {}% | Test: {}%'.format( round(len(y_train)/len(y),2),
round(len(y_val)/len(y) ,2),
round(len(y_test)/len(y),2) ) )
# +
models = []
models.append(('LR', LogisticRegression()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('SGD', SGDClassifier()))
models.append(('DT', DecisionTreeClassifier()))
models.append(('SVC', SVC()))
models.append(('RF', RandomForestClassifier()))
models.append(('Ada', AdaBoostClassifier()))
models.append(('Grad', GradientBoostingClassifier()))
# Model Evaluation
result = []
model_names = []
scoring = ['accuracy', 'recall', 'precision', 'roc_auc']
for model_name, model in models:
    kfold = KFold(n_splits=10, shuffle=True, random_state=SEED)
cv_results1 = cross_val_score(model, x_train, y_train, cv = kfold, scoring=scoring[0])
cv_results2 = cross_val_score(model, x_train, y_train, cv = kfold, scoring=scoring[1])
cv_results3 = cross_val_score(model, x_train, y_train, cv = kfold, scoring=scoring[2])
cv_results4 = cross_val_score(model, x_train, y_train, cv = kfold, scoring=scoring[3])
model_names.append(model_name)
msg = "%s:\n ACCURACY = %f, RECALL=(%f), PRECISION=(%f), ROC-AUC=(%f)" % (model_name, cv_results1.mean(),cv_results2.mean(), cv_results3.mean(), cv_results4.mean())
print(msg)
# +
lr = LogisticRegression()
model = lr.fit(x_train, y_train)
model.score(x_val, y_val)
# +
# Fitting the model using the intermediate dataset.
model2 = lr.fit(x_int, y_int)
model2.score(x_test, y_test)
# +
# Predicted probability of each class.
y_pred_prob = model2.predict_proba(x_test)
# Predicted value of each class
y_pred = model2.predict(x_test)
# -
cMatrix = confusion_matrix(y_test, y_pred)
print(cMatrix)
print("Loans which were falsely classified as DEFAULT = %.1f Percent" %(cMatrix[0][1]/ sum(sum(cMatrix))*100 ) )
print("Loans which were falsely classified as NOT-DEFAULT = %.1f Percent"%(cMatrix[1][0]/ sum(sum(cMatrix))*100 ) )
print("Loans which were truly classified as DEFAULT = %.1f Percent"% (cMatrix[1][1]/ sum(sum(cMatrix))*100 ) )
print("Loans which were truly classified as NOT-DEFAULT = %.1f Percent"% (cMatrix[0][0]/ sum(sum(cMatrix))*100 ))
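# The four percentages printed from each confusion matrix can be computed in one place. A sketch with a hypothetical `cm_percentages()` helper and a made-up 2x2 matrix (sklearn layout: rows = actual, columns = predicted):

```python
import numpy as np

def cm_percentages(cm):
    """Return TN, FP, FN, TP as percentages of all samples (sklearn layout)."""
    total = int(cm.sum())
    tn, fp, fn, tp = (int(v) for v in cm.ravel())
    return {name: round(count / total * 100, 1)
            for name, count in zip(('TN', 'FP', 'FN', 'TP'), (tn, fp, fn, tp))}

# Made-up 2x2 matrix: rows = actual (0, 1), columns = predicted (0, 1).
cm = np.array([[50, 10], [15, 25]])
print(cm_percentages(cm))   # {'TN': 50.0, 'FP': 10.0, 'FN': 15.0, 'TP': 25.0}
```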
# +
LRPipeline1 = Pipeline([( 'LogReg', LogisticRegression(random_state=SEED)) ])
params = dict({ 'LogReg__penalty': ['l1'],'LogReg__C': [0.001,0.01,0.1,0.5,0.9,1,3,5,10], 'LogReg__tol': [ 1e-4, 1e-3, 1e-2, 1e-1, 1, 1e2 ], 'LogReg__solver': ['liblinear','saga']})
GSLR1 = GridSearchCV(LRPipeline1, params, cv=10, scoring='accuracy')
# -
GSLR1.fit(x_train,y_train)
GSLR1.score(x_val, y_val)
# Fetching the best parameters for Model building
GSLR1.best_params_
# Fitting the model using the intermediate dataset.
GSLR1.fit(x_int,y_int)
# +
# Model Accuracy on the Test Set
GSLR1.score(x_test, y_test)
# +
# Predicted probability of each class.
y_pred_prob1 = GSLR1.predict_proba(x_test)
# Predicted value of each class
y_pred1 = GSLR1.predict(x_test)
# -
cMatrix1 = confusion_matrix(y_test, y_pred1)
print(cMatrix1)
print("Loans which were falsely classified as DEFAULT = %.1f Percent" %(cMatrix1[0][1]/ sum(sum(cMatrix1))*100 ) )
print("Loans which were falsely classified as NOT-DEFAULT = %.1f Percent"%(cMatrix1[1][0]/ sum(sum(cMatrix1))*100 ) )
print("Loans which were truly classified as DEFAULT = %.1f Percent"% (cMatrix1[1][1]/ sum(sum(cMatrix1))*100 ) )
print("Loans which were truly classified as NOT-DEFAULT = %.1f Percent"% (cMatrix1[0][0]/ sum(sum(cMatrix1))*100 ))
# +
LRPipeline2 = Pipeline([( 'LogReg', LogisticRegression(random_state=SEED)) ])
params = dict({'LogReg__max_iter':[100,200,300,400,500] ,'LogReg__penalty': ['l2'],'LogReg__C': [0.01,0.1,0.5,0.9,1,5,10], 'LogReg__tol': [ 1e-4, 1e-3, 1e-2, 1e-1, 1, 1e2 ], 'LogReg__solver': ['newton-cg','sag','lbfgs']})
GSLR2 = GridSearchCV(LRPipeline2, params, cv=10, scoring='accuracy')
# -
GSLR2.fit(x_train,y_train)
# Fetching the best parameters for Model building
GSLR2.best_params_
# Fitting the model using the intermediate dataset.
GSLR2.fit(x_int,y_int)
# +
# Model Accuracy on the Test Set
GSLR2.score(x_test, y_test)
# +
# Predicted probability of each class.
y_pred_prob2 = GSLR2.predict_proba(x_test)
# Predicted value of each class
y_pred2 = GSLR2.predict(x_test)
# -
cMatrix2 = confusion_matrix(y_test, y_pred2)
print(cMatrix2)
print("Loans which were falsely classified as DEFAULT = %.1f Percent" %(cMatrix2[0][1]/ sum(sum(cMatrix2))*100 ) )
print("Loans which were falsely classified as NOT-DEFAULT = %.1f Percent"%(cMatrix2[1][0]/ sum(sum(cMatrix2))*100 ) )
print("Loans which were truly classified as DEFAULT = %.1f Percent"% (cMatrix2[1][1]/ sum(sum(cMatrix2))*100 ) )
print("Loans which were truly classified as NOT-DEFAULT = %.1f Percent"% (cMatrix2[0][0]/ sum(sum(cMatrix2))*100 ))
# Values taken from section 5.4.1
finalModel = LogisticRegression(penalty='l1', solver='liblinear', tol=0.1, C=5)
finalModel.fit(x_int, y_int)
# +
scoreTrain = finalModel.score(x_val, y_val)
scoreTest = finalModel.score(x_test,y_test)
print("The Accuracy of the model on the Train Set is: %.1f " % (scoreTrain * 100))
print("The Accuracy of the model on the Test Set is: %.1f " % (scoreTest * 100))
# +
# Predicted probability of each class.
y_pred_prob_final = finalModel.predict_proba(x_test)
# Predicted value of each class
y_pred_final = finalModel.predict(x_test)
# Predicted Probability of class '0' i.e., not a Fraud Transaction.
y_zero = pd.Series(y_pred_prob_final[:,0])
# Mapping the predicted probability higher than 0.689 to class 0 i.e., Not-Fraud class.
y_pred_optimum = y_zero.map(lambda x: 0 if x>0.689 else 1)
cMatrix = confusion_matrix(y_test, y_pred_optimum)
print(cMatrix)
print("Loans which were falsely classified as DEFAULT = %.1f Percent" %(cMatrix[0][1]/ sum(sum(cMatrix))*100 ) )
print("Loans which were falsely classified as NOT-DEFAULT = %.1f Percent"%(cMatrix[1][0]/ sum(sum(cMatrix))*100 ) )
print("Loans which were truly classified as DEFAULT = %.1f Percent"% (cMatrix[1][1]/ sum(sum(cMatrix))*100 ) )
print("Loans which were truly classified as NOT-DEFAULT = %.1f Percent"% (cMatrix[0][0]/ sum(sum(cMatrix))*100 ))
# -
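# Moving the decision threshold away from the default 0.5 only changes how predicted probabilities are mapped to classes. A sketch with made-up class-0 probabilities and the same 0.689 cut-off used above:

```python
import numpy as np

# Hypothetical predicted probabilities of class 0 for five samples.
p_zero = np.array([0.95, 0.70, 0.689, 0.40, 0.10])

# Predict class 0 only when its probability exceeds the chosen cut-off,
# otherwise fall back to class 1.
threshold = 0.689
y_pred = np.where(p_zero > threshold, 0, 1)
print(y_pred.tolist())   # [0, 0, 1, 1, 1]
```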
# ### Other Variables
plt.scatter (credit['Credit amount'],credit["Age"])
plt.figure()
sns.pairplot(credit)
plt.scatter(credit['Credit amount'],credit["Duration"])
plt.figure()
plt.scatter(credit['Saving accounts'],credit["Duration"])
plt.figure()
ax = credit.Age.hist(bins=60)
ax.set_xlabel('Age')
ax.set_ylabel('Frequency')
# ### Other Variables
credit = pd.read_csv('german.data.txt', delim_whitespace=True, names=["Checking_account_status","Month","Credit_history","Credit_Purpose",
"Credit_amount",
"Savings", "Employment_period", "Installment_rate",
"Sex_Marital", "other_debtors", "Residence_period",
"Property", "Age", "OtherInstallment",
"Housing", "ExistCredits", "Job",
"Liability", "Phone", "Foreign", "Predict"])
credit.Predict.value_counts()
corr_pearson = credit.corr(method='pearson')
sns.heatmap(corr_pearson,annot = True)
corr_kendall = credit.corr(method = 'kendall')
sns.heatmap(corr_kendall, annot = True)
corr_spearman = credit.corr(method = 'spearman')
sns.heatmap(corr_spearman, annot = True)
credit.describe()
# ### Other Variables
| German 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ''
# language: python
# name: ''
# ---
# # Unit commitment
#
#
# This tutorial runs through examples of unit commitment for generators at a single bus. Examples of minimum part-load, minimum up time, minimum down time, start up costs, shut down costs and ramp rate restrictions are shown.
#
# To enable unit commitment on a generator, set its attribute committable = True.
import pypsa
import pandas as pd
# ## Minimum part load demonstration
#
# In the final hour the load drops below the coal generator's minimum part-load (30% of its capacity), forcing the gas generator to commit.
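# A little arithmetic (plain Python, not PyPSA) shows why: coal's minimum stable output is p_min_pu * p_nom = 3000 MW, so any load below that must be served by gas instead:

```python
# Minimum stable outputs implied by p_min_pu * p_nom:
coal_min = 0.3 * 10000   # coal cannot run below 3000 MW once committed
gas_min = 0.1 * 1000     # gas cannot run below 100 MW once committed

dispatch = []
for load in [4000, 6000, 5000, 800]:
    # Coal (the cheaper unit) can serve the hour alone only at or above its minimum.
    dispatch.append('coal' if load >= coal_min else 'gas')
print(dispatch)   # ['coal', 'coal', 'coal', 'gas']
```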
# +
nu = pypsa.Network(snapshots=range(4))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
committable=True,
p_min_pu=0.3,
marginal_cost=20,
p_nom=10000,
)
nu.add(
"Generator",
"gas",
bus="bus",
committable=True,
marginal_cost=70,
p_min_pu=0.1,
p_nom=1000,
)
nu.add("Load", "load", bus="bus", p_set=[4000, 6000, 5000, 800])
# -
nu.lopf()
nu.generators_t.status
nu.generators_t.p
# ## Minimum up time demonstration
#
# Gas has a minimum up time, forcing it to stay online longer.
# +
nu = pypsa.Network(snapshots=range(4))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
committable=True,
p_min_pu=0.3,
marginal_cost=20,
p_nom=10000,
)
nu.add(
"Generator",
"gas",
bus="bus",
committable=True,
marginal_cost=70,
p_min_pu=0.1,
up_time_before=0,
min_up_time=3,
p_nom=1000,
)
nu.add("Load", "load", bus="bus", p_set=[4000, 800, 5000, 3000])
# -
nu.lopf()
nu.generators_t.status
nu.generators_t.p
# ## Minimum down time demonstration
#
# Coal has a minimum down time, forcing it to stay offline longer.
# +
nu = pypsa.Network(snapshots=range(4))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
committable=True,
p_min_pu=0.3,
marginal_cost=20,
min_down_time=2,
down_time_before=1,
p_nom=10000,
)
nu.add(
"Generator",
"gas",
bus="bus",
committable=True,
marginal_cost=70,
p_min_pu=0.1,
p_nom=4000,
)
nu.add("Load", "load", bus="bus", p_set=[3000, 800, 3000, 8000])
# -
nu.lopf()
nu.objective
nu.generators_t.status
nu.generators_t.p
# ## Start up and shut down costs
#
# Now starting up and shutting down the generators incur costs.
# +
nu = pypsa.Network(snapshots=range(4))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
committable=True,
p_min_pu=0.3,
marginal_cost=20,
min_down_time=2,
start_up_cost=5000,
p_nom=10000,
)
nu.add(
"Generator",
"gas",
bus="bus",
committable=True,
marginal_cost=70,
p_min_pu=0.1,
shut_down_cost=25,
p_nom=4000,
)
nu.add("Load", "load", bus="bus", p_set=[3000, 800, 3000, 8000])
# -
nu.lopf(nu.snapshots)
nu.objective
nu.generators_t.status
nu.generators_t.p
# ## Ramp rate limits
# +
nu = pypsa.Network(snapshots=range(6))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
marginal_cost=20,
ramp_limit_up=0.1,
ramp_limit_down=0.2,
p_nom=10000,
)
nu.add("Generator", "gas", bus="bus", marginal_cost=70, p_nom=4000)
nu.add("Load", "load", bus="bus", p_set=[4000, 7000, 7000, 7000, 7000, 3000])
# -
nu.lopf()
nu.generators_t.p
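# Per-unit ramp limits bound how far a generator's output can move between consecutive snapshots: the next output must lie in [p_prev - ramp_limit_down * p_nom, p_prev + ramp_limit_up * p_nom], clipped to [0, p_nom]. A plain-Python sketch (not PyPSA) with a hypothetical helper:

```python
def feasible_band(p_prev, p_nom, ramp_up, ramp_down):
    """Output range reachable in the next snapshot given per-unit ramp limits."""
    return (max(0.0, p_prev - ramp_down * p_nom),
            min(p_nom, p_prev + ramp_up * p_nom))

# Coal above: p_nom=10000, ramp_limit_up=0.1, ramp_limit_down=0.2.
print(feasible_band(4000, 10000, 0.1, 0.2))   # (2000.0, 5000.0)
```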
# +
nu = pypsa.Network(snapshots=range(6))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
marginal_cost=20,
ramp_limit_up=0.1,
ramp_limit_down=0.2,
p_nom_extendable=True,
capital_cost=1e2,
)
nu.add("Generator", "gas", bus="bus", marginal_cost=70, p_nom=4000)
nu.add("Load", "load", bus="bus", p_set=[4000, 7000, 7000, 7000, 7000, 3000])
# -
nu.lopf(nu.snapshots)
nu.generators.p_nom_opt
nu.generators_t.p
# +
nu = pypsa.Network(snapshots=range(7))
nu.add("Bus", "bus")
# Can get bad interactions if SU > RU and p_min_pu; similarly if SD > RD
nu.add(
"Generator",
"coal",
bus="bus",
marginal_cost=20,
committable=True,
p_min_pu=0.05,
initial_status=0,
ramp_limit_start_up=0.1,
ramp_limit_up=0.2,
ramp_limit_down=0.25,
ramp_limit_shut_down=0.15,
p_nom=10000.0,
)
nu.add("Generator", "gas", bus="bus", marginal_cost=70, p_nom=10000)
nu.add("Load", "load", bus="bus", p_set=[0.0, 200.0, 7000, 7000, 7000, 2000, 0])
# -
nu.lopf()
nu.generators_t.p
nu.generators_t.status
nu.generators.loc["coal"]
# ## Rolling horizon example
#
# This example solves the unit commitment problem sequentially in overlapping batches of snapshots
# +
sets_of_snapshots = 6
p_set = [4000, 5000, 700, 800, 4000]
nu = pypsa.Network(snapshots=range(len(p_set) * sets_of_snapshots))
nu.add("Bus", "bus")
nu.add(
"Generator",
"coal",
bus="bus",
committable=True,
p_min_pu=0.3,
marginal_cost=20,
min_down_time=2,
min_up_time=3,
up_time_before=1,
ramp_limit_up=1,
ramp_limit_down=1,
ramp_limit_start_up=1,
ramp_limit_shut_down=1,
shut_down_cost=150,
start_up_cost=200,
p_nom=10000,
)
nu.add(
"Generator",
"gas",
bus="bus",
committable=True,
marginal_cost=70,
p_min_pu=0.1,
up_time_before=2,
min_up_time=3,
shut_down_cost=20,
start_up_cost=50,
p_nom=1000,
)
nu.add("Load", "load", bus="bus", p_set=p_set * sets_of_snapshots)
# -
overlap = 2
for i in range(sets_of_snapshots):
nu.lopf(nu.snapshots[i * len(p_set) : (i + 1) * len(p_set) + overlap], pyomo=False)
pd.concat(
{"Active": nu.generators_t.status.astype(bool), "Output": nu.generators_t.p}, axis=1
)
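# Each rolling-horizon solve covers one batch of snapshots plus `overlap` snapshots of the next batch, so decisions near a batch boundary already see a little of the future. A plain-Python sketch of the window boundaries used above:

```python
# 6 batches of 5 snapshots each, with a 2-snapshot overlap into the next batch.
set_size, sets, overlap, total = 5, 6, 2, 30

windows = []
for i in range(sets):
    start = i * set_size
    stop = min(total, (i + 1) * set_size + overlap)
    windows.append((start, stop))
print(windows)
```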
| examples/notebooks/unit-commitment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from Crawler.LogIn import *
import pandas as pd
import numpy as np
from datetime import datetime
from datetime import date
from datetime import timedelta
from selenium import webdriver
from selenium.webdriver import ChromeOptions
from selenium.webdriver.common.keys import Keys
# +
# Define user agent
user_agent = "user-agent=Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36"
# Define chromedriver path
executable_path = '/chromedriver'
browser, chrome_options = createBrowser(executable_path, user_agent = user_agent)
url = 'https://www.hkexnews.hk/sdw/search/searchsdw_c.aspx'
# Browse url
browser = getURL(browser, url)
# -
def calendar_click(to_search, class_name):
try:
calendar_element = browser.find_element_by_class_name(class_name)
for calendar in calendar_element.find_elements_by_tag_name('li'):
if calendar.text.strip() == to_search:
calendar.click()
except Exception as e:
print(e)
# +
# Retrieve a list of HK stock
# Define the year, month and day to search
# date_to_search = date.today() - timedelta(1)
date_to_search = date(2019, 12, 11)
year_to_search = str(date_to_search.year)
month_to_search = str(date_to_search.month)
day_to_search = str(date_to_search.day)
def enter_search_kw(date_to_search, stock_code):
# Select the correct year, month and day for the given date
calendar_click(str(date_to_search.year), 'year')
calendar_click(str(date_to_search.month), 'month')
calendar_click(str(date_to_search.day), 'day')
time.sleep(1)
# Set Stock Code
browser.find_element_by_name('txtStockCode').send_keys(stock_code)
# Click Search
browser.find_element_by_id('btnSearch').click()
# Break time
time.sleep(1)
# -
# Scrape the "market intermediaries / investor account holders willing to disclose" records (市場中介者/願意披露的投資者戶口持有人的紀錄) on the page
def Scraper(browser, stock_code, date_to_search):
# Get search result table
search_table = browser.find_element_by_class_name('search-details-table-container').find_element_by_tag_name('table')
# Get column_name
column_name = [col.text for col in search_table.find_elements_by_tag_name('th')]
# Get Search body
search_body = search_table.find_element_by_tag_name('tbody')
# Get Search result data
data = np.array([[cell.text for cell in row.find_elements_by_class_name('mobile-list-body')] for row in search_body.find_elements_by_tag_name('tr')])
# Convert data to DataFrame with column
df = pd.DataFrame(data, columns = column_name)
# Add search stock data and date_to_search
df['stock_code'] = stock_code
df['report_date'] = date_to_search
return df
pd.DataFrame(data, columns=column_name)
search_body.find_elements_by_tag_name('tr')[0].find_elements_by_class_name('mobile-list-body')
import pandas as pd
df = pd.read_excel('reference/ListOfSecurities.xlsx')
| StockScraper/.ipynb_checkpoints/Scraper-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# language: python
# name: python3
# ---
# +
import pandas as pd
from datetime import datetime
rawpath = '../raw'
# -
df = pd.read_excel(f'{rawpath}/Plakate_1933-45_ImportCBS.xlsx', names=['laendercode', 'entstehungszeit','titel','entstehungsort','konvolut4105','konvolut6710','signatur'])
df
def pica_schreiben(row):
pica = f"""\t\n0500 Pa
0501 Text$btxt
0502 ohne Hilfsmittel zu benutzen$bn
0503 Blatt$bnb
0600 yy\n"""
if len(str(row.entstehungszeit)) > 0 and len(str(row.entstehungszeit)) < 5:
# only a single year given
pica += f"1100 {row.entstehungszeit}\n1110 *{row.entstehungszeit}$4ezth\n"
elif len(str(row.entstehungszeit)) > 4 and len(str(row.entstehungszeit)) < 10:
pica += f"1100 {row.entstehungszeit[:4]}$n[zwischen {row.entstehungszeit[:4]} und {row.entstehungszeit[5:]}]\n1110 zwischen {row.entstehungszeit[:4]} und {row.entstehungszeit[5:]}*{row.entstehungszeit}$a{row.entstehungszeit[:4]}$b{row.entstehungszeit[5:]}$4ezth\n"
# presumably two dates (a year range)
else:
pica += "\n\n1100 FEHLER\n\n"
pica += f"""1130 TB-papier
1130 !040445224!
1131 !04046198X!
1132 a1-analog;a2-druck;f2-blatt
1500 /1und
1700 {row.laendercode}
2105 04,P01
4000 Plakat Nr. {row.titel} der Plakatsammlung 1933-1945
4019 Plakat Nr. {row.titel} der Plakatsammlung 1933-1945\n"""
if pd.isna(row.entstehungsort):
pica += "4030 [Ort nicht ermittelbar] : [Verlag nicht ermittelbar]\n"
else:
pica += f"4030 [{row.entstehungsort.strip()}] : [Verlag nicht ermittelbar]\n"
pica += f"""4060 1 ungezähltes Blatt
4105 !118043062X!
4105 {row.konvolut4105}
4700 |BSM|
7001 {now.strftime('%d-%m-%y')} : x
6710 {row.konvolut6710.strip()}
6800 [Provenienz]
6800 !004709608! *1961-1993
6800 Stempel
7100 GS PL WKII @ m
7109 !!DBSM/S!! ; GS PL {row.signatur}
8034 Dieses Plakat ist Teil eines Erschließungsprojekts und derzeit nicht ausleihbar.\n"""
return pica
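# The length test on `entstehungszeit` distinguishes a single year ('1938', 4 characters) from a year range ('1933-1945', 9 characters). A sketch of that branching with a hypothetical `date_kind()` helper:

```python
def date_kind(entstehungszeit):
    """Classify the raw date field the way pica_schreiben() branches on it."""
    s = str(entstehungszeit)
    if 0 < len(s) < 5:
        return 'single'   # e.g. '1938'
    if 4 < len(s) < 10:
        return 'range'    # e.g. '1933-1945'
    return 'error'        # anything else lands in the FEHLER branch

print(date_kind('1938'), date_kind('1933-1945'), date_kind(''))
```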
# +
now = datetime.now()
# testfile
with open(f"../dat/wkII-{now.strftime('%y-%m-%d-%H-%M-%S')}.dat","w") as f:
with open("../dat/wkII-recent.dat","w") as f2:
for index, row in df.iterrows():
f.write(pica_schreiben(row))
f2.write(pica_schreiben(row))
with open(f"../dat/wkII-recent-sample.dat","w") as f:
for index, row in df.sample(5).iterrows():
f.write(pica_schreiben(row))
# -
now = datetime.now()
for durchgang in range((len(df) // 650) + 1):
with open(f"../dat/wkII-final-{durchgang}.dat", 'w') as f:
for row in df[durchgang * 650:(durchgang + 1) * 650].itertuples():
f.write(pica_schreiben(row))
| scripte/import.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: scrapy
# language: python
# name: scrapy
# ---
# +
# https://www.konga.com/category/wines-2004?page=2
# class="bbe45_3oExY _22339_3gQb9" container
# li class="bbe45_3oExY _22339_3gQb9">
# <img data-expand="100" data-src=" image
# <span class="d7c0f_sJAqi"> price
# <h3> name of wine
# class="f5e10_VzEXF _59c59_3-MyH lazyloaded "
# +
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from bs4 import BeautifulSoup as soup
from time import sleep
dic = []
chrome_options = webdriver.ChromeOptions()
page = 0
def getsite(url):
driver = webdriver.Chrome(r"C:\Bin\chromedriver")  # raw string avoids backslash escapes
driver.get(url)
sleep(20)
html = driver.page_source
page_soup = soup(html, "lxml")
containers = page_soup.findAll('li',{'class':'bbe45_3oExY _22339_3gQb9'})
num = 0
for i in containers:
name = i.find('h3')
je= str(i.find('img'))
j = je.rfind('src')
price = i.find("span",{"class":"d7c0f_sJAqi"})
if name is not None:
dic.append([name.text.strip('\n'),je[j+4:].strip('/>'),price.text.strip('\n')])
#print()
#print(bedetc.text.split())
#num = num + 1
driver.close()
urll = 'https://www.konga.com/category/wines-2004'
for ur in range(1,5):
url = urll + '?page=' + str(ur)
getsite(url)
print('end of page',page)
page = page + 1
print(dic)
# -
import pandas as pd
konga_df = pd.DataFrame(dic)
konga_df.head()
konga_df.columns = ['Name','Image','Price']
konga_df.head()
konga_df.to_csv("Konga.csv", index=False)
| Konga-notebooks/Konga_Main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# !pip install numpy
# !pip install pandas
# !pip install matplotlib
# !pip install networkx
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
import modules1 as md
# %matplotlib inline
# -
# IMPORTING DATA WITH A COMMA AS THE SEPARATOR
data = pd.read_csv(r'D:\dataset.csv', sep=',')
print(data)
# CREATING GRAPH USING NETWORKX
facebook = nx.from_pandas_edgelist(data, 'user', 'friendto', create_using=nx.Graph)
# DENSITY
print(nx.density(facebook))
# NODES OF GRAPH
print(facebook.nodes)
# EDGES OF GRAPH
print(facebook.edges)
# DEGREE OF EVERY NODE
sorted((d, n) for n, d in facebook.degree())
# DEGREE OF NODE '1'
print(facebook.degree[1])
# DEGREE CENTRALITY OF EVERY NODE
in_dc = nx.degree_centrality(facebook)
in_dc_sorted = {k: v for k, v in sorted(
in_dc.items(),
key=lambda item: item[1],
reverse=True
)}
l=in_dc_sorted.items()
l2=list(l)[:10]
for k,v in l2:
print (f'{k} -> {v:.3f}')
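# Degree centrality is simply a node's degree divided by n - 1, the maximum possible degree. A plain-Python sketch on a made-up 4-node friendship graph (no networkx needed):

```python
# Hypothetical 4-node friendship graph as an adjacency dict.
adj = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}}

n = len(adj)
dc = {node: len(neigh) / (n - 1) for node, neigh in adj.items()}

# Sort by centrality, highest first, like the snippet above.
for node, c in sorted(dc.items(), key=lambda item: item[1], reverse=True):
    print(f'{node} -> {c:.3f}')
```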
# CLOSENESS CENTRALITY OF EVERY NODE
# +
l5 = nx.closeness_centrality(facebook).items()
l6=list(l5)[:10]
for k,v in l6:
print (f'{k} -> {v:.3f}')
# -
# CLOSENESS CENTRALITY OF EVERY NODE (SORTED)
# +
closeness = nx.closeness_centrality(facebook)
c_sorted = {k: v for k, v in sorted(
closeness.items(),
key=lambda item: item[1],
reverse=True
)}
l7=c_sorted.items()
l8=list(l7)[:10]
for k,v in l8:
print (f'{k} -> {v:.3f}')
# -
# BETWEENNESS CENTRALITY (SORTED)
# +
between = nx.betweenness_centrality(facebook)
b_sorted = {k: v for k, v in sorted(
between.items(),
key=lambda item: item[1],
reverse=True
)}
l11 = b_sorted.items()
l12 =list(l11)[:10]
for k,v in l12:
print (f'{k} -> {v:.3f}')
# -
# GATEKEEPERS
# +
gK = sorted(nx.betweenness_centrality(facebook).items(), key = lambda x : x[1], reverse=True)[:10]
for i in gK:
print(i)
# -
# CHECKING WHETHER THE GRAPH IS CONNECTED OR NOT (AFTER CONVERTING FROM DIRECTED TO UNDIRECTED)
# +
print('The graph {} connected.'.format('is' if nx.is_connected(facebook) else 'is not'))
print('Number of connected components:', nx.number_connected_components(facebook))
# -
# TOTAL NUMBER OF NODES IN LARGEST CONNECTED COMPONENT
largest = max(nx.connected_components(facebook), key=len)
print(len(largest))
# FINDING DENSITY OF THE LARGEST CONNECTED SUBGRAPH
large = facebook.subgraph(largest)
print(nx.density(large))
# DIAMETER OF THE GRAPH'S CONNECTED COMPONENTS
# +
cc = list(nx.connected_components(facebook))
cc1 = facebook.subgraph(cc[0])
print("Component 1:", list(cc1.nodes))
diameter_1 = nx.diameter(cc1)
apl_1 = nx.average_shortest_path_length(cc1)
print("diameter:", diameter_1)
print("average path length: {:.2f}".format(apl_1))
print('')
# -
# TOTAL NUMBER OF CLIQUES IN NETWORK
# +
from networkx.algorithms import approximation, clique
n_of_cliques = clique.graph_number_of_cliques(facebook)
print("Number of cliques in the network:", n_of_cliques)
# -
# ALL CLIQUES IN NETWORK
cliques = clique.find_cliques(facebook)
for i in cliques:
print(i)
# LARGEST CLIQUE IN NETWORK
largest = approximation.clique.max_clique(facebook)
print( largest)
# SIZE OF LARGEST CLIQUE IN THE NETWORK
# +
largestcliquesize = clique.graph_clique_number(facebook)
print( largestcliquesize)
# -
# EGO NETWORK
ego_network = nx.ego_graph(facebook, 3)
nx.draw(ego_network)
# +
closeness = nx.closeness_centrality(facebook)
cores = sorted(closeness.items(), key = lambda x : x[1], reverse=True)
ego = cores[0][0]
gk_ego = nx.ego_graph(facebook, ego)
pos = nx.spring_layout(gk_ego)
nx.draw(gk_ego, pos, node_color='black', node_size=100, with_labels=False)
# Draw ego as large and red
nx.draw_networkx_nodes(gk_ego, pos, nodelist=[ego], node_size=100, node_color='purple')
plt.show()
# -
md.graph(facebook)
| project (1).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1 align="center">Introduction to SimpleITKv4 Registration - Continued</h1>
#
#
# ## ITK v4 Registration Components
# <img src="ITKv4RegistrationComponentsDiagram.svg" style="width:700px"/><br><br>
#
#
# Before starting with this notebook, please go over the first introductory notebook found [here](60_Registration_Introduction.ipynb).
#
# In this notebook we will visually assess registration by viewing the overlap between images using external viewers.
# The two viewers we recommend for this task are [ITK-SNAP](http://www.itksnap.org) and [3D Slicer](http://www.slicer.org/). ITK-SNAP supports concurrent linked viewing between multiple instances of the program. 3D Slicer supports concurrent viewing of multiple volumes via alpha blending.
# +
import SimpleITK as sitk
# If the environment variable SIMPLE_ITK_MEMORY_CONSTRAINED_ENVIRONMENT is set, this will override the ReadImage
# function so that it also resamples the image to a smaller size (testing environment is memory constrained).
# %run setup_for_testing
# Utility method that either downloads data from the network or
# if already downloaded returns the file name for reading from disk (cached data).
# %run update_path_to_download_script
from downloaddata import fetch_data as fdata
# Always write output to a separate directory, we don't want to pollute the source directory.
import os
OUTPUT_DIR = 'Output'
# GUI components (sliders, dropdown...).
from ipywidgets import interact, fixed
# Enable display of HTML.
from IPython.display import display, HTML
# Plots will be inlined.
# %matplotlib inline
# Callbacks for plotting registration progress.
import registration_callbacks
# -
# ## Utility functions
# A number of utility functions: one that saves a transform and the corresponding resampled image, and a callback for
# selecting a DICOM series from among several series found in the same directory.
# +
def save_transform_and_image(transform, fixed_image, moving_image, outputfile_prefix):
"""
    Write the given transformation to file, resample the moving_image onto the fixed_image's grid,
    and save the result to file.
Args:
transform (SimpleITK Transform): transform that maps points from the fixed image coordinate system to the moving.
fixed_image (SimpleITK Image): resample onto the spatial grid defined by this image.
moving_image (SimpleITK Image): resample this image.
outputfile_prefix (string): transform is written to outputfile_prefix.tfm and resampled image is written to
outputfile_prefix.mha.
"""
resample = sitk.ResampleImageFilter()
resample.SetReferenceImage(fixed_image)
# SimpleITK supports several interpolation options, we go with the simplest that gives reasonable results.
resample.SetInterpolator(sitk.sitkLinear)
resample.SetTransform(transform)
sitk.WriteImage(resample.Execute(moving_image), outputfile_prefix+'.mha')
sitk.WriteTransform(transform, outputfile_prefix+'.tfm')
def DICOM_series_dropdown_callback(fixed_image, moving_image, series_dictionary):
"""
    Callback from the dropdown which selects the two series that will be used for registration.
The callback prints out some information about each of the series from the meta-data dictionary.
For a list of all meta-dictionary tags and their human readable names see DICOM standard part 6,
Data Dictionary (http://medical.nema.org/medical/dicom/current/output/pdf/part06.pdf)
"""
# The callback will update these global variables with the user selection.
global selected_series_fixed
global selected_series_moving
img_fixed = sitk.ReadImage(series_dictionary[fixed_image][0])
img_moving = sitk.ReadImage(series_dictionary[moving_image][0])
# There are many interesting tags in the DICOM data dictionary, display a selected few.
tags_to_print = {'0010|0010': 'Patient name: ',
'0008|0060' : 'Modality: ',
'0008|0021' : 'Series date: ',
'0008|0031' : 'Series time:',
'0008|0070' : 'Manufacturer: '}
html_table = []
html_table.append('<table><tr><td><b>Tag</b></td><td><b>Fixed Image</b></td><td><b>Moving Image</b></td></tr>')
for tag in tags_to_print:
fixed_tag = ''
moving_tag = ''
try:
fixed_tag = img_fixed.GetMetaData(tag)
except: # ignore if the tag isn't in the dictionary
pass
try:
moving_tag = img_moving.GetMetaData(tag)
except: # ignore if the tag isn't in the dictionary
pass
html_table.append('<tr><td>' + tags_to_print[tag] +
'</td><td>' + fixed_tag +
'</td><td>' + moving_tag + '</td></tr>')
html_table.append('</table>')
display(HTML(''.join(html_table)))
selected_series_fixed = fixed_image
selected_series_moving = moving_image
# -
# ## Loading Data
#
# In this notebook we will work with CT and MR scans of the CIRS 057A multi-modality abdominal phantom. The scans are multi-slice DICOM images. The data is stored in a zip archive which is automatically retrieved and extracted when we request a file which is part of the archive.
# +
data_directory = os.path.dirname(fdata("CIRS057A_MR_CT_DICOM/readme.txt"))
# 'selected_series_moving/fixed' will be updated by the interact function.
selected_series_fixed = ''
selected_series_moving = ''
# Directory contains multiple DICOM studies/series, store the file names
# in dictionary with the key being the series ID.
reader = sitk.ImageSeriesReader()
series_file_names = {}
series_IDs = list(reader.GetGDCMSeriesIDs(data_directory)) #list of all series
if series_IDs: #check that we have at least one series
for series in series_IDs:
series_file_names[series] = reader.GetGDCMSeriesFileNames(data_directory, series)
interact(DICOM_series_dropdown_callback, fixed_image=series_IDs, moving_image =series_IDs, series_dictionary=fixed(series_file_names));
else:
print('This is surprising, data directory does not contain any DICOM series.')
# +
# Actually read the data based on the user's selection.
fixed_image = sitk.ReadImage(series_file_names[selected_series_fixed])
moving_image = sitk.ReadImage(series_file_names[selected_series_moving])
# Save images to file and view overlap using external viewer.
sitk.WriteImage(fixed_image, os.path.join(OUTPUT_DIR, "fixedImage.mha"))
sitk.WriteImage(moving_image, os.path.join(OUTPUT_DIR, "preAlignment.mha"))
# -
# ## Initial Alignment
#
# A reasonable guesstimate for the initial translational alignment can be obtained by using
# the CenteredTransformInitializer (functional interface to the CenteredTransformInitializerFilter).
#
# The resulting transformation is centered with respect to the fixed image and the
# translation aligns the centers of the two images. There are two options for
# defining the centers of the images, either the physical centers
# of the two data sets (GEOMETRY), or the centers defined by the intensity
# moments (MOMENTS).
#
# Two things to note about this filter: it requires that the fixed and moving images
# have the same pixel type, even though this is not algorithmically required, and its
# return type is the generic SimpleITK.Transform.
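# The GEOMETRY/MOMENTS distinction can be sketched with a toy 1-D "image" in plain
# numpy (a stand-in illustration, not SimpleITK calls): the geometric centre depends
# only on the grid, while the moments centre is the intensity-weighted centroid.

```python
import numpy as np

# Toy 1-D image: 10 pixels with unit spacing, all intensity near the right edge.
img = np.zeros(10)
img[7:10] = 1.0
coords = np.arange(len(img))  # physical positions of the pixels

geometry_center = (coords[0] + coords[-1]) / 2.0   # centre of the grid
moments_center = (coords * img).sum() / img.sum()  # intensity centroid

print(geometry_center)  # 4.5
print(moments_center)   # 8.0
```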
# +
initial_transform = sitk.CenteredTransformInitializer(sitk.Cast(fixed_image,moving_image.GetPixelID()),
moving_image,
sitk.Euler3DTransform(),
sitk.CenteredTransformInitializerFilter.GEOMETRY)
# Save moving image after initial transform and view overlap using external viewer.
save_transform_and_image(initial_transform, fixed_image, moving_image, os.path.join(OUTPUT_DIR, "initialAlignment"))
# -
# Look at the transformation, what type is it?
print(initial_transform)
# ## Final registration
# ### Version 1
# <ul>
# <li> Single scale (not using image pyramid).</li>
# <li> Initial transformation is not modified in place.</li>
# </ul>
#
# <ol>
# <li>
# Illustrate the need for scaling the step size differently for each parameter:
# <ul>
# <li> SetOptimizerScalesFromIndexShift - estimated from maximum shift of voxel indexes (only use if data is isotropic).</li>
# <li> SetOptimizerScalesFromPhysicalShift - estimated from maximum shift of physical locations of voxels.</li>
# <li> SetOptimizerScalesFromJacobian - estimated from the averaged squared norm of the Jacobian w.r.t. parameters.</li>
# </ul>
# </li>
# <li>
# Look at the optimizer's stopping condition to ensure we have not terminated prematurely.
# </li>
# </ol>
# +
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
# Scale the step size differently for each parameter, this is critical!!!
registration_method.SetOptimizerScalesFromPhysicalShift()
registration_method.SetInitialTransform(initial_transform, inPlace=False)
registration_method.AddCommand(sitk.sitkStartEvent, registration_callbacks.metric_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, registration_callbacks.metric_end_plot)
registration_method.AddCommand(sitk.sitkIterationEvent,
lambda: registration_callbacks.metric_plot_values(registration_method))
final_transform_v1 = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
# +
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
# Save moving image after registration and view overlap using external viewer.
save_transform_and_image(final_transform_v1, fixed_image, moving_image, os.path.join(OUTPUT_DIR, "finalAlignment-v1"))
# -
# Look at the final transformation, what type is it?
print(final_transform_v1)
# ### Version 1.1
#
# The previous example illustrated the use of the ITK v4 registration framework in an ITK v3 manner: we referred to only a single transformation, which was the one being optimized.
#
# In ITK v4 the registration method accepts three transformations (if you look at the diagram above you will only see two transformations; the moving transform there represents $T_{opt} \circ T_m$):
# <ul>
# <li>
# SetInitialTransform, $T_{opt}$ - composed with the moving initial transform, maps points from the virtual image domain to the moving image domain, modified during optimization.
# </li>
# <li>
# SetFixedInitialTransform $T_f$- maps points from the virtual image domain to the fixed image domain, never modified.
# </li>
# <li>
# SetMovingInitialTransform $T_m$- maps points from the virtual image domain to the moving image domain, never modified.
# </li>
# </ul>
#
# The transformation that maps points from the fixed to moving image domains is thus: $^M\mathbf{p} = T_{opt}(T_m(T_f^{-1}(^F\mathbf{p})))$
#
# We now modify the previous example to use $T_{opt}$ and $T_m$.
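# The composition order can be illustrated with plain 2-D affine maps in numpy (a
# stand-in sketch, not SimpleITK code): with no fixed initial transform, a point is
# mapped first by $T_m$ and then by $T_{opt}$.

```python
import numpy as np

def make_affine(A, t):
    """Return a map p -> A @ p + t."""
    return lambda p: A @ p + t

# T_m: the moving initial transform (here a translation, as an initializer might give).
T_m = make_affine(np.eye(2), np.array([5.0, 0.0]))

# T_opt: the optimized transform (here a 90-degree rotation).
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
T_opt = make_affine(R, np.zeros(2))

p = np.array([1.0, 2.0])
q = T_opt(T_m(p))  # apply T_m first, then T_opt
print(q)  # [-2.  6.]
```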
# +
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
registration_method.SetOptimizerScalesFromPhysicalShift()
# Set the initial moving and optimized transforms.
optimized_transform = sitk.Euler3DTransform()
registration_method.SetMovingInitialTransform(initial_transform)
registration_method.SetInitialTransform(optimized_transform)
registration_method.AddCommand(sitk.sitkStartEvent, registration_callbacks.metric_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, registration_callbacks.metric_end_plot)
registration_method.AddCommand(sitk.sitkIterationEvent,
lambda: registration_callbacks.metric_plot_values(registration_method))
registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
# Need to compose the transformations after registration.
final_transform_v11 = sitk.CompositeTransform(optimized_transform)
final_transform_v11.AddTransform(initial_transform)
# +
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
# Save moving image after registration and view overlap using external viewer.
save_transform_and_image(final_transform_v11, fixed_image, moving_image, os.path.join(OUTPUT_DIR, "finalAlignment-v1.1"))
# -
# Look at the final transformation, what type is it? Why is it different from the previous example?
print(final_transform_v11)
# ### Version 2
#
# <ul>
# <li> Multi-scale: specify both the shrink factor per level and how much to smooth with respect to the original image.</li>
# <li> The initial transformation is modified in place, so in the end we have the same type of transformation in hand.</li>
# </ul>
# +
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100) #, estimateLearningRate=registration_method.EachIteration)
registration_method.SetOptimizerScalesFromPhysicalShift()
final_transform = sitk.Euler3DTransform(initial_transform)
registration_method.SetInitialTransform(final_transform)
registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas = [2,1,0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
registration_method.AddCommand(sitk.sitkStartEvent, registration_callbacks.metric_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, registration_callbacks.metric_end_plot)
registration_method.AddCommand(sitk.sitkMultiResolutionIterationEvent,
registration_callbacks.metric_update_multires_iterations)
registration_method.AddCommand(sitk.sitkIterationEvent,
lambda: registration_callbacks.metric_plot_values(registration_method))
registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
# +
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
# Save moving image after registration and view overlap using external viewer.
save_transform_and_image(final_transform, fixed_image, moving_image, os.path.join(OUTPUT_DIR, 'finalAlignment-v2'))
# -
# Look at the final transformation, what type is it?
print(final_transform)
| Python/61_Registration_Introduction_Continued.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### **CONTENTS**
# >1. [01 - Required Packages](#pacotes)
# >2. [02 - Data Loading](#carga-dados)
# >3. [03 - About the Model](#info)
# >4. [04 - Training and Test Datasets](#datasets)
# >5. [05 - Model Implementation](#impModelo)
# >>5.1 [05.1 - SIGMOID Function](#sigmoid)<br>
# >>5.2 [05.2 - Function to Initialize the Parameters and Bias](#inicializar)<br>
# ***
# <font size="6"><a id="pacotes">01 - Required Packages</a></font>
# ***
import types
import pandas as pd
from botocore.client import Config
import ibm_boto3
from sklearn.preprocessing import MinMaxScaler
cos_credentials={
"apikey": "<KEY>",
"endpoints": "https://cos-service.bluemix.net/endpoints",
"iam_apikey_description": "Auto generated apikey during resource-key operation for Instance - crn:v1:bluemix:public:cloud-object-storage:global:a/f819585dc4c43dfca0a953a8fc2635bb:a5bc02d4-816f-4d0b-8830-bd98421cb97f::",
"iam_apikey_name": "auto-generated-apikey-b4c9dfbc-d2cd-429a-b3c5-4ad08f0735fa",
"iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Manager",
"iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::a/f819585dc4c43dfca0a953a8fc2635bb::serviceid:ServiceId-2e1053db-c54b-46d5-821c-85b692c0ac0e",
"resource_instance_id": "crn:v1:bluemix:public:cloud-object-storage:global:a/f819585dc4c43dfca0a953a8fc2635bb:a5bc02d4-816f-4d0b-8830-bd98421cb97f::"
}
auth_endpoint = 'https://iam.bluemix.net/oidc/token'
service_endpoint = 'https://s3.us-south.cloud-object-storage.appdomain.cloud'
cosFiles = ibm_boto3.client('s3',
ibm_api_key_id=cos_credentials['apikey'],
ibm_service_instance_id=cos_credentials['resource_instance_id'],
ibm_auth_endpoint=auth_endpoint,
config=Config(signature_version='oauth'),
endpoint_url=service_endpoint)
# ***
# <font size="6"><a id="carga-dados">02 - Functions</a></font>
# ***
# <div class="alert alert-block alert-success">
# <b>Description: </b> Section for uploading files to the "04work-crm" COS bucket.
# </div>
def SaveAndUploadFiles(fileName: str, dfToSave: pd.DataFrame):
    print('*'*50)
    print('* Saving file {} locally *'.format(fileName))
    print('*'*50)
    dfToSave.to_csv(path_or_buf=fileName, index=False)
    print('*'*50)
    print('* Uploading file {} to COS *'.format(fileName))
    print('*'*50)
    cosFiles.upload_file(Filename=fileName, Bucket='04work-crm', Key=fileName)
    print('*'*50)
    print('* Upload of file {} to COS complete *'.format(fileName))
    print('*'*50)
# ***
# <font size="6"><a id="carga-dados">02 - Data Loading</a></font>
# ***
# <div class="alert alert-block alert-success">
# <b>Description: </b> The following code accesses a file in IBM Cloud Object Storage.<br> <font color="red"><b>Important:</b></font> If this notebook is shared, remove the lines marked <b>"credenciais"</b>.
# </div>
# +
# Load the training data "base_refin_meio_a_meio_V2.csv"
def __iter__(self): return 0
client_a5bc02d4816f4d0b8830bd98421cb97f = ibm_boto3.client(service_name='s3',
    ibm_api_key_id='<KEY>', # credentials removed for security reasons
    ibm_auth_endpoint="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", # credentials removed for security reasons
config=Config(signature_version='oauth'),
endpoint_url='https://s3-api.us-geo.objectstorage.service.networklayer.com')
body = client_a5bc02d4816f4d0b8830bd98421cb97f.get_object(Bucket='crmmesadeperformanceii-donotdelete-pr-aa5r1ubmewwiyn',Key='base_refin_meio_a_meio_V2.csv')['Body']
# add missing __iter__ method, so pandas accepts body as file-like object
if not hasattr(body, "__iter__"): body.__iter__ = types.MethodType( __iter__, body )
df_data = pd.read_csv(body)
df_data.head()
# -
df_data.dtypes
#
# ***
# <font size="6"><a id="info">03 - Data Preparation</a></font>
# ***
# ***
# <font size="4"><a id="info"> 03.1 - One-Hot Encoding</a></font>
# ***
# <div class="alert alert-block alert-success">
# <b>Description: </b> The "dummy" technique converts categorical values into one-hot numeric vectors: each resulting vector has exactly one element equal to 1 and all others equal to 0.
# </div>
arOneHotEncoding = ['STATUS','CANAL','GRPCONV']
dfArquivoTreinamento = pd.get_dummies(df_data,columns=arOneHotEncoding)
dfArquivoTreinamento.head()
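# A minimal `pd.get_dummies` illustration on toy data (hypothetical column values, not
# taken from the real dataset): each categorical value becomes its own 0/1 column, and
# exactly one indicator is set per row.

```python
import pandas as pd

toy = pd.DataFrame({'CANAL': ['web', 'loja', 'web']})
encoded = pd.get_dummies(toy, columns=['CANAL'])
print(list(encoded.columns))  # ['CANAL_loja', 'CANAL_web']

# Exactly one indicator per row is set.
print((encoded.sum(axis=1) == 1).all())  # True
```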
# ***
# <font size="6"><a id="info">03.2 - Normalization</a></font>
# ***
# <div class="alert alert-block alert-success">
# <b>Description:</b> Each variable is rescaled to values between 0 and 1.
# </div>
# +
arNormalize = ['TAXA','QUANT_PRESTACAO','VALOR_FINANCIADO','VALOR_PRESTACAO','DURACAO_CONTRATO']
scaler = MinMaxScaler(feature_range=(0,1))
for eachCategory in arNormalize:
dfArquivoTreinamento[eachCategory] = scaler.fit_transform(dfArquivoTreinamento[eachCategory].values.reshape(-1, 1))
# -
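# Min-max scaling follows x' = (x - min) / (max - min); a quick self-contained numpy
# check on toy values (not the real columns):

```python
import numpy as np

x = np.array([10.0, 15.0, 20.0])
scaled = (x - x.min()) / (x.max() - x.min())
# scaled == [0.0, 0.5, 1.0]: the minimum maps to 0 and the maximum to 1.
print(scaled)
```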
dfArquivoTreinamento.head()
# ***
# <font size="6"><a id="info">03.3 - Remove Unneeded Columns</a></font>
# ***
# <div class="alert alert-block alert-success">
# <b>Description: </b> The "contract number" and "cpf" columns are dropped, since they could bias the model.
# </div>
arColunsToRemove = ['NRO_CONTRATO','CNPJ_CPF']
dfArquivoTreinamento.drop(columns=arColunsToRemove, inplace=True,axis=1)
dfArquivoTreinamento.head()
# ***
# <font size="6"><a id="info">05 - Save the Training File</a></font>
# ***
# <div class="alert alert-block alert-success">
# <b>Description: </b>
# Section that renames the target column and writes the .csv file.
# </div>
dfArquivoTreinamento.rename(columns={'FLAG_REFIN':'y'},inplace=True)
SaveAndUploadFiles('ArquivoTreinamento.csv',dfArquivoTreinamento)
dfArquivoTreinamento.drop(columns='y', inplace=True)
dfcolunas=pd.DataFrame(data=dfArquivoTreinamento.columns,columns=['colunas'])
SaveAndUploadFiles('colunas.csv',dfcolunas)
| ML - 01 - Arquivo Treinamento REFIN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_tensorflow_p27)
# language: python
# name: conda_tensorflow_p27
# ---
# %pylab inline
# %matplotlib inline
import scipy.io as sio
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf
from __future__ import division
import random
import scipy
import h5py
import hdf5storage
## random seed will dictate the random initialization
sd=30000
np.random.seed(sd)
# +
maxker=28
traindatapath='/home/ubuntu/Notebooks/Circuit2_Training_Data.h5'
data = hdf5storage.loadmat(traindatapath)
x_train = data['x_train']
x_test = data['x_test']
y_train = data['y_train']
y_test = reshape(data['y_test'], [1, 2000])
gc_bias_init = data['gc_bias']
bipkernels = data['bipkernels']
bip_gc_syn_init = data['bip_gc_syn']
bip_am_syn_init = data['bip_am_syn']
am_gc_syn_init = data['am_gc_syn']
#sparsity params for weight matrix initializations
init_sparsity = 0.0
init_sparsity_bg = 0.01
bip_gc_syn_mask1 = np.random.rand(maxker*100, 1)
bip_gc_syn_mask1 = reshape(bip_gc_syn_mask1, [1, 10, 10, maxker])
bip_gc_syn_mask1 = bip_gc_syn_mask1 >(1.0 - init_sparsity_bg)
bip_gc_syn_init_full = np.zeros([1, 10, 10, maxker])
bip_gc_syn_mask_true = reshape(bip_gc_syn_init, [1, 10, 10, 3])>0.0
bip_gc_syn_init_full[:, :, :, 0:3] = bip_gc_syn_mask_true
bip_gc_syn_mask = np.maximum(bip_gc_syn_mask1, bip_gc_syn_init_full)
bip_gc_syn_init11 = tf.random_uniform([1, 10, 10, maxker], minval=0.1, maxval=0.2, dtype=tf.float32)
bip_gc_syn_init1=bip_gc_syn_init11
bip_am_syn_mask = np.zeros([10, 10, maxker, 5, 5])
for i in range(10):
for j in range(10):
for k in range(maxker):
bip_am_syn_mask[i, j, k, int(floor(i/2)), int(floor(j/2))] = 1.0
bip_am_syn_mask = bip_am_syn_mask.astype(float32)
bip_am_syn_inds = np.zeros([maxker*100, 6])
for i in range(10):
for j in range(10):
for k in range(maxker):
            bip_am_syn_inds[maxker*10*i + maxker*j + k] = [0, i, j, k, floor(i/2), floor(j/2)]
bip_am_syn_inds = bip_am_syn_inds.astype(int64)
bip_am_syn_init11 = abs(np.random.normal(0.0, (sqrt(2.0)/112.0), size=[maxker*100]))
bip_am_syn_init11=bip_am_syn_init11.astype(float32)
am_gc_syn_init1 = tf.random_uniform([1, 5, 5], minval=0.1, maxval=0.2, dtype=tf.float32)
print(shape(x_train))
print(shape(x_test))
print(shape(bip_gc_syn_init))
# -
print(shape(y_train))
print(shape(y_test))
plt.figure()
plt.plot(squeeze(y_train))
# +
def bias_var(shape, initial_val):
    # Biases are held fixed during training: return a constant, not a trainable variable.
    return tf.constant(initial_val)
def bip_conv2d(x, W):
padsize=10
paddedx=tf.pad(x, [[0, 0], [padsize, padsize], [padsize, padsize], [0, 0]], 'CONSTANT')
outconv=tf.nn.conv2d(paddedx, W, strides=[1, 10, 10, 1], padding='SAME') #250 for movingdot and noise
return outconv[:, 1:11, 1:11, :]
def synapse_var(shape, initial_val):
    # Synaptic weights are trainable variables.
    return tf.Variable(initial_val)
# +
## create layer 1 convolutional kernels (difference of gaussians)
def difference_of_gaussians(ctr_sigma, surr_sigma, ctr_strength, surr_strength, x, y):
center=0.4*(1/ctr_sigma)*exp(-0.5*square(sqrt(square(x)+square(y))/ctr_sigma))
surround=0.4*(1/surr_sigma)*exp(-0.5*square(sqrt(square(x)+square(y))/surr_sigma))
kernel = ctr_strength*center - surr_strength*surround
maxk = amax(abs(kernel)) #normalization factor
return kernel/maxk
x = np.linspace(-5, 5, 11)
y = np.linspace(-5, 5, 11)
xv, yv = np.meshgrid(x, y)
bipkernels = np.zeros([11, 11, 1, maxker])
kernel1 = difference_of_gaussians(3, 6, 13, 12.9, xv, yv)
kernel2 = difference_of_gaussians(5, 6, 18, 18, xv, yv)
kernel3 = difference_of_gaussians(2, 4, 20, 14, xv, yv)
kernel4 = difference_of_gaussians(3, 6, 13, 0, xv, yv)
kernel5 = difference_of_gaussians(4, 6, 13, 0, xv, yv)
kernel6 = difference_of_gaussians(2, 4, 20, 0, xv, yv)
kernel7 = difference_of_gaussians(3, 6, 13, 20, xv, yv)
kernel8 = difference_of_gaussians(5, 6, 18, 20, xv, yv)
kernel9 = difference_of_gaussians(2, 4, 20, 24, xv, yv)
kernel10 = difference_of_gaussians(5, 8, 13, 20, xv, yv)
kernel11 = difference_of_gaussians(2, 8, 15, 15, xv, yv)
kernel12 = difference_of_gaussians(3, 8, 20, 12, xv, yv)
kernel13 = difference_of_gaussians(5, 8, 20, 18, xv, yv)
kernel14 = difference_of_gaussians(2, 8, 13, 18, xv, yv)
# Assemble the kernel bank: each group of four kernels (two for the last group)
# is followed by its sign-inverted (OFF) counterparts, giving 28 kernels in total.
kernel_list = [kernel1, kernel2, kernel3, kernel4, kernel5, kernel6, kernel7,
               kernel8, kernel9, kernel10, kernel11, kernel12, kernel13, kernel14]
col = 0
for start in range(0, len(kernel_list), 4):
    group = kernel_list[start:start + 4]
    for k in group:
        bipkernels[:, :, 0, col] = k
        col += 1
    for k in group:
        bipkernels[:, :, 0, col] = -1.0 * k
        col += 1
# Plot the 14 base kernels on a shared color scale.
kernels_to_plot = [kernel1, kernel2, kernel3, kernel4, kernel5, kernel6, kernel7,
                   kernel8, kernel9, kernel10, kernel11, kernel12, kernel13, kernel14]
plt.figure()
for idx, kernel in enumerate(kernels_to_plot):
    plt.subplot(4, 4, idx + 1)
    plt.imshow(kernel, cmap=plt.get_cmap('RdBu'))
    plt.clim(-1.0, 1.0)
    plt.colorbar()
# +
sess=tf.Session()
sess.run(tf.global_variables_initializer())
# +
kernels = [3, 8, 16, 28]
lambdas = [0.0, 0.0, 0.0, 0.0]
datas = [60, 340, 700, 1400, 2800, 5500, 11000, 22000, 98000]
training_epochs = [6000, 4000, 4000, 3000, 3000, 2000, 2000, 1000, 500]
test_sizes = [2080, 2080, 2080, 2080, 2080, 2080, 2080, 2080, 2080, 2000]
learn_rate = 1e-3
learn_rate_late = 1e-4
for i_data in range(7):
for i_kernel in range(4):
if i_kernel > 1:
del stimulus_
del bipolar_cell_layer
del gc_activation
del gc_output
del bipolar_bias
del bipkernels1
no_train=datas[i_data]
epochs = training_epochs[i_data]
no_kernels = kernels[i_kernel]
lambda1 = lambdas[i_kernel]
bipkernels1 = bipkernels[:, :, :, 0:no_kernels]
bip_gc_syn_init = bip_gc_syn_init1[:, :, :, 0:no_kernels]
bip_am_syn_mask1 = bip_am_syn_mask[ :, :, 0:no_kernels, :, :]
no_test=test_sizes[i_data]
no_bipolars = 10
no_amacrines = 5
wheretosave = '/home/ubuntu/Notebooks/Circuit2_Trained_Network_data' + str(no_train) + '_kernel' + str(no_kernels) \
+ '_sd' + str(sd) + '_nol1reg.mat'
## initialize all variables
bip_bias_init_all = -1.0*np.ones([28])
bip_bias_init_all[0]=-2.0
bip_bias_init_all[1]=-3.0
bip_bias_init_all[3]=-15.0
bip_bias_init_all[8]=-25.0
bip_bias_init_all[9]=-10.0
bip_bias_init_all[4]=-2.0
bip_bias_init_all[5]=-3.0
bip_bias_init_all[7]=-15.0
bip_bias_init_all[12]=-25.0
bip_bias_init_all[13]=-10.0
bip_bias_init = bip_bias_init_all[0:no_kernels]
bip_bias_init = bip_bias_init.astype(float32)
bipolar_bias = bias_var([no_kernels], bip_bias_init)
am_bias_init = -5.0
am_bias = bias_var([1], am_bias_init)
gc_bias = bias_var([1], gc_bias_init)
bip_gc_syn_init=tf.random.normal([1, no_bipolars, no_bipolars, no_kernels], mean = 0.0, stddev = sqrt(2.0/(no_kernels*100)), dtype=tf.dtypes.float32, seed=sd)
bip_gc_syn = synapse_var([1, no_bipolars, no_bipolars, no_kernels], bip_gc_syn_init)
bip_am_syn_inds = np.zeros([no_kernels*100, 6])
for i in range(10):
for j in range(10):
for k in range(no_kernels):
bip_am_syn_inds[no_kernels*10*(i)+no_kernels*(j)+k]=[0, i, j, k, floor(i/2), floor(j/2)]
bip_am_syn_inds = bip_am_syn_inds.astype(int64)
bip_am_syn_init11 = abs(np.random.normal(0.0, (sqrt(2.0/no_kernels)), size=[no_kernels*100]))
bip_am_syn_init111=bip_am_syn_init11.astype(float32)
bip_am_syn_val = synapse_var([no_kernels*no_bipolars*no_bipolars], bip_am_syn_init111)
bip_am_syn1 = tf.sparse.SparseTensor(indices=bip_am_syn_inds, values=bip_am_syn_val, dense_shape=[1, no_bipolars, no_bipolars, no_kernels, no_amacrines, no_amacrines])
bip_am_syn = tf.sparse.to_dense(tf.sparse.reorder(bip_am_syn1))
am_gc_syn = synapse_var([1, no_amacrines, no_amacrines], am_gc_syn_init1)
stimulus_ = tf.placeholder("float32", name="stim_placeholder")
bipolar_cell_layer = tf.nn.relu(tf.nn.bias_add(bip_conv2d(stimulus_, bipkernels1), bipolar_bias))
biplyr = tf.reshape(bipolar_cell_layer, [-1, no_bipolars*no_bipolars*no_kernels, 1])
tilebip_am_syn=tf.tile(tf.transpose(tf.reshape(tf.abs(bip_am_syn), [1, no_bipolars*no_bipolars*no_kernels, no_amacrines*no_amacrines]), [0, 2, 1]), [1, 1, 1])
amacrine_activation = 3.0*tf.reshape(tf.linalg.matmul(tilebip_am_syn, biplyr), [-1,no_amacrines, no_amacrines])
amacrine_cell_layer = tf.nn.relu(tf.add(amacrine_activation, am_bias))
gc_activation = tf.multiply(tf.abs(bip_gc_syn), bipolar_cell_layer)
gc_activation_inhib = tf.multiply(tf.abs(am_gc_syn), amacrine_cell_layer)
gc_output = tf.add_n([tf.reduce_sum(gc_activation, [1, 2, 3]), -1.0*tf.reduce_sum(gc_activation_inhib, [1, 2])])
## training procedure
y_ = tf.placeholder("float32", name="output_spikes")
batchsize=20
loss = (tf.nn.l2_loss((tf.squeeze(gc_output) - tf.squeeze(y_)), name='loss'))
regularizer=tf.add_n([tf.reduce_sum(tf.abs(bip_am_syn)), tf.reduce_sum(tf.abs(bip_gc_syn)), \
0.0*tf.reduce_sum(tf.abs(am_gc_syn))])
objective=tf.add(loss, lambda1*regularizer)
bip_am_ygrad = tf.gradients(loss, [bip_am_syn])
bip_am_reggrad = tf.gradients(regularizer, [bip_am_syn])
am_gc_ygrad = tf.gradients(loss, [am_gc_syn])
am_gc_reggrad = tf.gradients(regularizer, [am_gc_syn])
bip_gc_ygrad = tf.gradients(loss, [bip_gc_syn])
bip_gc_reggrad = tf.gradients(regularizer, [bip_gc_syn])
algorithm_choice=2
lr_min = 1e-4
lr_max = 1e-5
max_step =500
lr_ = tf.placeholder("float32", name="learn_rate")
if algorithm_choice==1:
train_step = tf.train.GradientDescentOptimizer(lr_).minimize(objective)
elif algorithm_choice==2:
my_epsilon=1e-8
train_step = tf.train.AdamOptimizer(learning_rate=lr_, epsilon=my_epsilon).minimize(objective)
elif algorithm_choice==3:
momentum_par=0.9
train_step = tf.train.MomentumOptimizer(lr_, momentum_par).minimize(objective)
elif algorithm_choice==4:
train_step = tf.train.AdagradOptimizer(lr_).minimize(objective)
elif algorithm_choice==5:
train_step = tf.train.RMSPropOptimizer(lr_).minimize(objective)
sess.run(tf.global_variables_initializer())
bip_gc_syn_hist=tf.reshape(bip_gc_syn.eval(session=sess), [1, no_bipolars, no_bipolars, no_kernels])
bip_am_syn_hist=tf.reshape(bip_am_syn.eval(session=sess), [1, no_bipolars, no_bipolars, no_kernels, no_amacrines, no_amacrines])
am_gc_syn_hist=tf.reshape(am_gc_syn.eval(session=sess), [1, no_amacrines, no_amacrines])
train_loss_hist = ones([1])
test_loss_hist = ones([1])
bip_am_ygrad_hist=np.reshape(sess.run(bip_am_ygrad, feed_dict={stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}), [1, no_bipolars, no_bipolars, no_kernels, no_amacrines, no_amacrines])
bip_am_reggrad_hist=np.reshape(sess.run(bip_am_reggrad, feed_dict={stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}), [1, no_bipolars, no_bipolars, no_kernels, no_amacrines, no_amacrines])
am_gc_ygrad_hist=np.reshape(sess.run(am_gc_ygrad, feed_dict={stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}), [1, no_amacrines, no_amacrines])
am_gc_reggrad_hist=np.reshape(sess.run(am_gc_reggrad, feed_dict={stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}), [1, no_amacrines, no_amacrines])
bip_gc_ygrad_hist=np.reshape(sess.run(bip_gc_ygrad, feed_dict={stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}), [1, no_bipolars, no_bipolars, no_kernels])
bip_gc_reggrad_hist=np.reshape(sess.run(bip_gc_reggrad, feed_dict={stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}), [1, no_bipolars, no_bipolars, no_kernels])
train_fd={stimulus_: x_train[0:50, :, :, :]}
test_fd={stimulus_: x_test[0:50, :, :, :]}
train_output_hist=reshape(gc_output.eval(session=sess, feed_dict=train_fd), [1, 50])
test_output_hist=reshape(gc_output.eval(session=sess, feed_dict=test_fd), [1, 50])
step = 0
endflag = 0
fd = {stimulus_:x_train[0:100, :, :, :], y_:y_train[0, 0:100]}
train_loss_val = sess.run(loss, feed_dict = fd)
print(train_loss_val)
fd = {stimulus_:x_test[0:100, :, :, :], y_:y_test[0, 0:100]}
test_loss_val = sess.run(loss, feed_dict = fd)
print(test_loss_val)
train_loss_hist=train_loss_val*train_loss_hist
test_loss_hist=test_loss_val*test_loss_hist
endflag=0
step=0
while endflag == 0:
# learning rate schedule
learn_rate_sch = lr_min + 0.5*(lr_max - lr_min)*(1.0+np.cos(np.pi*(step%max_step/max_step)))
if step>=10*max_step:
learn_rate_sch = lr_min
inds = np.reshape(np.random.permutation(range(no_train)), [-1, batchsize])
for n in range(len(inds)):
fdd = {stimulus_: x_train[inds[n, :], :, :, :], y_: y_train[0, inds[n, :]], lr_: learn_rate_sch}
sess.run(train_step, feed_dict=fdd)
if (step % 100 ==0):
train_loss_val = sess.run(loss, feed_dict= {stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]})/100.0
test_loss_val = sess.run(loss, feed_dict= {stimulus_: x_test[0:100, :, :, :], y_: y_test[0, 0:100]})/100.0
print("step: %d loss = %9f" % (step, train_loss_val))
bip_gc_syn_hist=tf.concat( [bip_gc_syn_hist, tf.reshape(bip_gc_syn.eval(session=sess), [1, no_bipolars, no_bipolars, no_kernels])], 0, name='bip_gc_syn_concat')
bip_am_syn_hist=tf.concat( [bip_am_syn_hist, tf.reshape(bip_am_syn.eval(session=sess), [1, no_bipolars, no_bipolars, no_kernels, no_amacrines, no_amacrines])], 0, name='bip_am_syn_concat')
am_gc_syn_hist=tf.concat( [am_gc_syn_hist, tf.reshape(am_gc_syn.eval(session=sess), [1, no_amacrines, no_amacrines])], 0, name='am_gc_syn_concat')
bip_am_ygrad_hist=tf.concat( [bip_am_ygrad_hist, np.reshape(sess.run(bip_am_ygrad, feed_dict={stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}), [1, no_bipolars, no_bipolars, no_kernels, no_amacrines, no_amacrines])], 0)
bip_am_reggrad_hist=tf.concat( [bip_am_reggrad_hist, np.reshape(sess.run(bip_am_reggrad, feed_dict={stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}), [1, no_bipolars, no_bipolars, no_kernels, no_amacrines, no_amacrines])], 0)
am_gc_ygrad_hist=tf.concat( [am_gc_ygrad_hist, np.reshape(sess.run(am_gc_ygrad, feed_dict={stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}), [1, no_amacrines, no_amacrines])], 0)
am_gc_reggrad_hist=tf.concat( [am_gc_reggrad_hist, np.reshape(sess.run(am_gc_reggrad, feed_dict={stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}), [1, no_amacrines, no_amacrines])], 0)
bip_gc_ygrad_hist=tf.concat( [bip_gc_ygrad_hist, np.reshape(sess.run(bip_gc_ygrad, feed_dict={stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}), [1, no_bipolars, no_bipolars, no_kernels])], 0)
bip_gc_reggrad_hist=tf.concat( [bip_gc_reggrad_hist, np.reshape(sess.run(bip_gc_reggrad, feed_dict={stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}), [1, no_bipolars, no_bipolars, no_kernels])], 0)
train_loss_hist=np.concatenate([train_loss_hist, np.array([train_loss_val])], axis=0)
test_loss_hist=np.concatenate([test_loss_hist, np.array([test_loss_val])], axis=0)
train_fd={stimulus_: x_train[0:50, :, :, :]}
test_fd={stimulus_: x_test[0:50, :, :, :]}
train_output=reshape(gc_output.eval(session=sess, feed_dict=train_fd), [1, 50])
test_output=reshape(gc_output.eval(session=sess, feed_dict=test_fd), [1, 50])
train_output_hist=np.concatenate([train_output_hist, train_output], axis=0)
test_output_hist=np.concatenate([test_output_hist, test_output], axis=0)
#stopping condition
if (step/100)>=5:
b=np.diff(train_loss_hist[int(step/100-5):int(step/100)])
a=abs(b)<1.0
c=b>0.0
if sum(c)>=3:
endflag=1
step = step + 1
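For reference, the cyclic cosine learning-rate schedule computed inside the loop above, restated as a standalone helper. This is a sketch: `cosine_lr` is our own name, with defaults copied from the variables set earlier in this notebook (note that `lr_max` is set below `lr_min` here, so each cycle moves from `lr_max` toward `lr_min`).

```python
import numpy as np

def cosine_lr(step, lr_min=1e-4, lr_max=1e-5, max_step=500):
    """Cyclic cosine schedule, as used in the training loop above.

    At step 0 (and at every multiple of max_step) the schedule yields lr_max;
    just before the end of a cycle it approaches lr_min.
    """
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + np.cos(np.pi * (step % max_step / max_step)))
```

At the cycle midpoint (step 250 with the defaults) the schedule is exactly the average of `lr_min` and `lr_max`.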
db = {}
db['bipolar_bias'] = bipolar_bias.eval(session=sess)
db['bip_gc_syn_hist'] = bip_gc_syn_hist.eval(session=sess)
db['bip_am_syn_hist'] = bip_am_syn_hist.eval(session=sess)
db['am_gc_syn_hist'] = am_gc_syn_hist.eval(session=sess)
db['gc_bias'] = gc_bias.eval(session=sess)
db['bip_am_ygrad_hist'] = bip_am_ygrad_hist.eval(session=sess)
db['bip_am_reggrad_hist'] = bip_am_reggrad_hist.eval(session=sess)
db['am_gc_ygrad_hist'] = am_gc_ygrad_hist.eval(session=sess)
db['am_gc_reggrad_hist'] = am_gc_reggrad_hist.eval(session=sess)
db['bip_gc_ygrad_hist'] = bip_gc_ygrad_hist.eval(session=sess)
db['bip_gc_reggrad_hist'] = bip_gc_reggrad_hist.eval(session=sess)
db['no_train']=no_train
db['no_test']=no_test
db['no_kernels'] = no_kernels
db['no_bipolars']=no_bipolars
db['bipkernels'] = bipkernels
db['randomseed'] = sd
db['train_output_hist'] = train_output_hist
db['test_output_hist'] = test_output_hist
db['algorithm_choice'] = algorithm_choice
db['learn_rate'] = learn_rate_sch # last scheduled learning rate
db['lambda'] = lambda1
db['train_loss_hist'] = train_loss_hist
db['test_loss_hist'] = test_loss_hist
struct_proj = np.zeros([len(train_loss_hist), 1])
syn_hist = bip_gc_syn_hist.eval(session=sess)
basyn_hist = abs(bip_am_syn_hist.eval(session=sess))
agsyn_hist = abs(am_gc_syn_hist.eval(session=sess))
truesyn = np.zeros([10, 10, no_kernels])
truebasyn = np.zeros([no_bipolars, no_bipolars, no_kernels, no_amacrines, no_amacrines])
truebasyn[:, :, 0:3, :, :]=bip_am_syn_init
trueagsyn=am_gc_syn_init
norm_factor = (np.sum(np.square(truebasyn)) + np.sum(np.square(trueagsyn)))
for i in range(len(train_loss_hist)):
norm_factor = (np.sum(np.square(basyn_hist[i, :, :, :, :, :])) + np.sum(np.square(agsyn_hist[i, :, :])))
struct_proj[i] = (np.sum(np.multiply((basyn_hist[i, :, :, :, :, :]), truebasyn))+np.sum(np.multiply((agsyn_hist[i, :, :]), trueagsyn)))/norm_factor
db['struct_proj'] = struct_proj
sio.savemat(wheretosave, db)
print("completed data: %d kernels: %d" % (no_train, no_kernels))
# +
fd=feed_dict= {stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}
train_output=gc_output.eval(session=sess, feed_dict=fd)
print(train_output)
fd=feed_dict= {stimulus_: x_train[0:100, :, :, :], y_: y_train[0, 0:100]}
train_output=amacrine_cell_layer.eval(session=sess, feed_dict=fd)
print(train_output)
| Figure 4 Simulations/Circuit2_Train_NoL1Reg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp models.mWDN
# -
# # multilevel Wavelet Decomposition Network (mWDN)
#
# > This is an unofficial PyTorch implementation by <NAME> - <EMAIL> based on:
#
# * <NAME>., <NAME>., <NAME>., & <NAME>. (2018, July). Multilevel wavelet decomposition network for interpretable time series analysis. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 2437-2446).
# * No official implementation found
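For intuition, a single level of a classical discrete wavelet transform splits a series into a low-pass (approximation) half and a high-pass (detail) half, which is the operation each `WaveBlock` below approximates with trainable linear layers followed by average pooling. Here is a minimal Haar sketch (deliberately simpler filters than the db4-like defaults used below; `haar_dwt` is our own illustrative helper, not part of this library):

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar decomposition: low-pass (pairwise average) and
    high-pass (pairwise difference) halves, each half the input length."""
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return low, high

x = np.arange(8, dtype=float)
low, high = haar_dwt(x)  # each of length 4, like mWDN's AvgPool1d(2) halving
```

The two halves preserve the signal's energy, and the halving of the sequence length is why each successive `WaveBlock` below receives `seq_len // 2 ** i`.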
#export
from tsai.imports import *
from tsai.models.layers import *
from tsai.models.InceptionTime import *
from tsai.models.utils import create_model
#export
import pywt
# +
#export
# This is an unofficial PyTorch implementation by <NAME> - <EMAIL> based on:
# <NAME>., <NAME>., <NAME>., & <NAME>. (2018, July). Multilevel wavelet decomposition network for interpretable time series analysis. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 2437-2446).
# No official implementation found
class WaveBlock(Module):
def __init__(self, c_in, c_out, seq_len, wavelet=None):
if wavelet is None:
self.h_filter = [-0.2304,0.7148,-0.6309,-0.028,0.187,0.0308,-0.0329,-0.0106]
self.l_filter = [-0.0106,0.0329,0.0308,-0.187,-0.028,0.6309,0.7148,0.2304]
else:
w = pywt.Wavelet(wavelet)
self.h_filter = w.dec_hi
self.l_filter = w.dec_lo
self.mWDN_H = nn.Linear(seq_len,seq_len)
self.mWDN_L = nn.Linear(seq_len,seq_len)
self.mWDN_H.weight = nn.Parameter(self.create_W(seq_len,False))
self.mWDN_L.weight = nn.Parameter(self.create_W(seq_len,True))
self.sigmoid = nn.Sigmoid()
self.pool = nn.AvgPool1d(2)
def forward(self,x):
hp_1 = self.sigmoid(self.mWDN_H(x))
lp_1 = self.sigmoid(self.mWDN_L(x))
hp_out = self.pool(hp_1)
lp_out = self.pool(lp_1)
all_out = torch.cat((hp_out, lp_out), dim=-1)
return lp_out, all_out
def create_W(self, P, is_l, is_comp=False):
if is_l: filter_list = self.l_filter
else: filter_list = self.h_filter
list_len = len(filter_list)
max_epsilon = np.min(np.abs(filter_list))
if is_comp: weight_np = np.zeros((P, P))
else: weight_np = np.random.randn(P, P) * 0.1 * max_epsilon
for i in range(0, P):
filter_index = 0
for j in range(i, P):
if filter_index < len(filter_list):
weight_np[i][j] = filter_list[filter_index]
filter_index += 1
return tensor(weight_np)
class mWDN(Module):
def __init__(self, c_in, c_out, seq_len, levels=3, wavelet=None, arch=InceptionTime, arch_kwargs={}):
self.levels=levels
self.blocks = nn.ModuleList()
for i in range(levels): self.blocks.append(WaveBlock(c_in, c_out, seq_len // 2 ** i, wavelet=wavelet))
self.classifier = create_model(arch, c_in, c_out, seq_len=seq_len, **arch_kwargs)
def forward(self,x):
for i in range(self.levels):
x, out_ = self.blocks[i](x)
out = out_ if i == 0 else torch.cat((out, out_), dim=-1)
out = self.classifier(out)
return out
# -
bs = 16
c_in = 3
seq_len = 12
c_out = 2
xb = torch.rand(bs, c_in, seq_len)
m = mWDN(c_in, c_out, seq_len)
test_eq(mWDN(c_in, c_out, seq_len)(xb).shape, [bs, c_out])
m
#hide
out = create_scripts()
beep(out)
| nbs/110_models.mWDN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.10 64-bit
# language: python
# name: python3
# ---
# # Create Climate Network of monthly 2m temperature from ERA5
#
# +
import numpy as np
from climnet.dataset import AnomalyDataset
import climnet.network.net as net
import climnet.network.clim_networkx as nx
from climnet.utils import time_utils
import climnet.plots as cplt
from importlib import reload
# Set parameters and paths
# datapath = '../../data/t2m/2m_temperature_monthly_1979_2020.nc'
datapath = '/home/strnad/data/era5/2m_temperature/2m_temperature_sfc_1979_2020_mon_mean.nc'
# -
# ### Regridding of data to an equidistant grid
# The data is interpolated to an (approximately) uniformly spaced Fekete Grid.
#
# The dataset class will also create anomaly time series of the input data with respect to the day of year.
#
# Moreover, the data is detrended.
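As a rough illustration of what "anomaly with respect to day of year" means (plain NumPy on made-up data, not the climnet API): subtract each calendar day's climatological mean from the raw series.

```python
import numpy as np

# Made-up daily series: three years of data with a seasonal cycle plus noise.
rng = np.random.default_rng(0)
doy = np.tile(np.arange(365), 3)                     # day-of-year index per sample
raw = 10.0 * np.sin(2 * np.pi * doy / 365) + rng.normal(0.0, 1.0, doy.size)

# Climatological mean for each day of year, then the anomaly series.
clim = np.array([raw[doy == d].mean() for d in range(365)])
anom = raw - clim[doy]
```

By construction, the anomalies for any given day of year average to zero, which removes the seasonal cycle before correlations are computed.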
ds = AnomalyDataset(data_nc=datapath,
var_name='t2m',
grid_step=10,
grid_type='fekete',
detrend=True,
climatology="dayofyear")
# ### Create a network from the created dataset
#
# Here we create a network based on Spearman correlations, keeping only the links with the strongest positive/negative correlation values such that the density of the network is 2%.
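A toy sketch of the density-based thresholding (plain NumPy on random data; `CorrClimNet` computes this internally, so none of the names below come from the library): Spearman correlation is Pearson correlation of the rank-transformed series, and the threshold is chosen so that roughly 2% of all possible links survive.

```python
import numpy as np

# Hypothetical input: 100 time steps at 40 grid points.
rng = np.random.default_rng(1)
data = rng.normal(size=(100, 40))

# Spearman correlation = Pearson correlation of the rank-transformed series
# (double argsort yields the ranks when there are no ties).
ranks = data.argsort(axis=0).argsort(axis=0)
corr = np.corrcoef(ranks.T)

# Keep the strongest |corr| values so that ~2% of possible links survive.
n = corr.shape[0]
iu = np.triu_indices(n, k=1)
vals = np.abs(corr[iu])
k = max(1, int(round(0.02 * vals.size)))
thresh = np.sort(vals)[-k]
adj = (np.abs(corr) >= thresh) & ~np.eye(n, dtype=bool)
```

The resulting adjacency matrix is symmetric with an empty diagonal, and its density is close to the requested 2%.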
reload(net)
Net = net.CorrClimNet(ds, corr_method='spearman',
density=0.02)
Net.create()
# We remove spurious links using link bundling. This may take a while, so we suggest storing the network afterwards.
adjacency_lb = Net.link_bundles(
num_rand_permutations=2000,
)
Net.save('../../outputs/t2m_fekete_net_lb.npz')
# Currently, we create a networkx object from the computed network to store computed properties
# +
reload(nx)
cnx = nx.Clim_NetworkX(dataset=ds,
network=Net,
weighted=True)
# Compute node degree
cnx.compute_network_attrs('degree')
# save network as networkx file
cnx.save(savepath='../../outputs/t2m_network.graphml')
# -
# ### Plot node degree
reload(cplt)
im = cplt.plot_map(cnx.ds,
cnx.ds_nx['degree'],
plot_type='contourf',
significant_mask=True,
projection='EqualEarth',
plt_grid=True,
levels=15,
tick_step=2,
round_dec=3,
label='node degree'
)
| bin/tutorials/create_net.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.2 64-bit
# name: python392jvsc74a57bd0aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49
# ---
# # [LEGALST-123] Lab 10: Folium Plugins
#
#
#
# This lab will provide an introduction to extra plugins in the folium package: Search and Draw.
#
# *Estimated Time: 30 minutes*
#
# ---
#
# ### Table of Contents
#
# [The Data](#section data)<br>
#
# [Context](#section context)<br>
#
# 1 - [Search](#section 1)<br>
#
# 2 - [Draw](#section 2)<br>
#
#
#
#
#
# **Dependencies:**
import json
import os
import pandas as pd
import numpy as np
# #!pip install folium --upgrade
import folium
from folium import plugins
from folium.plugins import Draw
from folium.plugins import HeatMap
# +
# # %load search.py
from __future__ import (absolute_import, division, print_function)
from branca.element import CssLink, Figure, JavascriptLink, MacroElement
from jinja2 import Template
class Search(MacroElement):
"""
Adds a search tool to your map.
Parameters
----------
data: str/JSON
GeoJSON strings
search_zoom: int
zoom level when searching features, default 12
search_label: str
label to index the search, default 'name'
geom_type: str
geometry type, default 'Point'
position: str
Change the position of the search bar, can be:
'topleft', 'topright', 'bottomright' or 'bottomleft',
default 'topleft'
See https://github.com/stefanocudini/leaflet-search for more information.
"""
def __init__(self, data, search_zoom=12, search_label='name', geom_type='Point', position='topleft'):
super(Search, self).__init__()
self.position = position
self.data = data
self.search_label = search_label
self.search_zoom = search_zoom
self.geom_type = geom_type
self._template = Template("""
{% macro script(this, kwargs) %}
var {{this.get_name()}} = new L.GeoJSON({{this.data}});
{{this._parent.get_name()}}.addLayer({{this.get_name()}});
var searchControl = new L.Control.Search({
layer: {{this.get_name()}},
propertyName: '{{this.search_label}}',
{% if this.geom_type == 'Point' %}
initial: false,
zoom: {{this.search_zoom}},
position:'{{this.position}}',
hideMarkerOnCollapse: true
{% endif %}
{% if this.geom_type == 'Polygon' %}
marker: false,
moveToLocation: function(latlng, title, map) {
var zoom = {{this._parent.get_name()}}.getBoundsZoom(latlng.layer.getBounds());
{{this._parent.get_name()}}.setView(latlng, zoom); // access the zoom
}
{% endif %}
});
searchControl.on('search:locationfound', function(e) {
e.layer.setStyle({fillColor: '#3f0', color: '#0f0'});
if(e.layer._popup)
e.layer.openPopup();
}).on('search:collapsed', function(e) {
{{this.get_name()}}.eachLayer(function(layer) { //restore feature color
{{this.get_name()}}.resetStyle(layer);
});
});
{{this._parent.get_name()}}.addControl( searchControl );
{% endmacro %}
""") # noqa
def render(self, **kwargs):
super(Search, self).render()
figure = self.get_root()
assert isinstance(figure, Figure), ('You cannot render this Element '
'if it is not in a Figure.')
figure.header.add_child(
JavascriptLink('https://cdn.jsdelivr.net/npm/leaflet-search@2.3.6/dist/leaflet-search.min.js'), # noqa
name='Leaflet.Search.js'
)
figure.header.add_child(
CssLink('https://cdn.jsdelivr.net/npm/leaflet-search@2.3.6/dist/leaflet-search.min.css'), # noqa
name='Leaflet.Search.css'
)
# -
# ---
#
# ## The Data <a id='data'></a>
# ---
# +
# Searchable US states
m = folium.Map(location = [39.8283, -98.5795], zoom_start=5)
m
states = os.path.join('data', 'search_states.json')
states_data = json.load(open(states))
states_data
# with open(os.path.join('data', 'search_states.json')) as f:
# states = json.loads(f.read())
#Function with Colors Based on Density
folium.GeoJson(
states_data,
style_function=lambda feature: {
'fillColor': '#800026' if feature['properties']['density'] > 1000
else '#BD0026' if feature['properties']['density'] > 500
else '#E31A1C' if feature['properties']['density'] > 200
else '#FC4E2A' if feature['properties']['density'] > 100
else '#FD8D3C' if feature['properties']['density'] > 50
else '#FEB24C' if feature['properties']['density'] > 20
else '#FED976' if feature['properties']['density'] > 10
else '#FFEDA0',
'color': 'black',
'weight': 2,
'dashArray': '5, 5'
}
).add_to(m)
# Searchable Roman location markers
rome_bars = os.path.join('data', 'search_bars_rome.json')
rome = json.load(open(rome_bars))
# Berkeley PD call report location coordinates
with open('data/call_locs.txt') as f:
coords = np.fromfile(f).tolist()
call_locs = [[coords[i], coords[i + 1]] for i in range(0, len(coords), 2)]
m
# -
# ---
#
# ## Plugins <a id='data'></a>
# Now that we're familiar with several different types of Folium mapping, we make our maps more functional with two additional **plug-ins**: Folium package components that add new features. Today, we'll work with **Search** and **Draw**.
#
#
# ---
#
# ## Search <a id='section 1'></a>
# Most map applications you've used likely contained a search function. Here's how to add one using Folium.
#
# Create a map of the United States. Next, create a Search object by calling its constructor function, then add the Search to your Map using `add_to`- just like you've done with many other Folium modules.
#
# `Search(...)` needs to be given some searchable GeoJSON-formatted data. Here, you can use `states`.
#
# Hint: Remember, the `add_to` function call looks something like `<thing you want to add>.add_to(<place you want to add it to>)`
# +
# create a searchable map
# -
# You can also set optional parameters such as the location of the bar. We loaded the code for Search in an earlier cell: look at the 'parameters' section to see what options you can change, then see what happens when you change `geom_type`, `search_zoom`, and `position` on your states search map.
#
# In addition to searching for whole geographic areas, we can also search for specific locations. Create another searchable map in the cell below, this time using [41.9, 12.5] for the map coordinates and `rome` for your search data. You'll want to set a high `search_zoom`. How many cafes can you find?
# create a searchable map of Rome
# Folium's Search is currently pretty limited, but it can be useful when you have the right set of GeoJSON data.
# 1. Go to http://geojson.io, a tool for creating and downloading GeoJSON (the interface might look familiar: they use Leaflet, the same software Folium is built around).
# 2. Create ten markers anywhere you'd like. Click on each marker and name them by adding a row with 'name' in the left column and the marker's name in the right column.
# 3. Save your markers in a GeoJSON file by using the save menu in the upper right.
# 4. Upload your GeoJSON file to datahub
# 5. Open your file in this notebook and create a searchable map using your markers. Hint: there's an example of code for opening a GeoJSON file earlier in this notebook.
#
# +
# upload and plot your markers
# -
# ---
# ## Draw <a id='section 2'></a>
# A **Draw** tool can be a great way to interactively add, change, or highlight features on your map. Adding it is simple:
#
# 1. Create a Map
# 2. Create a Draw object by calling the `Draw()` constructor function
# 3. Add your Draw object to your map using `add_to`
#
#
# Folium's Draw has a lot of features. Let's try a few.
#
# - Draw a polygon representing the [Bermuda Triangle](https://en.wikipedia.org/wiki/Bermuda_Triangle)
# - The Deepwater Horizon oil spill in 2010 occurred in the nearby Gulf of Mexico. Look up the exact coordinates, add a marker for the center, and draw two circles: one representing the size of the spill at its smallest, and one at its largest.
...
#
# - The Proclaimers say [they would walk 500 miles and they would walk 500 more just to be the man who walked 1000 miles to fall down at your door](https://en.wikipedia.org/wiki/I%27m_Gonna_Be_(500_Miles)). Create a map with a marker at the Proclaimers' birthplace in Edinburgh, Scotland. Then make a circle with a radius of 1000 miles (about 1600 kilometers) centered at Edinburgh to see if the Proclaimers can reach your door.
# - <NAME> lives in Nashville, Tennessee and says that [you know she'd walk 1000 miles if she could just see you](https://en.wikipedia.org/wiki/A_Thousand_Miles). Use the edit layers button to move your circle to be centered at Nashville and see if you're in Vanessa's range. Note: the circle changes size slightly when you move it south. Why?
#
#
...
# - Recreate the HeatMap you made on Tuesday using the Berkeley Police Department call location data (the call coordinates are stored in the variable `call_locs`). Add the Draw tool, then draw a polyline representing the route from Barrows Hall to your home that passes through the fewest call locations. How long is it? How does it compare to the shortest possible route?
#
...
# Visualization is a powerful way to understand and communicate data. For each Draw tool, think of one data analysis project that would be a good use of the tool:
# **ANSWER:**
# Congratulations; you're a folium expert!
# ---
# Notebook developed by: <NAME>; 2019 revision by <NAME>
#
# Data Science Modules: http://data.berkeley.edu/education/modules
#
| labs/10_Folium Plugins/10_Folium_plugins.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MCMC Slice sampler in Python
# ### returns Markov chain and log-likelihoods
# ### <NAME> 2019 January 15 on SciServer
import numpy as np
import matplotlib.pyplot as plt
from numpy.random import rand, normal
def slicer(g, dim, x0, xargs, N=1000, w=0.5, m=10):
"""MCMC slice sampler:
g -- function or distribution
dim -- number of dimensions
x0 -- initial guess (vector of parameters)
xargs -- extra arguments for g (often data-related)
N -- number of values in Markov chain to return
w -- step-out width for slice sampling
m -- maximum for step-out scaling
Returns: (xs, likelies)
xs[N, dim] -- resulting Markov chain (includes initial guess as 0th)
likelies[N] -- vector of log-likelihoods of chain
See: Neal, "Slice Sampling," The Annals of Statistics 2003, vol. 31 (705-767). This is available online
--D. Craig converted from Julia mslicer, 2019 Jan 15.
"""
# based on Julia mslicer, version in mslicer-keeplikes.ipynb
xs = np.zeros((N, dim), dtype=np.float64) # array (Markov chain) that will be returned
xs[0,:] = x0 #initial guess into the chain
x1 = np.zeros(dim)
L = np.zeros(dim)
R = np.zeros(dim)
likelies = np.zeros(N) # record log likelihoods
likelies[0] = g(x0,xargs) # get log-like of initial guess; avoid fencepost error
way = np.zeros(dim) # which axis to go along in space
i = 1 # assumed start values for chain are recorded at xs[0,:]; this will be index of first generated point
while i < N:
for d in range(dim): # go one step in each dimensional direction.
way = 0.0 * way #clear it
way[d] = 1.0 #set nonzero in direction we go for slicing on this step
y0 = g(x0,xargs) #height of distribution at x0
y = y0 + np.log(rand()) # height for slice (using log scaled distribution)
#start stepping out
U = rand() # between 0 and 1
L = x0 - (w * way * U)
R = L + w * way
V = rand()
J = np.floor(m*V)
K = (m - 1) - J
while J > 0 and y < g(L,xargs):
L = L - w * way
J = J - 1
while K > 0 and y < g(R,xargs):
R = R + w * way
K = K - 1
#now should be stepped out beyond distribution at slice level
# work back in if no value found:
Lbar, Rbar = L, R
while True:
U = rand()
x1 = Lbar + U * (Rbar - Lbar) # vector subtraction should be correct dir
if y < g(x1,xargs):
break # exit while loop
if x1[d] < x0[d]:
Lbar = x1
else:
Rbar = x1
xs[i,:] = x1 # found an acceptable point, record in chain (a row)
likelies[i] = y0 # record log-likelihood
x0 = x1 # set initial to new point for next round.
i += 1
if i >= N:
break # catch case where we reach N in the middle of set of dimensions
return xs, likelies
# ### Make the fake data, a linear + noise model
theta_true = [25.0, 0.5] # [intercept, slope]
xdata = 100*rand(50);
ydata = theta_true[0] + theta_true[1]*xdata
n = 5*normal(size=50) # noise rms of 5
ydata += n
# ydata[j] += 40*normal(size=10) + 40 #contamination by gaussian of higher variance
plt.scatter(xdata, ydata);
def log_prior(theta): #NOTE theta is a tuple
"""Jeffries prior for slopes"""
alpha, beta, sigma = theta
if sigma <= 0:
return -np.inf # log(0)
else:
return -0.5 * np.log(1 + beta**2) - np.log(sigma) #Jeffreys prior for slopes
log_prior((24, .4, 5))
def log_like(theta, xvec):
alpha, beta, sigma = theta
x = xvec[0]
y = xvec[1]
y_model = alpha + beta * x
return -0.5 * sum(np.log(2*np.pi*sigma**2) + (y - y_model)*(y - y_model) / sigma**2)
def log_posterior(theta, xvec):
return log_prior(theta) + log_like(theta,xvec) #sum of logs is product in Bayes theorem numerator
# ## Execute; make the Markov chain:
# %%time
res, likes = slicer(log_posterior, 3, [24.0, 0.5, 2.0], [xdata, ydata], N=5000);
plt.hist(likes, bins="fd", histtype='step')
plt.title("Distribution of likelihoods");
plt.xlabel("log-likelihood")
plt.ylabel("Count");
A_chain = res[:,0]
B_chain = res[:,1]
N_chain = res[:,2]; # get the MCMC parameter chains out with convenient names
# (marginalized samplings)
plt.plot(A_chain,B_chain);
plt.hist2d(A_chain, B_chain, bins=30,cmap=plt.cm.Greys);
plt.hist2d(A_chain, N_chain, bins=30, cmap=plt.cm.Greys);
plt.hist2d(B_chain, N_chain, bins=30, cmap=plt.cm.Greys);
fit_idx = np.argmax(likes)
A_fit = res[fit_idx,0]
B_fit = res[fit_idx,1]
N_fit = res[fit_idx,2];
A_fit, B_fit, N_fit
plt.scatter(xdata, ydata)
plt.plot(xdata, B_fit*xdata + A_fit, color='r'); # maximum likelihood model
# --------------------------------------
# ## Background: slice sampling
#
# Slice sampling is a Markov Chain Monte Carlo technique that tries to cut down on the amount of tuning needed to get a sample from the asymptotic distribution. Here I will try to explain it to myself.
#
# Suppose we have a distribution (or something proportional to one): $ p(x) $. We pick a point $ x_0.$ Then we find a $ y_0 = p(x_0)$
#
# <img src="slice-sampling-fig1.png" width=50%/>
#
# Next, pick a point using a uniform random number generator on the interval $[0,y_0]$: this gives us $y$,
# $y = \text{Uniform}[0, y_0]$. $y$ defines a 'height' at the point $x_0$ that is below the probability density. This also defines a _slice_ across $p(x)$ at height $y$, I'll call it $\cal{S}$. Now $\cal S$ is an interval, but of course we _don't_ know where it ends at either high or low values of $x$ yet.
#
# <img src="slice-sampling-fig2.png" width=50%/>
#
# Pick a width $w$, and let $L = x_0 - w$ and $R = x_0 + w.$ Next, grow the width until $L$ and $R$ lie _outside_ $\cal S$. This is detected when both $p(L)$ and $p(R)$ are _less_ than $y$: $p(L) < y \text{ and } p(R) < y.$
#
# There are various choices for growing $w$ to find the edges of the slice: multiplicative, additive, etc.
#
# Next, find a point in $\cal S$ by picking a point $x_1$ (randomly) in $[L,R]$ such that $ y < p(x_1)$ :
#
# - if you fail, just try again with a new random number, until you hit it.
# - if you miss, you can (carefully) shrink the interval until you get an acceptable point.
#
# The point $x_1$ that is accepted will be a sample from the distribution.
#
# See: Neal, "Slice Sampling," _The Annals of Statistics_ 2003, vol. 31 (705-767). This is available online.
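The procedure just described can be condensed into a minimal 1-D implementation. This is a hypothetical sketch (function and variable names are our own, not the `slicer` above): step out with fixed width $w$, then shrink the interval until an acceptable point is found.

```python
import numpy as np

def slice_sample_1d(logp, x0, n=2000, w=1.0, seed=0):
    """Minimal 1-D slice sampler (step-out, then shrinkage)."""
    rng = np.random.default_rng(seed)
    xs = np.empty(n)
    x = x0
    for i in range(n):
        # slice height: y < log p(x), drawn on the log scale
        y = logp(x) + np.log(rng.random())
        # step out: grow [L, R] until both ends fall below the slice
        L = x - w * rng.random()
        R = L + w
        while logp(L) > y:
            L -= w
        while logp(R) > y:
            R += w
        # shrink until an acceptable point is found
        while True:
            x1 = rng.uniform(L, R)
            if logp(x1) > y:
                break
            if x1 < x:
                L = x1
            else:
                R = x1
        x = x1
        xs[i] = x
    return xs

# sample from a standard normal (log-density up to a constant)
samples = slice_sample_1d(lambda t: -0.5 * t * t, x0=0.0, n=4000, seed=42)
```

The resulting chain should have mean near 0 and standard deviation near 1, matching the target distribution.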
# ?slicer
| py-slicer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import math
from tensorboardX import SummaryWriter
sess = None
# -
import tensorflow as tf
import collections
gpu_options = tf.GPUOptions(allow_growth=True,per_process_gpu_memory_fraction=0.8)
tf.reset_default_graph()
sess = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options))
# +
import env_configurations
import games_configurations
env_name = 'SuperMarioBrosRandomStages-v1'#'PongNoFrameskip-v4' #'MountainCar-v0' #'SuperMarioBros-v1'# 'PongNoFrameskip-v4' #SuperMarioBros-v1'
#env_name = 'CartPole-v1'#'CartPole-v1' #'RoboschoolAnt-v1' #'CarRacing-v0' #'LunarLander-v2' #'Acrobot-v1' #
obs_space, action_space = env_configurations.get_obs_and_action_spaces(env_name)
config = games_configurations.mario_random_config_lstm
observation_shape = obs_space.shape
n_actions = action_space.n
shape =(None, ) + observation_shape
print(shape)
print(n_actions)
# -
from a2c_discrete import A2CAgent
import common.tr_helpers
import networks
import ray
ray.init()
# +
agent = A2CAgent(sess,'stages_all', obs_space, action_space, config)
#agent.restore('nn/last_skip')
agent.train()
# -
agent.save('nn/last_all')
ray.shutdown()
# +
# -
import gym
gym.envs.registry.all()
| test_a2c.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="ZxoSCwZ9w1Yp" outputId="799548f4-60ba-4a7a-b28a-8fefe58295a6"
# !pip install -U --pre tensorflow=="2.2.0"
# + colab={"base_uri": "https://localhost:8080/"} id="iN9lg_nrLJWY" outputId="ee20e795-713f-45ea-92a9-9f183436d6df"
pip install tensorflow_gpu=="2.2.0"
# + colab={"base_uri": "https://localhost:8080/"} id="c_ntJUP7l9fn" outputId="1386f630-d5d1-4ad6-b9c2-b6b2ab402771"
from google.colab import drive
drive.mount('/content/drive')
# + id="CZPGCI95KJPD"
def getFileCounts(folder):
files = os.listdir(folder)
count = 0
for file in files:
count += 1
return count
# + id="jHdeqnSeKqP1"
import os
# + colab={"base_uri": "https://localhost:8080/"} id="UKVmyoGQKL4U" outputId="a263f5c8-e844-4b60-96cc-29a698442c56"
getFileCounts('MyDrive/CS2/TensorFlow/workspace/training_demo/images/train')
# + colab={"base_uri": "https://localhost:8080/"} id="duh0jNa8D_F-" outputId="696190b0-6c97-4cb6-c170-4163cc861313"
# %cd drive
# + colab={"base_uri": "https://localhost:8080/"} id="lHkLkzBLElmi" outputId="e06728c5-e0d0-4416-ebb7-70ea07e909e1"
# ls
# + colab={"base_uri": "https://localhost:8080/"} id="6U2UJloXx7io" outputId="97fa7a5e-0c9d-4a46-c28c-125a4fef2b1c"
# cd MyDrive/CS2/TensorFlow/models/research/
# + colab={"base_uri": "https://localhost:8080/"} id="icCZw2gZgrN8" outputId="be0363c7-bac7-4e36-acf2-f1290a9dcaaa"
pip install avro-python3==1.8.1
# + colab={"base_uri": "https://localhost:8080/"} id="grPJMsvkx7oJ" outputId="581d6461-15d2-4425-ec19-db28520cf37e"
pip install folium==0.2.1
# + colab={"base_uri": "https://localhost:8080/"} id="do6CkG8Xx7rl" outputId="d1818380-4c77-46c4-b744-b191a1842ee0"
pip install gast==0.3.3
# + colab={"base_uri": "https://localhost:8080/"} id="jIkt06VIf6sW" outputId="fd364a74-8d3f-43b8-828b-a87d6c623501"
pip install h5py==2.10.0
# + colab={"base_uri": "https://localhost:8080/"} id="V6XIdpsPf6vW" outputId="8b05c68b-72ba-4fd0-b126-f2049f01a490"
pip install tensorboard==2.2.0
# + colab={"base_uri": "https://localhost:8080/"} id="JOYrPk7gf6yG" outputId="ee7df40b-e8a9-4b5e-9090-e2dc4c94afc4"
pip install tensorflow-estimator==2.2.0
# + colab={"base_uri": "https://localhost:8080/"} id="61WblSHuf61s" outputId="27095474-5301-4169-c86f-49c4f8f3ed73"
pip install dill==0.3.4
# + colab={"base_uri": "https://localhost:8080/"} id="i0JBHUPsgdTz" outputId="4f17c894-b4e8-4bc5-bd4d-1c9186abc9a7"
pip install requests==2.23.0
# + id="WJELfpkcgdY8"
# + colab={"base_uri": "https://localhost:8080/"} id="v7Fh6LsNEmQr" outputId="81fb2778-e1f9-411b-df76-104d0a5f7a6a"
# Install the Object Detection API
# %%bash
protoc object_detection/protos/*.proto --python_out=.
# cp object_detection/packages/tf2/setup.py .
python -m pip install .
# + id="MmFYgr2g0aXI"
import matplotlib
import matplotlib.pyplot as plt
import os
import random
import io
import imageio
import glob
import scipy.misc
import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from IPython.display import display, Javascript
from IPython.display import Image as IPyImage
import tensorflow as tf
from object_detection.utils import label_map_util
from object_detection.utils import config_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.utils import colab_utils
from object_detection.builders import model_builder
# %matplotlib inline
# + id="dohx9Vz20aa4"
import os
import sys
os.environ['PYTHONPATH']+=":/content/drive/MyDrive/CS2/TensorFlow/models"
sys.path.append("/content/drive/MyDrive/CS2/TensorFlow/models/research")
# + colab={"base_uri": "https://localhost:8080/"} id="36Ox8KbJ0ae1" outputId="49c2712b-d6b8-453d-b8c1-8b03da979893"
# !python setup.py build
# !python setup.py install
# + colab={"base_uri": "https://localhost:8080/"} id="udEnTaMf1MHk" outputId="67f9af1f-4779-43ec-af6b-6b72fbf97b26"
# ls
# + colab={"base_uri": "https://localhost:8080/"} id="vcuglDn-1R7Z" outputId="6dda5f9b-e075-4464-b0f5-d537494e70a2"
# cd object_detection/builders/
# + colab={"base_uri": "https://localhost:8080/"} id="rwJqdwzM0agO" outputId="5cfe3051-0ba6-4943-a6ce-b079068fae5e"
# #cd into 'TensorFlow/models/research/object_detection/builders/'
# !python model_builder_tf2_test.py
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
print('Done')
# + colab={"base_uri": "https://localhost:8080/"} id="3DbtjExt1k1E" outputId="37b64194-6f84-4ec5-a295-f22414ec1e9b"
# %cd '/content/drive/MyDrive/CS2/TensorFlow/scripts/preprocessing'
# + id="-Vz-_4vC1k37"
# + id="30EH6qtT1k6i"
# + id="DIbfBntU1k-N"
# + colab={"background_save": true} id="GLPaDqumErCy" outputId="30a63595-a2dc-463b-f775-cdc8c707dc00"
# #cd into preprocessing directory
# #%cd '/content/gdrive/My Drive/TensorFlow/scripts/preprocessing'
#run the cell to generate test.record and train.record
# !python generate_tfrecords.py -x '/content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo/images/train' -l '/content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo/annotations/label_map.pbtxt' -o '/content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo/annotations/train.record'
# !python generate_tfrecords.py -x '/content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo/images/test' -l '/content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo/annotations/label_map.pbtxt' -o '/content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo/annotations/test.record'
# # !python generate_tfrecord.py -x '[path_to_train_folder]' -l '[path_to_annotations_folder]/label_map.pbtxt' -o '[path_to_annotations_folder]/train.record'
# # !python generate_tfrecord.py -x '[path_to_test_folder]' -l '[path_to_annotations_folder]/label_map.pbtxt' -o '[path_to_annotations_folder]/test.record'
# + colab={"base_uri": "https://localhost:8080/"} id="SoidMMDHEuua" outputId="d4690022-1692-4ee0-9ad5-f524f400076d"
# %cd '/content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo'
# + colab={"base_uri": "https://localhost:8080/", "height": 839} id="80oEWqVpFKXw" outputId="ad8ed091-c15b-4559-d758-bf318b69c6b8"
# %load_ext tensorboard
# %tensorboard --logdir='/content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo/models/efficientdet_d3'
# + colab={"base_uri": "https://localhost:8080/"} id="6Kxaq5VbobRC" outputId="8f492ee6-c89d-407a-c34b-bbea92ed9be0"
# !pip install tf-models-official
# + colab={"base_uri": "https://localhost:8080/"} id="xSEjaVXKomu1" outputId="dc6bbf94-8fc0-42b9-f8cb-7c7806a4ac40"
# !pip install tf-slim
# + id="FLoCkCmkrnJF"
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# + colab={"base_uri": "https://localhost:8080/"} id="a5zNxefGFvVX" outputId="c235801b-19f1-45f7-ed1b-cbfb745c8483"
# !python /content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo/model_main_tf2.py --model_dir=/content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo/models/efficientdet_d3 --pipeline_config_path=/content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo/models/efficientdet_d3/pipeline.config
# + colab={"base_uri": "https://localhost:8080/"} id="nOuGyYueG_Jf" outputId="2fe36113-afdb-4a92-a7c7-db79b14b01ca"
# + colab={"base_uri": "https://localhost:8080/"} id="hu97tAVKHAjq" outputId="7be15c33-a1eb-43cc-8c4e-ac6643f7b5b4"
# %cd workspace/training_demo/pre-trained-models/
# + colab={"base_uri": "https://localhost:8080/"} id="BcOkkqGoI8WK" outputId="e6c6558c-1068-4d0c-ded8-9e0d6c861d32"
# !wget 'http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d3_coco17_tpu-32.tar.gz'
# + colab={"base_uri": "https://localhost:8080/"} id="J9RpxFbpI-ea" outputId="2357b046-bd4a-48e2-d2cb-9f433ce0c095"
# !tar -xzvf 'efficientdet_d3_coco17_tpu-32.tar.gz'
# + colab={"base_uri": "https://localhost:8080/"} id="Clc3sRRBJW23" outputId="f8c96ca3-b147-47e1-a3d8-475cd5d6cdc7"
# %cd ..
# + colab={"base_uri": "https://localhost:8080/"} id="FKTPGeU9K1q3" outputId="afa8f2a5-2a30-493d-ea4a-1e70ab19fecc"
# cd ..
# + colab={"base_uri": "https://localhost:8080/"} id="xLJQufQUK23M" outputId="cb158607-3b96-417b-fc9c-4e1d3838878c"
# cd ..
# + colab={"base_uri": "https://localhost:8080/"} id="a21ThTZGK4bH" outputId="97c0cba4-bbc1-42b1-a638-d8327adc7a37"
# !git clone https://github.com/tensorflow/models.git
# + colab={"base_uri": "https://localhost:8080/"} id="4rEP9_c0K8J8" outputId="57765d89-5747-4804-e79b-e468384df9f0"
# %cd models
# + colab={"base_uri": "https://localhost:8080/"} id="doYflpb4LasG" outputId="2f27097e-b34f-4cfc-9321-409d5436ccf5"
# !git checkout -f e04dafd04d69053d3733bb91d47d0d95bc2c8199
# + colab={"base_uri": "https://localhost:8080/"} id="9zd9E2ujLzIH" outputId="28a0e4ce-9869-4c2f-9d16-d3eb71198250"
# !apt-get install protobuf-compiler python-lxml python-pil
# + colab={"base_uri": "https://localhost:8080/"} id="xZNKzw0hLzK3" outputId="36711048-b374-4322-85e8-1ac1b745fc68"
# !pip install Cython pandas tf-slim lvis
# + colab={"base_uri": "https://localhost:8080/"} id="ze60cpOdLzNW" outputId="23d84274-0cdc-4c96-b7f8-9b6ddc0b3ea4"
# %cd research/
# + id="7F1CHfIZLzP2"
# !protoc object_detection/protos/*.proto --python_out=.
# + id="5njrrChGLzTF"
import os
import sys
os.environ['PYTHONPATH']+=":/content/gdrive/My Drive/TensorFlow/models"
sys.path.append("/content/gdrive/My Drive/TensorFlow/models/research")
# + colab={"base_uri": "https://localhost:8080/"} id="lJH3dsMELgo1" outputId="65381b60-4460-492b-d511-e598325dd749"
# !python setup.py build
# + colab={"base_uri": "https://localhost:8080/"} id="aEL9yJfeMOUM" outputId="8065637c-1876-444a-9d2e-e4c76a28c035"
# ls
# + colab={"base_uri": "https://localhost:8080/"} id="gfS3MaJMMSyR" outputId="c5b2dbcf-3a11-4862-c129-a32ca0c00700"
# cd ..
# + colab={"base_uri": "https://localhost:8080/"} id="pSllFChJMWPp" outputId="350af2f6-1a67-43df-995e-080a3069ac90"
# !git checkout -f e04dafd04d69053d3733bb91d47d0d95bc2c8199
# + colab={"base_uri": "https://localhost:8080/"} id="Fp13BgJPNadK" outputId="d5fe132b-4014-4ba2-afca-c76a57fa2633"
# !pip install -U --pre tensorflow=="2.2.0"
# + colab={"base_uri": "https://localhost:8080/"} id="2sHxuFPaQ740" outputId="6bba21f0-3da2-4ec1-8c29-659e377ced86"
# Install the Object Detection API
# %%bash
# cd models/research/
protoc object_detection/protos/*.proto --python_out=.
# cp object_detection/packages/tf2/setup.py .
python -m pip install .
# + colab={"base_uri": "https://localhost:8080/"} id="YCi8fJovRnmD" outputId="310d8b6d-ca9c-4d41-da43-c5aa8212aa80"
# cd /content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo
# + colab={"base_uri": "https://localhost:8080/"} id="ZUiHhcwZR51W" outputId="6994dc16-5b2b-47bf-8c2e-0acb0c0ac026"
# ls
# + colab={"base_uri": "https://localhost:8080/"} id="GLzQcHnETHAN" outputId="5a3932ff-5998-4f07-8750-e71705d5a0db"
# !python exporter_main_v2.py --input_type image_tensor --pipeline_config_path ./models/efficientdet_d3/pipeline.config --trained_checkpoint_dir ./models/efficientdet_d3/ --output_directory ./exported-models/my_model
# + colab={"base_uri": "https://localhost:8080/"} id="VQhaiVNEUAQT" outputId="e24c3415-6213-4ef2-9586-74b8f4fc413d"
import tensorflow as tf
import time
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vz_utils
PATH_TO_SAVED_MODEL = '/content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo/exported-models/my_model/saved_model'
print('loading model...', end = '')
detect_fn = tf.saved_model.load(PATH_TO_SAVED_MODEL)
print('Done!')
# + id="eRuNp6nWVgsj"
category_index = label_map_util.create_category_index_from_labelmap('/content/drive/MyDrive/CS2/TensorFlow/workspace/training_demo/annotations/label_map.pbtxt', use_display_name=True)
# + id="dW3RqmaiWJnU"
img = ['/content/drive/MyDrive/CS2/Czech_000004.jpg', '/content/drive/MyDrive/CS2/Czech_000026.jpg', '/content/drive/MyDrive/CS2/Czech_000094.jpg',
'/content/drive/MyDrive/CS2/Czech_000396.jpg']
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="EdxKcR7yWi_z" outputId="b994e33e-3218-403f-e995-c74827837807"
import numpy as np
from PIL import Image
from object_detection.utils import visualization_utils as viz_utils
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
def load_image_into_numpy_array(path):
return np.array(Image.open(path))
for image_path in img:
print('running inference for {}... '.format(image_path), end='')
image_np = load_image_into_numpy_array(image_path)
input_tensor= tf.convert_to_tensor(image_np)
input_tensor = input_tensor[tf.newaxis, ...]
detections = detect_fn(input_tensor)
num_detections = int(detections.pop('num_detections'))
detections = {key:value[0,:num_detections].numpy() for key,value in detections.items()}
detections['num_detections'] = num_detections
detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
image_np_with_detections = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections,
detections['detection_boxes'],
detections['detection_classes'],
detections['detection_scores'],
category_index,
use_normalized_coordinates = True,
max_boxes_to_draw = 100,
min_score_thresh=.5,
agnostic_mode=False
)
# %matplotlib inline
plt.figure()
plt.imshow(image_np_with_detections)
print('DONE!')
plt.show()
# + id="Tsv7cGnAZao_"
| Road-Damage-Detection-using-AI-main/CS2_trainingTF2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + run_control={"frozen": false, "read_only": false}
import os
import sys

root_folder = os.path.dirname(os.getcwd())
sys.path.append(root_folder)
import NeuNorm as neunorm
from NeuNorm.normalization import Normalization
from NeuNorm.roi import ROI
# +
# data
file_1 = '/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-16259/MS-19/20170906_MS-19(2)_0000_0440.tiff'
file_2 = '/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-16259/MS-19/20170906_MS-19(2)_0000_0441.tiff'
file_3 = '/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-16259/MS-19/20170906_MS-19(2)_0000_0442.tiff'
file_4 = '/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-16259/MS-19/20170906_MS-19(2)_0000_0443.tiff'
file_5 = '/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-16259/MS-19/20170906_MS-19(2)_0000_0444.tiff'
file_6 = '/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-16259/MS-19/20170906_MS-19(2)_0000_0445.tiff'
data = [file_1, file_2, file_3, file_4, file_5, file_6]
# +
# ob
ob_1 = '/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-16259/OB/20170906_MS-19(2)_0000_0400.tiff'
ob_2 = '/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-16259/OB/20170906_MS-19(2)_0000_0401.tiff'
ob_3 = '/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-16259/OB/20170906_MS-19(2)_0000_0402.tiff'
ob = [ob_1, ob_2, ob_3]
# +
# df
df_1 = '/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-16259/df/20171027_DF_0000_0020.tiff'
df_2 = '/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-16259/df/20171027_DF_0000_0021.tiff'
df_3 = '/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-16259/df/20171027_DF_0000_0022.tiff'
df = [df_1, df_2, df_3]
# -
o_norm = Normalization()
o_norm.load(file=data)
o_norm.load(file=ob, data_type='ob')
o_norm.load(file=df, data_type='df')
roi1 = ROI(x0=0, y0=0, x1=100, y1=100)
o_norm.normalization(roi=roi1)
len(o_norm.data['sample']['data'])
len(o_norm.data['normalized'])
| notebooks/ipts-16259.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## K-means++
#
# In this notebook, we are going to implement [k-means++](https://en.wikipedia.org/wiki/K-means%2B%2B) algorithm with multiple initial sets. The original k-means++ algorithm will just sample one set of initial centroid points and iterate until the result converges. The only difference in this implementation is that we will sample `RUNS` sets of initial centroid points and update them in parallel. The procedure will finish when all centroid sets are converged.
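# As a minimal sketch of the seeding rule itself (with illustrative random data and a single run, not the parallel-`RUNS` Spark version implemented below), each new center is drawn with probability proportional to D(x)^2, the squared distance to the nearest already-chosen center:

```python
import numpy as np

def kmeans_pp_init(points, k, rng):
    """Pick k initial centers from `points` using the k-means++ rule."""
    n = len(points)
    centers = [points[rng.randint(n)]]  # first center: uniform at random
    for _ in range(k - 1):
        # squared distance from each point to its nearest chosen center
        d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()           # weight each point by D(x)^2
        centers.append(points[rng.choice(n, p=probs)])
    return np.array(centers)

rng = np.random.RandomState(0)
pts = rng.randn(100, 2)
init = kmeans_pp_init(pts, 5, rng)
print(init.shape)  # (5, 2)
```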
### Definition of some global parameters.
K = 5 # Number of centroids
RUNS = 25 # Number of K-means runs that are executed in parallel. Equivalently, number of sets of initial points
RANDOM_SEED = 6<PASSWORD>
converge_dist = 0.1 # The K-means algorithm is terminated when the change in the location
# of the centroids is smaller than 0.1
# +
import numpy as np
import pickle
import sys
from numpy.linalg import norm
from matplotlib import pyplot as plt
def print_log(s):
sys.stdout.write(s + "\n")
sys.stdout.flush()
def parse_data(row):
'''
Parse each pandas row into a tuple of (station_name, feature_vec),
where feature_vec is the concatenation of the projection vectors
of TAVG, TRANGE, and SNWD.
'''
return (row[0],
np.concatenate([row[1], row[2], row[3]]))
def compute_entropy(d):
'''
Compute the entropy given the frequency vector `d`
'''
d = np.array(d)
d = 1.0 * d / d.sum()
return -np.sum(d * np.log2(d))
def choice(p):
'''
Generates a random sample from [0, len(p)),
where p[i] is the probability associated with i.
'''
random = np.random.random()
r = 0.0
for idx in range(len(p)):
r = r + p[idx]
if r > random:
return idx
assert(False)
def kmeans_init(rdd, K, RUNS, seed):
'''
Select `RUNS` sets of initial points for `K`-means++
'''
# the `centers` variable is what we want to return
n_data = rdd.count()
shape = rdd.take(1)[0][1].shape[0]
centers = np.zeros((RUNS, K, shape))
def update_dist(vec, dist, k):
new_dist = norm(vec - centers[:, k], axis=1)**2
return np.min([dist, new_dist], axis=0)
# The second element `dist` in the tuple below is the closest distance from
# each data point to the selected points in the initial set, where `dist[i]`
# is the closest distance to the points in the i-th initial set.
data = rdd.map(lambda p: (p, [np.inf] * RUNS)) \
.cache()
# Collect the feature vectors of all data points beforehand, might be
# useful in the following for-loop
local_data = rdd.map(lambda (name, vec): vec).collect()
# Randomly select the first point for every run of k-means++,
# i.e. randomly select `RUNS` points and add it to the `centers` variable
sample = [local_data[k] for k in np.random.randint(0, len(local_data), RUNS)]
centers[:, 0] = sample
for idx in range(K - 1):
##############################################################################
# Insert your code here:
data = data.map(lambda (datas, dist) : (datas, update_dist(datas[1], dist, idx)))
distance = np.array(data.values().collect())
norm_distance = distance/distance.sum(axis=0)
for i in range(RUNS):
k = choice(norm_distance[:,i])
centers[i, idx+1] = local_data[k]
##############################################################################
# In each iteration, you need to select one point for each set
# of initial points (so select `RUNS` points in total).
# For each data point x, let D_i(x) be the distance between x and
# the nearest center that has already been added to the i-th set.
# Choose a new data point for i-th set using a weighted probability
# where point x is chosen with probability proportional to D_i(x)^2
##############################################################################
# pass
return centers
def get_closest(p, centers):
'''
Return the indices the nearest centroids of `p`.
`centers` contains sets of centroids, where `centers[i]` is
the i-th set of centroids.
'''
best = [0] * len(centers)
closest = [np.inf] * len(centers)
for idx in range(len(centers)):
for j in range(len(centers[0])):
temp_dist = norm(p - centers[idx][j])
if temp_dist < closest[idx]:
closest[idx] = temp_dist
best[idx] = j
return best
def kmeans(rdd, K, RUNS, converge_dist, seed):
'''
Run K-means++ algorithm on `rdd`, where `RUNS` is the number of
initial sets to use.
'''
k_points = kmeans_init(rdd, K, RUNS, seed)
print_log("Initialized.")
temp_dist = 1.0
iters = 0
st = time.time()
# following code use new_points
local_data = np.array(rdd.map(lambda (name, vec): vec).collect())
while temp_dist > converge_dist:
##############################################################################
# INSERT YOUR CODE HERE
##############################################################################
# Update all `RUNS` sets of centroids using standard k-means algorithm
# Outline:
# - For each point x, select its nearest centroid in i-th centroids set
# - Average all points that are assigned to the same centroid
# - Update the centroid with the average of all points that are assigned to it
# Insert your code here
closest_idxs_rdd = rdd.map(lambda (name, vec) : get_closest(vec, k_points)) # points * RUNS
closest_idxs = np.array(closest_idxs_rdd.collect()) # points * RUNS
new_points = {}
for i in range(RUNS):
for j in range(K):
idxs = closest_idxs[:, i] == j
new_points[(i, j)] = local_data[idxs].mean(axis=0)
# You can modify this statement as long as `temp_dist` equals to
# max( sum( l2_norm of the movement of j-th centroid in each centroids set ))
##############################################################################
temp_dist = np.max([
np.sum([norm(k_points[idx][j] - new_points[(idx, j)]) for j in range(K)])
for idx in range(RUNS)])
iters = iters + 1
if iters % 5 == 0:
print_log("Iteration %d max shift: %.2f (time: %.2f)" %
(iters, temp_dist, time.time() - st))
st = time.time()
# update old centroids
# You modify this for-loop to meet your need
for ((idx, j), p) in new_points.items():
k_points[idx][j] = p
return k_points
# -
## Read data
data = pickle.load(open("../../Data/Weather/stations_projections.pickle", "rb"))
rdd = sc.parallelize([parse_data(row[1]) for row in data.iterrows()])
rdd.take(1)
# +
# main code
import time
st = time.time()
np.random.seed(RANDOM_SEED)
centroids = kmeans(rdd, K, RUNS, converge_dist, np.random.randint(1000))
group = rdd.mapValues(lambda p: get_closest(p, centroids)) \
.collect()
print "Time taken to converge:", time.time() - st
# -
# ## Verify your results
# Verify your results by computing the objective function of the k-means clustering problem.
# +
def get_cost(rdd, centers):
'''
Compute the square of l2 norm from each data point in `rdd`
to the centroids in `centers`
'''
def _get_cost(p, centers):
best = [0] * len(centers)
closest = [np.inf] * len(centers)
for idx in range(len(centers)):
for j in range(len(centers[0])):
temp_dist = norm(p - centers[idx][j])
if temp_dist < closest[idx]:
closest[idx] = temp_dist
best[idx] = j
return np.array(closest)**2
cost = rdd.map(lambda (name, v): _get_cost(v, centroids)).collect()
return np.array(cost).sum(axis=0)
cost = get_cost(rdd, centroids)
# +
log2 = np.log2
print log2(np.max(cost)), log2(np.min(cost)), log2(np.mean(cost))
# -
# ## Plot the increase of entropy after multiple runs of k-means++
# +
entropy = []
for i in range(RUNS):
count = {}
for g, sig in group:
_s = ','.join(map(str, sig[:(i + 1)]))
count[_s] = count.get(_s, 0) + 1
entropy.append(compute_entropy(count.values()))
# -
# **Note:** Remove this cell before submitting to PyBolt (PyBolt does not fully support matplotlib)
# +
# # %matplotlib inline
# plt.xlabel("Iteration")
# plt.ylabel("Entropy")
# plt.plot(range(1, RUNS + 1), entropy)
# 2**entropy[-1]
# -
# ## Print the final results
print 'entropy=',entropy
best = np.argmin(cost)
print 'best_centers=',list(centroids[best])
| hw4/HW4-k-means-plus-plus.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + jupyter={"outputs_hidden": false} language="html"
# <!--Script block to left align Markdown Tables-->
# <style>
# table {margin-left: 0 !important;}
# </style>
# + jupyter={"outputs_hidden": false}
# Preamble script block to identify host, user, and kernel
import sys
# ! hostname
# ! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
# -
# ## Full name:
# ## R#:
# ## HEX:
# ## Title of the notebook
# ## Date:
# # Laboratory 6 - Part 2
# ## Files and Filesystems
#
# ### Background
#
# A computer file is a computer resource for recording data discretely (discretely here meaning in distinct, addressable units, not in the secretive sense) on a computer storage device. Just as words can be written to paper, so can information be written to a computer file. Files can be edited on that computer system and transferred to other systems through the internet.
#
# There are different types of computer files, designed for different purposes. A file may be designed to store a picture, a written message, a video, a computer program, or a wide variety of other kinds of data. Some types of files can store several types of information at once.
#
# By using computer programs, a person can open, read, change, save, and close a computer file. Computer files may be reopened, modified, and copied an arbitrary number of times.
#
# Typically, files are organised in a file system, which keeps track of where the files are located on disk and enables user access.
#
# #### File system
# In computing, a file system or filesystem, controls how data is stored and retrieved.
# Without a file system, data placed in a storage medium would be one large body of data with no way to tell where one piece of data stops and the next begins.
# By separating the data into pieces and giving each piece a name, the data is isolated and identified.
# Taking its name from the way a paper-based data management system is organized, each group of data is called a "file".
# The structure and logic rules used to manage the groups of data and their names is called a “file system”.
#
# #### Path
# A path, the general form of the name of a file or directory, specifies a unique location in a file system. A path points to a file system location by following the directory tree hierarchy expressed in a string of characters in which path components, separated by a delimiting character, represent each directory. The delimiting character is most commonly the slash ("/"), the backslash ("\"), or the colon (":"), though some operating systems may use a different delimiter.
# Paths are used extensively in computer science to represent the directory/file relationships common in modern operating systems, and are essential in the construction of Uniform Resource Locators (URLs). Resources can be represented by either absolute or relative paths.
# As an example consider the following two files:
#
# 1. /Users/theodore/MyGit/@atomickitty/hurri-sensors/.git/Guest.conf
# 2. /etc/apache2/users/Guest.conf
#
# They both have the same file name, but are located on different paths.
# Failure to provide the path when addressing the file can be a problem.
# Another way to interpret is that the two unique files actually have different names, and only part of those names is common (Guest.conf)
# The two names above (including the path) are called fully qualified filenames (or absolute names); a relative path (usually relative to the file or program of interest) depends on where in the directory structure the file lives.
# If we are currently in the .git directory (the first file), the relative path to the file is just the filename.
#
# We have experienced path issues with dependencies on .png files - in general your JupyterLab notebooks on CoCalc can only look at the local directory which is why we have to copy files into the directory for things to work.
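# The absolute/relative distinction can be sketched with Python's `os.path` helpers, using the second example file above:

```python
import os.path

absolute = '/etc/apache2/users/Guest.conf'   # a fully qualified (absolute) name
print(os.path.isabs(absolute))               # True: it starts at the filesystem root
print(os.path.basename(absolute))            # Guest.conf -- the name without the path
print(os.path.dirname(absolute))             # /etc/apache2/users -- the path portion

# the same file, expressed relative to its own directory, is just the filename
print(os.path.relpath(absolute, '/etc/apache2/users'))   # Guest.conf
```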
#
# #### File Types
#
# 1. Text Files. Text files are regular files that contain information readable by the user. This information is stored in ASCII. You can display and print these files. The lines of a text file must not contain NULL characters, and none can exceed a prescribed (by architecture) length, including the new-line character. The term text file does not prevent the inclusion of control or other nonprintable characters (other than NUL). Therefore, standard utilities that list text files as inputs or outputs are either able to process the special characters gracefully or they explicitly describe their limitations within their individual sections.
#
# 2. Binary Files. Binary files are regular files that contain information readable by the computer. Binary files may be executable files that instruct the system to accomplish a job. Commands and programs are stored in executable, binary files. Special compiling programs translate ASCII text into binary code. The only difference between text and binary files is that text files have lines of less than some length, with no NULL characters, each terminated by a new-line character.
#
# 3. Directory Files. Directory files contain information the system needs to access all types of files, but they do not contain the actual file data. As a result, directories occupy less space than a regular file and give the file system structure flexibility and depth. Each directory entry represents either a file or a subdirectory. Each entry contains the name of the file and the file's index node reference number (i-node). The i-node points to the unique index node assigned to the file. The i-node describes the location of the data associated with the file. Directories are created and controlled by a separate set of commands.
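# The text/binary distinction in (1) and (2) shows up directly in Python's file-open modes; a small sketch using a throwaway file:

```python
# write a short text file, then read it back in text mode and in binary mode
with open('mode_demo.txt', 'w') as f:
    f.write('hello\n')

with open('mode_demo.txt', 'r') as f:    # text mode: returns a decoded str
    as_text = f.read()

with open('mode_demo.txt', 'rb') as f:   # binary mode: returns raw bytes, no decoding
    as_bytes = f.read()

print(repr(as_text))    # 'hello\n'
print(repr(as_bytes))   # b'hello\n'

import os
os.remove('mode_demo.txt')   # clean up the throwaway file
```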
#
# ### File Manipulation
#
# For this laboratory we will learn just a handful of file manipulations, which are quite useful. Files can be "created", "read", "updated", or "deleted" (CRUD).
# #### Example: Create a file, write to it.
#
# Below is an example of creating a file that does not yet exist.
# The script is a bit pedantic on purpose.
#
# First will use some system commands to view the contents of the local directory
# + jupyter={"outputs_hidden": false}
import sys
# ! rm -rf myfirstfile.txt # delete file if it exists
# ! pwd # list name of working directory, note it includes path, so it is an absolute path
# + jupyter={"outputs_hidden": false}
# ! ls -l # list contents of working directory
# + jupyter={"outputs_hidden": false}
# create file example
externalfile = open("myfirstfile.txt",'w') # create connection to file, set to write (w), file does not need to exist
mymessage = 'message in a bottle' #some object to write, in this case a string
externalfile.write(mymessage)# write the contents of mymessage to the file
externalfile.close() # close the file connection
# -
# At this point our new file should exist, lets list the directory and see if that is so
# + jupyter={"outputs_hidden": false}
# ! ls -l # list contents of working directory
# -
# Sure enough, its there, we will use a `bash` command `cat` to look at the contents of the file.
# + jupyter={"outputs_hidden": false}
# ! cat myfirstfile.txt
# -
# That's about it. Use of system commands, of course, depends on the system; the examples above should work OK on CoCalc or a Macintosh. On Winderz the shell commands are a little different, but if you have the Linux subsystem installed then these should work as is.
# #### Example: Read from an existing file.
#
# We will continue using the file we just made, and read from it the example is below
# + jupyter={"outputs_hidden": false}
# read file example
externalfile = open("myfirstfile.txt",'r') # create connection to file, set to read (r), file must exist
silly_string = externalfile.read() # read the contents
externalfile.close() # close the file connection
print(silly_string)
# + jupyter={"outputs_hidden": false}
# -
# #### Example: Update a file.
#
# This example continues with our same file, but we will now add contents without destroying existing contents. The keyword is `append`
# + jupyter={"outputs_hidden": false}
externalfile = open("myfirstfile.txt",'a') # create connection to file, set to append (a), file does not need to exist
externalfile.write('\n') # adds a newline character
what_to_add = 'I love rock-and-roll, put another dime in the jukebox baby ... \n'
externalfile.write(what_to_add) # add a string including the linefeed
what_to_add = '... the waiting is the hardest part \n'
externalfile.write(what_to_add) # add a string including the linefeed
mylist = [1,2,3,4,5] # a list of numbers
what_to_add = ','.join(map(repr, mylist)) + "\n" # one way to write the list
externalfile.write(what_to_add)
what_to_add = ','.join(map(repr, mylist[0:len(mylist)])) + "\n" # another way to write the list
externalfile.write(what_to_add)
externalfile.close()
# -
# As before we can examine the contents using a shell command sent from the notebook.
# + jupyter={"outputs_hidden": false}
# ! cat myfirstfile.txt
# -
# A little discussion on the part where we wrote numbers
#
# what_to_add = ','.join(map(repr, mylist[0:len(mylist)])) + "\n"
#
# Here are descriptions of the two functions `map` and `repr`
#
# `map(function, iterable, ...)` Apply `function` to every item of iterable and return a list of the results.
# If additional iterable arguments are passed, function must take that many arguments and is applied to the items from all iterables in parallel.
# If one iterable is shorter than another it is assumed to be extended with None items.
# If function is None, the identity function is assumed; if there are multiple arguments, `map()` returns a list consisting of tuples containing the corresponding items from all iterables (a kind of transpose operation).
# The iterable arguments may be a sequence or any iterable object; the result is always a list.
#
# `repr(object)` Return a string containing a printable representation of an object.
# It is sometimes useful to be able to access this operation as an ordinary function.
# For many types, this function makes an attempt to return a string that would yield an object with the same value when passed to `eval()`; otherwise the representation is a string enclosed in angle brackets that contains the name of the type of the object together with additional information, often including the name and address of the object.
# A class can control what this function returns for its instances by defining a `__repr__()` method.
#
# What they do in this script is important.
# The statement:
#
# what_to_add = ','.join(map(repr, mylist[0:len(mylist)])) + "\n"
#
# is building a string comprised of the elements of mylist[0:len(mylist)].
# The `repr()` function converts each element to its string representation, the comma delimiter is inserted between them by the string `join` method, and because everything is now a string the
#
# ... + "\n"
#
# puts a linefeed character at the end of the string so the output will start a new line the next time something is written.
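As a quick check, the pieces of that statement can be run in isolation (using the same `mylist` defined above):

```python
# Build the comma-separated line step by step
mylist = [1, 2, 3, 4, 5]
parts = list(map(repr, mylist))  # each number converted to its string representation
line = ','.join(parts) + "\n"    # comma-delimited string plus a linefeed
print(repr(line))                # '1,2,3,4,5\n'
```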
# #### Example: Delete a file
#
# Deleting a file can be done with a system call, as we did above to clear the local directory
#
# In a JupyterLab notebook, we can either use
#
# import sys
# # ! rm -rf myfirstfile.txt # delete file if it exists
#
# or
#
# import os
# os.remove("myfirstfile.txt")
#
# They both have the same effect, and both are equally dangerous to your filesystem.
#
#
# Learn more about CRUD with text files at https://www.guru99.com/reading-and-writing-files-in-python.html
#
# Learn more about file delete at https://www.dummies.com/programming/python/how-to-delete-a-file-in-python/
#
# + jupyter={"outputs_hidden": false}
import os
file2kill = "myfirstfile.txt"
try:
    os.remove(file2kill)  # file must exist or os.remove raises an exception
except FileNotFoundError:
    pass  # example of using pass to improve readability
print(file2kill, " missing or deleted !")
# -
# ## Example
#
# - Create a text file and name it __"MyFavoriteQuotation"__.
# - Write __your favorite quotation__ in the file.
# - Read the file.
# - Append this string to it on a new line: "And that's something I wish I had said..."
# - Show the final outcome.
# + jupyter={"outputs_hidden": false}
# -
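One possible solution sketch (the quotation used here is just a placeholder — substitute your own):

```python
# Create the file and write a quotation into it
with open("MyFavoriteQuotation", "w") as quote_file:
    quote_file.write("The only way to do great work is to love what you do.")

# Read the file
with open("MyFavoriteQuotation", "r") as quote_file:
    print(quote_file.read())

# Append the required string on a new line
with open("MyFavoriteQuotation", "a") as quote_file:
    quote_file.write("\nAnd that's something I wish I had said...")

# Show the final outcome
with open("MyFavoriteQuotation", "r") as quote_file:
    final_text = quote_file.read()
print(final_text)
```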
# #### Exercise-3:
#
# Read and print the contents of the text file named '**ReadingFile.txt**' that is located on the content server.
# You will have to download the file, then upload to CoCalc.
# + jupyter={"outputs_hidden": false}
# -
# #### Exercise-4:
# Append the text that is given below to the file named "**Readingfile.txt**".
#
# "*With many people utilizing the power of data today, we have umpteen resources from where one can learn what is new in the domain.*"
#
# Print the output after appending the new line.
#
# Use the system command "**! cat Readingfile.txt**" to show the appended file contains the new content
#
# + jupyter={"outputs_hidden": false}
# + jupyter={"outputs_hidden": false}
# -
# ## References
# <NAME>. (2018). Python Without Fear. Addison-Wesley ISBN 978-0-13-468747-6.
#
# <NAME> (2015). Data Science from Scratch: First Principles with Python O’Reilly
# Media. Kindle Edition.
#
# <NAME>. (2010) wxPython 2.8 Application Development Cookbook Packt Publishing Ltd. Birmingham , B27 6PA, UK ISBN 978-1-849511-78-0.
| 1-Lessons/Lesson06/Lab6/src/Lab6-Files_FullNarrative.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .scala
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Scala
// language: scala
// name: scala
// ---
import $ivy.`io.monix::monix:3.2.2`;
import $ivy.`org.tpolecat::doobie-core:0.9.0`
import $ivy.`org.tpolecat::doobie-hikari:0.9.0`
import $ivy.`co.fs2::fs2-reactive-streams:2.4.4`
import $ivy.`ru.yandex.clickhouse:clickhouse-jdbc:0.2.4`
// +
import ru.yandex.clickhouse.ClickHouseDataSource
import ru.yandex.clickhouse.settings.ClickHouseProperties
val clickHouseProperties = new ClickHouseProperties()
clickHouseProperties.setHost("localhost")
clickHouseProperties.setPort(8123)
clickHouseProperties.setDatabase("docker")
clickHouseProperties.setUser("default")
clickHouseProperties.setPassword("")
val chDataSource = new ClickHouseDataSource("jdbc:clickhouse://localhost:8123/docker")
// +
import doobie.util.transactor.Transactor
import doobie.util.transactor.Transactor.Aux
import monix.eval.Task
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext
import cats.effect.Blocker
val transactor: Aux[Task, ClickHouseDataSource] = Transactor.fromDataSource[Task](
chDataSource,
ExecutionContext.fromExecutor(Executors.newFixedThreadPool(10)),
Blocker.liftExecutorService(Executors.newCachedThreadPool())
)
// -
case class GitHubRecordBaseInfo(
ghRequestUser: String,
name: String,
ownerType: String
)
// +
import doobie.implicits._
object SimpleQueries{
def getCount: doobie.Query0[Int] = {
    sql"SELECT COUNT(*) FROM docker.parquet_files".query[Int]
}
def getBaseInfo(limitValue: Int): doobie.Query0[GitHubRecordBaseInfo] = {
sql"""|SELECT ghRequestUser, name, ownerType
|FROM docker.parquet_files LIMIT $limitValue""".stripMargin.query[GitHubRecordBaseInfo]
}
}
// +
import doobie.implicits._
import monix.eval.Task
import fs2.interop.reactivestreams._
import monix.execution.Scheduler.Implicits.global
import monix.reactive.Observable
object SimpleRepository{
def getCount: Task[List[Int]] = {
val publisher = SimpleQueries.getCount
.stream
.transact(transactor)
.toUnicastPublisher
Observable.fromReactivePublisher(publisher).toListL
}
def getBaseInfo(limitValue: Int): Task[List[GitHubRecordBaseInfo]] = {
val publisher = SimpleQueries.getBaseInfo(limitValue)
.stream
.transact(transactor)
.toUnicastPublisher
Observable.fromReactivePublisher(publisher).toListL
}
}
// -
import monix.execution.Scheduler.Implicits.global
SimpleRepository.getCount.runSyncUnsafe().head
// +
val headers: Seq[String] = Seq("ghRequestUser", "name", "ownerType")
val selectResult: List[GitHubRecordBaseInfo] = SimpleRepository.getBaseInfo(5).runSyncUnsafe()
val htmlString: String =
s"""
<table>
<tr>${headers.map(elem => s"<th>${elem}</th>").mkString}</tr>
${selectResult.map(row => s"<tr><td>${row.ghRequestUser}</td><td>${row.name}</td><td>${row.ownerType}</td></tr>").mkString}
</table>
"""
kernel.publish.html(htmlString)
// -
| notebooks/dataset_definition.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pyspark.sql.types import *
import pyspark.sql
import pyspark.sql.functions as sf
# # Pandas UDFs
#
# "Normal" Python UDFs are pretty expensive (in terms of execution time), since for every record the following steps need to be performed:
# * record is serialized inside JVM
# * record is sent to an external Python process
# * record is deserialized inside Python
# * record is Processed in Python
# * result is serialized in Python
# * result is sent back to JVM
# * result is deserialized and stored inside result DataFrame
#
# This does not only sound like a lot of work, it actually is. Therefore, Python UDFs are an order of magnitude slower than native UDFs written in Scala or Java, which run directly inside the JVM.
#
# But since Spark 2.3, an alternative approach is available for defining Python UDFs, so-called *Pandas UDFs*. Pandas is a commonly used Python framework which also offers DataFrames (Pandas DataFrames, not Spark DataFrames). Spark 2.3 can convert a Spark DataFrame inside the JVM into a shareable memory buffer by using a library called *Arrow*. Python can then treat this memory buffer as a Pandas DataFrame and work directly on the shared memory.
#
# This approach has two major advantages:
# * No need for serialization and deserialization, since data is shared directly in memory between the JVM and Python
# * Pandas has lots of very efficient implementations in C for many functions
#
# Due to these two facts, Pandas UDFs are much faster and should be preferred over traditional Python UDFs whenever possible.
# # Sales Data Example
#
# In this notebook we will be using a data set called "Watson Sales Product Sample Data" which was downloaded from https://www.ibm.com/communities/analytics/watson-analytics-blog/sales-products-sample-data/
basedir = "s3://dimajix-training/data"
# +
data = spark.read\
.option("header", True) \
.option("inferSchema", True) \
.csv(basedir + "/watson-sales-products/WA_Sales_Products_2012-14.csv")
data.printSchema()
# -
# # 1. Classic UDF Approach
#
# As an example, let's create a function which computes the previous quarter from a quarter string such as `"Q1 2012"`. First let us have a look using a traditional Python UDF:
# ### Python function
# +
def prev_quarter(quarter):
q = int(quarter[1:2])
y = int(quarter[3:8])
prev_q = q - 1
if (prev_q <= 0):
prev_y = y - 1
prev_q = 4
else:
prev_y = y
return "Q" + str(prev_q) + " " + str(prev_y)
print(prev_quarter("Q1 2012"))
print(prev_quarter("Q4 2012"))
# -
# ### Spark UDF
# +
from pyspark.sql.functions import udf
# Use udf to define a row-at-a-time udf
@udf('string')
# Input/output are both a single string value
def prev_quarter_udf(quarter):
# YOUR CODE HERE
result = # YOUR CODE HERE
result.limit(10).toPandas()
# -
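A possible way to fill in the skeleton above. The body is the same logic as the plain `prev_quarter` function; `prev_quarter_logic` is just an illustrative name, shown here without the `@udf('string')` decorator so it runs without a Spark session:

```python
# Row-at-a-time logic: with @udf('string') on top, this becomes the Spark UDF that
# could be applied as data.withColumn('prev_quarter', prev_quarter_udf(data["Quarter"]))
def prev_quarter_logic(quarter):
    q = int(quarter[1:2])
    y = int(quarter[3:8])
    if q - 1 <= 0:
        return "Q4 " + str(y - 1)  # Q1 wraps back to Q4 of the previous year
    return "Q" + str(q - 1) + " " + str(y)

print(prev_quarter_logic("Q1 2012"))  # Q4 2011
print(prev_quarter_logic("Q4 2012"))  # Q3 2012
```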
# # 2. Scalar Pandas UDF
#
# Compute the previous quarter using a Pandas UDF. The Pandas UDF receives a `pandas.Series` object and also has to return a `pandas.Series` object.
# +
from pyspark.sql.functions import pandas_udf, PandasUDFType
# YOUR CODE HERE
result = data.withColumn('prev_quarter', prev_quarter_pudf(data["Quarter"]))
result.limit(10).toPandas()
# -
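A sketch of the vectorised core in plain pandas, so it runs without Spark; in the notebook it would carry the decorator `@pandas_udf('string')` and the name `prev_quarter_pudf`:

```python
import pandas as pd

# Vectorised previous-quarter computation: whole pandas.Series in, pandas.Series out
def prev_quarter_series(quarter: pd.Series) -> pd.Series:
    q = quarter.str.slice(1, 2).astype(int)
    y = quarter.str.slice(3, 8).astype(int)
    wrap = q == 1                    # Q1 wraps back to Q4 of the previous year
    prev_q = q.where(~wrap, 5) - 1   # becomes 4 for wrapped rows
    prev_y = y.where(~wrap, y - 1)
    return "Q" + prev_q.astype(str) + " " + prev_y.astype(str)

print(prev_quarter_series(pd.Series(["Q1 2012", "Q4 2012"])).tolist())  # ['Q4 2011', 'Q3 2012']
```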
# ## 2.1 Limitations of Scalar UDFs
#
# Scalar Pandas UDFs are used for vectorizing scalar operations. They can be used with functions such as select and withColumn. The Python function should take `pandas.Series` as inputs and return a `pandas.Series` of the same length. Internally, Spark will execute a Pandas UDF by splitting columns into batches and calling the function for each batch as a subset of the data, then concatenating the results together.
# # 3. Grouped Pandas Aggregate UDFs
#
# Since version 2.4.0, Spark also supports Pandas aggregation functions. This is the only way to implement custom aggregation functions in Python. Note that this type of UDF does not support partial aggregation and all data for a group or window will be loaded into memory.
# +
# YOUR CODE HERE
result = data.groupBy("Quarter").agg(mean_udf(data["Revenue"]).alias("mean_revenue"))
result.toPandas()
# -
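A possible definition of `mean_udf` — the aggregation itself is one line of pandas; in the notebook it would be declared with `@pandas_udf('double', PandasUDFType.GROUPED_AGG)`:

```python
import pandas as pd

# Grouped aggregate logic: one pandas.Series in, a single scalar out
def mean_logic(v: pd.Series) -> float:
    return v.mean()

print(mean_logic(pd.Series([1.0, 2.0, 3.0])))  # 2.0
```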
# ## 3.1 Limitation of Aggregate UDFs
#
# A Grouped Aggregate UDF defines an aggregation from one or more `pandas.Series` to a single scalar value, where each `pandas.Series` represents a column within the group or window.
# # 4. Grouped Pandas Map UDFs
# The example above transforms all records independently, but only one column at a time. Spark also offers a so-called *grouped Pandas UDF* which operates on complete groups of records (as created by a `groupBy` method). This could be used to replace windowing functions with a Pandas implementation.
#
# For example let's subtract the mean of a group from all entries of a group. In Spark this could be achieved directly by using windowed aggregations. But let's first have a look at a Python implementation which does not use Pandas Grouped UDFs
# +
import pandas as pd
@udf(ArrayType(DoubleType()))
def subtract_mean(values):
series = pd.Series(values)
center = series - series.mean()
return [x for x in center]
groups = data.groupBy('Quarter').agg(sf.collect_list(data["Revenue"]).alias('values'))
result = groups.withColumn('center', sf.explode(subtract_mean(groups.values))).drop('values')
result.limit(10).toPandas()
# -
# This example is even incomplete, as all other columns are now missing. We won't complete it, since Pandas grouped map UDFs provide a much better approach.
# ## 4.1 Using Pandas Grouped Map UDFs
#
# Now let's try to implement the same function using a Pandas grouped UDF. Grouped map Pandas UDFs are used with `groupBy().apply()` which implements the “split-apply-combine” pattern. Split-apply-combine consists of three steps:
# 1. Split the data into groups by using DataFrame.groupBy.
# 2. Apply a function on each group. The input and output of the function are both pandas.DataFrame. The input data contains all the rows and columns for each group.
# 3. Combine the results into a new DataFrame.
#
# To use groupBy().apply(), the user needs to define the following:
# * A Python function that defines the computation for each group.
# * A StructType object or a string that defines the schema of the output DataFrame.
#
# The column labels of the returned `pandas.DataFrame` must either match the field names in the defined output schema if specified as strings, or match the field data types by position if not strings, e.g. integer indices.
# +
from pyspark.sql.types import *
# Define result schema
result_schema = StructType(data.schema.fields + [StructField("revenue_diff", DoubleType())])
def subtract_mean(pdf):
# YOUR CODE HERE
result = # YOUR CODE HERE
result.limit(10).toPandas()
# -
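A possible body for `subtract_mean` — it receives all rows of one group as a `pandas.DataFrame` and must return a `pandas.DataFrame` matching `result_schema`; in the notebook it would be decorated with `@pandas_udf(result_schema, PandasUDFType.GROUPED_MAP)` and applied via `data.groupBy('Quarter').apply(subtract_mean)`. Shown here in plain pandas so it runs stand-alone:

```python
import pandas as pd

# Add a column holding each row's deviation from the group mean,
# keeping all original columns (unlike the collect_list approach above)
def subtract_mean_pdf(pdf: pd.DataFrame) -> pd.DataFrame:
    return pdf.assign(revenue_diff=pdf["Revenue"] - pdf["Revenue"].mean())

demo = pd.DataFrame({"Quarter": ["Q1 2012"] * 3, "Revenue": [100.0, 200.0, 300.0]})
print(subtract_mean_pdf(demo)["revenue_diff"].tolist())  # [-100.0, 0.0, 100.0]
```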
# ## 4.2 Limitations of Grouped Map UDFs
#
# Grouped map UDFs are the most flexible Spark Pandas UDFs with regard to the return type. A grouped map UDF always returns a `pandas.DataFrame`, but with an arbitrary number of rows and columns (although the columns need to be defined in the schema in the Python decorator `@pandas_udf`). Specifically, this means that the number of rows is not fixed, as opposed to scalar UDFs (where the number of output rows must match the number of input rows) and grouped aggregate UDFs (which can only produce a single scalar value per incoming group).
| spark-training/spark-python/jupyter-advanced-udf/Pandas UDFs - Skeleton.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #%config Application.log_level='WORKAROUND'
# => fails, necessary on Fedora 27, ipython3 6.2.1
# #%config Application.log_level='INFO'
#import logging
#logging.getLogger().setLevel(logging.INFO)
from interface_d3m import AlphaAutoml
# +
output_path = '/Users/rlopez/D3M/examples/output/'
train_dataset = '/Users/rlopez/D3M/examples/input/covid19/train.csv'
test_dataset = '/Users/rlopez/D3M/examples/input/covid19/test.csv'
automl = AlphaAutoml(output_path)
pipelines = automl.search_pipelines(train_dataset, 'Fatalities', metric='meanAbsoluteError', task=['forecasting'])
# -
display(automl.leaderboard)
model = automl.train(0)
automl.test(model, test_dataset)
| scripts/example_standalone.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_pytorch_latest_p36
# language: python
# name: conda_pytorch_latest_p36
# ---
# + id="choice-exhibit"
import sagemaker
import boto3
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker.session import s3_input, Session
# import uuid
# + [markdown] id="english-mouth"
# ## Create bucket & Validation Region for S3
# + id="immediate-worry" outputId="cf6bb887-ee32-4ef7-80c5-5b4dfc80053a"
bucket_name = 'aps360' # <--- CHANGE THIS VARIABLE TO A UNIQUE NAME FOR YOUR BUCKET
my_region = boto3.session.Session().region_name # set the region of the instance
print(my_region)
# + [markdown] id="serious-photographer"
# ## Create Paths to S3 Buckets for storage of model data
# + id="instrumental-welding" outputId="dcd771c8-ee7e-474f-c23e-57f832ab72c8"
# Prefix for files in bucket
prefix = 'xrayclassification'
# Dataset directory
dataset = 'Xray_Dataset'
# Model output folder name
output_dir_name = 'trial_14'
# S3 Path bucket to get the data for training (Train, Test, Validation)
dataset_dir = 's3://{}/{}/{}'.format(bucket_name, prefix, dataset)
# output path for SageMaker to dump all model artifacts and graphs etc
output_dir = 's3://{}/{}/{}'.format(bucket_name, prefix, output_dir_name)
# # checkpoints for spot training
# checkpoint_suffix = str(uuid.uuid4())[:8]
# checkpoint_s3_path = 's3://{}/{}/{}/checkpoint-{}'.format(bucket_name, prefix, output_dir_name, checkpoint_suffix)
# sanity check for output path for model data
print('Dataset directory <dataset_dir>: ', dataset_dir)
print('Model Output directory <output_dir>: ', output_dir)
# print('Checkpointing Path: <checkpoint_s3_path>: {}'.format(checkpoint_s3_path))
# + [markdown] id="swedish-dynamics"
# ## Manage Spot Training
# + id="prime-miami"
# use_spot_instances = True
# max_run=24*60*60
# max_wait = 24*60*60
# + id="infinite-hungary"
# initialize hyperparamters
hyperparameters = {
'epochs': 12,
'batch-size': 64,
'learning-rate': 0.0005
}
# Training instance
training_instance = 'ml.g4dn.2xlarge'
# Create the current role to use sagemaker
role = sagemaker.get_execution_role()
# + id="polish-laptop" outputId="73a3a064-194b-4b80-e6ef-a1b1f1dfb81f"
from sagemaker.pytorch import PyTorch
# Create a PyTorch estimator to run the training script on AWS SageMaker
# (script mode is the default in SageMaker SDK v2, so script_mode is no longer passed)
estimator = PyTorch(
    entry_point='trial14xray.py',
    role=role,
    framework_version='1.8.0',
    py_version='py3',
    output_path=output_dir,
    instance_count=1,                 # renamed from train_instance_count in SDK v2
    instance_type=training_instance,  # renamed from train_instance_type in SDK v2
    hyperparameters=hyperparameters,
    base_job_name='trial-14-pytorch')
# + id="caroline-anchor" outputId="ca3c9149-608d-4f34-80d6-32ea44536eb5"
estimator.fit({'training': dataset_dir})
# + id="violent-founder"
# + id="practical-substance"
| InceptionPart2/trial_14_xrays.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import csv
import time
# return the list of values from
# the csv file as a list
def get_raw_csv_list(skip_first_line, path, *paths):
csv_path = ''
# https://www.w3schools.com/python/gloss_python_function_arbitrary_arguments.asp
if not paths:
csv_path = os.path.join(path)
else:
csv_path = os.path.join(path, *paths)
with open(csv_path) as csv_file:
# CSV reader specifies delimiter and variable that holds contents
csv_reader = csv.reader(csv_file, delimiter=',')
# convert csv data into list
csv_raw_data = list(csv_reader)
#skip the first line
if (skip_first_line):
csv_raw_data.pop(0)
# return value
return csv_raw_data
# saves the list of strings into path
def write_text_file(lines, path, *paths):
text_path = ''
# https://www.w3schools.com/python/gloss_python_function_arbitrary_arguments.asp
if not paths:
text_path = os.path.join(path)
else:
text_path = os.path.join(path, *paths)
with open(text_path, 'w') as text_file:
        # I prefer to get the length and
        # iterate as a carryover from my C# habits
length = len(lines)
for i in range(length):
text_file.write(f'{lines[i]}\n')
def clean_data(data):
if data is None:
return data
length = len(data)
if length < 1:
return data
for i in range(length):
try:
# https://www.tutorialspoint.com/python3/time_strptime.htm
# convert the string value of the time from Feb-2020
# into a time struct object
dt = time.strptime(data[i][0], '%b-%Y')
data[i][0] = dt
except:
print(f'Could not convert line {data[i][0]} into time struct.')
try:
money = int(data[i][1])
data[i][1] = money
except:
print(f'Could not convert line {data[i][1]} into integer.')
# https://stackoverflow.com/questions/16310015/what-does-this-mean-key-lambda-x-x1
data.sort(key=lambda x: x[0])
return data
# +
# Date,Profit/Losses
# Jan-2010,867884
data = clean_data(get_raw_csv_list(True, 'Resources', 'budget_data.csv'))
data_length = len(data)
# -
# * The total number of months included in the dataset
time_column = [val[0] for val in data]
# https://www.geeksforgeeks.org/python-ways-to-remove-duplicates-from-list/
# remove duplicates
time_column = list(set(time_column))
total_months = len(time_column)
# * The net total amount of "Profit/Losses" over the entire period
total_money = 0
for line in data:
total_money = total_money + line[1]
# +
# * Calculate the changes in "Profit/Losses" over the entire period, then find the average of those changes
month_prof_changes=[]
for i in range(1, data_length):
month_prof_changes.append(data[i][1] - data[i-1][1])
average_change=0
for line in month_prof_changes:
average_change = average_change + line
average_change = average_change/(data_length-1)
# -
# https://www.tutorialspoint.com/python/list_max.htm
# * The greatest increase in profits (date and amount) over the entire period
inc_in_prof = max(month_prof_changes)
inc_in_prof_month = data[1 + month_prof_changes.index(inc_in_prof)][0]
inc_in_prof_month = time.strftime('%b-%Y', inc_in_prof_month)
# * The greatest decrease in profits (date and amount) over the entire period
dec_in_prof = min(month_prof_changes)
dec_in_prof_month = data[1 + month_prof_changes.index(dec_in_prof)][0]
dec_in_prof_month = time.strftime('%b-%Y', dec_in_prof_month)
# +
analysis_data=[
    f' Financial Analysis',
    f' ----------------------------',
    f' Total Months: {total_months}',
    f' Total: ${total_money}',
    f' Average Change: ${average_change:.2f}',
    f' Greatest Increase in Profits: {inc_in_prof_month} (${inc_in_prof})',
    f' Greatest Decrease in Profits: {dec_in_prof_month} (${dec_in_prof})' ]
write_text_file(analysis_data, 'Analysis', 'analysis.txt')
for line in analysis_data:
print(line)
| PyBank/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Astronomy — Interactive Viewers
#
# Several interactive widgets are available that allow interactive exploration of various astronomical image collections.
#
#
# ```{warning}
# Many interactive `ipywidgets` will only work in the context of a live notebook.
# ```
# ## `ipyaladin`
# + tags=["remove-cell"]
# %%capture
try:
import ipyaladin
except:
# %pip install ipyaladin
# %pip install astroquery
# !jupyter nbextension enable --py widgetsnbextension
# !jupyter nbextension enable --py --sys-prefix ipyaladin
# +
import ipyaladin as ipyal
aladin= ipyal.Aladin(target='messier 51', fov=1)
# When using this in an interactive notebook,
# we can use https://github.com/innovationOUtside/nb_cell_dialog extension
# to pop the cell or cell output out into a floating widget
# rather than have it anchored in a fixed place in the linear flow of a notebook
aladin
# -
# ```{note}
# If the [`nb_cell_dialog`](https://github.com/innovationOUtside/nb_cell_dialog) extension is installed, the widget can be popped out of the code cell output area and into its own floating widget. This allows the widget to remain in view as commands are issued throughout the notebook to change the WWT view rendered by it.
# ```
#
# Set a new target object by name:
aladin.target='M82'
# Set the zoom level:
aladin.fov=0.3
# Enable the layers control in the widget:
aladin.show_layers_control=True
# Set a layer explicitly:
aladin.survey = 'P/2MASS/color'
# Look up objects in a particular area:
# +
from astroquery.simbad import Simbad
table = Simbad.query_region("M82", radius="4 arcmin")
# -
# And overlay objects on the image in the widget viewer:
aladin.add_table(table)
# ## `pywwt`
#
# The [`pywwt`](https://pywwt.readthedocs.io/en/stable/settings.html) package provides an interactive widget for making observations using the [AAS World Wide Telescope](http://www.worldwidetelescope.org/home/), a suite of tools that provide access to a wide variety of astronomical visualisations.
#
# Start by loading the default widget:
# + tags=["remove-cell"]
# %%capture
try:
import pywwt
except:
# %pip install pywwt
# !jupyter nbextension install --py --symlink --sys-prefix pywwt
# !jupyter nbextension enable --py --sys-prefix pywwt
# !jupyter serverextension enable --py --sys-prefix pywwt
# %pip install git+https://github.com/innovationOUtside/nb_cell_dialog.git
# +
from pywwt.jupyter import WWTJupyterWidget
wwt = WWTJupyterWidget()
# When using this in an interactive notebook,
# we can use https://github.com/innovationOUtside/nb_cell_dialog extension
# to pop the cell or cell output out into a floating widget
# rather than have it anchored in a fixed place in the linear flow of a notebook
wwt
# -
# A series of code cell commands then configure the view displayed by the widget.
#
# ```{note}
# If the [`nb_cell_dialog`](https://github.com/innovationOUtside/nb_cell_dialog) extension is installed, the widget can be popped out of the code cell output area and into its own floating widget. This allows the widget to remain in view as commands are issued throughout the notebook to change the WWT view rendered by it.
# ```
# Enable constellation boundaries:
wwt.constellation_boundaries = True
# Enable constellation figures:
wwt.constellation_figures = True
# View telescope layers that are available:
wwt.available_layers[:5]
# Bring a particular object into view at a specified magnification:
# +
from astropy import units as u
from astropy.coordinates import SkyCoord
coord = SkyCoord.from_name('Alpha Centauri')
wwt.center_on_coordinates(coord, fov=10 * u.deg)
# -
# Enable foreground and background layers:
wwt.background = wwt.imagery.gamma.fermi
wwt.foreground = wwt.imagery.other.planck
wwt.foreground_opacity = .75
# Foreground and background layer views, and the relative opacity, can also be controlled via interactive widgets:
wwt.layer_controls
# ## Change of View
#
# Observations can be rendered for particular views:
wwt.available_views
# For example, we can view Jupiter:
wwt.set_view('jupiter')
# Or the whole Milky Way:
wwt.set_view('milky way')
# We can also draw constellations:
# +
from astropy import units as u
from astropy.coordinates import concatenate, SkyCoord
bd = concatenate((SkyCoord.from_name('Alkaid'), # stars in Big Dipper
SkyCoord.from_name('Mizar'),
SkyCoord.from_name('Alioth'),
SkyCoord.from_name('Megrez'),
SkyCoord.from_name('Phecda'),
SkyCoord.from_name('Merak'),
SkyCoord.from_name('Dubhe')))
wwt.center_on_coordinates(SkyCoord.from_name('Megrez'))
line = wwt.add_line(bd, width=3 * u.pixel)
# -
# A particular view can be saved as an HTML bundle with the command: `wwt.save_as_html_bundle()`
| src/astronomy/interactive-astronomy-viewers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Get started with the Sampler primitive
#
# Learn how to set up and use the Sampler primitive program.
#
#
# ## Overview
#
# The Sampler primitive lets you more accurately contextualize counts. It takes a user circuit as an input and generates an error-mitigated readout of quasiprobabilities. This enables you to more efficiently evaluate the possibility of multiple relevant data points in the context of destructive interference.
#
#
# ## Prepare the environment
#
# 1. Follow the steps in the [getting started guide](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/getting_started.html) to get your Quantum Service instance ready to use.
# 2. You'll need at least one circuit to submit to the program. Our examples all have circuits in them, but if you want to submit your own circuit, you can use Qiskit to create one. To learn how to create circuits by using Qiskit, see the [Circuit basics tutorial](https://qiskit.org/documentation/tutorials/circuits/01_circuit_basics.html).
#
#
# ## Start a session
#
# With Qiskit Runtime primitives, we introduce the concept of a session or a factory that allows you to define a job as a collection of iterative calls to the quantum computer. When you start a session, it caches the data you send so it doesn't have to be transmitted to the Quantum Datacenter on each iteration.
#
#
# ### Specify program inputs
#
# The Sampler takes in the following arguments:
#
# - **circuits**: a list of (parameterized) circuits that you want to investigate.
# - **parameters**: a list of parameters for the parameterized circuits. It should be omitted if the circuits provided are not parameterized.
# - **skip_transpilation**: circuit transpilation is skipped if set to `True`. Default value is `False`.
# - **service**: the `QiskitRuntimeService` instance to run the program on. If not specified, the default saved account for `QiskitRuntimeService` is initialized.
# - **options**: Runtime options dictionary that control the execution environment.
# - **backend**: The backend to run on. This option is required if you are running on [IBM Quantum](https://quantum-computing.ibm.com/). However, if you are running on [IBM Cloud](https://cloud.ibm.com/quantum), you can choose not to specify the backend, in which case the least busy backend is used.
#
# You can find more details in [the Sampler API reference](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.Sampler.html).
#
# Example:
# +
from qiskit_ibm_runtime import QiskitRuntimeService, Sampler
from qiskit import QuantumCircuit
service = QiskitRuntimeService()
bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)
bell.measure_all()
# -
# ## Write to & read from a session
#
# Running a job and returning the results are done by writing to and reading from the session. The session closes when the code exits the `with` block.
#
#
# ### Run the job & print results
#
# Run the job, specifying your previously defined inputs and options. In this simple example, there is only one circuit and it does not have parameters.
#
# In each call, you will use `circuits` to specify which circuit to run and, if applicable, `parameter_values` specifies which parameter to use with the specified circuit.
#
# In this example, we specified one circuit, `circuits=bell`, and we asked for the result for running the first (and only) circuit: `circuits=[0]`.
#
# As you will see in later examples, if we had specified multiple circuits when initializing the session, such as `circuits=[pqc, pqc2]`, we could specify `circuits=[1]` or `circuits=[pqc2]` in each call to run the `pqc2` circuit.
# executes a Bell circuit
with Sampler(circuits=bell, service=service, options={ "backend": "ibmq_qasm_simulator" }) as sampler:
# pass indices of circuits
result = sampler(circuits=[0], shots=1024)
print(result)
# ## Multiple circuit example
#
# In this example, we specify three circuits, but they have no parameters:
# +
from qiskit_ibm_runtime import QiskitRuntimeService, Sampler
from qiskit import QuantumCircuit
service = QiskitRuntimeService()
bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)
bell.measure_all()
# executes three Bell circuits
with Sampler(circuits=[bell]*3, service=service, options={ "backend": "ibmq_qasm_simulator" }) as sampler:
# alternatively you can also pass circuits as objects
result = sampler(circuits=[bell]*3)
print(result)
# -
# ## Multiple parameterized circuits example
#
# In this example, we run multiple parameterized circuits. When it is run, the line `result = sampler(circuits=[0, 0, 1], parameter_values=[theta1, theta2, theta3])` specifies which parameter set to send to each circuit.
#
# In our example, the parameter set `theta1` is sent to the first circuit, `theta2` is also sent to the first circuit, and `theta3` is sent to the second circuit.
# +
from qiskit_ibm_runtime import QiskitRuntimeService, Sampler
from qiskit import QuantumCircuit
from qiskit.circuit.library import RealAmplitudes
service = QiskitRuntimeService()
# parameterized circuit
pqc = RealAmplitudes(num_qubits=2, reps=2)
pqc.measure_all()
pqc2 = RealAmplitudes(num_qubits=2, reps=3)
pqc2.measure_all()
theta1 = [0, 1, 1, 2, 3, 5]
theta2 = [1, 2, 3, 4, 5, 6]
theta3 = [0, 1, 2, 3, 4, 5, 6, 7]
with Sampler(circuits=[pqc, pqc2], service=service, options={ "backend": "ibmq_qasm_simulator" }) as sampler:
result = sampler(circuits=[0, 0, 1], parameter_values=[theta1, theta2, theta3])
print(result)
# -
# ### Result
#
# The results align with the parameter - circuit pairs specified previously. For example, the first result (`{'11': 0.42578125, '00': 0.14453125, '10': 0.0888671875, '01': 0.3408203125}`) is the output of the parameter labeled `theta` being sent to the first circuit.
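# As a plain-Python illustration (the strings below are labels standing in for the theta lists, not actual Qiskit objects), the `circuits=[0, 0, 1]` index list pairs with the three parameter sets like this:

```python
# Hypothetical sketch of how circuit indices pair with parameter sets
circuit_indices = [0, 0, 1]                      # pqc, pqc, pqc2
parameter_sets = ["theta1", "theta2", "theta3"]  # labels only, not real values
pairs = list(zip(circuit_indices, parameter_sets))
print(pairs)  # -> [(0, 'theta1'), (0, 'theta2'), (1, 'theta3')]
```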
| docs/tutorials/how-to-getting-started-with-sampler.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Scikits ODE solver
#
# In this notebook, we show some examples of solving an ODE model using the Scikits ODE solver, which interfaces with the [SUNDIALS](https://computation.llnl.gov/projects/sundials) library via the [scikits-odes](https://scikits-odes.readthedocs.io/en/latest/) Python interface.
# +
# Setup
import pybamm
import tests
import numpy as np
import os
import matplotlib.pyplot as plt
from pprint import pprint
os.chdir(pybamm.__path__[0]+'/..')
# Create solver
ode_solver = pybamm.ScikitsOdeSolver()
# -
# ## Integrating ODEs
# In the simplest case, the `integrate` method of the ODE solver needs to be passed a function that returns `dydt` at `(t,y)`, initial conditions `y0`, and a time `t_eval` at which to return the solution:
# +
def exponential_decay(t, y):
return np.array([-y[0], - (1.0 + y[0]) * y[1]])
# Solve
y0 = np.array([2., 1.])
t_eval = np.linspace(0, 5, 10)
solution = ode_solver.integrate(exponential_decay, y0, t_eval)
# Plot
def plot(t_sol, y_sol):
t_fine = np.linspace(0,t_eval[-1],1000)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,4))
ax1.plot(t_fine, 2 * np.exp(-t_fine), t_sol, y_sol[0], "o")
ax1.set_xlabel("t")
ax1.legend(["2*exp(-t)", "y_sol[0]"], loc="best")
ax2.plot(t_fine, np.exp(2 * np.exp(-t_fine) - t_fine - 2), t_sol, y_sol[1], "o")
ax2.set_xlabel("t")
ax2.legend(["exp(2*exp(-t) - t - 2)", "y_sol[1]"], loc="best")
plt.tight_layout()
plt.show()
plot(solution.t, solution.y)
# -
# We can also provide the Jacobian
# +
def jacobian(t, y):
return np.array([[-1.0, 0.0], [-y[1], -(1.0 + y[0])]])
solution = ode_solver.integrate(exponential_decay, y0, t_eval, jacobian=jacobian)
plot(solution.t, solution.y)
# -
# It is also possible to provide a mass matrix to the `integrate` method by using the keyword argument `mass_matrix`, but this is currently not used by the Scikits ODE solver.
#
# Finally, we can specify events at which the solver should terminate
# +
def y0_equal_1(t, y):
return y[0] - 1
# Solve
t_eval = np.linspace(0, 1, 10)
solution = ode_solver.integrate(exponential_decay, y0, t_eval, events=[y0_equal_1])
# Plot
t_fine = np.linspace(0,t_eval[-1],1000)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,4))
ax1.plot(t_fine, 2 * np.exp(-t_fine), solution.t, solution.y[0], "o", t_fine, np.ones_like(t_fine), "k")
ax1.set_xlabel("t")
ax1.legend(["2*exp(-t)", "y_sol[0]", "y=1"], loc="best")
ax2.plot(t_fine, np.exp(2 * np.exp(-t_fine) - t_fine - 2), solution.t, solution.y[1], "o")
ax2.set_xlabel("t")
ax2.legend(["exp(2*exp(-t) - t - 2)", "y_sol[1]"], loc="best")
plt.tight_layout()
plt.show()
# -
# ## Solving a model
# The `solve` method is common to all ODE solvers. It takes a model, which contains all of the above information (derivatives function, initial conditions and optionally jacobian, mass matrix, events), and a time to evaluate `t_eval`, and calls `integrate` to solve this model.
# +
# Create model
model = pybamm.BaseModel()
u = pybamm.Variable("u")
v = pybamm.Variable("v")
model.rhs = {u: -v, v: u}
model.initial_conditions = {u: 2, v: 1}
model.events['v=-2'] = v + 2
model.variables = {"u": u, "v": v}
# Discretise using default discretisation
disc = pybamm.Discretisation()
disc.process_model(model)
# Solve ########################
t_eval = np.linspace(0, 5, 30)
solution = ode_solver.solve(model, t_eval)
################################
# Post-process, so that u and v can be called at any time t (using interpolation)
t_sol, y_sol = solution.t, solution.y
u = pybamm.ProcessedVariable(model.variables["u"], t_sol, y_sol)
v = pybamm.ProcessedVariable(model.variables["v"], t_sol, y_sol)
# Plot
t_fine = np.linspace(0,t_eval[-1],1000)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,4))
ax1.plot(t_fine, 2 * np.cos(t_fine) - np.sin(t_fine), t_sol, u(t_sol), "o")
ax1.set_xlabel("t")
ax1.legend(["2*cos(t) - sin(t)", "u"], loc="best")
ax2.plot(t_fine, 2 * np.sin(t_fine) + np.cos(t_fine), t_sol, v(t_sol), "o", t_fine, -2 * np.ones_like(t_fine), "k")
ax2.set_xlabel("t")
ax2.legend(["2*sin(t) + cos(t)", "v", "v = -2"], loc="best")
plt.tight_layout()
plt.show()
# -
# Note that the discretisation or solver will have created the mass matrix and jacobian algorithmically, using the expression tree, so we do not need to calculate and input these manually.
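# Independently of PyBaMM, a hand-written Jacobian like the one above can be sanity-checked against a forward finite-difference approximation. A minimal NumPy sketch for the exponential-decay system (note that for f2 = -(1+y0)*y1 the partials are df2/dy0 = -y1 and df2/dy1 = -(1+y0)):

```python
import numpy as np

def exponential_decay(t, y):
    return np.array([-y[0], -(1.0 + y[0]) * y[1]])

def jacobian(t, y):
    return np.array([[-1.0, 0.0], [-y[1], -(1.0 + y[0])]])

y = np.array([2.0, 1.0])
eps = 1e-6
fd = np.zeros((2, 2))
for j in range(2):
    dy = np.zeros(2)
    dy[j] = eps
    # forward-difference approximation of column j of the Jacobian
    fd[:, j] = (exponential_decay(0.0, y + dy) - exponential_decay(0.0, y)) / eps
```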
| examples/notebooks/solvers/scikits-ode-solver.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Integer Program Problem
'''
here is an example to solve the following problem:
max z = x[0]**2 + x[1]**2 + 3*x[2]**2 + 4*x[3]**2 + 2*x[4]**2 - 8*x[0] - 2*x[1] - 3*x[2] - x[3] - 2*x[4]
s.t x in bounds
A_ub * x <= b_ub
here x is an array of integers.
'''
import numpy as np
A_ub = [[1, 1, 1, 1, 1],
[1, 2, 2, 1, 6],
[2, 1, 6, 0, 0],
[0, 0, 1, 1, 5]]
b_ub = [400, 800, 200, 200]
bounds = [(0, 99), (0, 99), (0, 99), (0, 99), (0, 99)]  # one (lower, upper) pair per variable; x has five entries
def z(x):
return x[0]**2 + x[1]**2 + 3*x[2]**2 + 4*x[3]**2 + 2*x[4]**2 - 8*x[0] - 2*x[1] - 3*x[2] - x[3] - 2*x[4]
# +
# Monte Carlo Method
x0 = []
z0 = 0
for i in range(int(1000000)):
x = np.random.randint(low=0, high=100, size=5, dtype=np.int64)  # high is exclusive, so 100 allows values up to 99
if(np.max(np.matmul(A_ub, x)-b_ub) > 0):
continue
tempz = z(x)
if tempz > z0:
z0 = tempz
x0 = x
print('The best z is :', z0)
print('The according x is: ', x0)
# +
from pulp import *
# 1. Define the problem
prob = LpProblem("Blending Problem", LpMinimize)
# 2. Create the variables
x1 = LpVariable("ChickenPercent", 0, None, LpInteger)
x2 = LpVariable("BeefPercent", 0)
# 3. Set the objective function
prob += 0.013*x1 + 0.008*x2, "Total Cost of Ingredients per can"
# 4. Add the constraints
prob += x1 + x2 == 100, "PercentagesSum"
prob += 0.100*x1 + 0.200*x2 >= 8.0, "ProteinRequirement"
prob += 0.080*x1 + 0.100*x2 >= 6.0, "FatRequirement"
prob += 0.001*x1 + 0.005*x2 <= 2.0, "FibreRequirement"
prob += 0.002*x1 + 0.005*x2 <= 0.4, "SaltRequirement"
# 5. Solve
prob.solve()
# 6. Print the solution status
print("Status:", LpStatus[prob.status])
# 7. Print the optimal value of each variable
for v in prob.variables():
print(v.name, "=", v.varValue)
# 8. Print the objective value of the optimal solution
print("Total Cost of Ingredients per can = ", value(prob.objective))
# +
# Example 2.8
from pulp import *
# 1. Define the problem
prob = LpProblem("MIP", LpMinimize)
# 2. Create the variables
x = LpVariable.dicts(name="x", indexs=[1, 2], lowBound=0, cat='Continuous')
x3 = LpVariable(name='x3', lowBound=0, upBound=1, cat='Integer')
x[2].setInitialValue(5.5)
x3.setInitialValue(1)
# 3. Set the objective function
prob += -3*x[1]-2*x[2]-x3
# 4. Add the constraints
prob += x[1] + x[2] + x3 <= 7
prob += 4*x[1] + 2*x[2] + x3 == 12
# 5. Solve
prob.solve()
# 6. Print the solution status
print("Status:", LpStatus[prob.status])
# 7. Print the optimal value of each variable
for v in prob.variables():
print(v.name, "=", v.varValue)
# 8. Print the objective value of the optimal solution
print("Objective value = ", value(prob.objective))
| .ipynb_checkpoints/02 整数规划-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
print(sys.version)
#matplotlib
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
matplotlib.rcParams.update({'font.size': 18})
# %config InlineBackend.rc = {}
# %matplotlib inline
import numpy as np
import pickle
from astropy.io import fits
# %load_ext autoreload
# %autoreload 2
# %reload_ext autoreload
import Zernike_Cutting_Module
from Zernike_Cutting_Module import *
# -
# make notebook nice and wide to fill the entire screen
from IPython.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
# replace the names here
DATA_FOLDER_FOR_pfsDetectorMap='/Users/nevencaplar/Documents/PFS/ReducedData/Data_Jun_7_2020/'
DATA_FOLDER_FOR_poststamps='/Users/nevencaplar/Documents/PFS/ReducedData/Data_Jun_7_2020/'
# +
arc='HgAr'
visit_0='021604'
# import the pfsDetectorMap
pfsDetectorMap=fits.open(DATA_FOLDER_FOR_pfsDetectorMap+'pfsDetectorMap-021604-r1.fits')
# creates the pandas dataframe Zernike_info_df, which we will use for cutting the data
# inputs are:
# 1. DetectorMap, 2. arc used, 3. where to export created poststamps, 4. the visit name of the input
create_Zernike_info_df(pfsDetectorMap,arc,DATA_FOLDER_FOR_poststamps,visit_0)
Zernike_info_df_HgAr_021604=np.load(DATA_FOLDER_FOR_poststamps+'Dataframes/Zernike_info_df_'+arc+'_'+visit_0,allow_pickle=True)
# -
# lets see where the positions of our poststamps are
plt.figure(figsize=(10,10))
plt.xlabel('x position')
plt.ylabel('y positions ')
plt.scatter(Zernike_info_df_HgAr_021604['xc'].values,Zernike_info_df_HgAr_021604['yc'].values,s=100)
# +
DATA_FOLDER_FOR_detrended_data='/Volumes/Saturn_USA 1/PFS/ReducedData/May2020/rerun_1/detrend/calExp/2020-05-01/'
# first number of the visit
run_0=4440
# number of images per a single configuration
number_of_images=6
# Subaru (SUB) or LAM
SUB_LAM='SUB'
#should we subtract continuum
subtract_continuum=False
# dither (either None, or 4 or 9)
dither=4
# verbosity setting (0=no output, 1=full output)
verbosity=1
# save the outputs
save=True
# use_previous_full_stacked_images if they have been made - if you are rerunning this routine this will make it go quicker
# by reusing previously stacked images
use_previous_full_stacked_images=True
Zernike_cutting_Jun_2020=Zernike_cutting(DATA_FOLDER=DATA_FOLDER_FOR_detrended_data,run_0=run_0,number_of_images=number_of_images,SUB_LAM=SUB_LAM,\
subtract_continuum=subtract_continuum,Zernike_info_df=Zernike_info_df_HgAr_021604,dither=dither,\
verbosity=verbosity,save=True,use_previous_full_stacked_images=use_previous_full_stacked_images)
Zernike_cutting_Jun_2020.create_poststamps()
# -
# loads the information about the dithering quality
pos_4_Subaru_quantiles=np.load(DATA_FOLDER_FOR_poststamps+'Stamps_cleaned/sci'+str(run_0)+'_pos_'+str(dither)+'_overview_quantiles.npy')
create_dither_plot([pos_4_Subaru_quantiles],['Subaru_4_data'])
| notebooks/Example_of_using_Zernike_Cutting_Module.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from scipy import optimize
import pandas as pd
import numpy as np
from pandas import DataFrame
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.tree import DecisionTreeClassifier
import matplotlib.pyplot as plt
# %matplotlib inline
#step1 read the filein csv format
filename = 'diabetes.csv'
data = pd.read_csv(filename)
#print (data.shape)
print (data.describe())
# +
# function to count rows where the column value is below a threshold
def chkColumnForVal(col_name,val):
print (col_name)
rowcnt=0
out_array=[]
for t in df[col_name]:
if(t<val):
out_array.append(rowcnt)
rowcnt=rowcnt+1
return len(out_array)
#function to find mean and mode
def cal_mmm(col_name):
mean = df[col_name].mean()
mode = df[col_name].mode()
#median = df[col_name].median
mmm_array=[mean,mode]
return mmm_array
# +
#step2 clean the data (categorize the continuous variables)
#print (data.head(10))
df = pd.read_csv('diabetes.csv', header = 0, sep = ',', index_col = None)  # DataFrame.from_csv was removed in newer pandas
#print("variance: ",df.var())
#print("std: ",df.std())
print (df.head(5))
# +
#calculate means,median,mode
#print("mmm_Glucose", cal_mmm("Glucose")[1][0])
# Zero Replacement
df['Glucose']=df.Glucose.mask(data.Glucose == 0,cal_mmm("Glucose")[0])
df['BloodPressure']=df.BloodPressure.mask(data.BloodPressure == 0,cal_mmm("BloodPressure")[0])
df['SkinThickness']=df.SkinThickness.mask(data.SkinThickness == 0,cal_mmm("SkinThickness")[0])
df['Insulin']=df.Insulin.mask(data.Insulin == 0,cal_mmm("Insulin")[0])
df['BMI']=df.BMI.mask(data.BMI == 0,cal_mmm("BMI")[0])
df['DiabetesPedigreeFunction']=df.DiabetesPedigreeFunction.mask(data.DiabetesPedigreeFunction == 0,cal_mmm("DiabetesPedigreeFunction")[0])
print (df.head(5))
# +
#DataVisualization
filt_df = df[['SkinThickness','Insulin']]
#filt_df = df[['Glucose','BloodPressure','SkinThickness','Insulin','BMI','DiabetesPedigreeFunction']]
#print(filt_df.head(10))
df.hist(figsize=(10,8))
# -
df.plot(kind= 'box' , subplots=True, layout=(3,3), sharex=False, sharey=False, figsize=(10,8))
# +
#print (data.describe())
#Outlier removal & Visualization
low = .1
high = .9
quant_df = filt_df.quantile([low, high])
print(quant_df)
filt_df = filt_df.apply(lambda x: x[(x>quant_df.loc[low,x.name]) & (x < quant_df.loc[high,x.name])], axis=0)
#filt_df.dropna(axis=0, how='any',inplace=True)
print("*******after outlier removal*********")
#filt_df.describe()
#df['Glucose']=filt_df['Glucose']
#df['BloodPressure']=filt_df['BloodPressure']
df['SkinThickness']=filt_df['SkinThickness']
df['Insulin']=filt_df['Insulin']
#df['BMI']=filt_df['BMI']
#df['DiabetesPedigreeFunction']=filt_df['DiabetesPedigreeFunction']
df.dropna(axis=0, how='any',inplace=True)
df.describe()
#df.hist(figsize=(10,8))
#df.hist(figsize=(10,8))
#from scipy import stats
#df[(np.abs(stats.zscore(df)) < 1.5).all(axis=1)]
#df[np.abs(df.Glucose-df.Glucose.mean())<=(1.5*df.Glucose.std())]
#df[np.abs(df.BloodPressure-df.BloodPressure.mean())<=(3*df.BloodPressure.std())]
#df[np.abs(df.SkinThickness-df.SkinThickness.mean())<=(3*df.SkinThickness.std())]
#df[np.abs(df.Insulin-df.Insulin.mean())<=(3*df.Insulin.std())]
#df[np.abs(df.BMI-df.BMI.mean())<=(1.5*df.BMI.std())]
#df[np.abs(df.DiabetesPedigreeFunction-df.DiabetesPedigreeFunction.mean())<=(3*df.DiabetesPedigreeFunction.std())]
#df.hist(figsize=(10,8))
#chkColumnForVal("BMI",10)
# -
df.plot(kind= 'box' , subplots=True, layout=(3,3), sharex=False, sharey=False, figsize=(10,8))
#Categorise continuous variables
#Pregnancies
'''
bins_Pregnancies=3
df["Pregnancies"] = pd.cut(df.Pregnancies,bins_Pregnancies,labels=False)
#labels_Glucose = ["NorGlucose","MedGlucose","HigGlucose"]
#pd.cut([5,139,140,141,145,199,200,201],bins_Glucose,labels=labels_Glucose)
#Glucose- (0,139], (139,199] , (199,1000]
bins_Glucose = [0.0,139.0,199.0,1000.0]
df["Glucose"] = pd.cut(df.Glucose,bins_Glucose,labels=False)
#BP-(0,59], (59,90] , (90,200] or <60, 60-90, >90
bins_BP = [0.00,59.00,90.00,200.00]
df["BloodPressure"] = pd.cut(df.BloodPressure,bins_BP,labels=False)
#SkinThickness -(0,23],(23,200]
bins_SkinThickness = [0.0,23.0,200.0]
df["SkinThickness"] = pd.cut(df.SkinThickness,bins_SkinThickness,labels=False)
#Insulin -(0,15],(15,166),(166,1000]
bins_Insulin=[0.0,15.0,166.0,1000.0]
df["Insulin"] = pd.cut(df.Insulin,bins_Insulin,labels=False)
#BMI - (0,18.4], (18.4,24], (24,29], (29,100]
bins_BMI=(0.0,18.4,24.0,29.0,100.0)
df["BMI"] = pd.cut(df.BMI,bins_BMI,labels=False)
#DiabetesPedigreeFunction use equidistant bins
bins_DPF=3
df["DiabetesPedigreeFunction"] = pd.cut(df.DiabetesPedigreeFunction,bins_DPF,labels=False)
#Age (20,44],(44,64],(64,100]
bins_Age=(20.0,44.0,64.0,100.0)
df["Age"] = pd.cut(df.Age,bins_Age,labels=False)
print(df.head(20))
'''
#step3 divide the dataset into training (60%) and testing (40%)
train, test = train_test_split(df, test_size = 0.4, random_state=30)
target = train["Outcome"]
feature = train[train.columns[0:8]]
feat_names = train.columns[0:8]
target_classes = ['0','1']
print(test)
# +
#step4 use training dataset to apply algorithm
import seaborn as sns
model = DecisionTreeClassifier(max_depth=4, random_state=0)
tree_= model.fit(feature,target)
test_input=test[test.columns[0:8]]
expected = test["Outcome"]
#print("*******************Input******************")
#print(test_input.head(2))
#print("*******************Expected******************")
#print(expected.head(2))
predicted = model.predict(test_input)
print(metrics.classification_report(expected, predicted))
conf = metrics.confusion_matrix(expected, predicted)
print(conf)
print("Decision Tree accuracy: ",model.score(test_input,expected))
dtreescore = model.score(test_input,expected)
label = ["0","1"]
sns.heatmap(conf, annot=True, xticklabels=label, yticklabels=label)
#Feature Importance DecisionTreeClassifier
importance = model.feature_importances_
indices = np.argsort(importance)[::-1]
print("DecisionTree Feature ranking:")
for f in range(feature.shape[1]):
print("%d. feature %s (%f)" % (f + 1, feat_names[indices[f]], importance[indices[f]]))
plt.figure(figsize=(15,5))
plt.title("DecisionTree Feature importances")
plt.bar(range(feature.shape[1]), importance[indices], color="y", align="center")
plt.xticks(range(feature.shape[1]), feat_names[indices])
plt.xlim([-1, feature.shape[1]])
plt.show()
# -
#KNN
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=21)
neigh.fit(feature,target)
knnpredicted = neigh.predict(test_input)
print(metrics.classification_report(expected, knnpredicted))
print(metrics.confusion_matrix(expected, knnpredicted))
print("KNN accuracy: ",neigh.score(test_input,expected))
knnscore=neigh.score(test_input,expected)
# +
names_ = []
results_ = []
results_.append(dtreescore)
results_.append(knnscore)
names_.append("DT")
names_.append("KNN")
#ax.set_xticklabels(names)
res = pd.DataFrame()
res['y']=results_
res['x']=names_
ax = sns.boxplot(x='x',y='y',data=res)
# +
import graphviz
import pydotplus
from IPython.display import Image
from sklearn.tree import export_graphviz
from io import StringIO  # sklearn.externals.six was removed in newer scikit-learn
dot_data=StringIO()
dot_data = export_graphviz(model, out_file = None, feature_names=feat_names, class_names=target_classes,
filled=True, rounded=True, special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data)
print(dot_data)
Image(graph.create_png())
#graph.write_pdf("diabetes.pdf")
# +
#Evaluation DecisionTreeClassifier
from sklearn.metrics import roc_curve, auc
import random
fpr,tpr,thres = roc_curve(expected, predicted)
roc_auc = auc(fpr, tpr)
plt.title('DecisionTreeClassifier-Receiver Operating Characteristic Test Data')
plt.plot(fpr, tpr, color='green', lw=2, label='DecisionTree ROC curve (area = %0.2f)' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
# -
#KNeighborsClassifier-ROC curve
kfpr,ktpr,kthres = roc_curve(expected, knnpredicted)
kroc_auc = auc(kfpr, ktpr)
plt.title('KNeighborsClassifier- Receiver Operating Characteristic')
plt.plot(kfpr, ktpr, color='darkorange', lw=2, label='KNeighbors ROC curve (area = %0.2f)' % kroc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
| diabetes_analysis_niharika_final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # Lab12: Data Analysis in Python
#
# ## load data into pandas.dataframe
#
import pandas
# +
df = pandas.read_excel('s3://towner-ia241-2021spring/house_price.xls')
df[:10]
# -
# ## 2.1 Unit Price
# +
df['unit_price'] = df ['price']/df['area']
df[:10]
# -
# ## 2.2 House Type
#
df['house_type'].value_counts()
# ## 2.3 Average Price, More than 2 Bath
prc_more_2_bath = df.loc[df['bathroom']>2]['price']
print('avg price of houses with more than 2 bathrooms is ${}'.format(prc_more_2_bath.mean()))
# ## 2.4 Mean /Median Unit Price
#
print('mean unit price is ${}'.format(df['unit_price'].mean()))
print('median unit price is ${}'.format(df['unit_price'].median()))
# ## 2.5 Average Price Per House Type
#
df.groupby('house_type').mean()['price']
# ## 2.6 Predict Price by House Area
from scipy import stats
result = stats.linregress(df['area'],df['price'])
print('slope is {}'.format(result.slope))
print('intercept is {}'.format(result.intercept))
print('r squared is {}'.format(result.rvalue**2))
print('pvalue is {}'.format(result.pvalue))
# ## 2.7 Predict Price of House w/ 2,000 sqft
print('price of a house with {} sqft is ${}'.format(2000, 2000*result.slope+result.intercept))
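# The prediction formula `price = area * slope + intercept` can be illustrated with made-up coefficients (the slope and intercept below are hypothetical, not the fitted values):

```python
# Hypothetical slope/intercept purely to illustrate the linear prediction
slope, intercept = 120.0, 15000.0
area = 2000
price = area * slope + intercept
print(price)  # -> 255000.0
```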
| Lab12.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Logistic Regression with PySpark
# This notebook demonstrates how to train and measure a logistic regression model with PySpark.
#
# * Method: [Logistic Regression](https://spark.apache.org/docs/2.2.0/mllib-linear-methods.html#logistic-regression)
# * Dataset: Spark MLlib Sample LibSVM Data
# ## Imports
# +
from os import environ
# Set SPARK_HOME
environ["SPARK_HOME"] = "/home/students/spark-2.2.0"
import findspark
findspark.init()
import numpy as np
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.ml.classification import LogisticRegression
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# ## Get Some Context
# Create a SparkContext and a SQLContext context to use
sc = SparkContext(appName="Logistic Regression with Spark")
sqlContext = SQLContext(sc)
# ## Load and Prepare the Data
DATA_FILE = "/home/students/data/mllib/sample_libsvm_data.txt"
data = sqlContext.read.format("libsvm").load(DATA_FILE)
# View one of the records
data.take(1)
# Create train and test datasets
splits = data.randomSplit([0.8, 0.2], 42)
train = splits[0]
test = splits[1]
# ## Fit a Logistic Regression Model
#
# Arguments:
# * maxIter: max number of iterations
# * regParam: regularization parameter
# * elasticNetParam: ElasticNet mixing param
# * 1 = L1 Regularization (LASSO)
# * 0 = L2 Regularization (Ridge)
# * Between 0 and 1 = ElasticNet (L1 + L2)
lr = LogisticRegression(maxIter=10,
regParam=0.3,
elasticNetParam=0.8)
lr_model = lr.fit(train)
# Show the intercept
print("Intercept: " + str(lr_model.intercept))
# ## Create Predictions
# Create the predictions
predictions = lr_model.transform(test)
predictions.show(5)
# +
# Plot the actuals versus predictions
actuals = predictions.select('label').collect()
predictions = predictions.select('prediction').collect()
fig = plt.figure(figsize=(10,5))
plt.scatter(actuals, predictions)
plt.xlabel("Actuals")
plt.ylabel("Predictions")
plt.title("Actuals vs. Predictions")
plt.show()
# -
# ## Model Evaluation
# Create the summary
metrics = lr_model.summary
# ### Area Under ROC
#
# A measure of how well the classifier can distinguish between the two groups in a binary classification.
#
# * .90-1 = excellent (A)
# * .80-.90 = good (B)
# * .70-.80 = fair (C)
# * .60-.70 = poor (D)
# * .50-.60 = fail (F)
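# For intuition, the area under a ROC curve is the integral of TPR over FPR. A toy example with hypothetical ROC points, using the trapezoidal rule:

```python
import numpy as np

# Hypothetical ROC points (a ROC curve starts at (0, 0) and ends at (1, 1))
fpr = np.array([0.0, 0.2, 0.5, 1.0])
tpr = np.array([0.0, 0.6, 0.8, 1.0])
auc = np.trapz(tpr, fpr)  # trapezoidal-rule area under the curve, about 0.72
```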
# Area under the ROC
print("Area Under ROC = %.2f" % metrics.areaUnderROC)
# ## F-Measure (F1)
#
# A measure of a test's accuracy that considers both the precision p and the recall r of the test to compute the score.
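# The F-Measure combines precision and recall as their harmonic mean, F1 = 2*p*r/(p+r); a minimal sketch:

```python
def f1_score(p, r):
    # harmonic mean of precision p and recall r
    return 2 * p * r / (p + r)

print(f1_score(0.5, 0.5))  # -> 0.5 (equal precision and recall give F1 equal to both)
print(f1_score(1.0, 0.5))  # skewed precision/recall pulls F1 toward the smaller value
```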
# Show all F-Measure scores
metrics.fMeasureByThreshold.show()
# Determine the best threshold to maximize the F-Measure
f_measure = metrics.fMeasureByThreshold
max_f_measure = f_measure.groupBy().max('F-Measure').select('max(F-Measure)').head()
best_threshold = f_measure.where(f_measure['F-Measure'] == max_f_measure['max(F-Measure)']) \
.select('threshold').head()['threshold']
print("Best Threshold: %0.3f" % best_threshold)
# ## Use the New Threshold
# +
# Create an instance of the model using our new threshold
lr2 = LogisticRegression(maxIter=10,
regParam=0.3,
elasticNetParam=0.8,
threshold=0.594)
# Train the model
lrm2 = lr2.fit(train)
# Create the predictions
p2 = lrm2.transform(test)
# Plot the actuals vs. predicted
a2 = p2.select('label').collect()
pred2 = p2.select('prediction').collect()
fig = plt.figure(figsize=(10,5))
plt.scatter(a2, pred2)
plt.xlabel("Actuals")
plt.ylabel("Predictions")
plt.title("Actuals vs. Predictions")
plt.show()
# -
# New metrics
m2 = lrm2.summary
# Area under the ROC
print("Area Under ROC = %.2f" % m2.areaUnderROC)
# ## Shut it Down
sc.stop()
| code/day_6/3 - Logistic Regression with PySpark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
from deluca.agents import GPC, Adaptive
from deluca.envs import LDS
import jax
import jax.numpy as jnp
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
# +
def loop(context, i):
controller, state, A, B = context
try:
action = controller(state, A, B)
except:
action = controller(state)
state = A @ state + B @ action + np.random.normal(0, 0.2, size=(n,1)) # gaussian noise
if i % (T//2) == 0: # switch system halfway (parentheses matter: % binds tighter than //)
A,B = jnp.array([[1.,1.5], [0,1.]]), jnp.array([[0],[0.9]])
error = jnp.linalg.norm(state)+jnp.linalg.norm(action)
return (controller, state, A, B), error
def get_errs(T, controller, A, B):
state = jnp.zeros((n, 1))
errs = [0.]
for i in tqdm(range(1, T)):
(controller, state, A, B), error = loop((controller, state, A, B), i)
errs.append(error)
return errs
# TODO: need to address problem of LQR with jax.lax.scan
def get_errs_scan(T, controller, A, B):
state = jnp.zeros((n, 1))
xs = jnp.array(jnp.arange(T))
print(type(xs))
_, errs = jax.lax.scan(loop, (controller, state, A, B), xs)
return errs
# -
cummean = lambda x: np.cumsum(x)/(np.ones(T) + np.arange(T))
n, m = 2, 1
T = 600
# +
A,B = jnp.array([[1.,.5], [0,1.]]), jnp.array([[0],[1.2]])
ada = Adaptive(T, base_controller=GPC, A=A, B=B)
ada_errs = get_errs(T, ada, A, B)
print("AdaGPC incurs ", np.mean(ada_errs), " loss under gaussian iid noise")
# ada_errs_scan = get_errs_scan(T, ada, A, B)
# print("AdaGPC with scan incurs ", np.mean(ada_errs_scan), " loss under gaussian iid noise")
plt.title("Instantaneous losses under gaussian iid noise")
plt.plot(cummean(ada_errs), "blue", label = "AdaGPC")
plt.legend();
| examples/agents/adagpc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Random Forest
import numpy as np
import pandas as pd
import sklearn as sk
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn import pipeline,preprocessing,metrics,model_selection,ensemble
my_data = pd.read_csv("choco/f2.csv")
my_data
y= my_data["Rating"]
X = my_data["Coco%"]
my_data
# ##### Splitting the data into train ,test
X_train,X_test,y_train,y_test = sk.model_selection.train_test_split(X,y,train_size = 0.8,test_size = 0.2,random_state = 0)
model = RandomForestRegressor(n_estimators=100,max_depth=5,random_state=0)
model.fit(X_train.values.reshape(-1,1),y_train.values.reshape(-1,1))
# ### We created the model and fitted it with train data
pred = model.predict(X_test.values.reshape(-1,1))
# ### We got the predicted outputs
mae = mean_absolute_error(pred,y_test)
print("The mean absolute error is :",round(mae,2))
# #### A lower MAE indicates a better model fit and more accurate predictions
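# For reference, MAE is simply the mean of the absolute prediction errors; a small NumPy sketch with hypothetical values:

```python
import numpy as np

pred = np.array([3.0, 2.5, 4.0])   # hypothetical predictions
true = np.array([3.5, 2.0, 4.0])   # hypothetical true values
mae = np.mean(np.abs(pred - true)) # (0.5 + 0.5 + 0.0) / 3
```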
| choco_ft.RandomForest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:jcopml]
# language: python
# name: conda-env-jcopml-py
# ---
from luwiji.knn import illustration, demo
demo.knn()
illustration.knn_distance
# ### Other Distance Metric
# https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html
| 04 - KNN & Scikit-learn/Part 1 - K Nearest Neighbor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Iris-Dataset
# 
#
#
# This data set consists of 3 different types of irises’ (Setosa, Versicolour, and Virginica) petal and sepal length, stored in a 150x4 numpy.ndarray
#
# The rows being the samples and the columns being: Sepal Length, Sepal Width, Petal Length and Petal Width.
# ######https://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html
# ######google images
# +
####Imports for running program
# For interacting with data sets.
import pandas as pd
# For plotting scatter plot (seaborn)
import matplotlib.pyplot as plt
# -
# Pandas is an open source, BSD-licensed library providing high-performance,
# easy-to-use data structures and data analysis tools for the Python programming language.
# ######https://pandas.pydata.org/
# Load the iris data set from a URL.
df = pd.read_csv("https://raw.githubusercontent.com/uiuc-cse/data-fa14/gh-pages/data/iris.csv")
print(df)
#printing out just the species
df['species']
#plotting petal_width
df['petal_width'].hist(bins = 30)
#plotting petal_length
df['petal_length'].hist(bins = 30)
#plotting sepal_width
df['sepal_width'].hist(bins = 30)
#plotting sepal_length
df['sepal_length'].hist(bins = 30)
######https://github.com/ianmcloughlin/jupyter-teaching-notebooks/blob/master/pandas-with-iris.ipynb
df.iloc[1:10:2]
# ### Boolean selects
#Getting values from an object with multi-axes selection
#df.loc[row_indexer,column_indexer]
######http://pandas.pydata.org/pandas-docs/stable/indexing.html?highlight=sort
#for each row: True if the species is virginica, otherwise False
df.loc[:, 'species'] == 'virginica'
# ### Plots using Seaborn
#
# Seaborn is a library for making statistical graphics in Python. It is built on top of matplotlib and closely integrated with pandas data structures.
# ######https://seaborn.pydata.org/introduction.html
import seaborn as sns
sns.pairplot(df, hue='species')
# Behind the scenes, seaborn uses matplotlib to draw plots.
# Many tasks can be accomplished with only seaborn functions,
# but further customization might require using matplotlib directly.
| iris-dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Veg ET validation
import pandas as pd
from time import time
import xarray as xr
import numpy as np
def _get_year_month(product, tif):
fn = tif.split('/')[-1]
fn = fn.replace(product,'')
fn = fn.replace('.tif','')
fn = fn.replace('_','')
print(fn)
return fn
def _file_object(bucket_prefix,product_name,year,day):
if product_name == 'NDVI':
decade = str(year)[:3]+'0'
variable_prefix = bucket_prefix + 'NDVI_FORE_SCE_MED/delaware_basin_FS_'
file_object = variable_prefix + str(decade) + '/' + 'FS_{0}_{1}_med_{2}.tif'.format(str(decade), product_name, day)
elif product_name == 'ETo':
decade = str(year)[:3]+'0'
variable_prefix = bucket_prefix +'ETo_Moving_Average_byDOY/'
file_object = variable_prefix + '{0}_{1}/'.format(str(decade), str(int(decade)+10)) + '{0}_DOY{1}.tif'.format(product_name,day)
elif product_name == 'Tasavg' or product_name == 'Tasmax' or product_name == 'Tasmin':
variable_prefix = bucket_prefix + 'Temp/' + product_name + '/'
#variable_prefix = bucket_prefix + 'TempCelsius/' + product_name + '/'
file_object = variable_prefix + str(year) + '/' + '{}_'.format(product_name) + str(year) + day + '.tif'
elif product_name == 'PPT':
variable_prefix = bucket_prefix + product_name + '/'
file_object = variable_prefix + str(year) + '/' + '{}_'.format(product_name) + str(year) + day + '.tif'
else:
        # use the year argument rather than the module-level start_year
        file_object = bucket_prefix + str(year) + '/' + f'{product_name}_' + str(year) + day + '.tif'
return file_object
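As a quick sanity check of the path construction, the `'PPT'` branch above can be mirrored in isolation; the prefix and date below are the same example values used later in this notebook:

```python
def ppt_path(bucket_prefix, year, day):
    # mirrors the 'PPT' branch of _file_object above
    return bucket_prefix + 'PPT/' + str(year) + '/' + 'PPT_' + str(year) + day + '.tif'

path = ppt_path('s3://dev-et-data/in/DelawareRiverBasin/', '1950', '001')
print(path)  # s3://dev-et-data/in/DelawareRiverBasin/PPT/1950/PPT_1950001.tif
```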
def create_s3_list_of_days_start_end(main_bucket_prefix, start_year,start_day, end_year, end_day, product_name):
the_list = []
years = []
for year in (range(int(start_year),int(end_year)+1)):
years.append(year)
if len(years) == 1:
for i in range(int(start_day),int(end_day)):
day = f'{i:03d}'
file_object = _file_object(main_bucket_prefix,product_name,start_year,day)
the_list.append(file_object)
elif len(years) == 2:
for i in range(int(start_day),366):
day = f'{i:03d}'
file_object = _file_object(main_bucket_prefix,product_name,start_year,day)
the_list.append(file_object)
for i in range(1,int(end_day)):
day = f'{i:03d}'
file_object = _file_object(main_bucket_prefix,product_name,end_year,day)
the_list.append(file_object)
else:
for i in range(int(start_day),366):
day = f'{i:03d}'
file_object = _file_object(main_bucket_prefix,product_name,start_year,day)
the_list.append(file_object)
for year in years[1:-1]:
for i in range(1,366):
day = f'{i:03d}'
file_object = _file_object(main_bucket_prefix,product_name,year,day)
the_list.append(file_object)
for i in range(1,int(end_day)):
day = f'{i:03d}'
file_object = _file_object(main_bucket_prefix,product_name,end_year,day)
the_list.append(file_object)
return the_list
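Note that the day ranges above are half-open (`range(start, end)`), so requesting days 1 through 11 yields ten files, 001 to 010. A minimal sketch of the zero-padding pattern used throughout:

```python
def zero_padded_days(start_day, end_day):
    # same f'{i:03d}' day-of-year formatting as in the loops above
    return [f'{i:03d}' for i in range(int(start_day), int(end_day))]

days = zero_padded_days('1', '11')
print(len(days), days[0], days[-1])  # 10 001 010
```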
def xr_build_cube_concat_ds_one(tif_list, product, x, y):
start = time()
my_da_list =[]
year_month_list = []
for tif in tif_list:
#tiffile = 's3://dev-et-data/' + tif
tiffile = tif
print(tiffile)
da = xr.open_rasterio(tiffile)
daSub = da.sel(x=x, y=y, method='nearest')
#da = da.squeeze().drop(labels='band')
#da.name=product
my_da_list.append(daSub)
tnow = time()
elapsed = tnow - start
print(tif, elapsed)
year_month_list.append(_get_year_month(product, tif))
da = xr.concat(my_da_list, dim='band')
da = da.rename({'band':'year_month'})
da = da.assign_coords(year_month=year_month_list)
DS = da.to_dataset(name=product)
return(DS)
main_bucket_prefix='s3://dev-et-data/in/DelawareRiverBasin/'
start_year = '1950'
start_day = '1'
end_year = '1950'
end_day = '11'
x = -75
y = 41
# ## Step 1: Get pixel values for input variables
df_list=[]
for product in ['PPT','Tasavg', 'Tasmin', 'Tasmax', 'NDVI', 'ETo']:
print("==="*30)
print("processing product",product)
tif_list = create_s3_list_of_days_start_end(main_bucket_prefix, start_year,start_day, end_year, end_day, product)
print (tif_list)
ds_pix=xr_build_cube_concat_ds_one(tif_list, product, x, y)
my_index = ds_pix['year_month'].values
my_array = ds_pix[product].values
df = pd.DataFrame(my_array, columns=[product,], index=my_index)
df_list.append(df)
df_reset_list = []
for dframe in df_list:
print (dframe)
df_reset = dframe.set_index(df_list[0].index)
print (df_reset)
df_reset_list.append(df_reset)
df_veget = pd.concat(df_reset_list, axis=1)
df_veget['NDVI'] *= 0.0001
df_veget['Tasavg'] -= 273.15
df_veget['Tasmin'] -= 273.15
df_veget['Tasmax'] -= 273.15
df_veget
for static_product in ['awc', 'por', 'fc', 'intercept', 'water']:
if static_product == 'awc' or static_product == 'por' or static_product == 'fc':
file_object = ['s3://dev-et-data/in/NorthAmerica/Soil/' + '{}_NA_mosaic.tif'.format(static_product)]
elif static_product == 'intercept':
file_object = ['s3://dev-et-data/in/NorthAmerica/Soil/' + 'Intercept2016_nowater_int.tif']
else:
file_object = ['s3://dev-et-data/in/DelawareRiverBasin/' + 'DRB_water_mask_inland.tif']
ds_pix=xr_build_cube_concat_ds_one(file_object, static_product, x, y)
df_veget['{}'.format(static_product)] = ds_pix[static_product].values[0]
print (df_veget)
df_veget
# ## Step 2: Run Veg ET model for a selected pixel
# +
pptcorr = 1
rf_value = 0.167
rf_low_thresh_temp = 0
rf_high_thresh_temp = 6
melt_factor = 0.06
dc_coeff = 0.65
rf_coeff = 0.35
k_factor = 1.25
ndvi_factor = 0.2
water_factor = 0.7
bias_corr = 0.85
alfa_factor = 1.25
df_veget['PPTcorr'] = df_veget['PPT']*pptcorr
df_veget['PPTeff'] = df_veget['PPTcorr']*(1-df_veget['intercept']/100)
df_veget['PPTinter'] = df_veget['PPTcorr']*(df_veget['intercept']/100)
df_veget['Tmin0'] = np.where(df_veget['Tasmin']<0,0,df_veget['Tasmin'])
df_veget['Tmax0'] = np.where(df_veget['Tasmax']<0,0,df_veget['Tasmax'])
rain_frac_conditions = [(df_veget['Tasavg']<=rf_low_thresh_temp),
(df_veget['Tasavg']>=rf_low_thresh_temp)&(df_veget['Tasavg']<=rf_high_thresh_temp),
(df_veget['Tasavg']>=rf_high_thresh_temp)]
rain_frac_values = [0,df_veget['Tasavg']*rf_value,1]
df_veget['rain_frac'] = np.select(rain_frac_conditions,rain_frac_values)
df_veget['melt_rate'] = melt_factor*(df_veget['Tmax0']**2 - df_veget['Tmax0']*df_veget['Tmin0'])
df_veget['snow_melt_rate'] = np.where(df_veget['Tasavg']<0,0,df_veget['melt_rate'])
df_veget['rain']=df_veget['PPTeff']*df_veget['rain_frac']
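One subtlety of `np.select` is that the first matching condition wins, and the conditions above overlap at the 0 and 6 degree boundaries; at exactly 6 °C the middle branch fires and returns 6 × 0.167 ≈ 1.002 rather than 1. A small self-contained check with the same thresholds:

```python
import numpy as np

rf_value = 0.167  # same per-degree fraction as above
tavg = np.array([-5.0, 0.0, 3.0, 6.0, 10.0])
conds = [tavg <= 0,
         (tavg >= 0) & (tavg <= 6),
         tavg >= 6]
vals = [0, tavg * rf_value, 1]
rain_frac = np.select(conds, vals)
# boundaries: 0 deg C -> 0 (first branch wins), 6 deg C -> ~1.002 (middle branch wins)
assert rain_frac[1] == 0.0
assert abs(rain_frac[3] - 1.002) < 1e-9
assert rain_frac[4] == 1.0
```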
# +
def _snow_water_equivalent(rain_frac, PPTeff):
swe_value = (1-rain_frac)*PPTeff
return swe_value
def _snow_melt(melt_rate,swe,snowpack):
if melt_rate <= (swe + snowpack):
snowmelt_value = melt_rate
else:
        snowmelt_value = swe + snowpack  # use the swe argument (swe_value is not defined here)
return snowmelt_value
def _snow_pack(snowpack_prev,swe,snow_melt):
if (snowpack_prev + swe - snow_melt) < 0:
SNOW_pack_value = 0
else:
SNOW_pack_value = snowpack_prev + swe - snow_melt
return SNOW_pack_value
def _runoff(snow_melt,awc,swi):
if snow_melt<awc:
rf_value = 0
else:
rf_value = swi-awc
return rf_value
def _surface_runoff(rf, por,fc,rf_coeff):
if rf <= por - fc:
srf_value = rf*rf_coeff
else:
srf_value = (rf - (por - fc)) + rf_coeff*(por - fc)
return srf_value
def _etasw_calc(k_factor, ndvi, ndvi_factor, eto, bias_corr, swi, awc, water, water_factor, alfa_factor):
etasw1A_value = (k_factor*ndvi+ndvi_factor)*eto*bias_corr
etasw1B_value = (k_factor*ndvi)*eto*bias_corr
if ndvi > 0.4:
etasw1_value = etasw1A_value
else:
etasw1_value = etasw1B_value
etasw2_value = swi/(0.5*awc)*etasw1_value
if swi>0.5*awc:
etasw3_value = etasw1_value
else:
etasw3_value = etasw2_value
if etasw3_value>swi:
etasw4_value = swi
else:
etasw4_value = etasw3_value
if etasw4_value> awc:
etasw5_value = awc
else:
etasw5_value = etasw4_value
etc_value = etasw1A_value
if water == 0:
etasw_value = etasw5_value
else:
etasw_value = water_factor*alfa_factor*bias_corr*eto
if (etc_value - etasw_value)<0:
netet_value = 0
else:
netet_value = etc_value - etasw_value
return [etasw1A_value, etasw1B_value, etasw1_value, etasw2_value, etasw3_value, etasw4_value, etasw5_value, etasw_value, etc_value, netet_value]
def _soil_water_final(swi, awc, etasw5):
    if swi > awc:
        swf_value = awc - etasw5
    elif swi - etasw5 < 0:
        # soil water cannot go negative
        swf_value = 0
    else:
        swf_value = swi - etasw5
    return swf_value
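The piecewise split in `_surface_runoff` can be sanity-checked in isolation: below the `por - fc` threshold only the `rf_coeff` fraction runs off, while above it the excess runs off entirely. The soil numbers below are illustrative values, not real parameters:

```python
def surface_runoff(rf, por, fc, rf_coeff):
    # mirrors _surface_runoff above
    if rf <= por - fc:
        return rf * rf_coeff
    return (rf - (por - fc)) + rf_coeff * (por - fc)

# threshold is por - fc = 0.2 here
assert abs(surface_runoff(0.1, 0.5, 0.3, 0.35) - 0.035) < 1e-9
assert abs(surface_runoff(0.3, 0.5, 0.3, 0.35) - 0.17) < 1e-9
```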
# +
swe_list = []
snowmelt_list = []
snwpk_list = []
swi_list = []
rf_list = []
srf_list = []
dd_list = []
etasw1A_list = []
etasw1B_list = []
etasw1_list = []
etasw2_list = []
etasw3_list = []
etasw4_list = []
etasw5_list = []
etasw_list = []
etc_list = []
netet_list = []
swf_list = []
for index, row in df_veget.iterrows():
if index == df_veget.index[0]:
swe_value = 0
swe_list.append(swe_value)
snowmelt_value = swe_value
snowmelt_list.append(snowmelt_value)
snwpk_value = 0
snwpk_list.append(snwpk_value)
swi_value = 0.5*row['awc']+ row['PPTeff'] + snowmelt_value
swi_list.append(swi_value)
rf_value = _runoff(snowmelt_value,row['awc'],swi_value)
rf_list.append(rf_value)
srf_value = _surface_runoff(rf_value, row['por'],row['fc'],rf_coeff)
srf_list.append(srf_value)
dd_value = rf_value - srf_value
dd_list.append(dd_value)
eta_variables = _etasw_calc(k_factor, row['NDVI'], ndvi_factor, row['ETo'], bias_corr, swi_value, row['awc'], row['water'], water_factor, alfa_factor)
etasw1A_list.append(eta_variables[0])
etasw1B_list.append(eta_variables[1])
etasw1_list.append(eta_variables[2])
etasw2_list.append(eta_variables[3])
etasw3_list.append(eta_variables[4])
etasw4_list.append(eta_variables[5])
etasw5_list.append(eta_variables[6])
etasw_list.append(eta_variables[7])
etc_list.append(eta_variables[8])
netet_list.append(eta_variables[9])
swf_value = _soil_water_final(swi_value,row['awc'],eta_variables[7])
swf_list.append(swf_value)
else:
swe_value = _snow_water_equivalent(row['rain_frac'],row['PPTeff'])
swe_list.append(swe_value)
snowmelt_value = _snow_melt(row['melt_rate'],swe_value,snwpk_list[-1])
snowmelt_list.append(snowmelt_value)
snwpk_value = _snow_pack(snwpk_list[-1],swe_value,snowmelt_value)
snwpk_list.append(snwpk_value)
swi_value = swf_list[-1] + row['rain'] + snowmelt_value
swi_list.append(swi_value)
rf_value = _runoff(snowmelt_value,row['awc'],swi_value)
rf_list.append(rf_value)
srf_value = _surface_runoff(rf_value, row['por'],row['fc'],rf_coeff)
srf_list.append(srf_value)
dd_value = rf_value - srf_value
dd_list.append(dd_value)
eta_variables = _etasw_calc(k_factor, row['NDVI'], ndvi_factor, row['ETo'], bias_corr, swi_value, row['awc'], row['water'], water_factor, alfa_factor)
etasw1A_list.append(eta_variables[0])
etasw1B_list.append(eta_variables[1])
etasw1_list.append(eta_variables[2])
etasw2_list.append(eta_variables[3])
etasw3_list.append(eta_variables[4])
etasw4_list.append(eta_variables[5])
etasw5_list.append(eta_variables[6])
etasw_list.append(eta_variables[7])
etc_list.append(eta_variables[8])
netet_list.append(eta_variables[9])
swf_value = _soil_water_final(swi_value,row['awc'],eta_variables[7])
swf_list.append(swf_value)
df_veget['swe'] = swe_list
df_veget['snowmelt'] = snowmelt_list
df_veget['snwpk'] = snwpk_list
df_veget['swi'] = swi_list
df_veget['rf'] = rf_list
df_veget['srf'] = srf_list
df_veget['dd'] = dd_list
df_veget['etasw1A'] = etasw1A_list
df_veget['etasw1B'] = etasw1B_list
df_veget['etasw1'] = etasw1_list
df_veget['etasw2'] = etasw2_list
df_veget['etasw3'] = etasw3_list
df_veget['etasw4'] = etasw4_list
df_veget['etasw5'] = etasw5_list
df_veget['etasw'] = etasw_list
df_veget['etc'] = etc_list
df_veget['netet'] = netet_list
df_veget['swf'] = swf_list
# -
pd.set_option('display.max_columns', None)
df_veget
# ## Step 3: Sample output data computed in the cloud
output_bucket_prefix='s3://dev-et-data/enduser/DelawareRiverBasin/r_01_29_2021_drb35pct/'
#output_bucket_prefix = 's3://dev-et-data/out/DelawareRiverBasin/Run03_11_2021/run_drbcelsius_5yr_0311_chip39.84N-73.72E_o/'
# +
df_list_cloud=[]
for product_out in ['rain', 'swe', 'snowmelt', 'snwpk','srf', 'dd', 'etasw5', 'etasw', 'netet', 'swf', 'etc']:
print("==="*30)
print("processing product",product_out)
tif_list = create_s3_list_of_days_start_end(output_bucket_prefix, start_year,start_day, end_year, end_day, product_out)
ds_pix=xr_build_cube_concat_ds_one(tif_list, product_out, x, y)
my_index = ds_pix['year_month'].values
my_array = ds_pix[product_out].values
df = pd.DataFrame(my_array, columns=['{}_cloud'.format(product_out),], index=my_index)
df_list_cloud.append(df)
# -
for dframe in df_list_cloud:
print(dframe)
df_veget_cloud = pd.concat(df_list_cloud, axis=1)
df_veget_cloud
df_validation = pd.concat([df_veget,df_veget_cloud], axis=1)
df_validation
# ## Step 4: Visualization of validation results
# ### Import Visualization libraries
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.ticker as mtick
from scipy import stats
import matplotlib.patches as mpatches
# ### Visualize Veg ET input variables
# +
fig, axs = plt.subplots(3, 1, figsize=(15,12))
axs[0].bar(df_validation.index, df_validation["PPT"], color = 'lightskyblue', width = 0.1)
ax0 = axs[0].twinx()
ax0.plot(df_validation.index, df_validation["NDVI"], color = 'seagreen')
axs[0].set_ylabel("PPT, mm")
ax0.set_ylabel("NDVI")
ax0.set_ylim([0,1])
low_threshold = np.array([0 for i in range(len(df_validation))])
axs[1].plot(df_validation.index, low_threshold, '--', color = 'dimgray', linewidth=0.8)
high_threshold = np.array([6 for i in range(len(df_validation))])
axs[1].plot(df_validation.index, high_threshold, '--', color = 'dimgray', linewidth=0.8)
axs[1].plot(df_validation.index, df_validation["Tasmin"], color = 'navy', linewidth=2.5)
axs[1].plot(df_validation.index, df_validation["Tasavg"], color = 'slategray', linewidth=2.5)
axs[1].plot(df_validation.index, df_validation["Tasmax"], color = 'red', linewidth=2.5)
axs[1].set_ylabel("T, deg C")
axs[2].plot(df_validation.index, df_validation["ETo"], color = 'goldenrod')
axs[2].plot(df_validation.index, df_validation["etasw"], color = 'royalblue')
axs[2].set_ylabel("ET, mm")
ppt = mpatches.Patch(color='lightskyblue', label='PPT')
ndvi = mpatches.Patch(color='seagreen', label='NDVI')
tmax = mpatches.Patch(color='red', label='Tmax')
tavg = mpatches.Patch(color='slategray', label='Tavg')
tmin = mpatches.Patch(color='navy', label='Tmin')
eto = mpatches.Patch(color='goldenrod', label='ETo')
eta = mpatches.Patch(color='royalblue', label='ETa')
plt.legend(handles=[ppt, ndvi, tmax, tavg, tmin, eto,eta])
# -
# ### Compare Veg ET output variables computed with data frames vs output variables computed in the cloud
# +
fig, axs = plt.subplots(5, 2, figsize=(20,25))
axs[0, 0].bar(df_validation.index, df_validation["rain"], color = 'skyblue')
axs[0, 0].plot(df_validation.index, df_validation["rain_cloud"], 'ro', color = 'crimson')
axs[0, 0].set_title("Rain amount from precipitation (rain)")
axs[0, 0].set_ylabel("rain, mm/day")
axs[0, 1].bar(df_validation.index, df_validation["swe"], color = 'skyblue')
axs[0, 1].plot(df_validation.index, df_validation["swe_cloud"], 'ro', color = 'crimson')
axs[0, 1].set_title("Snow water equivalent from precipitation (swe)")
axs[0, 1].set_ylabel("swe, mm/day")
axs[1, 0].bar(df_validation.index, df_validation["snowmelt"], color = 'skyblue')
axs[1, 0].plot(df_validation.index, df_validation["snowmelt_cloud"], 'ro', color = 'crimson')
axs[1, 0].set_title("Amount of melted snow (snowmelt)")
axs[1, 0].set_ylabel("snowmelt, mm/day")
axs[1, 1].bar(df_validation.index, df_validation["snwpk"], color = 'skyblue')
axs[1, 1].plot(df_validation.index, df_validation["snwpk_cloud"], 'ro', color = 'crimson')
axs[1, 1].set_title("Snow pack amount (snwpk)")
axs[1, 1].set_ylabel("snwpk, mm/day")
axs[2, 0].bar(df_validation.index, df_validation["srf"], color = 'skyblue')
axs[2, 0].plot(df_validation.index, df_validation["srf_cloud"], 'ro', color = 'crimson')
axs[2, 0].set_title("Surface runoff (srf)")
axs[2, 0].set_ylabel("srf, mm/day")
axs[2, 1].bar(df_validation.index, df_validation["dd"], color = 'skyblue')
axs[2, 1].plot(df_validation.index, df_validation["dd_cloud"], 'ro', color = 'crimson')
axs[2, 1].set_title("Deep drainage (dd)")
axs[2, 1].set_ylabel("dd, mm/day")
axs[3, 0].bar(df_validation.index, df_validation["etasw"], color = 'skyblue')
axs[3, 0].plot(df_validation.index, df_validation["etasw_cloud"], 'ro', color = 'crimson')
axs[3, 0].set_title("ETa value (etasw)")
axs[3, 0].set_ylabel("etasw, mm/day")
axs[3, 1].bar(df_validation.index, df_validation["etc"], color = 'skyblue')
axs[3, 1].plot(df_validation.index, df_validation["etc_cloud"], 'ro', color = 'crimson')
axs[3, 1].set_title("Optimal crop ETa value (etc)")
axs[3, 1].set_ylabel("etc, mm/day")
axs[4, 0].bar(df_validation.index, df_validation["netet"], color = 'skyblue')
axs[4, 0].plot(df_validation.index, df_validation["netet_cloud"], 'ro', color = 'crimson')
axs[4, 0].set_title("Additional ETa requirement for optimal crop condition (netet)")
axs[4, 0].set_ylabel("netet, mm/day")
axs[4, 1].plot(df_validation.index, df_validation["swf"], color = 'skyblue')
axs[4, 1].plot(df_validation.index, df_validation["swf_cloud"], 'ro', color = 'crimson')
axs[4, 1].set_title("Final soil water amount at the end of the day (swf)")
axs[4, 1].set_ylabel("swf, mm/m")
manual = mpatches.Patch(color='skyblue', label='manual')
cloud = mpatches.Patch(color='crimson', label='cloud')
plt.legend(handles=[manual,cloud])
# -
| 1_ET/3_Olena_Analyze/VegET_validation-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.5 64-bit
# name: python395jvsc74a57bd04cd7ab41f5fca4b9b44701077e38c5ffd31fe66a6cab21e0214b68d958d0e462
# ---
import models
import numpy as np
import pandas as pd
from numpy import sqrt, exp, pi, power, tanh, vectorize
from scipy.integrate import odeint
from scipy.interpolate import make_interp_spline, interp1d
import matplotlib.pyplot as plt
folder = '/Users/clotilde/OneDrive/Professional/2019-2021_EuroTech/1.Project/2.Step1/3.Data_analyses/99.Resources/Postnova-Tekieh_arousal_model/'
### Read input file
input_file = pd.read_csv(folder+'input_irradiance_mel.csv', sep=";", decimal=",")
#minE = min(min(input_file.irradiance_mel),0.036)
#maxE = max(max(input_file.irradiance_mel), 0.22)
minE = 0
maxE = 1000*0.00080854 # based on the sigmoid shape, it goes to y = 1 around x = 1000
maxE = 1000
time_wake = 8.0 # 6.0
time_sleep = 20.0 #24.0
### Create steps from input irradiance
input_file['hour'] = round(input_file.hours,0)
input_step = input_file[['irradiance_mel','hour']].groupby('hour').mean()
input_step.reset_index(inplace=True)
### Smooth input irradiance with second degree polynomial
x = input_file.hours
a = np.polyfit(x, input_file.irradiance_mel, 2)
input_file['irradiance_mel_smooth'] = a[0] * power(x,2) + a[1] * x + a[2]
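For reference, `np.polyfit` returns coefficients in descending degree order, which is why the smoothed series is rebuilt as `a[0]*x**2 + a[1]*x + a[2]`. On noiseless quadratic data the generating coefficients are recovered exactly (up to floating-point error):

```python
import numpy as np

xs = np.linspace(0.0, 10.0, 50)
ys = 2.0 * xs**2 - 3.0 * xs + 1.0
coeffs = np.polyfit(xs, ys, 2)  # highest degree first
assert np.allclose(coeffs, [2.0, -3.0, 1.0])
```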
output_file = pd.read_csv(folder+"output.csv", sep=";", decimal=",")
n = output_file.shape[0]
output_file['E_mel'] = output_file.I_mel*0.0013262 # convert to irradiance
# +
### Find nearest point in vector
def find_nearest(array, value):
array = np.asarray(array)
idx = (np.abs(array - value)).argmin()
return idx
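`find_nearest` returns the index of the closest entry (ties resolve to the first occurrence, as `argmin` does); for example:

```python
import numpy as np

arr = np.array([0.0, 1.5, 3.0, 4.5])
idx = int(np.abs(arr - 3.2).argmin())  # same logic as find_nearest
print(idx)  # 2, since 3.0 is closest to 3.2
```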
### Return irradiance at time t
def irradiance(t):
# add version here to choose between illuminance and irradiance
t = t/3600
t = t % 24
if ((t < time_wake) or (t > time_sleep)):
E_emel = 0.036 # from the scientist: 27 lux
I_emel = 27
elif ((t < 9.0) or (t > 16.0)):
E_emel = 0.22 # from the scientist: 165 lux
I_emel = 165
else:
### original data
idx = find_nearest(input_file.hours, t)
E_emel = input_file.irradiance_mel[idx]
I_emel = input_file.illuminance_mel[idx]
### smoothed data (second order poly)
#E_emel = input_file.irradiance_mel_smooth[idx]
### aggregated by hour "step" data
# idx = find_nearest(new_input.hour, t)
# E_emel = input_step.irradiance_mel[idx]
# E_emel = 0.22 # step
return E_emel
#return I_emel
irradiance_v = vectorize(irradiance)
# +
def forced_wake(t):
# Testing with forced wake between t1 and t2
if ((t/3600 % 24) >= time_wake and (t/3600 % 24) <= time_sleep):
F_w = 1
else:
F_w = 0
#F_w = 0
return F_w
forced_wake_v = vectorize(forced_wake)
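`forced_wake` depends on the module-level `time_wake` and `time_sleep`; a self-contained sketch with those values passed in makes the wake-window logic easier to test:

```python
import numpy as np

def forced_wake_sketch(t, time_wake=8.0, time_sleep=20.0):
    # hour of day from seconds; forced wake inside [time_wake, time_sleep]
    hour = (t / 3600) % 24
    return 1 if time_wake <= hour <= time_sleep else 0

out = np.vectorize(forced_wake_sketch)(np.array([0.0, 10 * 3600, 25 * 3600]))
# hours 0, 10 and 1 -> asleep, awake, asleep
assert list(out) == [0, 1, 0]
```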
# +
#### Initial Conditions
# [V_v, V_m, H, X, Y, P, Theta_L]
# y0 = [1.5, -15.0, 13.0, 0.04, -1.28, 0.0, 0.0] # initial values from draft
y0 = [ -4.55, -0.07, 13.29, -0.14, -1.07, 0.10, -5.00e-06] # proper initial values after experimentation
# +
### Execute the ODE model
t = np.linspace(0,72*60*60,n)
version_year = '2020'
(sol,temp) = odeint(models.model, y0, t, args = (irradiance, forced_wake, minE, maxE, version_year,), full_output = True)
### Store results
V_v = sol[:, 0]
V_m = sol[:, 1]
H = sol[:, 2]
X = sol[:, 3]
Y = sol[:, 4]
P = sol[:, 5]
Theta_L = sol[:, 6]
t_hours = t/3600
# +
### Plot ODEs
plt.figure(figsize=(10, 10))
plt.subplot(3,2,1)
plt.plot(t_hours, V_v, 'b', label='V_v(t)')
plt.plot(t_hours, V_m, 'g', label='V_m(t)')
plt.legend(loc='best')
plt.xlabel('t')
plt.grid(True)
plt.subplot(3,2,2)
plt.plot(t_hours, X, 'c', label='X(t)')
plt.plot(t_hours, Y, 'm', label='Y(t)')
plt.legend(loc='best')
plt.xlabel('t')
plt.grid(True)
plt.subplot(3,2,3)
plt.plot(t_hours, H, 'r', label='H(t)')
plt.legend(loc='best')
plt.xlabel('t')
plt.grid(True)
plt.subplot(3,2,4)
plt.plot(t_hours, P, 'r', label='P(t)')
plt.legend(loc='best')
plt.xlabel('t')
plt.grid(True)
plt.subplot(3,2,5)
plt.plot(t_hours, Theta_L, 'b', label='Theta_L(t)')
plt.legend(loc='best')
plt.xlabel('t')
plt.grid(True)
plt.tight_layout()
plt.show()
# -
### Compute all the internal functions
IE = irradiance_v(t)
S = models.state_v(V_m)
alpha = models.photoreceptor_conversion_rate_v(IE, S, version_year)
Q_m = models.mean_population_firing_rate_v(V_m)
Q_v = models.mean_population_firing_rate_v(V_v)
C = models.circadian_drive_v(X,Y)
D_v = models.total_sleep_drive_v(H,C)
D_n = models.nonphotic_drive_v(X, S)
D_p = models.photic_drive_v(X, Y, P, alpha)
F_w = forced_wake_v(t)
W = models.wake_effort_v(Q_v, F_w)
Sigmoid = ( models.sigmoid_v(IE) - models.sigmoid_v(minE) ) / ( models.sigmoid_v(maxE) - models.sigmoid_v(minE) )
# +
### Plot internal variables
plt.figure(figsize=(10, 15))
plt.subplot(6,2,1)
plt.plot(t_hours, IE)
plt.xlabel('t')
plt.ylabel('I_mel(t)')
plt.grid(True)
plt.subplot(6,2,2)
plt.plot(t_hours, S)
plt.xlabel('t')
plt.ylabel('S(t)')
plt.grid(True)
plt.subplot(6,2,3)
plt.plot(t_hours, Sigmoid)
plt.xlabel('t')
plt.ylabel('Sigmoid(t)')
plt.grid(True)
plt.subplot(6,2,4)
plt.plot(t_hours, alpha)
plt.xlabel('t')
plt.ylabel('alpha(t)')
plt.grid(True)
plt.subplot(6,2,5)
plt.plot(t_hours, Q_m)
plt.xlabel('t')
plt.ylabel('Q_m(t)')
plt.grid(True)
plt.subplot(6,2,6)
plt.plot(t_hours, Q_v)
plt.xlabel('t')
plt.ylabel('Q_v(t)')
plt.grid(True)
plt.subplot(6,2,7)
plt.plot(t_hours, C)
plt.xlabel('t')
plt.ylabel('C(t)')
plt.grid(True)
plt.subplot(6,2,8)
plt.plot(t_hours, D_v)
plt.xlabel('t')
plt.ylabel('D_v(t)')
plt.grid(True)
plt.subplot(6,2,9)
plt.plot(t_hours, D_n)
plt.xlabel('t')
plt.ylabel('D_n(t)')
plt.grid(True)
plt.subplot(6,2,10)
plt.plot(t_hours, D_p)
plt.xlabel('t')
plt.ylabel('D_p(t)')
plt.grid(True)
plt.subplot(6,2,11)
plt.plot(t_hours, F_w)
plt.xlabel('t')
plt.ylabel('F_w(t)')
plt.grid(True)
plt.subplot(6,2,12)
plt.plot(t_hours, W)
plt.xlabel('t')
plt.ylabel('W(t)')
plt.grid(True)
plt.tight_layout()
plt.show()
# -
AM = models.alertness_measure_v(C, H, Theta_L)
# +
### Plot AM-related variables
plt.figure(figsize=(5, 10))
plt.subplot(5,1,1)
plt.plot(t_hours, C)
plt.xlabel('t')
plt.ylabel('C(t)')
plt.grid(True)
plt.subplot(5,1,2)
plt.plot(t_hours, H)
plt.xlabel('t')
plt.ylabel('H(t)')
plt.grid(True)
plt.subplot(5,1,3)
plt.plot(t_hours, Theta_L)
plt.xlabel('t')
plt.ylabel('Theta_L(t)')
plt.grid(True)
plt.subplot(5,1,4)
plt.plot(t_hours, Sigmoid)
plt.xlabel('t')
plt.ylabel('Sigmoid(t)')
plt.grid(True)
plt.subplot(5,1,5)
plt.plot(output_file.time, output_file.KSS, 'darkorange')
plt.plot(t_hours, AM)
plt.xlabel('t')
plt.ylabel('KSS(t)')
plt.grid(True)
# +
### Plot AM and Irradiance
plt.figure(figsize=(5, 5))
plt.subplot(2,1,1)
plt.plot(output_file.time, output_file.E_mel, 'darkorange')
plt.plot(t_hours, IE)
plt.xlabel('t')
plt.ylabel('Irradiance(t)')
plt.grid(True)
plt.subplot(2,1,2)
plt.plot(output_file.time, output_file.KSS, 'darkorange')
plt.plot(t_hours, AM)
plt.xlabel('t')
plt.ylabel('KSS(t)')
plt.grid(True)
# -
plt.plot(output_file.time, output_file.E_mel, 'darkorange', label='E_mel(t), Patricia')
plt.plot(t_hours, irradiance_v(t), 'b', label='E_mel(t), Victoria')
plt.legend(loc='best')
plt.xlabel('t')
plt.grid()
plt.show()
# +
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.set_xlabel('time (s)')
ax1.set_ylabel('KSS', color=color)
ax1.plot(t_hours, AM, color=color)
ax1.plot(output_file.time, output_file.KSS, color=color, linestyle='dashed')
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax2.set_ylabel('Irradiance', color=color) # we already handled the x-label with ax1
ax2.plot(t_hours, irradiance_v(t), color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.show()
# -
# find some initial conditions
idx = find_nearest(H, 13.3)
print("[ {}, {}, {}, {}, {}, {}, {}]".format(V_v[idx],V_m[idx],H[idx],X[idx],Y[idx],P[idx],Theta_L[idx]))
# find some initial conditions
idx = find_nearest(t/3600, 48)
print("[ {}, {}, {}, {}, {}, {}, {}]".format(V_v[idx],V_m[idx],H[idx],X[idx],Y[idx],P[idx],Theta_L[idx]))
x = np.linspace(0,10000,100)
y = 1/(1 + exp((0.05-x)/223.5) )
plt.plot(x, y)
| run_CP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="TA21Jo5d9SVq" colab_type="text"
#
#
# 
#
# [](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_SIGN_SYMP.ipynb)
#
#
#
# + [markdown] id="CzIdjHkAW8TB" colab_type="text"
# # **Detect signs and symptoms**
# + [markdown] id="6uDmeHEFW7_h" colab_type="text"
# To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens.
# + [markdown] id="wIeCOiJNW-88" colab_type="text"
# ## 1. Colab Setup
# + [markdown] id="HMIDv74CYN0d" colab_type="text"
# Import license keys
# + id="ttHPIV2JXbIM" colab_type="code" colab={}
import os
import json
with open('/content/workshop_license_keys.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['secret']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['JSL_OCR_LICENSE'] = license_keys['JSL_OCR_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
# + [markdown] id="rQtc1CHaYQjU" colab_type="text"
# Install dependencies
# + id="CGJktFHdHL1n" colab_type="code" colab={}
# Install Java
# ! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
# ! java -version
# Install pyspark and SparkNLP
# ! pip install --ignore-installed -q pyspark==2.4.4
# ! python -m pip install --upgrade spark-nlp-jsl==2.5.2 --extra-index-url https://pypi.johnsnowlabs.com/$secret
# ! pip install --ignore-installed -q spark-nlp==2.5.2
# + [markdown] id="Hj5FRDV4YSXN" colab_type="text"
# Import dependencies into Python and start the Spark session
# + id="sw-t1zxlHTB7" colab_type="code" colab={}
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import sparknlp
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
import pyspark.sql.functions as F
builder = SparkSession.builder \
.appName('Spark NLP Licensed') \
.master('local[*]') \
.config('spark.driver.memory', '16G') \
.config('spark.serializer', 'org.apache.spark.serializer.KryoSerializer') \
.config('spark.kryoserializer.buffer.max', '2000M') \
.config('spark.jars.packages', 'com.johnsnowlabs.nlp:spark-nlp_2.11:2.5.2') \
.config('spark.jars', f'https://pypi.johnsnowlabs.com/{secret}/spark-nlp-jsl-2.5.2.jar')
spark = builder.getOrCreate()
# + [markdown] id="9RgiqfX5XDqb" colab_type="text"
# ## 2. Select the NER model and construct the pipeline
# + [markdown] id="5GihZjsj8cwt" colab_type="text"
# Select the NER model - Sign/symptom models: **ner_clinical, ner_jsl**
#
# For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
# + id="kGueQhw-8Zi5" colab_type="code" colab={}
# You can change this to the model you want to use and re-run cells below.
# Sign / symptom models: ner_clinical, ner_jsl
# All these models use the same clinical embeddings.
MODEL_NAME = "ner_clinical"
# + [markdown] id="zweiG2ilZqoR" colab_type="text"
# Create the pipeline
# + id="LLuDz_t40be4" colab_type="code" colab={}
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentence')
tokenizer = Tokenizer()\
.setInputCols(['sentence']) \
.setOutputCol('token')
word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \
.setInputCols(['sentence', 'token']) \
.setOutputCol('embeddings')
clinical_ner = NerDLModel.pretrained(MODEL_NAME, 'en', 'clinical/models') \
.setInputCols(['sentence', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter()\
.setInputCols(['sentence', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
# + [markdown] id="2Y9GpdJhXIpD" colab_type="text"
# ## 3. Create example inputs
# + id="vBOKkB2THdGI" colab_type="code" colab={}
# Enter examples as strings in this array
input_list = [
"""The patient is a 21-day-old Caucasian male here for 2 days of congestion - mom has been suctioning yellow discharge from the patient's nares, plus she has noticed some mild problems with his breathing while feeding (but negative for any perioral cyanosis or retractions). One day ago, mom also noticed a tactile temperature and gave the patient Tylenol. Baby also has had some decreased p.o. intake. His normal breast-feeding is down from 20 minutes q.2h. to 5 to 10 minutes secondary to his respiratory congestion. He sleeps well, but has been more tired and has been fussy over the past 2 days. The parents noticed no improvement with albuterol treatments given in the ER. His urine output has also decreased; normally he has 8 to 10 wet and 5 dirty diapers per 24 hours, now he has down to 4 wet diapers per 24 hours. Mom denies any diarrhea. His bowel movements are yellow colored and soft in nature."""
]
# + [markdown] id="mv0abcwhXWC-" colab_type="text"
# ## 4. Use the pipeline to create outputs
# + id="TK1DB9JZaPs3" colab_type="code" colab={}
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
# + [markdown] id="UQY8tAP6XZJL" colab_type="text"
# ## 5. Visualize results
# + [markdown] id="hnsMLq9gctSq" colab_type="text"
# Visualize outputs as data frame
# + id="Ar32BZu7J79X" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="a1acf6ff-bf72-4731-df5c-c34d6be562cf"
exploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata'))
select_expression_0 = F.expr("cols['0']").alias("chunk")
select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label")
result.select(exploded.alias("cols")) \
.select(select_expression_0, select_expression_1).show(truncate=False)
result = result.toPandas()
# + [markdown] id="1wdVmoUcdnAk" colab_type="text"
# Functions to display outputs as HTML
# + id="tFeu7loodcQQ" colab_type="code" colab={}
from IPython.display import HTML, display
import random
def get_color():
r = lambda: random.randint(128,255)
return "#%02x%02x%02x" % (r(), r(), r())
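`get_color` draws each RGB channel from 128 to 255 so entity backgrounds stay light enough for black text; the output format can be checked in isolation:

```python
import random

random.seed(0)  # seeded here only to make the sketch reproducible
r = lambda: random.randint(128, 255)
color = "#%02x%02x%02x" % (r(), r(), r())
assert len(color) == 7 and color.startswith('#')
assert all(0x80 <= int(color[i:i + 2], 16) <= 0xff for i in (1, 3, 5))
```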
def annotation_to_html(full_annotation):
ner_chunks = full_annotation[0]['ner_chunk']
text = full_annotation[0]['document'][0].result
label_color = {}
for chunk in ner_chunks:
label_color[chunk.metadata['entity']] = get_color()
html_output = "<div>"
pos = 0
for n in ner_chunks:
if pos < n.begin and pos < len(text):
html_output += f"<span class=\"others\">{text[pos:n.begin]}</span>"
pos = n.end + 1
html_output += f"<span class=\"entity-wrapper\" style=\"color: black; background-color: {label_color[n.metadata['entity']]}\"> <span class=\"entity-name\">{n.result}</span> <span class=\"entity-type\">[{n.metadata['entity']}]</span></span>"
if pos < len(text):
html_output += f"<span class=\"others\">{text[pos:]}</span>"
html_output += "</div>"
display(HTML(html_output))
# + [markdown] id="-piHygJ6dpEa" colab_type="text"
# Display example outputs as HTML
# + id="AtbhE24VeG_C" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 139} outputId="ac682d32-06fe-4355-f917-cfbf6604c5d4"
for example in input_list:
annotation_to_html(light_pipeline.fullAnnotate(example))
| tutorials/streamlit_notebooks/healthcare/NER_SIGN_SYMP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Single layer models
#
# In this lab we will implement a single-layer network model consisting solely of an affine transformation of the inputs. The relevant material for this was covered in [the slides of the first lecture](http://www.inf.ed.ac.uk/teaching/courses/mlp/2016/mlp01-intro.pdf).
#
# We will first implement the forward propagation of inputs to the network to produce predicted outputs. We will then move on to considering how to use gradients of an error function evaluated on the outputs to compute the gradients with respect to the model parameters to allow us to perform an iterative gradient-descent training procedure. In the final exercise you will use an interactive visualisation to explore the role of some of the different hyperparameters of gradient-descent based training methods.
#
# #### A note on random number generators
#
# It is generally good practice (for machine learning applications **not** for cryptography!) to seed a pseudo-random number generator once at the beginning of each experiment. This makes it easier to reproduce results as the same random draws will be produced each time the experiment is run (e.g. the same random initialisations used for parameters). Therefore generally when we need to generate random values during this course, we will create a seeded random number generator object as we do in the cell below.
import numpy as np
seed = 27092016
rng = np.random.RandomState(seed)
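As a quick aside (an illustrative sketch, not part of the exercises), two generators constructed with the same seed produce identical draws, which is exactly what makes seeded experiments reproducible:

```python
import numpy as np

# Two generators seeded identically yield the same sequence of draws.
rng_a = np.random.RandomState(27092016)
rng_b = np.random.RandomState(27092016)

draws_a = rng_a.uniform(-1., 1., size=5)
draws_b = rng_b.uniform(-1., 1., size=5)

print(np.allclose(draws_a, draws_b))  # True
```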
# ## Exercise 1: linear and affine transforms
#
# Any *linear transform* (also called a linear map) of a finite-dimensional vector space can be parametrised by a matrix. So for example if we consider $\boldsymbol{x} \in \mathbb{R}^{D}$ as the input space of a model with $D$ dimensional real-valued inputs, then a matrix $\mathbf{W} \in \mathbb{R}^{K\times D}$ can be used to define a prediction model consisting solely of a linear transform of the inputs
#
# \begin{equation}
# \boldsymbol{y} = \mathbf{W} \boldsymbol{x}
# \qquad
# \Leftrightarrow
# \qquad
# y_k = \sum_{d=1}^D \left( W_{kd} x_d \right) \quad \forall k \in \left\lbrace 1 \dots K\right\rbrace
# \end{equation}
#
# with here $\boldsymbol{y} \in \mathbb{R}^K$ the $K$-dimensional real-valued output of the model. Geometrically we can think of a linear transform doing some combination of rotation, scaling, reflection and shearing of the input.
#
# An *affine transform* consists of a linear transform plus an additional translation parameterised by a vector $\boldsymbol{b} \in \mathbb{R}^K$. A model consisting of an affine transformation of the inputs can then be defined as
#
# \begin{equation}
# \boldsymbol{y} = \mathbf{W}\boldsymbol{x} + \boldsymbol{b}
# \qquad
# \Leftrightarrow
# \qquad
# y_k = \sum_{d=1}^D \left( W_{kd} x_d \right) + b_k \quad \forall k \in \left\lbrace 1 \dots K\right\rbrace
# \end{equation}
#
# In machine learning we will usually refer to the matrix $\mathbf{W}$ as a *weight matrix* and the vector $\boldsymbol{b}$ as a *bias vector*.
#
# Generally rather than working with a single data vector $\boldsymbol{x}$ we will work with batches of datapoints $\left\lbrace \boldsymbol{x}^{(b)}\right\rbrace_{b=1}^B$. We could calculate the outputs for each input in the batch sequentially
#
# \begin{align}
# \boldsymbol{y}^{(1)} &= \mathbf{W}\boldsymbol{x}^{(1)} + \boldsymbol{b}\\
# \boldsymbol{y}^{(2)} &= \mathbf{W}\boldsymbol{x}^{(2)} + \boldsymbol{b}\\
# \dots &\\
# \boldsymbol{y}^{(B)} &= \mathbf{W}\boldsymbol{x}^{(B)} + \boldsymbol{b}\\
# \end{align}
#
# by looping over each input in the batch and calculating the output. However in general loops in Python are slow (particularly compared to compiled and typed languages such as C). This is due at least in part to the large overhead in dynamically inferring variable types. In general therefore wherever possible we want to avoid having loops in which such overhead will become the dominant computational cost.
#
# For array based numerical operations, one way of overcoming this bottleneck is to *vectorise* operations. NumPy `ndarrays` are typed arrays for which operations such as basic elementwise arithmetic and linear algebra operations such as computing matrix-matrix or matrix-vector products are implemented by calls to highly-optimised compiled libraries. Therefore if you can implement code directly using NumPy operations on arrays rather than by looping over array elements it is often possible to make very substantial performance gains.
#
# As a simple example we can consider adding up two arrays `a` and `b` and writing the result to a third array `c`. First let's initialise `a` and `b` with arbitrary values by running the cell below.
size = 1000
a = np.arange(size)
b = np.ones(size)
# Now let's time how long it takes to add up each pair of values in the two arrays and write the results to a third array using a loop-based implementation. We will use the `%%timeit` magic briefly mentioned in the previous lab notebook specifying the number of times to loop the code as 100 and to give the best of 3 repeats. Run the cell below to get a print out of the average time taken.
# %%timeit -n 100 -r 3
c = np.empty(size)
for i in range(size):
c[i] = a[i] + b[i]
# And now we will perform the corresponding summation with the overloaded addition operator of NumPy arrays. Again run the cell below to get a print out of the average time taken.
# %%timeit -n 100 -r 3
c = a + b
# The first loop-based implementation should have taken on the order of milliseconds ($10^{-3}$s) while the vectorised implementation should have taken on the order of microseconds ($10^{-6}$s), i.e. a $\sim1000\times$ speedup. Hopefully this simple example should make it clear why we want to vectorise operations whenever possible!
#
# Getting back to our affine model, ideally rather than individually computing the output corresponding to each input we should compute the outputs for all inputs in a batch using a vectorised implementation. As you saw last week, data providers return batches of inputs as arrays of shape `(batch_size, input_dim)`. In the mathematical notation used earlier we can consider this as a matrix $\mathbf{X}$ of dimensionality $B \times D$, and in particular
#
# \begin{equation}
# \mathbf{X} = \left[ \boldsymbol{x}^{(1)} ~ \boldsymbol{x}^{(2)} ~ \dots ~ \boldsymbol{x}^{(B)} \right]^\mathrm{T}
# \end{equation}
#
# i.e. the $b^{\textrm{th}}$ input vector $\boldsymbol{x}^{(b)}$ corresponds to the $b^{\textrm{th}}$ row of $\mathbf{X}$. If we define the $B \times K$ matrix of outputs $\mathbf{Y}$ similarly as
#
# \begin{equation}
# \mathbf{Y} = \left[ \boldsymbol{y}^{(1)} ~ \boldsymbol{y}^{(2)} ~ \dots ~ \boldsymbol{y}^{(B)} \right]^\mathrm{T}
# \end{equation}
#
# then we can express the relationship between $\mathbf{X}$ and $\mathbf{Y}$ using [matrix multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication) and addition as
#
# \begin{equation}
# \mathbf{Y} = \mathbf{X} \mathbf{W}^\mathrm{T} + \mathbf{B}
# \end{equation}
#
# where $\mathbf{B} = \left[ \boldsymbol{b} ~ \boldsymbol{b} ~ \dots ~ \boldsymbol{b} \right]^\mathrm{T}$ i.e. a $B \times K$ matrix with each row corresponding to the bias vector. The weight matrix needs to be transposed here as the inner dimensions of a matrix multiplication must match i.e. for $\mathbf{C} = \mathbf{A} \mathbf{B}$ then if $\mathbf{A}$ is of dimensionality $K \times L$ and $\mathbf{B}$ is of dimensionality $M \times N$ then it must be the case that $L = M$ and $\mathbf{C}$ will be of dimensionality $K \times N$.
#
# The first exercise for this lab is to implement *forward propagation* for a single-layer model consisting of an affine transformation of the inputs in the `fprop` function given as skeleton code in the cell below. This should work for a batch of inputs of shape `(batch_size, input_dim)` producing a batch of outputs of shape `(batch_size, output_dim)`.
#
# You will probably want to use the NumPy `dot` function and [broadcasting features](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to implement this efficiently. If you are not familiar with either / both of these you may wish to read the [hints](#Hints:-Using-the-dot-function-and-broadcasting) section below which gives some details on these before attempting the exercise.
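As a shape sanity check before (or after) implementing `fprop`, the batched expression $\mathbf{Y} = \mathbf{X} \mathbf{W}^\mathrm{T} + \mathbf{B}$ can be tried on a toy example (the values here are arbitrary):

```python
import numpy as np

B, D, K = 4, 3, 2  # batch size, input dim, output dim
X = np.arange(B * D, dtype=float).reshape(B, D)  # inputs, shape (B, D)
W = np.ones((K, D))                              # weights, shape (K, D)
b = np.array([1., -1.])                          # biases, shape (K,)

# X W^T has shape (B, K); adding b broadcasts the bias across all rows,
# playing the role of the matrix B of stacked bias vectors.
Y = X.dot(W.T) + b
print(Y.shape)  # (4, 2)
```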
def fprop(inputs, weights, biases):
"""Forward propagates activations through the layer transformation.
For inputs `x`, outputs `y`, weights `W` and biases `b` the layer
corresponds to `y = W x + b`.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
weights: Array of weight parameters of shape
(output_dim, input_dim).
biases: Array of bias parameters of shape (output_dim, ).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
return inputs.dot(weights.T) + biases
# Once you have implemented `fprop` in the cell above you can test your implementation by running the cell below.
# +
inputs = np.array([[0., -1., 2.], [-6., 3., 1.]])
weights = np.array([[2., -3., -1.], [-5., 7., 2.]])
biases = np.array([5., -3.])
true_outputs = np.array([[6., -6.], [-17., 50.]])
if not np.allclose(fprop(inputs, weights, biases), true_outputs):
print('Wrong outputs computed.')
else:
print('All outputs correct!')
# -
# ### Hints: Using the `dot` function and broadcasting
#
# For those new to NumPy below are some details on the `dot` function and broadcasting feature of NumPy that you may want to use for implementing the first exercise. If you are already familiar with these and have already completed the first exercise you can move on straight to [second exercise](#Exercise-2:-visualising-random-models).
#
# #### `numpy.dot` function
#
# Matrix-matrix, matrix-vector and vector-vector (dot) products can all be computed in NumPy using the [`dot`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) function. For example if `A` and `B` are both two dimensional arrays, then `C = np.dot(A, B)` or equivalently `C = A.dot(B)` will both compute the matrix product of `A` and `B` assuming `A` and `B` have compatible dimensions. Similarly if `a` and `b` are one dimensional arrays then `c = np.dot(a, b)` (which is equivalent to `c = a.dot(b)`) will compute the [scalar / dot product](https://en.wikipedia.org/wiki/Dot_product) of the two arrays. If `A` is a two-dimensional array and `b` a one-dimensional array `np.dot(A, b)` (which is equivalent to `A.dot(b)`) will compute the matrix-vector product of `A` and `b`. Examples of all three of these product types are shown in the cell below:
# Initialise arrays with arbitrary values
A = np.arange(9).reshape((3, 3))
B = np.ones((3, 3)) * 2
a = np.array([-1., 0., 1.])
b = np.array([0.1, 0.2, 0.3])
print(A.dot(B)) # Matrix-matrix product
print(B.dot(A)) # Reversed product of above A.dot(B) != B.dot(A) in general
print(A.dot(b)) # Matrix-vector product
print(b.dot(A)) # Again A.dot(b) != b.dot(A) unless A is symmetric i.e. A == A.T
print(a.dot(b)) # Vector-vector scalar product
# #### Broadcasting
#
# Another NumPy feature it will be helpful to get familiar with is [broadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). Broadcasting allows you to apply operations to arrays of different shapes, for example to add a one-dimensional array to a two-dimensional array or multiply a multidimensional array by a scalar. The complete set of broadcasting rules, as explained in the official documentation page linked above, can sound a bit complex: you might find the [visual explanation on this page](http://www.scipy-lectures.org/intro/numpy/operations.html#broadcasting) more intuitive. The cell below gives a few examples:
# Initialise arrays with arbitrary values
A = np.arange(6).reshape((3, 2))
b = np.array([0.1, 0.2])
c = np.array([-1., 0., 1.])
print(A + b) # Add b elementwise to all rows of A
print((A.T + c).T) # Add c elementwise to all columns of A
print(A * b) # Multiply each row of A elementwise by b
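One further broadcasting detail worth knowing (an aside, not required for the exercises): when the trailing dimensions of two arrays do not match NumPy raises an error, and inserting an axis with `np.newaxis` is an alternative to the double-transpose trick used above:

```python
import numpy as np

A = np.arange(6).reshape((3, 2))
c = np.array([-1., 0., 1.])  # length 3 matches A's rows, not its columns

# Direct addition fails: trailing dimensions (2 vs 3) are incompatible.
try:
    A + c
except ValueError as err:
    print('Broadcast error:', err)

# c[:, np.newaxis] has shape (3, 1) and broadcasts against A's columns,
# giving the same result as the (A.T + c).T trick above.
print(A + c[:, np.newaxis])
```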
# ## Exercise 2: visualising random models
# In this exercise you will use your `fprop` implementation to visualise the outputs of a single-layer affine transform model with two-dimensional inputs and a one-dimensional output. In this simple case we can visualise the joint input-output space on a 3D axis.
#
# For this task and the learning experiments later in the notebook we will use a regression dataset from the [UCI machine learning repository](http://archive.ics.uci.edu/ml/index.html). In particular we will use a version of the [Combined Cycle Power Plant dataset](http://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant), where the task is to predict the energy output of a power plant given observations of the local ambient conditions (e.g. temperature, pressure and humidity).
#
# The original dataset has four input dimensions and a single target output dimension. We have preprocessed the dataset by [whitening](https://en.wikipedia.org/wiki/Whitening_transformation) it, a common preprocessing step. We will only use the first two dimensions of the whitened inputs (corresponding to the first two principal components of the inputs) so we can easily visualise the joint input-output space.
#
# The dataset has been wrapped in the `CCPPDataProvider` class in the `mlp.data_providers` module and the data included as a compressed file in the data directory as `ccpp_data.npz`. Running the cell below will initialise an instance of this class, get a single batch of inputs and outputs and import the necessary `matplotlib` objects.
# +
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mlp.data_providers import CCPPDataProvider
# %matplotlib notebook
data_provider = CCPPDataProvider(
which_set='train',
input_dims=[0, 1],
batch_size=5000,
max_num_batches=1,
shuffle_order=False
)
input_dim, output_dim = 2, 1
inputs, targets = data_provider.next()
# -
# Here we used the `%matplotlib notebook` magic command rather than the `%matplotlib inline` we used in the previous lab as this allows us to produce interactive 3D plots which you can rotate and zoom in/out by dragging with the mouse and scrolling the mouse-wheel respectively. Once you have finished interacting with a plot you can close it to produce a static inline plot using the <i class="fa fa-power-off"></i> button in the top-right corner.
#
# Now run the cell below to plot the predicted outputs of a randomly initialised model across the two-dimensional input space as well as the true target outputs. This sort of visualisation can be a useful method (in low dimensions) to assess how well the model is likely to be able to fit the data and to judge appropriate initialisation scales for the parameters. Each time you re-run the cell a new set of random parameters will be sampled.
#
# Some questions to consider:
#
# * How do the weights and bias initialisation scale affect the sort of predicted input-output relationships?
# * Does the linear form of the model seem appropriate for the data here?
# +
weights_init_range = 0.5
biases_init_range = 0.1
# Randomly initialise weights matrix
weights = rng.uniform(
low=-weights_init_range,
high=weights_init_range,
size=(output_dim, input_dim)
)
# Randomly initialise biases vector
biases = rng.uniform(
low=-biases_init_range,
high=biases_init_range,
size=output_dim
)
# Calculate predicted model outputs
outputs = fprop(inputs, weights, biases)
# Plot target and predicted outputs against inputs on same axis
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.plot(inputs[:, 0], inputs[:, 1], targets[:, 0], 'r.', ms=2)
ax.plot(inputs[:, 0], inputs[:, 1], outputs[:, 0], 'b.', ms=2)
ax.set_xlabel('Input dim 1')
ax.set_ylabel('Input dim 2')
ax.set_zlabel('Output')
ax.legend(['Targets', 'Predictions'], frameon=False)
fig.tight_layout()
# -
# ## Exercise 3: computing the error function and its gradient
#
# Here we will consider the task of regression as covered in the first lecture slides. The aim in a regression problem is given inputs $\left\lbrace \boldsymbol{x}^{(n)}\right\rbrace_{n=1}^N$ to produce outputs $\left\lbrace \boldsymbol{y}^{(n)}\right\rbrace_{n=1}^N$ that are as 'close' as possible to a set of target outputs $\left\lbrace \boldsymbol{t}^{(n)}\right\rbrace_{n=1}^N$. The measure of 'closeness' or distance between target and predicted outputs is a design choice.
#
# A very common choice is the squared Euclidean distance between the predicted and target outputs. This can be computed as the sum of the squared differences between each element in the target and predicted outputs. A common convention is to multiply this value by $\frac{1}{2}$ as this gives a slightly nicer expression for the error gradient. The error for the $n^{\textrm{th}}$ training example is then
#
# \begin{equation}
# E^{(n)} = \frac{1}{2} \sum_{k=1}^K \left\lbrace \left( y^{(n)}_k - t^{(n)}_k \right)^2 \right\rbrace.
# \end{equation}
#
# The overall error is then the *average* of this value across all training examples
#
# \begin{equation}
# \bar{E} = \frac{1}{N} \sum_{n=1}^N \left\lbrace E^{(n)} \right\rbrace.
# \end{equation}
#
# *Note here we are using a slightly different convention from the lectures. There the overall error was considered to be the sum of the individual error terms rather than the mean. To differentiate between the two we will use $\bar{E}$ to represent the average error here as opposed to sum of errors $E$ as used in the slides with $\bar{E} = \frac{E}{N}$. Normalising by the number of training examples is helpful to do in practice as this means we can more easily compare errors across data sets / batches of different sizes, and more importantly it means the size of our gradient updates will be independent of the number of training examples summed over.*
#
# The regression problem is then to find parameters of the model which minimise $\bar{E}$. For our simple single-layer affine model here that corresponds to finding weights $\mathbf{W}$ and biases $\boldsymbol{b}$ which minimise $\bar{E}$.
#
# As mentioned in the lecture, for this simple case there is actually a closed form solution for the optimal weights and bias parameters. This is the linear least-squares solution those doing MLPR will have come across.
#
# However in general we will be interested in models where closed form solutions do not exist. We will therefore generally use iterative, gradient descent based training methods to find parameters which (locally) minimise the error function. A basic requirement of being able to do gradient-descent based training is (unsurprisingly) the ability to evaluate gradients of the error function.
#
# In the next exercise we will consider how to calculate gradients of the error function with respect to the model parameters $\mathbf{W}$ and $\boldsymbol{b}$, but as a first step here we will consider the gradient of the error function with respect to the model outputs $\left\lbrace \boldsymbol{y}^{(n)}\right\rbrace_{n=1}^N$. This can be written
#
# \begin{equation}
# \frac{\partial \bar{E}}{\partial \boldsymbol{y}^{(n)}} = \frac{1}{N} \left( \boldsymbol{y}^{(n)} - \boldsymbol{t}^{(n)} \right)
# \qquad \Leftrightarrow \qquad
# \frac{\partial \bar{E}}{\partial y^{(n)}_k} = \frac{1}{N} \left( y^{(n)}_k - t^{(n)}_k \right) \quad \forall k \in \left\lbrace 1 \dots K\right\rbrace
# \end{equation}
#
# i.e. the gradient of the error function with respect to the $n^{\textrm{th}}$ model output is just the difference between the $n^{\textrm{th}}$ model and target outputs, corresponding to the $\boldsymbol{\delta}^{(n)}$ terms mentioned in the lecture slides.
#
# The third exercise is, using the equations given above, to implement functions computing the mean sum of squared differences error and its gradient with respect to the model outputs. You should implement the functions using the provided skeleton definitions in the cell below.
# +
def error(outputs, targets):
"""Calculates error function given a batch of outputs and targets.
Args:
outputs: Array of model outputs of shape (batch_size, output_dim).
targets: Array of target outputs of shape (batch_size, output_dim).
Returns:
Scalar error function value.
"""
return 0.5 * ((outputs - targets)**2).sum() / outputs.shape[0]
def error_grad(outputs, targets):
"""Calculates gradient of error function with respect to model outputs.
Args:
outputs: Array of model outputs of shape (batch_size, output_dim).
targets: Array of target outputs of shape (batch_size, output_dim).
Returns:
Gradient of error function with respect to outputs.
This will be an array of shape (batch_size, output_dim).
"""
return (outputs - targets) / outputs.shape[0]
# -
# Check your implementation by running the test cell below.
# +
outputs = np.array([[1., 2.], [-1., 0.], [6., -5.], [-1., 1.]])
targets = np.array([[0., 1.], [3., -2.], [7., -3.], [1., -2.]])
true_error = 5.
true_error_grad = np.array([[0.25, 0.25], [-1., 0.5], [-0.25, -0.5], [-0.5, 0.75]])
if not error(outputs, targets) == true_error:
print('Error calculated incorrectly.')
elif not np.allclose(error_grad(outputs, targets), true_error_grad):
print('Error gradient calculated incorrectly.')
else:
print('Error function and gradient computed correctly!')
# -
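As an informal cross-check (the `error` implementation from above is restated here so the cell is self-contained), the vectorised mean error should agree with an explicit loop over the per-example errors $E^{(n)}$:

```python
import numpy as np

def error(outputs, targets):
    # same mean-of-squared-differences error as implemented above
    return 0.5 * ((outputs - targets) ** 2).sum() / outputs.shape[0]

outputs = np.array([[1., 2.], [-1., 0.], [6., -5.], [-1., 1.]])
targets = np.array([[0., 1.], [3., -2.], [7., -3.], [1., -2.]])

# Loop form: average of the per-example errors E^(n)
per_example = [0.5 * ((y - t) ** 2).sum() for y, t in zip(outputs, targets)]
loop_error = np.mean(per_example)

print(np.allclose(loop_error, error(outputs, targets)))  # True
```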
# ## Exercise 4: computing gradients with respect to the parameters
#
# In the previous exercise you implemented a function computing the gradient of the error function with respect to the model outputs. For gradient-descent based training, we need to be able to evaluate the gradient of the error function with respect to the model parameters.
#
# Using the [chain rule for derivatives](https://en.wikipedia.org/wiki/Chain_rule#Higher_dimensions) we can write the partial derivatives of the error function with respect to single elements of the weight matrix and bias vector as
#
# \begin{equation}
# \frac{\partial \bar{E}}{\partial W_{kj}} = \sum_{n=1}^N \left\lbrace \frac{\partial \bar{E}}{\partial y^{(n)}_k} \frac{\partial y^{(n)}_k}{\partial W_{kj}} \right\rbrace
# \quad \textrm{and} \quad
# \frac{\partial \bar{E}}{\partial b_k} = \sum_{n=1}^N \left\lbrace \frac{\partial \bar{E}}{\partial y^{(n)}_k} \frac{\partial y^{(n)}_k}{\partial b_k} \right\rbrace.
# \end{equation}
#
# From the definition of our model at the beginning we have
#
# \begin{equation}
# y^{(n)}_k = \sum_{d=1}^D \left\lbrace W_{kd} x^{(n)}_d \right\rbrace + b_k
# \quad \Rightarrow \quad
# \frac{\partial y^{(n)}_k}{\partial W_{kj}} = x^{(n)}_j
# \quad \textrm{and} \quad
# \frac{\partial y^{(n)}_k}{\partial b_k} = 1.
# \end{equation}
#
# Putting this together we get that
#
# \begin{equation}
# \frac{\partial \bar{E}}{\partial W_{kj}} =
# \sum_{n=1}^N \left\lbrace \frac{\partial \bar{E}}{\partial y^{(n)}_k} x^{(n)}_j \right\rbrace
# \quad \textrm{and} \quad
# \frac{\partial \bar{E}}{\partial b_{k}} =
# \sum_{n=1}^N \left\lbrace \frac{\partial \bar{E}}{\partial y^{(n)}_k} \right\rbrace.
# \end{equation}
#
# Although this may seem a bit of a roundabout way to get to these results, this method of decomposing the error gradient with respect to the parameters in terms of the gradient of the error function with respect to the model outputs and the derivatives of the model outputs with respect to the model parameters, will be key when calculating the parameter gradients of more complex models later in the course.
#
# Your task in this exercise is to implement a function calculating the gradient of the error function with respect to the weight and bias parameters of the model given the already computed gradient of the error function with respect to the model outputs. You should implement this in the `grads_wrt_params` function in the cell below.
def grads_wrt_params(inputs, grads_wrt_outputs):
"""Calculates gradients with respect to model parameters.
Args:
inputs: array of inputs to model of shape (batch_size, input_dim)
        grads_wrt_outputs: array of gradients with respect to the model
            outputs of shape (batch_size, output_dim).
Returns:
list of arrays of gradients with respect to the model parameters
`[grads_wrt_weights, grads_wrt_biases]`.
"""
grads_wrt_weights = grads_wrt_outputs.T.dot(inputs)
grads_wrt_biases = grads_wrt_outputs.sum(0)
return [grads_wrt_weights, grads_wrt_biases]
# Check your implementation by running the test cell below.
# +
inputs = np.array([[1., 2., 3.], [-1., 4., -9.]])
grads_wrt_outputs = np.array([[-1., 1.], [2., -3.]])
true_grads_wrt_weights = np.array([[-3., 6., -21.], [4., -10., 30.]])
true_grads_wrt_biases = np.array([1., -2.])
grads_wrt_weights, grads_wrt_biases = grads_wrt_params(
inputs, grads_wrt_outputs)
if not np.allclose(true_grads_wrt_weights, grads_wrt_weights):
print('Gradients with respect to weights incorrect.')
elif not np.allclose(true_grads_wrt_biases, grads_wrt_biases):
print('Gradients with respect to biases incorrect.')
else:
print('All parameter gradients calculated correctly!')
# -
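A good habit when deriving gradients by hand is to compare them against finite differences. The sketch below restates the functions from the exercises so it runs standalone, then perturbs one weight element at a time and compares the numerical slope against the analytic gradient:

```python
import numpy as np

def fprop(inputs, weights, biases):
    return inputs.dot(weights.T) + biases

def error(outputs, targets):
    return 0.5 * ((outputs - targets) ** 2).sum() / outputs.shape[0]

def error_grad(outputs, targets):
    return (outputs - targets) / outputs.shape[0]

def grads_wrt_params(inputs, grads_wrt_outputs):
    return [grads_wrt_outputs.T.dot(inputs), grads_wrt_outputs.sum(0)]

rng = np.random.RandomState(0)
inputs = rng.normal(size=(5, 3))
targets = rng.normal(size=(5, 2))
weights = rng.normal(size=(2, 3))
biases = rng.normal(size=2)

grads_w, grads_b = grads_wrt_params(
    inputs, error_grad(fprop(inputs, weights, biases), targets))

# Central finite differences on each weight element
eps = 1e-6
fd_grads_w = np.empty_like(weights)
for k in range(weights.shape[0]):
    for j in range(weights.shape[1]):
        w_plus, w_minus = weights.copy(), weights.copy()
        w_plus[k, j] += eps
        w_minus[k, j] -= eps
        fd_grads_w[k, j] = (
            error(fprop(inputs, w_plus, biases), targets) -
            error(fprop(inputs, w_minus, biases), targets)) / (2 * eps)

print(np.allclose(grads_w, fd_grads_w, atol=1e-6))  # True
```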
# ## Exercise 5: wrapping the functions into reusable components
#
# In exercises 1, 3 and 4 you implemented methods to compute the predicted outputs of our model, evaluate the error function and its gradient on the outputs and finally to calculate the gradients of the error with respect to the model parameters. Together they constitute all the basic ingredients we need to implement a gradient-descent based iterative learning procedure for the model.
#
# Although you could implement training code which directly uses the functions you defined, this would only be usable for this particular model architecture. In subsequent labs we will want to use the affine transform functions as the basis for more interesting multi-layer models. We will therefore wrap the implementations you just wrote in to reusable components that we can build more complex models with later in the course.
#
# * In the [`mlp.layers`](/edit/mlp/layers.py) module, use your implementations of `fprop` and `grads_wrt_params` above to implement the corresponding methods in the skeleton `AffineLayer` class provided.
# * In the [`mlp.errors`](/edit/mlp/errors.py) module use your implementation of `error` and `error_grad` to implement the `__call__` and `grad` methods respectively of the skeleton `SumOfSquaredDiffsError` class provided. Note `__call__` is a special Python method that allows an object to be used with a function call syntax.
#
# Run the cell below to use your completed `AffineLayer` and `SumOfSquaredDiffsError` implementations to train a single-layer model using batch gradient descent on the CCPP dataset.
# +
from mlp.layers import AffineLayer
from mlp.errors import SumOfSquaredDiffsError
from mlp.models import SingleLayerModel
from mlp.initialisers import UniformInit, ConstantInit
from mlp.learning_rules import GradientDescentLearningRule
from mlp.optimisers import Optimiser
import logging
# Seed a random number generator
seed = 27092016
rng = np.random.RandomState(seed)
# Set up a logger object to print info about the training run to stdout
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.handlers = [logging.StreamHandler()]
# Create data provider objects for the CCPP training set
train_data = CCPPDataProvider('train', [0, 1], batch_size=100, rng=rng)
input_dim, output_dim = 2, 1
# Create a parameter initialiser which will sample random uniform values
# from [-0.1, 0.1]
param_init = UniformInit(-0.1, 0.1, rng=rng)
# Create our single layer model
layer = AffineLayer(input_dim, output_dim, param_init, param_init)
model = SingleLayerModel(layer)
# Initialise the error object
error = SumOfSquaredDiffsError()
# Use a basic gradient descent learning rule with a small learning rate
learning_rule = GradientDescentLearningRule(learning_rate=1e-2)
# Use the created objects to initialise a new Optimiser instance.
optimiser = Optimiser(model, error, learning_rule, train_data)
# Run the optimiser for 10 epochs (full passes through the training set)
# printing statistics every epoch.
stats, keys, _ = optimiser.train(num_epochs=10, stats_interval=1)
# Plot the change in the error over training.
fig = plt.figure(figsize=(8, 4))
ax = fig.add_subplot(111)
ax.plot(np.arange(1, stats.shape[0] + 1), stats[:, keys['error(train)']])
ax.set_xlabel('Epoch number')
ax.set_ylabel('Error')
# -
# Using similar code to previously we can now visualise the joint input-output space for the trained model. If you implemented the required methods correctly you should now see a much improved fit between predicted and target outputs when running the cell below.
# +
data_provider = CCPPDataProvider(
which_set='train',
input_dims=[0, 1],
batch_size=5000,
max_num_batches=1,
shuffle_order=False
)
inputs, targets = data_provider.next()
# Calculate predicted model outputs
outputs = model.fprop(inputs)[-1]
# Plot target and predicted outputs against inputs on same axis
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.plot(inputs[:, 0], inputs[:, 1], targets[:, 0], 'r.', ms=2)
ax.plot(inputs[:, 0], inputs[:, 1], outputs[:, 0], 'b.', ms=2)
ax.set_xlabel('Input dim 1')
ax.set_ylabel('Input dim 2')
ax.set_zlabel('Output')
ax.legend(['Targets', 'Predictions'], frameon=False)
fig.tight_layout()
# -
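As mentioned earlier, this single-layer model also admits a closed-form linear least-squares solution. The sketch below demonstrates the idea on synthetic data (rather than the CCPP provider, so it runs standalone), absorbing the bias into the weights via an appended column of ones:

```python
import numpy as np

rng = np.random.RandomState(0)
# Synthetic affine data: y = x W^T + b plus a little noise
true_W = np.array([[1.5, -2.0]])
true_b = np.array([0.5])
X = rng.normal(size=(1000, 2))
Y = X.dot(true_W.T) + true_b + rng.normal(scale=0.01, size=(1000, 1))

# Append a column of ones so the bias is absorbed into the parameter
# matrix, then solve the least-squares problem in closed form.
X_aug = np.concatenate([X, np.ones((X.shape[0], 1))], axis=1)
params, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)

W_hat, b_hat = params[:2].T, params[2]
print(np.round(W_hat, 2), np.round(b_hat, 2))
```

With enough data and small noise the recovered parameters should be very close to the gradient-descent solution the optimiser converges towards.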
# ## Exercise 6: visualising training trajectories in parameter space
# Running the cell below will display an interactive widget which plots the trajectories of gradient-based training of the single-layer affine model on the CCPP dataset in the three dimensional parameter space (two weights plus bias) from random initialisations. Also shown on the right is a plot of the evolution of the error function (evaluated on the current batch) over training. By moving the sliders you can alter the training hyperparameters to investigate the effect they have on how training proceeds.
#
# Some questions to explore:
#
# * Are there multiple local minima in parameter space here? Why?
# * What happens to learning for very small learning rates? And very large learning rates?
# * How does the batch size affect learning?
#
# **Note:** You don't need to understand how the code below works. The idea of this exercise is to help you understand the role of the various hyperparameters involved in gradient-descent based training methods.
# +
from ipywidgets import interact
# %matplotlib inline
def setup_figure():
# create figure and axes
fig = plt.figure(figsize=(12, 6))
ax1 = fig.add_axes([0., 0., 0.5, 1.], projection='3d')
ax2 = fig.add_axes([0.6, 0.1, 0.4, 0.8])
# set axes properties
ax2.spines['right'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax2.yaxis.set_ticks_position('left')
ax2.xaxis.set_ticks_position('bottom')
ax2.set_yscale('log')
ax1.set_xlim((-2, 2))
ax1.set_ylim((-2, 2))
ax1.set_zlim((-2, 2))
    # set axes labels and title
ax1.set_title('Parameter trajectories over training')
ax1.set_xlabel('Weight 1')
ax1.set_ylabel('Weight 2')
ax1.set_zlabel('Bias')
ax2.set_title('Batch errors over training')
ax2.set_xlabel('Batch update number')
ax2.set_ylabel('Batch error')
return fig, ax1, ax2
def visualise_training(n_epochs=1, batch_size=200, log_lr=-1., n_inits=5,
w_scale=1., b_scale=1., elev=30., azim=0.):
fig, ax1, ax2 = setup_figure()
# create seeded random number generator
rng = np.random.RandomState(1234)
# create data provider
data_provider = CCPPDataProvider(
input_dims=[0, 1],
batch_size=batch_size,
shuffle_order=False,
)
learning_rate = 10 ** log_lr
n_batches = data_provider.num_batches
weights_traj = np.empty((n_inits, n_epochs * n_batches + 1, 1, 2))
biases_traj = np.empty((n_inits, n_epochs * n_batches + 1, 1))
errors_traj = np.empty((n_inits, n_epochs * n_batches))
# randomly initialise parameters
weights = rng.uniform(-w_scale, w_scale, (n_inits, 1, 2))
biases = rng.uniform(-b_scale, b_scale, (n_inits, 1))
# store initial parameters
weights_traj[:, 0] = weights
biases_traj[:, 0] = biases
# iterate across different initialisations
for i in range(n_inits):
# iterate across epochs
for e in range(n_epochs):
# iterate across batches
for b, (inputs, targets) in enumerate(data_provider):
outputs = fprop(inputs, weights[i], biases[i])
errors_traj[i, e * n_batches + b] = error(outputs, targets)
grad_wrt_outputs = error_grad(outputs, targets)
weights_grad, biases_grad = grads_wrt_params(inputs, grad_wrt_outputs)
weights[i] -= learning_rate * weights_grad
biases[i] -= learning_rate * biases_grad
weights_traj[i, e * n_batches + b + 1] = weights[i]
biases_traj[i, e * n_batches + b + 1] = biases[i]
# choose a different color for each trajectory
colors = plt.cm.jet(np.linspace(0, 1, n_inits))
# plot all trajectories
for i in range(n_inits):
lines_1 = ax1.plot(
weights_traj[i, :, 0, 0],
weights_traj[i, :, 0, 1],
biases_traj[i, :, 0],
'-', c=colors[i], lw=2)
lines_2 = ax2.plot(
np.arange(n_batches * n_epochs),
errors_traj[i],
c=colors[i]
)
ax1.view_init(elev, azim)
plt.show()
w = interact(
visualise_training,
elev=(-90, 90, 2),
azim=(-180, 180, 2),
n_epochs=(1, 5),
batch_size=(100, 1000, 100),
log_lr=(-3., 1.),
w_scale=(0., 2.),
b_scale=(0., 2.),
n_inits=(1, 10)
)
for child in w.widget.children:
child.layout.width = '100%'
# -
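# The role of the learning rate can also be seen without the interactive widget.
# A minimal sketch (using a hypothetical one-parameter quadratic error E(w) = (w - 2)^2,
# not the notebook's actual model):

```python
def sgd_1d(lr, w0=0.0, n_steps=50):
    """Gradient descent on E(w) = (w - 2)**2; the gradient is 2*(w - 2)."""
    w = w0
    for _ in range(n_steps):
        w -= lr * 2.0 * (w - 2.0)
    return w

# A small rate converges slowly, a moderate one quickly,
# and a rate above 1.0 makes the updates oscillate and diverge.
for lr in (0.01, 0.5, 1.1):
    print(lr, sgd_1d(lr))
```

# The same three regimes (slow convergence, fast convergence, divergence) can be
# observed in the parameter trajectories plotted by the widget above.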
| notebooks/02_Single_layer_models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.9 (''.venv'': venv)'
# language: python
# name: python3
# ---
# # 03 - data processing and visualisation using pandas
#
# This lecture will focus on table-like data manipulation.
import numpy as np
import pandas as pd
# ### Create dictionary representing simple table
data = {
"students":["Adam", "Monica", "John"], # this is first column
"born":[1994, 1989, 2011],
"academic degree":[None,"Bc.","MSc."],
"active":[True,False,False]
}
data
# ### Transform this data to `pandas.DataFrame`
df = pd.DataFrame(data)
df.info()# prints information about dataframe
df # nice table representation in IPykernel
# ### Adding a new column
#
# - by `list`, `numpy.array`, ...
# - we can of course rewrite it again
# - length has to match
df["children"] = [2,1,0] # use list
df
df["children"] = np.array([1,1,3]) # use array
df
df["children"] = [1,2] # lentgth has to match
# ### Accessing rows, columns and cells
#
# - by names
# - by indexes
# - by masks (boolean expressions)
col_students = df["students"] # one column labeled 'students'
col_students
df.students # attribute access also works
df.loc[0] # row with index == `0`
df.iloc[0] # first row of dataframe
df.iloc[0,0] # integer location
filter_by = ["students", "born"]
df[filter_by] # I only want df with certain columns
df[df["born"] > 1990] # I want only students born after 1990
# ### Appending new data to our dataframe
# Create new data, which we want to add to DataFrame
new_data = {
"students":["Clara", "Johny", "Michael"],
"born":[1984, 1989, 1920],
"academic degree":["PhD.","Bc.","MSc."],
"active":[True,False,False],
"children":[2,0,4]
}
# Convert new data to a dataframe and concatenate it to the end
# of the original dataframe, sort=False
# (DataFrame.append was removed in pandas 2.0; pd.concat is the portable way)
df = pd.concat([df, pd.DataFrame(new_data)], sort = False)
df
#Reset index values. Inplace rewrites df in place...
#...without creating a copy as a new object
#drop = False would insert a column "index"
df.reset_index(inplace = True, drop = True)
df
# ### Statistical measures - pandas is your friend
# Returns basic statistics of numerical data in DataFrame
df.describe()
# Mean value for numerical data
df.mean(numeric_only = True)
# Standard deviation of numerical data
df.std(numeric_only = True)
# Maximum, minimum, median
df.max(numeric_only = True), df.min(numeric_only = True), df.median(numeric_only = True)
# I can apply it to `pd.Series` as well
df["born"].mean()
# And again, I can use it for filtering my table
# for example I want all students born later than
# average year of birth
mask = df["born"] > df["born"].mean()
mask
df[mask] # use the mask to filter
# ### Sorting data
#
# - based on some column
# - ascending/descending order
# - sorting by two or more columns
df.sort_values(["children"]) #sorting data by number of children
df.sort_values(["children"], ascending = False) #descending
#sorting by 2 categories
df.sort_values(["children","born"], ascending = [True, False])
| courses/E375004/data_pandas/basics_01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Introduction to Scientific Python - `scipy`
# ## Scipy is a software stack built to support efficient scientific computation in Python.
# ## The fundamental components of the ecosystem are `numpy` and `matplotlib`.
#
# You can refer to the official [Scipy website](http://www.scipy.org) for updates and full documentation.
# ### The basic components of the ecosystem are:
# * Numpy : provides the fundamental array datastructure
# * Matplotlib : provides visualization functionalities
# * Scipy : algorithms
# * IPython
#
# #### Additional, more specialized packages are
# * Pandas : convenient data structures for data analysis
# * Sympy : symbolic math
# * Scikits-Learn : tools for machine learning and data mining
# * PyTables : tools for managing hierarchical datasets and large amount of data
# * Numba : decorations and annotations for enabling jit compilation of computationally intensive code
# * Cython : interfacing C/C++ python
# * ..... many other tools out there.
from IPython.display import Image
Image(filename='Images/ecosystem.png')
# to ensure python2 - python3 portability :
from __future__ import print_function, division
# ### Let's observe some basic limits of the standard python data structures
## Lists are cool, but they are not really ideal for representing vectors in scientific calculations
v1 = list(range(1,10,1)) # wrap in list() so this also works in python3
v1
## we cannot easily build an array of floats
v1 = range(1,20,0.5)
## the way python deals with summing two lists is not really what we want
v2 = list(range(10,20,1))
print ( v1, v2 )
print ( v1+v2 )
# +
## how do we compute a math operation element-by-element on all elements of a vector?
## for example we could use list comprehension
v_sqr = [i**2 for i in v1 ]
print (v_sqr)
## or a map (wrapped in list() so the result also displays in python3)
v_sqr = list(map( lambda x : x**2 ,v1))
print (v_sqr)
# -
# These are good options, but the performance is not great
v1 = range(1,1000)
# %timeit -q -o [i**2 for i in v1]
# %timeit -q -o list(map( lambda x : x**2 ,v1))
# ## `Numpy` addresses these problems by providing:
#
# - ### a numerical array type, suitable for scientific data (vectors, matrices, etc.)
# - ### vectorized operations for operating on arrays
#
## first import numpy
import numpy as np
## we can quickly create a float vector
v1 = np.arange(1,10,0.1)
v1
## this is the new object type introduced in numpy
type(v1)
## the data type for our array is
v1.dtype
## it's designed to deal with vectors math as we intend it.
## sum
v1 = np.array([0.,1.,2.,3.])
v2 = np.array([10.,11.,12.,13.])
v1+v2
## multiplication by a scalar
-2* v1
## note that multiplication is elementwise
v1 * v2
## so is **2
v1**2
## is that faster than the list comprehension version we saw earlier?
v1 = np.arange(1,1000)
# %timeit -q -o v1**2
# ## Universal functions
# ### The vectorization of the operations is built around the concept of "universal" functions
np.sin(np.pi/2)
v1 = np.linspace(-2*np.pi,2*np.pi,50)
v1
v2 = np.sin(v1)
v2
# `np.sin()` is a "universal function" - `ufunct` - that can operate both on numbers and on arrays.
#
# The idea of numpy is that we should think in a vectorized way, always trying to operate in this way on complete arrays as a unit.
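# As a quick check of this vectorized style, the same computation written
# element-by-element and as a single array expression gives identical results
# (a small sketch; the sample values are arbitrary):

```python
import numpy as np

v = np.linspace(-np.pi, np.pi, 5)

# element-by-element, the non-vectorized way
loop_result = np.array([np.sin(x) ** 2 + np.cos(x) ** 2 for x in v])

# one vectorized expression over the whole array
vec_result = np.sin(v) ** 2 + np.cos(v) ** 2

print(np.allclose(loop_result, vec_result))  # both equal 1 everywhere
```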
# ## `Numpy` provides many functionalities to deal with the nd arrays.
# There are many methods we can apply to this new data structure, `np.ndarray`
v1 = np.arange(1,1000)
dir(v1)
##for example
print ( v1.sum() , v1.min() , v1.mean() , v1.argmax() , v1.shape)
# ### For a few more details on numpy ndarrays check out [this other notebook](numpy_arrays.ipynb)
# ## Visualizing data
#
# The other fundamental tool we need is an API to easily visualize data.
#
# `matplotlib` is the graphic package part of the `scipy` stack
#
# The API was originally inspired by MATLAB, and the syntax should appear quite friendly to MATLAB users.
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
print(matplotlib.__version__)
print(matplotlib.get_backend())
## let's create two simple arrays to plot
v1 = np.linspace(-2*np.pi,2*np.pi,50)
v2 = np.sin(v1)
## the most basic way of displaying data is use the function plot.
plt.plot(v1,v2)
# Unfortunately you realized you cannot see the plot yet .....<br>
# Matplotlib created the object but it has not been rendered yet.<br>
# To display the plot you need to use the command show().
plt.show()
# .... look for the plot. it is probably in a small window in the background.....<br>
# To proceed using the notebook you need to close the plot.
# .... not extremely convenient ... but luckily
# ### We can have the plot embedded in the notebook using the "magic" command
# #### `%matplotlib inline`
# %matplotlib inline
## now as soon as the cell is executed, the plot will be displayed in the Output cell.
plt.plot(v1,v2)
## similarly to matlab you can tune global parameters, like the figure size
plt.rcParams['figure.figsize'] = 8,6
##try to replot
plt.plot(v1,v2)
# +
## if you want to plot multiple dataseries on the same plot,
## just issue multiple plot commands before issuing the show() command.
## .... in our case, just put multiple plot commands in the same cell.
plt.plot(v1, np.sin(v1), "o" , label="sin(x)")
plt.plot(v1, np.cos(v1), "--x", label="cos(x)")
plt.xlabel("x", size=20)
plt.ylabel("circular functions", size=20)
plt.legend(loc=(1.1 ,0.7 ) , fontsize='xx-large')
# -
# Let's try to understand how the graphic output is structured. <br><br>
#
# <img src="images/figure_axes_axis_labeled.png" , width=600, align=center >
#
# #### The ``Figure`` is the main container.
# You can think of it as the window that is created when you say plt.show(), or a page if you save your figure to a pdf file.
# <br>
# #### The real plot happens in an ``Axes``, which is the effective plotting area.<br>
# For example if you want to create a page with multiple panels, typically each panel will be a different ``Axes`` in the same ``Figure``.
# <br>
# #### There are many ways to create ``Axes``. The most useful is calling the method ``subplots``.
# <br>
#
#
# +
##create a figure
fig = plt.figure()
## add one subplot .... MATLAB users should recognize this
ax = fig.add_subplot(111)
print (type(ax))
### add_subplot(ABC) adds subplot C to a grid of AxB subplots
##set some features, like title, axis ranges, labels....
ax.set(xlim=[0.5, 4.5], ylim=[-2, 8], title='An example empty plot', ylabel='Y-Axis', xlabel='X-Axis')
# +
### let's plot sin(x) and cos(x) in two different subplots organized vertically
## create one figure
fig = plt.figure()
## create the first Axes using subplot
ax = fig.add_subplot(211)
ax.set_title('Plot number 1')
ax.set_ylabel('cos(x)')
ax.plot(v1,np.cos(v1))
## and now add the second one
ax = fig.add_subplot(212)
ax.set_title('Plot number 2', fontsize=24)
ax.set_ylabel('sin(x)')
ax.plot(v1,np.sin(v1))
## this is a useful tool to make sure that when the figure gets rendered,
## Matplotlib tries to rearrange the layout to avoid overlaps
plt.tight_layout()
# +
## similarly, we can make a grid of 2 subplots with a horizontal layout
## we can tune the aspect ratio to make it look better.
fig = plt.figure(figsize=plt.figaspect(0.25))
ax = fig.add_subplot(121)
ax.set_title('Plot number 1')
ax.set_ylabel('cos(x)')
ax.plot(v1,np.cos(v1))
ax = fig.add_subplot(122)
ax.set_title('Plot number 2')
ax.set_ylabel('sin(x)')
ax.plot(v1,np.sin(v1))
# -
## if we want to create many subplots the best option is the method subplots,
## which returns the figure together with an ndarray of Axes objects
fig, axes = plt.subplots(nrows=4)
print ( type(axes) )
axes
# +
## create the grid
fig, axes = plt.subplots(nrows=4)
## draw a plot in each of the Axes
for i,ax in enumerate(axes):
ax.plot(v1,np.sin(i * np.pi/4 + v1))
ax.set_title('plot number %d'%i)
plt.tight_layout()
# +
## we can extend the same concept to a 2d grid of plots
fig, axes = plt.subplots(nrows=4,ncols=4,figsize=plt.figaspect(0.5))
for i,axs in enumerate(axes):
for j,ax in enumerate(axs) :
ax.plot( v1, np.sin( ( i*4+j)* np.pi/16 + v1), 'r--o')
ax.set_title('plot number %d , %d' % (i,j))
plt.tight_layout()
## we can export the current figure using the method savefig()
plt.savefig("multiplot.pdf")
# -
# ### There are a variety of plotting methods for displaying data.
#
# The best source of information is the "gallery" on the [matplotlib website](http://matplotlib.org/gallery.html).
#
# You can find many examples of plots generated along with the code used to generate them.
# %reset
| Python/Intro_to_scientific_python/introduction_to_scipy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PostgreSQL + Python
# +
# #!pip install SQLAlchemy
# #!pip install psycopg2-binary
# -
import os
import pandas as pd
from sqlalchemy import text, create_engine
# **PART 1**: Connect to the database
#
# **PART 2**: Run queries
# ### PART 1: Connect to the database
# #### 1.1. In order to connect to any database, we need 5 things
# 1. Host
# 2. Username
# 3. Password
# 4. Port
# 5. Database
# `psql -U username -h host -p port -d database`
HOST = 'localhost'
USERNAME = 'marija'
PORT = '5432'
DB = 'northwind'
# Set your postgres password as environment variable.
#
# On mac/linux:
#
# 1. Navigate to your home directory (type `cd` in your terminal)
# 2. Open your `.bashrc` file in the text editor of your choice
# 3. Add the following line to your `.bashrc` file: `export PG_PASSWORD='*****'`
# 4. After closing `.bashrc` file type `source ~/.bashrc` in the terminal
# 5. Open a new jupyter notebook session
#
# On Windows:
# 1. Follow the instructions above if you have / want to create .bashrc file, or
# 2. Follow this post for how to set your environment variables: https://www.alphr.com/environment-variables-windows-10/
PASSWORD = os.getenv('PG_PASSWORD')
# #### 1.2. Create a connection string ("URL" for our database)
conn_string = f'postgresql://{USERNAME}:{PASSWORD}@{HOST}:{PORT}/{DB}'
conn_string_mac = f'postgresql://{HOST}:{PORT}/{DB}'
# #### 1.3. Connect to your `northwind` database
engine = create_engine(conn_string)
# #### 1.4. Execute your first query from Python!
query = '''
CREATE TABLE IF NOT EXISTS orders_berlin (
order_id INT PRIMARY KEY,
customer_id TEXT,
ship_name TEXT
)
'''
engine.execute(query)
# Now go check in your database to make sure it worked!
# ### Part 2: Run queries
# #### 2.1. SQL + Pandas
# We'll use three of the Pandas functions that are similar to things you've already seen, `.to_sql`, `.read_sql_table`, `.read_sql_query`.
# Let's first load some data from our northwind `.csv`s.
orders = pd.read_csv('./northwind_data_clean/data/orders.csv')
orders.head()
orders_berlin = orders[orders['shipCity'] == 'Berlin'][['orderID', 'customerID', 'shipName']]
orders_berlin
# `.to_sql`
orders_berlin.to_sql('orders_berlin', engine, if_exists='replace', index=False)
# * Instead of replacing, can also `append` or `fail`.
# * Use `method='multi'` when sending a large dataframe on e.g. Amazon Redshift
# Look at your table description in `psql`. What do you notice?
engine.execute('ALTER TABLE orders_berlin ADD PRIMARY KEY ("orderID")')
# `.read_sql_query`
query = '''
SELECT order_id, customer_id, ship_name
FROM orders
WHERE ship_city = 'Berlin'
'''
orders_berlin_query = pd.read_sql(query, engine)
orders_berlin_query.set_index('order_id', inplace=True)
orders_berlin_query
# `read_sql_table`
orders_berlin_table = pd.read_sql_table('orders_berlin', engine)
orders_berlin_table.set_index('orderID', inplace=True)
orders_berlin_table
# #### 2.2. Running queries directly in the database
engine.execute(query)
list(engine.execute(query))
# #### 2.3. Parametrized queries
# ##### Bad way!
# String formatting the query
city = 'Berlin'
query_param1 = '''
SELECT order_id, customer_id, shipped_date
FROM orders
WHERE ship_city = '%s'
''' %city
list(engine.execute(query_param1))
# ##### Good way!
# Passing the parameter to `.execute`.
query_param2 = text('''
SELECT order_id, customer_id, shipped_date
FROM orders
WHERE ship_city = :city
AND ship_country = :country
''')
param_dict = {'city': 'Berlin', 'country': 'Germany'}
list(engine.execute(query_param2, param_dict))
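# The bind-parameter pattern is not specific to Postgres or SQLAlchemy; every
# DB-API driver supports it. A self-contained sketch with the standard-library
# `sqlite3` module (in-memory database and table invented for the example):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE orders (order_id INT, ship_city TEXT)')
conn.executemany('INSERT INTO orders VALUES (?, ?)',
                 [(1, 'Berlin'), (2, 'Paris'), (3, 'Berlin')])

# the parameter is passed separately and never interpolated into the SQL string
rows = conn.execute('SELECT order_id FROM orders WHERE ship_city = ?',
                    ('Berlin',)).fetchall()
print(rows)  # [(1,), (3,)]
conn.close()
```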
# ##### Exercise: Modify the parameter below to perform SQL injection / delete one of your tables
param = 'Misc'
# Solution:
param = "Misc'); DROP TABLE orders_berlin;--"
# Explanation here: https://www.explainxkcd.com/wiki/index.php/327:_Exploits_of_a_Mom
query_injection = '''
INSERT INTO categories (category_id, category_name)
VALUES (109, '%s')
''' %param
# (Because `category_id` is a primary key, you'll have to keep changing the value inserted for `category_id` (e.g. 101, 102...) as you're debugging.)
engine.execute(query_injection)
# This was just a quick introduction into sqlalchemy. In its full functionality it is a very powerful toolkit. If you are interested in learning more, or are working with databases in Python, start here: https://docs.sqlalchemy.org/en/13/core/tutorial.html
#
# Things you can expect: `Table`, `Column`, `ForeignKey` objects, `.insert()`, `.select()`, `.join()` methods.
| week_05/postgresql_in_python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a id="top"></a>
# # Export Notebook
#
# <hr>
#
# # Notebook Summary
#
# The code in this notebook subsets a data cube, selects a specific set of variables, generates some additional data from those and then outputs that data into a GeoTIFF file. The goal is to be able to do external analyses of this data using other data analysis tools or GIS tools. The files would be reasonable in size, since we would restrict the region and parameters in the output.
#
# <hr>
#
# # Index
#
# * [Import Dependencies and Connect to the Data Cube](#import)
# * [Choose Platforms and Products](#plat_prod)
# * [Get the Extents of the Cube](#extents)
# * [Define the Extents of the Analysis](#define_extents)
# * [Load Data from the Datacube](#retrieve_data)
# * [Derive Products](#derive_products)
# * [Combine Data](#combine_data)
# * [Export Data](#export)
# * [Export to GeoTIFF](#export_geotiff)
# * [Export to NetCDF](#export_netcdf)
#
# <hr>
# ## <span id="import">Import Dependencies and Connect to the Data Cube [▴](#top)</span>
# +
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import xarray as xr
import numpy as np
import datacube
from utils.data_cube_utilities.data_access_api import DataAccessApi
# -
api = DataAccessApi()
dc = api.dc
# ## <span id="plat_prod">Choose Platforms and Products [▴](#top)</span>
# **List available products for each platform**
list_of_products = dc.list_products()
list_of_products
# **Choose product**
platform = "LANDSAT_7"
product = "ls7_ledaps_ghana"
# ## <span id="extents">Get the Extents of the Cube [▴](#top)</span>
# +
from utils.data_cube_utilities.dc_load import get_product_extents
from utils.data_cube_utilities.dc_time import dt_to_str
full_lat, full_lon, min_max_dates = get_product_extents(api, platform, product)
# Print the extents of the combined data.
print("Latitude Extents:", full_lat)
print("Longitude Extents:", full_lon)
print("Time Extents:", list(map(dt_to_str, min_max_dates)))
# -
## The code below renders a map that can be used to orient yourself with the region.
from utils.data_cube_utilities.dc_display_map import display_map
display_map(full_lat, full_lon)
# ## <span id="define_extents">Define the Extents of the Analysis [▴](#top)</span>
# +
######### Ghana - Pambros Salt Ponds ##################
lon = (-0.3013, -0.2671)
lat = (5.5155, 5.5617)
time_extents = ('2015-01-01', '2015-12-31')
# -
from utils.data_cube_utilities.dc_display_map import display_map
display_map(lat, lon)
# ## <span id="retrieve_data">Load Data from the Data Cube [▴](#top)</span>
landsat_dataset = dc.load(latitude = lat,
longitude = lon,
platform = platform,
time = time_extents,
product = product,
measurements = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2', 'pixel_qa'])
landsat_dataset
# ## <span id="derive_products">Derive Products [▴](#top)</span>
# > ### Masks
# +
from utils.data_cube_utilities.clean_mask import landsat_qa_clean_mask
clear_xarray = landsat_qa_clean_mask(landsat_dataset, platform, cover_types=['clear'])
water_xarray = landsat_qa_clean_mask(landsat_dataset, platform, cover_types=['water'])
shadow_xarray = landsat_qa_clean_mask(landsat_dataset, platform, cover_types=['shadow'])
clean_xarray = xr.ufuncs.logical_or(clear_xarray , water_xarray).rename("clean_mask")
# -
# > ### Water Classification
# +
from utils.data_cube_utilities.dc_water_classifier import wofs_classify
water_classification = wofs_classify(landsat_dataset,
clean_mask = clean_xarray.values,
mosaic = False)
# -
wofs_xarray = water_classification.wofs
# > ### Normalized Indices
def NDVI(dataset):
return ((dataset.nir - dataset.red)/(dataset.nir + dataset.red)).rename("NDVI")
def NDWI(dataset):
return ((dataset.green - dataset.nir)/(dataset.green + dataset.nir)).rename("NDWI")
def NDBI(dataset):
return ((dataset.swir2 - dataset.nir)/(dataset.swir2 + dataset.nir)).rename("NDBI")
ndbi_xarray = NDBI(landsat_dataset) # Urbanization - Reds
ndvi_xarray = NDVI(landsat_dataset) # Dense Vegetation - Greens
ndwi_xarray = NDWI(landsat_dataset) # High Concentrations of Water - Blues
# >### TSM
# +
from utils.data_cube_utilities.dc_water_quality import tsm
tsm_xarray = tsm(landsat_dataset, clean_mask = wofs_xarray.values.astype(bool) ).tsm
# -
# > ### EVI
def EVI(dataset, c1 = None, c2 = None, L = None):
return ((dataset.nir - dataset.red)/((dataset.nir + (c1 * dataset.red) - (c2 *dataset.blue) + L))).rename("EVI")
evi_xarray = EVI(landsat_dataset, c1 = 6, c2 = 7.5, L = 1 )
# ## <span id="combine_data">Combine Data [▴](#top)</span>
# +
combined_dataset = xr.merge([landsat_dataset,
## <span id="combine_data">Combine Data [▴](#top)</span> clean_xarray,
clear_xarray,
water_xarray,
shadow_xarray,
evi_xarray,
ndbi_xarray,
ndvi_xarray,
ndwi_xarray,
wofs_xarray,
tsm_xarray])
# Copy original crs to merged dataset
combined_dataset = combined_dataset.assign_attrs(landsat_dataset.attrs)
combined_dataset
# -
# ## <span id="export">Export Data [▴](#top)</span>
# ### <span id="export_geotiff">Export to GeoTIFF [▴](#top)</span>
# Export each acquisition as a GeoTIFF.
# +
from utils.data_cube_utilities.import_export import export_xarray_to_multiple_geotiffs
# Ensure the output directory exists before writing to it.
if platform == 'LANDSAT_7':
# !mkdir -p output/geotiffs/landsat7
else:
# !mkdir -p output/geotiffs/landsat8
output_path = "output/geotiffs/landsat{0}/landsat{0}".format(7 if platform=='LANDSAT_7' else 8)
export_xarray_to_multiple_geotiffs(combined_dataset, output_path)
# -
# Check to see what files were exported. The size of these files is also shown.
if platform == 'LANDSAT_7':
# !ls -lah output/geotiffs/landsat7/*.tif
else:
# !ls -lah output/geotiffs/landsat8/*.tif
# Sanity check using `gdalinfo` to make sure that all of our bands exist.
if platform == 'LANDSAT_7':
# !gdalinfo output/geotiffs/landsat7/landsat7_2015_01_09_03_06_13.tif
else:
# !gdalinfo output/geotiffs/landsat8/landsat8_2015_01_01_03_07_41.tif
# Zip all GeoTIFFs.
if platform == 'LANDSAT_7':
# !tar -cvzf output/geotiffs/landsat7/landsat_7.tar.gz output/geotiffs/landsat7/*.tif
else:
# !tar -cvzf output/geotiffs/landsat8/landsat_8.tar.gz output/geotiffs/landsat8/*.tif
# ### <span id="export_netcdf">Export to NetCDF [▴](#top)</span>
# Export all acquisitions together as a single NetCDF.
combined_dataset
# +
import os
import pathlib
from utils.data_cube_utilities.import_export import export_xarray_to_netcdf
# Ensure the output directory exists before writing to it.
ls_num = 7 if platform=='LANDSAT_7' else 8
output_dir = f"output/netcdfs/landsat{ls_num}"
pathlib.Path(output_dir).mkdir(parents=True, exist_ok=True)
output_file_path = output_dir + f"/ls{ls_num}_netcdf_example.nc"
export_xarray_to_netcdf(combined_dataset.red, output_file_path)
| notebooks/general/Export.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="9jIAQzPUI37W"
# Collected and summarized from Python Cookbook, 3rd Edition
# + [markdown] id="hyKFlUqCJC_i"
# # Chapter 1: Data structures and algorithms
# + colab={"base_uri": "https://localhost:8080/"} id="i_yIl9q2JTqH" outputId="fe407eef-af7a-492b-aa44-38cb33a5f717"
record = ('ACME', 50, 123.45, (12, 18, 2012))
name, *_, (*_, year) = record
print(name)
print(year)
# + colab={"base_uri": "https://localhost:8080/"} id="dSd5PdNeKAMF" outputId="e60df19f-6789-4363-d1ed-47f6b4d2e6bd"
from collections import deque
#Fixed-size queue
q=deque(maxlen=2)
q.append(1)
q.append(2)
print(q)
q.append(3)
print(q)
# + colab={"base_uri": "https://localhost:8080/"} id="kugDcEM8KWj6" outputId="46577f45-bebe-4d47-eba8-4693160cd648"
#Find the largest or smallest N elements
import heapq
nums = [1, 8, 2, 23, 7, -4, 18, 23, 42, 37, 2]
print(heapq.nlargest(3, nums))
print(heapq.nsmallest(3, nums))
portfolio = [
{'name': 'IBM', 'shares': 100, 'price': 91.1},
{'name': 'AAPL', 'shares': 50, 'price': 543.22},
{'name': 'FB', 'shares': 200, 'price': 21.09},
{'name': 'HPQ', 'shares': 35, 'price': 31.75},
{'name': 'YHOO', 'shares': 45, 'price': 16.35},
{'name': 'ACME', 'shares': 75, 'price': 115.65}
]
cheap = heapq.nsmallest(3, portfolio, key=lambda s: s['price'])
expensive = heapq.nlargest(3, portfolio, key=lambda s: s['price'])
print(cheap)
print(expensive)
# + colab={"base_uri": "https://localhost:8080/"} id="064KR4C8Lq-f" outputId="8c09f001-ff67-41d4-e7ba-e7cbb4349ddb"
#Priority queue
import heapq
class PriorityQueue:
def __init__(self):
self._queue = []
self._index = 0
def push(self, item, priority):
heapq.heappush(self._queue, (-priority, self._index, item))
self._index += 1
def pop(self):
return heapq.heappop(self._queue)[-1]
class Item:
def __init__(self, name):
self.name = name
def __repr__(self):
return 'Item({!r})'.format(self.name)
q = PriorityQueue()
q.push(Item('foo'), 1)
q.push(Item('bar'), 5)
q.push(Item('spam'), 4)
q.push(Item('grok'), 1)
print(q.pop())
print(q.pop())
print(q.pop())
print(q.pop())
# + id="mkEC_yIgMkvn"
# Mapping keys in a dictionary to multiple values
from collections import defaultdict
pairs = [('a', 1), ('b', 2), ('a', 3)]  # example input data
d = defaultdict(list)
for key, value in pairs:
    d[key].append(value)
# + colab={"base_uri": "https://localhost:8080/"} id="7hOpoOHqQVDZ" outputId="d553e08b-ef0c-48aa-e40a-7ffc1d090b39"
#Dictionary sort
from collections import OrderedDict
d = OrderedDict()
d['foo'] = 1
d['bar'] = 2
d['spam'] = 3
d['grok'] = 4
# Outputs "foo 1", "bar 2", "spam 3", "grok 4"
for key in d:
print(key, d[key])
prices = {
'ACME': 45.23,
'AAPL': 612.78,
'IBM': 205.55,
'HPQ': 37.20,
'FB': 10.75
}
min_price = min(zip(prices.values(), prices.keys()))
max_price = max(zip(prices.values(), prices.keys()))
print(min_price)
print(max_price)
prices_sorted = sorted(zip(prices.values(), prices.keys()))
print(prices_sorted)
# + colab={"base_uri": "https://localhost:8080/"} id="961HewAlRdbF" outputId="a36c4319-8bbd-4fc8-828d-7c37b3f078b1"
a = {
'x' : 1,
'y' : 2,
'z' : 3
}
b = {
'w' : 10,
'x' : 11,
'y' : 2
}
# Find keys in common
print(a.keys() & b.keys())
# Find keys in a that are not in b
print(a.keys() - b.keys())
# Find (key,value) pairs in common
print(a.items() & b.items())
c = {key:a[key] for key in a.keys() - {'z', 'w'}}
print(c)
# + colab={"base_uri": "https://localhost:8080/"} id="Rl-T04TTWz22" outputId="934db93d-a6c2-49c2-8d46-ce621d6b89ec"
words = [
'look', 'into', 'my', 'eyes', 'look', 'into', 'my', 'eyes',
'the', 'eyes', 'the', 'eyes', 'the', 'eyes', 'not', 'around', 'the',
'eyes', "don't", 'look', 'around', 'the', 'eyes', 'look', 'into',
'my', 'eyes', "you're", 'under'
]
from collections import Counter
word_counts = Counter(words)
# The 3 most frequent words
top_three = word_counts.most_common(3)
print(top_three)
# + colab={"base_uri": "https://localhost:8080/"} id="bE5qc1QlXhiA" outputId="124c6005-ef27-47ff-83b5-2a119d9d9505"
rows = [
{'fname': 'Brian', 'lname': 'Jones', 'uid': 1003},
{'fname': 'David', 'lname': 'Beazley', 'uid': 1002},
{'fname': 'John', 'lname': 'Cleese', 'uid': 1001},
{'fname': 'Big', 'lname': 'Jones', 'uid': 1004}
]
from operator import itemgetter
rows_by_fname = sorted(rows, key=itemgetter('fname'))
rows_by_uid = sorted(rows, key=itemgetter('uid'))
print(rows_by_fname)
print(rows_by_uid)
rows_by_lfname = sorted(rows, key=itemgetter('lname','fname'))
print(rows_by_lfname)
# + colab={"base_uri": "https://localhost:8080/"} id="7B2661ucZLIB" outputId="61733135-a3b2-413f-c5a5-d8fc61efa9b2"
rows = [
{'address': '5412 N CLARK', 'date': '07/01/2012'},
{'address': '5148 N CLARK', 'date': '07/04/2012'},
{'address': '5800 E 58TH', 'date': '07/02/2012'},
{'address': '2122 N CLARK', 'date': '07/03/2012'},
{'address': '5645 N RAVENSWOOD', 'date': '07/02/2012'},
{'address': '1060 W ADDISON', 'date': '07/02/2012'},
{'address': '4801 N BROADWAY', 'date': '07/01/2012'},
{'address': '1039 W GRANVILLE', 'date': '07/04/2012'},
]
from operator import itemgetter
from itertools import groupby
# Sort by the desired field first
rows.sort(key=itemgetter('date'))
# Iterate in groups
for date, items in groupby(rows, key=itemgetter('date')):
print(date)
for i in items:
print(' ', i)
from collections import defaultdict
rows_by_date = defaultdict(list)
for row in rows:
rows_by_date[row['date']].append(row)
for r in rows_by_date['07/01/2012']:
print(r)
# + colab={"base_uri": "https://localhost:8080/"} id="1IPQDb6Talq8" outputId="07ff7e9a-398d-4fb2-ca22-a537d10d7720"
values = ['1', '2', '-3', '-', '4', 'N/A', '5']
def is_int(val):
try:
x = int(val)
return True
except ValueError:
return False
ivals = list(filter(is_int, values))
print(ivals)
# + colab={"base_uri": "https://localhost:8080/"} id="LTIH3TNUhXfS" outputId="02508662-fe4d-472a-b84d-1ce2089d4f4e"
addresses = [
'5412 N CLARK',
'5148 N CLARK',
'5800 E 58TH',
'2122 N CLARK',
'5645 N RAVENSWOOD',
'1060 W ADDISON',
'4801 N BROADWAY',
'1039 W GRANVILLE',
]
counts = [ 0, 3, 10, 4, 1, 7, 6, 1]
from itertools import compress
more5 = [n > 5 for n in counts]
more5
list(compress(addresses, more5))
# + colab={"base_uri": "https://localhost:8080/"} id="pXLZ8kpNiP1X" outputId="4ed14255-78eb-4af0-ba03-a812d96dbc67"
prices = {
'ACME': 45.23,
'AAPL': 612.78,
'IBM': 205.55,
'HPQ': 37.20,
'FB': 10.75
}
p1 = dict((key, value) for key, value in prices.items() if value > 200)
print(p1)
# + colab={"base_uri": "https://localhost:8080/"} id="9ErzJcQvio4-" outputId="dd588844-4af2-4cec-d8f0-9d6a5d5eb2b9"
from collections import namedtuple
Subscriber = namedtuple('Subscriber', ['addr', 'joined'])
sub = Subscriber('<EMAIL>', '2012-10-19')
print(sub)
sub=sub._replace(joined='2011-10-9')
print(sub)
# + colab={"base_uri": "https://localhost:8080/"} id="brDS33gvjXIy" outputId="1f8f6705-c41a-49d9-b01a-352d67266eda"
a = {'x': 1, 'z': 3 }
b = {'y': 2, 'z': 4 }
from collections import ChainMap
c = ChainMap(b,a) #first b then a
print(c['x'])
print(c['y'])
print(c['z'])
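# Note that writes, updates and deletions on a `ChainMap` always act on the
# first mapping only - a quick sketch:

```python
from collections import ChainMap

first = {'x': 1}
second = {'x': 10, 'y': 20}
cm = ChainMap(first, second)

cm['z'] = 99        # new keys land in the first mapping
del cm['x']         # deletions also act on the first mapping only

print(first)        # {'z': 99}
print(cm['x'])      # 10, now found in the second mapping
```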
# + [markdown] id="RGhOgvNqDcmM"
# # Chapter 2: Strings and text
# + id="PjFkt-AlDiHj"
from urllib.request import urlopen
def read_data(name):
    # startswith/endswith accept a tuple of candidate prefixes/suffixes
    if name.startswith(('http:', 'https:', 'ftp:')) and name.endswith(('.com', '.net', '.cn')):
        return urlopen(name).read()
    else:
        with open(name) as f:
            return f.read()
# + colab={"base_uri": "https://localhost:8080/"} id="Nb_A0LYED7Ha" outputId="4e0755fc-8555-45da-bbc1-23fb7d45efbe"
import re
text = 'UPPER PYTHON, lower python, Mixed Python'
print(re.findall('python', text, flags=re.IGNORECASE))
def matchcase(word):
def replace(m):
text = m.group()
if text.isupper():
return word.upper()
elif text.islower():
return word.lower()
elif text[0].isupper():
return word.capitalize()
else:
return word
return replace
print(re.sub('python', matchcase('snake'), text, flags=re.IGNORECASE))
# + colab={"base_uri": "https://localhost:8080/"} id="wqzAYS6kE04-" outputId="33e62325-b8aa-4884-ed69-2b9f7d2894a9"
# Shortest matching pattern
str_pat = re.compile(r'"(.*)"')
text1 = 'Computer says "no."'
print(str_pat.findall(text1))
text2 = 'Computer says "no." Phone says "yes."'
print(str_pat.findall(text2))
str_pat = re.compile(r'"(.*?)"') #? shortest
print(str_pat.findall(text2))
# + colab={"base_uri": "https://localhost:8080/"} id="zoaMLJZ7GA_e" outputId="08891508-9500-408e-8767-bf272ab6faa5"
s = 'pýtĥöñ\fis\tawesome\r\n'
remap = {
ord('\t') : ' ',
ord('\f') : ' ',
ord('\r') : None # Deleted
}
a = s.translate(remap)
print(a)
import unicodedata
import sys
cmb_chrs = dict.fromkeys(c for c in range(sys.maxunicode)
if unicodedata.combining(chr(c)))
b = unicodedata.normalize('NFD', a)
print(b)
b=b.translate(cmb_chrs)
print(b)
# + colab={"base_uri": "https://localhost:8080/"} id="m6F548HkGld7" outputId="9d4a1804-7122-43e4-bac0-fd7c683c25b1"
text = 'Hello World'
print(text.ljust(20,'@'))
print(text.rjust(20, '.'))
print(text.center(20))
print('{:>10s} {:>10s}'.format('Hello', 'World'))
# + colab={"base_uri": "https://localhost:8080/"} id="NtBO5P_hNEvP" outputId="b2bbe514-c9b6-4fae-bcf5-80f2a4cd8b4d"
parts = ['Is', 'Chicago', 'Not', 'Chicago?']
test=' '.join(parts)
print(test)
# + colab={"base_uri": "https://localhost:8080/"} id="6vAFVdwKNMpc" outputId="46fbff01-5efd-4481-9e7f-1911a5317b46"
import sys
s = '{name} has {n} messages.'
print(s.format(name='Guido', n=37))
class safesub(dict):
"""prevent key missing"""
def __missing__(self, key):
return '{' + key + '}'
def sub(text):
return text.format_map(safesub(sys._getframe(1).f_locals))
name = 'Guido'
n = 37
print(sub('Hello {name}'))
print(sub('You have {n} messages.'))
print(sub('Your favorite color is {color}'))
# + [markdown] id="H2W2YKJbPfdV"
# # Chapter 3: Numbers, dates and times
# + colab={"base_uri": "https://localhost:8080/"} id="fO7lLrexQkos" outputId="cd7f8b75-fe49-4743-9785-b3e2f91d61ba"
from decimal import localcontext
from decimal import Decimal
a = Decimal('1.3')
b = Decimal('1.7')
print(a / b)
with localcontext() as ctx:
ctx.prec = 3
print(a / b)
with localcontext() as ctx:
ctx.prec = 50
print(a / b)
# + colab={"base_uri": "https://localhost:8080/"} id="f-es_lKRQkx_" outputId="166e9b68-6826-405d-9cf4-7240ff0f809d"
# Convert a big integer to a byte string
x=523**23
print(x)
nbytes, rem = divmod(x.bit_length(), 8)
if rem:
nbytes+=1
x.to_bytes(nbytes,'little')
# + colab={"base_uri": "https://localhost:8080/"} id="E15F7A8_SuC9" outputId="30ec2268-2034-4645-b5ea-a8ebd842c4e8"
from datetime import datetime, date, timedelta
import calendar
def get_month_range(start_date=None):
if start_date is None:
start_date = date.today().replace(day=1)
_, days_in_month = calendar.monthrange(start_date.year, start_date.month)
end_date = start_date + timedelta(days=days_in_month)
return (start_date, end_date)
a_day = timedelta(days=1)
first_day, last_day = get_month_range()
while first_day < last_day:
print(first_day)
first_day += a_day
# + [markdown] id="KgodT0QXDgIf"
# # Chapter 4: Iterators and Generators
#
# + colab={"base_uri": "https://localhost:8080/"} id="Bh9xjm1hTdA_" outputId="0efcef55-586e-4a9f-8d01-9938c3066839"
class Node:
def __init__(self, value):
self._value = value
self._children = []
def __repr__(self):
return 'Node({!r})'.format(self._value)
def add_child(self, node):
self._children.append(node)
def __iter__(self):
return iter(self._children)
def depth_first(self):
yield self
for c in self:
yield from c.depth_first()
root = Node(0)
child1 = Node(1)
child2 = Node(2)
root.add_child(child1)
root.add_child(child2)
child1.add_child(Node(3))
child1.add_child(Node(4))
child2.add_child(Node(5))
for ch in root.depth_first():
print(ch)
# + colab={"base_uri": "https://localhost:8080/"} id="_Bq5yR3gVQl2" outputId="7be4f3d7-c506-4cfe-88b5-abc21fd1a671"
class Countdown:
def __init__(self, start):
self.start = start
# Forward iterator
def __iter__(self):
n = self.start
while n > 0:
yield n
n -= 1
# Reverse iterator
def __reversed__(self):
n = 1
while n <= self.start:
yield n
n += 1
for rr in reversed(Countdown(5)):
print(rr)
for rr in Countdown(5):
print(rr)
# + colab={"base_uri": "https://localhost:8080/"} id="yhrpWxB_WF6W" outputId="a5b936e7-fca7-479f-eba7-5042d1efac3f"
# slice
def count(n):
while True:
yield n
n += 1
c = count(0)
print(next(c))
import itertools
for x in itertools.islice(c, 10, 15):
print(x)
for x in itertools.islice(c, None, 2):
print(x)
# + colab={"base_uri": "https://localhost:8080/"} id="s-HCPVtgWz6b" outputId="04a086c0-6904-4c14-bbac-5eeea99e05f1"
items = ['a', 'b', 'c']
from itertools import permutations
for p in permutations(items,2):
print(p)
#all combinations of elements in the set
from itertools import combinations
for c in combinations(items, 3):
print(c)
# + colab={"base_uri": "https://localhost:8080/"} id="In5Vywp2XVKl" outputId="1b89a4fa-8c8e-4f46-ec83-044206e3f56b"
from itertools import chain
a = [1, 2, 3, 4]
b = ['x', 'y', 'z']
for x in chain(a, b):
print(x)
# + id="8hbssizOY-XX"
#flatten
from collections.abc import Iterable  # Iterable lives in collections.abc (removed from collections in Python 3.10)
def flatten(items, ignore_types=(str, bytes)):
for x in items:
if isinstance(x, Iterable) and not isinstance(x, ignore_types):
yield from flatten(x)
else:
yield x
items = [1, 2, [3, 4, [5, 6], 7], 8]
# Produces 1 2 3 4 5 6 7 8
for x in flatten(items):
print(x)
# + colab={"base_uri": "https://localhost:8080/"} id="9U2QUAFCZcjm" outputId="12fb7684-c630-431a-9c7a-ce3626e4b1e2"
import heapq
a = [1, 4, 7, 10]
b = [2, 5, 6, 11]
for c in heapq.merge(a, b):
print(c)
# + [markdown] id="Ip0XfHxRZtFZ"
# # Chapter 5: Files and IO
# + colab={"base_uri": "https://localhost:8080/"} id="I1rDhUctZoAK" outputId="d862bd95-6a35-402f-82a9-6b83558abfa4"
print('ACME', 50, 91.5, sep=',', end='!!\n')
row = ('ACME', 50, 91.5)
print(*row, sep=',')
# + id="zuJ_5E-QaUyI"
# 'x' mode writes the file only if it does not already exist
try:
    with open('somefile', 'xt') as f:
        f.write('Hello\n')
except FileExistsError:
    print('File already exists!')
# Alternative: check explicitly before writing
import os
if not os.path.exists('somefile'):
    with open('somefile', 'wt') as f:
        f.write('Hello\n')
else:
    print('File already exists!')
# + id="Ycj9EKvqar0j"
#File iteration of fixed-size records
from functools import partial
RECORD_SIZE = 32
with open('somefile.data', 'rb') as f:
records = iter(partial(f.read, RECORD_SIZE), b'')
for r in records:
...
# + colab={"base_uri": "https://localhost:8080/"} id="pLfdlXYlay3o" outputId="8612ba21-18c4-43bc-ce99-30c71e0421a0"
import os.path
# Get all regular files
names = [name for name in os.listdir('sample_data')
if os.path.isfile(os.path.join('sample_data', name))]
print(names)
# Get all dirs
dirnames = [name for name in os.listdir('./')
if os.path.isdir(os.path.join('./', name))]
print(dirnames)
csvfiles = [name for name in os.listdir('sample_data')
if name.endswith('.csv')]
print(csvfiles)
# + [markdown] id="-UYlCTrxcgZ8"
# # Chapter 6: Data encoding and processing
# + colab={"base_uri": "https://localhost:8080/"} id="4pC7W54Achbk" outputId="9cf51337-568f-4c75-bf70-a62d23af5deb"
import json
data = {
'name' : 'ACME',
'shares' : 100,
'price' : 542.23
}
json_str = json.dumps(data)
data = json.loads(json_str)
print(data)
# + id="AvAgiS4UdA20"
#read big xml
from collections import Counter
potholes_by_zip = Counter()
data = parse_and_remove('potholes.xml', 'row/row')
for pothole in data:
potholes_by_zip[pothole.findtext('zip')] += 1
for zipcode, num in potholes_by_zip.most_common():
print(zipcode, num)
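# The cell above relies on parse_and_remove(), which is not defined in this
# notebook. A minimal sketch of it, based on the Cookbook's incremental
# iterparse recipe (assumes the target path sits at least two levels below
# the root element):

```python
import io
from xml.etree.ElementTree import iterparse

def parse_and_remove(source, path):
    """Incrementally yield elements matching path, detaching each one after
    it is consumed so that memory use stays bounded for huge documents."""
    path_parts = path.split('/')
    doc = iterparse(source, ('start', 'end'))
    next(doc)  # Skip the root element
    tag_stack, elem_stack = [], []
    for event, elem in doc:
        if event == 'start':
            tag_stack.append(elem.tag)
            elem_stack.append(elem)
        elif event == 'end':
            if tag_stack == path_parts:
                yield elem
                # Detach the element just handed to the caller
                elem_stack[-2].remove(elem)
            try:
                tag_stack.pop()
                elem_stack.pop()
            except IndexError:
                pass

# Tiny in-memory document standing in for potholes.xml
xml = (b"<response><row><row><zip>60616</zip></row>"
       b"<row><zip>60617</zip></row></row></response>")
zips = [row.findtext('zip') for row in parse_and_remove(io.BytesIO(xml), 'row/row')]
```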
# + [markdown] id="beRQ7nCEdvHN"
# # Chapter 7: Functions
# + colab={"base_uri": "https://localhost:8080/"} id="YZJOyCu8dzcO" outputId="30ca4cc3-afaf-4ab7-b1d3-b730256fef26"
funcs = [lambda x, n=n: x+n for n in range(5)]
for f in funcs:
print(f(0))
# + colab={"base_uri": "https://localhost:8080/"} id="qO1Q044PflPS" outputId="7bfba756-69e6-41d3-87d6-caf9f24fb64f"
#Fixed some parameter values
points = [ (1, 2), (3, 4), (5, 6), (7, 8) ]
from functools import partial
import math
def distance(p1, p2):
x1, y1 = p1
x2, y2 = p2
return math.hypot(x2 - x1, y2 - y1)
pt = (4, 3)
points.sort(key=partial(distance,pt))
points
# + colab={"base_uri": "https://localhost:8080/"} id="mZKx6SdDhId-" outputId="2b6363fb-667e-4b13-a6fe-5cd6386a2a30"
def apply_async(func, args, *, callback):
# Compute the result
result = func(*args)
# Invoke the callback with the result
callback(result)
def print_result(result):
print('Got:', result)
def add(x, y):
return x + y
apply_async(add, (2, 3), callback=print_result)
apply_async(add, ('hello', 'world'), callback=print_result)
def make_handler():
sequence = 0
while True:
result = yield
sequence += 1
print('[{}] Got: {}'.format(sequence, result))
handler = make_handler()
next(handler) # Advance to the yield
apply_async(add, (2, 3), callback=handler.send)
apply_async(add, ('hello', 'world'), callback=handler.send)
# + colab={"base_uri": "https://localhost:8080/"} id="BJAMwPDvhkJP" outputId="03ac7a67-cedc-4c0a-8d6d-2d88884839e4"
from queue import Queue
from functools import wraps
class Async:
def __init__(self, func, args):
self.func = func
self.args = args
def inlined_async(func):
@wraps(func)
def wrapper(*args):
f = func(*args)
result_queue = Queue()
result_queue.put(None)
while True:
result = result_queue.get()
try:
a = f.send(result)
apply_async(a.func, a.args, callback=result_queue.put)
except StopIteration:
break
return wrapper
def add(x, y):
return x + y
@inlined_async
def test():
r = yield Async(add, (2, 3))
print(r)
r = yield Async(add, ('hello', 'world'))
print(r)
for n in range(10):
r = yield Async(add, (n, n))
print(r)
print('Goodbye')
test()
# + [markdown] id="DAdc6LToinz7"
# # Chapter 8: Classes and Objects
# + colab={"base_uri": "https://localhost:8080/"} id="qckxksQFirKI" outputId="cf79d69f-1a94-4e65-bf92-14cdca5cc6bb"
class Pair:
def __init__(self, x, y):
self.x = x
self.y = y
def __repr__(self):
return 'Pair({0.x!r}, {0.y!r})'.format(self)
def __str__(self):
return '({0.x!s}, {0.y!s})'.format(self)
p = Pair(3, 4)
print(p)
p
# + id="1cy2Kp8Ejrsk"
#save memory
class Date:
__slots__ = ['year', 'month', 'day']
def __init__(self, year, month, day):
self.year = year
self.month = month
self.day = day
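# A quick, illustrative way to see what __slots__ buys: slotted instances carry
# no per-instance __dict__, and attributes not listed in __slots__ are rejected.

```python
class SlotDate:
    __slots__ = ['year', 'month', 'day']
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

class PlainDate:
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

slotted = SlotDate(2012, 12, 21)
plain = PlainDate(2012, 12, 21)

slotted_has_dict = hasattr(slotted, '__dict__')  # slotted instances have no per-instance dict
plain_has_dict = hasattr(plain, '__dict__')

try:
    slotted.extra = 1   # not declared in __slots__
    extra_allowed = True
except AttributeError:
    extra_allowed = False
```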
# + id="DBHD9Od_k3SD"
# Name mangling: double-underscore attributes are private to the defining class and are not overridden through inheritance
class C(B):
def __init__(self):
super().__init__()
self.__private = 1 # Does not override B.__private
# Does not override B.__private_method()
def __private_method(self):
pass
# + colab={"base_uri": "https://localhost:8080/"} id="hDPgFdq5lYxp" outputId="b52de35f-2a9d-409c-975e-9ce804f77c3f"
import math
# Lazily compute an attribute once and cache the result
# (@property turns a method into a read-only attribute)
def lazyproperty(func):
name = '_lazy_' + func.__name__
@property
def lazy(self):
if hasattr(self, name):
return getattr(self, name)
else:
value = func(self)
setattr(self, name, value)
return value
return lazy
class Circle:
def __init__(self, radius):
self.radius = radius
@lazyproperty
def area(self):
return math.pi * self.radius ** 2
@lazyproperty
def diameter(self):
return self.radius * 2
@property
def perimeter(self):
return 2 * math.pi * self.radius
c = Circle(6.0)
print(c.radius)
print(c.area)
print(c.perimeter)
# + colab={"base_uri": "https://localhost:8080/", "height": 332} id="kDGRKqT6oCCt" outputId="362f760c-2110-4f97-d1a0-de8d58b3d159"
# Decorator for applying type checking
def Typed(expected_type, cls=None):
if cls is None:
return lambda cls: Typed(expected_type, cls)
super_set = cls.__set__
def __set__(self, instance, value):
if not isinstance(value, expected_type):
raise TypeError('expected ' + str(expected_type))
super_set(self, instance, value)
cls.__set__ = __set__
return cls
class Descriptor:
def __init__(self, name=None, **opts):
self.name = name
for key, value in opts.items():
setattr(self, key, value)
def __set__(self, instance, value):
instance.__dict__[self.name] = value
# Decorator for unsigned values
def Unsigned(cls):
super_set = cls.__set__
def __set__(self, instance, value):
if value < 0:
raise ValueError('Expected >= 0')
super_set(self, instance, value)
cls.__set__ = __set__
return cls
# Decorator for allowing sized values
def MaxSized(cls):
super_init = cls.__init__
def __init__(self, name=None, **opts):
if 'size' not in opts:
raise TypeError('missing size option')
super_init(self, name, **opts)
cls.__init__ = __init__
super_set = cls.__set__
def __set__(self, instance, value):
if len(value) >= self.size:
raise ValueError('size must be < ' + str(self.size))
super_set(self, instance, value)
cls.__set__ = __set__
return cls
# Specialized descriptors
@Typed(int)
class Integer(Descriptor):
pass
@Unsigned
class UnsignedInteger(Integer):
pass
@Typed(float)
class Float(Descriptor):
pass
@Unsigned
class UnsignedFloat(Float):
pass
@Typed(str)
class String(Descriptor):
pass
@MaxSized
class SizedString(String):
pass
class Stock:
# Specify constraints
name = SizedString('name', size=8)
shares = UnsignedInteger('shares')
price = UnsignedFloat('price')
def __init__(self, name, shares, price):
self.name = name
self.shares = shares
self.price = price
s=Stock('cdcd',75,25.1)
s.shares = -10
# + colab={"base_uri": "https://localhost:8080/"} id="SQ1w3kuso0NQ" outputId="92aab7ca-cb61-4ddd-dfa5-ff0e776f3095"
#Implement a custom container
import collections.abc
import bisect
class SortedItems(collections.abc.Sequence):
def __init__(self, initial=None):
self._items = sorted(initial) if initial is not None else []
# Required sequence methods
def __getitem__(self, index):
return self._items[index]
def __len__(self):
return len(self._items)
# Method for adding an item in the right location
def add(self, item):
bisect.insort(self._items, item)
items = SortedItems([5, 1, 3])
print(list(items))
print(items[0], items[-1])
items.add(2)
print(list(items))
# + colab={"base_uri": "https://localhost:8080/"} id="GIUa0Ks4sNyt" outputId="b519d21a-a974-49e9-f477-886db4eafbbb"
# classmethod
# A @classmethod receives the class itself (cls) as its first argument instead
# of an instance, so it can be called without instantiating the class; a common
# use is providing alternate constructors.
import time
class Date:
# Primary constructor
def __init__(self, year, month, day):
self.year = year
self.month = month
self.day = day
# Alternate constructor
@classmethod
def today(cls):
t = time.localtime()
return cls(t.tm_year, t.tm_mon, t.tm_mday)
a = Date(2012, 12, 21) # Primary
b = Date.today() # Alternate
# + colab={"base_uri": "https://localhost:8080/"} id="N-dp3aCexznR" outputId="ef973baa-48e4-4c17-efc2-4a06bca6bd5f"
import math
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def __repr__(self):
return 'Point({!r:},{!r:})'.format(self.x, self.y)
def distance(self, x, y):
return math.hypot(self.x - x, self.y - y)
p = Point(2, 3)
d = getattr(p, 'distance')(0, 0) # Calls p.distance(0, 0)
print(d)
import operator
operator.methodcaller('distance', 0, 0)(p)
# + colab={"base_uri": "https://localhost:8080/"} id="EsoydGbcsBTh" outputId="8f0eab43-5912-4232-be6d-6dd7ed8e4f18"
from functools import total_ordering
class Room:
def __init__(self, name, length, width):
self.name = name
self.length = length
self.width = width
self.square_feet = self.length * self.width
@total_ordering
class House:
def __init__(self, name, style):
self.name = name
self.style = style
self.rooms = list()
@property
def living_space_footage(self):
return sum(r.square_feet for r in self.rooms)
def add_room(self, room):
self.rooms.append(room)
def __str__(self):
return '{}: {} square foot {}'.format(self.name,
self.living_space_footage,
self.style)
def __eq__(self, other):
return self.living_space_footage == other.living_space_footage
def __lt__(self, other):
return self.living_space_footage < other.living_space_footage
# Build a few houses, and add rooms to them
h1 = House('h1', 'Cape')
h1.add_room(Room('Master Bedroom', 14, 21))
h1.add_room(Room('Living Room', 18, 20))
h1.add_room(Room('Kitchen', 12, 16))
h1.add_room(Room('Office', 12, 12))
h2 = House('h2', 'Ranch')
h2.add_room(Room('Master Bedroom', 14, 21))
h2.add_room(Room('Living Room', 18, 20))
h2.add_room(Room('Kitchen', 12, 16))
h3 = House('h3', 'Split')
h3.add_room(Room('Master Bedroom', 14, 21))
h3.add_room(Room('Living Room', 18, 20))
h3.add_room(Room('Office', 12, 16))
h3.add_room(Room('Kitchen', 15, 17))
houses = [h1, h2, h3]
print('Is h1 bigger than h2?', h1 > h2) # prints True
print('Is h2 smaller than h3?', h2 < h3) # prints True
print('Is h2 greater than or equal to h1?', h2 >= h1) # Prints False
print('Which one is biggest?', max(houses)) # Prints 'h3: 1101 square foot Split'
print('Which is smallest?', min(houses)) # Prints 'h2: 846 square foot Ranch'
# + [markdown] id="EcbYVBuS4A4O"
# # Chapter 9: metaprogramming
# + id="WJwspPXA4Ble" colab={"base_uri": "https://localhost:8080/"} outputId="66296e3a-a35e-4141-f140-89a19a75a3af"
from functools import wraps
def decorator1(func):
@wraps(func)
def wrapper(*args, **kwargs):
print('Decorator 1')
return func(*args, **kwargs)
return wrapper
def decorator2(func):
@wraps(func)
def wrapper(*args, **kwargs):
print('Decorator 2')
return func(*args, **kwargs)
return wrapper
@decorator1
@decorator2
def add(x, y):
return x + y
print(add(2,3))
print(add.__wrapped__(2, 3))
# + id="2sOuk3czaE1a"
from functools import wraps
import logging
def logged(level, name=None, message=None):
"""
Add logging to a function. level is the logging
level, name is the logger name, and message is the
log message. If name and message aren't specified,
they default to the function's module and name.
"""
def decorate(func):
logname = name if name else func.__module__
log = logging.getLogger(logname)
logmsg = message if message else func.__name__
@wraps(func)
def wrapper(*args, **kwargs):
log.log(level, logmsg)
return func(*args, **kwargs)
return wrapper
return decorate
# Example use
@logged(logging.DEBUG)
def add(x, y):
return x + y
@logged(logging.CRITICAL, 'example')
def spam():
print('Spam!')
# + colab={"base_uri": "https://localhost:8080/", "height": 385} id="9-ANu0v5aPYz" outputId="922efd27-80f7-43cc-ee04-8d2be0fa6d1e"
from inspect import signature
from functools import wraps
def typeassert(*ty_args, **ty_kwargs):
def decorate(func):
# If in optimized mode, disable type checking
if not __debug__:
return func
# Map function argument names to supplied types
sig = signature(func)
bound_types = sig.bind_partial(*ty_args, **ty_kwargs).arguments
@wraps(func)
def wrapper(*args, **kwargs):
bound_values = sig.bind(*args, **kwargs)
# Enforce type assertions across supplied arguments
for name, value in bound_values.arguments.items():
if name in bound_types:
if not isinstance(value, bound_types[name]):
raise TypeError(
'Argument {} must be {}'.format(name, bound_types[name])
)
return func(*args, **kwargs)
return wrapper
return decorate
@typeassert(int, z=str)
def spam(x, y, z=42):
print(x, y, z)
spam(1, 2, 'good')
spam(1, 'hello', 3)
spam(1, 'hello', 'world')
# + colab={"base_uri": "https://localhost:8080/"} id="qT9xNV0gcC-i" outputId="18fca3ea-8d4d-4585-84ca-90f325b3aa00"
import types
from functools import wraps
def profiled(func):
ncalls = 0
@wraps(func)
def wrapper(*args, **kwargs):
nonlocal ncalls
ncalls += 1
return func(*args, **kwargs)
wrapper.ncalls = lambda: ncalls
return wrapper
# Example
@profiled
def add(x, y):
return x + y
print(add(2, 3))
print(add.ncalls())
print(add(2, 3))
print(add.ncalls())
# + colab={"base_uri": "https://localhost:8080/"} id="KWVF8KZLcyNG" outputId="a0057898-08ca-4752-f6fd-a917dfb92f31"
import weakref
class Cached(type):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.__cache = weakref.WeakValueDictionary()
def __call__(self, *args):
if args in self.__cache:
return self.__cache[args]
else:
obj = super().__call__(*args)
self.__cache[args] = obj
return obj
# Example
class Spam(metaclass=Cached):
def __init__(self, name):
print('Creating Spam({!r})'.format(name))
self.name = name
a = Spam('Guido')
b = Spam('Diana')
c = Spam('Guido') # Cached
print(a is b)
print(a is c)
# + colab={"base_uri": "https://localhost:8080/"} id="0tS9s5y_ed9z" outputId="03819df2-b8ab-43db-99c4-c82f5f57e288"
from inspect import signature
def foo(value):
return value
# get signature
foo_sig = signature(foo)
# get parameter
foo_params = foo_sig.parameters
print(foo_sig)
print(foo_params)
# + [markdown] id="GrB4QpJA4A_P"
# # Chapter 10: Modules and Packages
# + id="q_QxGdIXeeJg"
import sys
from os.path import abspath, join, dirname
sys.path.insert(0, join(abspath(dirname(__file__)), 'src'))
# + id="lyYctoCsZdOv"
#publish package
# setup.py
from distutils.core import setup  # distutils is deprecated; prefer setuptools for new projects
setup(name='projectname',
version='1.0',
author='<NAME>',
author_email='<EMAIL>',
url='http://www.you.com/projectname',
packages=['projectname', 'projectname.utils'],
)
# MANIFEST.in:
#   include *.txt
#   recursive-include examples *
#   recursive-include Doc *
# Build a source distribution from the shell:
#   % python3 setup.py sdist
# + [markdown] id="XNYHWgTbaLRB"
# # Chapter 11: Networking and Web programming
# + colab={"base_uri": "https://localhost:8080/"} id="mOeZ76ASaIGC" outputId="266be6ca-a4fc-47d3-8464-cd6418400f1c"
import requests
# Base URL being accessed
url = 'http://httpbin.org/post'
# Dictionary of query parameters (if any)
parms = {
'name1' : 'value1',
'name2' : 'value2'
}
# Extra headers
headers = {
'User-agent' : 'none/ofyourbusiness',
'Spam' : 'Eggs'
}
resp = requests.post(url, data=parms, headers=headers)
# Decoded text returned by the request
text = resp.text
print(text)
r = requests.get('http://httpbin.org/get?name=Dave&n=37',
headers = { 'User-agent': 'goaway/1.0' })
resp = r.json()
print(resp)
# + colab={"base_uri": "https://localhost:8080/"} id="FLc3wNCCbQC3" outputId="f8ee6009-8495-4aa9-a8f0-3eb7025a4afa"
import ipaddress
net = ipaddress.ip_network('192.168.127.12/27')
for a in net:
print(a)
net6 = ipaddress.ip_network('12:3456:78:90ab:cd:ef01:23:30/125')
for a in net6:
print(a)
# + [markdown] id="_RkGQNblcVGC"
# # Chapter 12: Concurrent Programming
# + colab={"base_uri": "https://localhost:8080/"} id="QBMPWPJyaryu" outputId="f10683b6-2948-4522-809d-235ed353d498"
from threading import Thread
import time
class CountdownThread(Thread):
def __init__(self, n):
super().__init__()
self.n = n
def run(self):
while self.n > 0:
print('T-minus', self.n)
self.n -= 1
time.sleep(5)
c = CountdownThread(5)
c.start()
# + id="uHa_TcSdc_ye"
import threading
import time
class PeriodicTimer:
def __init__(self, interval):
self._interval = interval
self._flag = 0
self._cv = threading.Condition()
def start(self):
t = threading.Thread(target=self.run)
t.daemon = True
t.start()
def run(self):
'''
Run the timer and notify waiting threads after each interval
'''
while True:
time.sleep(self._interval)
with self._cv:
self._flag ^= 1
self._cv.notify_all()
def wait_for_tick(self):
'''
Wait for the next tick of the timer
'''
with self._cv:
last_flag = self._flag
while last_flag == self._flag:
self._cv.wait()
# Example use of the timer
ptimer = PeriodicTimer(5)
ptimer.start()
# Two threads that synchronize on the timer
def countdown(nticks):
while nticks > 0:
ptimer.wait_for_tick()
print('T-minus', nticks)
nticks -= 1
def countup(last):
n = 0
while n < last:
ptimer.wait_for_tick()
print('Counting', n)
n += 1
threading.Thread(target=countdown, args=(10,)).start()
threading.Thread(target=countup, args=(5,)).start()
# + id="wkRM3kHpdFDm"
import threading
class SharedCounter:
'''
A counter object that can be shared by multiple threads.
'''
_lock = threading.RLock()
def __init__(self, initial_value = 0):
self._value = initial_value
def incr(self,delta=1):
'''
Increment the counter with locking
'''
with SharedCounter._lock:
self._value += delta
def decr(self,delta=1):
'''
Decrement the counter with locking
'''
with SharedCounter._lock:
self.incr(-delta)
# + colab={"base_uri": "https://localhost:8080/"} id="lkZNb0gZddo4" outputId="99954297-9696-409c-8e29-2a0b28f9c044"
import threading
from contextlib import contextmanager
# Thread-local state to stored information on locks already acquired
_local = threading.local()
@contextmanager
def acquire(*locks):
# Sort locks by object identifier
locks = sorted(locks, key=lambda x: id(x))
# Make sure lock order of previously acquired locks is not violated
acquired = getattr(_local,'acquired',[])
if acquired and max(id(lock) for lock in acquired) >= id(locks[0]):
raise RuntimeError('Lock Order Violation')
# Acquire all of the locks
acquired.extend(locks)
_local.acquired = acquired
try:
for lock in locks:
lock.acquire()
yield
finally:
# Release locks in reverse order of acquisition
for lock in reversed(locks):
lock.release()
del acquired[-len(locks):]
# The philosopher thread
def philosopher(left, right):
while True:
with acquire(left,right):
            print(threading.current_thread(), 'eating')
# The chopsticks (represented by locks)
NSTICKS = 5
chopsticks = [threading.Lock() for n in range(NSTICKS)]
# Create all of the philosophers
for n in range(NSTICKS):
t = threading.Thread(target=philosopher,
args=(chopsticks[n],chopsticks[(n+1) % NSTICKS]))
t.start()
# + colab={"base_uri": "https://localhost:8080/"} id="TL9XZXH3dn4T" outputId="0273d353-407d-4d3d-b540-39e05d6c03e8"
from queue import Queue
from threading import Thread, Event
# Sentinel used for shutdown
class ActorExit(Exception):
pass
class Actor:
def __init__(self):
self._mailbox = Queue()
def send(self, msg):
'''
Send a message to the actor
'''
self._mailbox.put(msg)
def recv(self):
'''
Receive an incoming message
'''
msg = self._mailbox.get()
if msg is ActorExit:
raise ActorExit()
return msg
def close(self):
'''
Close the actor, thus shutting it down
'''
self.send(ActorExit)
def start(self):
'''
Start concurrent execution
'''
self._terminated = Event()
t = Thread(target=self._bootstrap)
t.daemon = True
t.start()
def _bootstrap(self):
try:
self.run()
except ActorExit:
pass
finally:
self._terminated.set()
def join(self):
self._terminated.wait()
def run(self):
'''
Run method to be implemented by the user
'''
while True:
msg = self.recv()
# Sample ActorTask
class PrintActor(Actor):
def run(self):
while True:
msg = self.recv()
print('Got:', msg)
# Sample use
p = PrintActor()
p.start()
p.send('Hello')
p.send('World')
p.close()
p.join()
# + [markdown] id="NNwl2me0eszO"
# # Chapter 13: Scripting and system administration
# + colab={"base_uri": "https://localhost:8080/", "height": 245} id="xXt759sTewTd" outputId="5b116a24-36f0-415c-b6da-6e6ed812d1e8"
# search.py
'''
Hypothetical command-line tool for searching a collection of
files for one or more text patterns.
'''
import argparse
parser = argparse.ArgumentParser(description='Search some files')
parser.add_argument(dest='filenames',metavar='filename', nargs='*')
parser.add_argument('-p', '--pat',metavar='pattern', required=True,
dest='patterns', action='append',
help='text pattern to search for')
parser.add_argument('-v', dest='verbose', action='store_true',
help='verbose mode')
parser.add_argument('-o', dest='outfile', action='store',
help='output file')
parser.add_argument('--speed', dest='speed', action='store',
choices={'slow','fast'}, default='slow',
help='search speed')
args = parser.parse_args()
# Output the collected arguments
print(args.filenames)
print(args.patterns)
print(args.verbose)
print(args.outfile)
print(args.speed)
# + id="YwVwl5t2fcQW"
#Prints all files that have been modified recently
import os
import time
def modified_within(top, seconds):
now = time.time()
for path, dirs, files in os.walk(top):
for name in files:
fullpath = os.path.join(path, name)
if os.path.exists(fullpath):
mtime = os.path.getmtime(fullpath)
if mtime > (now - seconds):
print(fullpath)
if __name__ == '__main__':
import sys
if len(sys.argv) != 3:
print('Usage: {} dir seconds'.format(sys.argv[0]))
raise SystemExit(1)
modified_within(sys.argv[1], float(sys.argv[2]))
# + colab={"base_uri": "https://localhost:8080/", "height": 192} id="mmgrupBKf_d2" outputId="9a8c8a00-8919-4811-bdd7-8e506504f45f"
#limit cpu and memory
import signal
import resource
import os
def time_exceeded(signo, frame):
print("Time's up!")
raise SystemExit(1)
def set_max_runtime(seconds):
# Install the signal handler and set a resource limit
soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
resource.setrlimit(resource.RLIMIT_CPU, (seconds, hard))
signal.signal(signal.SIGXCPU, time_exceeded)
def limit_memory(maxsize):
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (maxsize, hard))
if __name__ == '__main__':
set_max_runtime(15)
while True:
pass
# + colab={"base_uri": "https://localhost:8080/"} id="We3OOtMigMbN" outputId="98e89d99-fb0b-4090-a921-04d0396b2ec3"
import webbrowser
webbrowser.open_new('http://www.python.org')
# + [markdown] id="hhmNhSY_gYdT"
# # Chapter 14: Testing, debugging and exceptions
# + id="HetPxfLfgbFh"
import errno
import unittest
# A simple function to illustrate
def parse_int(s):
return int(s)
class TestConversion(unittest.TestCase):
def test_bad_int(self):
self.assertRaises(ValueError, parse_int, 'N/A')
class TestIO(unittest.TestCase):
def test_file_not_found(self):
try:
f = open('/file/not/found')
except IOError as e:
self.assertEqual(e.errno, errno.ENOENT)
else:
self.fail('IOError not raised')
# + colab={"base_uri": "https://localhost:8080/"} id="nQZTu76JhizF" outputId="0fef504f-c3f7-4028-8a99-6efecbc24139"
def example():
try:
int('N/A')
except ValueError as e:
raise RuntimeError('A parsing error occurred') from e
try:
example()
except RuntimeError as e:
print("It didn't work:", e)
if e.__cause__:
print('Cause:', e.__cause__)
# + id="RBY0Vdr7itK8"
# Profile a whole script from the shell: python3 -m cProfile someprogram.py
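# The same idea can be applied in-process: a sketch profiling a single function
# with cProfile and rendering the hottest entries with pstats.

```python
import cProfile
import io
import pstats

def work():
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
result = work()
profiler.disable()

# Render the five most expensive entries, sorted by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats('cumulative').print_stats(5)
report = stream.getvalue()
```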
| basic_python/basic_python_cookbook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <center> Keras </center>
# ## <center>1.4 Processing output data</center>
# # Processing output data
#
# We will now reshape our output data.
# # Code
# Let us run the code from the previous section first to import the dataset and reshape the input data.
# +
# Importing the MNIST dataset
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Processing the input data
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
# -
# Let us now examine the code related to processing the output data
# +
# Processing the output data
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# -
# The label vector is transformed into a one-hot encoded vector. One-hot encoding means that all entries in the array are 0 except at the index of the label, where the value is 1. <br>
# This transformation is beneficial because the model prediction is also a vector of the same length, consisting of probabilities that sum to one. This makes it easy to calculate the loss directly with the cross-entropy function.
# <img src="img/structure.PNG" width="70%" />
# <br>The prediction is encoded as a 1D array of 10 probabilities, one per class, giving the probability that the image belongs to that class.
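# As a minimal illustration of what `to_categorical` computes (a plain-Python
# sketch, not the Keras implementation):

```python
def one_hot(label, num_classes=10):
    # All entries are 0.0 except at the index of the label
    vec = [0.0] * num_classes
    vec[label] = 1.0
    return vec

encoded = one_hot(5)  # 1.0 at index 5, 0.0 everywhere else
```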
# ## Best Practices
# It is good practice to bring the output data into a form that makes the loss calculation straightforward.
# Let us examine the changes in the output data set
# +
(train_images_old, train_labels_old), (test_images_old, test_labels_old) = mnist.load_data()
print("Before reshaping:")
print("Training labels: " + str(train_labels_old))
print("Test labels: " + str(test_labels_old))
print("\nAfter reshaping:")
print("Training labels: \n" + str(train_labels))
print("Test labels: \n" + str(test_labels))
# -
# # Task
# How would the one-hot encoded vector look for:<br>
# a) 0 <br>
# b) 8
# # Summary
# We have seen in this section how to transform the output data for the network.
# # Feedback
# <a href = "http://goto/ml101_doc/Keras04">Feedback: Process Output data</a> <br>
# # Navigation
# <div>
# <span> <h3 style="display:inline"><< Prev: <a href = "keras03.ipynb">Process input data</a></h3> </span>
# <span style="float: right"><h3 style="display:inline">Next: <a href = "keras05.ipynb">Build a network</a> >> </h3></span>
# </div>
| Keras/keras04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assessing the number of transition options
#
# Here we calculate the number of transition options for the 1627 occupations presented in the Mapping Career Causeways report.
# # 0. Import dependencies and inputs
# +
# %run ../notebook_preamble_Transitions.ipy
import os
data = load_data.Data()
# -
# ## 0.1 Generate 'filtering' matrices
# +
file_name = 'filter_matrices_Report_occupations.pickle'
export_path = f'{data_folder}interim/transitions/{file_name}'
if os.path.exists(export_path):
filter_matrices = pickle.load(open(export_path,'rb'))
print(f'Imported filtering matrices from {export_path}')
else:
# May take about 30 mins
filter_matrices = trans_utils.create_filtering_matrices(
origin_ids='report',
destination_ids= 'report',
export_path = export_path)
# -
# All the different filtering matrices, and the identities of their rows (rows='origin_ids' & cols='destination_ids')
for key in filter_matrices.keys():
print(key)
# # 1. Calculate the number of transitions for each occupation
#
# ## 1.1 Number of desirable transitions
#
# First, we assess transitions that are desirable, meaning that they are both viable (a sufficiently good fit in terms of skills, work activities and work context) and do not incur a major salary loss.
#
# +
# Desirable transitions
n_desirable = np.sum(filter_matrices['F_desirable'], axis=1)
# Highly viable, desirable transitions
n_desirable_and_highly_viable = np.sum(
(filter_matrices['F_desirable'] & filter_matrices['F_highly_viable']), axis=1)
# -
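# The counting pattern above can be sketched with toy boolean matrices (values below are made up purely for illustration): summing a boolean filter matrix along `axis=1` counts the qualifying destinations for each origin occupation, and the element-wise `&` intersects two filters before counting.

```python
import numpy as np

# Toy 3x3 boolean filter matrices (rows = origins, cols = destinations)
F_desirable = np.array([[True, False, True],
                        [False, True, True],
                        [True, True, False]])
F_highly_viable = np.array([[True, True, False],
                            [False, True, False],
                            [False, True, True]])

# Number of desirable destinations per origin
print(np.sum(F_desirable, axis=1))                    # [2 2 2]
# Destinations that are both desirable and highly viable
print(np.sum(F_desirable & F_highly_viable, axis=1))  # [1 1 1]
```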
# ## 1.2 Number of safe and desirable transitions (default condition)
#
# We use the default criteria for safe transitions - a safe destination is defined as not being 'high risk'. We use these results to compare 'high risk' versus 'lower risk' occupations ('lower risk' occupations belonging to the 'other' or 'low risk' categories).
# +
# Safe and desirable transitions
n_safe_desirable = np.sum(filter_matrices['F_safe_desirable'], axis=1)
# Highly viable, safe and desirable transitions
n_safe_desirable_and_highly_viable = np.sum(
(filter_matrices['F_safe_desirable'] & filter_matrices['F_highly_viable']), axis=1)
# -
# ## 1.3 Number of safe and desirable transitions (strict condition)
#
# We use stricter criteria for defining safe transitions when making comparisons solely between 'high risk' occupations.
# +
# Strictly safer and desirable transitions
n_safe_desirable_strict = np.sum(filter_matrices['F_strictly_safe_desirable'], axis=1)
# Highly viable, strictly safe
n_safe_desirable_strict_and_highly_viable = np.sum(
(filter_matrices['F_strictly_safe_desirable'] & filter_matrices['F_highly_viable']), axis=1)
# -
# ## 1.4 Export the transition option results
occ_transitions = data.occ_report[['id', 'concept_uri', 'preferred_label', 'isco_level_4', 'risk_category']].copy()
occ_transitions['n_desirable'] = n_desirable
occ_transitions['n_desirable_and_highly_viable'] = n_desirable_and_highly_viable
occ_transitions['n_safe_desirable'] = n_safe_desirable
occ_transitions['n_safe_desirable_and_highly_viable'] = n_safe_desirable_and_highly_viable
occ_transitions['n_safe_desirable_strict'] = n_safe_desirable_strict
occ_transitions['n_safe_desirable_strict_and_highly_viable'] = n_safe_desirable_strict_and_highly_viable
# Rename the risk categories, as used in the report
occ_transitions.loc[occ_transitions.risk_category.isin(['Low risk', 'Other']), 'risk_category'] = 'Lower risk'
occ_transitions.info()
occ_transitions.groupby('risk_category').agg({'id': 'count'})
# Save
occ_transitions.to_csv(data_folder + 'processed/transitions/ESCO_number_of_transitions.csv', index=False)
| codebase/notebooks/05_transition_analyses/Transitions_02_Number_of_options.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import psycopg2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.simplefilter('ignore')
pd.options.display.max_columns = 300
sns.set_style('darkgrid')
#connect SQL
conn = psycopg2.connect(database='usaspendingdb', user='postgres', password='<PASSWORD>', host='127.0.0.1', port='5432')
sql_cols = ('federal_action_obligation, '
'base_and_exercised_options_value, '
'base_and_all_options_value, '
'awarding_sub_agency_name, '
'awarding_office_name, '
'funding_sub_agency_name, '
'funding_office_name, '
'primary_place_of_performance_state_code, '
'award_or_idv_flag, '
'award_type, '
'type_of_contract_pricing, '
'dod_claimant_program_description, '
'type_of_set_aside_code, '
'contract_bundling, '
'national_interest_action, '
'gfe_gfp, '
'contract_financing, '
'portfolio_group, '
'product_or_service_code_description, '
'naics_bucket_title, '
'naics_description'
)
#Create DF
df = pd.read_sql_query('SELECT ' + sql_cols + ' FROM consolidated_data_filtered_bucketed', con=conn)
df.shape
#Check if there is any null in DF.
df.isnull().sum()
#Drop null rows from 'type_of_set_aside_code' column.
df = df[pd.notnull(df['type_of_set_aside_code'])]
df.shape
def set_aside(c):
if c['type_of_set_aside_code'] == 'NONE':
return 0
else:
return 1
#Create column name 'set_aside' and apply function to populate rows with 0 or 1.
df['set_aside'] = df.apply(set_aside, axis=1)
def contract_value(c):
if c['base_and_exercised_options_value'] > 0:
return c['base_and_exercised_options_value']
elif c['base_and_all_options_value'] > 0:
return c['base_and_all_options_value']
elif c['federal_action_obligation'] > 0:
return c['federal_action_obligation']
else:
return 0
df['contract_value'] = df.apply(contract_value, axis=1)
#Drop columns that we don't need anymore.
df = df.drop(['type_of_set_aside_code','base_and_exercised_options_value','base_and_all_options_value',
'federal_action_obligation'], axis=1)
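# As an aside, the `apply`-based helper above evaluates the priority logic row by row; the same logic can be expressed vectorised with `np.select`, which is typically much faster on large frames. The toy arrays below are made-up values for illustration only:

```python
import numpy as np

# Toy stand-ins for the three value columns
base_exercised = np.array([100, 0, 0, 0])
base_all       = np.array([0, 50, 0, 0])
obligation     = np.array([0, 0, 25, 0])

# First matching condition wins, mirroring the if/elif chain above
contract_value = np.select(
    [base_exercised > 0, base_all > 0, obligation > 0],
    [base_exercised, base_all, obligation],
    default=0,
)
print(contract_value.tolist())  # [100, 50, 25, 0]
```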
df = df.dropna()
df.shape
df = pd.get_dummies(df)
from sklearn.model_selection import train_test_split, cross_val_score
from yellowbrick.classifier import ClassificationReport
from sklearn.ensemble import RandomForestClassifier
X = df.drop(['set_aside'], axis=1)
X
y = df['set_aside']
y
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
model = RandomForestClassifier(n_estimators=17)
model.fit(X_train, y_train)
predictions = model.predict(X_test)
classes = ['None', 'Set Aside']
visualizer = ClassificationReport(model, classes=classes, support=True)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.show()
#Test Single row
a = X.iloc[2:3,:]
a.shape
pred = model.predict(a)
print(pred)
# ### Second Model for Set Aside
df1 = pd.read_sql_query('SELECT ' + sql_cols + ' FROM consolidated_data_filtered_bucketed', con=conn)
none_set_asides = df1[df1['type_of_set_aside_code']== 'NONE'].index
#drop all the set_aside = NONE
df1 = df1.drop(none_set_asides, axis=0)
def contract_value(c):
if c['base_and_exercised_options_value'] > 0:
return c['base_and_exercised_options_value']
elif c['base_and_all_options_value'] > 0:
return c['base_and_all_options_value']
elif c['federal_action_obligation'] > 0:
return c['federal_action_obligation']
else:
return 0
df1['contract_value'] = df1.apply(contract_value, axis=1)
df1['set_aside_number'] = df1['type_of_set_aside_code'].map({'SBA':1, 'WOSB':2, '8A':3, '8AN':4, 'SDVOSBC':5,'HZC':6,
'SBP':7, 'SDVOSBS':8, 'EDWOSB':9, 'WOSBSS':10, 'HZS':11, 'ISBEE':12})
#Drop columns that we don't need anymore.
df1 = df1.drop(['type_of_set_aside_code','base_and_exercised_options_value','base_and_all_options_value',
'federal_action_obligation'], axis=1)
df1 = df1.dropna()
df1.shape
df1 = pd.get_dummies(df1)
df1.shape
X1 = df1.drop(['set_aside_number'], axis=1)
y1 = df1['set_aside_number']
X1_train, X1_test, y1_train, y1_test = train_test_split(X1, y1, test_size=0.33, random_state=42)
model1 = RandomForestClassifier(n_estimators=17)
classes1 = ['SBA', 'WOSB', '8A', '8AN', 'SDVOSBC','HZC', 'SBP', 'SDVOSBS', 'EDWOSB', 'WOSBSS', 'HZS', 'ISBEE']
for i in pred:
if i == 1:
visualizer1 = ClassificationReport(model1, classes=classes1, support=True)
visualizer1.fit(X1_train, y1_train)
visualizer1.score(X1_test, y1_test)
visualizer1.show()
else :
print('Not Set Aside')
y1
#Test single row
b = X1.iloc[:1,:]
b
set_aside_name = model1.predict(b)
print(set_aside_name)
#Decode the predicted set-aside number back to its code name
set_aside_code_names = {1: 'SBA', 2: 'WOSB', 3: '8A', 4: '8AN', 5: 'SDVOSBC', 6: 'HZC',
                        7: 'SBP', 8: 'SDVOSBS', 9: 'EDWOSB', 10: 'WOSBSS', 11: 'HZS', 12: 'ISBEE'}
for j in set_aside_name:
    print(set_aside_code_names.get(j, 'ISBEE'))
| Random Forest/drafts/Random Forest _0.1 _and_All_Set_Aside.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="3eP2v33A7bhh" outputId="394d90b4-ec22-4913-8fb1-fb0f59424c9f"
# don't need on local
# #!pip install git+https://github.com/modichirag/flowpm.git
# + colab={"base_uri": "https://localhost:8080/", "height": 221} id="lHDPSmCL7evy" outputId="39535735-011f-4c6c-af5f-313530850a83"
#Setup
# #%tensorflow_version 1.x
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from scipy.interpolate import InterpolatedUnivariateSpline as iuspline
# #%matplotlib inline
#
# TF1 behavior
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
#import tensorflow as tf
#
from flowpm import linear_field, lpt_init, nbody, cic_paint
import flowpm
import pandas as pd
stages = np.linspace(0.1, 1.0, 30, endpoint=True)
data_url = 'https://raw.githubusercontent.com/modichirag/flowpm/master/flowpm/data/Planck15_a1p00.txt'
D = pd.read_table(
data_url,
sep=" ",
index_col=False,
skiprows=2,
names=["a","b"]
)
D.head()
# + colab={"base_uri": "https://localhost:8080/"} id="gpePidBG_etE" outputId="8d8c6cf3-aebb-4efd-d4ed-f21f12ed1116"
D = D.to_numpy()
klin = D.T[0]
plin = D.T[1]
print(klin)
# + id="n2qLnpveGzjP"
ipklin = iuspline(klin, plin)
# + colab={"base_uri": "https://localhost:8080/", "height": 374} id="k_MOzNSOAtWF" outputId="e29c45ae-aa00-4860-c12e-160e9e886a5a"
tf.reset_default_graph()
N = 64
initial_conditions = flowpm.linear_field(N, # size of the cube
100, # Physical size of the cube
ipklin, # Initial power spectrum
batch_size=16)
# Sample particles
state = flowpm.lpt_init(
initial_conditions,
0.1
)
# Evolve particles down to z=0
final_state = flowpm.nbody(state, stages, N)
# Retrieve final density field
final_field = flowpm.cic_paint(tf.zeros_like(initial_conditions), final_state[0])
# + colab={"base_uri": "https://localhost:8080/"} id="H-xzYDLBI6Ma" outputId="19132c39-a2d8-4b7a-98a6-53436d469380"
#Execute the graph!
with tf.Session() as sess:
ic, istate, fstate, sim = sess.run([initial_conditions, state, final_state, final_field])
# + colab={"base_uri": "https://localhost:8080/", "height": 340} id="aufc5cJoIVvf" outputId="b47366fc-478b-4148-d855-4556b13d6258"
ib = 0 #index of the Universe in the batch
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
ax[0].imshow(ic[ib].sum(axis=0))
ax[0].set_title('Gaussian Initial Conditions')
ax[1].imshow(sim[ib].sum(axis=0))
ax[1].set_title('Final Evolved Universe')
plt.tight_layout()
plt.show()
# -
#Create figure
#Need to convert to physical coordinates
fig=plt.figure(figsize=(10,10))#Create 3D axes
try:
    ax = fig.add_subplot(111, projection="3d")
except Exception:
    ax = Axes3D(fig)
ax.scatter(istate[0, 0,:,0],istate[0, 0,:,1], istate[0, 0,:,2],color="royalblue",marker=".",s=.02)
ax.set_xlabel("x-coordinate",fontsize=14)
ax.set_ylabel("y-coordinate",fontsize=14)
ax.set_zlabel("z-coordinate",fontsize=14)
ax.set_title("Initial conditions of the Universe\n",fontsize=20)
ax.legend(loc="upper left",fontsize=14)
ax.xaxis.set_ticklabels([])
ax.yaxis.set_ticklabels([])
ax.zaxis.set_ticklabels([])
#plt.savefig('3dinitial.png', dpi=1200)
#Create figure
fig=plt.figure(figsize=(10,10))#Create 3D axes
try:
    ax = fig.add_subplot(111, projection="3d")
except Exception:
    ax = Axes3D(fig)
ax.scatter(fstate[0, 0,:,0],fstate[0, 0,:,1], fstate[0, 0,:,2],color="royalblue",marker=".",s=.02)
ax.set_xlabel("x-coordinate",fontsize=14)
ax.set_ylabel("y-coordinate",fontsize=14)
ax.set_zlabel("z-coordinate",fontsize=14)
ax.set_title("Large Scale Structures\n",fontsize=20)
ax.legend(loc="upper left",fontsize=14)
ax.xaxis.set_ticklabels([])
ax.yaxis.set_ticklabels([])
ax.zaxis.set_ticklabels([])
#plt.savefig('3dfinal.png', dpi=1200)
# +
# it works!
# next steps: dig into the specifics, what do each of the parameters mean?
# key ones are probably the linear power spectrum values
| notebooks/lalor/FlowPM_Test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ### 0. Before start
# OK, to begin we need to import some standard Python modules
# +
# -*- coding: utf-8 -*-
"""
Created on Fri Feb 12 13:21:45 2016
@author: GrinevskiyAS
"""
from __future__ import division
import numpy as np
from numpy import sin,cos,tan,pi,sqrt
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# %matplotlib inline
font = {'family': 'Arial', 'weight': 'normal', 'size':14}
mpl.rc('font', **font)
mpl.rc('figure', figsize=(10, 8))
# -
# ### 1. Setup
# First, let us setup the working area.
# +
#This would be the size of each grid cell (X is the spatial coordinate, T is two-way time)
xstep = 5
tstep = 5
#size of the whole grid
xmax = 320
tmax = 220
#that's the arrays of x and t
xarray = np.arange(0, xmax, xstep).astype(float)
tarray = np.arange(0, tmax, tstep).astype(float)
#now, finally, we create a 2D array img, which is all zeros for now; later we will add some amplitudes to it
img = np.zeros((len(xarray), len(tarray)))
# -
# Let's show our all-zero image
plt.imshow(img.T, interpolation = 'none', cmap = cm.Greys, vmin = -2,vmax = 2,
extent = [xarray[0] - xstep/2, xarray[-1] + xstep/2, tarray[-1] + tstep/2, tarray[0] - tstep/2])
# ### 2. Main class definition
#
# What we are now going to do is create a class named **`Hyperbola`**
#
# Each object of this class is capable of computing traveltimes to a certain subsurface point (diffractor) and plotting this point response (hyperbola) on a grid
#
#
# **How?** to more clearly define a class? probably change to a function?
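# The traveltime curve implemented by `Hyperbola` below is the standard diffraction hyperbola for a homogeneous medium; the factor of 2 appears because $t$ is two-way time:
#
# $$ t(x) = \sqrt{t_0^{2} + \left(\frac{2\,(x - x_0)}{v}\right)^{2}} $$
#
# where $(x_0, t_0)$ is the diffractor's position in the data domain and $v$ is the P-wave velocity.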
class Hyperbola:
def __init__(self, xarray, tarray, x0, t0, v=2):
"""
input parameters define a difractor's position (x0,t0),
P-wave velocity of homogeneous subsurface,
and x- and t-arrays to compute traveltimes on.
"""
self.x = xarray
self.x0 = x0
self.t0 = t0
self.v = v
#compute traveltimes
self.t = sqrt(t0**2 + (2*(xarray-x0)/v)**2)
#obtain some grid parameters
xstep = xarray[1]-xarray[0]
tbegin = tarray[0]
tend = tarray[-1]
tstep = tarray[1]-tarray[0]
#delete t's and x's for samples where t exceeds maxt
self.x = self.x[ (self.t >= tbegin) & (self.t <= tend) ]
self.t = self.t[ (self.t >= tbegin) & (self.t <= tend) ]
#compute indices of hyperbola's X coordinates in xarray
#self.imgind = ((self.x - xarray[0])/xstep).astype(int)
#compute how amplitudes decrease according to geometrical spreading
self.amp = 1/(self.t/self.t0)
self.grid_resample(xarray, tarray)
def grid_resample(self, xarray, tarray):
"""that's a function that computes at which 'cells' of image we should place the hyperbola"""
        tend = tarray[-1]
        tstep = tarray[1] - tarray[0]
        xstep = xarray[1] - xarray[0]
        self.xind = ((self.x - xarray[0]) / xstep).astype(int) #X cell indices
self.tind=np.round((self.t - tarray[0])/tstep).astype(int) #T cells numbers
self.tind=self.tind[self.tind*tstep <= tarray[-1]] #delete T's exceeding max T
self.tgrid=tarray[self.tind] # get 'gridded' T-values
#self.coord=np.vstack((self.xind,tarray[self.tind]))
def add_to_img(self, img, wavelet):
"""puts the hyperbola into the right cells of image with a given wavelet"""
maxind = np.size(img,1)
wavlen = np.floor(len(wavelet)/2).astype(int)
# to ensure that we will not exceed image size
#self.imgind=self.xind[self.tind < maxind-wavlen-1]
#self.amp = self.amp[self.tind < maxind-wavlen-1]
#self.tind=self.tind[self.tind < maxind-wavlen-1]
ind_begin = self.tind-wavlen
ind_to_use = self.tind < maxind-wavlen-1
#add amplitudes of the wavelet to the image
for i,sample in enumerate(wavelet):
img[self.xind[ind_to_use], ind_begin[ind_to_use]+i] = img[self.xind[ind_to_use],ind_begin[ind_to_use]+i] + sample*self.amp[ind_to_use]
return img
# For testing purposes, let's create an object named Hyp_test and view its parameters
# +
Hyp_test = Hyperbola(xarray, tarray, x0 = 100, t0 = 30, v = 2)
#Create a fugure and add axes to it
fgr_test1 = plt.figure(figsize=(7,5), facecolor='w')
ax_test1 = fgr_test1.add_subplot(111)
#Now plot Hyp_test's parameters: X vs T
ax_test1.plot(Hyp_test.x, Hyp_test.t, 'r', lw = 2)
#and their 'gridded' equivalents
ax_test1.plot(Hyp_test.x, Hyp_test.tgrid, ls='none', marker='o', ms=6, mfc=[0,0.5,1],mec='none')
#Some commands to add gridlines, change the directon of T axis and move x axis to top
ax_test1.set_ylim(tarray[-1],tarray[0])
ax_test1.xaxis.set_ticks_position('top')
ax_test1.grid(True, alpha = 0.1, ls='-',lw=.5)
ax_test1.set_xlabel('X, m')
ax_test1.set_ylabel('T, ms')
ax_test1.xaxis.set_label_position('top')
plt.show()
# -
# ### 3. Creating the model and 'forward modelling'
#
# OK, now let's define a subsurface model. For the sake of simplicity, the model will consist of two types of objects:
# 1. Point diffractor in a homogeneous medium
# * defined by their coordinates $(x_0, t_0)$ in data domain.
# 2. Plane reflecting surface
# * defined by their end points $(x_1, t_1)$ and $(x_2, t_2)$, also in data domain.
#
#
# We will be able to add any number of these objects to image.
#
# Let's start by adding three point diffractors:
# +
point_diff_x0 = [100, 150, 210]
point_diff_t0 = [100, 50, 70]
plt.scatter(point_diff_x0,point_diff_t0, c='r',s=70)
plt.xlim(0, xmax)
plt.ylim(tmax, 0)
plt.gca().set_xlabel('X, m')
plt.gca().set_ylabel('T, ms')
plt.gca().xaxis.set_ticks_position('top')
plt.gca().xaxis.set_label_position('top')
plt.gca().grid(True, alpha = 0.1, ls='-',lw=.5)
# -
# Next step is computing traveltimes for these subsurface diffractors. This is done by creating an instance of `Hyperbola` class for every diffractor.
#
#
hyps=[]
for x0,t0 in zip(point_diff_x0,point_diff_t0):
hyp_i = Hyperbola(xarray, tarray, x0, t0, v=2)
hyps.append(hyp_i)
# ~~Next step is computing Green's functions for these subsurface diffractors. To do this, we need to setup a wavelet.~~
# Of course, we are going to create an extremely simple wavelet.
# +
wav1 = np.array([-1,2,-1])
plt.axhline(0,c='k')
markerline, stemlines, baseline = plt.stem((np.arange(len(wav1)) - np.floor(len(wav1)/2)).astype(int), wav1)
plt.gca().set_xlim(-2*len(wav1), 2*len(wav1))
plt.gca().set_ylim(np.min(wav1)-1, np.max(wav1)+1)
# +
for hyp_i in hyps:
hyp_i.add_to_img(img,wav1)
plt.imshow(img.T,interpolation='none',cmap=cm.Greys, vmin=-2,vmax=2, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2])
plt.gca().xaxis.set_ticks_position('top')
plt.gca().grid(ls=':', alpha=0.25, lw=1, c='w' )
# -
# Define a **`Line`** class
class Line:
def __init__(self, xmin, xmax, tmin, tmax, xarray, tarray):
"""a Line object is decribed by its start and end coordinates"""
self.xmin=xmin
self.xmax=xmax
self.tmin=tmin
self.tmax=tmax
xstep=xarray[1]-xarray[0]
tstep=tarray[1]-tarray[0]
xmin=xmin-np.mod(xmin,xstep)
xmax=xmax-np.mod(xmax,xstep)
tmin=tmin-np.mod(tmin,tstep)
tmax=tmax-np.mod(tmax,tstep)
self.x = np.arange(xmin,xmax+xstep,xstep)
self.t = tmin+(tmax-tmin)*(self.x-xmin)/(xmax-xmin)
self.imgind=((self.x-xarray[0])/xstep).astype(int)
self.tind=((self.t-tarray[0])/tstep).astype(int)
def add_to_img(self, img, wavelet):
maxind=np.size(img,1)
wavlen=np.floor(len(wavelet)/2).astype(int)
self.imgind=self.imgind[self.tind < maxind-1]
self.tind=self.tind[self.tind < maxind-1]
ind_begin=self.tind-wavlen
for i, sample_amp in enumerate(wavelet):
img[self.imgind,ind_begin+i]=img[self.imgind,ind_begin+i] + sample_amp
return img
# Create a line and add it to image
# +
line1=Line(100,250,50,150,xarray,tarray)
img=line1.add_to_img(img, wav1)
line2=Line(40,270,175,100,xarray,tarray)
img=line2.add_to_img(img, wav1)
plt.imshow(img.T, interpolation='none', cmap=cm.Greys, vmin=-2, vmax=2,
extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2])
plt.gca().xaxis.set_ticks_position('top')
plt.gca().grid(ls=':', alpha=0.25, lw=1, c='w' )
# -
# Excellent. The image now is pretty messy, so we need to migrate it and see what we can achieve
# ### 4. Migration definition
def migrate(img, v, aper, xarray, tarray):
imgmig=np.zeros_like(img)
    xstep = xarray[1] - xarray[0]
    tstep = tarray[1] - tarray[0]
for x0 in xarray:
for t0 in tarray[1:-1]:
#only a region between (x0-aper) and (x0+aper) should be taken into account
xmig=xarray[(x0-aper <= xarray) & (xarray <= x0+aper)]
hi = Hyperbola(xmig,tarray,x0,t0,v)
migind_start = hi.x[0]/xstep
migind_stop = (hi.x[-1]+xstep)/xstep
hi.imgind=np.arange(migind_start, migind_stop).astype(int)
#Sum (in fact, count the mean value of) all the amplitudes on current hyperbola hi
si = np.mean(img[hi.imgind,hi.tind]*hi.amp)
imgmig[(x0/xstep).astype(int),(t0/tstep).astype(int)] = si
return imgmig
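# In words, `migrate()` applies a simple Kirchhoff-style imaging condition: for each image point $(x_0, t_0)$ it averages the recorded amplitudes along the corresponding diffraction hyperbola within the aperture,
#
# $$ m(x_0, t_0) = \frac{1}{N_x}\sum_{x \,\in\, [x_0 - a,\; x_0 + a]} d\big(x,\, t(x)\big)\,\frac{t_0}{t(x)} $$
#
# where $d$ is the input image, $t(x)$ the hyperbola traveltime, $a$ the aperture, and $t_0 / t(x)$ the geometrical-spreading weight (`hi.amp` in the code).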
# ### 5. Migration application
#
vmig = 2
aper = 200
res = migrate(img, vmig, aper, xarray, tarray)
plt.imshow(res.T,interpolation='none',vmin=-2,vmax=2,cmap=cm.Greys, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2])
# Excellent!
#
# The next section is only for interactive parameter selection
#
# ### 6. Interactive parameter change
def migshow(vmig_i, aper_i, gain_i, interp):
res_i = migrate(img, vmig_i, aper_i, xarray, tarray)
if interp:
interp_style = 'bilinear'
else:
interp_style = 'none'
plt.imshow(res_i.T,interpolation=interp_style,vmin=-gain_i,vmax=gain_i,cmap=cm.Greys, extent=[xarray[0]-xstep/2, xarray[-1]+xstep/2, tarray[-1]+tstep/2, tarray[0]-tstep/2])
plt.title('Vmig = '+str(vmig_i))
plt.show()
# +
interact(migshow, vmig_i = widgets.FloatSlider(min = 1.0,max = 3.0, step = 0.01, value=2.0,continuous_update=False,description='Migration velocity: '),
aper_i = widgets.IntSlider(min = 10,max = 500, step = 1, value=200,continuous_update=False,description='Migration aperture: '),
gain_i = widgets.FloatSlider(min = 0.0,max = 5.0, step = 0.1, value=2.0,continuous_update=False,description='Gain: '),
interp = widgets.Checkbox(value=True, description='interpolate'))
#interact(migrate, img=fixed(img), v = widgets.IntSlider(min = 1.0,max = 3.0, step = 0.1, value=2), aper=fixed(aper), xarray=fixed(xarray), tarray=fixed(tarray))
# -
#
#
| EasyMig_v4-interact3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Plot Marginal KDE (for Adult)
import numpy as np
import scipy as sp
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import copy
import importlib
import matplotlib
matplotlib.rcParams['ps.useafm'] = True
matplotlib.rcParams['pdf.use14corefonts'] = True
matplotlib.rcParams['text.usetex'] = True
sns.set_style("white")
# +
#load data
with open('./data/ad_train_drop', 'rb') as handle:
ad_train = pickle.load(handle)
#Move into vectors
y = ad_train['y']
x = ad_train['x'].values
D = ad_train['D']
N = ad_train['N']
with open('./data/ad_test_drop', 'rb') as handle:
ad_test = pickle.load(handle)
#Move into vectors
y_test = ad_test['y']
x_test = ad_test['x'].values
N_test = ad_test['N']
# +
#load parameters
par_nuts = pd.read_pickle('./parameters/par_nuts_logreg_ad_ARD_seed101')
par_advi = pd.read_pickle('./parameters/par_advi_logreg_ad_ARD_seed101')
a =1
b =1
par_bb = pd.read_pickle('./parameters/par_bb_logreg_c0_a{}_b{}_gN_ad_B2000_seed101'.format(a,b))
beta_nuts = par_nuts.iloc[:,9:D+9][0:2000]
alpha_nuts = par_nuts.iloc[:,D+9][0:2000]
beta_advi = par_advi.iloc[:,0:D]
alpha_advi = par_advi.iloc[:,D]
beta_bb = par_bb['beta'][:,0:D]
alpha_bb = par_bb['beta'][:,D]
# +
ind = 13
#ind = 5
f=plt.figure(figsize = (15,4))
plt.subplot(1,3,2)
plt.title('NUTS',fontsize = 18)
sns.distplot(beta_nuts['beta[{}]'.format(ind)])
plt.xlabel(r'$\beta_{{{}}}$'.format(ind),fontsize = 14)
plt.xlim(-1.5,0.5)
plt.ylim(0,6)
plt.ylabel('Posterior density',fontsize = 14)
plt.subplot(1,3,1)
sns.distplot(beta_bb[:,ind-1])
plt.title('Loss-NPL',fontsize = 18)
plt.xlabel(r'$\beta_{{{}}}$'.format(ind),fontsize = 14)
plt.xlim(-1.5,0.5)
plt.ylim(0,6)
plt.ylabel('Posterior density',fontsize = 14)
plt.subplot(1,3,3)
sns.distplot(beta_advi['beta[{}]'.format(ind)])
plt.title('ADVI',fontsize = 18)
plt.xlabel(r'$\beta_{{{}}}$'.format(ind),fontsize = 14)
plt.xlim(-1.5,0.5)
plt.ylim(0,6)
plt.ylabel('Posterior density',fontsize = 14)
# -
| experiments/LogReg_ARD/Plot marginal KDE (for Adult).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="5H2cNt68qs4x"
# # Python Crash Course
# + [markdown] id="oCCCIbvLqs42"
# Hello! This is a quick intro to programming in Python to help you hit the ground running with the _12 Steps to Navier–Stokes_.
#
# There are two ways to enjoy these lessons with Python:
#
# 1. You can download and install a Python distribution on your computer. One option is the free [Anaconda Scientific Python](https://store.continuum.io/cshop/anaconda/) distribution. Another is [Canopy](https://www.enthought.com/products/canopy/academic/), which is free for academic use. Our recommendation is Anaconda.
#
# 2. You can run Python in the cloud using [Wakari](https://wakari.io/) web-based data analysis, for which you need to create a free account. (No software installation required!)
#
# In either case, you will probably want to download a copy of this notebook, or the whole AeroPython collection. We recommend that you then follow along each lesson, experimenting with the code in the notebooks, or typing the code into a separate Python interactive session.
#
# If you decided to work on your local Python installation, you will have to navigate in the terminal to the folder that contains the .ipynb files. Then, to launch the notebook server, just type:
# jupyter notebook
#
# You will get a new browser window or tab with a list of the notebooks available in that folder. Click on one and start working!
# + [markdown] id="nNvaGi1Jqs43"
# ## Libraries
# + [markdown] id="BoAU_HOfqs43"
# Python is a high-level open-source language. But the _Python world_ is inhabited by many packages or libraries that provide useful things like array operations, plotting functions, and much more. We can import libraries of functions to expand the capabilities of Python in our programs.
#
# OK! We'll start by importing a few libraries to help us out. First: our favorite library is **NumPy**, providing a bunch of useful array operations (similar to MATLAB). We will use it a lot! The second library we need is **Matplotlib**, a 2D plotting library which we will use to plot our results.
# The following code will be at the top of most of your programs, so execute this cell first:
# + id="d6jXcEysqs43"
# <-- comments in python are denoted by the pound sign, like this one
import numpy # we import the array library
from matplotlib import pyplot # import plotting library
# + [markdown] id="WDkScI0Lqs44"
# We are importing one library named `numpy` and we are importing a module called `pyplot` of a big library called `matplotlib`.
# To use a function belonging to one of these libraries, we have to tell Python where to look for it. For that, each function name is written following the library name, with a dot in between.
# So if we want to use the NumPy function [linspace()](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html), which creates an array with equally spaced numbers between a start and end, we call it by writing:
# + colab={"base_uri": "https://localhost:8080/"} id="kQOdJTzIqs44" outputId="437c5ee0-c20c-4550-8de6-097ad1f8780f"
myarray = numpy.linspace(0, 5, 10)
myarray
# + [markdown] id="9bHOdgvDqs45"
# If we don't preface the `linspace()` function with `numpy`, Python will throw an error.
# + colab={"base_uri": "https://localhost:8080/", "height": 167} id="j10ZFtL8qs45" outputId="c5fda108-9ac4-442b-99b4-22ebc0915793"
myarray = linspace(0, 5, 10)
# + [markdown] id="G-ao6aioqs46"
# The function `linspace()` is very useful. Try it changing the input parameters!
#
# **Import style:**
#
# You will often see code snippets that use the following lines
# ```Python
# import numpy as np
# import matplotlib.pyplot as plt
# ```
# What's all of this import-as business? It's a way of creating a 'shortcut' to the NumPy library and the pyplot module. You will see it frequently as it is in common usage, but we prefer to keep our imports explicit. We think it helps with code readability.
#
# **Pro tip:**
#
# Sometimes, you'll see people importing a whole library without assigning a shortcut for it (like `from numpy import *`). This saves typing but is sloppy and can get you in trouble. Best to get into good habits from the beginning!
#
#
# To learn new functions available to you, visit the [NumPy Reference](http://docs.scipy.org/doc/numpy/reference/) page. If you are a proficient `Matlab` user, there is a wiki page that should prove helpful to you: [NumPy for Matlab Users](http://wiki.scipy.org/NumPy_for_Matlab_Users)
# + [markdown] id="diCDrVAsqs46"
# ## Variables
# + [markdown] id="w9rizEIzqs46"
# Python doesn't require explicitly declared variable types like C and other languages.
# + id="NbwB8YUlqs46"
a = 5 #a is an integer 5
b = 'five' #b is a string of the word 'five'
c = 5.00 #c is a floating point 5
# + id="7nKhAqBfqs47" outputId="8d2c329a-0441-4740-f171-608991a978da"
type(a)
# + id="1uKOQNayqs47" outputId="fb752973-eb27-4876-e859-97db3abd0f06"
type(b)
# + id="rfC7YTyPqs47" outputId="685c49a1-f2fe-4378-c0f9-71f4ace44268"
type(c)
# + [markdown] id="jz0eSQZtqs47"
# Note that in Python 3, dividing an integer by an integer with `/` always yields a float, even when there is no remainder. (This is *different* from the behavior in Python 2.7, beware!)
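# A few quick checks of Python 3's division behavior:

```python
print(14 / 3)       # 4.666666666666667 -- true division always returns a float
print(14 // 3)      # 4 -- floor division keeps the integer part
print(type(4 / 2))  # <class 'float'> -- a float even when there is no remainder
```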
# + [markdown] id="KUzW6wwDqs48"
# ## Whitespace in Python
# + [markdown] id="bPhTECAeqs48"
# Python uses indents and whitespace to group statements together. To write a short loop in C, you might use:
#
#     for (i = 0; i < 5; i++){
# printf("Hi! \n");
# }
# + [markdown] id="_qsvoqz4qs48"
# Python does not use curly braces like C, so the same program as above is written in Python as follows:
# + id="WWdLMwB3qs48" outputId="38a67f8e-fd7c-44a9-8bb5-8d3feba697a7"
for i in range(5):
print("Hi \n")
# + [markdown] id="ag0PJFxLqs48"
# If you have nested for-loops, there is a further indent for the inner loop.
# + id="yLQx_lQLqs48" outputId="0a9eaffe-6aaf-4601-fae0-1a6f580f44e1"
for i in range(3):
for j in range(3):
print(i, j)
print("This statement is within the i-loop, but not the j-loop")
# + [markdown] id="ylQLN-cIqs49"
# ## Slicing Arrays
# + [markdown] id="s54kY7HQqs49"
# In NumPy, you can look at portions of arrays in the same way as in `Matlab`, with a few extra tricks thrown in. Let's take an array of values from 1 to 5.
# + id="1mShxPcxqs49" outputId="eb2396d3-c41f-4848-dfde-f48281054d93"
myvals = numpy.array([1, 2, 3, 4, 5])
myvals
# + [markdown] id="X40ZNOcOqs49"
# Python uses a **zero-based index**, so let's look at the first and last element in the array `myvals`
# + id="bdLNtgqJqs49" outputId="bd725dd7-cb91-4f49-8144-6259f8312ca4"
myvals[0], myvals[4]
# + [markdown] id="qL_ZEUjIqs4-"
# There are 5 elements in the array `myvals`, but if we try to look at `myvals[5]`, Python will be unhappy, as `myvals[5]` is actually calling the non-existent 6th element of that array.
# + id="EBmp5lcRqs4-" outputId="ea7b4efa-e43d-4ebf-e872-8e5f75aaa03e"
myvals[5]
# + [markdown] id="QNIETCRBqs4-"
# Arrays can also be 'sliced', grabbing a range of values. Let's look at the first three elements
# + id="zuaHBcOwqs4-" outputId="47367a0a-7e42-454f-bac0-a842e98b7667"
myvals[0:3]
# + [markdown] id="m9Pq203lqs4-"
# Note here, the slice is inclusive on the front end and exclusive on the back, so the above command gives us the values of `myvals[0]`, `myvals[1]` and `myvals[2]`, but not `myvals[3]`.
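# A few more slicing variations, using the same zero-based, end-exclusive rules:

```python
import numpy

myvals = numpy.array([1, 2, 3, 4, 5])
print(myvals[0:3])   # [1 2 3] -- start inclusive, stop exclusive
print(myvals[:3])    # the same: a missing start defaults to 0
print(myvals[-2:])   # [4 5]   -- negative indices count from the end
print(myvals[::2])   # [1 3 5] -- every second element
```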
# + [markdown] id="sEWeFxvsqs4-"
# ## Assigning Array Variables
# + [markdown] id="Qw25V1aqqs4_"
# One of the strange little quirks/features in Python that often confuses people comes up when assigning and comparing arrays of values. Here is a quick example. Let's start by defining a 1-D array called $a$:
# + id="c_udfM34qs4_"
a = numpy.linspace(1,5,5)
# + id="JOvxxFdsqs4_" outputId="28d97748-de53-466d-9f53-c87a397a1b1a"
a
# + [markdown] id="8h9crDcIqs4_"
# OK, so we have an array $a$, with the values 1 through 5. I want to make a copy of that array, called $b$, so I'll try the following:
# + id="BVg2jdCmqs4_"
b = a
# + id="Ep0J5BWMqs4_" outputId="52cc8f6e-5c6f-4577-b9fa-b3ec8af5daa6"
b
# + [markdown] id="R6yHxcbdqs5A"
# Great. So $a$ has the values 1 through 5 and now so does $b$. Now that I have a backup of $a$, I can change its values without worrying about losing data (or so I may think!).
# + id="H2OqPmt0qs5A"
a[2] = 17
# + id="T8kTaX0pqs5A" outputId="5de95aea-86fa-467a-8f38-04522baaeb7b"
a
# + [markdown] id="v_GNW7V-qs5A"
# Here, the 3rd element of $a$ has been changed to 17. Now let's check on $b$.
# + id="JxFX6W2Zqs5A" outputId="c0bc8cb5-aa08-43d4-c7ce-23ce2be5f4db"
b
# + [markdown] id="s5P0-139qs5A"
# And that's how things go wrong! When you use a statement like `b = a`, rather than copying all the values of $a$ into a new array called $b$, Python just creates an alias (or a pointer) called $b$ and tells it to route us to $a$. So if we change a value in $a$, then $b$ will reflect that change (technically, this is called *assignment by reference*). If you want to make a true copy of the array, you have to tell Python to copy every element of $a$ into a new array. Let's call it $c$.
# + id="4pa5MPjwqs5B"
c = a.copy()
# + [markdown] id="g4GVVh6lqs5B"
# Now, we can try again to change a value in $a$ and see if the changes are also seen in $c$.
# + id="qkUmCltdqs5B"
a[2] = 3
# + id="fm1q037uqs5B" outputId="b9df550c-cfd5-4a9f-c15e-f68d53d23f19"
a
# + id="agYMRK3Lqs5B" outputId="038309ac-a809-48b0-c877-0157ceb231bc"
c
# + [markdown] id="QKzvvB2Yqs5C"
# OK, it worked! If the difference between `b = a` and `c = a.copy()` is unclear, you should read through this again. This issue will come back to haunt you otherwise.
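# The whole alias-versus-copy story fits in one cell:

```python
import numpy

a = numpy.linspace(1, 5, 5)
b = a          # alias: b points at the same data as a
c = a.copy()   # true copy: c owns its own data
a[2] = 17
print(b[2])    # 17.0 -- the alias sees the change
print(c[2])    # 3.0  -- the copy does not
# NumPy can confirm which arrays share memory:
print(numpy.may_share_memory(a, b))  # True
print(numpy.may_share_memory(a, c))  # False
```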
# + [markdown] id="NIZBtAOlqs5C"
# ## Learn More
# + [markdown] id="8Jlr9L7jqs5C"
# There are a lot of resources online to learn more about using NumPy and other libraries. Just for kicks, here we use Jupyter's feature for embedding videos to point you to a short video on YouTube on using NumPy arrays.
# + id="dlSOJp1aqs5C" outputId="21484bbd-7e4f-4453-affe-9becb6776bbb"
from IPython.display import YouTubeVideo
# a short video about using NumPy arrays, from Enthought
YouTubeVideo('vWkb7VahaXQ')
# + id="XP448ngGqs5C" outputId="f9d6d2a2-37d5-4153-b018-09624fb0fb82"
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
| lessons/00_Quick_Python_Intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Operation of the Temperature Control Laboratory
#
# The primary purpose of this notebook is to test and verify correct operation of the Temperature Control Laboratory. Before starting, be sure the latest version of the TCLab firmware has been installed on the Arduino, and the latest version of the `tclab` library has been installed on your laptop. The library can be installed by copying and running the following command into a new code cell:
#
# # !pip install tclab
#
# Prerequisite libraries, such as `pyserial`, will be installed at the same time. To upgrade a previous installation, use
#
# # !pip install tclab --upgrade
# !pip install tclab --upgrade
# ## Establishing connection with `tclab.TCLab`
#
# The interface to the Temperature Control Laboratory is established by creating an instance of a `TCLab` object from the `tclab` library. When all operations are complete, the instance should be closed. The following cell opens a serial connection to the TCLab hardware over USB, turns on the LED at 100% power, and then closes the connection. The output of this cell will report:
#
# * version number of the tclab Python library
# * device port and speed of the serial connection
# * version number of the TCLab firmware
#
# The LED will return to normal operation after 10 seconds.
# +
from tclab import TCLab
lab = TCLab()
lab.LED(100)
lab.close()
# -
# The Python `with` statement provides a more convenient syntax for coding with the Temperature Control Laboratory. The `with` statement creates an instance of `TCLab` and assigns the name given after `as`. Once the code block is complete, the `TCLab.close()` command is executed automatically. Generally speaking, this is the preferred syntax for coding with TCLab.
# +
from tclab import TCLab
with TCLab() as lab:
lab.LED(100)
# -
# The following code uses the `TCLab.LED()` command to cause the LED to blink on and off at 100% brightness once a second for five seconds. The standard Python library function `time.sleep()` is used to pause execution.
# +
from tclab import TCLab
import time
with TCLab() as lab:
for k in range(0, 5): # repeat 5 times
lab.LED(100) # LED set to 100%
time.sleep(0.5) # sleep for 0.5 seconds
lab.LED(0) # LED set to 0%
time.sleep(0.5) # sleep for 0.5 seconds
# -
# ## Reading the temperature sensors
#
# The temperatures measured by the thermistors located on the TCLab are accessed as if they were Python variables `TCLab.T1` and `TCLab.T2`. The following cell accesses and prints the current temperatures of the two thermistors.
# +
from tclab import TCLab
with TCLab() as lab:
print("Temperature 1: {0:0.2f} C".format(lab.T1))
print("Temperature 2: {0:0.2f} C".format(lab.T2))
# -
# ## Setting Heater Power
#
# Power for the heaters on the Temperature Control Lab is provided through a USB power cord and a USB power supply. To achieve the maximum operating envelope, the power supply should be capable of supplying 2 amps, or 10 watts.
#
# The power to each heater is specified as the product of a maximum power and the percentage of that power to use. For heater 1, the maximum power is specified by the variable `TCLab.P1` and the percentage of power to use by `TCLab.U1`. A similar rule holds for heater 2.
#
# The applied power, in watts, for heaters 1 and 2 are given by
#
# lab = TCLab()
# power_in_watts_heater_1 = 0.013 * lab.P1 * lab.U1 / 100
# power_in_watts_heater_2 = 0.013 * lab.P2 * lab.U2 / 100
#
# where the value 0.013 is a conversion factor from the units used in the code to actual power measured in watts.
# ### Setting maximum power levels `tclab.TCLab.P1` and `tclab.TCLab.P2`
#
# The maximum power available from each heater is adjusted by assigning values in the range 0 to 255 to the variables `TCLab.P1` and `TCLab.P2`. Default values of 200 and 100, respectively, are assigned upon the start of a new connection.
#
# To convert `TCLab.P1` and `TCLab.P2` values to watts, multiply by a conversion factor of 0.013 watts/unit.
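# The conversion can be wrapped in a small helper (the function name is ours, not part of the `tclab` library):

```python
def heater_power_watts(P, U):
    """Applied heater power in watts, from a maximum-power setting P (0-255)
    and a power level U (0-100 %), using the 0.013 W/unit conversion factor."""
    return 0.013 * P * U / 100

print(heater_power_watts(200, 100))  # default P1 at full power: 2.6 W
print(heater_power_watts(100, 50))   # default P2 at half power: 0.65 W
```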
# +
from tclab import TCLab
with TCLab() as lab:
print("Startup conditions")
print("Maximum power of heater 1:", lab.P1, "=", lab.P1*0.013, "watts")
    print("Maximum power of heater 2:", lab.P2, "=", lab.P2*0.013, "watts")
print()
print("Set maximum power to 255")
lab.P1 = 255
lab.P2 = 255
print("Maximum power of heater 1:", lab.P1, "=", lab.P1*0.013, "watts")
    print("Maximum power of heater 2:", lab.P2, "=", lab.P2*0.013, "watts")
# -
# ### Setting heater power levels `tclab.TCLab.U1` and `tclab.TCLab.U2`
#
# The power to each heater can be accessed or set with variables `TCLab.U1` and `TCLab.U2` with values ranging from 0 to 100.
# +
from tclab import TCLab
import time
with TCLab() as lab:
print()
print("Starting Temperature 1: {0:0.2f} C".format(lab.T1), flush=True)
print("Starting Temperature 2: {0:0.2f} C".format(lab.T2), flush=True)
print()
print("Maximum power of heater 1:", lab.P1, "=", 0.013*lab.P1, "watts")
print("Maximum power of heater 2:", lab.P2, "=", 0.013*lab.P2, "watts")
print()
print("Set Heater 1:", lab.Q1(100), "%", flush=True)
print("Set Heater 2:", lab.Q2(100), "%", flush=True)
t_heat = 60
print()
print("Heat for", t_heat, "seconds")
time.sleep(t_heat)
print()
print("Set Heater 1:", lab.Q1(0), "%",flush=True)
print("Set Heater 2:", lab.Q2(0), "%",flush=True)
print()
print("Final Temperature 1: {0:0.2f} C".format(lab.T1))
print("Final Temperature 2: {0:0.2f} C".format(lab.T2))
print()
# -
# ### Setting heater power levels TCLab.Q1() and TCLab.Q2()
#
# Earlier versions of the Temperature Control Lab used functions `TCLab.Q1()` and `TCLab.Q2()` to set the percentage of maximum power applied to heaters 1 and 2, respectively. These are functionally equivalent to assigning values to `TCLab.U1` and `TCLab.U2`.
# ## Synchronizing with real time using `tclab.clock(tperiod, tsample)`
#
# Coding for the Temperature Control Laboratory will typically include a code block that must be executed at regular intervals for a period of time. `tclab.clock(tperiod, tsample)` is a Python iterator designed to simplify coding for those applications, where `tperiod` is the period of operation and `tsample` is the time between executions of the code block.
#
# The following code will print "Hello, World" every two seconds for sixty seconds.
#
#     from tclab import TCLab, clock
#
#     with TCLab() as lab:
#         for t in clock(60, 2):
#             print("Hello, World")
#
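# Under the hood, `tclab.clock` behaves roughly like the following pure-Python generator (a simplified sketch for illustration, not the library's actual implementation):

```python
import time

def clock(tperiod, tsample):
    """Yield the elapsed time every tsample seconds until tperiod has passed."""
    start = time.time()
    t = 0.0
    while t <= tperiod:
        yield round(t, 2)
        # sleep until the next sample time, never a negative amount
        time.sleep(max(0.0, (t + tsample) - (time.time() - start)))
        t = time.time() - start

for t in clock(1, 0.5):
    print(t)  # roughly 0.0, 0.5, 1.0
```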
# The following code cell demonstrates use of `tclab.clock()` to read a series of temperature measurements.
# +
from tclab import TCLab, clock
tperiod = 60
tsample = 2
# connect to the temperature control lab
with TCLab() as lab:
# turn heaters on
print()
lab.P1 = 200
lab.P2 = 100
lab.U1 = 100
lab.U2 = 100
print("Set Heater 1 to {0:0.1f}% of max power of {1:0.1f} units".format(lab.U1, lab.P1))
print("Set Heater 2 to {0:0.1f}% of max power of {1:0.1f} units".format(lab.U2, lab.P2))
# report temperatures for the next tperiod seconds
print()
for t in clock(tperiod, tsample):
print(" {0:5.1f} sec: T1 = {1:0.1f} C T2 = {2:0.1f} C".format(t, lab.T1, lab.T2), flush=True)
lab.U1 = 0
lab.U2 = 0
# -
# ## Data logging with `tclab.Historian`
#
# A common requirement in commercial process control systems is to maintain a log of all important process data. `tclab.Historian` is a simple implementation of a data historian for the Temperature Control Laboratory.
#
# An instance of `tclab.Historian` is created from a list of data sources after a connection has been established with `tclab.TCLab`. The data sources are name/function pairs, where each function produces the value to be logged.
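# The name/function pattern can be illustrated with a minimal stand-in for the historian (`MiniHistorian` is an illustrative sketch, not the actual tclab implementation):

```python
class MiniHistorian:
    """Append the value of each named source every time update(t) is called."""
    def __init__(self, sources):
        self.sources = sources
        self.log = {'Time': []}
        for name, _ in sources:
            self.log[name] = []
    def update(self, t):
        self.log['Time'].append(t)
        for name, fn in self.sources:
            self.log[name].append(fn())

temperature = 20.0
h = MiniHistorian([('T1', lambda: temperature)])
h.update(0)
temperature = 21.5
h.update(2)
print(h.log)  # {'Time': [0, 2], 'T1': [20.0, 21.5]}
```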
# +
from tclab import TCLab, clock, Historian
with TCLab() as lab:
sources = [
('T1', lambda: lab.T1),
('T2', lambda: lab.T2),
('P1', lambda: lab.P1),
('U1', lambda: lab.U1),
('P2', lambda: lab.P2),
('U2', lambda: lab.U2),
]
h = Historian(sources)
lab.U1 = 100
for t in clock(30, 2):
h.update(t)
# +
# plotting
import matplotlib.pyplot as plt
t, T1, T2, P1, U1, P2, U2 = h.fields
plt.plot(t, T1, t, T2)
plt.legend(['T1', 'T2'])
# saving to a file
h.to_csv('data.csv')
# reading from file and plotting with pandas
import pandas as pd
data = pd.read_csv('data.csv')
data.index = data['Time']
print(data)
data.plot(grid=True)
# -
# ## Real time plotting with `tclab.Plotter`
#
# The `tclab.Plotter` class adds to the functionality of `tclab.Historian` by providing real-time plotting of experimental data. An instance of a plotter is created after an historian has been created.
#
# h = Historian(sources)
# p = Plotter(h, tperiod)
#
# The plotter is updated by
#
# p.update(t)
#
# which also updates the associated historian.
# +
from tclab import TCLab, clock, Historian, Plotter
tperiod = 120
tsample = 2
with TCLab() as lab:
sources = [
('T1', lambda: lab.T1),
('T2', lambda: lab.T2),
('P1', lambda: lab.P1),
('U1', lambda: lab.U1),
('P2', lambda: lab.P2),
('U2', lambda: lab.U2),
]
h = Historian(sources)
p = Plotter(h, 60)
lab.P1 = 255
lab.U1 = 100
for t in clock(tperiod, tsample):
p.update(t)
# -
# ## Example Application: Relay Control
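# The on/off (bang-bang) control law used in the cell below can be factored into a one-line function:

```python
def relay(T, Tsetpoint, u_on=100, u_off=0):
    """Full heater power when below the setpoint, off otherwise."""
    return u_on if T < Tsetpoint else u_off

print(relay(30, 35))  # 100 -- heater on
print(relay(36, 35))  # 0   -- heater off
```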
# +
from tclab import TCLab, clock, Historian, Plotter
tperiod = 120
tsample = 2
Tsetpoint = 35
with TCLab() as lab:
sources = [
('T1', lambda: lab.T1),
('P1', lambda: lab.P1),
('U1', lambda: lab.U1),
]
h = Historian(sources)
p = Plotter(h, 60)
lab.P1 = 255
for t in clock(tperiod, tsample):
lab.U1 = 100 if lab.T1 < Tsetpoint else 0
p.update(t)
| notebooks/02.00-Testing-your-TCLab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# This is virtually untouched neural synth. The output of neural synth is interesting;
# the problem with it is that it seeks to fill all the space in a canvas.
# This method builds on earlier work:
# apply a GAN-generated composition mask to the output of neural synth.
# The end result should be a more aesthetically appealing image than vanilla
# neural synth (or other methods such as Deep Dream).
# !wget -P ../data/ https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
# !unzip ../data/inception5h.zip -d ../data/inception5h/
# !rm ../data/inception5h.zip
# -
from __future__ import print_function
from io import BytesIO
import math, time, copy, json, os
import glob
from os import listdir
from os.path import isfile, join
from random import random
from io import BytesIO
from enum import Enum
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import numpy as np
import scipy.misc
import tensorflow as tf
from lapnorm import *
for l, layer in enumerate(layers):
layer = layer.split("/")[1]
num_channels = T(layer).shape[3]
print(layer, num_channels)
def render_naive(t_obj, img0, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
return img
img0 = np.random.uniform(size=(200, 200, 3)) + 100.0
layer = 'mixed4d_5x5_bottleneck_pre_relu'
channel = 20
img1 = render_naive(T(layer)[:,:,:,channel], img0, 40, 1.0)
display_image(img1)
def render_multiscale(t_obj, img0, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print("octave %d/%d"%(octave+1, octave_n))
clear_output()
return img
# +
h, w = 200, 200
octave_n = 3
octave_scale = 1.4
iter_n = 30
img0 = np.random.uniform(size=(h, w, 3)) + 100.0
layer = 'mixed4d_5x5_bottleneck_pre_relu'
channel = 25
img1 = render_multiscale(T(layer)[:,:,:,channel], img0, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
# -
def render_lapnorm(t_obj, img0, iter_n=10, step=1.0, oct_n=3, oct_s=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(oct_n):
if octave>0:
hw = np.float32(img.shape[:2])*oct_s
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end='')
print("octave %d/%d"%(octave+1, oct_n))
clear_output()
return img
# +
h, w = 300, 400
octave_n = 3
octave_scale = 1.4
iter_n = 10
img0 = np.random.uniform(size=(h, w, 3)) + 100.0
layer = 'mixed5a_5x5_bottleneck_pre_relu'
channel = 25
img1 = render_lapnorm(T(layer)[:,:,:,channel], img0, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
# -
def lapnorm_multi(t_obj, img0, mask, iter_n=10, step=1.0, oct_n=3, oct_s=1.4, lap_n=4, clear=True):
mask_sizes = get_mask_sizes(mask.shape[0:2], oct_n, oct_s)
img0 = resize(img0, np.int32(mask_sizes[0]))
t_score = [tf.reduce_mean(t) for t in t_obj] # defining the optimization objective
t_grad = [tf.gradients(t, t_input)[0] for t in t_score] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(oct_n):
if octave>0:
hw = mask_sizes[octave] #np.float32(img.shape[:2])*oct_s
img = resize(img, np.int32(hw))
oct_mask = resize(mask, np.int32(mask_sizes[octave]))
for i in range(iter_n):
g_tiled = [lap_norm_func(calc_grad_tiled(img, t)) for t in t_grad]
for g, gt in enumerate(g_tiled):
img += gt * step * oct_mask[:,:,g].reshape((oct_mask.shape[0],oct_mask.shape[1],1))
print('.', end='')
print("octave %d/%d"%(octave+1, oct_n))
if clear:
clear_output()
return img
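# `lapnorm_multi` relies on `get_mask_sizes` from the `lapnorm` module. A plausible sketch of what it computes (our guess at the behavior, not the module's actual code): octave sizes that grow by a factor of `oct_s` up to the full mask size.

```python
import numpy as np

def get_mask_sizes(hw, oct_n, oct_s):
    # smallest octave first; the last octave matches the full mask size hw
    return [np.int32(np.float32(hw) / oct_s ** (oct_n - 1 - i)) for i in range(oct_n)]

for size in get_mask_sizes((300, 400), 3, 1.4):
    print(size)
```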
# +
h, w = 300, 400
octave_n = 3
octave_scale = 1.4
iter_n = 10
img0 = np.random.uniform(size=(h, w, 3)) + 100.0
objectives = [T('mixed3a_3x3_bottleneck_pre_relu')[:,:,:,25],
T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,15]]
# mask
mask = np.zeros((h, w, 2))
mask[:150,:,0] = 1.0
mask[150:,:,1] = 1.0
img1 = lapnorm_multi(objectives, img0, mask, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
# +
h, w = 256, 1024
img0 = np.random.uniform(size=(h, w, 3)) + 100.0
octave_n = 3
octave_scale = 1.4
objectives = [T('mixed3b_5x5_bottleneck_pre_relu')[:,:,:,9],
T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,17]]
mask = np.zeros((h, w, 2))
mask[:,:,0] = np.linspace(0,1,w)
mask[:,:,1] = np.linspace(1,0,w)
img1 = lapnorm_multi(objectives, img0, mask, iter_n=20, step=1.0, oct_n=3, oct_s=1.4, lap_n=4)
print("image")
display_image(img1)
print("mask")
display_image(255*mask[:,:,0])
# +
h, w = 200, 200
# start with random noise
img = np.random.uniform(size=(h, w, 3)) + 100.0
octave_n = 3
octave_scale = 1.4
objectives = [T('mixed5a_5x5_bottleneck_pre_relu')[:,:,:,11]]
mask = np.ones((h, w, 1))
# repeat the generation loop 20 times. notice the feedback -- we make img and then use it the initial input
for f in range(20):
mask[:f*10,f*10:] = np.linspace(2, 1, 1)
mask[f*10:,:f*10] = np.linspace(2, 1, 1)
img = lapnorm_multi(objectives, img, mask, iter_n=20, step=1.0, oct_n=3, oct_s=1.4, lap_n=4, clear=False)
display_image(img) # let's see it
img = resize(img[10:-10,10:-10,:], (h, w)) # before looping back, crop the border by 10 pixels, resize, repeat
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Real or Not
#
# * **OBJECTIVE** (Quoted from Kaggle):
#     - Twitter has become an important communication channel in times of emergency. The ubiquitousness of smartphones enables people to announce an emergency they’re observing in real-time. Because of this, more agencies are interested in programmatically monitoring Twitter (i.e. disaster relief organizations and news agencies).
#     - But, it’s not always clear whether a person’s words are actually announcing a disaster.
#     - In this competition, you’re challenged to build a machine learning model that predicts which Tweets are about real disasters and which ones aren’t. You’ll have access to a dataset of 10,000 tweets that were hand classified.
#
#
# * **DATA** Columns:
# - **id** - a unique identifier for each tweet
# - **text** - the text of the tweet
# - **location** - the location the tweet was sent from (may be blank)
# - **keyword** - a particular keyword from the tweet (may be blank)
# - **target** - in train.csv only, this denotes whether a tweet is about a real disaster (1) or not (0)
#
#
#
# * My Approach:
# - I used a Target Encoder to convert the 'keywords' into a probability distribution.
# - I dropped 'location', as it was a messy column. But I may consider including it down the line.
# - While counts of hashtags, mentions, text length etc showed low correlation with target in the training dataset, some additional features have been added to the dataset.
# - The 'text' feature has been passed through a TweetTokenizer and lemmatized using the nltk package.
# - Hashtags have been extracted into an additional feature column.
#     - Finally, TFIDF has been used on 'hashtags' and cleaned 'text' separately. Along with 'keyword_target' and some additional numerical features, the dataset has been run through hyperopt to identify the model and parameters that give the highest f1 score.
#
#
# * **Result**:
# - An 85% f1 score was obtained on the training data.
#     - A 78% f1 score was obtained on the test data after submission to Kaggle.
# - Further analysis is underway.
# # Imports & read in data
import pandas as pd
import numpy as np
import seaborn as sns
import string
import re
import category_encoders as ce
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from nltk.tokenize import word_tokenize, sent_tokenize, regexp_tokenize, TweetTokenizer
from sklearn.metrics import classification_report, recall_score, f1_score, accuracy_score
from sklearn.metrics import confusion_matrix, precision_recall_curve
from sklearn.feature_extraction.text import CountVectorizer
from hyperopt import hp, fmin, tpe, Trials, STATUS_OK
from nltk import pos_tag
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.preprocessing import LabelEncoder
from collections import defaultdict
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import model_selection, naive_bayes, svm
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
import nltk
nltk.download('stopwords')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')
nltk.download('punkt')
np.random.seed(500)
# %cd ~
train = pd.read_csv("../input/realornot/train.csv")
test = pd.read_csv("../input/realornot/test.csv")
sub = pd.read_csv("../input/realornot/sample_submission.csv")
# # Explore Data
# ## Getting a sense of data
train.info()
test.info()
train.shape, test.shape, sub.shape
sum(train.duplicated()), sum(test.duplicated())
# ## Target class distribution
sns.countplot(train.target)
# ## Focussing on individual columns
# ### keywords
# Find blank rows
print(train.isnull().any())
print(test.isnull().any())
train.keyword.unique()
sum(train.keyword.unique()!=test.keyword.unique())
# +
kw_d = train[train.target==1].keyword.value_counts().head(10)
kw_nd = train[train.target==0].keyword.value_counts().head(10)
plt.figure(figsize=(13,5))
plt.subplot(121)
sns.barplot(kw_d, kw_d.index, color='c')
plt.title('Top keywords for disaster tweets')
plt.subplot(122)
sns.barplot(kw_nd, kw_nd.index, color='y')
plt.title('Top keywords for non-disaster tweets')
plt.show()
# +
# Target encoding
encoder = ce.TargetEncoder(cols=['keyword'])
encoder.fit(train['keyword'],train['target'])
train = train.join(encoder.transform(train['keyword']).add_suffix('_target'))
test = test.join(encoder.transform(test['keyword']).add_suffix('_target'))
# -
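# In essence, a target encoder replaces each category with the mean target value for that category (the real `TargetEncoder` also applies smoothing toward the global mean, so its values differ slightly):

```python
import pandas as pd

df = pd.DataFrame({'keyword': ['fire', 'fire', 'flood', 'flood', 'flood'],
                   'target':  [1, 1, 0, 1, 0]})
means = df.groupby('keyword')['target'].mean()
print(means)  # fire -> 1.0, flood -> 0.333...
```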
# ### location
len(train.location.unique()), train.location.unique()
sum(train.keyword.unique()!=test.keyword.unique())
# ### text : elements & correlations
# +
def clean_text(text):
text = re.sub(r'https?://\S+', '', text) # Remove link
text = re.sub(r'\n',' ', text) # Remove line breaks
text = re.sub('\s+', ' ', text).strip() # Remove leading, trailing, and extra spaces
return text
def find_hashtags(tweet):
return " ".join([match.group(0)[1:] for match in re.finditer(r"#\w+", tweet)]) or 'no'
def find_mentions(tweet):
return " ".join([match.group(0)[1:] for match in re.finditer(r"@\w+", tweet)]) or 'no'
def find_links(tweet):
return " ".join([match.group(0)[:] for match in re.finditer(r"https?://\S+", tweet)]) or 'no'
def find_emojis(tweet):
    # note: quote marks and '|' are literal characters inside a character class,
    # so they are removed here; also keep the full match rather than dropping its first character
    emoji = "[\U0001F300-\U0001F5FF\U0001F600-\U0001F64F\U0001F680-\U0001F6FF\u2600-\u26FF\u2700-\u27BF]"
    return " ".join([match.group(0) for match in re.finditer(emoji, tweet)]) or 'no'
def process_text(df):
df['text_clean'] = df['text'].apply(lambda x: clean_text(x))
df['hashtags'] = df['text'].apply(lambda x: find_hashtags(x))
df['mentions'] = df['text'].apply(lambda x: find_mentions(x))
df['links'] = df['text'].apply(lambda x: find_links(x))
df['emojis'] = df['text'].apply(lambda x: find_emojis(x))
# df['hashtags'].fillna(value='no', inplace=True)
# df['mentions'].fillna(value='no', inplace=True)
return df
# -
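# A quick self-contained check of the hashtag regex used above (the function is redefined here so the cell runs on its own):

```python
import re

def find_hashtags(tweet):
    # same pattern as above: match #word tokens and strip the leading '#'
    return " ".join(m.group(0)[1:] for m in re.finditer(r"#\w+", tweet)) or 'no'

print(find_hashtags("Forest #fire near #LaRonge Sask. Canada"))  # fire LaRonge
print(find_hashtags("no hashtags in this tweet"))                # no
```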
train = process_text(train)
test = process_text(test)
# +
def create_stat(df):
# Tweet length
df['text_len'] = df['text_clean'].apply(len)
# Word count
df['word_count'] = df["text_clean"].apply(lambda x: len(str(x).split()))
# Stopword count
df['stop_word_count'] = df['text_clean'].apply(lambda x: len([w for w in str(x).lower().split() if w in stopwords.words('english')]))
# Punctuation count
df['punctuation_count'] = df['text_clean'].apply(lambda x: len([c for c in str(x) if c in string.punctuation]))
# Count of hashtags (#)
df['hashtag_count'] = df['hashtags'].apply(lambda x: len(str(x).split()))
# Count of mentions (@)
df['mention_count'] = df['mentions'].apply(lambda x: len(str(x).split()))
# Count of links
df['link_count'] = df['links'].apply(lambda x: len(str(x).split()))
# Count of uppercase letters
df['caps_count'] = df['text_clean'].apply(lambda x: sum(1 for c in str(x) if c.isupper()))
# Ratio of uppercase letters
df['caps_ratio'] = df['caps_count'] / df['text_len']
return df
train = create_stat(train)
test = create_stat(test)
# -
train.corr()['target'].drop('target').sort_values()
# * **Little correlation, so these above characteristics of text can be dropped.**
# * **The information in the *mentions* will likely be irrelevant. But the information in the *hashtags* may be useful.**
# +
# Any lowercase
#print(sum(train.keyword.str.islower()), sum(train.text.str.islower()))
# Change all the text to lower case. This is required as Python interprets 'dog' and 'DOG' differently.
train['text_lower']=[doc.lower() for doc in train.text]
test['text_lower']=[doc.lower() for doc in test.text]
# -
# Step c: Tokenization. Each entry in the corpus will be broken into a set of words.
tknzr=TweetTokenizer(strip_handles=True, reduce_len=True)
train['text_token']=[tknzr.tokenize(doc) for doc in train.text_lower]
test['text_token']=[tknzr.tokenize(doc) for doc in test.text_lower]
# +
# Remove stop words and non-alphabetic tokens, and perform word stemming/lemmatization.
# WordNetLemmatizer requires POS tags to understand whether the word is a noun, verb, adjective, etc.
# By default it is set to noun.
lemmatizer = WordNetLemmatizer()
def nltk_tag_to_wordnet_tag(nltk_tag):
if nltk_tag.startswith('J'):
return wn.ADJ
elif nltk_tag.startswith('V'):
return wn.VERB
elif nltk_tag.startswith('N'):
return wn.NOUN
elif nltk_tag.startswith('R'):
return wn.ADV
else:
return wn.NOUN
train['text_final']=np.zeros(len(train))
for i in range(len(train.text_token)):
train.text_final[i]=[t for t in train.text_token[i] if t.isalpha() and t not in stopwords.words('english')]
train.text_final[i]=[lemmatizer.lemmatize(word, nltk_tag_to_wordnet_tag(tag)) for word,tag in pos_tag(train.text_final[i])]
train.text_final[i]=str(train.text_final[i])
test['text_final']=np.zeros(len(test))
for i in range(len(test.text_token)):
test.text_final[i]=[t for t in test.text_token[i] if t.isalpha() and t not in stopwords.words('english')]
test.text_final[i]=[lemmatizer.lemmatize(word, nltk_tag_to_wordnet_tag(tag)) for word,tag in pos_tag(test.text_final[i])]
test.text_final[i]=str(test.text_final[i])
# -
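# The POS-tag mapping above boils down to the first letter of the Penn Treebank tag. A dependency-free sketch (the single-letter codes 'a', 'v', 'n', 'r' are the values of WordNet's ADJ, VERB, NOUN, and ADV constants):

```python
def tag_to_wordnet(nltk_tag):
    # J -> adjective, V -> verb, N -> noun, R -> adverb; everything else -> noun
    return {'J': 'a', 'V': 'v', 'N': 'n', 'R': 'r'}.get(nltk_tag[:1], 'n')

print(tag_to_wordnet('JJ'))   # a
print(tag_to_wordnet('VBD'))  # v
print(tag_to_wordnet('DT'))   # n  (default)
```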
# Vectorize columns
Tfidf_vect = TfidfVectorizer(max_features=5000, min_df = 10, ngram_range = (1,2))
# +
text_vec = Tfidf_vect.fit_transform(train['text_final'])
text_vec_test = Tfidf_vect.transform(test['text_final'])
X_train_text = pd.DataFrame(text_vec.toarray(), columns=Tfidf_vect.get_feature_names())
X_test_text = pd.DataFrame(text_vec_test.toarray(), columns=Tfidf_vect.get_feature_names())
train = train.join(X_train_text, rsuffix='_text')
test = test.join(X_test_text, rsuffix='_text')
print(Tfidf_vect.vocabulary_)
# -
train.head()
# ### Hashtags
# +
vec_hash = CountVectorizer(min_df = 5)
hash_vec = vec_hash.fit_transform(train['hashtags'])
hash_vec_test = vec_hash.transform(test['hashtags'])
X_train_hash = pd.DataFrame(hash_vec.toarray(), columns=vec_hash.get_feature_names())
X_test_hash = pd.DataFrame(hash_vec_test.toarray(), columns=vec_hash.get_feature_names())
print (X_train_hash.shape)
train = train.join(X_train_hash, rsuffix='_hashtag')
test = test.join(X_test_hash, rsuffix='_hashtag')
# +
# Extract hashtags into a new column
temp=[]
for i in range(len(train.text_lower)):
temp.append(re.findall(r'#(\w+)', train.text_lower[i]))
train['text_hash']=temp
temp=[]
for i in range(len(test.text_lower)):
temp.append(re.findall(r'#(\w+)', test.text_lower[i]))
test['text_hash']=temp
"""
# Make string
for i in range(len(train.text_hash)):
train.text_hash[i]=str(train.text_hash[i])
# Replace empty list is text_hash column
for i in range(len(test.text_hash)):
test.text_hash[i]=str(test.text_hash[i])
"""
# +
# Re-append processed hashtags
for i in range(len(train.text_final)):
train.text_hash[i]=[t for t in train.text_hash[i] if t.isalpha() and t not in stopwords.words('english')]
train.text_hash[i]=[lemmatizer.lemmatize(word, nltk_tag_to_wordnet_tag(tag)) for word,tag in pos_tag(train.text_hash[i])]
train.text_final[i].extend(train.text_hash[i])
train.text_final[i]=str(train.text_final[i])
for i in range(len(test.text_final)):
test.text_hash[i]=[t for t in test.text_hash[i] if t.isalpha() and t not in stopwords.words('english')]
test.text_hash[i]=[lemmatizer.lemmatize(word, nltk_tag_to_wordnet_tag(tag)) for word,tag in pos_tag(test.text_hash[i])]
test.text_final[i].extend(test.text_hash[i])
test.text_final[i]=str(test.text_final[i])
# -
# # Trying out other classification models
#
# * Using hyperopt
features_to_drop = ['id', 'keyword','location','text', 'target','text_clean', 'hashtags',
'mentions','links', 'emojis', 'text_lower', 'text_token', 'text_final']
X_train = train.drop(columns=features_to_drop+['target','target_text'])
X_test = test.drop(columns=features_to_drop)
y_train = train.target
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
#X_test = scaler.transform(X_test)
pd.DataFrame(X_train).head()  # fit_transform returns an ndarray, so wrap it to preview
models = {'logistic_regression' : LogisticRegression,
'rf' : RandomForestClassifier,
'naive_bayes' : BernoulliNB, 'svc' : SVC
}
def search_space(model):
model = model.lower()
space = {}
if model == 'naive_bayes':
space = {'alpha': hp.choice('alpha', [0,1])
}
elif model == 'svc':
space = {'C': hp.lognormal('C', 0, 1.0),
'kernel': hp.choice('kernel', ['linear', 'sigmoid', 'poly', 'rbf']),
'gamma': hp.uniform('gamma', 0, 20)
}
elif model == 'logistic_regression':
space = {'warm_start' : hp.choice('warm_start', [True, False]),
'fit_intercept' : hp.choice('fit_intercept', [True, False]),
'tol' : hp.uniform('tol', 0.00001, 0.0001),
'C' : hp.uniform('C', 0.05, 3),
'solver' : hp.choice('solver', ['newton-cg', 'lbfgs', 'liblinear']),
'max_iter' : hp.choice('max_iter', range(100,1000)),
'class_weight' : 'balanced',
'n_jobs' : -1
}
elif model == 'rf':
space = {'max_depth': hp.choice('max_depth', range(1,20)),
'n_estimators': hp.choice('n_estimators', range(50,300)),
#'n_estimators': 150,
#'criterion': hp.choice('criterion', ["gini", "entropy"]),
'criterion' : 'gini',
'min_samples_split': hp.choice("min_samples_split", range(2,40)),
'n_jobs' : -1
}
space['model'] = model
return space
def get_acc_status(clf,X,y):
acc = cross_val_score(clf, X, y, cv=3, scoring='f1').mean()
#y_pred = clf.fit(X,y).predict(X_test)
#print(confusion_matrix(y_hold, y_pred))
#print(classification_report(y_hold, y_pred))
return {'loss': -acc, 'status': STATUS_OK}
def obj_fnc(params) :
model = params.get('model').lower()
del params['model']
clf = models[model](**params)
return(get_acc_status(clf,X_train,y_train))
# ## Random Forest
# +
model= 'rf'
best_params = fmin(obj_fnc,
search_space(model),
algo=tpe.suggest,
max_evals=100)
print(best_params)
# with bigrams
#{'max_depth': 18, 'min_samples_split': 6, 'n_estimators': 182}
# -
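# Note that for `hp.choice` parameters, `fmin` reports the *index* into the choice list, not the chosen value (`hyperopt.space_eval` can map a result back automatically). A hand-decoding sketch with hypothetical raw output:

```python
# Choice lists as defined in the search space above
kernel_choices = ['linear', 'sigmoid', 'poly', 'rbf']
solver_choices = ['newton-cg', 'lbfgs', 'liblinear']

raw_best = {'kernel': 3, 'solver': 2, 'C': 1.54}  # hypothetical fmin output
decoded = dict(raw_best,
               kernel=kernel_choices[raw_best['kernel']],
               solver=solver_choices[raw_best['solver']])
print(decoded)  # {'kernel': 'rbf', 'solver': 'liblinear', 'C': 1.54}
```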
rf=RandomForestClassifier(criterion='gini', max_depth= 18, min_samples_split = 0.5070744524836673, n_estimators= 157)
y_pred = rf.fit(X_train, y_train).predict(X_train)
print(confusion_matrix(y_train, y_pred))
print(classification_report(y_train, y_pred))
# ## Logistic Regression
# +
model= 'logistic_regression'
best_params = fmin(obj_fnc,
search_space(model),
algo=tpe.suggest,
max_evals=100)
print(best_params)
#first
#{'C': 0.8625782737995314, 'fit_intercept': True, 'max_iter': 318, 'solver': 'lbfgs',
# 'tol': 3.3702355408712684e-05, 'warm_start': False}
# with bigrams
#{'C': 0.4191297475916065, 'fit_intercept': True, 'max_iter': 39, 'solver': 'liblinear',
# 'tol': 6.003262415227273e-05, 'warm_start': False}
# -
logistic_regression = LogisticRegression(C=0.4191297475916065, fit_intercept=True, max_iter=39, solver='liblinear',
                                         tol=6.003262415227273e-05, warm_start=False)
y_pred = logistic_regression.fit(X_train, y_train).predict(X_train)
print(confusion_matrix(y_train, y_pred))
print(classification_report(y_train, y_pred))
# ## Naive Bayes
# +
model= 'naive_bayes'
best_params = fmin(obj_fnc,
search_space(model),
algo=tpe.suggest,
max_evals=100)
print(best_params)
# -
naive_bayes=BernoulliNB()
y_pred = naive_bayes.fit(X_train, y_train).predict(X_train)
print(confusion_matrix(y_train, y_pred))
print(classification_report(y_train, y_pred))
# ## SVC
# +
model= 'svc'
best_params = fmin(obj_fnc,
search_space(model),
algo=tpe.suggest,
max_evals=100)
print(best_params)
#first
#{'C': 1.5426972107125763, 'gamma': 0.9018573366743587, 'kernel': 'rbf'}
# with bigrams
#
# -
svc=SVC(C = 1.5426972107125763, gamma = 0.9018573366743587, kernel = 'rbf')
y_pred = svc.fit(X_train, y_train).predict(X_train)
print(confusion_matrix(y_train, y_pred))
print(classification_report(y_train, y_pred))
# # Submission
#scaler = StandardScaler()
X_test = scaler.transform(X_test)
# Sanity check: columns present in train but missing from test (compare the
# unscaled frames, since StandardScaler returns plain ndarrays)
for val in train.drop(columns=features_to_drop + ['target_text']).columns:
    if val not in test.drop(columns=features_to_drop).columns:
        print(val)
logistic_regression = LogisticRegression(C=0.4191297475916065, fit_intercept=True, max_iter=39, solver='liblinear',
                                         tol=6.003262415227273e-05, warm_start=False)
y_pred = logistic_regression.fit(X_train, y_train).predict(X_test)
sub['target'] = y_pred
sub.head(10)
sub.to_csv("submission.csv", index=False, header=True)
# # Leak
test.drop('target', axis=1, inplace=True)
leak = pd.read_csv("../input/realornot/socialmedia-disaster-tweets-DFE.csv", encoding='latin_1')
leak['target'] = (leak['choose_one']=='Relevant').astype(int)
leak['id'] = leak.index
leak = leak[['id', 'target','text']]
merged_df = pd.merge(test, leak, on='id')
sub1 = merged_df[['id', 'target']]
sub1.to_csv('submit_1.csv', index=False)
sum(merged_df.target_x!=merged_df.target_y)
| Twitter_disaster_classification_Kaggle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
#tag::start-ray-local[]
import ray
ray.init(num_cpus=20) # In theory auto sensed, in practice... eh
#end::start-ray-local[]
# +
#tag::sleepy_task_hello_world[]
import timeit
def slow_task(x):
import time
time.sleep(2) # Do something sciency/business
return x
@ray.remote
def remote_task(x):
return slow_task(x)
things = range(10)
very_slow_result = map(slow_task, things)
slowish_result = map(lambda x: remote_task.remote(x), things)
slow_time = timeit.timeit(lambda: list(very_slow_result), number=1)
fast_time = timeit.timeit(lambda: list(ray.get(list(slowish_result))), number=1)
print("In sequence {}, in parallel {}".format(slow_time, fast_time))
#end::sleepy_task_hello_world[]
# -
slowish_result = map(lambda x: remote_task.remote(x), things)
ray.get(list(slowish_result))
# Note: if we were on a "real" cluster we'd have to do more magic to install it on all the nodes in the cluster.
# !pip install bs4
# +
#tag::mini_crawl_task[]
@ray.remote
def crawl(url, depth=0, maxdepth=1, maxlinks=4):
links = []
link_futures = []
import requests
from bs4 import BeautifulSoup
try:
f = requests.get(url)
links += [(url, f.text)]
if (depth > maxdepth):
return links # base case
soup = BeautifulSoup(f.text, 'html.parser')
c = 0
for link in soup.find_all('a'):
            if link.has_attr("href"):
                c = c + 1
                link_futures += [crawl.remote(link["href"], depth=(depth+1), maxdepth=maxdepth)]
                # Don't branch too much; we're still in local mode and the web is big
                if c > maxlinks:
                    break
for r in ray.get(link_futures):
links += r
return links
except requests.exceptions.InvalidSchema:
return [] # Skip non-web links
ray.get(crawl.remote("http://holdenkarau.com/"))
#end::mini_crawl_task[]
# +
#tag::actor[]
@ray.remote
class HelloWorld(object):
def __init__(self):
self.value = 0
def greet(self):
self.value += 1
return f"Hi user #{self.value}"
hello_actor = HelloWorld.remote()
# -
print(ray.get(hello_actor.greet.remote()))
print(ray.get(hello_actor.greet.remote()))
#end::actor[]
#tag::ds[]
# Create a Dataset of URL strings. We could also load this from a text file with ray.data.read_text()
urls = ray.data.from_items([
"https://github.com/scalingpythonml/scalingpythonml",
"https://github.com/ray-project/ray"])
def fetch_page(url):
import requests
f = requests.get(url)
return f.text
pages = urls.map(fetch_page)
pages.take(1)
# +
#end::ds[]
| ray/helloWorld/Ray-Ch2-Hello-Worlds.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc-hr-collapsed=true
# # Survival analysis of municipal swimming pools
# -
# Data was collected from 6749 customers. The Kaplan-Meier estimator was used to determine when the dropout events took place, the Cox regression coefficients were used to analyse the variables related to how long customers survive in the sports facility, and the logrank test was applied to the significant variables to provide a statistical comparison of the groups.
#
# The duration of practice was positively related to the number of accesses, the contracted weekly accesses, the number of contract renewals, and a later inscription month. On the other hand, practice duration was negatively related to age and to the weekly average accesses.
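# The Kaplan-Meier product-limit estimate used throughout this notebook can be sketched by hand, multiplying the survival fractions at each event time (a minimal illustration on made-up durations, not the study data):

```python
def kaplan_meier(durations, observed):
    """Return {event_time: S(t)} for right-censored data.

    S(t) = prod over event times t_i <= t of (1 - d_i / n_i), where d_i is the
    number of events at t_i and n_i the number still at risk just before t_i.
    """
    data = sorted(zip(durations, observed))
    at_risk = len(data)
    s = 1.0
    curve = {}
    i = 0
    while i < len(data):
        t = data[i][0]
        events = removed = 0
        while i < len(data) and data[i][0] == t:
            events += data[i][1]   # 1 = dropout observed, 0 = censored
            removed += 1
            i += 1
        if events:
            s *= 1 - events / at_risk
            curve[t] = round(s, 4)
        at_risk -= removed
    return curve

# Toy data: durations in months, 1 = dropout observed, 0 = censored
print(kaplan_meier([3, 3, 5, 7, 7, 9], [1, 1, 0, 1, 0, 0]))  # {3: 0.6667, 7: 0.4444}
```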
# ## Variables
#
# - genero - 1 for male, 0 for female
# - inicio - start date of the enrolment
# - termino - end date of the enrolment
# - dtultvisita - date of the last visit to the facility, obtained from card swipes at the access control
# - diassemfrequencia - number of days without visiting the facility before the enrolment ended, or until 31 October 2017 if the enrolment had not yet ended on that date
# - mesesinscricao - number of months of the enrolment, i.e. the difference in months between start and end
# - volnegocios - total paid by the customer over the period of the enrolment
# - freqmedia - average attendance, obtained by dividing the number of visits by the number of weeks of the enrolment, minus the closing periods (the month of August) when the enrolment spans more than one season
# - natividades - count of the 1s in the preceding activity fields (rows 13 to 22)
# - nfrequencias - number of visits to the club during the enrolment
# - freqcontratada - number of weekly visits the customer had contracted in the last period of the enrolment (7 = unrestricted access)
# - nrenovacoes - number of renewals the customer made during the enrolment - NOTE: this facility closes in August, so customers must renew the enrolment for the next school year
# - nreferencias - number of associated people (family members, friends, etc.) also registered as customers
# - classe_desistencia - 1 for dropout, 0 for active customer
#
#
# 1/TRUE = dead, i.e. dropped out
# 0/FALSE = alive, i.e. did not drop out
#
import lifelines
from IPython.display import HTML
import pandas as pd
dt = pd.read_excel('https://raw.githubusercontent.com/pesobreiro/jupyternotebooks/master/dados/dadosPiscina.xlsx',
index_col=0)
# +
## Select the swimmers
dt = dt.loc[dt.atividade_aquaticas == 1]
## Derive a month variable from the start date
dt['mes'] = dt['inicio'].str.extract(r'-(\d\d)', expand=True)
dt['mes']=pd.to_numeric(dt['mes'])
# -
# ## Descriptive statistics for the quantitative variables
dt[['idade', 'genero','diassemfrequencia', 'mesesinscricao', 'volnegocios', 'freqmedia',
'natividades', 'nfrequencias','freqcontratadasemanal', 'nrenovacoes', 'nreferencias','mes','classe_desistencia']].describe()
# +
# Drop customers with implausible ages
dt = dt.loc[dt.idade < 100]
colunasScale = ['idade','diassemfrequencia','mesesinscricao','volnegocios','freqmedia','natividades','nfrequencias',
                'freqcontratadasemanal','nrenovacoes','nreferencias']
# Convert classe_desistencia to 0/1 integers
dt['classe_desistencia'] = dt['classe_desistencia'].map({True: 1, False: 0})
# -
# ## Survival Analysis
from lifelines import KaplanMeierFitter
kmf = KaplanMeierFitter()
T = dt['mesesinscricao']
C = dt['classe_desistencia']
kmf.fit(T,C,label="Swimmers")
# ## Survival table
survivalTable = pd.concat([kmf.event_table.reset_index(), kmf.conditional_time_to_event_,kmf.survival_function_.round(decimals=2)], axis=1)
survivalTable.columns = ['event_at', 'removed', 'observed', 'censored', 'entrance', 'at_risk',
'time to event', 'prob']
survivalTable.head(18)
# ## Median survival time
kmf.median_
# # Kaplan-Meier curve
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.font_manager as font_manager
plt.rcParams['figure.figsize'] = [12, 7]
ax = kmf.plot()
ax.set_xlabel('Months',fontsize=10)
ax.set_ylabel('Survival probability',fontsize=10)
ax.axvline(x=6,ymax=0.70,linestyle='--',color='black');ax.axhline(y=0.78,xmax=0.129,linestyle='--',color='black')
# 6.0 0.73
ax.annotate("0.73",xy=(6, 0.7), xytext=(6, 0.8))
ax.axvline(x=12,ymax=0.44,linestyle='--',color='black');ax.axhline(y=0.542,xmax=0.254,linestyle='--',color='black')
# 12.0 0.53
ax.annotate("0.53",xy=(12, 0.44), xytext=(12, 0.58))
ax.axvline(x=18,ymax=0.33,linestyle='--',color='black');ax.axhline(y=0.45,xmax=0.38,linestyle='--',color='black')
# 18.0 0.43
ax.annotate("0.43",xy=(18, 0.38), xytext=(18, 0.48))
plt.show()
#plt.savefig('../submissao/figure1.png', dpi=90)
plt.close()
# -
# # Number of customers at risk of dropout
# +
abandono=kmf.event_table.reset_index()
plt.rcParams['figure.figsize'] = [12, 7]
plt.plot(abandono.event_at, abandono.at_risk)
plt.xlabel('meses')
plt.ylabel('clientes')
#plt.xlabel()
# -
# # Percentage of customers
# +
abandono['percentagemClientes']= 0.0
anterior = abandono.at_risk[0:1]
anterior = anterior[0]
print(anterior)
for index, row in abandono.iterrows():
abandono.at[index, 'percentagemClientes'] = row.at_risk/anterior
plt.rcParams['figure.figsize'] = [12, 7]
plt.xlabel('meses')
plt.ylabel('clientes')
plt.plot(abandono.event_at, abandono.percentagemClientes)
plt.show()
# -
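# The `iterrows` loop above amounts to a single vectorised division by the first at-risk count; a sketch with a toy `at_risk` column:

```python
import pandas as pd

abandono = pd.DataFrame({'event_at': [0, 6, 12], 'at_risk': [100, 80, 50]})
# Divide every at-risk count by the initial one
abandono['percentagemClientes'] = abandono.at_risk / abandono.at_risk.iloc[0]
print(abandono.percentagemClientes.tolist())  # [1.0, 0.8, 0.5]
```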
abandono
# The probability of surviving more than 10 months is 50%. The probability of surviving 20 months is 25%.
# # <NAME> by gender
# +
ax = plt.subplot(111)
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [12, 7]
for gen in dt['genero'].unique():
ix = dt['genero'] == gen
kmf.fit(T.loc[ix], C.loc[ix], label=str(gen))
ax = kmf.plot(ax=ax)
print ('Mediana',gen, '=', kmf.median_,'| Prob sobreviver mais 6 meses S(t) == P(T>t) :', kmf.predict(6.0))
# -
# 1 - Male / 0 - Female
dt.dropna(inplace=True)
T = dt['mesesinscricao']
C = dt['classe_desistencia']
# +
nomes_features= ['idade', 'genero', 'diassemfrequencia', 'mesesinscricao', 'volnegocios', 'freqmedia',
'natividades', 'nfrequencias', 'freqcontratadasemanal',
'nrenovacoes', 'nreferencias', 'classe_desistencia','mes']
dtNadadores = dt[nomes_features].copy()
from lifelines import CoxPHFitter
cph = CoxPHFitter()
cph.fit(dtNadadores,duration_col='mesesinscricao',event_col='classe_desistencia')
# -
# %store dtNadadores
# # Cox Regression
cph.print_summary()
# + [markdown] toc-hr-collapsed=false
# # Survival curves for the most relevant variables in the Cox regression
# -
# The significant predictors age, maccess, nentries, cfreq, nrenewals and imonth were analysed using the logrank test, and we identified significant differences between the groups in each variable represented in Table 4: age (χ2=204.78, p<0.01), maccess (χ2=294.44, p<0.01), nentries (χ2=3721.13, p<0.01), cfreq (χ2=58.34, p<0.01), nrenewals (χ2=6264.73, p<0.01) and imonth (χ2=86.33, p<0.01). The probability of surviving more than 12 months and the survival median for each group are also represented; for example, swimmers with less than 0.3 average accesses have a probability of surviving more than 12 months of 40.56% and a survival median of 9 months.
# ## age
# Using the WHO age brackets
#
# | Bracket | Ages |
# | ------------- |:-------------:|
# | Childhood | up to 10 |
# | Adolescence | 10-20 |
# | Adulthood | 20-40 |
# | Middle age | 40-60 |
# | Old age | +60 |
dt['idade'].describe()
dt.columns
# Split the age distribution into brackets
# +
#dt['escaloesIdade']=''
#escaloesIdade=[10,20,40,60]
#
#for index, cliente in dt.iterrows():
# #se a variável tiver o valor 1 colocar na nova variável a descrição da atividade
# if cliente['idade']<=10:
# dt.at[index,'escaloesIdade']='ate 10'
# elif (cliente['idade']>10) & (cliente['idade']<=20):
# dt.at[index,'escaloesIdade']='10 a 20'
# elif (cliente['idade']>20) & (cliente['idade']<=40):
# dt.at[index,'escaloesIdade']='21 a 40'
# elif (cliente['idade']>40) & (cliente['idade']<=60):
# dt.at[index,'escaloesIdade']='41 a 60'
# elif (cliente['idade']>60):
# dt.at[index,'escaloesIdade']='mais 60'
# -
dt.idade.describe()
# +
dt['escaloesIdade'] = ''

for index, cliente in dt.iterrows():
    # assign an age bracket to each customer
    if cliente['idade'] <= 5:
        dt.at[index,'escaloesIdade'] = 'until 5'
    elif (cliente['idade'] > 5) & (cliente['idade'] <= 10):
        dt.at[index,'escaloesIdade'] = '5 to 10'
    elif (cliente['idade'] > 10) & (cliente['idade'] <= 32):
        dt.at[index,'escaloesIdade'] = '10 to 32'
    elif cliente['idade'] > 32:
        dt.at[index,'escaloesIdade'] = 'more than 32'
# -
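# The row-by-row binning above can also be expressed with `pd.cut`; a sketch where the bin edges mirror the conditions of the loop (toy ages, illustrative labels):

```python
import pandas as pd

ages = pd.Series([4, 8, 25, 40])
labels = ['until 5', '5 to 10', '10 to 32', 'more than 32']
# Right-closed intervals (0,5], (5,10], (10,32], (32,200] match the <= / > tests above
cohorts = pd.cut(ages, bins=[0, 5, 10, 32, 200], labels=labels)
print(cohorts.astype(str).tolist())  # ['until 5', '5 to 10', '10 to 32', 'more than 32']
```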
dt.escaloesIdade.unique()
dt.groupby(dt['escaloesIdade']).count().iloc[:,1]
# +
ax = plt.subplot(111)
import matplotlib.pyplot as plt
import numpy as np
# one survival curve per age bracket
escaloesIdade=dt['escaloesIdade'].unique()
plt.rcParams['figure.figsize'] = [12, 7]
for escalao in escaloesIdade:
ix = dt['escaloesIdade'] == escalao
kmf.fit(T.loc[ix], C.loc[ix], label=str(escalao))
ax = kmf.plot(ax=ax)
print(str(escalao),' predict survival 12 months:',kmf.predict(12.0),': median ',kmf.median_)
plt.title('Survival by age cohorts');
# +
from lifelines.statistics import multivariate_logrank_test
results=multivariate_logrank_test(event_durations=T,groups=dt.escaloesIdade,event_observed=C)
results.print_summary()
# +
from lifelines.statistics import pairwise_logrank_test
results=pairwise_logrank_test(event_durations=T,groups=dt.escaloesIdade,event_observed=C)
results.print_summary()
# + [markdown] toc-hr-collapsed=false
# ## maccess
# -
dt.freqmedia.describe()
dt['esc_maccess'] = ''
for index, cliente in dt.iterrows():
    # assign an average-access (maccess) bracket to each customer
    if cliente['freqmedia'] <= 0.3:
        dt.at[index,'esc_maccess'] = 'maccess less than 0.3'
    elif (cliente['freqmedia'] > 0.3) & (cliente['freqmedia'] < 0.51):
        dt.at[index,'esc_maccess'] = 'maccess greater than 0.3 and less than 0.51'
    elif (cliente['freqmedia'] >= 0.51) & (cliente['freqmedia'] < 0.8):
        dt.at[index,'esc_maccess'] = 'maccess greater than 0.51 and less than 0.8'
    elif cliente['freqmedia'] >= 0.8:
        dt.at[index,'esc_maccess'] = 'maccess greater than 0.8'
dt['esc_maccess'].value_counts()
dt['esc_maccess'].unique()
# Consider 3, 4 and 5 as more than twice a week
# +
ax = plt.subplot(111)
# one survival curve per maccess bracket
idas=dt['esc_maccess'].unique()
plt.rcParams['figure.figsize'] = [12, 7]
for ida in idas:
ix = dt['esc_maccess'] == ida
kmf.fit(T.loc[ix], C.loc[ix], label=str(ida))
ax = kmf.plot(ax=ax)
print(str(ida),' predict survival 12 months:',kmf.predict(12.0),': median ',kmf.median_)
plt.title('maccess');
# -
results=multivariate_logrank_test(event_durations=T,groups=dt.esc_maccess,event_observed=C)
results.print_summary()
results=pairwise_logrank_test(event_durations=T,groups=dt.esc_maccess,event_observed=C)
results.print_summary()
# Retention is highest with 2 or more visits. The stronger retention trend only shows after 10 months; beyond 10 months, dropout is lower among customers who come on average 1 to 2 times a week or more. Strategies should focus on keeping the customer during the first 10 months and on getting them to come at least once a week on average.
# + [markdown] toc-hr-collapsed=false
# ## nentries
# -
dt.columns
dt.nfrequencias.describe()
dt['esc_entries'] = ''
for index, cliente in dt.iterrows():
    # assign a total-entries bracket to each customer
    if cliente['nfrequencias'] <= 6:
        dt.at[index,'esc_entries'] = 'until 6'
    elif (cliente['nfrequencias'] > 6) & (cliente['nfrequencias'] < 17):
        dt.at[index,'esc_entries'] = '6 to 17'
    elif (cliente['nfrequencias'] >= 17) & (cliente['nfrequencias'] < 40):
        dt.at[index,'esc_entries'] = '17 to 40'
    elif cliente['nfrequencias'] >= 40:
        dt.at[index,'esc_entries'] = 'more than 40'
    #dt.at[index,'idas'] = np.around(cliente['nfrequencias'], decimals=1)
dt['esc_entries'].value_counts()
dt['esc_entries'].unique()
# Consider 3, 4 and 5 as more than twice a week
# +
ax = plt.subplot(111)
# one survival curve per entries bracket
idas=dt['esc_entries'].unique()
plt.rcParams['figure.figsize'] = [12, 7]
for ida in idas:
ix = dt['esc_entries'] == ida
kmf.fit(T.loc[ix], C.loc[ix], label=str(ida))
ax = kmf.plot(ax=ax)
print(str(ida),' predict survival 12 months:',kmf.predict(12.0),': median ',kmf.median_)
plt.title('nentries');
# -
results=multivariate_logrank_test(event_durations=T,groups=dt.esc_entries,event_observed=C)
results.print_summary()
results=pairwise_logrank_test(event_durations=T,groups=dt.esc_entries,event_observed=C)
results.print_summary()
# Retention is highest with 2 or more visits. The stronger retention trend only shows after 10 months; beyond 10 months, dropout is lower among customers who come on average 1 to 2 times a week or more. Strategies should focus on keeping the customer during the first 10 months and on getting them to come at least once a week on average.
# + [markdown] toc-hr-collapsed=false
# ## cfreq
# -
dt.freqcontratadasemanal.describe()
dt.freqcontratadasemanal.unique()
dt.freqcontratadasemanal.value_counts()
# Given the distribution, values were split into 1, 2 and 3+
# +
dt['esc_cfreq'] = ''
for index, cliente in dt.iterrows():
    # assign a contracted-frequency bracket to each customer
    if cliente['freqcontratadasemanal'] <= 1:
        dt.at[index,'esc_cfreq'] = 'cfreq 1'
    elif cliente['freqcontratadasemanal'] == 2:
        dt.at[index,'esc_cfreq'] = 'cfreq 2'
    elif cliente['freqcontratadasemanal'] >= 3:
        dt.at[index,'esc_cfreq'] = 'cfreq 3'
# -
dt.esc_cfreq.value_counts()
# +
ax = plt.subplot(111)
cfreqs=dt.esc_cfreq.unique()
plt.rcParams['figure.figsize'] = [12, 7]
for cfreq in cfreqs:
ix = dt['esc_cfreq'] == cfreq
kmf.fit(T.loc[ix], C.loc[ix],label=str(cfreq))
ax = kmf.plot(ax=ax)
print(str(cfreq),' predict survival 12 months:',kmf.predict(12.0),': median ',kmf.median_)
plt.title('contracted frequency');
# -
# Survival is higher when 3 or more weekly visits are contracted. Contracting 2 leads to lower survival than contracting only one. Several factors may be involved, such as habituation to sports practice; whoever intends to come 3 or more times shows higher motivation.
#
# A stronger intention towards sports practice may go together with higher retention.
results=multivariate_logrank_test(event_durations=T,groups=dt.esc_cfreq,event_observed=C)
results.print_summary()
# The survival curves differ
results=pairwise_logrank_test(event_durations=T,groups=dt.esc_cfreq,event_observed=C)
results.print_summary()
# All the survival curves differ from one another
# ## nrenewals
dt.nrenovacoes.describe()
dt.nrenovacoes.unique()
dt.nrenovacoes.value_counts()
dt['esc_nrenewals'] = 0
# +
dt['esc_nrenewals']=''
for index, cliente in dt.iterrows():
    # assign a renewals bracket to each customer
if cliente['nrenovacoes'] == 0:
dt.at[index,'esc_nrenewals']='renewals 0'
elif cliente['nrenovacoes'] == 1:
dt.at[index,'esc_nrenewals']='renewals 1'
elif cliente['nrenovacoes'] == 2:
dt.at[index,'esc_nrenewals']='renewals 2'
elif cliente['nrenovacoes']>2:
dt.at[index,'esc_nrenewals']='renewals 2+'
# -
dt.esc_nrenewals.value_counts()
dt.esc_nrenewals.unique()
# +
ax = plt.subplot(111)
nrenewals=dt.esc_nrenewals.unique()
for nrenewal in nrenewals:
ix = dt['esc_nrenewals'] == nrenewal
kmf.fit(T.loc[ix], C.loc[ix],label=str(nrenewal))
ax = kmf.plot(ax=ax)
print(str(nrenewal),' predict survival 12 months:',kmf.predict(12.0),': median ',kmf.median_)
plt.title('Survival by number of renewals');
# -
results=multivariate_logrank_test(event_durations=T,groups=dt.esc_nrenewals,event_observed=C)
results.print_summary()
# The survival curves differ
results = pairwise_logrank_test(event_durations=T, groups=dt.esc_nrenewals, event_observed=C)
results.print_summary()
# All the survival curves differ from one another
# ## imonth
dt['mes'].unique()
dt.mes.describe()
dt['mes'].value_counts()
# +
ax = plt.subplot(111)
import matplotlib.pyplot as plt
import numpy as np
# one survival curve per inscription month
meses=dt['mes'].unique()
plt.rcParams['figure.figsize'] = [12, 7]
for mes in meses:
ix = dt['mes'] == mes
kmf.fit(T.loc[ix], C.loc[ix], label=str(mes))
ax = kmf.plot(ax=ax)
plt.title('inscription month');
# -
# The plot gets too cluttered, so group by quarter instead
results=multivariate_logrank_test(event_durations=T,groups=dt.mes,event_observed=C)
results.print_summary()
results=pairwise_logrank_test(event_durations=T,groups=dt.mes,event_observed=C)
results.print_summary()
# +
T = dt["mesesinscricao"]
E = dt["classe_desistencia"]
kmf.fit(T, event_observed=E)
# -
### compute by quarter
dt.loc[dt['mes'].isin([1,2,3]),'trimestre'] = 'Jan, Feb, Mar'
dt.loc[dt['mes'].isin([4,5,6]),'trimestre'] = 'Apr, May, Jun'
dt.loc[dt['mes'].isin([7,8,9]),'trimestre'] = 'Jul, Aug, Sep'
dt.loc[dt['mes'].isin([10,11,12]),'trimestre'] = 'Oct, Nov, Dec'
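# The `.isin` assignments above amount to a month-to-quarter lookup; the same mapping can be sketched with `Series.map` (generic Q1-Q4 labels for illustration):

```python
import pandas as pd

# Month m belongs to quarter (m - 1) // 3 + 1
quarter_of = {m: f"Q{(m - 1) // 3 + 1}" for m in range(1, 13)}

mes = pd.Series([1, 5, 8, 11])
print(mes.map(quarter_of).tolist())  # ['Q1', 'Q2', 'Q3', 'Q4']
```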
dt.trimestre.value_counts()
dt['trimestre'].unique()
# +
ax = plt.subplot(111)
# one survival curve per quarter
trimestres=dt['trimestre'].unique()
plt.rcParams['figure.figsize'] = [12, 7]
for trimestre in trimestres:
ix = dt['trimestre'] == trimestre
kmf.fit(T.loc[ix], C.loc[ix], label=str(trimestre))
ax = kmf.plot(ax=ax)
print(str(trimestre),' predict survival 12 months:',kmf.predict(12.0),': median ',kmf.median_)
plt.title('Survival by quarter');
# -
# Survival varies by quarter. The July, August and September quarter shows the highest survival. Customers who start after the summer or at the beginning of the year show a lower survival curve.
# The survival curves differ
results=multivariate_logrank_test(event_durations=T,groups=dt.trimestre,event_observed=C)
results.print_summary()
results=pairwise_logrank_test(event_durations=T,groups=dt.trimestre,event_observed=C)
results.print_summary()
# All the survival curves differ from one another
| analysis/.ipynb_checkpoints/survivalAnalysisSwimmers-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %autosave 0
from IPython.core.display import HTML, display
display(HTML('<style>.container { width:100%; !important } </style>'))
# # Sets implemented as AVL Trees
# This notebook implements <em style="color:blue;">sets</em> as <a href="https://en.wikipedia.org/wiki/AVL_tree">AVL trees</a>. The set $\mathcal{A}$ of <em style="color:blue;">AVL trees</em> is defined inductively:
#
# - $\texttt{Nil} \in \mathcal{A}$.
# - $\texttt{Node}(k,l,r) \in \mathcal{A}\quad$ iff
# - $\texttt{Node}(k,l,r) \in \mathcal{B}_<$,
# - $l, r \in \mathcal{A}$, and
# - $|l.\texttt{height}() - r.\texttt{height}()| \leq 1$.
#
# According to this definition, an AVL tree is an <em style="color:blue;">ordered binary tree</em>
# such that for every node $\texttt{Node}(k,l,r)$ in this tree the height of the left subtree $l$ and the right
# subtree $r$ differ at most by one.
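# The balancing condition can be checked directly on a tuple representation $\texttt{Node}(k,l,r) = (k, l, r)$ with $\texttt{Nil} = \texttt{None}$; a small sketch, independent of the `Set` class defined below:

```python
def height(t):
    """Height of a tuple-encoded tree; the empty tree has height 0."""
    return 0 if t is None else 1 + max(height(t[1]), height(t[2]))

def is_avl(t):
    """Check |l.height() - r.height()| <= 1 at every node."""
    if t is None:
        return True
    _, l, r = t
    return abs(height(l) - height(r)) <= 1 and is_avl(l) and is_avl(r)

print(is_avl((2, (1, None, None), (3, None, None))))   # True
print(is_avl((3, (2, (1, None, None), None), None)))   # False: left-heavy by 2
```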
# The class `Set` represents the nodes of an AVL tree. This class has the following member variables:
#
# - `mKey` is the key stored at the root of the tree,
# - `mLeft` is the left subtree,
# - `mRight` is the right subtree, and
# - `mHeight` is the height.
#
# The constructor `__init__` creates the empty tree.
class Set:
def __init__(self):
self.mKey = None
self.mLeft = None
self.mRight = None
self.mHeight = 0
# Given an ordered binary tree $t$, the expression $t.\texttt{isEmpty}()$ checks whether $t$ is the empty tree.
# +
def isEmpty(self):
        return self.mKey is None
Set.isEmpty = isEmpty
# -
# Given an ordered binary tree $t$ and a key $k$, the expression $t.\texttt{member}(k)$ returns `True` if the key $k$ is stored in the tree $t$ and `False` otherwise.
# The method `member` is defined inductively as follows:
# - $\texttt{Nil}.\texttt{member}(k) = \texttt{False}$,
#
#   because the empty tree contains no keys.
# - $\texttt{Node}(k, l, r).\texttt{member}(k) = \texttt{True}$,
#
#   because the key $k$ is stored at the root of the node $\texttt{Node}(k,l,r)$.
# - $k_1 < k_2 \rightarrow \texttt{Node}(k_2, l, r).\texttt{member}(k_1) = l.\texttt{member}(k_1)$,
#
#   because if $k_1$ is less than $k_2$, then $k_1$ can only be stored in the left subtree $l$.
# - $k_1 > k_2 \rightarrow \texttt{Node}(k_2, l, r).\texttt{member}(k_1) = r.\texttt{member}(k_1)$,
#
#   because if $k_1$ is greater than $k_2$, then $k_1$ can only be stored in the right subtree $r$.
# +
def member(self, key):
    if self.isEmpty():
        return False
    elif self.mKey == key:
        return True
    elif key < self.mKey:
        return self.mLeft.member(key)
    else:
        return self.mRight.member(key)

Set.member = member
# -
# The method $\texttt{insert}()$ is specified via recursive equations.
# - $\texttt{Nil}.\texttt{insert}(k) = \texttt{Node}(k, \texttt{Nil}, \texttt{Nil})$,
# - $\texttt{Node}(k, l, r).\texttt{insert}(k) = \texttt{Node}(k, l, r)$,
# - $k_1 < k_2 \rightarrow
# \texttt{Node}(k_2, l, r).\texttt{insert}(k_1) =
# \texttt{Node}\bigl(k_2, l.\texttt{insert}(k_1), r\bigr).\texttt{restore}()$,
# - $k_1 > k_2 \rightarrow
# \texttt{Node}(k_2, l, r).\texttt{insert}\bigl(k_1\bigr) =
# \texttt{Node}\bigl(k_2, l, r.\texttt{insert}(k_1)\bigr).\texttt{restore}()$.
#
# The function $\texttt{restore}$ is an auxiliary function that is defined below. This function restores the balancing condition if it is violated after an insertion.
# +
def insert(self, key):
if self.isEmpty():
self.mKey = key
self.mLeft = Set()
self.mRight = Set()
self.mHeight = 1
elif self.mKey == key:
pass
elif key < self.mKey:
self.mLeft.insert(key)
self._restore()
else:
self.mRight.insert(key)
self._restore()
Set.insert = insert
# -
# The method $\texttt{self}.\texttt{delete}(k)$ removes the key $k$ from the tree $\texttt{self}$. It is defined as follows:
#
# - $\texttt{Nil}.\texttt{delete}(k) = \texttt{Nil}$,
# - $\texttt{Node}(k,\texttt{Nil},r).\texttt{delete}(k) = r$,
# - $\texttt{Node}(k,l,\texttt{Nil}).\texttt{delete}(k) = l$,
# - $l \not= \texttt{Nil} \,\wedge\, r \not= \texttt{Nil} \,\wedge\,
# \langle r',k_{min} \rangle := r.\texttt{delMin}() \;\rightarrow\;
# \texttt{Node}(k,l,r).\texttt{delete}(k) = \texttt{Node}(k_{min},l,r')$
# - $k_1 < k_2 \rightarrow \texttt{Node}(k_2,l,r).\texttt{delete}(k_1) =
# \texttt{Node}\bigl(k_2,l.\texttt{delete}(k_1),r\bigr)$,
# - $k_1 > k_2 \rightarrow \texttt{Node}(k_2,l,r).\texttt{delete}(k_1) =
# \texttt{Node}\bigl(k_2,l,r.\texttt{delete}(k_1)\bigr)$.
# +
def delete(self, key):
    if self.isEmpty():
        return
    if key == self.mKey:
        if self.mLeft.isEmpty():
            self._update(self.mRight)
        elif self.mRight.isEmpty():
            self._update(self.mLeft)
        else:
            self.mRight, self.mKey = self.mRight._delMin()
            self._restore()
    elif key < self.mKey:
        self.mLeft.delete(key)
        self._restore()   # rebalance, since a deletion can violate the AVL condition
    else:
        self.mRight.delete(key)
        self._restore()

Set.delete = delete
# -
# The method $\texttt{self}.\texttt{delMin}()$ removes the smallest key from the given tree $\texttt{self}$
# and returns a pair of the form
# $$ (\texttt{self}, k_m) $$
# where $\texttt{self}$ is the tree that remains after removing the smallest key, while $k_m$ is the smallest key that has been found.
#
# The function is defined as follows:
#
# - $\texttt{Node}(k, \texttt{Nil}, r).\texttt{delMin}() = \langle r, k \rangle$,
# - $l\not= \texttt{Nil} \wedge \langle l',k_{min}\rangle := l.\texttt{delMin}()
# \;\rightarrow\;
# \texttt{Node}(k, l, r).\texttt{delMin}() =
# \langle \texttt{Node}(k, l', r).\texttt{restore}(), k_{min} \rangle
# $
# +
def _delMin(self):
if self.mLeft.isEmpty():
return self.mRight, self.mKey
else:
ls, km = self.mLeft._delMin()
self.mLeft = ls
self._restore()
return self, km
Set._delMin = _delMin
# -
# Given two ordered binary trees $s$ and $t$, the expression $s.\texttt{update}(t)$ overwrites the attributes of $s$ with the corresponding attributes of $t$.
# +
def _update(self, t):
self.mKey = t.mKey
self.mLeft = t.mLeft
self.mRight = t.mRight
self.mHeight = t.mHeight
Set._update = _update
# -
# The method $\texttt{restore}(\texttt{self})$ restores the balancing condition of the given binary tree
# at the root node and recomputes the member variable $\texttt{mHeight}$.
#
# The method $\texttt{restore}$ is specified via conditional equations.
#
# - $\texttt{Nil}.\texttt{restore}() = \texttt{Nil}$,
#
# because the empty tree already is an AVL tree.
# - $|l.\texttt{height}() - r.\texttt{height}()| \leq 1 \rightarrow
# \texttt{Node}(k,l,r).\texttt{restore}() = \texttt{Node}(k,l,r)$.
#
# If the balancing condition is satisfied, then nothing needs to be done.
# - $\begin{array}[t]{cl}
# & l_1.\texttt{height}() = r_1.\texttt{height}() + 2 \\
# \wedge & l_1 = \texttt{Node}(k_2,l_2,r_2) \\
# \wedge & l_2.\texttt{height}() \geq r_2.\texttt{height}() \\[0.2cm]
# \rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() =
# \texttt{Node}\bigl(k_2,l_2,\texttt{Node}(k_1,r_2,r_1)\bigr)
# \end{array}
# $
# - $\begin{array}[t]{cl}
# & l_1.\texttt{height}() = r_1.\texttt{height}() + 2 \\
# \wedge & l_1 = \texttt{Node}(k_2,l_2,r_2) \\
# \wedge & l_2.\texttt{height}() < r_2.\texttt{height}() \\
# \wedge & r_2 = \texttt{Node}(k_3,l_3,r_3) \\
# \rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() =
# \texttt{Node}\bigl(k_3,\texttt{Node}(k_2,l_2,l_3),\texttt{Node}(k_1,r_3,r_1) \bigr)
# \end{array}
# $
# - $\begin{array}[t]{cl}
# & r_1.\texttt{height}() = l_1.\texttt{height}() + 2 \\
# \wedge & r_1 = \texttt{Node}(k_2,l_2,r_2) \\
# \wedge & r_2.\texttt{height}() \geq l_2.\texttt{height}() \\[0.2cm]
# \rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() =
# \texttt{Node}\bigl(k_2,\texttt{Node}(k_1,l_1,l_2),r_2\bigr)
# \end{array}
# $
# - $\begin{array}[t]{cl}
# & r_1.\texttt{height}() = l_1.\texttt{height}() + 2 \\
# \wedge & r_1 = \texttt{Node}(k_2,l_2,r_2) \\
# \wedge & r_2.\texttt{height}() < l_2.\texttt{height}() \\
# \wedge & l_2 = \texttt{Node}(k_3,l_3,r_3) \\
# \rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() =
# \texttt{Node}\bigl(k_3,\texttt{Node}(k_1,l_1,l_3),\texttt{Node}(k_2,r_3,r_2) \bigr)
# \end{array}
# $
# +
def _restore(self):
if abs(self.mLeft.mHeight - self.mRight.mHeight) <= 1:
self._restoreHeight()
return
if self.mLeft.mHeight > self.mRight.mHeight:
k1, l1, r1 = self.mKey, self.mLeft, self.mRight
k2, l2, r2 = l1.mKey, l1.mLeft, l1.mRight
if l2.mHeight >= r2.mHeight:
self._setValues(k2, l2, createNode(k1, r2, r1))
else:
k3, l3, r3 = r2.mKey, r2.mLeft, r2.mRight
self._setValues(k3, createNode(k2, l2, l3),
createNode(k1, r3, r1))
elif self.mRight.mHeight > self.mLeft.mHeight:
k1, l1, r1 = self.mKey, self.mLeft, self.mRight
k2, l2, r2 = r1.mKey, r1.mLeft, r1.mRight
if r2.mHeight >= l2.mHeight:
self._setValues(k2, createNode(k1, l1, l2), r2)
else:
k3, l3, r3 = l2.mKey, l2.mLeft, l2.mRight
self._setValues(k3, createNode(k1, l1, l3),
createNode(k2, r3, r2))
self._restoreHeight()
Set._restore = _restore
# -
# The function $\texttt{self}.\texttt{_setValues}(k, l, r)$ overwrites the member variables of the node $\texttt{self}$ with the given values.
# +
def _setValues(self, k, l, r):
self.mKey = k
self.mLeft = l
self.mRight = r
Set._setValues = _setValues
# +
def _restoreHeight(self):
self.mHeight = max(self.mLeft.mHeight, self.mRight.mHeight) + 1
Set._restoreHeight = _restoreHeight
# -
# The function $\texttt{createNode}(k, l, r)$ creates an AVL tree that has the key $k$ stored at its root,
# the left subtree $l$, and the right subtree $r$.
def createNode(key, left, right):
node = Set()
node.mKey = key
node.mLeft = left
node.mRight = right
node.mHeight = max(left.mHeight, right.mHeight) + 1
return node
# The method $t.\texttt{pop}()$ takes an AVL tree $t$ and removes and returns the smallest key that is present in $t$. It is specified as follows:
# - $\texttt{Nil}.\texttt{pop}() = \Omega$
# - $\texttt{Node}(k,\texttt{Nil}, r).\texttt{pop}() = \langle k, r\rangle$
# - $l \not=\texttt{Nil} \wedge \langle k',l'\rangle := l.\texttt{pop}() \rightarrow
# \texttt{Node}(k, l, r).\texttt{pop}() = \langle k', \texttt{Node}(k, l', r)\rangle$
# +
def pop(self):
    if self.mKey is None:
        raise KeyError
    if self.mLeft.mKey is None:
        key = self.mKey
        self._update(self.mRight)
        return key
    key = self.mLeft.pop()
    self._restore()  # rebalance after removing the minimum from the left subtree
    return key
Set.pop = pop
# -
# ## Display Code
import graphviz as gv
# Given an ordered binary tree, this function renders the tree graphically using `graphviz`.
# +
def toDot(self):
Set.sNodeCount = 0 # this is a static variable of the class Set
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
NodeDict = {}
self._assignIDs(NodeDict)
    for n, t in NodeDict.items():
        if t.mKey is not None:
            dot.node(str(n), label=str(t.mKey))
        else:
            dot.node(str(n), label='', shape='point')
    for n, t in NodeDict.items():
        if t.mLeft is not None:
            dot.edge(str(n), str(t.mLeft.mID))
        if t.mRight is not None:
            dot.edge(str(n), str(t.mRight.mID))
return dot
Set.toDot = toDot
# -
# This method assigns a unique identifier to each node. The dictionary `NodeDict` maps these identifiers to the nodes where they occur.
# +
def _assignIDs(self, NodeDict):
Set.sNodeCount += 1
self.mID = Set.sNodeCount
NodeDict[self.mID] = self
if self.isEmpty():
return
self.mLeft ._assignIDs(NodeDict)
self.mRight._assignIDs(NodeDict)
Set._assignIDs = _assignIDs
# -
# ## Testing
# The function $\texttt{demo}()$ creates a small ordered binary tree.
def demo():
m = Set()
m.insert("anton")
m.insert("hugo")
m.insert("gustav")
m.insert("jens")
m.insert("hubert")
m.insert("andre")
m.insert("philipp")
m.insert("rene")
return m
t = demo()
t.toDot()
while not t.isEmpty():
print(t.pop())
display(t.toDot())
# Let's generate an ordered binary tree with random keys.
import random as rnd
t = Set()
for _ in range(30):
    t.insert(rnd.randrange(100))
display(t.toDot())
while not t.isEmpty():
print(t.pop(), end=' ')
display(t.toDot())
# This tree looks more or less balanced. Let us try to create a tree by inserting sorted numbers, since for plain ordered binary trees this resulted in linear complexity.
t = Set()
for k in range(30):
t.insert(k)
display(t.toDot())
while not t.isEmpty():
print(t.pop(), end=' ')
display(t.toDot())
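# The balanced shape is no accident: an AVL tree of height $h$ must contain at least $N(h)$ nodes,
# where $N(1) = 1$, $N(2) = 2$, and $N(h) = N(h-1) + N(h-2) + 1$. A small standalone sketch
# (independent of the `Set` class above) showing that this minimal node count grows exponentially,
# so the height stays logarithmic in the number of keys:

```python
import math

def min_nodes(h):
    """Minimal number of nodes in an AVL tree of height h, via the
    recurrence N(0) = 0, N(1) = 1, N(h) = N(h-1) + N(h-2) + 1."""
    a, b = 0, 1  # N(0), N(1)
    for _ in range(h - 1):
        a, b = b, a + b + 1
    return b if h > 0 else 0

for h in [5, 10, 20]:
    n = min_nodes(h)
    # the height of an AVL tree with n nodes is bounded by roughly 1.44 * log2(n)
    print(h, n, 1.44 * math.log2(n + 2))
```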
# Next, we compute the set of prime numbers $\leq 100$. Mathematically, this set is given as follows:
# $$ \bigl\{2, \cdots, 100 \bigr\} - \bigl\{ i \cdot j \bigm| i, j \in \{2, \cdots, 100 \}\bigr\}$$
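# As a cross-check, the same set difference can be written directly with Python's built-in
# `set` type (rather than the `Set` class developed above):

```python
# Primes <= 100 as a set difference: start from {2, ..., 100} and
# remove every product i * j with i, j >= 2.
primes = set(range(2, 101)) - {i * j for i in range(2, 101)
                                     for j in range(2, 101)}
print(sorted(primes))
```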
S = Set()
for k in range(2, 101):
S.insert(k)
display(S.toDot())
for i in range(2, 101):
for j in range(2, 101):
S.delete(i * j)
display(S.toDot())
while not S.isEmpty():
print(S.pop(), end=' ')
display(S.toDot())
| Aux/Set.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
#import dependencies
import unittest
#variables
x = 5
y = "John"
print(x)
print(y)
# +
#casting
x = str(3)
y = int(3)
z = float(3)
print(x)
print(y)
print(z)
# -
#Get the types
x=5
y="John"
print(type(x))
print(type(y))
#Many Values to Multiple Variables
x, y, z = "Orange", "Banana", "Cherry"
print(x)
print(y)
print(z)
#Unpack a collection
fruits = ["apple","banana","cherry"]
x, y, z = fruits
print(x)
print(y)
print(z)
# +
#Global variables
#The global Keyword
def myFunc():
global x
x = "Hello World"
myFunc()
print(x)
# +
| pure/1.basic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.0.0
# language: julia
# name: julia-1.0
# ---
# # Implementing new unitcells
#
# Although the module `LatPhysUnitcellLibrary` contains many pre-implemented unitcells, you may either want to have another implementation of an already existing unitcell or implement a completely new unitcell on your own.
#
# This tutorial follows the steps of both of these procedures using the example of the pyrochlore unitcell.
#
using LatPhysBase
using LatPhysUnitcellLibrary
# ### Adding a new implementation to an already existing unitcell
#
# Originally, `LatPhysUnitcellLibrary` comes with two different implementations of the pyrochlore unitcell, namely the primitive and the conventional unitcell. These can be constructed by passing the function `getUnitcellPyrochlore` the argument 1 or 2, respectively. If we try to construct another, not yet existing implementation of the pyrochlore lattice, say implementation 42, an error will be thrown.
getUnitcellPyrochlore(42)
# So if we want to add implementation 42 of the pyrochlore unitcell to our code, this can be done as follows.
# First we have to import the function name into our project scope by
import LatPhysUnitcellLibrary.getUnitcellPyrochlore
# Then, we can just add a new function `getUnitcellPyrochlore` with the desired implementation value
#
# implementation :: Val{42}
#
# which calls the constructor `newUnitcell` with lattice vectors, sites and bonds specific for the desired implementation of the pyrochlore unitcell (if not done yet, you can read more on sites in this [[notebook](https://github.com/janattig/LatticePhysics_Tutorials/blob/master/basics/sites_bonds/site_type_interface.ipynb)] and on bonds in this [[notebook](https://github.com/janattig/LatticePhysics_Tutorials/blob/master/basics/sites_bonds/bond_type_interface.ipynb)]). This function might have the form shown below (the function of implementation 42 shown is only for educational purposes and differs from the implementation 1 of the primitive unitcell only by the implementation value).
function getUnitcellPyrochlore(
unitcell_type :: Type{U},
# specify the implementation
implementation :: Val{42}
) :: U where {LS,LB,S<:AbstractSite{LS,3},B<:AbstractBond{LB,3},U<:AbstractUnitcell{S,B}}
# return a new unitcell
return newUnitcell(
# type of the unitcell
U,
# lattice vectors
Vector{Float64}[
Float64[0, 0.5, 0.5],
Float64[0.5, 0, 0.5],
Float64[0.5, 0.5, 0]
],
# sites
S[
newSite(S, Float64[0., 0., 0.], getDefaultLabelN(LS,1)),
newSite(S, Float64[0., 0.25, 0.25], getDefaultLabelN(LS,2)),
newSite(S, Float64[0.25, 0., 0.25], getDefaultLabelN(LS,3)),
newSite(S, Float64[0.25, 0.25, 0.], getDefaultLabelN(LS,4))
],
# bonds
B[
newBond(B, 1,2, getDefaultLabel(LB), (0,0,0)),
newBond(B, 1,3, getDefaultLabel(LB), (0,0,0)),
newBond(B, 1,4, getDefaultLabel(LB), (0,0,0)),
newBond(B, 2,1, getDefaultLabel(LB), (0,0,0)),
newBond(B, 2,3, getDefaultLabel(LB), (0,0,0)),
newBond(B, 2,4, getDefaultLabel(LB), (0,0,0)),
newBond(B, 3,1, getDefaultLabel(LB), (0,0,0)),
newBond(B, 3,2, getDefaultLabel(LB), (0,0,0)),
newBond(B, 3,4, getDefaultLabel(LB), (0,0,0)),
newBond(B, 4,1, getDefaultLabel(LB), (0,0,0)),
newBond(B, 4,2, getDefaultLabel(LB), (0,0,0)),
newBond(B, 4,3, getDefaultLabel(LB), (0,0,0)),
newBond(B, 1,4, getDefaultLabel(LB), (0,0,-1)),
newBond(B, 4,1, getDefaultLabel(LB), (0,0,+1)),
newBond(B, 1,2, getDefaultLabel(LB), (-1,0,0)),
newBond(B, 2,1, getDefaultLabel(LB), (+1,0,0)),
newBond(B, 1,3, getDefaultLabel(LB), (0,-1,0)),
newBond(B, 3,1, getDefaultLabel(LB), (0,+1,0)),
newBond(B, 2,3, getDefaultLabel(LB), (+1,-1,0)),
newBond(B, 3,2, getDefaultLabel(LB), (-1,+1,0)),
newBond(B, 2,4, getDefaultLabel(LB), (+1,0,-1)),
newBond(B, 4,2, getDefaultLabel(LB), (-1,0,+1)),
newBond(B, 3,4, getDefaultLabel(LB), (0,+1,-1)),
newBond(B, 4,3, getDefaultLabel(LB), (0,-1,+1))
]
)
end
# Now, also implementation 42 of the pyrochlore unitcell is accessible.
getUnitcellPyrochlore(42)
# ### Adding the new implementation into `LatPhysUnitcellLibrary`
#
# If we want to add our unitcell implementation to the library source code itself (and later do a pull request), the already existing implementations 1 and 2 of the pyrochlore unitcell are saved in the file `./src/unitcells_3d/pyrochlore.jl`. This file can therefore be extended by the above definition.
#
#
# ### Adding a new unitcell into `LatPhysUnitcellLibrary`
#
# Now we are going to assume that we want to add a unitcell to the module `LatPhysUnitcellLibrary`, for which there is no implementation so far. Let's call this new unitcell *MyNewUnitcell*. To this end we have to do the following three steps:
#
#
# * First we create a new file `mynewunitcell.jl` containing the function `getUnitcellMyNewUnitcell` that calls the constructor of the desired unitcell (e.g. as `getUnitcellPyrochlore` above), placed in the directory `./src/unitcells_3d` for three-dimensional unitcells or in `./src/unitcells_2d` for two-dimensional unitcells, respectively. If there are several implementations of the new unitcell, e.g. primitive and conventional, each one gets its own method of `getUnitcellMyNewUnitcell`, specified by a different value
#
# implementation :: Val{1}
# implementation :: Val{2}
# etc.
#
# All implementations of `getUnitcellMyNewUnitcell` have to be stored in the same file `mynewunitcell.jl`.
#
#
# * For three-dimensional unitcells we then add the following new element to the tuple `functions_generate` in `code_generation.jl` in order to generate the interface functions:
#
# ("MyNewUnitcell", 3, 3)
#
# For two-dimensional unitcells this element has to have the form:
#
# ("MyNewUnitcell", 2, 2)
#
# * Finally we include our new definition in the `definitions_3d.jl` (`definitions_2d.jl` respectively) by adding the line
#
# include("unitcells_3d/mynewunitcell.jl")
#
# or
# include("unitcells_2d/mynewunitcell.jl")
#
# After this, when using the module `LatPhysUnitcellLibrary` again, its cache file will be recompiled and the new unitcell will be accessible in the usual manner.
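# As an aside for readers more familiar with Python: the `Val{N}`-style dispatch used above is
# roughly analogous to a registry that maps implementation numbers to builder functions. The
# following sketch is purely illustrative (the names are hypothetical and not part of LatPhys):

```python
# Hypothetical Python analogue of dispatching on Val{N}: a registry
# maps implementation numbers to unitcell builder functions.
_implementations = {}

def implementation(n):
    """Decorator registering a unitcell builder for implementation n."""
    def register(func):
        _implementations[n] = func
        return func
    return register

@implementation(1)
def primitive():
    return "primitive unitcell"

@implementation(42)
def custom():
    return "custom unitcell"

def get_unitcell(n):
    # mirrors the MethodError thrown by Julia for an unknown Val{N}
    if n not in _implementations:
        raise ValueError("no implementation {}".format(n))
    return _implementations[n]()

print(get_unitcell(42))
```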
| basics/unitcells/implementing_new_unitcells.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (py39)
# language: python
# name: py39
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
from salishsea_tools import viz_tools, places, visualisations
import netCDF4 as nc # unless you prefer xarray
import datetime as dt
import os
import glob
import cmocean
from IPython.display import Markdown, display
# %matplotlib inline
# + active=""
# from IPython.display import HTML
#
# HTML('''<script>
# code_show=true;
# function code_toggle() {
# if (code_show){
# $('div.input').hide();
# } else {
# $('div.input').show();
# }
# code_show = !code_show
# }
# $( document ).ready(code_toggle);
# </script>
#
# <form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
# -
# ### Load a file from the 201905 hindcast
f=nc.Dataset('/results/SalishSea/month-avg.201905/SalishSea_1m_201403_201403_ptrc_T.nc')
f1=nc.Dataset('/results/SalishSea/month-avg.201905/SalishSea_1m_201503_201503_ptrc_T.nc')
f2=nc.Dataset('/results/SalishSea/month-avg.201905/SalishSea_1m_201603_201603_ptrc_T.nc')
f3=nc.Dataset('/results/SalishSea/month-avg.201905/SalishSea_1m_201703_201703_ptrc_T.nc')
f4=nc.Dataset('/results/SalishSea/month-avg.201905/SalishSea_1m_201803_201803_ptrc_T.nc')
f5=nc.Dataset('/results/SalishSea/month-avg.201905/SalishSea_1m_201903_201903_ptrc_T.nc')
print(f.variables.keys())
fe3t=nc.Dataset('/results/SalishSea/month-avg.201905/SalishSea_1m_201403_201403_carp_T.nc')
fe3t1=nc.Dataset('/results/SalishSea/month-avg.201905/SalishSea_1m_201503_201503_carp_T.nc')
fe3t2=nc.Dataset('/results/SalishSea/month-avg.201905/SalishSea_1m_201603_201603_carp_T.nc')
fe3t3=nc.Dataset('/results/SalishSea/month-avg.201905/SalishSea_1m_201703_201703_carp_T.nc')
fe3t4=nc.Dataset('/results/SalishSea/month-avg.201905/SalishSea_1m_201803_201803_carp_T.nc')
fe3t5=nc.Dataset('/results/SalishSea/month-avg.201905/SalishSea_1m_201903_201903_carp_T.nc')
# return times as datetime objects:
torig=dt.datetime.strptime(f.variables['time_centered'].time_origin,'%Y-%m-%d %H:%M:%S')
print(torig)
times=np.array([torig + dt.timedelta(seconds=ii) for ii in f.variables['time_centered'][:]])
# load model mesh
with nc.Dataset('/data/eolson/results/MEOPAR/NEMO-forcing-new/grid/mesh_mask201702.nc') as fm:
print(fm.variables.keys())
tmask=fm.variables['tmask'][:,:,:,:]
navlon=fm.variables['nav_lon'][:,:]
navlat=fm.variables['nav_lat'][:,:]
# ### Surface plots - Aerial view
# +
# with pcolormesh: no smoothing
cmap0=cmocean.cm.thermal
cmap0.set_bad('lightgrey')
#cmap1=cmocean.cm.haline
#cmap1.set_bad('k')
il=0
fig,ax=plt.subplots(2,3,figsize=(15,15))
fig.suptitle('March Model Mesozooplankton Biomass', fontsize=16)
m0=ax[0,0].pcolormesh(np.ma.masked_where(tmask[0,0,:,:]==0,f.variables['mesozooplankton'][il,0,:,:]),cmap=cmap0,vmin=0,vmax=2)
viz_tools.set_aspect(ax[0,0],coords='grid')
ax[0,0].set_title('2014')
fig.colorbar(m0,ax=ax[0,0])
m0=ax[0,1].pcolormesh(np.ma.masked_where(tmask[0,0,:,:]==0,f1.variables['mesozooplankton'][il,0,:,:]),cmap=cmap0,vmin=0,vmax=2)
viz_tools.set_aspect(ax[0,1],coords='grid')
ax[0,1].set_title('2015')
fig.colorbar(m0,ax=ax[0,1])
m0=ax[0,2].pcolormesh(np.ma.masked_where(tmask[0,0,:,:]==0,f2.variables['mesozooplankton'][il,0,:,:]),cmap=cmap0,vmin=0,vmax=2)
viz_tools.set_aspect(ax[0,2],coords='grid')
ax[0,2].set_title('2016')
fig.colorbar(m0,ax=ax[0,2])
m0=ax[1,0].pcolormesh(np.ma.masked_where(tmask[0,0,:,:]==0,f3.variables['mesozooplankton'][il,0,:,:]),cmap=cmap0,vmin=0,vmax=2)
viz_tools.set_aspect(ax[1,0],coords='grid')
ax[1,0].set_title('2017')
fig.colorbar(m0,ax=ax[1,0])
m0=ax[1,1].pcolormesh(np.ma.masked_where(tmask[0,0,:,:]==0,f4.variables['mesozooplankton'][il,0,:,:]),cmap=cmap0,vmin=0,vmax=2)
viz_tools.set_aspect(ax[1,1],coords='grid')
ax[1,1].set_title('2018')
fig.colorbar(m0,ax=ax[1,1])
m0=ax[1,2].pcolormesh(np.ma.masked_where(tmask[0,0,:,:]==0,f5.variables['mesozooplankton'][il,0,:,:]),cmap=cmap0,vmin=0,vmax=2)
viz_tools.set_aspect(ax[1,2],coords='grid')
ax[1,2].set_title('2019')
fig.colorbar(m0,ax=ax[1,2])
# -
# ## Depth-Averaged Plots
# +
# with pcolormesh: no smoothing
cmap0=cmocean.cm.thermal
cmap0.set_bad('lightgrey')
#cmap1=cmocean.cm.haline
#cmap1.set_bad('k')
il=0
fig,ax=plt.subplots(2,3,figsize=(15,7))
fig.suptitle('March Model Depth-Averaged Mesozooplankton Biomass', fontsize=16)
intuz=np.sum(f.variables['mesozooplankton'][il,:,:,:]*fe3t.variables['e3t'][il,:,:,:]*tmask[0,:,:,:],0)
avguz=intuz/np.sum(fe3t.variables['e3t'][il,:,:,:]*tmask[0,:,:,:],0)
m1=ax[0,0].pcolormesh(navlon,navlat,np.ma.masked_where(tmask[0,0,:,:]==0,avguz),cmap=cmap0,vmin=0,vmax=1,shading='nearest')
viz_tools.set_aspect(ax[0,0],coords='map')
ax[0,0].set_title('2014');
fig.colorbar(m1,ax=ax[0,0])
intuz1=np.sum(f1.variables['mesozooplankton'][il,:,:,:]*fe3t1.variables['e3t'][il,:,:,:]*tmask[0,:,:,:],0)
avguz1=intuz1/np.sum(fe3t1.variables['e3t'][il,:,:,:]*tmask[0,:,:,:],0)
m1=ax[0,1].pcolormesh(navlon,navlat,np.ma.masked_where(tmask[0,0,:,:]==0,avguz1),cmap=cmap0,vmin=0,vmax=1,shading='nearest')
viz_tools.set_aspect(ax[0,1],coords='map')
ax[0,1].set_title('2015');
fig.colorbar(m1,ax=ax[0,1])
intuz2=np.sum(f2.variables['mesozooplankton'][il,:,:,:]*fe3t2.variables['e3t'][il,:,:,:]*tmask[0,:,:,:],0)
avguz2=intuz2/np.sum(fe3t2.variables['e3t'][il,:,:,:]*tmask[0,:,:,:],0)
m1=ax[0,2].pcolormesh(navlon,navlat,np.ma.masked_where(tmask[0,0,:,:]==0,avguz2),cmap=cmap0,vmin=0,vmax=1,shading='nearest')
viz_tools.set_aspect(ax[0,2],coords='map')
ax[0,2].set_title('2016');
fig.colorbar(m1,ax=ax[0,2])
intuz3=np.sum(f3.variables['mesozooplankton'][il,:,:,:]*fe3t3.variables['e3t'][il,:,:,:]*tmask[0,:,:,:],0)
avguz3=intuz3/np.sum(fe3t3.variables['e3t'][il,:,:,:]*tmask[0,:,:,:],0)
m1=ax[1,0].pcolormesh(navlon,navlat,np.ma.masked_where(tmask[0,0,:,:]==0,avguz3),cmap=cmap0,vmin=0,vmax=1,shading='nearest')
viz_tools.set_aspect(ax[1,0],coords='map')
ax[1,0].set_title('2017');
fig.colorbar(m1,ax=ax[1,0])
intuz4=np.sum(f4.variables['mesozooplankton'][il,:,:,:]*fe3t4.variables['e3t'][il,:,:,:]*tmask[0,:,:,:],0)
avguz4=intuz4/np.sum(fe3t4.variables['e3t'][il,:,:,:]*tmask[0,:,:,:],0)
m1=ax[1,1].pcolormesh(navlon,navlat,np.ma.masked_where(tmask[0,0,:,:]==0,avguz4),cmap=cmap0,vmin=0,vmax=1,shading='nearest')
viz_tools.set_aspect(ax[1,1],coords='map')
ax[1,1].set_title('2018');
fig.colorbar(m1,ax=ax[1,1])
intuz5=np.sum(f5.variables['mesozooplankton'][il,:,:,:]*fe3t5.variables['e3t'][il,:,:,:]*tmask[0,:,:,:],0)
avguz5=intuz5/np.sum(fe3t5.variables['e3t'][il,:,:,:]*tmask[0,:,:,:],0)
m1=ax[1,2].pcolormesh(navlon,navlat,np.ma.masked_where(tmask[0,0,:,:]==0,avguz5),cmap=cmap0,vmin=0,vmax=1,shading='nearest')
viz_tools.set_aspect(ax[1,2],coords='map')
ax[1,2].set_title('2019');
fig.colorbar(m1,ax=ax[1,2])
# -
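# The depth average above is a thickness-weighted mean: each depth level is weighted by its cell
# thickness `e3t`, and land points are excluded via `tmask`. A standalone sketch with synthetic
# arrays (shapes and values are illustrative only, not model output):

```python
import numpy as np

# synthetic column: 3 depth levels over a 2x2 horizontal grid, axes (depth, y, x)
var = np.array([[[1.0, 2.0], [3.0, 4.0]],
                [[2.0, 2.0], [3.0, 4.0]],
                [[3.0, 2.0], [3.0, 4.0]]])
e3t = np.array([[[1.0, 1.0], [1.0, 1.0]],      # level thicknesses
                [[2.0, 2.0], [2.0, 2.0]],
                [[4.0, 4.0], [4.0, 4.0]]])
tmask = np.ones_like(var)
tmask[2, 0, 0] = 0                              # deepest cell at (0, 0) is land

# thickness-weighted vertical average, masked cells excluded
avg = np.sum(var * e3t * tmask, axis=0) / np.sum(e3t * tmask, axis=0)
print(avg)
```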
f.close()
fe3t.close()
f1.close()
fe3t1.close()
f2.close()
fe3t2.close()
f3.close()
fe3t3.close()
f4.close()
fe3t4.close()
f5.close()
fe3t5.close()
| notebooks/SalishSeaModelMesoZoopBiomass_March.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/huzaifa525/LetsUpgrade-Python-Essentials-Batch7-/blob/master/Day_9_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="UUEVeu4KEibU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="d077fc70-cd4a-44b9-f9f7-de21b235a11b"
print("Assignment Submitted by <NAME>")
print("<EMAIL>")
# + [markdown] id="sprnb5wHJGjv" colab_type="text"
# ### Q2. Make a small generator program for returning Armstrong numbers between 1-1000 in a generator object.
# + id="358TEP7aJJxG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fedbab50-5bfd-43b5-b729-763635a57444"
def Armstrong(h,w):
for n in range(h,w+1):
order = len(str(n))
sum = 0
temp = n
while temp > 0:
digit = temp % 10
sum += digit ** order
temp //= 10
if n == sum:
yield n
Numbers = Armstrong(1, 1000)
list(Numbers)
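# One caveat with generators: they can only be iterated once. After `list(Numbers)` has consumed
# the generator above, a second pass yields nothing; create a fresh generator instead. A minimal
# demonstration:

```python
def squares(n):
    # a simple generator, analogous to Armstrong(h, w) above
    for i in range(n):
        yield i * i

g = squares(3)
print(list(g))            # [0, 1, 4]
print(list(g))            # [] -- the generator is exhausted
print(list(squares(3)))   # [0, 1, 4] -- a fresh generator works again
```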
# + [markdown] id="4OsNTSDMElmt" colab_type="text"
# Installing Pylint
# + id="I4geqTWeFl91" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 188} outputId="5b906dbb-9736-4b5c-f80f-3e72c99a9f5c"
# !pip install pylint
# + id="gFQiRwKzFqib" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e7605aaf-f744-4a0c-d1eb-7a19db39226e"
# + id="Vuzx575FGzpv" colab_type="code" colab={}
# + id="SuEzbLyJHMkh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 128} outputId="561bba84-d424-4182-9846-b5ce163eb4ed"
# !pylint "prime.py"
# + id="p1BjdeFxHlN7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 128} outputId="e0a8cc76-fcbb-4b53-9274-84c3f5cbf7b6"
# !python "prime.py"
| Day 9/Day_9_Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # VIII - Parallel and Distributed Execution
# In this notebook, we will execute training across multiple nodes (or in parallel across a single node with multiple GPUs). We will train a ResNet-20 image classification model on the CIFAR-10 dataset across multiple nodes.
#
# Azure Batch and Batch Shipyard have the ability to perform "gang scheduling" or scheduling multiple nodes for a single task. This is most commonly used for Message Passing Interface (MPI) jobs.
#
# * [Setup](#section1)
# * [Configure MPI Job and Submit](#section2)
# * [Delete Multi-Instance Job](#section3)
# <a id='section1'></a>
# ## Setup
# Create a simple alias for Batch Shipyard
# %alias shipyard SHIPYARD_CONFIGDIR=config python $HOME/batch-shipyard/shipyard.py %l
# Check that everything is working
shipyard
# Let's first delete the pool used in the non-advanced notebooks and wait for it to be removed so we can free up our core quota. We need to create a new pool with different settings and Docker image.
shipyard pool del -y --wait
# Read in the account information we saved earlier
# +
import json
import os
def read_json(filename):
with open(filename, 'r') as infile:
return json.load(infile)
def write_json_to_file(json_dict, filename):
""" Simple function to write JSON dictionaries to files
"""
with open(filename, 'w') as outfile:
json.dump(json_dict, outfile)
account_info = read_json('account_information.json')
storage_account_key = account_info['storage_account_key']
storage_account_name = account_info['storage_account_name']
STORAGE_ALIAS = account_info['STORAGE_ALIAS']
# -
# Create the `resource_files` to randomly download train and test data for CNTK
# +
import random
IMAGE_NAME = 'alfpark/cntk:2.0.rc2-gpu-1bit-sgd-python3.5-cuda8.0-cudnn5.1'
CNTK_TRAIN_DATA_FILE = 'Train_cntk_text.txt'
CNTK_TEST_DATA_FILE = 'Test_cntk_text.txt'
CNTK_DATA_BATCHES_FILE = 'cifar-10-batches-py.tar.gz'
URL_FMT = 'https://{}.blob.core.windows.net/{}/{}'
def select_random_data_storage_container():
"""Randomly select a storage account and container for CNTK train/test data.
This is specific for the workshop to help distribute attendee load. This
function will only work on Python2"""
ss = random.randint(0, 4)
cs = random.randint(0, 4)
sa = '{}{}bigai'.format(ss, chr(ord('z') - ss))
cont = '{}{}{}'.format(cs, chr(ord('i') - cs * 2), chr(ord('j') - cs * 2))
return sa, cont
def create_resource_file_list():
sa, cont = select_random_data_storage_container()
ret = [{
'file_path': CNTK_TRAIN_DATA_FILE,
'blob_source': URL_FMT.format(sa, cont, CNTK_TRAIN_DATA_FILE)
}]
sa, cont = select_random_data_storage_container()
ret.append({
'file_path': CNTK_TEST_DATA_FILE,
'blob_source': URL_FMT.format(sa, cont, CNTK_TEST_DATA_FILE)
})
sa, cont = select_random_data_storage_container()
ret.append({
'file_path': CNTK_DATA_BATCHES_FILE,
'blob_source': URL_FMT.format(sa, cont, CNTK_DATA_BATCHES_FILE)
})
return ret
# -
# Create data set conversion scripts to be uploaded. On real production runs, we would already have this data pre-converted instead of converting at the time of node startup.
# +
# %%writefile convert_cifar10.py
from __future__ import print_function
import cifar_utils as ut
print ('Converting train data to png images...')
ut.saveTrainImages(r'./Train_cntk_text.txt', 'train')
print ('Done.')
print ('Converting test data to png images...')
ut.saveTestImages(r'./Test_cntk_text.txt', 'test')
print ('Done.')
# +
# %%writefile convert_cifar10.sh
# #!/usr/bin/env bash
set -e
set -o pipefail
# -
with open('convert_cifar10.sh', 'a') as fd:
fd.write('\n\nIMAGE_NAME="{}"\n'.format(IMAGE_NAME))
# +
# %%writefile -a convert_cifar10.sh
CIFAR_DATA=$AZ_BATCH_NODE_SHARED_DIR/cifar10_data
CIFAR_BATCHES=cifar-10-batches-py.tar.gz
# mv $AZ_BATCH_TASK_WORKING_DIR/*_cntk_text.txt $AZ_BATCH_TASK_WORKING_DIR/$CIFAR_BATCHES $CIFAR_DATA
# echo "Converting CNTK train/test data, this will take some time..."
pushd $CIFAR_DATA
tar zxvpf $CIFAR_BATCHES
# rm $CIFAR_BATCHES
chmod 755 run_cifar10.sh
# mv run_cifar10.sh $AZ_BATCH_NODE_SHARED_DIR
popd
docker run --rm -v $CIFAR_DATA:$CIFAR_DATA -w $CIFAR_DATA $IMAGE_NAME /bin/bash -c "source /cntk/activate-cntk; cp /cntk/Examples/Image/DataSets/CIFAR-10/* .; python convert_cifar10.py"
# -
# Additionally, we'll create an MPI helper script for executing the MPI job. This helper script does the following:
# 1. Ensures that there are GPUs available to execute the task.
# 2. Parses the `$AZ_BATCH_HOST_LIST` for all of the hosts participating in the MPI job and creates a `hostfile` from it
# 3. Computes the total number of slots (processors)
# 4. Sets the proper CNTK training directory, script and options
# 5. Executes the MPI job via `mpirun`
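# The host-list parsing in step 2 can be sketched in Python. The Batch service sets
# `AZ_BATCH_HOST_LIST` to a comma-separated list of the participating nodes; the IP addresses
# and GPU count below are made up for illustration:

```python
import os

# simulate the environment variable that Azure Batch would provide
os.environ['AZ_BATCH_HOST_LIST'] = '10.0.0.4,10.0.0.5,10.0.0.6'
gpus_per_node = 1  # e.g. one GPU on a STANDARD_NC6

hosts = os.environ['AZ_BATCH_HOST_LIST'].split(',')
hostfile_lines = ['{} slots={} max-slots={}'.format(h, gpus_per_node, gpus_per_node)
                  for h in hosts]
total_procs = len(hosts) * gpus_per_node  # number of MPI processes

print('\n'.join(hostfile_lines))
print('num mpi processes: {}'.format(total_procs))
```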
# +
# %%writefile run_cifar10.sh
# #!/usr/bin/env bash
set -e
set -o pipefail
# get number of GPUs on machine
ngpus=$(nvidia-smi -L | wc -l)
# echo "num gpus: $ngpus"
if [ $ngpus -eq 0 ]; then
echo "No GPUs detected."
exit 1
fi
# get number of nodes
IFS=',' read -ra HOSTS <<< "$AZ_BATCH_HOST_LIST"
nodes=${#HOSTS[@]}
# create hostfile
touch hostfile
>| hostfile
for node in "${HOSTS[@]}"
do
echo $node slots=$ngpus max-slots=$ngpus >> hostfile
done
# compute number of processors
np=$(($nodes * $ngpus))
# print configuration
# echo "num nodes: $nodes"
# echo "hosts: ${HOSTS[@]}"
# echo "num mpi processes: $np"
# set cntk related vars
modeldir=/cntk/Examples/Image/Classification/ResNet/Python
trainscript=TrainResNet_CIFAR10_Distributed.py
# set training options
trainopts="--datadir $AZ_BATCH_NODE_SHARED_DIR/cifar10_data --outputdir $AZ_BATCH_TASK_WORKING_DIR/output --network resnet20 -q 1 -a 0"
# execute mpi job
/root/openmpi/bin/mpirun --allow-run-as-root --mca btl_tcp_if_exclude docker0 \
-np $np --hostfile hostfile -x LD_LIBRARY_PATH --wdir $modeldir \
/bin/bash -c "source /cntk/activate-cntk; python -u $trainscript $trainopts $*"
# -
# Move the files into a directory to be uploaded.
# +
INPUT_CONTAINER = 'input-dist'
UPLOAD_DIR = 'dist_upload'
# !rm -rf $UPLOAD_DIR
# !mkdir -p $UPLOAD_DIR
# !mv convert_cifar10.* run_cifar10.sh $UPLOAD_DIR
# !ls -alF $UPLOAD_DIR
# -
# We will create the config structure to directly reference these files to ingress into Azure Storage. This obviates the need to call `blobxfer` as it will be done for us during pool creation.
config = {
"batch_shipyard": {
"storage_account_settings": STORAGE_ALIAS
},
"global_resources": {
"docker_images": [
IMAGE_NAME
],
"files": [
{
"source": {
"path": UPLOAD_DIR
},
"destination": {
"storage_account_settings": STORAGE_ALIAS,
"data_transfer": {
"container": INPUT_CONTAINER
}
}
}
]
}
}
# Now we'll create the pool specification with a few modifications for this particular execution:
# - `inter_node_communication_enabled` will ensure nodes are allocated such that they can communicate with each other (e.g., send and receive network packets)
# - `input_data` specifies the scripts we created above to be downloaded into `$AZ_BATCH_NODE_SHARED_DIR/cifar10_data`
# - `transfer_files_on_pool_creation` will transfer the `files` specified in `global_resources` to be transferred during pool creation (i.e., `pool add`)
# - `resource_files` are the CNTK train and test data files
# - `additional_node_prep_commands` are commands to execute for node preparation of all compute nodes. Our additional node prep command is to execute the conversion script we created in an earlier step above
#
# **Note:** Most often it is better to scale up the execution first, prior to scale out. Due to our default core quota of just 20 cores, we are using 3 `STANDARD_NC6` nodes. In real production runs, we'd most likely scale up to multiple GPUs within a single node (parallel execution) such as `STANDARD_NC12` or `STANDARD_NC24` prior to scaling out to multiple NC nodes (parallel and distributed execution).
# +
POOL_ID = 'gpupool-multi-instance'
pool = {
"pool_specification": {
"id": POOL_ID,
"vm_size": "STANDARD_NC6",
"vm_count": 3,
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04-LTS",
"ssh": {
"username": "docker"
},
"inter_node_communication_enabled": True,
"reboot_on_start_task_failed": False,
"block_until_all_global_resources_loaded": True,
"input_data": {
"azure_storage": [
{
"storage_account_settings": STORAGE_ALIAS,
"container": INPUT_CONTAINER,
"destination": "$AZ_BATCH_NODE_SHARED_DIR/cifar10_data"
}
]
},
"transfer_files_on_pool_creation": True,
"resource_files": create_resource_file_list(),
"additional_node_prep_commands": [
"/bin/bash $AZ_BATCH_NODE_SHARED_DIR/cifar10_data/convert_cifar10.sh"
]
}
}
# -
# !mkdir config
write_json_to_file(config, os.path.join('config', 'config.json'))
write_json_to_file(pool, os.path.join('config', 'pool.json'))
print(json.dumps(config, indent=4, sort_keys=True))
print(json.dumps(pool, indent=4, sort_keys=True))
# Create the pool, please be patient while the compute nodes are allocated.
shipyard pool add -y
# Ensure that all compute nodes are `idle` and ready to accept tasks:
shipyard pool listnodes
# <a id='section2'></a>
# ## Configure MPI Job and Submit
# MPI jobs in Batch require execution as a multi-instance task. Essentially this allows multiple compute nodes to be used for a single task.
#
# A few things to note in this jobs configuration:
# - The `COMMAND` executes the `run_cifar10.sh` script that was uploaded earlier as part of the node preparation task.
# - `auto_complete` is being set to `True` which forces the job to move from `active` to `completed` state once all tasks complete. Note that once a job has moved to `completed` state, no new tasks can be added to it.
# - `multi_instance` property is populated which enables multiple nodes, e.g., `num_instances` to participate in the execution of this task. The `coordination_command` is the command that is run on all nodes prior to the `command`. Here, we are simply executing the Docker image to run the SSH server for the MPI daemon (e.g., orted, hydra, etc.) to initialize all of the nodes prior to running the application command.
# +
JOB_ID = 'cntk-mpi-job'
# reduce the number of epochs to 20 for purposes of this notebook
COMMAND = '$AZ_BATCH_NODE_SHARED_DIR/run_cifar10.sh -e 20'
jobs = {
"job_specifications": [
{
"id": JOB_ID,
"auto_complete": True,
"tasks": [
{
"image": IMAGE_NAME,
"remove_container_after_exit": True,
"command": COMMAND,
"gpu": True,
"multi_instance": {
"num_instances": "pool_current_dedicated",
"coordination_command": "/usr/sbin/sshd -D -p 23"
},
}
],
}
]
}
# -
write_json_to_file(jobs, os.path.join('config', 'jobs.json'))
print(json.dumps(jobs, indent=4, sort_keys=True))
# Submit the job and tail `stdout.txt`:
shipyard jobs add --tail stdout.txt
# Using the `shipyard jobs listtasks` command below we can check the status of our jobs. Once all tasks have an exit code we can continue. You can also view the **heatmap** of this pool on [Azure Portal](https://portal.azure.com) to monitor the progress of this job on the compute nodes under your Batch account.
# <a id='section3'></a>
# ## Delete Multi-instance Job
# Deleting multi-instance jobs running as Docker containers requires a little more care. We will need to first ensure that the job has entered `completed` state. In the above `jobs` configuration, we set `auto_complete` to `True` enabling the Batch service to automatically complete the job when all tasks finish. This also allows automatic cleanup of the running Docker containers used for executing the MPI job.
#
# Special logic is required to clean up MPI jobs since the `coordination_command` detaches an SSH server. The job auto-completion logic that Batch Shipyard injects ensures that these containers are killed.
shipyard jobs listtasks
# Once we are sure that the job is completed, then we issue the standard delete command:
shipyard jobs del -y --termtasks --wait
# ## Next Steps
# You can proceed to the [Notebook: Clean Up](05_Clean_Up.ipynb) if you are done for now, or proceed to one of the following additional Notebooks:
# * [Notebook: Automatic Model Selection](06_Advanced_Auto_Model_Selection.ipynb)
# * [Notebook: Tensorboard Visualization](07_Advanced_Tensorboard.ipynb) - note this requires running this notebook on your own machine
# * [Notebook: Keras with TensorFlow](09_Keras_Single_GPU_Training_With_Tensorflow.ipynb)
| Big AI - Applying Artificial Intelligence at Scale/Deep Learning at Scale/08_Advanced_Parallel_and_Distributed.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# import Pandas
import pandas as pd
# ## Series and DataFrames
# Load the Olympics data into a DataFrame
oo = pd.read_csv('data/olympics.csv', skiprows=4)
oo.head()
# ### Accessing Series
# #### List any column using both the ['..'] and dot notation. What type are these objects?
oo['City']
oo.City
oo[['City', 'Athlete', 'Edition']]
# the whole table is a DataFrame
type(oo)
# a single column is a Series
type(oo.City)
type(oo[['City', 'Athlete', 'Edition']])
# ## Data Input and Validations
# ### shape
oo.shape
# for number of rows
oo.shape[0]
# for number of columns
oo.shape[1]
# ### head() & tail()
oo.head() # first 5 rows
oo.tail() # last 5 rows
oo.head(10)
oo.tail(10)
# ### info()
oo.info()
# ## Basic analysis
# ### value_counts()
oo.Edition.value_counts() # number of medals handed out in each edition
oo.Gender.value_counts(ascending=True)
# ### sort_values()
oo.Athlete.sort_values()
oo.sort_values(by=['Athlete']).head(10)
# ### Boolean indexing
gold = oo.Medal == 'Gold'
gold.head(10)
oo[oo.Medal == 'Gold']
oo[(oo.Medal == 'Gold') & (oo.Gender == 'Women')]
# ### String handling
oo[oo.Athlete.str.contains("Charlotte")] # find athletes whose full name contains "Charlotte"
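# `str.contains` matches plain substrings (and regular expressions). For a case-insensitive search you can pass `case=False`; a small self-contained sketch with made-up names:

```python
import pandas as pd

# hypothetical mini table; the real notebook uses the Olympics csv
demo = pd.DataFrame({'Athlete': ['COOPER, Charlotte', 'HILLYARD, George']})
matches = demo[demo['Athlete'].str.contains('charlotte', case=False)]
print(matches)
```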
| CH1 Basic Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 340} colab_type="code" executionInfo={"elapsed": 119722, "status": "ok", "timestamp": 1587811661792, "user": {"displayName": "COVID1<NAME>", "photoUrl": "", "userId": "12351684125312759490"}, "user_tz": -330} id="zTuR6l89qb5N" outputId="629296d2-b95c-4f31-ab55-7490aadcc68c"
from __future__ import print_function
#Import proper libraries
import os
import numpy as np
from skimage.io import imsave, imread
from PIL import Image
data_path = 'data/'
#Set Image width and height
image_rows = 512
image_cols = 512
#Function to create training data
def create_train_data():
train_data_path = os.path.join(data_path, 'train/Image PP/')
train_data_Label_path = os.path.join(data_path, 'train/Label/')
images = os.listdir(train_data_path)
total = len(images)
imgs = np.ndarray((total, image_rows, image_cols), dtype=np.uint8)
imgs_mask = np.ndarray((total, image_rows, image_cols), dtype=np.uint8)
i = 0
print('-'*30)
print('Creating training images...')
print('-'*30)
for image_name in images:
img = imread(os.path.join(train_data_path, image_name), as_gray=True)
img_mask = imread(os.path.join(train_data_Label_path, image_name), as_gray=True)
img = np.array([img])
img_mask = np.array([img_mask])
imgs[i] = img
imgs_mask[i] = img_mask
if i % 50 == 0:
print('Done: {0}/{1} images'.format(i, total))
i += 1
print('Loading done.')
np.save('imgs_train.npy', imgs)
np.save('imgs_mask_train.npy', imgs_mask)
print('Saving to .npy files done.')
#Function to load training data
def load_train_data():
imgs_train = np.load('imgs_train.npy')
imgs_mask_train = np.load('imgs_mask_train.npy')
return imgs_train, imgs_mask_train
#Function to create testing data
def create_test_data():
train_data_path = os.path.join(data_path, 'test/Image PP')
images = os.listdir(train_data_path)
total = len(images)
imgs = np.ndarray((total, image_rows, image_cols), dtype=np.uint8)
imgs_id = np.ndarray((total, ), dtype=np.int32)
i = 0
print('-'*30)
print('Creating test images...')
print('-'*30)
for image_name in images:
img_id = int(image_name.split('.')[0])
img = imread(os.path.join(train_data_path, image_name), as_gray=True)
img = np.array([img])
imgs[i] = img
imgs_id[i] = img_id
if i % 10 == 0:
print('Done: {0}/{1} images'.format(i, total))
i += 1
print('Loading done.')
np.save('imgs_test.npy', imgs)
np.save('imgs_id_test.npy', imgs_id)
print('Saving to .npy files done.')
#Function to load testing data
def load_test_data():
imgs_test = np.load('imgs_test.npy')
imgs_id = np.load('imgs_id_test.npy')
return imgs_test, imgs_id
if __name__ == '__main__':
create_train_data()
create_test_data()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 5834331, "status": "ok", "timestamp": 1587817861551, "user": {"displayName": "COVID19 Segmentation", "photoUrl": "", "userId": "12351684125312759490"}, "user_tz": -330} id="l-Ke6uUKqda4" outputId="4b7a41a3-1258-49a5-b76b-bbce5f464689"
from __future__ import print_function
#Import proper libraries
import tensorflow as tf
import os
from skimage.transform import resize
from skimage.io import imsave
import numpy as np
from keras.models import Model
from keras.layers import Input, concatenate, Conv2D, MaxPooling2D, Conv2DTranspose, Activation, UpSampling2D, Add
from keras.layers import BatchNormalization
from keras.optimizers import Adam
from keras import backend as K
from keras import losses,metrics
from keras.callbacks import ModelCheckpoint
from keras.initializers import RandomNormal
from keras.applications import InceptionResNetV2
from data_preparation import load_train_data, load_test_data
smooth = 1.
# `img_w`/`img_h` are used by preprocess() below but were never defined in this
# snippet; 224 is assumed here to match the InceptionResNetV2 input shape
img_w, img_h = 224, 224
def dice_coef(y_true, y_pred):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
def dice_coef_loss(y_true, y_pred):
    # negated so that minimizing the loss maximizes the Dice overlap
    return -dice_coef(y_true, y_pred)
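# As a quick sanity check of the Dice formula itself (plain NumPy, independent of the Keras backend; `dice_np` is just an illustrative mirror of `dice_coef`):

```python
import numpy as np

SMOOTH = 1.0

def dice_np(y_true, y_pred):
    # NumPy mirror of dice_coef above: (2*|A∩B| + s) / (|A| + |B| + s)
    yt, yp = np.ravel(y_true), np.ravel(y_pred)
    inter = np.sum(yt * yp)
    return (2.0 * inter + SMOOTH) / (np.sum(yt) + np.sum(yp) + SMOOTH)

a = np.array([1.0, 1.0, 0.0, 0.0])
print(dice_np(a, a))        # identical masks: (2*2+1)/(2+2+1) = 1.0
print(dice_np(a, 1.0 - a))  # disjoint masks:  (0+1)/(2+2+1) = 0.2
```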
def preprocess(imgs):
imgs_p = np.ndarray((imgs.shape[0], img_w, img_h), dtype=np.uint8)
for i in range(imgs.shape[0]):
imgs_p[i] = resize(imgs[i], (img_h, img_w), preserve_range=True)
imgs_p = imgs_p[..., np.newaxis]
return imgs_p
def train_and_predict():
print('Loading and preprocessing train data...')
imgs_train, imgs_mask_train = load_train_data()
imgs_train = preprocess(imgs_train)
imgs_mask_train = preprocess(imgs_mask_train)
imgs_train = imgs_train.astype('float32')
mean = np.mean(imgs_train) # mean for data centering
std = np.std(imgs_train) # std for data normalization
imgs_train -= mean
imgs_train /= std
imgs_mask_train = imgs_mask_train.astype('float32')
imgs_mask_train /= 255. # scale masks to [0, 1]
    model=InceptionResNetV2(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    print(model.summary())
    # the model must be compiled before fit(); the loss/metrics here are an assumption
    # chosen to match the history keys ('loss', 'dice_coef', 'acc') plotted below
    model.compile(optimizer=Adam(), loss=dice_coef_loss, metrics=[dice_coef, 'acc'])
    model_checkpoint = ModelCheckpoint('unet_weights_150_Img_IR.h5', monitor='val_loss', save_best_only=True)
print('Fitting model...')
    hist=model.fit(imgs_train, imgs_mask_train, batch_size=16, epochs=1000, verbose=1, shuffle=True,  # `nb_epoch` was renamed `epochs` in Keras 2
validation_split=0.2,
callbacks=[model_checkpoint])
imgs_test, imgs_id_test = load_test_data()
imgs_test = preprocess(imgs_test)
imgs_test = imgs_test.astype('float32')
mean=np.mean(imgs_test)
std=np.std(imgs_test)
imgs_test -= mean
imgs_test /= std
model.load_weights('unet_weights_150_Img_IR.h5')
print('Predicting masks on test data...')
imgs_mask_test = model.predict(imgs_test, verbose=1)
np.save('imgs_mask_test.npy', imgs_mask_test)
pred_dir = 'preds_HR'
if not os.path.exists(pred_dir):
os.mkdir(pred_dir)
for image, image_id in zip(imgs_mask_test, imgs_id_test):
image = (image[:, :, 0] * 255.).astype(np.uint8)
imsave(os.path.join(pred_dir, str(image_id) + '_pred.png'), image)
import matplotlib.pyplot as plt
import pickle
    model.load_weights('unet_weights_150_Img_IR.h5')  # matches the checkpoint filename saved above (was *_HR.h5)
l_hr=plt.plot(hist.history['loss'], color='b')
vl_hr=plt.plot(hist.history['val_loss'], color='r')
plt.title('Loss Curve')
pickle.dump(l_hr, open('Loss_HR.fig.pickle', 'wb')) # This is for Python 3 - py2 may need `file` instead of `open`
pickle.dump(vl_hr, open('Val_Loss_HR.fig.pickle', 'wb')) # This is for Python 3 - py2 may need `file` instead of `open`
plt.show()
d_hr=plt.plot(hist.history['dice_coef'], color='b')
vd_hr=plt.plot(hist.history['val_dice_coef'], color='r')
plt.title('Dice Coefficient Curve')
pickle.dump(d_hr, open('Dice_HR.fig.pickle', 'wb')) # This is for Python 3 - py2 may need `file` instead of `open`
pickle.dump(vd_hr, open('Val_Dice_HR.fig.pickle', 'wb')) # This is for Python 3 - py2 may need `file` instead of `open`
plt.show()
a_hr=plt.plot(hist.history['acc'], color='b')
va_hr=plt.plot(hist.history['val_acc'], color='r')
plt.title('Accuracy Curve')
pickle.dump(a_hr, open('Acc_HR.fig.pickle', 'wb')) # This is for Python 3 - py2 may need `file` instead of `open`
pickle.dump(va_hr, open('Val_Acc_HR.fig.pickle', 'wb')) # This is for Python 3 - py2 may need `file` instead of `open`
plt.show()
if __name__ == '__main__':
train_and_predict()
| Train/InceptionResNetV2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Numpy
#
# ## BProf Python course
#
# ### June 25-29, 2018
#
# #### <NAME> & <NAME>
#
# "NumPy is the fundamental package for scientific computing with Python."
#
# https://numpy.org/
#
# ### Numpy provides
#
# * linear algebra, Fourier transform and random numbers,
# * easy-to-use matrices, arrays, tensors,
# * heavy optimization and
# * C/C++/Fortran integration.
#
# "Numpy is the MatLab of python!"
#
# ### Import as `np` by convention
import numpy as np
# Numpy uses an underlying [BLAS](http://www.netlib.org/blas/) library, just as MatLab does. These libraries employ vectorization.
# * Anaconda uses IntelMKL (Intel's proprietary math library)
# * If you install numpy manually, and you have previously installed [OpenBLAS](http://www.openblas.net/) (free, opensource), then numpy will use that.
np.show_config()
# # N-dimensional arrays
#
# The core object of `numpy` is the `ndarray` (_$n$-dimensional array_).
A = np.array([[[1, 2], [3, 4], [5, 6]]])
print(type(A))
A
# `A.shape` is a tuple of the array's dimensions
A.shape
# and `dtype` is the type of the elements
A.dtype
A = np.array([1.5, 2])
A.dtype
# ## Accessing elements
#
# Arrays are zero-indexed.
#
# #### accessing one row
A = np.array([[1, 2], [3, 4], [5, 6]])
A[0], A[1]
# #### one column
A[:, 0]
# #### single element
A[2, 1], type(A[2, 1])
# #### range of rows / columns
A[:2] # or A[:2, :]
A[:, 1:]
A[::2]
# In general, an $n$-dimensional array requires $n$ indices to access its scalar elements.
B = np.array([[[1, 2, 3],[4, 5, 6]]])
B.shape, B.ndim
B[0].shape
B[0, 1], B[0, 1].shape
# 3 indices access scalar members if ndim is 3
B[0, 1, 2], type(B[0, 1, 2])
# ## Under the hood
#
# The default array representation is C style ([row-major](https://en.wikipedia.org/wiki/Row-_and_column-major_order)) indexing. But you shouldn't rely on the representation, it is not recommended to use the low level C arrays.
print(B.strides)
print(B.flags)
# # Operations on arrays
#
# ## Element-wise arithmetic operators
#
# Arithmetic operators are overloaded, they act element-wise.
A = np.array([[1, 1], [2, 2]])
P = A >= 2
print(P)
print(P.dtype)
A + A
A * A
np.exp(A)
2**A
1/A
# ## Matrix algebraic operations
#
# `dot` is the standard matrix product
A = np.array([[1, 2], [3, 4]])
A.dot(A)
# inner dimensions must match
B = np.array([[1, 2, 3], [4, 5, 6]])
print(A.shape, B.shape)
A.dot(B)
# B.dot(A)
# +
A_inv = np.linalg.inv(A)
print(A_inv)
np.allclose(A_inv.dot(A), np.eye(2))
# -
np.round(A_inv.dot(A), 5)
# pseudo-inverse can be computed with `np.linalg.pinv`
# +
A = np.array([[1, 2, 3], [4, 5, 6]])
A_pinv = np.linalg.pinv(A)
A.dot(A_pinv).dot(A)
# -
# Also, there is a `matrix` class for which `*` acts as a matrix product.
M = np.matrix([[1, 2], [3, 4]])
print(np.multiply(M, M))
print(M * M)
print(M ** 5)
# # Casting
#
# C types are available in `numpy`
P = np.array([[1.2, 1], [1.5, 0]])
print(P.dtype)
P.astype(np.int8)
(-P.astype(int)).astype("uint32")
np.array([[1, 2], [3, 4]], dtype="float32")
# Directly converts strings to numbers
np.float32('-10')
# `dtype` can be specified during array creation
np.array(['10', '20'], dtype="float32")
# #### `np.datetime64`
np.datetime64("2018-03-10")
np.datetime64("2018-03-10") - np.datetime64("2017-12-13")
# #### String arrays
T = np.array(['apple', 'plum'])
print(T)
print(T.shape, T.dtype, type(T))
# Fixed length character arrays:
T[1] = "banana"
T
# ## Slicing, advanced indexing
#
# `:` retrieves the full size along that dimension.
A = np.array([[1, 2, 3], [4, 5, 6]])
print(A)
print(A[0])
print(A[0, :]) # first row
print(A[:, 0]) # first column
# These are 1D vectors, neither $1\times n$ nor $n\times1$ matrices!
A[0, :].shape, A[:, 0].shape
B = np.array([[[1, 2, 3],[4, 5, 6]]])
B.shape
print(B[:, 1, :].shape)
B[:, 1, :]
B[0, 1, :], B[0, 1, :].shape
type(B[0, 1, 1]), B[0, 1, 1]
# All python range indexing also work, like reverse:
print(A[:, ::-1])
print(A[::-1, :])
print(A[:, ::2])
# ## Advanced indexing
#
# Advanced indexing is when the index is a list.
B = np.array([[[1, 2, 3], [4, 5, 6]]])
print("B shape:", B.shape)
print(B[0, 0, [0, 2]].shape)
B[0, 0, [0, 2]]
# first and third "column"
B[0, :, [0, 2]].shape
# third and first "column"
B[0, :, [2, 0]]
# one column can be selected multiple times and the list of indices doesn't have to be ordered
B[0, :, [2, 0, 2, 2]]
# ### Advanced indexing theory
#
# If exactly one index is a list, the corresponding dimension of the result has size `len(list)`. If several indices are lists, they are broadcast against each other and the elements are selected pairwise; two equal-length lists therefore yield a one-dimensional result (see `A[x, y]` below).
#
# The size of a particular dimension remains when the corresponding index is a colon (`:`).
#
# If an index is a scalar then that dimension disappears from the shape of the output.
#
# One can use a one-length list in advanced indexing. In that case, the number of dimensions remains but the size of that dimension becomes one.
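# A compact check of both indexing cases (`B` here is a fresh example array):

```python
import numpy as np

B = np.arange(24).reshape(2, 3, 4)
# exactly one list: that dimension takes the list's length
print(B[:, :, [0, 2]].shape)      # (2, 3, 2)
# several lists: they are broadcast together and indexed pairwise
print(B[[0, 1], [0, 2], [1, 3]])  # [B[0,0,1], B[1,2,3]]
```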
B = np.array([[[1, 2, 3], [4, 5, 6]]])
B[:, :, 2].shape
B[:, :, 2]
B[:, :, [2]].shape
B[:, :, [2]]
A = np.arange(20).reshape(4, 5)
A
x = [0, 0, 1, 1]
y = [4, 0, 1, 2]
A[x, y]
# ## Changing shape
#
# The shape of an array can be modified with `reshape`, as long as the number of elements remains the same. The underlying elements are unchanged and not copied in the memory.
B = np.array([[[1, 2, 3], [4, 5, 6]]])
print(B.shape)
print(B.reshape((2, 3)).shape)
B.reshape((2, 3))
B.reshape((3, 2))
# A `ValueError` is raised if the shape is invalid:
# +
# B.reshape(7) # raises ValueError
# -
# We have a shorthand now to create arrays like B:
np.array(range(6)).reshape((1, 2, 3))
# Size `-1` can be used to span the resulted array as much as it can in that dimension.
X = np.array(range(12)).reshape((2, -1, 2))
print("X.shape:", X.shape)
print(X)
# `resize` deletes elements or fills with zeros, but it works only _in place_.
X = np.array([[1, 2], [3, 4]])
X.resize((5, 3))
#X.resize((2, 2))
X
# However, `np.resize` (not a member) works differently
X = np.array([[1, 2], [3, 4]])
np.resize(X, (5, 3))
# `X` is unchanged:
X
# # Constructing arrays
# Aside from manually specifying each element, there are various functions for array creation:
#
# * `arange`: range
# * `linspace`: equally divided interval
# * `ones`, `ones_like`, array filled with ones
# * `zeros`, `zeros_like`, array filled with zeros
# * `eye`: identity matrix, only 2D
#
# `np.ones_like()` and `np.zeros_like()` keep the shape and `dtype`!
np.arange(10), np.arange(10).shape
np.arange(1, 21, 2).reshape(5, -1)
np.arange(0, 1.00001, .1001)[-1]
np.linspace(0, 4, 11)
np.ones((3, 2)) * 5
np.zeros((2, 3))
A = np.arange(6).reshape(3, -1)
np.zeros_like(A)
np.eye(3)
np.eye(4, dtype=bool)
# there is no `np.eye_like`, but you can use the following:
np.eye(*A.shape, dtype=A.dtype)
# ## Concatenation
#
# Arrays can be concatenated along any axis as long as their shapes are compatible.
# +
A = np.arange(6).reshape(2, -1)
B = np.arange(8).reshape(2, -1)
print(A.shape, B.shape)
np.concatenate((A, B, A, B), axis=1)
# -
np.concatenate((A, B), axis=-1) # last dimension
# +
# np.concatenate((A, B), axis=0) # axis=0 is the default
# -
# Concatenating on the first and second dimension of 2D arrays is a very common operation, there are shorthands:
A = np.arange(6).reshape(2, -1)
B = np.arange(8).reshape(2, -1)
np.hstack((A, B))
# +
A = np.arange(6).reshape(-1, 2)
B = np.arange(8).reshape(-1, 2)
print(A.shape, B.shape)
np.vstack((A, B))
# -
A.T
# `np.stack` puts the arrays next to each other along a **new** dimension
A.shape, np.stack((A, A, A, A)).shape
np.stack((A, A, A, A))
# Block matrix
np.concatenate([np.concatenate([np.ones((2,2)), np.zeros((2,2))], axis=1),
np.concatenate([np.zeros((2,2)), np.ones((2,2))], axis=1)], axis=0)
# ## Iteration
#
# By default, iteration takes place in the first (outermost) dimension.
A = np.arange(6).reshape(2, -1)
for row in A:
print(row)
# But you can slice the desired elements for a loop.
# +
B = np.arange(6).reshape(1, 2, 3)
for x in B[0, 0, :]:
print(x)
# -
# You can iterate through the elements themselves.
for a in B.flat:
print(a)
for k in range(B.shape[2]):
print(B[:, :, k])
# # Broadcasting
#
# One can calculate with uneven shaped arrays if their shapes satisfy certain requirements.
#
# For example a $1\times 1$ array can be multiplied with matrices, just like a scalar times a matrix.
s = 2.0 * np.ones((1, 1))
print(s)
print(A)
s * A
# A one-length vector can be multiplied with a matrix:
print(np.ones((1,)) * A)
np.ones(()) * B
# However you cannot perform element-wise operations on uneven sized dimensions:
# +
# np.ones((2,3)) + np.ones((3,2))
# -
# This behavior is defined via _broadcasting_. If an array has a dimension of length one, then it can be _broadcasted_, which means that it can span as much as the operation requires (for any operation other than indexing).
np.arange(3).reshape((1,3)) + np.zeros((2, 3))
np.arange(3).reshape((3,1)) + np.zeros((3, 4))
# More than one dimension can be broadcasted at a time.
np.arange(3).reshape((1,3,1)) + np.zeros((2,3,5))
# ### Theory
#
# Let's say that an array has a shape `(1, 3, 1)`, which means that it can be broadcasted in the first and third dimension.
# Then the index triple `[x, y, z]` accesses its elements as `[0, y, 0]`. In other words, broadcasted dimensions are ignored in indexing.
#
# One can broadcast non-existent dimensions, like a one-dimensional array (vector) can be broadcasted together with a three dimensional array.
#
# In terms of shapes: `(k,) + (i, j, k)` means a vector plus a three-dimensional array (of size i-by-j-by-k). The index `[i, j, k]` of the broadcasted vector degrades to `[k]`.
#
# Let's denote the broadcasted dimensions with `None` and the regular dimensions with their size. For example, the shapes `(2,) + (3, 2, 2)` result in the broadcast `(None, None, 2) + (3, 2, 2)`. Dummy dimensions are prepended at the front, or put in the place of 1-length dimensions.
#
# <div align=center>`(2,) + (3, 2, 2) -> (None, None, 2) + (3, 2, 2) = (3, 2, 2)` <br>
# but<br>
# `(3,) + (3, 2, 2) -> (None, None, 3) + (3, 2, 2)`
# </div>
# and the latter is not compatible.
# +
def test_broadcast(x, y):
try:
A = np.ones(x) + np.ones(y)
        print("Broadcastable")
except ValueError:
        print("Not broadcastable")
test_broadcast((3, ), (3,2,2))
test_broadcast((2, ), (3,2,2))
test_broadcast((3,1,4), (3,2,1))
test_broadcast((3,1,4), (3,2,2))
# -
# You can force the broadcast if you allocate a dimension of length one at a certain dimension (via `reshape`) or explicitly with the keyword `None`.
(np.ones(3)[:, None, None] + np.ones((3,2,2))).shape
# The result in shapes: `(3, None, None) + (3, 2, 2) = (3, 2, 2)`
a = np.ones(3)
b = np.ones((3, 2, 2))
s = a + b[:, :, :, None]
s.shape
# #### Example
#
# One liner to produce a complex "grid".
np.arange(5)[:, None] + 1j * np.arange(5)[None, :]
# Due to the default behavior, a vector behaves as a row vector and acts row-wise on a matrix.
#
# `(n,) -> (None, n)`
np.arange(5) + np.zeros((5,5))
# We can explicitly reshape it:
np.arange(5).reshape(-1, 1) + np.zeros((5, 5))
# This behavior does not apply to non-element-wise operations, like `dot` product.
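# For example, element-wise multiplication broadcasts the vector across the matrix, while `dot` contracts over it:

```python
import numpy as np

A = np.zeros((5, 5))
v = np.arange(5)
print((v * A).shape)    # element-wise: v is broadcast to (5, 5)
print(A.dot(v).shape)   # dot: v stays a length-5 vector, result is (5,)
```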
# # Reductions
#
# Sum over an axis
Y = np.arange(24).reshape(2,3,4)
Y
Y.sum() # sum every element
Y.sum(axis=0).shape
Y.sum(axis=(0, 2)).shape
A = np.arange(6).reshape(2, -1)
print(A)
A.sum(axis=0)
Y.sum(axis=-1)
# `mean, std, var` work similarly but compute the mean, standard deviation and variance along an axis or the full array:
Y.mean()
Y.mean(axis=(2, 0))
# ### Vector dot product
#
# This is a vector dot product (element-wise product and then sum):
# +
def my_vec_dot(x, y):
return (np.array(x) * np.array(y)).sum()
my_vec_dot([1, 2, 3], [1, 2, 3])
# -
# This is a matrix dot product:
# +
def my_mat_dot(x, y):
# sum_j x_{i, j, - } y_{ - , j, k}
return np.sum(np.array(x)[:, :, None]*np.array(y)[None, :, :],
axis=1)
my_mat_dot([[1, 2], [3, 4]], [[1, 2], [3, 4]])
# -
# More about `ufunc`: https://docs.scipy.org/doc/numpy/reference/ufuncs.html
# # `np.random`
#
# Numpy has a rich random subpackage.
#
# `numpy.random.rand` samples random `float64` numbers from $[0, 1)$ with uniform sampling.
np.random.rand(2, 3).astype("float16")
# Other distributions:
np.random.uniform(1, 2, (2, 2))
np.random.standard_normal(10)
np.random.normal(10, 1, size=(1,10))
# Discrete distributions:
import random
help(random.choice)
np.random.choice(["A", "2", "3", "4", "5", "6", "7", "8", "9",
"10", "J", "Q", "K"], 5, replace=True)
# `choice` accepts custom probabilities:
np.random.choice(range(1, 7), 10,
p=[0.1, 0.1, 0.1, 0.1, 0.1, 0.5])
print(np.random.permutation(["A", "2", "3", "4", "5", "6",
"7", "8", "9", "10", "J", "Q", "K"]))
# `permutation` permutes the first (outermost) dimension.
print(np.random.permutation(np.arange(9).reshape((3, 3))))
# # Miscellaneous
#
# ## Boolean indexing
#
# Selecting elements that satisfy a certain condition:
A = np.random.random((4, 3))
print(A.mean())
A
# Selecting elements greater than the mean of the matrix:
A[A > A.mean()]
# `np.where` returns the advanced indices for which the condition is satisfied (where the boolean array is `True`):
A[np.where(A > A.mean())]
# actually `np.where` returns the indices of elements that evaluate to nonzero
np.where([2, -1, 0, 5])
# # Ordering and re-ordering
# +
A = np.arange(24).reshape(4, -1)
lens = np.array([4, 6, 2, 5])
order = np.argsort(-lens)
A_ordered = A[order]
rev_order = np.argsort(order)
A_ordered[rev_order]
# -
# ## `np.allclose`
np.allclose(np.zeros((5, 5)), 0)
# ## `np.unique`
np.unique((1, 2, 4, 1), return_counts=True)
# More numpy stuff [here](https://github.com/juditacs/snippets/blob/master/misc/numpy_stuff.ipynb).
| lectures/11_Numpy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# -
train = pd.read_csv('/kaggle/input/new-york-city-taxi-fare-prediction/train.csv', nrows=1_00_000)
test = pd.read_csv('/kaggle/input/new-york-city-taxi-fare-prediction/test.csv')
# ## Converting the datetime columns from object to datetime format
# ### `pd.to_datetime` parses the string columns into proper timestamps
train["DateTime"] = pd.to_datetime(train['pickup_datetime'], unit='ns')
train['key'] = pd.to_datetime(train['key'], unit='ns')
train.info()
test["DateTime"] = pd.to_datetime(test['pickup_datetime'], unit='ns')
test["key"] = pd.to_datetime(test['key'], unit='ns')
test
# #### dropping null values
train.dropna(inplace=True)
test.dropna(inplace = True)
train.shape
# train = train.drop(['key','pickup_datetime'], axis = 1)
train = train.drop(['pickup_datetime'], axis = 1)
train.head()
# test = test.drop(['key','pickup_datetime'], axis = 1)
test = test.drop(['pickup_datetime'], axis = 1)
test.head()
# ## Defining the labels and features
y = train.iloc[0:,[0,1]]
x = train.iloc[0:,[2,3,4,5,6,7]]
# #### Adding distance to x
# +
R = 6373.0
lat1 =np.asarray(np.radians(x['pickup_latitude']))
lon1 = np.asarray(np.radians(x['pickup_longitude']))
lat2 = np.asarray(np.radians(x['dropoff_latitude']))
lon2 = np.asarray(np.radians(x['dropoff_longitude']))
dlon = lon2 - lon1
dlat = lat2 - lat1
ls1=[]
a = np.sin(dlat/2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/ 2)**2
c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1 - a))
distance = R * c
x['Distance']=np.asarray(distance)*0.621
# x_clean = x.drop([ 'pickup_longitude','pickup_latitude','dropoff_longitude','dropoff_latitude'],axis = 1)
# x_clean.head()
x.head()
# -
# #### Adding distance to Train
# + jupyter={"outputs_hidden": true}
R = 6373.0
lat1 =np.asarray(np.radians(train['pickup_latitude']))
lon1 = np.asarray(np.radians(train['pickup_longitude']))
lat2 = np.asarray(np.radians(train['dropoff_latitude']))
lon2 = np.asarray(np.radians(train['dropoff_longitude']))
dlon = lon2 - lon1
dlat = lat2 - lat1
ls1=[]
a = np.sin(dlat/2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/ 2)**2
c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1 - a))
distance = R * c
train['Distance']=np.asarray(distance)*0.621
# x_clean = x.drop([ 'pickup_longitude','pickup_latitude','dropoff_longitude','dropoff_latitude'],axis = 1)
# x_clean.head()
train.head()
# -
# #### Adding distance to test
# +
R = 6373.0
lat11 =np.asarray(np.radians(test['pickup_latitude']))
lon11 = np.asarray(np.radians(test['pickup_longitude']))
lat22 = np.asarray(np.radians(test['dropoff_latitude']))
lon22 = np.asarray(np.radians(test['dropoff_longitude']))
dlon1 = lon22 - lon11
dlat1 = lat22 - lat11
ls1=[]
a1 = np.sin(dlat1/2)**2 + np.cos(lat11) * np.cos(lat22) * np.sin(dlon1/ 2)**2
c1 = 2 * np.arctan2(np.sqrt(a1), np.sqrt(1 - a1))
distance1 = R * c1
test['Distance']=np.asarray(distance1)*0.621
# test_clean = test.drop([ 'pickup_longitude','pickup_latitude','dropoff_longitude','dropoff_latitude'],axis = 1)
# test_clean.dtypes
test.head()
# -
x.dtypes
y.dtypes
print(x.head(),y.head(),test.head(),train.head())
import matplotlib as plt
from sklearn import datasets,linear_model
a = x.iloc[0:200,6]
b = y.iloc[0:200,1]
# ## Training Starts
#
# ### Plotting graph of cost vs distance
# +
import matplotlib.pyplot as plt
plt.scatter(b,a)
plt.show()
# -
# ## Converting datetime to float()
# +
train['DateTime'] = pd.to_numeric(pd.to_datetime(train['DateTime']))
train['key'] = pd.to_numeric(pd.to_datetime(train['key']))
test['DateTime'] = pd.to_numeric(pd.to_datetime(test['DateTime']))
test['key'] = pd.to_numeric(pd.to_datetime(test['key']))
y['key'] = pd.to_numeric(pd.to_datetime(y['key']))
x['DateTime'] = pd.to_numeric(pd.to_datetime(x['DateTime']))
# -
# #### Defining y for the prediction
y = y.iloc[:,1]
# ## Splitting the values
# +
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3)
# -
# ## Training the model
from sklearn.linear_model import LinearRegression
lr=LinearRegression(normalize=True)  # note: `normalize` was removed in scikit-learn 1.2
lr.fit(x_train,y_train)
y_predicted = lr.predict(x_test)
print(lr.score(x_test,y_test))
# 1st test = -0.013287637014155473
# #### Measuring the error
from sklearn.metrics import mean_squared_error
mean_squared_error(y_test,y_predicted)
# # 1st test = 112.004853684073
## with 100000 dataset = 94.0358896381812
## with 1000000 dataset try 1 = 95.36892390730144
## with 1000000 dataset try 2 = 95.36026307535313
## with 1000000 dataset try 3 = 96.83297952994907
## with 100000 dataset = 93.37144467436306
## with 10000 dataset try1 = 96.03901121564002
## with 10000 dataset try2 = 81.26929962886342
## with 10000 dataset try3 = 98.48885378200688
## with 10000 dataset try4 = 104.36002792791918
## with 10000 dataset try5 = 97.20492555017101
## with 10000 dataset try6 = 83.93965348592229
## with 10000 dataset try7 = 75.27941712945345
## with 10000 dataset try8 = 78.95396330172005
## with 1000 dataset = 39.90092494230721
## with 1000 dataset = 76.44837215099844
## with 1000 dataset = 38.46728504333877
## with 1000 dataset = 93.07429700649337
## with 1000 dataset = 48.39973592705823
## with 1000 dataset = 1055865.4214096582
## with 1000 dataset = 85.46861840805632
## with 1000 dataset = 64.39358773994795
## with 100 dataset = 12.34566678724564
## with 100 dataset = 10.142841154191558
## with 100 dataset = 5.786482441190635
## with 100 dataset = 30.963445495727807
## with 100 dataset = 14.648523611686821
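# Note: the `normalize=True` flag used above no longer exists on scikit-learn >= 1.2.
# A sketch of the recommended replacement — not bit-for-bit identical to the old
# flag, which used per-feature L2 scaling — is a pipeline that standardizes before
# fitting (synthetic data stands in for the taxi features here):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y_demo = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)

model = make_pipeline(StandardScaler(), LinearRegression())
model.fit(X, y_demo)
r2 = model.score(X, y_demo)  # R^2 is close to 1 for this almost-linear data
```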
# ## Submission Starts
pred=np.round(lr.predict(test.drop('key',axis=1)),2)
pred.shape
# #### Reading the submission file for some insight
pd.read_csv('/kaggle/input/new-york-city-taxi-fare-prediction/sample_submission.csv').head()
# +
Submission=pd.DataFrame(data=pred,columns=['fare_amount'])
Submission['key']=test['key']
Submission=Submission[['key','fare_amount']]
# -
# ### File Submitted
Submission.set_index('key',inplace=True)
| Notebooks/NYC Taxi Fare Prediction/nyc-taxi-fare-prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# <p><font size="6"><b>Jupyter notebook INTRODUCTION </b></font></p>
#
#
# > *DS Data manipulation, analysis and visualisation in Python*
# > *December, 2019*
#
# > *© 2016, <NAME> and <NAME> (<mailto:<EMAIL>>, <mailto:<EMAIL>>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
#
# ---
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "subslide"}
from IPython.display import Image
Image(url='http://python.org/images/python-logo.gif')
# + [markdown] slideshow={"slide_type": "slide"}
# <big><center>To run a cell: push the start triangle in the menu or type **SHIFT + ENTER/RETURN**
# 
# -
# # Notebook cell types
# + [markdown] slideshow={"slide_type": "fragment"}
# We will work in **Jupyter notebooks** during this course. A notebook is a collection of `cells`, that can contain different content:
# + [markdown] slideshow={"slide_type": "slide"}
# ## Code
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
# Code cell, then we are using python
print('Hello DS')
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
DS = 10
print(DS + 5) # Yes, we advise to use Python 3 (!)
# -
# Writing code is what you will do most during this course!
# + [markdown] slideshow={"slide_type": "slide"}
# ## Markdown
# + [markdown] slideshow={"slide_type": "fragment"}
# Text cells, using Markdown syntax. With the syntax, you can make text **bold** or *italic*, amongst many other things...
# + [markdown] slideshow={"slide_type": "slide"}
# * list
# * with
# * items
#
# [Link to interesting resources](https://www.youtube.com/watch?v=z9Uz1icjwrM) or images: 
#
# > Blockquotes if you like them
# > This line is part of the same blockquote.
# + [markdown] slideshow={"slide_type": "slide"}
# Mathematical formulas can also be incorporated (LaTeX it is...)
# $$\frac{dBZV}{dt}=BZV_{in} - k_1 .BZV$$
# $$\frac{dOZ}{dt}=k_2 .(OZ_{sat}-OZ) - k_1 .BZV$$
#
# + [markdown] slideshow={"slide_type": "subslide"}
# Or tables:
#
# course | points
# --- | ---
# Math | 8
# Chemistry | 4
#
# or tables with Latex..
#
# Symbool | verklaring
# --- | ---
# $BZV_{(t=0)}$ | initiële biochemische zuurstofvraag (7.33 mg.l-1)
# $OZ_{(t=0)}$ | initiële opgeloste zuurstof (8.5 mg.l-1)
# $BZV_{in}$ | input BZV(1 mg.l-1.min-1)
# $OZ_{sat}$ | saturatieconcentratie opgeloste zuurstof (11 mg.l-1)
# $k_1$ | bacteriële degradatiesnelheid (0.3 min-1)
# $k_2$ | reäeratieconstante (0.4 min-1)
# + [markdown] slideshow={"slide_type": "subslide"}
# Code can also be incorporated, but then just to illustrate:
# + [markdown] slideshow={"slide_type": "fragment"}
# ```python
# BOT = 12
# print(BOT)
# ```
# + [markdown] slideshow={"slide_type": "fragment"}
# See also: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet
# -
# ## HTML
# + [markdown] slideshow={"slide_type": "slide"}
# You can also use HTML commands, just check this cell:
# <h3>HTML-adapted title with &lt;h3&gt;</h3> <p></p>
# <b>Bold text with &lt;b&gt;</b> or <i>italic with &lt;i&gt;</i>
# + [markdown] slideshow={"slide_type": "fragment"}
# ## Headings of different sizes: section
# ### subsection
# #### subsubsection
# + [markdown] slideshow={"slide_type": "slide"}
# ## Raw Text
# + slideshow={"slide_type": "fragment"} active=""
# Cfr. any text editor
# + [markdown] slideshow={"slide_type": "slide"}
# # Notebook handling ESSENTIALS
# + [markdown] slideshow={"slide_type": "slide"}
# ## Completion: TAB
# 
# + [markdown] slideshow={"slide_type": "fragment"}
# * The **TAB** button is essential: it shows you all **possible actions** available after loading a library *AND* it provides **automatic completion**:
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "subslide"}
import os
os.mkdir
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
my_very_long_variable_name = 3
# + slideshow={"slide_type": "fragment"} active=""
# my_ + TAB
# + [markdown] slideshow={"slide_type": "slide"}
# ## Help: SHIFT + TAB
# 
# + [markdown] slideshow={"slide_type": "subslide"}
# * The **SHIFT-TAB** combination is ultra essential to get information/help about the current operation
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
round(3.2)
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
import os
os.mkdir
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
# An alternative is to put a question mark behind the command
# os.mkdir?
# + [markdown] slideshow={"slide_type": "subslide"}
# <div class="alert alert-success">
# <b>EXERCISE</b>: What happens if you put two question marks behind the command?
# </div>
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
import glob
# glob.glob??
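# Outside IPython, the closest standard-library equivalent of `??` (which shows
# the source code) is `inspect.getsource`:

```python
import inspect
import glob

# Roughly what `glob.glob??` displays in IPython:
src = inspect.getsource(glob.glob)
print(src.splitlines()[0])
```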
# + [markdown] slideshow={"slide_type": "slide"}
# ## *edit* mode to *command* mode
#
# * *edit* mode means you're editing a cell, i.e. with your cursor inside a cell to type content --> <font color="green">green colored side</font>
# * *command* mode means you're NOT editing(!), i.e. NOT with your cursor inside a cell to type content --> <font color="blue">blue colored side</font>
#
# To start editing, click inside a cell or
# <img src="../img/enterbutton.png" alt="Key enter" style="width:150px">
#
# To stop editing,
# <img src="../img/keyescape.png" alt="Key A" style="width:150px">
# + [markdown] slideshow={"slide_type": "slide"}
# ## new cell A-bove
# <img src="../img/keya.png" alt="Key A" style="width:150px">
#
# Create a new cell above with the key A... when in *command* mode
# + [markdown] slideshow={"slide_type": "slide"}
# ## new cell B-elow
# <img src="../img/keyb.png" alt="Key B" style="width:150px">
#
# Create a new cell below with the key B... when in *command* mode
# + [markdown] slideshow={"slide_type": "slide"}
# ## CTRL + SHIFT + C
# + [markdown] slideshow={"slide_type": "fragment"}
# Just do it!
# + [markdown] slideshow={"slide_type": "slide"}
# ## Trouble...
# + [markdown] slideshow={"slide_type": "fragment"}
# <div class="alert alert-danger">
# <b>NOTE</b>: When you're stuck, or things do crash:
# <ul>
# <li> first try <code>Kernel</code> > <code>Interrupt</code> -> your cell should stop running
# <li> if no success -> <code>Kernel</code> > <code>Restart</code> -> restart your notebook
# </ul>
# </div>
# + [markdown] slideshow={"slide_type": "fragment"}
# * **Stackoverflow** is really, really, really nice!
#
# http://stackoverflow.com/questions/tagged/python
# + [markdown] slideshow={"slide_type": "fragment"}
# * Google search is with you!
# + [markdown] slideshow={"slide_type": "slide"}
# <big><center>**REMEMBER**: To run a cell: <strike>push the start triangle in the menu or</strike> type **SHIFT + ENTER**
# 
# + [markdown] slideshow={"slide_type": "slide"}
# # some MAGIC...
# + [markdown] slideshow={"slide_type": "slide"}
# ## `%psearch`
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
# %psearch os.*dir
# + [markdown] slideshow={"slide_type": "slide"}
# ## `%%timeit`
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
# %%timeit
mylist = range(1000)
for i in mylist:
i = i**2
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
import numpy as np
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
# %%timeit
np.arange(1000)**2
# -
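# Outside IPython, the standard-library `timeit` module provides the same
# measurement as the `%%timeit` magic (a minimal sketch):

```python
import timeit

# time 100 runs of the squaring loop from above
t = timeit.timeit("[i**2 for i in range(1000)]", number=100)
print(f"{t / 100 * 1e6:.1f} microseconds per run")
```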
# ## `%whos`
# %whos
# + [markdown] slideshow={"slide_type": "subslide"}
# ## `%lsmagic`
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
# %lsmagic
# + [markdown] slideshow={"slide_type": "slide"}
# # Let's get started!
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
from IPython.display import FileLink, FileLinks
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "fragment"}
FileLinks('.', recursive=False)
# + [markdown] slideshow={"slide_type": "fragment"}
# The follow-up notebooks provide additional background (largely adopted from the [scientific python notes](http://www.scipy-lectures.org/)), which you can explore on your own to get more background on the Python syntax if specific elements are not clear.
#
# For now, we will work on the `python_rehearsal.ipynb` notebook, which is a short summary version of the other notebooks to quickly grasp the main elements.
| notebooks/00-jupyter_introduction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Train GPT on gym
#
# Train a GPT model on a dedicated addition dataset to see if a Transformer can learn to add.
# set up logging
import logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
# make deterministic
from mingpt.utils import set_seed
set_seed(42)
import numpy as np
import torch
import torch.nn as nn
from torch.nn import functional as F
import matplotlib.pyplot as plt
# +
from torch.utils.data import Dataset
import sys
sys.path.append('../DRQN_pt')
import gym, envs
class GymDataset(Dataset):
    """
    Wraps a gym environment as a dataset of tokenized transition sequences.

    Each sample rolls the environment out for 5 steps and concatenates the
    integer observations and actions into one sequence: obs, action, obs,
    action, ..., final state. As in language modelling, the targets y are the
    inputs shifted by one position; all but the last two target positions are
    masked with -100 (the loss's ignore index), so the model is only trained
    to predict the final state.
    """
    def __init__(self, split: str, env_name: str = "DiscreteGridworld-v0"):
        self.env = gym.make(env_name)
        self.split = split  # train/test
        self.vocab_size = 12  # 12 possible tokens... -1 through 10
        # TODO: should be env.action_space.shape[0] as well instead of 1
        self.block_size = self.env.observation_space.shape[0] * 5 + 5 + 1
        # split up all addition problems into either training data or test data
        # Let's start with 50k samples
        # num = (10**self.ndigit)**2 # total number of possible combinations
        # r = np.random.RandomState(1337) # make deterministic
        # perm = r.permutation(num)
        # num_test = min(int(num*0.2), 1000) # 20% of the whole dataset, or only up to 1000
        # self.ixes = perm[:num_test] if split == 'test' else perm[num_test:]
    def __len__(self):
        # samples are generated on the fly, so the length is arbitrary
        #return self.ixes.size
        return 50_000
    def __getitem__(self, idx):
        obs = self.env.reset()
        history = []
        for _ in range(5):
            history.append(torch.tensor(obs.copy(), dtype=torch.long))
            action = self.env.action_space.sample()  # fixed: was the undefined global `env`
            history.append(torch.tensor([action], dtype=torch.long))
            obs, _, _, info = self.env.step(action)
        history.append(torch.tensor(info['state'].copy(), dtype=torch.long))
        h = torch.cat(history)
        x = h[:-1]
        y = h.clone()[1:]
        y[:-2] = -100  # only the final state's tokens contribute to the loss
        return x, y
        # Unreachable leftover from an earlier single-step variant:
        # obs = torch.tensor(self.env.reset(), dtype=torch.long)
        # action = self.env.action_space.sample()
        # next_obs, _, _, _ = self.env.step(action)
        # x = torch.cat((obs, torch.tensor([action, next_obs[0]], dtype=torch.long)))
        # y = torch.cat((torch.tensor([-100, -100], dtype=torch.long), torch.tensor(next_obs, dtype=torch.long)))
        # return x, y
# -
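# The shifted-target construction in `__getitem__` can be illustrated without
# torch (a small numpy sketch; -100 is the conventional ignore index of the
# cross-entropy loss):

```python
import numpy as np

h = np.arange(7)   # a toy token sequence [0, 1, ..., 6]
x = h[:-1]         # inputs: everything but the last token
y = h[1:].copy()   # targets: inputs shifted left by one
y[:-2] = -100      # mask everything except the last two targets

print(x.tolist())  # [0, 1, 2, 3, 4, 5]
print(y.tolist())  # [-100, -100, -100, -100, 5, 6]
```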
train_dataset = GymDataset(split='train', env_name='DiscreteHorseshoe-v0')
test_dataset = GymDataset(split='test', env_name='DiscreteHorseshoe-v0')
env = gym.make('DiscreteFrame-v0')
env.reset()
env.step(2)
# +
#env.render()
# -
train_dataset[190] # sample a training instance just to see what one raw example looks like
# +
from mingpt.model import GPT, GPTConfig, GPT1Config
# initialize a baby GPT model
mconf = GPTConfig(train_dataset.vocab_size, train_dataset.block_size,
n_layer=2, n_head=4, n_embd=128)
model = GPT(mconf)
# +
from mingpt.trainer import Trainer, TrainerConfig
# initialize a trainer instance and kick off training
tconf = TrainerConfig(max_epochs=20, batch_size=512, learning_rate=6e-4,
lr_decay=True, warmup_tokens=1024, final_tokens=50*len(train_dataset)*(16),
ckpt_path='horseshoe_ckpt.pt', num_workers=4)
trainer = Trainer(model, train_dataset, test_dataset, tconf)
trainer.train()
# -
checkpoint = torch.load('gym_ckpt.pt')
model.load_state_dict(checkpoint)
from mingpt.utils import sample
history = np.array([5, 5, 1, 5, 4, 1, 11, 11, 3, 11, 11, 3, 11, 11, 0])
sample(model, torch.tensor(history, dtype=torch.long, device=trainer.device)[None, ...], 1)
model.blocks[0].outputs['attention'].squeeze().T.cpu().numpy().sum(axis=0)
#fig, axs = plt.subplots(nrows=2, sharex=True)
seq_len = 15
fig, ax = plt.subplots()
for i in range(1):
    attn = model.blocks[i].outputs['attention'].squeeze().T.cpu().numpy().sum(axis=0)
    ax.scatter(range(seq_len), attn)
    for j in range(0, len(attn), 3):
        if len(attn[j:j+3]) < 3:
            continue
        ax.plot(range(j, j+3), attn[j:j+3], c='tab:blue')
#ax.plot(range(3, 6), attn[3:6], c='tab:blue')
ax.set_xticks(range(seq_len))
ax.set_xticklabels(['(5,', '5)', '1', '(5,', '4)', '1', '(-1,', '-1)', '3', '(-1,', '-1)', '3', '(-1,', '-1)', '0'])
plt.show()
model.blocks[0].outputs['mlp'].squeeze().T.cpu().numpy().sum(axis=0)
model.blocks[1].outputs['attention'].squeeze().T.cpu().numpy().sum(axis=0)
model.blocks[1].outputs['mlp'].squeeze().T.cpu().numpy().sum(axis=0)
figs, axs = plt.subplots(1, 2)
axs[0].imshow(model.blocks[0].outputs['attention'].squeeze().T.cpu().numpy())
axs[1].imshow(model.blocks[1].outputs['attention'].squeeze().T.cpu().numpy())
env = gym.make('DiscreteTargets-v0')
def test_model(test_len=500):
    num_correct = 0
    for _ in range(test_len):
        obs = env.reset()
        history = []
        for i in range(5):
            history.append(torch.tensor(obs.copy(), dtype=torch.long))
            action = env.action_space.sample()
            obs, _, _, info = env.step(action)
            history.append(torch.tensor([action], dtype=torch.long))
        result = sample(model, torch.cat(history).to(trainer.device)[None, ...], 2)
        pred_state = result.squeeze()[-2:].cpu().numpy()
        if np.array_equal(info['state'], pred_state):
            num_correct += 1
            #print(f"Predicted {pred_state} and was {info['state']}. Full sequence: {result}")
        else:
            print(f"Predicted {pred_state} but was {info['state']}. Full sequence: {result}")
    print(f"Final score: {(100*num_correct/test_len):.2f}%!")
test_model()
import matplotlib.pyplot as plt
embeds = model.pos_emb.data[0].cpu().numpy()
fig, ax = plt.subplots()
im = ax.imshow(embeds)
plt.show()
env = gym.make('DiscreteHorseshoe-v0')
env.step(0)
x = torch.tensor([1, 2, 0], dtype=torch.long, device=trainer.device)
x.shape
# +
# now let's give the trained model an addition exam
from torch.utils.data.dataloader import DataLoader
from mingpt.utils import sample
def give_exam(dataset, batch_size=32, max_batches=-1):
    # NOTE: relies on a global `ndigit` from the original addition notebook;
    # it is not defined in this notebook and must be set before calling.
    results = []
    loader = DataLoader(dataset, batch_size=batch_size)
    for b, (x, y) in enumerate(loader):
        x = x.to(trainer.device)
        d1d2 = x[:, :ndigit*2]
        d1d2d3 = sample(model, d1d2, ndigit+1)
        d3 = d1d2d3[:, -(ndigit+1):]
        factors = torch.tensor([[10**i for i in range(ndigit+1)][::-1]]).to(trainer.device)
        # decode the integers from individual digits
        d1i = (d1d2[:,:ndigit] * factors[:,1:]).sum(1)
        d2i = (d1d2[:,ndigit:ndigit*2] * factors[:,1:]).sum(1)
        d3i_pred = (d3 * factors).sum(1)
        d3i_gt = d1i + d2i
        correct = (d3i_pred == d3i_gt).cpu() # Software 1.0 vs. Software 2.0 fight RIGHT on this line, lol
        for i in range(x.size(0)):
            results.append(int(correct[i]))
            judge = 'YEP!!!' if correct[i] else 'NOPE'
            if not correct[i]:
                print("GPT claims that %03d + %03d = %03d (gt is %03d; %s)"
                      % (d1i[i], d2i[i], d3i_pred[i], d3i_gt[i], judge))
        if max_batches >= 0 and b+1 >= max_batches:
            break
    print("final score: %d/%d = %.2f%% correct" % (np.sum(results), len(results), 100*np.mean(results)))
# -
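# The digit-decoding step inside `give_exam` — multiplying each digit by its
# place value and summing — can be checked in isolation (a numpy sketch):

```python
import numpy as np

ndigit = 2
digits = np.array([[1, 3, 5]])  # the digits of 135
factors = np.array([[10**i for i in range(ndigit + 1)][::-1]])  # [[100, 10, 1]]
value = (digits * factors).sum(1)
print(value)  # [135]
```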
# training set: how well did we memorize?
give_exam(train_dataset, batch_size=1024, max_batches=10)
# test set: how well did we generalize?
give_exam(test_dataset, batch_size=1024, max_batches=-1)
# +
# well that's amusing... our model learned everything except 55 + 45
| play_gym.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Quality Analysis of Cell Data
# !pip install hana_ml
# !pip install hdfs
# +
# Import of HANA Connections
# Enables to create a pandas DataFrame out of HANA table selections
# Details: https://help.sap.com/doc/1d0ebfe5e8dd44d09606814d83308d4b/2.0.04/en-US/hana_ml.html
#import hana_ml
#import hana_ml.dataframe as dataframe
#from notebook_hana_connector.notebook_hana_connector import NotebookConnectionContext
#import hdfs
# Usual packages for data science
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
# -
# ## Connection to Data Source
# * Open the connection to the HANA DB using the credentials stored in the Connection Management.
# * Read the table into DataFrame df
# * Display the DataFrame
#FOR HANA ML
#conn = NotebookConnectionContext(connectionId = 'HANA_CLOUD_TECHED')
#df = conn.table('CELLSTATUS', schema='TECHED').collect()
df = pd.read_csv("/Users/Shared/data/IoTquality/cellstatus.csv")
display(df)
# ## Configuration Setting and Performance over Time
#
# Creating 2 charts for the values of "KEY1" and "KEY2" over time. Comparing measured performance values against configuration setting.
# +
fig = plt.figure(figsize=(18, 5))
ax1 = fig.add_subplot(1, 2,1)
ax2 = fig.add_subplot(1, 2,2)
fig.suptitle('CELLSTATUS',y = 0.99)
ax1.plot(df['DATE'],df['NOM_KEY1'],color='red')
ax1.plot(df['DATE'],df['KEY1'])
ax1.legend(['Config Setting', 'Measurement'])
ax1.xaxis.set_major_locator(mdates.MonthLocator())
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%B'))
ax1.set_title('KEY1')
ax2.plot(df['DATE'],df['NOM_KEY2'],color='red')
ax2.plot(df['DATE'],df['KEY2'])
ax2.legend(['Config Setting', 'Measurement'])
ax2.xaxis.set_major_locator(mdates.MonthLocator())
ax2.xaxis.set_major_formatter(mdates.DateFormatter('%B'))
ax2.set_title('KEY2')
# -
# ## Histogram of KEY1 and KEY2
# Calculating the distribution of the values of "KEY1" and "KEY2".
fig, ax = plt.subplots(figsize=(18, 5))
ax.hist(df['KEY1'],50, facecolor='green', alpha=0.75)
ax.hist(df['KEY2'],50, facecolor='blue', alpha=0.75)
# ## Statistic Description
# Assumption of a normal distribution. 3-sigma-score: 99.73% should be within +-3*std from mean value.
#
# 1. Calculate mean value and standard deviation
# 2. Compute the number of values outside of the 3-sigma area compared to the expected outcome
#
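# The 3-sigma expectation can be sanity-checked on synthetic data (a sketch
# drawing a large standard-normal sample):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 100_000)

frac_outside = np.mean(np.abs(x) > 3)
print(frac_outside)  # close to 1 - 0.9973 = 0.0027
```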
# ### KEY1 for all Cells
# KEY1 for all Cells
mean = df['KEY1'].mean()
std = df['KEY1'].std()
print('Statistics:\n\tav: {}\n\tsd: {}'.format(mean,std))
exl_3s = df.loc[ (df.KEY1 < mean - 3*std) | (df.KEY1 > mean + 3*std),'KEY1'].count()
print('Z3-score: \n\tactual #values: {}\n\ttarget #values: {}'.format(exl_3s,round(0.0027*df.shape[0])))
# ### KEY2 for all Cells
mean2 = df['KEY2'].mean()
std2 = df['KEY2'].std()
print('Statistics:\n\tav: {}\n\tsd: {}'.format(mean2,std2))
exl_3s = df.loc[ (df.KEY2 < mean2 - 3*std2) | (df.KEY2 > mean2 + 3*std2),'KEY2'].count()
print('Z3-score: \n\tactual #values: {}\n\ttarget #values: {}'.format(exl_3s,round(0.0027*df.shape[0])))
# ### For Each Cell of KEY2
# Deviation of actual number of values outside of 3-sigma boundaries compared to the expected one for each cell.
cells = df.CELLID.unique()
for c in cells:
    dfc = df.loc[df.CELLID == c]
    exl_3s = df.loc[(df.CELLID == c) & ((df.KEY2 < mean2 - 3*std2) | (df.KEY2 > mean2 + 3*std2)),'KEY2'].count()
    print('Z3-score {}: \n\tactual #values: {}\n\ttarget #values: {}'.format(c,exl_3s,round(0.0027*dfc.shape[0])))
# ### Detailed look at the outliers
# For cells where the actual counts deviate from the expected ones, check the time dependency.
c = 1345331
dfc = df.loc[(df.CELLID == c) & ((df.KEY2 < mean2 - 3*std2) | (df.KEY2 > mean2 + 3*std2))]
display(dfc)
# ## Access Data on DI Data Lake
from hdfs import InsecureClient  # needed for the data-lake access below
client = InsecureClient('http://datalake:50070')
with client.read('/shared/Teched2020/performance.csv', encoding='utf-8') as reader:
    df_p = pd.read_csv(reader)
display(df_p)
| jnb/qm_analysis_cellstatus-example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2 with Spark 2.1
# language: python
# name: python2-spark21
# ---
import numpy as np
from numpy import concatenate
from matplotlib import pyplot
from pandas import read_csv
from pandas import DataFrame
from pandas import concat
import sklearn
import keras, math
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation
from keras.callbacks import Callback
import pickle
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from Queue import Queue
# %matplotlib inline
data_healthy = pickle.load(open('watsoniotp.healthy.pickle', 'rb'))
data_broken = pickle.load(open('watsoniotp.broken.pickle', 'rb'))
data_healthy = data_healthy.reshape(3000,3)
data_broken = data_broken.reshape(3000,3)
def scaleData(data):
    # normalize features to the [0, 1] range
    scaler = MinMaxScaler(feature_range=(0, 1))
    return scaler.fit_transform(data)
data_healthy_scaled = scaleData(data_healthy)
data_broken_scaled = scaleData(data_broken)
timesteps = 10
dim = 3
samples = 300
data_healthy_scaled_reshaped = data_healthy_scaled
# +
# design network
model = Sequential()
model.add(LSTM(50,input_shape=(timesteps,dim),return_sequences=False))
model.add(Dense(timesteps*dim))
model.compile(loss='mse', optimizer=keras.optimizers.SGD(lr=0.01, momentum=0.95, decay=5e-4, nesterov=True))
# -
# ## Now use Keras2DML importer
from systemml.mllearn import Keras2DML
epochs = 50
batch_size = 50
max_iter = int(epochs*math.ceil(samples/batch_size))
model = Keras2DML(spark, model, input_shape=(timesteps, dim, 1), batch_size=batch_size, max_iter=max_iter, test_interval=0, display=10)
model.set(perform_one_hot_encoding=False)
model.set(debug=True)
def train(data):
    data = data.reshape(samples, timesteps*dim)
    model.fit(data, data)  # epochs=50, batch_size=72, validation_data=(data, data), verbose=0, shuffle=False, callbacks=[LossHistory()])
train(data_healthy_scaled)
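# The reshape inside `train()` turns the 3000x3 recording into 300
# non-overlapping windows of 10 timesteps x 3 dimensions, flattened to 30
# features each (a small sketch):

```python
import numpy as np

timesteps, dim, samples = 10, 3, 300
data = np.arange(3000 * 3, dtype=float).reshape(3000, 3)

windows = data.reshape(samples, timesteps * dim)
print(windows.shape)  # (300, 30)
```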
| KerasToSystemML.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <span style="display:block;text-align:center;margin-right:105px"><img src="../../media/logos/logo-vertical.png" width="200"/></span>
# + [markdown] slideshow={"slide_type": "slide"}
# # Section 7: Capstone Project
#
# ---
# -
# # Table of Contents
#
# * [System Requirements (Part 1)](#System-Requirements)
# * [Model Introduction](#Model-Introduction)
# * [Requirements Analysis](#Requirements-Analysis)
# * [Visual System Mapping: Causal Loop Diagram](#Visual-System-Mapping:-Causal-Loop-Diagram)
# * [Visual System Mapping: Stock & Flow Diagram](#Visual-System-Mapping:-Stock-&-Flow-Diagram)
# * [Mathematical Specification](#Mathematical-Specification)
# * [System Design (Part 2)](#System-Design)
# * [Differential Specification](#Differential-Specification)
# * [cadCAD Standard Notebook Layout](#cadCAD-Standard-Notebook-Layout)
# 0. [Dependencies](#0.-Dependencies)
# 1. [State Variables](#1.-State-Variables)
# 2. [System Parameters](#2.-System-Parameters)
# 3. [Policy Functions](#3.-Policy-Functions)
# 4. [State Update Functions](#4.-State-Update-Functions)
# 5. [Partial State Update Blocks](#5.-Partial-State-Update-Blocks)
# 6. [Configuration](#6.-Configuration)
# 7. [Execution](#7.-Execution)
# 8. [Simulation Output Preparation](#8.-Simulation-Output-Preparation)
# * [System Validation (Part 3)](#System-Validation)
# * [What-if Matrix](#What-if-Matrix)
# * [System Analysis](#System-Analysis)
# ---
# # System Requirements
# <center><img src="images/edp-phase-1.png" alt="Engineering Design Process, phase 1 - System requirements" width="60%"/>
# ## Model Introduction
# <center>
# <img src="images/globe.png" alt="Earth globe" width="200px"/>
# Project Anthropocene is a model that enables the insightful analysis of the impact of carbon dioxide (CO2) on the Earth's temperature.
# ## Requirements Analysis
#
# [Link to System Analysis](#System-Analysis)
# ### Questions
#
# **Planned analysis:** How does the Earth's temperature evolve over the next 100 years under various assumptions regarding CO2 emissions?
#
# 1. How will the __Earth's average temperature__ and the __rate of annual temperature change__ develop over the next 100 years, if we keep CO2 emissions __unchanged__ at today’s annual emission levels vs. a __doubling__ of today’s emission levels.
# 2. How will the __Earth's average temperature__ and the __rate of annual temperature change__ develop over the next 100 years if we are able to reduce annual CO2 emissions to __zero__ after a given number of years?
# ## Visual System Mapping: Causal Loop Diagram
# The overall __relationships__ in the model are the following:
# * The __Earth's temperature is determined by what's called radiation balance__, i.e. how much radiation comes in via the Sun, minus how much is dissipating into space. If this balance is positive, heat accumulates, and the Earth warms up; if it is negative, the Earth cools down.
# * The __radiation balance__ is driven by the Sun's radiation, which tends to make the Earth hotter, and the Earth's radiation, which makes heat dissipate and the planet colder.
# * The __radiation balance is influenced by the well-known greenhouse effect__, i.e. the stronger the greenhouse effect, the more radiation from Earth gets trapped in the atmosphere unable to dissipate into space and the higher the radiation balance. Quick primer on the greenhouse effect: https://en.wikipedia.org/wiki/Greenhouse_effect
# * __CO2__ contributes strongly to the greenhouse effect.
# <center>
# <img src="images/s7-climate-cld-diagram.png" alt="Model for Project Anthropocene" width="60%"/>
# ## Visual System Mapping: Stock & Flow Diagram
# <center>
# <img src="images/s7-anthropocene-stock-and-flow.png" alt="Stock & Flow diagram for Project Anthropocene" width="60%"/>
# ## Mathematical Specification
#
# The Anthropocene system is an IVP (initial value problem) which is described by the following equations:
#
# \begin{align}
# \tag{1}
# dCO_2(t) = \begin{cases}
# \mathcal{N}(\mu, \sigma) & \forall t \in [0, t_w] \\
# \mathcal{N}(0, \sigma) & \forall t \in [t_w, \infty]
# \end{cases}
# \end{align}
#
# \begin{align}
# \tag{2}
# \alpha(t) = 1 - e^{-\beta * CO_2(t)}
# \end{align}
#
# \begin{align}
# \tag{3}
# Y(t) = \alpha(t) Z(t)
# \end{align}
#
# \begin{align}
# \tag{4}
# Z(t) = K T(t)^4
# \end{align}
#
# \begin{align}
# \tag{5}
# dT(t) = \gamma(a + Y(t) - Z(t))
# \end{align}
#
#
# Where $\mathcal{N}$ represents the normal distribution with mean $\mu$ and standard deviation $\sigma$. For each timestep $t$, $CO_2(t)$, $\alpha(t)$, $Y(t)$, $T(t)$ and $Z(t)$ are the atmospheric $CO_2$ concentration, the atmosphere reflectance, the reflected radiation, the surface temperature and the outgoing radiation, respectively. The constants $\beta$, $\gamma$, $a$, $K$ and $t_w$ are a $CO_2$-to-reflectance conversion factor, a radiation-to-temperature conversion factor, the Sun's yearly radiance, a constant for the blackbody effect, and the year in which emissions stop, respectively.
#
# The system is tightly coupled and is both non-linear and stochastic, which can make mathematical treatment difficult, and as such, the characterization of it will be made easier through an experimental approach and computational simulations, which is exactly what **cadCAD** enables us to do.
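# Before handing the model to cadCAD, equations (1)–(5) can be integrated with a
# plain Euler loop (a sketch using the parameter values defined later in this
# notebook; note that, like the policy functions below, it normalizes $T$ by 300
# inside the blackbody term):

```python
import math
import random

random.seed(42)

beta, gamma, a, K = 1e-3, 1e-4, 1361, 2075   # constants from the system parameters
mu, sigma, conv, t_w = 40, 40, 1.2e-1, 50    # emissions process and wakening year

co2, T = 400.0, 290.0  # initial CO2 (ppm) and temperature (K)
for t in range(100):
    co2 += random.normalvariate(mu if t <= t_w else 0, sigma) * conv  # eq. (1)
    alpha = 1 - math.exp(-beta * co2)   # eq. (2): atmosphere reflectance
    Z = K * (T / 300) ** 4              # eq. (4): outgoing radiation
    Y = alpha * Z                       # eq. (3): reflected radiation
    T += gamma * (a + Y - Z)            # eq. (5): Euler temperature update

print(round(T, 2))
```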
# # System Design
# <center><img src="images/edp-phase-2.png" alt="Engineering Design Process, phase 1 - System design" width="60%"/>
# ## Differential Specification
# <center><img src="images/s7-differential-spec-anthropocene.png" alt="Model for Project Anthropocene" width="60%"/>
# ## cadCAD Standard Notebook Layout
# ### 0. Dependencies
# +
import pandas as pd
import numpy as np
from random import normalvariate
import plotly.express as px
from cadCAD.configuration.utils import config_sim
from cadCAD.configuration import Experiment
from cadCAD.engine import ExecutionContext, Executor
# -
# ### 1. State Variables
# The states we are interested in, their state variables and their initial values are:
#
# * The __atmosphere's CO2 concentration__ in parts per million (ppm): `co2`, initial value 400
# * The __earth's surface temperature__ in Kelvin (K): `temperature`, initial value 290
#
# <!--**Create a dictionary and define the above state variables and their initial values:**-->
initial_state = {
'co2': 400,
'temperature': 290
}
# ### 2. System Parameters
# **The system parameters we need to define are:**
#
# * The sun radiation: `sun_radiation` with value `1361`
# * A constant representing the relationship between temperature and radiation: `temperature_constant` with value `1e-4`
# * A constant representing CO2 impact on the radiation balance via the greenhouse effect: `co2_reflectance_factor` with value `1e-3`
# * A unit conversion constant that relates how much gigatons of CO2 we need to have an additional part per million unit in the atmosphere's concentration: `co2_gigatons_to_ppm` with value `1.2e-1`
# * The standard deviation for the stochastic process generating the yearly CO2 concentration: `co2_stdev` with value `40` ppm
# * A constant representing how much heat dissipitates into space: `heat_dissipation_constant` with value `2075`
#
# **There are two parameters which we want to sweep, which are:**
#
# * A parameter which represents the annual CO2 emissions in units of billion tons: `co2_annual_emissions`. Let's sweep three values for it: `0`, `40` and `80`. The first value simulates a scenario where we stop all emissions at once, `40` keeps emissions at roughly today's annual level, and `80` doubles them.
# * The `year_of_the_wakening`, which is the number of years that must pass before we set the `co2_annual_emissions` to zero. Let's sweep four values for it: `0`, `10`, `50` and `100`.
#
# <!--**Create a dictionary and define the above parameters and their initial values:**-->
system_params = {
'sun_radiation': [1361],
'temperature_constant': [1e-4],
'co2_reflectance_factor': [1e-3],
'co2_gigatons_to_ppm': [1.2e-1],
'co2_stdev': [40],
'heat_dissipation_constant': [2075],
'co2_annual_emissions': [40, 80, 40, 80, 40, 80, 40, 80],
'year_of_the_wakening': [0, 0, 10, 10, 50, 50, 100, 100]
}
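# The two swept lists above must be aligned element by element — cadCAD pairs
# sweep lists by index rather than taking their cartesian product itself. A
# sketch (not part of the original notebook) of generating that pairing with
# `itertools.product`:

```python
from itertools import product

emissions = [40, 80]
wakenings = [0, 10, 50, 100]

# Cartesian product, then split back into two index-aligned lists
pairs = list(product(wakenings, emissions))
co2_annual_emissions = [e for _, e in pairs]   # [40, 80, 40, 80, ...]
year_of_the_wakening = [w for w, _ in pairs]   # [0, 0, 10, 10, ...]
```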
assert 1e10 == 1*10**10  # sanity check: the float literal matches integer arithmetic
# ### 3. Policy Functions
def p_co2_emissions(params,
substep,
state_history,
previous_state):
# Parameters & variables
mean = params['co2_annual_emissions']
std = params['co2_stdev']
conversion_factor = params['co2_gigatons_to_ppm']
t_w = params['year_of_the_wakening']
t = previous_state['timestep']
# Logic
    if t > t_w:
        mean = 0
value = normalvariate(mean, std) * conversion_factor
# Output
return {'add_co2': value}
def p_sun_radiation(params,
substep,
state_history,
previous_state):
# Parameters & variables
g = params['temperature_constant']
a = params['sun_radiation']
# Logic
temp_change = g * a
# Output
return {'add_temperature': temp_change}
def p_earth_cooling(params,
substep,
state_history,
previous_state):
# Parameters & variables
g = params['temperature_constant']
K = params['heat_dissipation_constant']
T = previous_state['temperature']
# Logic
temp_change = -(g * K * (T / 300) ** 4)
# Output
return {'add_temperature': temp_change}
def p_greenhouse_effect(params,
substep,
state_history,
previous_state):
# Parameters & variables
g = params['temperature_constant']
K = params['heat_dissipation_constant']
beta = params['co2_reflectance_factor']
T = previous_state['temperature']
CO2 = previous_state['co2']
# Logic
alpha = (1 - np.exp(-beta * CO2))
temp_change = g * alpha * K * (T / 300) ** 4
# Output
return {'add_temperature': temp_change}
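# Taken together, the three policies above set the net yearly temperature
# change. A back-of-the-envelope check with the initial state and the parameter
# values listed earlier (a sketch, not a cell from the original notebook):

```python
import numpy as np

g, a, K, beta = 1e-4, 1361, 2075, 1e-3  # parameter values from system_params
T, CO2 = 290, 400                        # initial state

warming = g * a                                    # p_sun_radiation
cooling = -(g * K * (T / 300) ** 4)                # p_earth_cooling
greenhouse = g * (1 - np.exp(-beta * CO2)) * K * (T / 300) ** 4  # p_greenhouse_effect

net = warming + cooling + greenhouse  # roughly +0.015 K per year
```

# So at the initial state the balance of the three policies yields a slight net warming.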
# ### 4. State Update Functions
def s_co2(params,
substep,
state_history,
previous_state,
policy_input):
# Parameters & variables
current_co2 = previous_state['co2']
co2_change = policy_input['add_co2']
# Logic
new_co2 = max(current_co2 + co2_change, 0)
# Output
return ('co2', new_co2)
def s_temperature(params,
substep,
state_history,
previous_state,
policy_input):
# Parameters & variables
current_temp = previous_state['temperature']
temp_change = policy_input['add_temperature']
# Logic
new_temp = max(current_temp + temp_change, 0)
# Output
return ('temperature', new_temp)
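# Both state update functions clamp their variable at zero. That behaviour can
# be shown in isolation (a minimal restatement, not a helper the notebook
# actually defines):

```python
def clamp_update(current, change):
    """Apply a change to a state variable without letting it go negative."""
    return max(current + change, 0)

assert clamp_update(5, -10) == 0        # CO2 can never become negative
assert clamp_update(290, 1.5) == 291.5  # an ordinary update passes through
```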
# ### 5. Partial State Update Blocks
partial_state_update_blocks = [
{
'label': 'Temperature dynamics', # Useful metadata to describe our partial state update blocks
'policies': {
'sun_radiation': p_sun_radiation,
'earth_cooling': p_earth_cooling,
'greenhouse_effect': p_greenhouse_effect
},
'variables': {
'temperature': s_temperature
}
},
{
'label': 'CO2 dynamics', # Useful metadata to describe our partial state update blocks
'policies': {
'co2_emissions': p_co2_emissions
},
'variables': {
'co2': s_co2
}
}
]
# ### 6. Configuration
# +
MONTE_CARLO_RUNS = 50
SIMULATION_TIMESTEPS = 100
sim_config = config_sim(
{
'N': MONTE_CARLO_RUNS,
'T': range(SIMULATION_TIMESTEPS),
'M': system_params,
}
)
from cadCAD import configs
del configs[:] # Clear any prior configs
experiment = Experiment()
experiment.append_configs(
sim_configs=sim_config,
initial_state=initial_state,
partial_state_update_blocks=partial_state_update_blocks
)
# -
# ### 7. Execution
# +
exec_context = ExecutionContext()
run = Executor(exec_context=exec_context, configs=configs)
(system_events, tensor_field, sessions) = run.execute()
# -
# ### 8. Simulation Output Preparation
# +
# Get system events and attribute index
df = (pd.DataFrame(system_events)
.assign(years=lambda df: df.timestep)
.assign(temperature_celsius=lambda df: df.temperature - 273)
.query('timestep > 0')
)
# Clean substeps
first_ind = (df.substep == 0) & (df.timestep == 0)
last_ind = df.substep == max(df.substep)
inds_to_keep = (first_ind | last_ind)  # keep only the final substep of each timestep
df = df.loc[inds_to_keep].drop(columns=['substep'])
# Attribute parameters to each row
df = df.assign(**configs[0].sim_config['M'])
for i, (_, n_df) in enumerate(df.groupby(['simulation', 'subset', 'run'])):
df.loc[n_df.index] = n_df.assign(**configs[i].sim_config['M'])
# -
# # System Validation
# <center><img src="images/edp-phase-3.png" alt="Engineering Design Process, phase 3 - validation" width="60%"/></center>
# [Link to Requirements Analysis](#Requirements-Analysis)
# ## What-if Matrix
# What-if-question | Type of experiment | Variables / parameters | Values / Ranges to be tested
# - | - | - | -
# How will the __Earth's average temperature__ develop over the next 100 years, if we keep CO2 emissions __unchanged__ at today’s annual emission levels vs. a __doubling__ of today’s emission levels? | Parameter Sweep + Monte Carlo runs | co2_annual_emissions | 40 and 80 Gigatons
# How will the __rate of annual temperature change__ develop over the next 100 years if we keep CO2 emissions __unchanged__ at today’s annual emission levels vs. a __doubling__ of today’s emission levels? | Parameter Sweep + Monte Carlo runs | co2_annual_emissions | 40 and 80 Gigatons
# How will the __rate of annual temperature change__ develop over the next 100 years if we are able to reduce annual CO2 emissions to __zero__ after a given number of years? | Parameter Sweep + Monte Carlo runs | year_of_the_wakening | 0, 10, 50 and 100 years
# How will the __Earth's average temperature__ develop over the next 100 years if we are able to reduce annual CO2 emissions to __zero__ after a given number of years? | Parameter Sweep + Monte Carlo runs | year_of_the_wakening | 0, 10, 50 and 100 years
# ## System Analysis
# ### Analysis 1: How will the Earth's average temperature develop over the next 100 years, if we keep CO2 emissions unchanged at today’s annual emission levels vs. a doubling of today’s emission levels?
# +
fig_df = df.query('year_of_the_wakening == 100')
fig = px.scatter(
fig_df,
x=fig_df.years,
y=fig_df.temperature_celsius,
color=fig_df.co2_annual_emissions.astype(str),
opacity=0.1,
trendline="lowess",
labels={'color': 'Yearly CO2 emissions (Gt)'}
)
fig.show()
# +
fig_df = df.query('year_of_the_wakening == 100')
fig = px.box(
fig_df,
x=fig_df.years,
y=fig_df.temperature_celsius,
color=fig_df.co2_annual_emissions.astype(str),
points=False,
labels={'color': 'Yearly CO2 emissions (Gt)'}
)
fig.show()
# -
# ### Analysis 2: How will the rate of annual temperature change develop over the next 100 years if we keep CO2 emissions unchanged at today’s annual emission levels vs. a doubling of today’s emission levels?
# +
fig_df = (df.query('year_of_the_wakening == 100')
.assign(annual_temperature_increase=lambda df: df.temperature.diff())
.query('years > 1'))
fig = px.scatter(
fig_df,
x=fig_df.years,
y=fig_df.annual_temperature_increase,
opacity=0.1,
trendline="lowess",
color=fig_df.co2_annual_emissions.astype(str),
labels={'color': 'Yearly CO2 emissions (Gt)'}
)
fig.show()
# -
# ### Analysis 3: How will the rate of annual temperature change develop over the next 100 years if we are able to reduce annual CO2 emissions to zero after a given number of years?
# +
fig_df = (df.query('co2_annual_emissions == 40')
.assign(annual_temperature_increase=lambda df: df.temperature.diff())
.query('years > 1'))
fig = px.scatter(
fig_df,
x=fig_df.years,
y=fig_df.annual_temperature_increase,
opacity=0.1,
trendline="lowess",
color=fig_df.year_of_the_wakening.astype(str),
labels={'color': 'Year of the wakening (years)'}
)
fig.show()
# -
# ### Analysis 4: How will the Earth's average temperature develop over the next 100 years if we are able to reduce annual CO2 emissions to zero after a given number of years?
# +
fig_df = (df.query('co2_annual_emissions == 40')
.assign(temperature_increase=lambda df: df.temperature - df.temperature.iloc[0]))
fig = px.scatter(
fig_df,
x=fig_df.years,
y=fig_df.temperature_increase,
opacity=0.1,
trendline="lowess",
color=fig_df.year_of_the_wakening.astype(str),
labels={'color': 'Year of the wakening (years)'}
)
fig.show()
# -
# # Congratulations!
# 
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import logging
import importlib
importlib.reload(logging) # see https://stackoverflow.com/a/21475297/1469195
log = logging.getLogger()
log.setLevel('INFO')
import sys
logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s',
level=logging.INFO, stream=sys.stdout)
# +
# %%capture
import os
import site
os.sys.path.insert(0, '/home/schirrmr/code/reversible/')
os.sys.path.insert(0, '/home/schirrmr/braindecode/code/braindecode/')
os.sys.path.insert(0, '/home/schirrmr/code/explaining/reversible//')
# %load_ext autoreload
# %autoreload 2
import numpy as np
import logging
log = logging.getLogger()
log.setLevel('INFO')
import sys
logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s',
level=logging.INFO, stream=sys.stdout)
import matplotlib
from matplotlib import pyplot as plt
from matplotlib import cm
# %matplotlib inline
# %config InlineBackend.figure_format = 'png'
matplotlib.rcParams['figure.figsize'] = (12.0, 1.0)
matplotlib.rcParams['font.size'] = 14
import seaborn
seaborn.set_style('darkgrid')
from reversible2.sliced import sliced_from_samples
from numpy.random import RandomState
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import copy
import math
import itertools
import torch as th
from braindecode.torch_ext.util import np_to_var, var_to_np
from reversible2.splitter import SubsampleSplitter
from reversible2.view_as import ViewAs
from reversible2.invert import invert
from reversible2.affine import AdditiveBlock
from reversible2.plot import display_text, display_close
from reversible2.bhno import load_file, create_inputs
# +
rng = RandomState(201904113)  # seed 2 quite good, 13 very good
X = rng.randn(5,2) * np.array([1,0])[None] + np.array([-1,0])[None]
plt.figure(figsize=(5,5))
plt.scatter(X[:,0], X[:,1])
plt.scatter([-1],[0], color='black')
plt.xlim(-2.5,5.5)
plt.ylim(-4,4)
# -
import sklearn.datasets
X,y = sklearn.datasets.make_moons(100, shuffle=False, noise=1e-4)
plt.figure(figsize=(4,4))
plt.scatter(X[:50,0], X[:50,1])
plt.scatter(X[50:,0], X[50:,1])
train_inputs = np_to_var(X[:50], dtype=np.float32)
cuda = False
# +
from reversible2.distribution import TwoClassDist
from reversible2.blocks import dense_add_block, conv_add_block_3x3
from reversible2.rfft import RFFT, Interleave
from reversible2.util import set_random_seeds
from torch.nn import ConstantPad2d
import torch as th
from reversible2.splitter import SubsampleSplitter
set_random_seeds(2019011641, cuda)
feature_model_a = nn.Sequential(
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
)
if cuda:
feature_model_a.cuda()
from reversible2.ot_exact import ot_euclidean_loss_for_samples
class_dist = TwoClassDist(2,0, [0,1])
if cuda:
class_dist.cuda()
optim_model_a = th.optim.Adam(feature_model_a.parameters())
optim_dist = th.optim.Adam(class_dist.parameters(), lr=1e-2)
# -
from reversible.gaussian import get_gauss_samples
n_epochs = 2001
for i_epoch in range(n_epochs):
outs_a = feature_model_a(train_inputs)
perturbations = th.randn(outs_a.shape) * 0.01
outs_perturbed = outs_a + perturbations
inverted_perturbed = invert(feature_model_a, outs_perturbed)
out_dists = th.norm(perturbations, p=2, dim=1)
in_dists = th.norm(train_inputs - inverted_perturbed, p=2, dim=1)
eps=1e-8
threshold = 0.0
lip_loss = th.mean(F.relu((((in_dists - out_dists) / (out_dists +eps)) - threshold) ** 2))
lip_loss_2 = th.max(F.relu((((in_dists - out_dists) / (out_dists +eps))-threshold) ** 2))
log_probs = class_dist.get_class_log_prob(0, outs_a)
nll_loss = -th.mean(log_probs)
loss = nll_loss + lip_loss * 100 + lip_loss_2 * 0
optim_dist.zero_grad()
optim_model_a.zero_grad()
loss.backward()
optim_model_a.step()
optim_dist.step()
if i_epoch % (n_epochs // 20) == 0:
display_text("NLL: {:.1E}, Lip: {:.1E}".format(nll_loss.item(), lip_loss.item()))
fig,axes = plt.subplots(1,2, figsize=(10,4))
model = feature_model_a
rng = RandomState(201904114)
outs = model(train_inputs)
other_X = sklearn.datasets.make_moons(200, shuffle=False, noise=1e-4)[0][:100]
other_ins = np_to_var(other_X, dtype=np.float32)
other_outs = model(other_ins)
axes[0].plot(var_to_np(other_outs[:,0]), var_to_np(other_outs[:,1]), label="All Outputs",
color=seaborn.color_palette()[1])
axes[0].scatter(var_to_np(outs[:,0]), var_to_np(outs[:,1]), s=30, c=[seaborn.color_palette()[0]],
label="Actual data outputs")
axes[0].axis('equal')
axes[0].set_title("Output space")
plt.axis('equal')
samples = class_dist.get_samples(0, 100)
inverted = invert(feature_model_a, samples)
axes[1].scatter(var_to_np(inverted)[:,0], var_to_np(inverted)[:,1], s=30, label="Fake/Unknown Samples",
c=[seaborn.color_palette()[1]])
axes[1].scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], s=30, label="Real data",
c=[seaborn.color_palette()[0]])
axes[1].legend(bbox_to_anchor=(1,1,0,0))
axes[1].set_title("Input space")
axes[1].axis('equal')
display_close(fig)
# ### only NLL
# +
from reversible2.distribution import TwoClassDist
from reversible2.blocks import dense_add_block, conv_add_block_3x3
from reversible2.rfft import RFFT, Interleave
from reversible2.util import set_random_seeds
from torch.nn import ConstantPad2d
import torch as th
from reversible2.splitter import SubsampleSplitter
set_random_seeds(2019011641, cuda)
feature_model_a = nn.Sequential(
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
)
if cuda:
feature_model_a.cuda()
from reversible2.ot_exact import ot_euclidean_loss_for_samples
class_dist = TwoClassDist(2,0, [0,1])
if cuda:
class_dist.cuda()
optim_model_a = th.optim.Adam(feature_model_a.parameters())
optim_dist = th.optim.Adam(class_dist.parameters(), lr=1e-2)
# -
from reversible.gaussian import get_gauss_samples
n_epochs = 2001
for i_epoch in range(n_epochs):
outs_a = feature_model_a(train_inputs)
perturbations = th.randn(outs_a.shape) * 0.01
outs_perturbed = outs_a + perturbations
inverted_perturbed = invert(feature_model_a, outs_perturbed)
out_dists = th.norm(perturbations, p=2, dim=1)
in_dists = th.norm(train_inputs - inverted_perturbed, p=2, dim=1)
eps=1e-8
threshold = 0.0
lip_loss = th.mean(F.relu((((in_dists - out_dists) / (out_dists +eps)) - threshold) ** 2))
lip_loss_2 = th.max(F.relu((((in_dists - out_dists) / (out_dists +eps))-threshold) ** 2))
log_probs = class_dist.get_class_log_prob(0, outs_a)
nll_loss = -th.mean(log_probs)
loss = nll_loss + lip_loss * 0 + lip_loss_2 * 0
optim_dist.zero_grad()
optim_model_a.zero_grad()
loss.backward()
optim_model_a.step()
optim_dist.step()
if i_epoch % (n_epochs // 20) == 0:
display_text("NLL: {:.1E}, Lip: {:.1E}".format(nll_loss.item(), lip_loss.item()))
fig,axes = plt.subplots(1,2, figsize=(10,4))
model = feature_model_a
rng = RandomState(201904114)
outs = model(train_inputs)
other_X = sklearn.datasets.make_moons(200, shuffle=False, noise=1e-4)[0][:100]
other_ins = np_to_var(other_X, dtype=np.float32)
other_outs = model(other_ins)
axes[0].plot(var_to_np(other_outs[:,0]), var_to_np(other_outs[:,1]), label="All Outputs",
color=seaborn.color_palette()[1])
axes[0].scatter(var_to_np(outs[:,0]), var_to_np(outs[:,1]), s=30, c=[seaborn.color_palette()[0]],
label="Actual data outputs")
axes[0].axis('equal')
axes[0].set_title("Output space")
plt.axis('equal')
samples = class_dist.get_samples(0, 100)
inverted = invert(feature_model_a, samples)
axes[1].scatter(var_to_np(inverted)[:,0], var_to_np(inverted)[:,1], s=30, label="Fake/Unknown Samples",
c=[seaborn.color_palette()[1]])
axes[1].scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], s=30, label="Real data",
c=[seaborn.color_palette()[0]])
axes[1].legend(bbox_to_anchor=(1,1,0,0))
axes[1].set_title("Input space")
axes[1].axis('equal')
display_close(fig)
# +
from reversible2.distribution import TwoClassDist
from reversible2.blocks import dense_add_block, conv_add_block_3x3
from reversible2.rfft import RFFT, Interleave
from reversible2.util import set_random_seeds
from torch.nn import ConstantPad2d
import torch as th
from reversible2.splitter import SubsampleSplitter
set_random_seeds(2019011641, cuda)
feature_model_a = nn.Sequential(
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
)
if cuda:
feature_model_a.cuda()
from reversible2.ot_exact import ot_euclidean_loss_for_samples
class_dist = TwoClassDist(2,0, [0,1])
if cuda:
class_dist.cuda()
optim_model_a = th.optim.Adam(feature_model_a.parameters())
optim_dist = th.optim.Adam(class_dist.parameters(), lr=1e-2)
# -
plt.figure(figsize=(4,4))
plt.scatter(X[:50,0], X[:50,1])
plt.scatter(X[50:,0], X[50:,1])
train_inputs = np_to_var(X[:50][::10], dtype=np.float32)
cuda = False
# +
from reversible2.distribution import TwoClassDist
from reversible2.blocks import dense_add_block, conv_add_block_3x3
from reversible2.rfft import RFFT, Interleave
from reversible2.util import set_random_seeds
from torch.nn import ConstantPad2d
import torch as th
from reversible2.splitter import SubsampleSplitter
set_random_seeds(2019011641, cuda)
feature_model_a = nn.Sequential(
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
)
if cuda:
feature_model_a.cuda()
from reversible2.ot_exact import ot_euclidean_loss_for_samples
class_dist = TwoClassDist(2,0, [0,1])
if cuda:
class_dist.cuda()
optim_model_a = th.optim.Adam(feature_model_a.parameters())
optim_dist = th.optim.Adam(class_dist.parameters(), lr=1e-2)
# -
from reversible2.plot import display_close
from matplotlib.patches import Ellipse
# ## only NLL
from reversible.gaussian import get_gauss_samples
n_epochs = 2001
for i_epoch in range(n_epochs):
outs_a = feature_model_a(train_inputs)
perturbations = th.randn(outs_a.shape) * 0.01
outs_perturbed = outs_a + perturbations
inverted_perturbed = invert(feature_model_a, outs_perturbed)
out_dists = th.norm(perturbations, p=2, dim=1)
in_dists = th.norm(train_inputs - inverted_perturbed, p=2, dim=1)
eps=1e-8
threshold = 0.0
lip_loss = th.mean(F.relu((((in_dists - out_dists) / (out_dists +eps)) - threshold) ** 2))
lip_loss_2 = th.max(F.relu((((in_dists - out_dists) / (out_dists +eps))-threshold) ** 2))
log_probs = class_dist.get_class_log_prob(0, outs_a)
nll_loss = -th.mean(log_probs)
loss = nll_loss + lip_loss * 0 + lip_loss_2 * 0
optim_dist.zero_grad()
optim_model_a.zero_grad()
loss.backward()
optim_model_a.step()
optim_dist.step()
if i_epoch % (n_epochs // 20) == 0:
display_text("NLL: {:.1E}, Lip: {:.1E}".format(nll_loss.item(), lip_loss.item()))
fig,axes = plt.subplots(1,2, figsize=(10,4))
model = feature_model_a
rng = RandomState(201904114)
outs = model(train_inputs)
other_X = sklearn.datasets.make_moons(200, shuffle=False, noise=1e-4)[0][:100]
other_ins = np_to_var(other_X, dtype=np.float32)
other_outs = model(other_ins)
axes[0].plot(var_to_np(other_outs[:,0]), var_to_np(other_outs[:,1]), label="All Outputs",
color=seaborn.color_palette()[1])
axes[0].scatter(var_to_np(outs[:,0]), var_to_np(outs[:,1]), s=30, c=[seaborn.color_palette()[0]],
label="Actual data outputs")
axes[0].axis('equal')
axes[0].set_title("Output space")
mean, std = class_dist.get_mean_std(0)
for sigma in [0.5,1,2,3]:
ellipse = Ellipse(mean, std[0]*sigma, std[1]*sigma)
axes[0].add_artist(ellipse)
ellipse.set_edgecolor(seaborn.color_palette()[1])
ellipse.set_facecolor("None")
plt.axis('equal')
samples = class_dist.get_samples(0, 300)
inverted = invert(feature_model_a, samples)
axes[1].scatter(var_to_np(inverted)[:,0], var_to_np(inverted)[:,1], s=30, label="Fake/Unknown Samples",
c=[seaborn.color_palette()[1]], alpha=0.7)
axes[1].scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], s=30, label="Real data",
c=[seaborn.color_palette()[0]])
axes[1].legend(bbox_to_anchor=(1,1,0,0))
axes[1].set_title("Input space")
axes[1].axis('equal')
display_close(fig)
# ## With Lipschitz
# +
from reversible2.distribution import TwoClassDist
from reversible2.blocks import dense_add_block, conv_add_block_3x3
from reversible2.rfft import RFFT, Interleave
from reversible2.util import set_random_seeds
from torch.nn import ConstantPad2d
import torch as th
from reversible2.splitter import SubsampleSplitter
set_random_seeds(2019011641, cuda)
feature_model_a = nn.Sequential(
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
)
if cuda:
feature_model_a.cuda()
from reversible2.ot_exact import ot_euclidean_loss_for_samples
class_dist = TwoClassDist(2,0, [0,1])
if cuda:
class_dist.cuda()
optim_model_a = th.optim.Adam(feature_model_a.parameters())
optim_dist = th.optim.Adam(class_dist.parameters(), lr=1e-2)
# +
from reversible.gaussian import get_gauss_samples
n_epochs = 2001
for i_epoch in range(n_epochs):
outs_a = feature_model_a(train_inputs)
perturbations = th.randn(outs_a.shape) * 0.01
outs_perturbed = outs_a + perturbations
inverted_perturbed = invert(feature_model_a, outs_perturbed)
out_dists = th.norm(perturbations, p=2, dim=1)
in_dists = th.norm(train_inputs - inverted_perturbed, p=2, dim=1)
eps=1e-8
threshold = 0.0
lip_loss = th.mean(F.relu((((in_dists - out_dists) / (out_dists +eps)) - threshold) ** 2))
lip_loss_2 = th.max(F.relu((((in_dists - out_dists) / (out_dists +eps))-threshold) ** 2))
log_probs = class_dist.get_class_log_prob(0, outs_a)
nll_loss = -th.mean(log_probs)
loss = nll_loss + lip_loss * 10 + lip_loss_2 * 0
optim_dist.zero_grad()
optim_model_a.zero_grad()
loss.backward()
optim_model_a.step()
optim_dist.step()
if i_epoch % (n_epochs // 20) == 0:
display_text("NLL: {:.1E}, Lip: {:.1E}".format(nll_loss.item(), lip_loss.item()))
fig,axes = plt.subplots(1,2, figsize=(10,4))
model = feature_model_a
rng = RandomState(201904114)
outs = model(train_inputs)
other_X = sklearn.datasets.make_moons(200, shuffle=False, noise=1e-4)[0][:100]
other_ins = np_to_var(other_X, dtype=np.float32)
other_outs = model(other_ins)
axes[0].plot(var_to_np(other_outs[:,0]), var_to_np(other_outs[:,1]), label="All Outputs",
color=seaborn.color_palette()[1])
axes[0].scatter(var_to_np(outs[:,0]), var_to_np(outs[:,1]), s=30, c=[seaborn.color_palette()[0]],
label="Actual data outputs")
mean, std = class_dist.get_mean_std(0)
for sigma in [0.5,1,2,3]:
ellipse = Ellipse(mean, std[0]*sigma, std[1]*sigma)
axes[0].add_artist(ellipse)
ellipse.set_edgecolor(seaborn.color_palette()[1])
ellipse.set_facecolor("None")
axes[0].axis('equal')
axes[0].set_title("Output space")
plt.axis('equal')
samples = class_dist.get_samples(0, 400)
inverted = invert(feature_model_a, samples)
axes[1].scatter(var_to_np(inverted)[:,0], var_to_np(inverted)[:,1], s=30, label="Fake/Unknown Samples",
c=[seaborn.color_palette()[1]], alpha=0.7)
axes[1].scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], s=30, label="Real data",
c=[seaborn.color_palette()[0]])
axes[1].legend(bbox_to_anchor=(1,1,0,0))
axes[1].set_title("Input space")
axes[1].axis('equal')
display_close(fig)
# -
# ## Pure NLL, longer
# +
from reversible2.distribution import TwoClassDist
from reversible2.blocks import dense_add_block, conv_add_block_3x3
from reversible2.rfft import RFFT, Interleave
from reversible2.util import set_random_seeds
from torch.nn import ConstantPad2d
import torch as th
from reversible2.splitter import SubsampleSplitter
set_random_seeds(2019011641, cuda)
feature_model_a = nn.Sequential(
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
)
if cuda:
feature_model_a.cuda()
from reversible2.ot_exact import ot_euclidean_loss_for_samples
class_dist = TwoClassDist(2,0, [0,1])
if cuda:
class_dist.cuda()
optim_model_a = th.optim.Adam(feature_model_a.parameters(), weight_decay=0.01)
optim_dist = th.optim.Adam(class_dist.parameters(), lr=1e-2)
# -
from reversible.gaussian import get_gauss_samples
# `average_dist` is used in the loop below but only defined in a later section;
# define it here so this cell runs on its own
def average_dist(a, b):
    return th.mean(th.norm(a.unsqueeze(0) - b.unsqueeze(1), dim=-1, p=2))
n_epochs = 20001
for i_epoch in range(n_epochs):
outs_a = feature_model_a(train_inputs)
perturbations = th.randn(outs_a.shape) * 0.01
outs_perturbed = outs_a + perturbations
inverted_perturbed = invert(feature_model_a, outs_perturbed)
out_dists = th.norm(perturbations, p=2, dim=1)
in_dists = th.norm(train_inputs - inverted_perturbed, p=2, dim=1)
eps=1e-8
threshold = 0.0
lip_loss = th.mean(F.relu((((in_dists - out_dists) / (out_dists +eps)) - threshold) ** 2))
lip_loss_2 = th.max(F.relu((((in_dists - out_dists) / (out_dists +eps))-threshold) ** 2))
log_probs = class_dist.get_class_log_prob(0, outs_a)
nll_loss = -th.mean(log_probs)
X_a = train_inputs[th.randperm(len(train_inputs))[:2]]
X_b = train_inputs[th.randperm(len(train_inputs))[:2]]
samples = class_dist.get_samples(0, 200)
inverted = invert(feature_model_a, samples)
Y_a, Y_b = th.chunk(inverted, 2, dim=0)
en_loss = average_dist(X_a,Y_a) + average_dist(X_b,Y_b) - average_dist(Y_a,Y_b)
loss = nll_loss# + en_loss * 10
optim_dist.zero_grad()
optim_model_a.zero_grad()
loss.backward()
optim_model_a.step()
optim_dist.step()
if i_epoch % (n_epochs // 20) == 0:
display_text("NLL: {:.1E}, Lip: {:.1E}, En: {:1E}".format(nll_loss.item(), lip_loss.item(),
en_loss.item()))
fig,axes = plt.subplots(1,2, figsize=(10,4))
model = feature_model_a
rng = RandomState(201904114)
outs = model(train_inputs)
other_X = sklearn.datasets.make_moons(200, shuffle=False, noise=1e-4)[0][:100]
other_ins = np_to_var(other_X, dtype=np.float32)
other_outs = model(other_ins)
axes[0].plot(var_to_np(other_outs[:,0]), var_to_np(other_outs[:,1]), label="All Outputs",
color=seaborn.color_palette()[1])
axes[0].scatter(var_to_np(outs[:,0]), var_to_np(outs[:,1]), s=30, c=[seaborn.color_palette()[0]],
label="Actual data outputs")
mean, std = class_dist.get_mean_std(0)
for sigma in [0.5,1,2,3]:
ellipse = Ellipse(mean, std[0]*sigma, std[1]*sigma)
axes[0].add_artist(ellipse)
ellipse.set_edgecolor(seaborn.color_palette()[1])
ellipse.set_facecolor("None")
axes[0].axis('equal')
axes[0].set_title("Output space")
plt.axis('equal')
samples = class_dist.get_samples(0, 400)
inverted = invert(feature_model_a, samples)
axes[1].scatter(var_to_np(inverted)[:,0], var_to_np(inverted)[:,1], s=30, label="Fake/Unknown Samples",
c=[seaborn.color_palette()[1]], alpha=0.7)
axes[1].scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], s=30, label="Real data",
c=[seaborn.color_palette()[0]])
axes[1].legend(bbox_to_anchor=(1,1,0,0))
axes[1].set_title("Input space")
axes[1].axis('equal')
display_close(fig)
# ## with energy distance
# +
from reversible2.distribution import TwoClassDist
from reversible2.blocks import dense_add_block, conv_add_block_3x3
from reversible2.rfft import RFFT, Interleave
from reversible2.util import set_random_seeds
from torch.nn import ConstantPad2d
import torch as th
from reversible2.splitter import SubsampleSplitter
set_random_seeds(2019011641, cuda)
feature_model_a = nn.Sequential(
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
)
if cuda:
feature_model_a.cuda()
from reversible2.ot_exact import ot_euclidean_loss_for_samples
class_dist = TwoClassDist(2,0, [0,1])
if cuda:
class_dist.cuda()
optim_model_a = th.optim.Adam(feature_model_a.parameters())
optim_dist = th.optim.Adam(class_dist.parameters(), lr=1e-2)
# -
def average_dist(a,b):
return th.mean(th.norm(a.unsqueeze(0) - b.unsqueeze(1), dim=-1,p=2))
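# For reference, `average_dist` is the building block of the classical
# two-sample energy distance, D(X, Y) = 2 E||X - Y|| - E||X - X'|| - E||Y - Y'||;
# the training loop below combines it in a modified, subsampled form. A NumPy
# sketch of the standard (biased, V-statistic) estimator for comparison:

```python
import numpy as np

def energy_distance(x, y):
    """Biased (V-statistic) estimate of the two-sample energy distance."""
    def avg(a, b):
        return np.mean(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1))
    return 2 * avg(x, y) - avg(x, x) - avg(y, y)

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))
y = rng.normal(loc=3.0, size=(50, 2))
assert energy_distance(x, x) == 0   # identical samples give zero
assert energy_distance(x, y) > 0    # shifted samples give a positive distance
```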
from reversible.gaussian import get_gauss_samples
n_epochs = 2001
for i_epoch in range(n_epochs):
outs_a = feature_model_a(train_inputs)
perturbations = th.randn(outs_a.shape) * 0.01
outs_perturbed = outs_a + perturbations
inverted_perturbed = invert(feature_model_a, outs_perturbed)
out_dists = th.norm(perturbations, p=2, dim=1)
in_dists = th.norm(train_inputs - inverted_perturbed, p=2, dim=1)
eps=1e-8
threshold = 0.0
lip_loss = th.mean(F.relu((((in_dists - out_dists) / (out_dists +eps)) - threshold) ** 2))
lip_loss_2 = th.max(F.relu((((in_dists - out_dists) / (out_dists +eps))-threshold) ** 2))
log_probs = class_dist.get_class_log_prob(0, outs_a)
nll_loss = -th.mean(log_probs)
X_a = train_inputs[th.randperm(len(train_inputs))[:2]]
X_b = train_inputs[th.randperm(len(train_inputs))[:2]]
samples = class_dist.get_samples(0, 200)
inverted = invert(feature_model_a, samples)
Y_a, Y_b = th.chunk(inverted, 2, dim=0)
en_loss = average_dist(X_a,Y_a) + average_dist(X_b,Y_b) - average_dist(Y_a,Y_b)
loss = nll_loss + en_loss * 10  # energy-distance term enabled, as this section's title states
optim_dist.zero_grad()
optim_model_a.zero_grad()
loss.backward()
optim_model_a.step()
optim_dist.step()
if i_epoch % (n_epochs // 20) == 0:
display_text("NLL: {:.1E}, Lip: {:.1E}, En: {:1E}".format(nll_loss.item(), lip_loss.item(),
en_loss.item()))
fig,axes = plt.subplots(1,2, figsize=(10,4))
model = feature_model_a
rng = RandomState(201904114)
outs = model(train_inputs)
other_X = sklearn.datasets.make_moons(200, shuffle=False, noise=1e-4)[0][:100]
other_ins = np_to_var(other_X, dtype=np.float32)
other_outs = model(other_ins)
axes[0].plot(var_to_np(other_outs[:,0]), var_to_np(other_outs[:,1]), label="All Outputs",
color=seaborn.color_palette()[1])
axes[0].scatter(var_to_np(outs[:,0]), var_to_np(outs[:,1]), s=30, c=[seaborn.color_palette()[0]],
label="Actual data outputs")
mean, std = class_dist.get_mean_std(0)
for sigma in [0.5,1,2,3]:
ellipse = Ellipse(mean, std[0]*sigma, std[1]*sigma)
axes[0].add_artist(ellipse)
ellipse.set_edgecolor(seaborn.color_palette()[1])
ellipse.set_facecolor("None")
axes[0].axis('equal')
axes[0].set_title("Output space")
plt.axis('equal')
samples = class_dist.get_samples(0, 400)
inverted = invert(feature_model_a, samples)
axes[1].scatter(var_to_np(inverted)[:,0], var_to_np(inverted)[:,1], s=30, label="Fake/Unknown Samples",
c=[seaborn.color_palette()[1]], alpha=0.7)
axes[1].scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], s=30, label="Real data",
c=[seaborn.color_palette()[0]])
axes[1].legend(bbox_to_anchor=(1,1,0,0))
axes[1].set_title("Input space")
axes[1].axis('equal')
display_close(fig)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Timesteps Size Management
# Created by <NAME>
# This notebook presents the different options available in NuPyCEE to define the size of the timesteps. All options described below can be used in SYGMA and OMEGA.
# Import python modules and the OMEGA code
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
import sygma
import omega
# ## 1. Special Timesteps
# This is the **default setup**. The timestep size increases with time, linearly in log-log space, which is motivated by the fast initial enrichment in chemical evolution models during the first Gyr of evolution.
#
# **Relevant parameters**
# * *special_timesteps*
# * Number of timesteps
# * Default value: 30
# * *tend*
# * Duration of the simulation [yr]
# * Default value: 1.3e10
# * *dt*
# * Size of the very first timestep [yr]
# * Default value: 1.0e6
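# The log-log growth described above can be mimicked with log-spaced time
# boundaries — a sketch of the idea, not NuPyCEE's actual implementation:

```python
import numpy as np

dt0, tend, n = 1.0e6, 1.3e10, 30  # default parameter values listed above
bounds = np.logspace(np.log10(dt0), np.log10(tend), n)  # time at the end of each step
dts = np.diff(np.concatenate(([0.0], bounds)))          # resulting timestep sizes

# The first step has size dt0; from the second step on, sizes grow geometrically
assert np.isclose(dts[0], dt0)
assert np.isclose(dts.sum(), tend)   # the steps cover the whole simulation
assert np.all(np.diff(dts[1:]) > 0)  # and grow monotonically after the first
```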
# Let's run an OMEGA simulation with the default setting
o_default = omega.omega(mgal=1e11, special_timesteps=30)
# +
# Let's plot the size of the timesteps as a function of time
# %matplotlib nbagg
plt.plot(o_default.history.age[:-1], o_default.history.timesteps, color='b')
# Above, the [:-1] is because the "age" array has an additional index (or cell)
# compared to the "timesteps" array. The [:-1] allows to exclude the last
# index of an array.
# Plot X and Y labels
plt.xlabel('Galactic age [yr]', fontsize=14)
plt.ylabel('Timestep size [yr]', fontsize=14)
# -
# Let's run another simulation with 3 times more timesteps
o_special_3x = omega.omega(mgal=1e11, special_timesteps=90)
# As shown in the plot below, increasing the number of timesteps changes the slope of the timestep-size vs. time relation.
# +
# Let's plot the size of the timesteps as a function of time
# %matplotlib nbagg
plt.plot(o_default.history.age[:-1], o_default.history.timesteps, color='b', label='30 special timesteps')
plt.plot(o_special_3x.history.age[:-1], o_special_3x.history.timesteps, color='r', label='90 special timesteps')
# Plot X and Y labels and the legend
plt.xlabel('Galactic age [yr]', fontsize=14)
plt.ylabel('Timestep size [yr]', fontsize=14)
plt.legend(loc=2, fontsize=13)
# -
# When using more timesteps (red line), the size of the timesteps decreases in order to respect the final time of the simulation, which is defined by the *tend* parameter. When looking at the evolution of the gas, more timesteps provide more resolution, as shown by the dots in the plot below.
# Let's plot the evolution of the total mass of oxygen in the gas reservoir
# %matplotlib nbagg
specie = 'O'
o_special_3x.plot_mass(specie=specie, color='r', markevery=1, label='90 special timesteps')
o_default.plot_mass( specie=specie, color='b', markevery=1, label='30 special timesteps')
plt.ylabel('Mass of Oxygen [Msun]')
# ## 2. Constant Timesteps
# **To activate** the constant timestep mode, **set *special_timesteps* to -1**.
#
# **Relevant parameters**
# * *dt*
# * Size of all timesteps [yr]
# * Default value: 1.0e6
# * *tend*
# * Duration of the simulation [yr]
# * Default value: 1.3e10
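# In this mode the number of timesteps is simply the duration divided by the step size (a sketch; the last step may be shorter when *tend* is not a multiple of *dt*):

```python
import math

def n_constant_steps(dt, tend=1.3e10):
    # Number of constant timesteps needed to cover the full simulation
    return math.ceil(tend / dt)

n_constant_steps(5e8)   # 26 steps
n_constant_steps(1e8)   # 130 steps
```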
# Let's run two OMEGA simulations with different constant timesteps
o_cte_1 = omega.omega(mgal=1e11, special_timesteps=-1, dt=5e8)
o_cte_2 = omega.omega(mgal=1e11, special_timesteps=-1, dt=1e8)
# +
# Let's plot the size of the timesteps as a function of time
# %matplotlib nbagg
plt.plot(o_cte_1.history.age[:-1], o_cte_1.history.timesteps, color='b', label='dt = 5e8 yr')
plt.plot(o_cte_2.history.age[:-1], o_cte_2.history.timesteps, color='r', label='dt = 1e8 yr')
# Plot X and Y labels and the legend
plt.xlabel('Galactic age [yr]', fontsize=14)
plt.ylabel('Timestep size [yr]', fontsize=14)
plt.legend(loc=2, fontsize=13)
plt.ylim(0,7e8)
# -
# ## 3. Split Info Timesteps
# This option allows you to create different series of constant timesteps within different timeframes. **To activate it, simply use the *dt_split_info* parameter**. This bypasses options 1 and 2 described above and option 4 described below. **Warning**: the duration of the simulation depends on the content of the *dt_split_info* parameter; the *tend* parameter cannot be used.
#
# **Relevant parameters**
# * *dt_split_info*
# * Numpy array containing the information to create the timestep array (see below)
# * Default value: np.array([]), deactivated
# +
# Let's create a first set of information
dt_split_info_1 = [ [1e6,4e7], [5e8,13e9] ]
# This means the timesteps will be of 1 Myr until the time reaches 40 Myr,
# after which the timesteps will be of 500 Myr until the time reaches 13 Gyr.
# Let's create a second set of information
dt_split_info_2 = [ [5e7,1e8], [1e7,2e8], [1e8, 1e9], [2e8,5e9] ]
# -
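# As a hypothetical helper (not part of NuPyCEE), expanding such a split-info array into an explicit list of timestep sizes could look like this:

```python
def expand_split_info(dt_split_info):
    # Build an explicit timestep list from [[dt, t_end], ...] segments
    steps, t = [], 0.0
    for dt, t_end in dt_split_info:
        while t < t_end - 1e-3:          # small tolerance for float rounding
            step = min(dt, t_end - t)    # last step of a segment may be shorter
            steps.append(step)
            t += step
    return steps

steps_1 = expand_split_info([[1e6, 4e7], [5e8, 13e9]])
```

# For dt_split_info_1 this yields 40 steps of 1 Myr followed by 26 coarser steps covering the remaining 12.96 Gyr.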
# Let's run OMEGA simulations with "dt_split_info" and look at the timestep sizes
o_dt_split_1 = omega.omega(mgal=1e11, dt_split_info=dt_split_info_1)
o_dt_split_2 = omega.omega(mgal=1e11, dt_split_info=dt_split_info_2)
# +
# Let's plot the size of the timesteps as a function of time
# %matplotlib nbagg
plt.plot(o_dt_split_1.history.age[:-1], o_dt_split_1.history.timesteps, color='b', label='dt_split_info_1')
plt.plot(o_dt_split_2.history.age[:-1], o_dt_split_2.history.timesteps, color='g', label='dt_split_info_2')
# Plot X and Y labels and the legend
plt.xlabel('Galactic age [yr]', fontsize=14)
plt.ylabel('Timestep size [yr]', fontsize=14)
plt.legend(loc=4, fontsize=13)
plt.yscale('log')
plt.xscale('log')
plt.ylim(5e5,1e9)
# -
# Now let's look at the evolution of the total mass of oxygen
# %matplotlib nbagg
specie = 'O'
o_dt_split_1.plot_mass(specie=specie, color='b', markevery=1, label='dt_split_info_1')
o_dt_split_2.plot_mass(specie=specie, color='g', markevery=1, label='dt_split_info_2')
plt.ylabel('Mass of Oxygen [Msun]')
# ## 4. Arbitrary Timesteps
# This option allows you to enter the timestep sizes by hand. **To activate it, use the *dt_in* parameter.** The timestep sizes can be any numbers in any order. **Warning**: this option bypasses options 1 and 2, but not option 3. The duration of the simulation is defined by the sum of the input timestep sizes; the *tend* parameter cannot be used.
#
# **Relevant parameters**
# * *dt_in*
# * List of timestep sizes
# * Default value: np.array([]), deactivated
# Let's create two sets of arbitrary timesteps
dt_in_1 = [1e8, 1e9, 2e9, 4e9, 5e9]
dt_in_2 = [5e8, 1e7, 5e7, 1e8, 1e5, 4.65e6, 3.019e8, 4e9, 1e9]
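# Since *tend* is ignored in this mode, the end time of each simulation is just the sum of its steps (dt_in_1 repeated here so the sketch is self-contained):

```python
import numpy as np

dt_in_1 = [1e8, 1e9, 2e9, 4e9, 5e9]
# End time is the sum of all steps; the age grid is the cumulative sum
tend_1 = sum(dt_in_1)             # 1.21e10 yr
ages_1 = np.cumsum([0.0] + dt_in_1)
```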
# Let's run OMEGA simulations with "dt_in" and look at the timestep sizes
o_arb_1 = omega.omega(mgal=1e11, dt_in=dt_in_1)
o_arb_2 = omega.omega(mgal=1e11, dt_in=dt_in_2)
# +
# Let's plot the size of the timesteps as a function of time
# %matplotlib nbagg
plt.plot(o_arb_1.history.age[:-1], o_arb_1.history.timesteps, color='b', label='dt_in_1')
plt.plot(o_arb_2.history.age[:-1], o_arb_2.history.timesteps, color='g', label='dt_in_2')
# Plot X and Y labels and the legend
plt.xlabel('Galactic age [yr]', fontsize=14)
plt.ylabel('Timestep size [yr]', fontsize=14)
plt.legend(loc=4, fontsize=13)
plt.yscale('log')
plt.xscale('log')
plt.ylim(5e4,7e9)
# -
# Now let's look at the evolution of the total mass of oxygen
# %matplotlib nbagg
specie = 'O'
o_arb_1.plot_mass(specie=specie, color='b', markevery=1, label='o_arb_1')
o_arb_2.plot_mass(specie=specie, color='g', markevery=1, label='o_arb_2')
plt.ylabel('Mass of Oxygen [Msun]')
plt.xlim(7e7,2e10)
| DOC/Capabilities/Timesteps_size_management.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''pyUdemy'': conda)'
# name: python3
# ---
# ## Exceptions
#
# Only use exceptions like this in larger code bases. For functions meant for
# one-off use, this quickly becomes overkill!
# +
import numbers
from math import sqrt
from functools import total_ordering
@total_ordering
class Vector2D:
def __init__(self, x=0, y=0) -> None:
# check for real numbers --> raise errors as early as possible!
if isinstance(x, numbers.Real) and isinstance(y, numbers.Real):
self.x = x
self.y = y
else:
raise TypeError('You must pass int/float values for x and y!')
# acts like a deferred __init__: invoked when the instance itself is called,
# e.g. to trigger an update or similar at a later point
def __call__(self):
print("Calling the __call__ function")
print(self.__repr__())
# Verbose representation for developers
def __repr__(self) -> str:
return f'vector.Vector2D({self.x}, {self.y})'
# Custom representation for users
def __str__(self) -> str:
return f'{self.x}, {self.y}'
# triggered by truthiness checks: if vector2d: ...
def __bool__(self):
return bool(abs(self))
# Magnitude (Euclidean norm)
def __abs__(self):
return sqrt(pow(self.x, 2) + pow(self.y, 2))
# check whether is correct type!
def check_vector_types(self, vector2):
if not isinstance(vector2, Vector2D):
raise TypeError("Types have to pass in two instances of the vector class!")
# Equal
def __eq__(self, other_vector):
self.check_vector_types(other_vector)
return self.x == other_vector.x and self.y == other_vector.y
# only __eq__ and one of lt, le, gt, ge must be implemented. Rest can be calculated
# with decorator total_ordering
# lt = less than
# le = less equal
# gt = greater than
# ge = greater or equal
def __lt__(self, other_vector):
self.check_vector_types(other_vector)
return abs(self) < abs(other_vector)
def __add__(self, other_vector):
self.check_vector_types(other_vector)
x = self.x + other_vector.x
y = self.y + other_vector.y
return Vector2D(x, y)
# try (exactly one):
# except (one or more):
# finally (optional):
# as an alternative to explicit type checks
def __sub__(self, other_vector):
try:
x = self.x - other_vector.x
y = self.y - other_vector.y
return Vector2D(x, y)
except AttributeError as aterr:
print(aterr)
except Exception as e:
print(e)
finally:
print("finally block!")
def __mul__(self, other):
if isinstance(other, Vector2D):
return self.x * other.x + self.y * other.y
elif isinstance(other, numbers.Real):
return Vector2D(self.x * other, self.y * other)
else:
raise TypeError('You must pass in either a vector instance or int/float number')
# __div__ no longer exists since Python 3; use __truediv__ instead!
def __truediv__(self, other):
if isinstance(other, numbers.Real):
if other != 0.0:
return Vector2D(self.x / other, self.y / other)
else:
raise ValueError("You cannot divide by zero!")
else:
raise TypeError('You have to pass in an int/float value!')
# +
v1 = Vector2D()
v2 = Vector2D(1, 1)
print(repr(v1))
# if nothing is declared, __str__ is used
print(str(v2))
# -
print(v1 + v2)
print(v1 - v2)
print(v1 * v2)
print(v2 / 5.0)
# +
print(abs(v2))
v3 = Vector2D(2, 2)
v4 = Vector2D(2, 2)
print(v1 == v2)
print(v3 == v4)
# -
if v1:
print("v1")
if v3:
print("v3")
if v1 < v3:
print("v1 < v3")
else:
print("v1 >= v3")
# +
v5 = Vector2D(2, 3)
v6 = Vector2D(-1, 2)
print(v5 < v6)
print(v5 <= v6)
print(v5 > v6)
print(v5 >= v6)
print(v5 == v6)
print(v5 != v6)
# -
# only works because the __call__ method is implemented
v5()
v1 = Vector2D()
v2 = Vector2D(1, 1)
# only for testing errors!
print(v1/1)
# +
import builtins
# Show all errors
builtins_list = [name for name in dir(builtins) if "Error" in name]
for l in builtins_list:
print(l)
# -
| Python/zzz_training_challenge/UdemyPythonPro/Chapter6_ObjectOriented/vector2d.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
print("hello world")
# +
from random import randint
lower = [150,300] # x-lower, y-lower
upper = [200,400] # x-upper, y-upp
pos =[]
for i in range(10):
pos.append((randint(lower[0], upper[0]),randint(lower[1],upper[1])))
pos
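# An equivalent, vectorized way to draw these positions with NumPy (an alternative sketch, not used below) would be:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
# integers() is exclusive of the high end, hence the +1 to match randint
pos_np = rng.integers(low=[150, 300], high=[201, 401], size=(10, 2))
```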
# +
lower_v = [5,-10] # x-lower, y-lower
upper_v = [15,5] # x-upper, y-upp
vel =[]
for i in range(10):
vel.append((randint(lower_v[0], upper_v[0]),randint(lower_v[1],upper_v[1])))
vel
# +
x_vals = [p[0] for p in pos]
y_vals = [p[1] for p in pos]
from matplotlib import pyplot as plt
plt.xlim(0,700)
plt.ylim(0,700)
scat = plt.scatter(x_vals,y_vals)
# -
# !pip install matplotlib
def update_scatter(frames):
"""Updates the scatter plot"""
for i in range(len(pos)):
pos[i] = (pos[i][0] + vel[i][0],
pos[i][1] + vel[i][1])
scat.set_offsets(pos)
# +
import matplotlib.animation as animation
from IPython.display import HTML
anim = animation.FuncAnimation(scat.figure,
update_scatter,
frames=50,
interval=50)
HTML(anim.to_jshtml())
# -
| Boids.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PythonAdv
# language: python
# name: pythonadv
# ---
# ### Import Dependencies
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
model = LinearRegression()
# ### Import & Inspect Data
data = "healthcare-dataset-stroke-data.csv"
dataframe = pd.read_csv(data)
dataframe.sort_values(by=['stroke'], ascending=False).head(100)
# ### Confirming Unique Values in ID series
dataframe.id.nunique()
dataframe['id'].describe()
# ## Unique IDs confirmed
# ### General demographics / contextual data exploration
#
# Numerical data for use in regression needs to be qualified;
# Categorical data needs to be restructured for binary conversion via the pandas get_dummies function.
# #### Gender Data
dataframe.groupby('gender')['gender'].count()
# +
# Purge "Other" from dataset
dataframe2 = dataframe
dataframe2.drop(dataframe2[dataframe2['gender'] == 'Other'].index, inplace = True)
dataframe2.groupby('gender')['gender'].count()
# -
# #### Confirmed Gender is now either Male or Female, binary value required for modelling.
# #### Smoking Data
dataframe2.groupby('smoking_status')['smoking_status'].count()
# +
# Create new column that groups smoking into binary classifications
dataframe3 = dataframe2
dataframe3.loc[dataframe3['smoking_status'] == 'smokes', 'Smoke Qualifier'] = 'Y'
dataframe3.loc[dataframe3['smoking_status'] == 'formerly smoked', 'Smoke Qualifier'] = 'Y'
dataframe3.loc[dataframe3['smoking_status'] == 'never smoked', 'Smoke Qualifier'] = 'N'
dataframe3.loc[dataframe3['smoking_status'] == 'Unknown', 'Smoke Qualifier'] = 'N'
dataframe3['Smoke Qualifier'].describe()
# -
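# The four `.loc` assignments above can also be expressed as a single vectorized mapping; a sketch with a toy frame (the real notebook works on `dataframe3`):

```python
import pandas as pd

smoke_map = {'smokes': 'Y', 'formerly smoked': 'Y',
             'never smoked': 'N', 'Unknown': 'N'}
toy = pd.DataFrame({'smoking_status': ['smokes', 'Unknown', 'never smoked']})
# Map each category straight to its binary qualifier
toy['Smoke Qualifier'] = toy['smoking_status'].map(smoke_map)
```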
#Inspecting new column addition
dataframe3.head(10)
dataframe3.groupby('Smoke Qualifier')['Smoke Qualifier'].count()
# +
# Confirmed Smoking is now binary value via new data series "Smoke Qualifier"
# -
# #### Residence Type
dataframe3.groupby('Residence_type')['Residence_type'].count()
# +
# Data already in binary structure.
# -
# #### Hypertension
dataframe3.groupby('hypertension')['hypertension'].count()
# +
# Data already in binary structure.
# -
# #### Ever Married
dataframe3.groupby('ever_married')['ever_married'].count()
# +
# Data already in binary structure.
# -
# #### Heart Disease
dataframe3.groupby('heart_disease')['heart_disease'].count()
# +
# Data already in binary structure.
# -
# ## Multi-Linear Regression
# #### Full Data Set Option
X = dataframe3[['age', 'gender', 'hypertension', 'heart_disease', 'ever_married', 'Residence_type', 'Smoke Qualifier']]
y = dataframe3['stroke'].values.reshape(-1,1)
print(X.shape, y.shape)
# +
## X = dataframe3[['age', 'hypertension', 'heart_disease', 'Smoke Qualifier']]
## y = dataframe3['stroke'].values.reshape(-1,1)
## print(X.shape, y.shape)
# -
data = X.copy()
data_binary_encoded = pd.get_dummies(data)
data_binary_encoded.head()
# +
X = pd.get_dummies(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) #Changed from 42 to 1
X_train.head()
# -
X_scaler = StandardScaler().fit(X_train)
y_scaler = StandardScaler().fit(y_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
y_train_scaled = y_scaler.transform(y_train)
y_test_scaled = y_scaler.transform(y_test)
model.fit(X_train_scaled, y_train_scaled)
plt.scatter(model.predict(X_train_scaled), model.predict(X_train_scaled) - y_train_scaled, c="blue", label="Training Data")
plt.scatter(model.predict(X_test_scaled), model.predict(X_test_scaled) - y_test_scaled, c="orange", label="Testing Data")
plt.legend()
plt.hlines(y=0, xmin=y_test_scaled.min(), xmax=y_test_scaled.max())
plt.title("Residual Plot")
plt.show()
# +
predictions = model.predict(X_test_scaled)
MSE = mean_squared_error(y_test_scaled, predictions)
r2 = model.score(X_test_scaled, y_test_scaled)
print(f"MSE: {MSE}, R2: {r2}")
# -
# ##### Logistic Regression
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train, y_train.ravel())
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
new_data = np.array([[-2, 6]])
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.scatter(new_data[0, 0], new_data[0, 1], c="r", marker="o", s=100)
# Predict the class (purple or yellow) of the new data point
predictions = classifier.predict(new_data)
print("Classes are either 0 (purple) or 1 (yellow)")
print(f"The new point was classified as: {predictions}")
# #### End Full Data Set Option
# #### Limited Data Set Option (Maybe increases accuracy)
# +
dataframe4 = dataframe3
X = dataframe4[['age', 'hypertension', 'heart_disease', 'Smoke Qualifier']]
y = dataframe4['stroke'].values.reshape(-1,1)
print(X.shape, y.shape)
# -
data = X.copy()
data_binary_encoded = pd.get_dummies(data)
data_binary_encoded.head()
# +
X = pd.get_dummies(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) #Changed from 42 to 1
X_train.head()
# -
X_scaler = StandardScaler().fit(X_train)
y_scaler = StandardScaler().fit(y_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
y_train_scaled = y_scaler.transform(y_train)
y_test_scaled = y_scaler.transform(y_test)
model.fit(X_train_scaled, y_train_scaled)
plt.scatter(model.predict(X_train_scaled), model.predict(X_train_scaled) - y_train_scaled, c="blue", label="Training Data")
plt.scatter(model.predict(X_test_scaled), model.predict(X_test_scaled) - y_test_scaled, c="orange", label="Testing Data")
plt.legend()
plt.hlines(y=0, xmin=y_test_scaled.min(), xmax=y_test_scaled.max())
plt.title("Residual Plot")
plt.show()
# +
predictions = model.predict(X_test_scaled)
MSE = mean_squared_error(y_test_scaled, predictions)
r2 = model.score(X_test_scaled, y_test_scaled)
print(f"MSE: {MSE}, R2: {r2}")
# -
# ##### Logistic Regression
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train, y_train.ravel())
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
new_data = np.array([[-2, 6]])
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.scatter(new_data[0, 0], new_data[0, 1], c="r", marker="o", s=100)
# Predict the class (purple or yellow) of the new data point
predictions = classifier.predict(new_data)
print("Classes are either 0 (purple) or 1 (yellow)")
print(f"The new point was classified as: {predictions}")
# #### End Limited Data Set Option
# ### [ Sandbox / Testing ]
X = dataframe3['hypertension'].values.reshape(-1,1)
y = dataframe3['stroke'].values.reshape(-1,1)
#data = X.copy()
#data_binary_encoded = pd.get_dummies(data, columns=['gender'])
#data_binary_encoded.head()
#X = pd.get_dummies(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
X_scaler = StandardScaler().fit(X_train)
y_scaler = StandardScaler().fit(y_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
y_train_scaled = y_scaler.transform(y_train)
y_test_scaled = y_scaler.transform(y_test)
model.fit(X_train_scaled, y_train_scaled)
plt.scatter(model.predict(X_train_scaled), model.predict(X_train_scaled) - y_train_scaled, c="blue", label="Training Data")
plt.scatter(model.predict(X_test_scaled), model.predict(X_test_scaled) - y_test_scaled, c="orange", label="Testing Data")
plt.legend()
plt.hlines(y=0, xmin=y_test_scaled.min(), xmax=y_test_scaled.max())
plt.title("Hypertension Plot")
plt.show()
predictions = model.predict(X_test_scaled)
MSE = mean_squared_error(y_test_scaled, predictions)
r2 = model.score(X_test_scaled, y_test_scaled)
print(f"MSE: {MSE}, R2: {r2}")
# # *Logistic Modeling Revisited
dataframe5 = dataframe3
dataframe5
X = dataframe5[['age', 'hypertension', 'heart_disease', 'Smoke Qualifier', 'stroke']]
print(X.shape)
data = X.copy()
data_binary_encoded2 = pd.get_dummies(data)
data_binary_encoded2.head()
# Assign X (data) and y (target)
X = data_binary_encoded2.drop("stroke", axis=1)
y = data_binary_encoded2["stroke"]
print(X.shape, y.shape)
# +
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# -
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
predictions = classifier.predict(X_test)
print(f"First 100 Predictions: {predictions[:100]}")
print(f"First 100 Actual labels: {y_test[:100].tolist()}")
# ## Multi-Linear Regression v2.0
#
#
# #### Full Data Set Option
# +
dataframe6 = dataframe3
dataframe6.drop(dataframe6[dataframe6['stroke'] == 0].index, inplace = True)
# -
X = dataframe6[['age', 'gender', 'hypertension', 'heart_disease', 'ever_married', 'Residence_type', 'Smoke Qualifier']]
y = dataframe6['stroke'].values.reshape(-1,1)
print(X.shape, y.shape)
# +
# X = dataframe6[['age', 'hypertension', 'heart_disease', 'Smoke Qualifier']]
# y = dataframe6['stroke'].values.reshape(-1,1)
# print(X.shape, y.shape)
# -
data = X.copy()
data_binary_encoded3 = pd.get_dummies(data)
data_binary_encoded3.head()
# +
X = pd.get_dummies(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) #Changed from 42 to 1
X_train.head()
# -
X_scaler = StandardScaler().fit(X_train)
y_scaler = StandardScaler().fit(y_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
y_train_scaled = y_scaler.transform(y_train)
y_test_scaled = y_scaler.transform(y_test)
model.fit(X_train_scaled, y_train_scaled)
plt.scatter(model.predict(X_train_scaled), model.predict(X_train_scaled) - y_train_scaled, c="blue", label="Training Data")
plt.scatter(model.predict(X_test_scaled), model.predict(X_test_scaled) - y_test_scaled, c="orange", label="Testing Data")
plt.legend()
plt.hlines(y=0, xmin=y_test_scaled.min(), xmax=y_test_scaled.max())
plt.title("Residual Plot")
plt.show()
# +
predictions = model.predict(X_test_scaled)
MSE = mean_squared_error(y_test_scaled, predictions)
r2 = model.score(X_test_scaled, y_test_scaled)
print(f"MSE: {MSE}, R2: {r2}")
# -
# # Logistic Modeling Revisited v2.0 - Only Stroke Victims
# +
dataframe6 = dataframe3
dataframe6.drop(dataframe6[dataframe6['stroke'] == 0].index, inplace = True)
dataframe6.groupby('stroke')['stroke'].count()
# -
X = dataframe6[['age', 'hypertension', 'heart_disease', 'Smoke Qualifier', 'stroke']]
print(X.shape)
data = X.copy()
data_binary_encoded2 = pd.get_dummies(data)
data_binary_encoded2.head()
# Assign X (data) and y (target)
X = data_binary_encoded2.drop("stroke", axis=1)
y = data_binary_encoded2["stroke"]
print(X.shape, y.shape)
# +
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# -
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
predictions = classifier.predict(X_test)
print(f"First 100 Predictions: {predictions[:100]}")
print(f"First 100 Actual labels: {y_test[:100].tolist()}")
| Regression Analysis Notebook v2.0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# This section aims to get you using Datashader productively as quickly as possible.
# Detailed documentation is contained in the [User Guide](../user_guide/index.ipynb).
# To see examples of what can be done with Datashader, see [Topics](../topics/index.ipynb).
#
# We recommend you proceed through the following in order; it should take around 1 hour in total.
#
# * [1. Introduction](1_Introduction.ipynb)
# Simple self-contained example to show how Datashader works.
#
# * [2. Pipeline](2_Pipeline.ipynb)
# Detailed step-by-step explanation how Datashader turns your data into an image.
#
# * [3. Interactivity](3_Interactivity.ipynb)
# Embedding images into rich, interactive plots in a web browser.
| examples/getting_started/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2
import random
import matplotlib.pyplot as plt
from pathlib import Path
from IPython import display
# -
def get_random_video():
fake_video_path = Path.cwd().parent / "datasets/DeepfakeTIMIT/higher_quality"
orig_video_path = Path.cwd().parent / "datasets/VidTIMIT/"
orig_video_files = list(orig_video_path.rglob("*.avi"))
idx = random.randint(0, len(orig_video_files) - 1)  # randint is inclusive on both ends
orig_video = orig_video_files[idx]
print(orig_video.parent.stem)
fake_video_files = list(Path(fake_video_path / orig_video.parent.stem).rglob("*.avi"))
for fake_video in fake_video_files:
if fake_video.name.startswith(orig_video.stem):
break
return fake_video, orig_video
# +
fake_video, orig_video = get_random_video()
cap = cv2.VideoCapture(str(fake_video))
while(cap.isOpened()):
ret, fake_frame = cap.read()
break
cap = cv2.VideoCapture(str(orig_video))
while(cap.isOpened()):
ret, orig_frame = cap.read()
break
fig, ax = plt.subplots(1,2)
fig.set_size_inches(15,10)
ax[0].imshow(fake_frame)
ax[1].imshow(orig_frame)
plt.show()
# -
def show_hist(img):
color = ('b','g','r')
for i,col in enumerate(color):
histr = cv2.calcHist([img],[i],None,[256],[0,256])
plt.plot(histr,color = col)
plt.xlim([0,256])
plt.show()
def img_diff(img1, img2):
# use absdiff to avoid uint8 wrap-around on negative differences
return cv2.absdiff(img1, img2)
plt.imshow(img_diff(orig_frame, fake_frame))
plt.show()
show_hist(orig_frame)
show_hist(fake_frame)
from cv2 import CascadeClassifier
classifier = CascadeClassifier('haarcascade_frontalface_default.xml')
# +
_, video = get_random_video()
cap = cv2.VideoCapture(str(video))
while(cap.isOpened()):
ret, img = cap.read()
if not ret:
break
bboxes = classifier.detectMultiScale(img)
for bb in bboxes:
x1, y1, w, h = bb
face = img[y1:y1+h, x1:x1+w, :]
plt.imshow(face)
plt.show()
display.clear_output(wait=True)
# -
| src/milestone1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Multiple linear regression
# ## Import the relevant libraries
# +
# For these lessons we will need NumPy, pandas, matplotlib and seaborn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# and of course the actual regression (machine learning) module
from sklearn.linear_model import LinearRegression
# -
# ## Load the data
# +
# Load the data from a .csv in the same folder
data = pd.read_csv('1.02. Multiple linear regression.csv')
# Let's explore the top 5 rows of the df
data.head()
# -
# This method gives us very nice descriptive statistics. We don't need this for now, but will later on!
data.describe()
# ## Create the multiple linear regression
# ### Declare the dependent and independent variables
# +
# There are two independent variables: 'SAT' and 'Rand 1,2,3'
x = data[['SAT','Rand 1,2,3']]
# and a single depended variable: 'GPA'
y = data['GPA']
# -
# ### Regression itself
# +
# We start by creating a linear regression object
reg = LinearRegression()
# The whole learning process boils down to fitting the regression
reg.fit(x,y)
# -
# Getting the coefficients of the regression
reg.coef_
# Note that the output is an array
# Getting the intercept of the regression
reg.intercept_
# Note that the result is a float as we usually expect a single value
# ### Calculating the R-squared
# Get the R-squared of the regression
reg.score(x,y)
# ### Formula for Adjusted R^2
#
# $R^2_{adj.} = 1 - (1-R^2)*\frac{n-1}{n-p-1}$
# Get the shape of x, to facilitate the creation of the Adjusted R^2 metric
x.shape
# +
# If we want to find the Adjusted R-squared we can do so by knowing the r2, the # observations, the # features
r2 = reg.score(x,y)
# Number of observations is the shape along axis 0
n = x.shape[0]
# Number of features (predictors, p) is the shape along axis 1
p = x.shape[1]
# We find the Adjusted R-squared using the formula
adjusted_r2 = 1-(1-r2)*(n-1)/(n-p-1)
adjusted_r2
# -
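# The same computation wrapped into a small reusable function (the numbers below are illustrative, not taken from this dataset):

```python
def adjusted_r2(r2, n, p):
    # Adjusted R^2 penalizes plain R^2 for the number of predictors p
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

adjusted_r2(0.40, 84, 2)   # ~0.3852
```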
# ## Feature selection
# Full documentation: https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_regression.html
# Import the feature selection module from sklearn
# This module allows us to select the most appropriate features for our regression
# There exist many different approaches to feature selection, however, we will use one of the simplest
from sklearn.feature_selection import f_regression
# +
# We will look into: f_regression
# f_regression finds the F-statistics for the *simple* regressions created with each of the independent variables
# In our case, this would mean running a simple linear regression on GPA where SAT is the independent variable
# and a simple linear regression on GPA where Rand 1,2,3 is the independent variable
# The limitation of this approach is that it does not take into account the mutual effect of the two features
f_regression(x,y)
# There are two output arrays
# The first one contains the F-statistics for each of the regressions
# The second one contains the p-values of these F-statistics
# -
# Since we are more interested in the latter (p-values), we can just take the second array
p_values = f_regression(x,y)[1]
p_values
# To be able to quickly evaluate them, we can round the result to 3 digits after the dot
p_values.round(3)
| 15 - Advanced Statistical Methods in Python/3_Linear Regression with sklearn/9_Feature Selection through p-values (F-regression) (4:41)/sklearn - Feature Selection with F-regression_with_comments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install pyro-ppl matplotlib torch
# !pip install -e ../..
# + jupyter={"outputs_hidden": true}
from nero.viz.bandits import plot_bandits
from nero.core.utils import get_project_root
import torch
import os
# %matplotlib inline
filename = os.path.join(get_project_root(),'viz/logs','dist_params_history.pt' )
dist_params_history = torch.load(filename)
plot_bandits(dist_params_history)
# -
dist_params_history
| nero/viz/bandits.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python Language Version
# from platform import python_version
# print('Python Version Used in This Jupyter Notebook:', python_version())
# ## Numbers and Mathematical Operations
# # Press Shift+Enter to run the code in a cell, or press the Play button in the top menu
# Addition
4 + 4
# Subtraction
4 - 3
# Multiplication
3 * 3
# Division
3 / 2
# Exponentiation
4 ** 2
# Modulo
10 % 3
# ### The type Function
type(5)
type(5.0)
a = 'I am a string'
type(a)
# ### Operations with Float Numbers
3.1 + 6.4
4 + 4.0
4 + 4
# The result is a float
4 / 2
# The result is an integer (floor division)
4 // 2
4 / 3.0
4 // 3.0
# ### Conversion
float(9)
int(6.0)
int(6.5)
# ### Hexadecimal and Binary
hex(394)
hex(217)
bin(286)
bin(390)
# ### The abs, round and pow Functions
# Returns the absolute value
abs(-8)
# Returns the absolute value
abs(8)
# Returns the value rounded to 2 decimal places
round(3.14151922,2)
# Power
pow(4,2)
# Power
pow(5,3)
# # End
| 02-Variaveis_Tipos_e_Estruturas/Notebooks/01-Numeros.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from tensorflow.keras.applications.resnet import ResNet50
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense, AveragePooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
def load_train(path):
datagen = ImageDataGenerator(
horizontal_flip=True, vertical_flip=True, rescale=1/255)
train_datagen_flow = datagen.flow_from_directory(
path,
target_size=(150, 150),
batch_size=16,
class_mode='sparse',
seed=12345)
return train_datagen_flow
def create_model(input_shape):
model = Sequential()
model.add(
Conv2D(
filters=6,
kernel_size=(3, 3),
activation='relu',
            input_shape=input_shape,
)
)
model.add(AveragePooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(80, activation='softmax'))
    optimizer = Adam(learning_rate=0.0001)
model.compile(optimizer=optimizer,
loss='sparse_categorical_crossentropy', metrics=['acc']
)
return model
def train_model(
model,
train_data,
test_data,
batch_size=None,
epochs=10,
steps_per_epoch=None,
validation_steps=None,
):
model.fit(
train_data,
validation_data=test_data,
epochs=epochs,
verbose=2,
#steps_per_epoch=7,
batch_size=batch_size,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
)
return model
# -
| Untitled18.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using Pyro for Estimation
# <div class="alert alert-info">
#
# Note
#
# Currently we are still experimenting with Pyro and support it only with LGT.
#
# </div>
# [Pyro](https://github.com/pyro-ppl/pyro) is a flexible, scalable deep probabilistic programming library built on PyTorch. **Pyro** was originally developed at Uber AI and is now actively maintained by community contributors, including a dedicated team at the Broad Institute.
# +
# %matplotlib inline
import pandas as pd
import numpy as np
import orbit
from orbit.models.lgt import LGTMAP, LGTAggregated, LGTFull
from orbit.estimators.pyro_estimator import PyroEstimatorVI, PyroEstimatorMAP
from orbit.diagnostics.plot import plot_predicted_data
from orbit.diagnostics.plot import plot_predicted_components
from orbit.utils.dataset import load_iclaims
# -
assert orbit.__version__ == '1.0.14dev'
df = load_iclaims()
test_size=52
train_df=df[:-test_size]
test_df=df[-test_size:]
# ## MAP Fit and Predict
# To use **Pyro** as the inference engine, one needs to specify the `estimator_type` as `PyroEstimatorMAP` or `PyroEstimatorVI`.
lgt_map = LGTMAP(
response_col="claims",
date_col="week",
seasonality=52,
seed=8888,
estimator_type=PyroEstimatorMAP,
)
# %%time
lgt_map.fit(df=train_df)
predicted_df = lgt_map.predict(df=test_df)
_ = plot_predicted_data(training_actual_df=train_df, predicted_df=predicted_df,
date_col=lgt_map.date_col, actual_col=lgt_map.response_col,
test_actual_df=test_df)
# ## VI Fit and Predict
# Pyro only supports Stochastic Variational Inference (SVI) for full sampling prediction. Note that `pyro` takes advantage of parallel processing in `vi` and hence results in computation time similar to `map`.
lgt_vi = LGTFull(
response_col='claims',
date_col='week',
seasonality=52,
seed=8888,
num_steps=101,
num_sample=100,
learning_rate=0.1,
n_bootstrap_draws=-1,
estimator_type=PyroEstimatorVI,
)
# %%time
lgt_vi.fit(df=train_df)
predicted_df = lgt_vi.predict(df=test_df)
_ = plot_predicted_data(training_actual_df=train_df, predicted_df=predicted_df,
date_col=lgt_vi.date_col, actual_col=lgt_vi.response_col,
test_actual_df=test_df)
| docs/tutorials/pyro_basic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Classification with Support Vector Machines
# #### by <NAME> | <NAME> - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a> as a part of <a href="http://www.google-melange.com/gsoc/project/details/google/gsoc2014/saurabh7/5750085036015616">Google Summer of Code 2014 project</a> mentored by - <NAME> - <a href="https://github.com/karlnapf">github.com/karlnapf</a> - <a href="http://herrstrathmann.de/">herrstrathmann.de</a>
# This notebook illustrates how to train a <a href="http://en.wikipedia.org/wiki/Support_vector_machine">Support Vector Machine</a> (SVM) <a href="http://en.wikipedia.org/wiki/Statistical_classification">classifier</a> using Shogun. The <a href="http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1CLibSVM.html">CLibSVM</a> class of Shogun is used to do binary classification. Multiclass classification is also demonstrated using [CGMNPSVM](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMNPSVM.html).
# 1. [Introduction](#Introduction)
# 2. [Linear Support Vector Machines](#Linear-Support-Vector-Machines)
# 1. [Prediction using Linear SVM](#Prediction-using-Linear-SVM)
# 3. [SVMs using kernels](#SVMs-using-kernels)
# 1. [Kernels in Shogun](#Kernels-in-Shogun)
# 2. [Prediction using kernel based SVM](#Prediction-using-kernel-based-SVM)
# 4. [Probabilistic Outputs using SVM](#Probabilistic-Outputs)
# 5. [Soft margins and slack variables](#Soft-margins-and-slack-variables)
# 6. [Binary classification using different kernels](#Binary-classification-using-different-kernels)
# 7. [Kernel Normalizers](#Kernel-Normalizers)
# 8. [Multiclass classification using SVM](#Multiclass-classification)
#
# ### Introduction
# Support Vector Machines (SVMs) are a learning method used for binary classification. The basic idea is to find a hyperplane which separates the data into its two classes. However, since example data is often not linearly separable, SVMs operate in a kernel-induced feature space, i.e., data is embedded into a higher dimensional space where it is linearly separable.
# ### Linear Support Vector Machines
# In a supervised learning problem, we are given a labeled set of input-output pairs $\mathcal{D}=(x_i,y_i)^N_{i=1}\subseteq \mathcal{X} \times \mathcal{Y}$ where $x\in\mathcal{X}$ and $y\in\{-1,+1\}$. [SVM](https://en.wikipedia.org/wiki/Support_vector_machine) is a binary classifier that tries to separate objects of different classes by finding a (hyper-)plane such that the margin between the two classes is maximized. A hyperplane in $\mathcal{R}^D$ can be parameterized by a vector $\bf{w}$ and a constant $\text b$ expressed in the equation:$${\bf w}\cdot{\bf x} + \text{b} = 0$$
# Given such a hyperplane ($\bf w$,b) that separates the data, the discriminating function is: $$f(x) = \text {sign} ({\bf w}\cdot{\bf x} + {\text b})$$
#
# If the training data are linearly separable, we can select two hyperplanes in a way that they separate the data and there are no points between them, and then try to maximize their distance. The region bounded by them is called "the margin". These hyperplanes can be described by the equations
# $$({\bf w}\cdot{\bf x} + {\text b}) = 1$$
# $$({\bf w}\cdot{\bf x} + {\text b}) = -1$$
# the distance between these two hyperplanes is $\frac{2}{\|\mathbf{w}\|}$, so we want to minimize $\|\mathbf{w}\|$.
# $$
# \arg\min_{(\mathbf{w},b)}\frac{1}{2}\|\mathbf{w}\|^2 \qquad\qquad(1)$$
# This gives us a hyperplane that maximizes the geometric distance to the closest data points.
# As we also have to prevent data points from falling into the margin, we add the following constraint: for each ${i}$ either
# $$({\bf w}\cdot{x}_i + {\text b}) \geq 1$$ or
# $$({\bf w}\cdot{x}_i + {\text b}) \leq -1$$
# which is similar to
# $${y_i}({\bf w}\cdot{x}_i + {\text b}) \geq 1 \quad \forall i$$
#
# [Lagrange multipliers](http://en.wikipedia.org/wiki/Lagrange_multiplier) are used to modify equation $(1)$ and the corresponding dual of the problem can be shown to be:
#
# \begin{eqnarray*}
# \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j {\bf x_i} \cdot {\bf x_j}\\
# \mbox{s.t.} && \alpha_i\geq 0\\
# && \sum_{i}^{N} \alpha_i y_i=0\\
# \end{eqnarray*}
#
# From the derivation of these equations, it was seen that the optimal hyperplane can be written as:
# $$\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i. $$
# Here most $\alpha_i$ turn out to be zero, which means that the solution is a sparse linear combination of the training data.
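# As a toy illustration of this sparsity, the predictor can be assembled from a handful of dual variables in a few lines of plain Python (the numbers below are invented for illustration, not a trained model):

```python
# w = sum_i alpha_i * y_i * x_i, recovered from the (sparse) dual solution
support_x = [(1.0, 2.0), (2.0, 1.0)]   # support vectors (the x_i with alpha_i > 0)
support_y = [1, -1]                    # their labels y_i
alphas    = [0.5, 0.5]                 # dual coefficients alpha_i
b = 0.0

w = [sum(a * y * x[d] for a, y, x in zip(alphas, support_y, support_x))
     for d in range(2)]

def predict(x):
    # f(x) = sign(w . x + b)
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

print(w)                     # [-0.5, 0.5]
print(predict((0.0, 3.0)))   # 1
print(predict((3.0, 0.0)))   # -1
```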
# ### Prediction using Linear SVM
# Now let us see how one can train a linear Support Vector Machine with Shogun. Two dimensional data (having 2 attributes say: attribute1 and attribute2) is now sampled to demonstrate the classification.
# +
import matplotlib.pyplot as plt
# %matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import matplotlib.patches as patches
#To import all shogun classes
import shogun as sg
import numpy as np
#Generate some random data
X = 2 * np.random.randn(10,2)
traindata=np.r_[X + 3, X + 7].T
feats_train=sg.features(traindata)
trainlab=np.concatenate((np.ones(10),-np.ones(10)))
labels=sg.BinaryLabels(trainlab)
# Plot the training data
plt.figure(figsize=(6,6))
plt.gray()
_=plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.title("Training Data")
plt.xlabel('attribute1')
plt.ylabel('attribute2')
p1 = patches.Rectangle((0, 0), 1, 1, fc="k")
p2 = patches.Rectangle((0, 0), 1, 1, fc="w")
plt.legend((p1, p2), ["Class 1", "Class 2"], loc=2)
plt.gray()
# -
# [Liblinear](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibLinear.html), a library for large-scale linear learning focused on SVM, is used to do the classification. It supports different [solver types](http://www.shogun-toolbox.org/doc/en/latest/namespaceshogun.html#a6e99d1864c93fc2d41b1fa0fc253f471).
# +
#parameters to svm
#parameter C is described in a later section.
C=1
epsilon=1e-3
svm=sg.machine('LibLinear', C1=C, C2=C, liblinear_solver_type='L2R_L2LOSS_SVC', epsilon=epsilon)
#train
svm.put('labels', labels)
svm.train(feats_train)
w=svm.get('w')
b=svm.get('bias')
# -
# We solve ${\bf w}\cdot{\bf x} + \text{b} = 0$ to visualise the separating hyperplane. The learned parameters are retrieved with `svm.get('w')` and `svm.get('bias')`.
# +
#solve for w.x+b=0
x1=np.linspace(-1.0, 11.0, 100)
def solve (x1):
return -( ( (w[0])*x1 + b )/w[1] )
x2=list(map(solve, x1))
#plot
plt.figure(figsize=(6,6))
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.plot(x1,x2, linewidth=2)
plt.title("Separating hyperplane")
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
# -
# The classifier is now applied on a X-Y grid of points to get predictions.
# +
size=100
x1_=np.linspace(-5, 15, size)
x2_=np.linspace(-5, 15, size)
x, y=np.meshgrid(x1_, x2_)
#Generate X-Y grid test data
grid=sg.features(np.array((np.ravel(x), np.ravel(y))))
#apply on test grid
predictions = svm.apply(grid)
#Distance from hyperplane
z=predictions.get_values().reshape((size, size))
#plot
plt.jet()
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("Classification")
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black')
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.jet()
#Class predictions
z=predictions.get('labels').reshape((size, size))
#plot
plt.subplot(122)
plt.title("Separating hyperplane")
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black')
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
# -
# ### SVMs using kernels
# If the data set is not linearly separable, a non-linear mapping $\Phi:{\bf x} \rightarrow \Phi({\bf x}) \in \mathcal{F} $ is used. This maps the data into a higher dimensional space where it is linearly separable. Our equation requires only the inner dot products ${\bf x_i}\cdot{\bf x_j}$. The equation can be defined in terms of inner products $\Phi({\bf x_i}) \cdot \Phi({\bf x_j})$ instead. Since $\Phi({\bf x_i})$ occurs only in dot products with $ \Phi({\bf x_j})$ it is sufficient to know the formula ([kernel function](http://en.wikipedia.org/wiki/Kernel_trick)): $$K({\bf x_i, x_j} ) = \Phi({\bf x_i}) \cdot \Phi({\bf x_j})$$ without dealing with the mapping directly. The transformed optimisation problem is:
#
# \begin{eqnarray*} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j k({\bf x_i}, {\bf x_j})\\ \mbox{s.t.} && \alpha_i\geq 0\\ && \sum_{i=1}^{N} \alpha_i y_i=0 \qquad\qquad(2)\\ \end{eqnarray*}
# ### Kernels in Shogun
# Shogun provides many options for the above mentioned kernel functions. [Kernel](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Kernel.html) is the base class for kernels. Some commonly used kernels :
#
# * [Gaussian kernel](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianKernel.html) : Popular Gaussian kernel computed as $k({\bf x},{\bf x'})= exp(-\frac{||{\bf x}-{\bf x'}||^2}{\tau})$
#
# * [Linear kernel](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LinearKernel.html) : Computes $k({\bf x},{\bf x'})= {\bf x}\cdot {\bf x'}$
# * [Polynomial kernel](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CPolyKernel.html) : Polynomial kernel computed as $k({\bf x},{\bf x'})= ({\bf x}\cdot {\bf x'}+c)^d$
#
# * [Sigmoid Kernel](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSigmoidKernel.html) : Computes $k({\bf x},{\bf x'})=\mbox{tanh}(\gamma {\bf x}\cdot{\bf x'}+c)$
#
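# The kernel formulas above can also be evaluated directly in a few lines of plain Python (a sketch of the formulas themselves, not of Shogun's API; `tau`, `c`, `d` and `gamma` are arbitrary example values):

```python
import math

def linear_k(x, y):
    # k(x, x') = x . x'
    return sum(a * b for a, b in zip(x, y))

def poly_k(x, y, c=1.0, d=2):
    # k(x, x') = (x . x' + c)^d
    return (linear_k(x, y) + c) ** d

def gaussian_k(x, y, tau=2.0):
    # k(x, x') = exp(-||x - x'||^2 / tau)
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / tau)

def sigmoid_k(x, y, gamma=0.5, c=0.0):
    # k(x, x') = tanh(gamma * x . x' + c)
    return math.tanh(gamma * linear_k(x, y) + c)

x, y = (1.0, 2.0), (2.0, 0.0)
print(linear_k(x, y))    # 2.0
print(poly_k(x, y))      # 9.0
print(gaussian_k(x, x))  # 1.0 -- every point is maximally similar to itself
```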
# Some of these kernels are initialised below.
# +
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(100))
#Polynomial kernel of degree 2
poly_kernel=sg.kernel('PolyKernel', degree=2, c=1.0)
poly_kernel.init(feats_train, feats_train)
linear_kernel=sg.kernel('LinearKernel')
linear_kernel.init(feats_train, feats_train)
kernels=[linear_kernel, poly_kernel, gaussian_kernel]
# -
# Just for fun we compute the kernel matrix and display it. There are clusters visible that are smooth for the Gaussian and polynomial kernels and block-wise for the linear one. The Gaussian one also smoothly decays from some cluster centre while the polynomial one oscillates within the clusters.
# +
plt.jet()
def display_km(kernels, svm):
plt.figure(figsize=(20,6))
plt.suptitle('Kernel matrices for different kernels', fontsize=12)
for i, kernel in enumerate(kernels):
kernel.init(feats_train,feats_train)
plt.subplot(1, len(kernels), i+1)
plt.title(kernel.get_name())
km=kernel.get_kernel_matrix()
plt.imshow(km, interpolation="nearest")
plt.colorbar()
display_km(kernels, svm)
# -
# ### Prediction using kernel based SVM
# Now we train an SVM with a Gaussian kernel. We use LibSVM, but we could use any of the [other SVMs](http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1SVM.html) in Shogun. They all utilize the same kernel framework and so are drop-in replacements.
C=1
epsilon=1e-3
svm=sg.machine('LibSVM', C1=C, C2=C, kernel=gaussian_kernel, labels=labels)
_=svm.train()
# We can now check a number of properties, such as the value of the objective function returned by the particular SVM learning algorithm, or the explicitly computed primal and dual objective values:
# +
libsvm_obj = svm.get('objective')
primal_obj, dual_obj = sg.as_svm(svm).compute_svm_primal_objective(), sg.as_svm(svm).compute_svm_dual_objective()
print(libsvm_obj, primal_obj, dual_obj)
# -
# and based on the objectives we can compute the duality gap (have a look at reference [2]), a measure of the convergence quality of the SVM training algorithm. In theory it is 0 at the optimum, and in practice at least close to 0.
print("duality_gap", dual_obj-primal_obj)
# Let's now apply on the X-Y grid data and plot the results.
# +
out=svm.apply(grid)
z=out.get_values().reshape((size, size))
#plot
plt.jet()
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("Classification")
c=plt.pcolor(x1_, x2_, z)
plt.contour(x1_ , x2_, z, linewidths=1, colors='black')
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.jet()
z=out.get('labels').reshape((size, size))
plt.subplot(122)
plt.title("Decision boundary")
c=plt.pcolor(x1_, x2_, z)
plt.contour(x1_ , x2_, z, linewidths=1, colors='black')
plt.colorbar(c)
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
# -
# ### Probabilistic Outputs
# Calibrated probabilities can be generated in addition to class predictions using the `scores_to_probabilities()` method of [BinaryLabels](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1BinaryLabels.html), which implements the method described in [3]. This should only be used in conjunction with an SVM. A parametric form of a [sigmoid function](http://en.wikipedia.org/wiki/Sigmoid_function) $$\frac{1}{1+\exp(af(x) + b)}$$ is used to fit the outputs. Here $f(x)$ is the signed distance of a sample from the hyperplane, and $a$ and $b$ are parameters of the sigmoid. This gives us the posterior probabilities $p(y=1|f(x))$.
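# A minimal sketch of this sigmoid mapping in plain Python (in practice $a$ and $b$ are fitted to the data; the values below are placeholders, with $a$ negative so larger positive distances map to higher probabilities):

```python
import math

def platt_probability(f, a=-1.0, b=0.0):
    # P(y=1 | f(x)) = 1 / (1 + exp(a*f + b))
    return 1.0 / (1.0 + math.exp(a * f + b))

print(platt_probability(0.0))   # 0.5 -- a point on the hyperplane
print(platt_probability(3.0))   # close to 1
print(platt_probability(-3.0))  # close to 0
```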
# Let's try this out on the above example. The familiar "S" shape of the sigmoid should be visible.
# +
n=10
x1t_=np.linspace(-5, 15, n)
x2t_=np.linspace(-5, 15, n)
xt, yt=np.meshgrid(x1t_, x2t_)
#Generate X-Y grid test data
test_grid=sg.features(np.array((np.ravel(xt), np.ravel(yt))))
labels_out=svm.apply(test_grid)
#Get values (Distance from hyperplane)
values=labels_out.get('current_values')
#Get probabilities
labels_out.scores_to_probabilities()
prob=labels_out.get('current_values')
#plot
plt.gray()
plt.figure(figsize=(10,6))
p1=plt.scatter(values, prob)
plt.title('Probabilistic outputs')
plt.xlabel('Distance from hyperplane')
plt.ylabel('Probability')
plt.legend([p1], ["Test samples"], loc=2)
# -
# ### Soft margins and slack variables
# If there is no clear classification possible using a hyperplane, we need to classify the data as nicely as possible while incorporating the misclassified samples. To do this a concept of soft margin is used. The method introduces non-negative slack variables, $\xi_i$, which measure the degree of misclassification of the data $x_i$.
# $$
# y_i(\mathbf{w}\cdot\mathbf{x_i} + b) \ge 1 - \xi_i \quad 1 \le i \le N $$
#
# Introducing a linear penalty function leads to
# $$\arg\min_{\mathbf{w},\mathbf{\xi}, b } ({\frac{1}{2} \|\mathbf{w}\|^2 +C \sum_{i=1}^n \xi_i) }$$
#
# This in its dual form is leads to a slightly modified equation $\qquad(2)$.
# \begin{eqnarray*} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j k({\bf x_i}, {\bf x_j})\\ \mbox{s.t.} && 0\leq\alpha_i\leq C\\ && \sum_{i=1}^{N} \alpha_i y_i=0 \\ \end{eqnarray*}
#
# The result is that a soft-margin SVM can choose a decision boundary with non-zero training error even if the dataset is linearly separable, but it is less likely to overfit.
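# Given a candidate hyperplane, the slack variables have a closed form, $\xi_i = \max(0, 1 - y_i({\bf w}\cdot{\bf x_i} + b))$, so the penalty term is easy to compute by hand (toy numbers chosen for illustration):

```python
# Toy hyperplane and three labelled points
w, b = (1.0, 0.0), 0.0
points = [((2.0, 0.0),  1),   # outside the margin         -> xi = 0.0
          ((0.5, 0.0),  1),   # inside the margin          -> xi = 0.5
          ((1.0, 0.0), -1)]   # on the wrong side entirely -> xi = 2.0

def slack(x, y):
    margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    return max(0.0, 1.0 - margin)

C = 1.0
penalty = C * sum(slack(x, y) for x, y in points)
print(penalty)  # 2.5
```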
# Here's an example using LibSVM on the above used data set. Highlighted points show support vectors. This should visually show the impact of C and how the amount of outliers on the wrong side of hyperplane is controlled using it.
# +
def plot_sv(C_values):
plt.figure(figsize=(20,6))
plt.suptitle('Soft and hard margins with varying C', fontsize=12)
for i in range(len(C_values)):
plt.subplot(1, len(C_values), i+1)
linear_kernel=sg.LinearKernel(feats_train, feats_train)
svm1 = sg.machine('LibSVM', C1=C_values[i], C2=C_values[i], kernel=linear_kernel, labels=labels)
svm1 = sg.as_svm(svm1)
svm1.train()
vec1=svm1.get_support_vectors()
X_=[]
Y_=[]
new_labels=[]
for j in vec1:
X_.append(traindata[0][j])
Y_.append(traindata[1][j])
new_labels.append(trainlab[j])
out1=svm1.apply(grid)
z1=out1.get_labels().reshape((size, size))
plt.jet()
c=plt.pcolor(x1_, x2_, z1)
        plt.contour(x1_ , x2_, z1, linewidths=1, colors='black')
        plt.colorbar(c)
        plt.gray()
        plt.scatter(X_, Y_, c=new_labels, s=150)
        plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=20)
plt.title('Support vectors for C=%.2f'%C_values[i])
plt.xlabel('attribute1')
plt.ylabel('attribute2')
C_values=[0.1, 1000]
plot_sv(C_values)
# -
# You can see that a lower value of C causes the classifier to sacrifice linear separability in order to gain stability, in the sense that the influence of any single datapoint is now bounded by C. For a hard-margin SVM, support vectors are the points which are "on the margin". In the picture above, C=1000 is pretty close to a hard-margin SVM, and you can see the highlighted points are the ones that touch the margin. In high dimensions this might lead to overfitting. For a soft-margin SVM, with a lower value of C, it's easier to explain them in terms of the dual variables (equation $(2)$). Support vectors are datapoints from the training set which are included in the predictor, i.e., the ones with a non-zero $\alpha_i$ parameter. This includes margin errors and points on the margin of the hyperplane.
# ### Binary classification using different kernels
# Two-dimensional Gaussians are generated as data for this section.
#
# $x_-\sim{\cal N_2}(0,1)-d$
#
# $x_+\sim{\cal N_2}(0,1)+d$
#
# and corresponding positive and negative labels. We create traindata and testdata with ```num``` of them being negatively and positively labelled in traindata, trainlab and testdata, testlab. For that we utilize Shogun's Gaussian Mixture Model class ([GMM](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMM.html)), from which we sample the data points and plot them.
# +
num=50
dist=1.0
gmm=sg.GMM(2)
gmm.set_nth_mean(np.array([-dist,-dist]),0)
gmm.set_nth_mean(np.array([dist,dist]),1)
gmm.set_nth_cov(np.array([[1.0,0.0],[0.0,1.0]]),0)
gmm.set_nth_cov(np.array([[1.0,0.0],[0.0,1.0]]),1)
gmm.put('m_coefficients', np.array([1.0,0.0]))
xntr=np.array([gmm.sample() for i in range(num)]).T
gmm.set_coef(np.array([0.0,1.0]))
xptr=np.array([gmm.sample() for i in range(num)]).T
traindata=np.concatenate((xntr,xptr), axis=1)
trainlab=np.concatenate((-np.ones(num), np.ones(num)))
#shogun format features
feats_train=sg.features(traindata)
labels=sg.BinaryLabels(trainlab)
# +
gaussian_kernel = sg.kernel("GaussianKernel", log_width=np.log(10))
#Polynomial kernel of degree 2
poly_kernel = sg.kernel('PolyKernel', degree=2, c=1.0)
poly_kernel.init(feats_train, feats_train)
linear_kernel = sg.kernel('LinearKernel')
linear_kernel.init(feats_train, feats_train)
kernels=[gaussian_kernel, poly_kernel, linear_kernel]
# -
#train machine
C=1
svm=sg.machine('LibSVM', C1=C, C2=C, kernel=gaussian_kernel, labels=labels)
_=svm.train(feats_train)
# Now let's plot the contour output on a $-5...+5$ grid for
#
# 1. The Support Vector Machines decision function $\mbox{sign}(f(x))$
# 2. The Support Vector Machines raw output $f(x)$
# 3. The Original Gaussian Mixture Model Distribution
# +
size=100
x1=np.linspace(-5, 5, size)
x2=np.linspace(-5, 5, size)
x, y=np.meshgrid(x1, x2)
grid=sg.features(np.array((np.ravel(x), np.ravel(y))))
grid_out=svm.apply(grid)
z=grid_out.get('labels').reshape((size, size))
plt.jet()
plt.figure(figsize=(16,5))
z=grid_out.get_values().reshape((size, size))
plt.subplot(121)
plt.title('Classification')
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black')
plt.colorbar(c)
plt.subplot(122)
plt.title('Original distribution')
gmm.put('m_coefficients', np.array([1.0,0.0]))
gmm.set_features(grid)
grid_out=gmm.get_likelihood_for_all_examples()
zn=grid_out.reshape((size, size))
gmm.set_coef(np.array([0.0,1.0]))
grid_out=gmm.get_likelihood_for_all_examples()
zp=grid_out.reshape((size, size))
z=zp-zn
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black')
plt.colorbar(c)
# -
# And voila! The SVM decision rule reasonably distinguishes the red from the blue points. Despite being optimized for learning the discriminative function that maximizes the margin, the SVM's raw output remotely resembles the original distribution of the Gaussian mixture model.
# Let us visualise the output using different kernels.
# +
def plot_outputs(kernels):
plt.figure(figsize=(20,5))
plt.suptitle('Binary Classification using different kernels', fontsize=12)
for i in range(len(kernels)):
plt.subplot(1,len(kernels),i+1)
plt.title(kernels[i].get_name())
svm.put('kernel', kernels[i])
svm.train()
grid_out=svm.apply(grid)
z=grid_out.get_values().reshape((size, size))
c=plt.pcolor(x, y, z)
        plt.contour(x, y, z, linewidths=1, colors='black')
plt.colorbar(c)
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=35)
plot_outputs(kernels)
# -
# ### Kernel Normalizers
# Kernel normalizers post-process kernel values by carrying out normalization in feature space. Since kernel-based SVMs use a non-linear mapping, in most cases any normalization in input space is lost in feature space. Kernel normalizers are a possible solution to this. Kernel normalization is not, strictly speaking, a form of preprocessing, since it is not applied directly on the input vectors, but it can be seen as a kernel interpretation of the preprocessing. The [KernelNormalizer](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1KernelNormalizer.html) class provides tools for kernel normalization. Some of the kernel normalizers in Shogun:
#
# * [SqrtDiagKernelNormalizer](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSqrtDiagKernelNormalizer.html) : This normalization in the feature space amounts to defining a new kernel $k'({\bf x},{\bf x'}) = \frac{k({\bf x},{\bf x'})}{\sqrt{k({\bf x},{\bf x})k({\bf x'},{\bf x'})}}$
#
# * [AvgDiagKernelNormalizer](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CAvgDiagKernelNormalizer.html) : Scaling with a constant $k({\bf x},{\bf x'})= \frac{1}{c}\cdot k({\bf x},{\bf x'})$
#
# * [ZeroMeanCenterKernelNormalizer](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1ZeroMeanCenterKernelNormalizer.html) : Centers the kernel in feature space and ensures each feature has zero mean after centering.
#
# The `set_normalizer()` method of [Kernel](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Kernel.html) is used to add a normalizer.
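# For instance, the SqrtDiag rule above can be applied to a small precomputed kernel matrix in plain Python (a sketch of the normalization itself, not of Shogun's API; the matrix values are made up):

```python
import math

def sqrtdiag_normalize(K):
    # k'(x, x') = k(x, x') / sqrt(k(x, x) * k(x', x'))
    n = len(K)
    return [[K[i][j] / math.sqrt(K[i][i] * K[j][j]) for j in range(n)]
            for i in range(n)]

K = [[4.0, 2.0],
     [2.0, 9.0]]
Kn = sqrtdiag_normalize(K)
print(Kn)  # diagonal entries become 1.0; off-diagonal 2 / sqrt(4 * 9) = 1/3
```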
# Let us try it out on the [ionosphere dataset](https://archive.ics.uci.edu/ml/datasets/Ionosphere) where we use a small training set of 30 samples to train our SVM. Gaussian kernel with and without normalization is used. See reference [1] for details.
# +
f = open(os.path.join(SHOGUN_DATA_DIR, 'uci/ionosphere/ionosphere.data'))
mat = []
labels = []
# read data from file
for line in f:
words = line.rstrip().split(',')
mat.append([float(i) for i in words[0:-1]])
if str(words[-1])=='g':
labels.append(1)
else:
labels.append(-1)
f.close()
mat_train=mat[:30]
mat_test=mat[30:110]
lab_train=sg.BinaryLabels(np.array(labels[:30]).reshape((30,)))
lab_test=sg.BinaryLabels(np.array(labels[30:110]).reshape((len(labels[30:110]),)))
feats_train = sg.features(np.array(mat_train).T)
feats_test = sg.features(np.array(mat_test).T)
# +
#without normalization
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(0.1))
gaussian_kernel.init(feats_train, feats_train)
C=1
svm=sg.machine('LibSVM', C1=C, C2=C, kernel=gaussian_kernel, labels=lab_train)
_=svm.train()
output=svm.apply(feats_test)
Err=sg.ErrorRateMeasure()
error=Err.evaluate(output, lab_test)
print('Error:', error)
#set normalization
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(0.1))
# TODO: currently there is a bug that makes it impossible to use Gaussian kernels and kernel normalisers
# See github issue #3504
#gaussian_kernel.set_normalizer(sg.SqrtDiagKernelNormalizer())
gaussian_kernel.init(feats_train, feats_train)
svm.put('kernel', gaussian_kernel)
svm.train()
output=svm.apply(feats_test)
Err=sg.ErrorRateMeasure()
error=Err.evaluate(output, lab_test)
print('Error with normalization:', error)
# -
# ### Multiclass classification
# Multiclass classification can be done using SVM by reducing the problem to binary classification. More on multiclass reductions in [this notebook](http://www.shogun-toolbox.org/static/notebook/current/multiclass_reduction.html). The [CGMNPSVM](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMNPSVM.html) class provides built-in [one-vs-rest multiclass](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1MulticlassOneVsRestStrategy.html) classification using [GMNPlib](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMNPLib.html). Let us see classification using it on four classes. The [CGMM](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMM.html) class is used to sample the data.
# +
num=30
num_components=4
means=np.zeros((num_components, 2))
means[0]=[-1.5,1.5]
means[1]=[1.5,-1.5]
means[2]=[-1.5,-1.5]
means[3]=[1.5,1.5]
covs=np.array([[1.0,0.0],[0.0,1.0]])
gmm=sg.GMM(num_components)
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
[gmm.set_nth_cov(covs,i) for i in range(num_components)]
gmm.put('m_coefficients', np.array([1.0,0.0,0.0,0.0]))
xntr=np.array([gmm.sample() for i in range(num)]).T
xnte=np.array([gmm.sample() for i in range(5000)]).T
gmm.put('m_coefficients', np.array([0.0,1.0,0.0,0.0]))
xntr1=np.array([gmm.sample() for i in range(num)]).T
xnte1=np.array([gmm.sample() for i in range(5000)]).T
gmm.put('m_coefficients', np.array([0.0,0.0,1.0,0.0]))
xptr=np.array([gmm.sample() for i in range(num)]).T
xpte=np.array([gmm.sample() for i in range(5000)]).T
gmm.put('m_coefficients', np.array([0.0,0.0,0.0,1.0]))
xptr1=np.array([gmm.sample() for i in range(num)]).T
xpte1=np.array([gmm.sample() for i in range(5000)]).T
traindata=np.concatenate((xntr,xntr1,xptr,xptr1), axis=1)
testdata=np.concatenate((xnte,xnte1,xpte,xpte1), axis=1)
l0 = np.array([0.0 for i in range(num)])
l1 = np.array([1.0 for i in range(num)])
l2 = np.array([2.0 for i in range(num)])
l3 = np.array([3.0 for i in range(num)])
trainlab=np.concatenate((l0,l1,l2,l3))
testlab=np.concatenate((l0,l1,l2,l3))
plt.title('Toy data for multiclass classification')
plt.jet()
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=75)
# -
feats_train=sg.features(traindata)
labels=sg.MulticlassLabels(trainlab)
# Let us try the multiclass classification for different kernels.
# +
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(2))
poly_kernel=sg.kernel('PolyKernel', degree=4, c=1.0)
poly_kernel.init(feats_train, feats_train)
linear_kernel=sg.kernel('LinearKernel')
linear_kernel.init(feats_train, feats_train)
kernels=[gaussian_kernel, poly_kernel, linear_kernel]
# +
svm=sg.GMNPSVM(1, gaussian_kernel, labels)
_=svm.train(feats_train)
size=100
x1=np.linspace(-6, 6, size)
x2=np.linspace(-6, 6, size)
x, y=np.meshgrid(x1, x2)
grid=sg.features(np.array((np.ravel(x), np.ravel(y))))
def plot_outputs(kernels):
plt.figure(figsize=(20,5))
plt.suptitle('Multiclass Classification using different kernels', fontsize=12)
for i in range(len(kernels)):
plt.subplot(1,len(kernels),i+1)
plt.title(kernels[i].get_name())
svm.set_kernel(kernels[i])
svm.train(feats_train)
grid_out=svm.apply(grid)
z=grid_out.get_labels().reshape((size, size))
        c=plt.pcolor(x, y, z)
        plt.contour(x, y, z, linewidths=1, colors='black')
        plt.colorbar(c)
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=35)
plot_outputs(kernels)
# -
# The distinguishing properties of the kernels are visible in these classification outputs.
# ### References
# [1] Classification in a Normalized Feature Space Using Support Vector Machines - <NAME>, <NAME>, and <NAME> - IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 14, NO. 3, MAY 2003
#
# [2] <NAME>.; <NAME> (2004). Convex Optimization ([pdf](http://www.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf)). Cambridge University Press. ISBN 978-0-521-83378-3. Retrieved October 15, 2011.
#
# [3] <NAME>., <NAME>., and <NAME>. (2007). A note on Platt's probabilistic outputs for support vector machines.
| doc/ipython-notebooks/classification/SupportVectorMachines.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="wSaPaVo6KfkV"
# # sentence-transformers (Japanese version)
#
# https://github.com/sonoisa/sentence-transformers
# + id="W731JqPZKeuK"
# !pip install -q transformers==4.7.0 fugashi ipadic
# + id="KAuRL6VPOZzz"
from transformers import BertJapaneseTokenizer, BertModel
import torch
class SentenceBertJapanese:
def __init__(self, model_name_or_path, device=None):
self.tokenizer = BertJapaneseTokenizer.from_pretrained(model_name_or_path)
self.model = BertModel.from_pretrained(model_name_or_path)
self.model.eval()
if device is None:
device = "cuda" if torch.cuda.is_available() else "cpu"
self.device = torch.device(device)
self.model.to(device)
def _mean_pooling(self, model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
@torch.no_grad()
def encode(self, sentences, batch_size=8):
all_embeddings = []
iterator = range(0, len(sentences), batch_size)
for batch_idx in iterator:
batch = sentences[batch_idx:batch_idx + batch_size]
encoded_input = self.tokenizer.batch_encode_plus(batch, padding="longest",
truncation=True, return_tensors="pt").to(self.device)
model_output = self.model(**encoded_input)
sentence_embeddings = self._mean_pooling(model_output, encoded_input["attention_mask"]).to('cpu')
all_embeddings.extend(sentence_embeddings)
# return torch.stack(all_embeddings).numpy()
return torch.stack(all_embeddings)
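The masked mean pooling in `_mean_pooling` can be illustrated with a small NumPy sketch (a simplified stand-in for the torch version, using made-up toy embeddings):

```python
import numpy as np

# Toy batch: one sentence, three token positions, embedding dim 2.
# The last position is padding (attention mask = 0).
token_embeddings = np.array([[[1.0, 2.0],
                              [3.0, 4.0],
                              [9.0, 9.0]]])   # shape (1, 3, 2)
attention_mask = np.array([[1, 1, 0]])        # shape (1, 3)

# Expand the mask over the embedding dimension, then average only the real
# tokens, mirroring the masked sum / clamped count in _mean_pooling above.
mask = attention_mask[..., None].astype(float)        # (1, 3, 1)
summed = (token_embeddings * mask).sum(axis=1)        # (1, 2)
counts = np.clip(mask.sum(axis=1), 1e-9, None)        # (1, 1)
sentence_embedding = summed / counts

print(sentence_embedding)  # [[2. 3.]] -- the padded [9, 9] vector is ignored
```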
# + id="GSBWBtmnGsb1"
model = SentenceBertJapanese("sonoisa/sentence-bert-base-ja-mean-tokens")
# + id="LaSVlRmpQXmV"
# Source: https://qiita.com/sonoisa/items/775ac4c7871ced6ed4c3 (an excerpt of "Irasutoya" image titles published there, with the phrases "のイラスト", "のマーク", and "のキャラクター" removed from the titles)
sentences = ["お辞儀をしている男性会社員", "笑い袋", "テクニカルエバンジェリスト(女性)", "戦うAI", "笑う男性(5段階)",
"漫才師", "お辞儀をしている医者(女性)", "お辞儀をしている薬剤師", "福笑いをしている人", "AIの家族", "コント師",
"福笑い(女性)", "お辞儀をしている犬", "苦笑いをする女性", "お辞儀をしている医者", "いろいろな漫符",
"雛人形「仕丁・三人上戸」", "ダンス「踊る男性」", "拍手をしている人", "定年(男性)", "ものまね芸人", "福笑いのおたふく",
"お辞儀をしている看護師(男性)", "愛想笑い", "福笑い(ひょっとこ)", "成長する人工知能", "苦笑いをする男性",
"運動会「徒競走・白組」", "人工知能と喧嘩をする人", "人工知能", "ありがた迷惑", "お辞儀をしているクマ", "笑う女性(5段階)",
"人工知能とメールをする人(男性)", "技術書", "笑いをこらえる人(女性)", "ダンス「踊る女性」", "お辞儀をしている猫",
"福笑い(男性)", "武器を持つAI", "作曲する人工知能", "縄跳びを飛んでいる女性", "福笑い(おかめ)", "茅の輪くぐり", "表情",
"AIと仲良くなる人間", "お笑い芸人「漫才師」", "人工知能とメールをする人(女性)", "人工知能と戦う囲碁の棋士", "拍手している女の子",
"検索する人工知能", "ピースサインを出す人(女性)", "啓示を受けた人(女性)", "仕事をする人工知能", "一輪車に乗る女の子",
"お辞儀をしているウサギ", "走る猫(笑顔)", "人工知能と戦う将棋の棋士", "遠足「お弁当・男の子・女の子」", "心を持ったAI",
"プレゼントをもらって喜ぶ女の子", "技術書(バラバラ)", "いろいろな表情の酔っぱらい(男性)", "拍手している人(棒人間)",
"仕事を奪う人工知能", "文章を書く人工知能", "いろいろな映画の「つづく」", "絵を描く人工知能", "拍手している男の子", "ハリセン",
"人工知能と仲良くする人たち", "ON AIRランプ", "いろいろな表情の酔っぱらい(女性)", "徹夜明けの笑顔(女性)",
"徹夜明けの笑顔(男性)", "お辞儀をしている女性会社員", "バンザイをしているお婆さん", "画像認識をするAI",
"芸人の男の子(将来の夢)", "料理「女性」", "ピコピコハンマー", "鏡を見る人(笑顔の男性)", "笑いをこらえる人(男性)",
"シンギュラリティ", "人工知能に仕事を任せる人", "スマートスピーカー", "学ぶ人工知能", "人工知能・AI", "英語のアルファベット",
"お金を見つめてニヤけている男性", "「ありがとう」と言っている人", "定年(女性)", "テクニカルエバンジェリスト(男性)",
"スタンディングオベーション"]
# + id="RjklyLfLYB99"
sentence_vectors = model.encode(sentences)
# + [markdown] id="WTRWMqrTghZj"
# ## Clustering semantically similar sentences
# + id="BDMA3fK2YHQT"
from sklearn.cluster import KMeans
num_clusters = 8
clustering_model = KMeans(n_clusters=num_clusters)
clustering_model.fit(sentence_vectors)
cluster_assignment = clustering_model.labels_
clustered_sentences = [[] for i in range(num_clusters)]
for sentence_id, cluster_id in enumerate(cluster_assignment):
clustered_sentences[cluster_id].append(sentences[sentence_id])
for i, cluster in enumerate(clustered_sentences):
print("Cluster ", i+1)
print(cluster)
print("")
# + [markdown] id="WsGFsv6Ngt_h"
# ## Searching for semantically similar sentences
# + id="77PS5zYnYJrj"
import scipy.spatial
queries = ['暴走したAI', '暴走した人工知能', 'いらすとやさんに感謝', 'つづく']
query_embeddings = model.encode(queries).numpy()
closest_n = 5
for query, query_embedding in zip(queries, query_embeddings):
distances = scipy.spatial.distance.cdist([query_embedding], sentence_vectors, metric="cosine")[0]
results = zip(range(len(distances)), distances)
results = sorted(results, key=lambda x: x[1])
print("\n\n======================\n\n")
print("Query:", query)
print("\nTop 5 most similar sentences in corpus:")
for idx, distance in results[0:closest_n]:
print(sentences[idx].strip(), "(Score: %.4f)" % (distance / 2))
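Cosine distance ranges from 0 (same direction) to 2 (opposite direction), which is why the score printed above is `distance / 2`. The same ranking logic can be sketched in plain NumPy on made-up vectors:

```python
import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity, the same metric scipy's cdist uses with metric="cosine"
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

corpus_vecs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
query_vec = np.array([1.0, 0.0])

distances = [cosine_distance(query_vec, v) for v in corpus_vecs]
ranking = sorted(range(len(corpus_vecs)), key=lambda i: distances[i])
print(ranking)  # [0, 2, 1] -- identical direction first, orthogonal last
```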
# + [markdown] id="fopjCM30g1KG"
# ## Visualizing the latent semantic space with TensorBoard
#
# Once TensorBoard has started, select PROJECTOR from the menu at the top right.
# Setting the visualization algorithm (TensorBoard's bottom-left pane) to 2D UMAP and neighbors (TensorBoard's right pane) to 10 makes the plot easier to read.
# + id="GJ-Br-VIfTN6"
# %load_ext tensorboard
import os
logs_base_dir = "runs"
os.makedirs(logs_base_dir, exist_ok=True)
# + id="jAsWvH6XdFBg"
import numpy as np
import torch
from torch.utils.tensorboard import SummaryWriter
import tensorflow as tf
import tensorboard as tb
tf.io.gfile = tb.compat.tensorflow_stub.io.gfile
summary_writer = SummaryWriter()
summary_writer.add_embedding(mat=np.array(sentence_vectors), metadata=sentences)
# + id="504i6WqcfcS1"
# %tensorboard --logdir {logs_base_dir}
# + id="xfAHfc-OgDki"
| sentence_transformers_ja.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
x = np.random.randint(0, 10_000, size= 100_000)
x = pd.Series(x)
x.head()
x.hist()
# +
averages = []
for i in range(1000):
sample = x.sample(50)
averages.append(sample.mean())
averages = pd.Series(averages)
# -
# What is the mean of the sample means?
averages.mean()
# What is the population mean?
x.mean()
averages.hist(bins = 20)
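The histogram of sample means illustrates the central limit theorem: the means cluster around the population mean with a spread of roughly sigma / sqrt(n). A quick check of that relationship, using NumPy's Generator API rather than the `pd.Series.sample` loop above:

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.integers(0, 10_000, size=100_000)

sample_size = 50
means = np.array([rng.choice(population, sample_size).mean()
                  for _ in range(1_000)])

# The spread of the sample means should be close to sigma / sqrt(n)
predicted_se = population.std() / np.sqrt(sample_size)
print(round(means.std(), 1), round(predicted_se, 1))
```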
# +
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# +
# ignore warnings
import warnings
warnings.filterwarnings("ignore")
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
from pydataset import data
# read Iris data from pydatset
df = data('iris')
# convert column names to lowercase, replace '.' in column names with '_'
df.columns = [col.lower().replace('.', '_') for col in df]
df.head()
# +
from sklearn.model_selection import train_test_split
def train_validate_test_split(df, target, seed=123):
'''
This function takes in a dataframe, the name of the target variable
(for stratification purposes), and an integer for setting a seed,
and splits the data into train, validate and test.
Test is 20% of the original dataset, validate is .30*.80= 24% of the
original dataset, and train is .70*.80= 56% of the original dataset.
The function returns, in this order, train, validate and test dataframes.
'''
train_validate, test = train_test_split(df, test_size=0.2,
random_state=seed,
stratify=df[target])
train, validate = train_test_split(train_validate, test_size=0.3,
random_state=seed,
stratify=train_validate[target])
return train, validate, test
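The proportions quoted in the docstring can be sanity-checked with simple arithmetic (a hypothetical dataset size of 1000 is assumed here):

```python
n = 1000
test_n = int(n * 0.2)                     # 20% held out first
train_validate_n = n - test_n             # remaining 80%
validate_n = int(train_validate_n * 0.3)  # 30% of 80% = 24% of the original
train_n = train_validate_n - validate_n   # 70% of 80% = 56% of the original
print(train_n, validate_n, test_n)  # 560 240 200
```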
# +
# split into train, validate, test
train, validate, test = train_validate_test_split(df, target='species', seed=123)
# create X & y version of train, where y is a series with just the target variable and X are all the features.
X_train = train.drop(columns=['species'])
y_train = train.species
X_validate = validate.drop(columns=['species'])
y_validate = validate.species
X_test = test.drop(columns=['species'])
y_test = test.species
# -
# # Train Model
#
#
rf = RandomForestClassifier(bootstrap=True,
class_weight=None,
criterion='gini',
min_samples_leaf=3,
n_estimators=100,
max_depth=3,
random_state=123)
# Fit the model
rf.fit(X_train, y_train)
print(rf.feature_importances_)
# Make Predictions
y_pred = rf.predict(X_train)
# Estimate Probability
y_pred_proba = rf.predict_proba(X_train)
# # Evaluate Model
# Compute the Accuracy
print('Accuracy of random forest classifier on training set: {:.2f}'
.format(rf.score(X_train, y_train)))
# Create a confusion matrix
print(confusion_matrix(y_train, y_pred))
print(classification_report(y_train, y_pred))
# # Validate Model
print('Accuracy of random forest classifier on validate set: {:.2f}'
.format(rf.score(X_validate, y_validate)))
# +
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# +
# Exercises
# Continue working in your model file with titanic data to do the following:
import aquire
import prepare
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
df = aquire.get_titanic_data()
df = df.drop(columns='deck')
df = df[~ df.age.isna()]
df = df[~ df.embarked.isna()]
df = df.drop(columns=['embarked', 'class', 'passenger_id', 'age'])
df.head()
# -
df["is_female"] = df.sex == 'female'  # the sex column holds 'male'/'female' strings, so comparing to 1 was always False
dummy_df = pd.get_dummies(df[["embark_town"]], drop_first=True)
dummy_df
df = pd.concat([df, dummy_df], axis=1)
df = df.drop(columns=['sex', 'embark_town'])
df.head()
train, validate, test = train_validate_test_split(df, target = 'survived')
train.head()
# +
# create X & y version of train, where y is a series with just the target variable and X are all the features.
X_train = train.drop(columns=['survived'])
y_train = train.survived
X_validate = validate.drop(columns=['survived'])
y_validate = validate.survived
X_test = test.drop(columns=['survived'])
y_test = test.survived
# +
# 1) Fit the Random Forest classifier to your training sample and transform
# (i.e. make predictions on the training sample) setting the random_state accordingly and
# setting min_samples_leaf = 1 and max_depth = 10.
rf = RandomForestClassifier(bootstrap=True,
class_weight=None,
criterion='gini',
min_samples_leaf=1,
n_estimators=100,
max_depth=10,
random_state=123)
# Fit the model
rf.fit(X_train, y_train)
# -
print(rf.feature_importances_)
# Make Predictions
y_pred = rf.predict(X_train)
# Estimate Probability
y_pred_proba = rf.predict_proba(X_train)
# +
# 2) Evaluate your results using the model score, confusion matrix, and classification report.
# Compute the Accuracy
print('Accuracy of random forest classifier on training set: {:.2f}'
.format(rf.score(X_train, y_train)))
# -
# Create a confusion matrix
print(confusion_matrix(y_train, y_pred))
print(classification_report(y_train, y_pred))
# 3) Print and clearly label the following: Accuracy, true positive rate, false positive rate,
# true negative rate, false negative rate, precision, recall, f1-score, and support.
import pandas as pd
report = classification_report(y_train, y_pred, output_dict=True)
pd.DataFrame(report)
# +
# 4) Run through steps increasing your min_samples_leaf and decreasing your max_depth.
rf2 = RandomForestClassifier(bootstrap=True,
class_weight=None,
criterion='gini',
min_samples_leaf=4,
n_estimators=100,
max_depth=3,
random_state=123)
# Fit the model
rf2.fit(X_train, y_train)
# -
print(rf2.feature_importances_)
# Make Predictions
y_pred2 = rf2.predict(X_train)
# Estimate Probability
y_pred_proba2 = rf2.predict_proba(X_train)
# Compute the Accuracy
print('Accuracy of random forest classifier on training set: {:.2f}'
.format(rf2.score(X_train, y_train)))
# Create a confusion matrix
print(confusion_matrix(y_train, y_pred2))
print(classification_report(y_train, y_pred2))
import pandas as pd
report = classification_report(y_train, y_pred2, output_dict=True)
pd.DataFrame(report)
# out of sample data
y_val_pred_1 = rf.predict(validate.drop(columns='survived'))
y_val_pred_2 = rf2.predict(validate.drop(columns='survived'))
# get validation accuracy
accuracy_v_1 = rf.score(validate.drop(columns='survived'), validate.survived)
accuracy_v_2 = rf2.score(validate.drop(columns='survived'), validate.survived)
accuracy_v_1
accuracy_v_2
# +
# 5) What are the differences in the evaluation metrics? Which performs better on your
# in-sample data? Why?
def get_metrics_binary(rf):
'''
get_metrics_binary takes in a fitted binary classifier and prints accuracy and the
confusion-matrix rates for its predictions on the global X_train / y_train.
return: a classification report as a transposed DataFrame
'''
y_pred = rf.predict(X_train)
accuracy = rf.score(X_train, y_train)
class_report = pd.DataFrame(classification_report(y_train, y_pred, output_dict=True)).T
conf = confusion_matrix(y_train, y_pred)
tpr = conf[1][1] / conf[1].sum()
fpr = conf[0][1] / conf[0].sum()
tnr = conf[0][0] / conf[0].sum()
fnr = conf[1][0] / conf[1].sum()
print(f'''
The accuracy for our model is {accuracy:.4}
The True Positive Rate is {tpr:.3}, The False Positive Rate is {fpr:.3},
The True Negative Rate is {tnr:.3}, and the False Negative Rate is {fnr:.3}
''')
return class_report
# -
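The rate formulas inside `get_metrics_binary` follow sklearn's confusion-matrix layout (rows = actual class, columns = predicted class). A self-contained sketch on a hypothetical 2x2 matrix:

```python
# Hypothetical confusion matrix: rows = actual (0, 1), cols = predicted (0, 1)
conf = [[50, 10],   # 50 true negatives, 10 false positives
        [ 5, 35]]   #  5 false negatives, 35 true positives

tn, fp = conf[0]
fn, tp = conf[1]

tpr = tp / (tp + fn)   # recall / sensitivity
fpr = fp / (fp + tn)
tnr = tn / (tn + fp)   # specificity
fnr = fn / (fn + tp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(tpr, fpr, tnr, fnr, accuracy)
```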
report = get_metrics_binary(rf)
rf2.fit(X_train, y_train)
report2 = get_metrics_binary(rf2)
# +
# After making a few models, which one has the best performance
# (or closest metrics) on both train and validate?
# report2 has the closest metrics; the first model (rf) appears to be overfit.
| model2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="JL4o3MB7qwNn" executionInfo={"status": "ok", "timestamp": 1630589043942, "user_tz": -330, "elapsed": 36894, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="20ac46ce-418e-45db-ed45-b2141eef8918"
from google.colab import drive
drive.mount("/content/drive")
# + id="2hYU72Pork2n" executionInfo={"status": "ok", "timestamp": 1630589055296, "user_tz": -330, "elapsed": 1993, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}}
import nltk
import matplotlib.pyplot as plt
import pandas as pd
# + colab={"base_uri": "https://localhost:8080/"} id="kDV1x61DrsF0" executionInfo={"status": "ok", "timestamp": 1630589082666, "user_tz": -330, "elapsed": 1118, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="ebb197ce-294b-4fb6-9b8d-b30ce6568159"
# Raw Text Analysis
random_text = """Hello mars. It is a dream #of #2050 #civilizations https://www.instagram.com/p/COs6emJhStvY8GadvQM64OlVh-rT3tFixBrRlg0/"""
import re
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer
remove_link_text = re.sub(r'https?:\/\/.*[\r\n]*', '', random_text)
remove_link_text = re.sub(r'#', '', remove_link_text)
print(remove_link_text)
# + colab={"base_uri": "https://localhost:8080/"} id="Yd2k5q6wrusJ" executionInfo={"status": "ok", "timestamp": 1630589103880, "user_tz": -330, "elapsed": 1409, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="3f8058ef-bf2a-41bc-ee55-a406dc89b42b"
print('\033[92m' + random_text)
print('\033[92m' + remove_link_text)
# + colab={"base_uri": "https://localhost:8080/"} id="N5jBZyovr1Ix" executionInfo={"status": "ok", "timestamp": 1630589119376, "user_tz": -330, "elapsed": 977, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="f9a49aef-c75e-441c-c1a7-6266523a57ca"
from nltk.tokenize import sent_tokenize
text="""Hello Mr. steve, how you doing? whats up? The weather is great, and city is awesome. how you doing? The sky is pinkish-blue. You shouldn't eat cardboard, how you doing?"""
# download punkt
nltk.download("punkt")
tokenized_text=sent_tokenize(text)
print(tokenized_text)
# + colab={"base_uri": "https://localhost:8080/"} id="4otM2wO4r5yG" executionInfo={"status": "ok", "timestamp": 1630589137211, "user_tz": -330, "elapsed": 778, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="0c8bdf59-0343-479f-92d9-4e753bb4570a"
# breaks a paragraph into words
from nltk.tokenize import word_tokenize
tokenized_word=word_tokenize(text)
print(tokenized_word)
# + colab={"base_uri": "https://localhost:8080/", "height": 330} id="Z3-iK7Zar-AN" executionInfo={"status": "ok", "timestamp": 1630589156708, "user_tz": -330, "elapsed": 2514, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="103d8abb-a72b-4fe2-94cc-8ad5671b19d5"
# frequency distribution
from nltk.probability import FreqDist
fdist = FreqDist(tokenized_word)
fdist.most_common(4)
# Frequency Distribution Plot
import matplotlib.pyplot as plt
fdist.plot(30, cumulative = False, color = "green")
plt.show()
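nltk's `FreqDist` behaves much like the standard library's `collections.Counter`; a minimal sketch of the same frequency count on a toy token list:

```python
from collections import Counter

tokens = ["how", "you", "doing", "?", "how", "you", "doing", "?", "how"]
fdist = Counter(tokens)
print(fdist.most_common(1))  # [('how', 3)]
```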
# + colab={"base_uri": "https://localhost:8080/"} id="FubOF8B1sBt7" executionInfo={"status": "ok", "timestamp": 1630589169609, "user_tz": -330, "elapsed": 1396, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="76e650a5-dace-47fa-ac8d-334f16bb13ca"
# stop words
from nltk.corpus import stopwords
# download stopwords
nltk.download("stopwords")
stop_words = set(stopwords.words("english"))
print(stop_words)
# + colab={"base_uri": "https://localhost:8080/"} id="ohwDZJe7sFBy" executionInfo={"status": "ok", "timestamp": 1630589184133, "user_tz": -330, "elapsed": 1555, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="e97f4468-93e1-4be1-adfc-5e415928f104"
filtered_sent=[]
for w in tokenized_word:
if w not in stop_words:
filtered_sent.append(w)
print("Tokenized Sentence:",tokenized_word)
print("Filtered Sentence:",filtered_sent)
# + colab={"base_uri": "https://localhost:8080/"} id="c1lRIBSpsI7O" executionInfo={"status": "ok", "timestamp": 1630589199656, "user_tz": -330, "elapsed": 1453, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="e31c8b46-746d-47bb-b49e-45b589ee25b1"
# stemming
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize
ps = PorterStemmer()
stemmed_words=[]
for w in filtered_sent:
stemmed_words.append(ps.stem(w))
print("Filtered Sentence:",filtered_sent)
print("Stemmed Sentence:",stemmed_words)
# + colab={"base_uri": "https://localhost:8080/"} id="B3ein_dWsMQc" executionInfo={"status": "ok", "timestamp": 1630589215410, "user_tz": -330, "elapsed": 3131, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="69a663f2-2d36-4c99-c8cd-6506429403bd"
#Lexicon Normalization
#performing stemming and Lemmatization
from nltk.stem.wordnet import WordNetLemmatizer
nltk.download('wordnet')
lem = WordNetLemmatizer()
from nltk.stem.porter import PorterStemmer
stem = PorterStemmer()
word = "flying"
print("Lemmatized Word:",lem.lemmatize(word,"v"))
print("Stemmed Word:",stem.stem(word))
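Porter stemming applies a cascade of suffix-rewriting rules; the deliberately naive sketch below (nowhere near the real algorithm) shows the basic idea of suffix stripping:

```python
def naive_stem(word):
    # Toy illustration of suffix stripping -- not Porter's actual rules
    for suffix in ("ing", "ed", "ly", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

print([naive_stem(w) for w in ["flying", "jumped", "quickly", "cats", "run"]])
# ['fly', 'jump', 'quick', 'cat', 'run']
```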
# + id="M3K_2cvCsSx4" executionInfo={"status": "ok", "timestamp": 1630589241393, "user_tz": -330, "elapsed": 830, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}}
import nltk # Python library for NLP
from nltk.corpus import twitter_samples # sample Twitter dataset from NLTK
import matplotlib.pyplot as plt # library for visualization
import random # pseudo-random number generator
# + colab={"base_uri": "https://localhost:8080/"} id="UJvo_KyQsYDX" executionInfo={"status": "ok", "timestamp": 1630589264393, "user_tz": -330, "elapsed": 3821, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="6f25a41a-93a1-4c3f-8ec7-09bfe9a7286e"
#downloads sample twitter dataset.
nltk.download('twitter_samples')
# + id="hLvh_qdcsbfZ" executionInfo={"status": "ok", "timestamp": 1630589274908, "user_tz": -330, "elapsed": 912, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}}
# select the set of positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
# + colab={"base_uri": "https://localhost:8080/"} id="EXnmm1Cgse5v" executionInfo={"status": "ok", "timestamp": 1630589289481, "user_tz": -330, "elapsed": 1665, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="7ab94924-49c3-4bb3-f1d1-791894a43c4d"
print('Number of positive tweets: ', len(all_positive_tweets))
print('Number of negative tweets: ', len(all_negative_tweets))
print('\nThe type of all_positive_tweets is: ', type(all_positive_tweets))
print('The type of a tweet entry is: ', type(all_negative_tweets[0]))
# + colab={"base_uri": "https://localhost:8080/", "height": 303} id="LqL-Tvy_sjBg" executionInfo={"status": "ok", "timestamp": 1630589306744, "user_tz": -330, "elapsed": 1429, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="b4c05098-ca82-494a-99bf-6bed9a30bcf5"
# Declare a figure with a custom size
fig = plt.figure(figsize=(5, 5))
# labels for the classes
labels = 'ML-BSB-Lec', 'ML-HAP-Lec','ML-HAP-Lab'
# Sizes for each slide
sizes = [40, 35, 25]
# Declare pie chart, where the slices will be ordered and plotted counter-clockwise:
plt.pie(sizes, labels=labels, autopct='%.2f%%',
shadow=True, startangle=90)
#autopct enables you to display the percent value using Python string formatting.
#For example, if autopct='%.2f', then for each pie wedge the format string is '%.2f' and the value shown is that wedge's percentage of the whole.
# Equal aspect ratio ensures that pie is drawn as a circle.
plt.axis('equal')
# Display the chart
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 303} id="YaqVm2n6smZJ" executionInfo={"status": "ok", "timestamp": 1630589320849, "user_tz": -330, "elapsed": 2369, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="365f9e45-0516-41e3-daff-034bfa2a2b6f"
# Declare a figure with a custom size
fig = plt.figure(figsize=(5, 5))
# labels for the two classes
labels = 'Positives', 'Negative'
# Sizes for each slide
sizes = [len(all_positive_tweets), len(all_negative_tweets)]
# Declare pie chart, where the slices will be ordered and plotted counter-clockwise:
plt.pie(sizes, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=90)
# Equal aspect ratio ensures that pie is drawn as a circle.
plt.axis('equal')
# Display the chart
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="0fdf-Iocsqi2" executionInfo={"status": "ok", "timestamp": 1630589336837, "user_tz": -330, "elapsed": 1164, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="c70f5ef8-d532-447b-880c-71e881a4460c"
# print positive in green
print('\033[92m' + all_positive_tweets[random.randint(0,5000)])
# print negative in red
print('\033[91m' + all_negative_tweets[random.randint(0,5000)])
# + colab={"base_uri": "https://localhost:8080/"} id="T0UduAm1swGp" executionInfo={"status": "ok", "timestamp": 1630589359611, "user_tz": -330, "elapsed": 1103, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="c485f713-ec27-46cc-c480-317fe1fdee33"
# Our selected sample
tweet = all_positive_tweets[2277]
print(tweet)
# + colab={"base_uri": "https://localhost:8080/"} id="B8cf4MYqszGM" executionInfo={"status": "ok", "timestamp": 1630589373101, "user_tz": -330, "elapsed": 945, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="aa5dc00e-9dbf-4288-f4e7-396ab03ceba9"
# download the stopwords from NLTK
nltk.download('stopwords')
# + id="Mc7Ut509s3AR" executionInfo={"status": "ok", "timestamp": 1630589387898, "user_tz": -330, "elapsed": 1124, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}}
import re # library for regular expression operations
import string # for string operations
from nltk.corpus import stopwords # module for stop words that come with NLTK
from nltk.stem import PorterStemmer # module for stemming
from nltk.tokenize import TweetTokenizer # module for tokenizing strings
# + colab={"base_uri": "https://localhost:8080/"} id="B0KM3Mogs5-v" executionInfo={"status": "ok", "timestamp": 1630589399631, "user_tz": -330, "elapsed": 874, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="dbe19b6b-9889-4a5e-cb0d-8d4ba84adf7d"
print('\033[92m' + tweet)
print('\033[94m')
# remove hyperlinks
tweet2 = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)
# remove hashtags
# only removing the hash # sign from the word
tweet2 = re.sub(r'#', '', tweet2)
print(tweet2)
# + colab={"base_uri": "https://localhost:8080/"} id="y2NS88o1s9C5" executionInfo={"status": "ok", "timestamp": 1630589412419, "user_tz": -330, "elapsed": 917, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="5eaaaee0-9b98-42ab-f1d0-ac72644f30e5"
print()
print('\033[92m' + tweet2)
print('\033[94m')
# instantiate tokenizer class
tokenizer = TweetTokenizer(preserve_case=False)
# tokenize tweets
tweet_tokens = tokenizer.tokenize(tweet2)
print()
print('Tokenized string:')
print(tweet_tokens)
# + colab={"base_uri": "https://localhost:8080/"} id="Jt2_GJvEtC4q" executionInfo={"status": "ok", "timestamp": 1630589437191, "user_tz": -330, "elapsed": 1459, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="e4e248fc-8086-462e-e110-8a21629e70b7"
#Import the english stop words list from NLTK
stopwords_english = stopwords.words('english')
print('Stop words\n')
print(stopwords_english)
print('\nPunctuation\n')
print(string.punctuation)
# + colab={"base_uri": "https://localhost:8080/"} id="3hs1i7-PtGhu" executionInfo={"status": "ok", "timestamp": 1630589451686, "user_tz": -330, "elapsed": 1150, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="36ef13c3-1119-49c3-f1af-0c92073435d8"
print()
print('\033[92m')
print(tweet_tokens)
print('\033[94m')
tweets_clean = []
for word in tweet_tokens: # Go through every word in your tokens list
if (word not in stopwords_english and word not in string.punctuation): # remove punctuation
tweets_clean.append(word)
print('removed stop words and punctuation:')
print(tweets_clean)
# + id="5o8GjrbTtK_c" executionInfo={"status": "ok", "timestamp": 1630589470078, "user_tz": -330, "elapsed": 1566, "user": {"displayName": "CE076_Krupali_Mehta", "photoUrl": "", "userId": "13507360727960861074"}} outputId="04d48ab0-b0c7-4623-cd02-21025ea9d5da" colab={"base_uri": "https://localhost:8080/"}
print()
print('\033[92m')
print(tweets_clean)
print('\033[94m')
# Instantiate stemming class
stemmer = PorterStemmer()
# Create an empty list to store the stems
tweets_stem = []
for word in tweets_clean:
stem_word = stemmer.stem(word) # stemming word
tweets_stem.append(stem_word) # append to the list
print('stemmed words:')
print(tweets_stem)
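The whole pipeline above (strip links and '#', tokenize, drop stop words and punctuation) can be collapsed into one helper. This sketch uses only the standard library, with a tiny hand-rolled stop-word set standing in for nltk's list and a regex tokenizer standing in for TweetTokenizer:

```python
import re
import string

STOPWORDS = {"is", "a", "the", "of", "and", "to", "in", "it", "for"}  # toy list

def clean_tweet(tweet):
    text = re.sub(r'https?://\S+', '', tweet)     # remove hyperlinks
    text = re.sub(r'#', '', text)                 # remove only the hash sign
    tokens = re.findall(r"[\w']+|[.,!?;]", text.lower())
    return [t for t in tokens
            if t not in STOPWORDS and t not in string.punctuation]

print(clean_tweet("Hello mars. It is a dream #of #2050 https://example.com/x"))
# ['hello', 'mars', 'dream', '2050']
```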
| Lab1/Lab1_02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#
# House_Price_Prediction.py
#
# This is a very simple prediction of house prices based on house size, implemented
# in TensorFlow. This code is part of Pluralsight's course "TensorFlow: Getting Started"
#
import tensorflow as tf
import numpy as np
import math
import matplotlib.pyplot as plt
import matplotlib.animation as animation # import animation support
# Generate some house sizes between 1000 and 3500 (typical sq ft of a house)
num_house = 160
np.random.seed(42)
house_size = np.random.randint(low=1000, high=3500, size=num_house )
# Generate house prices from house size with random noise added.
np.random.seed(42)
house_price = house_size * 100.0 + np.random.randint(low=20000, high=70000, size=num_house)
# Plot generated house and size
plt.plot(house_size, house_price, "bx") # bx = blue x
plt.ylabel("Price")
plt.xlabel("Size")
plt.show()
# you need to normalize values to prevent under/overflows.
def normalize(array):
return (array - array.mean()) / array.std()
# define number of training samples, 0.7 = 70%. We can take the first 70% since the values are randomized
num_train_samples = math.floor(num_house * 0.7)
# define training data
train_house_size = np.asarray(house_size[:num_train_samples])
train_price = np.asarray(house_price[:num_train_samples])
train_house_size_norm = normalize(train_house_size)
train_price_norm = normalize(train_price)
# define test data
test_house_size = np.array(house_size[num_train_samples:])
test_house_price = np.array(house_price[num_train_samples:])
test_house_size_norm = normalize(test_house_size)
test_house_price_norm = normalize(test_house_price)
# Set up the TensorFlow placeholders that get updated as we descend down the gradient
tf_house_size = tf.placeholder("float", name="house_size")
tf_price = tf.placeholder("float", name="price")
# Define the variables holding the size_factor and price we set during training.
# We initialize them to some random values based on the normal distribution.
tf_size_factor = tf.Variable(np.random.randn(), name="size_factor")
tf_price_offset = tf.Variable(np.random.randn(), name="price_offset")
# 2. Define the operations for the predicting values - predicted price = (size_factor * house_size ) + price_offset
# Notice, the use of the tensorflow add and multiply functions. These add the operations to the computation graph,
# AND the tensorflow methods understand how to deal with Tensors. Therefore do not try to use numpy or other library
# methods.
tf_price_pred = tf.add(tf.multiply(tf_size_factor, tf_house_size), tf_price_offset)
# 3. Define the Loss Function (how much error) - Mean squared error
tf_cost = tf.reduce_sum(tf.pow(tf_price_pred-tf_price, 2))/(2*num_train_samples)
# Optimizer learning rate. The size of the steps down the gradient
learning_rate = 0.1
# 4. define a Gradient descent optimizer that will minimize the loss defined in the operation "cost".
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(tf_cost)
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph in the session
with tf.Session() as sess:
sess.run(init)
# set how often to display training progress and number of training iterations
display_every = 2
num_training_iter = 50
# calculate the number of lines to animate
fit_num_plots = math.floor(num_training_iter/display_every)
# add storage of factor and offset values from each epoch
fit_size_factor = np.zeros(fit_num_plots)
fit_price_offsets = np.zeros(fit_num_plots)
fit_plot_idx = 0
# keep iterating the training data
for iteration in range(num_training_iter):
# Fit all training data
for (x, y) in zip(train_house_size_norm, train_price_norm):
sess.run(optimizer, feed_dict={tf_house_size: x, tf_price: y})
# Display current status
if (iteration + 1) % display_every == 0:
c = sess.run(tf_cost, feed_dict={tf_house_size: train_house_size_norm, tf_price:train_price_norm})
print("iteration #:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(c), \
"size_factor=", sess.run(tf_size_factor), "price_offset=", sess.run(tf_price_offset))
# Save the fit size_factor and price_offset to allow animation of learning process
fit_size_factor[fit_plot_idx] = sess.run(tf_size_factor)
fit_price_offsets[fit_plot_idx] = sess.run(tf_price_offset)
fit_plot_idx = fit_plot_idx + 1
print("Optimization Finished!")
training_cost = sess.run(tf_cost, feed_dict={tf_house_size: train_house_size_norm, tf_price: train_price_norm})
print("Trained cost=", training_cost, "size_factor=", sess.run(tf_size_factor), "price_offset=", sess.run(tf_price_offset), '\n')
# Plot of training and test data, and learned regression
# get values used to normalized data so we can denormalize data back to its original scale
train_house_size_mean = train_house_size.mean()
train_house_size_std = train_house_size.std()
train_price_mean = train_price.mean()
train_price_std = train_price.std()
# Plot the graph
plt.rcParams["figure.figsize"] = (10,8)
plt.figure()
plt.ylabel("Price")
plt.xlabel("Size (sq.ft)")
plt.plot(train_house_size, train_price, 'go', label='Training data')
plt.plot(test_house_size, test_house_price, 'mo', label='Testing data')
plt.plot(train_house_size_norm * train_house_size_std + train_house_size_mean,
(sess.run(tf_size_factor) * train_house_size_norm + sess.run(tf_price_offset)) * train_price_std + train_price_mean,
label='Learned Regression')
plt.legend(loc='upper left')
plt.show()
#
# Plot an animation of how gradient descent sequentially adjusted size_factor and price_offset to
# find the values that returned the "best" fit line.
fig, ax = plt.subplots()
line, = ax.plot(house_size, house_price)
plt.rcParams["figure.figsize"] = (10,8)
plt.title("Gradient Descent Fitting Regression Line")
plt.ylabel("Price")
plt.xlabel("Size (sq.ft)")
plt.plot(train_house_size, train_price, 'go', label='Training data')
plt.plot(test_house_size, test_house_price, 'mo', label='Testing data')
def animate(i):
line.set_xdata(train_house_size_norm * train_house_size_std + train_house_size_mean) # update the data
line.set_ydata((fit_size_factor[i] * train_house_size_norm + fit_price_offsets[i]) * train_price_std + train_price_mean) # update the data
return line,
# Init only required for blitting to give a clean slate.
def initAnim():
line.set_ydata(np.zeros(shape=house_price.shape[0])) # set y's to 0
return line,
ani = animation.FuncAnimation(fig, animate, frames=np.arange(0, fit_plot_idx), init_func=initAnim,
interval=1000, blit=True)
plt.show()
# -
# Source: Sentinel.ML.Tensorflow/notebooks/.ipynb_checkpoints/House_Price_Prediction-with-anim-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PROJECT: H1-B VISA FORECASTING MODEL
# The H1-B visa is a category of employment-based visa granted by the United States department of immigration to highly skilled foreign workers who want to work for companies located in the United States. H1-B visas are granted under strict stipulations: companies in the United States apply for this visa on behalf of employees from their respective countries. It is also the most common visa status applied for and held by international students once they complete college or higher education and begin working in a full-time position. The idea behind our project is to predict an applicant's chances of getting the visa approved by analyzing parameters such as salary, job title, company profile, and company location. We believe that a predictive model built from past data can be a useful resource for predicting the outcome for both applicants and sponsors.
# # Importing Libraries
# +
import numpy as np
import pandas as pd
import pandas_profiling
import matplotlib
import matplotlib.pyplot as plt
from math import isnan
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn import preprocessing
from sklearn.metrics import accuracy_score
import sklearn.metrics as metrics
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, confusion_matrix, precision_score
import seaborn as sns
from sklearn.cluster import KMeans
import warnings
warnings.filterwarnings("ignore")
# %matplotlib inline
# -
# # Importing Data
#Reading the csv file
df=pd.read_csv('/Users/kulbir/Downloads/dataset111.csv',encoding = "ISO-8859-1")
df
# Creating a copy of dataframe df
dp=df.copy()
# # DATA EXPLORATION
print('No of rows and columns: ',df.shape)
print('\n Total no of entry in each column: \n', df.count())
print('Types of Data:\n',df.info())
df.describe()
# Analyzing the missing values in the columns of dataframe via heatmap
print(df.isnull().sum())
sns.heatmap(df.isnull(),cbar=False)
# # Generating Insights from Data
#
# # 1) H1-B VISA DETAILING CERTIFIED AND DENIED
df.CASE_STATUS.value_counts()
# +
case_status = {
'CASE_STATUS': {
r'CERTIFIED-WITHDRAWN': 'CERTIFIED'}
}
df.replace(case_status, regex=True, inplace=True)
# drop rows with 'WITHDRAWN' value
sp = df[df['CASE_STATUS']=='WITHDRAWN'].index
df.drop(sp , inplace=True)
# -
#color=['lightcoral','lightseagreen','goldenrod','cornflowerblue','darkorchid','olivedrab','lightsalmon','forestgreen']
colors=['indigo','purple','firebrick','indianred','peru','orange','goldenrod','gold','khaki','lemonchiffon']
# Plots before and after processing CASE_STATUS Column
plt.figure(figsize=(10,7))
plt.subplot(1, 2, 1)
dp['CASE_STATUS'].value_counts().plot(kind='bar',title='Visa Petition frequency(with all status)', color=['indigo','indianred','firebrick','goldenrod'])
plt.subplot(1, 2, 2)
df['CASE_STATUS'].value_counts().plot(kind='bar',title='Visa Petition frequency after preprocessing CASE_STATUS', color=['indigo','goldenrod'])
plt.tight_layout()
plt.show()
# Here we have visualized our dataset after removing WITHDRAWN visa applications, as they are not required, and replacing the CERTIFIED-WITHDRAWN status with CERTIFIED.
# # 2) Full Time VS Part time
#Full Time VS Part time
dp['FULL_TIME_POSITION'].hist(color='indianred')
# From the above graph we can say that full-time positions have more applications than part-time positions.
# # 3) Prevailing wage vs AVERAGE WAGE
# +
visa_certified = df[(df["CASE_STATUS"]=='CERTIFIED')]
visa_denied=df[(df["CASE_STATUS"]=='DENIED')]
visa_denied_wage = visa_denied['PREVAILING_WAGE']
visa_denied_count = visa_denied['CASE_STATUS'].count()
visa_denied_avg = ((visa_denied_wage.sum())/visa_denied_count)
#Do people who have been certified the visa in general earn more wage than those whose visa have been denied?
print("Avg wage of visa denied candidates:", visa_denied_avg)
visa_certified_wage = visa_certified['PREVAILING_WAGE']
visa_certified_count = visa_certified['CASE_STATUS'].count()
visa_certified_avg = ((visa_certified_wage.sum())/visa_certified_count)
print("Avg wage of visa certified candidates:", visa_certified_avg)
details = {
'CASE_STATUS' : ['CERTIFIED','DENIED'],
'PREVAILING_WAGE' : [visa_certified_avg,visa_denied_avg],
}
T = pd.DataFrame(details)
fig, ax = plt.subplots()
sns.barplot(x='CASE_STATUS', y='PREVAILING_WAGE', data=df, ax=ax)
ax2 = ax.twinx()
sns.lineplot(x='CASE_STATUS', y='PREVAILING_WAGE', data=T, ax=ax2, color='green')
plt.show()
# -
# From the above it is clear that the average wage of people whose visa was certified is greater than the average wage of people whose visa was denied.
# # 4) Top 10 occupations hiring h1-B applicants
visa_certified = df[df['CASE_STATUS'] == 'CERTIFIED']
visa_soc = visa_certified['SOC_CODE'].value_counts().head(10).to_frame().reset_index().rename(columns={"index": "SOC_CODE", "SOC_CODE": "COUNT"})
visa_soc['PERCENT'] = round(visa_soc['COUNT'] / len(visa_certified),3)
soc_code_list = visa_soc['SOC_CODE']
visa_soc['SOC_NAME'] = np.nan
for i in range(10):
name = df[df['SOC_CODE'] == soc_code_list[i]]['SOC_NAME'].value_counts().reset_index().iloc[0,0]
visa_soc.iloc[i,3] = name
visa_soc
plt.figure(figsize=(10,6))
sns.barplot(x='SOC_NAME', y='PERCENT', data=visa_soc, color='goldenrod')
plt.title('Top 10 Occupations hiring H1-B applicants')
plt.xlabel('Occupation')
plt.ylabel('Percentage of Certified Cases')
plt.xticks(rotation=90)
plt.show()
# The above graph shows that applicants who want to improve their chances of certification should apply for software-developer-oriented roles.
# # 5) Checking top 10 states and cities based on h1b visa counts
five=pd.DataFrame(df.EMPLOYER_STATE.value_counts(normalize = True)).head(10) * 100
five1=df.WORKSITE_CITY.value_counts().head(10)
df.WORKSITE_CITY.value_counts().head(10).plot(kind='bar', title='Top 10 City with most working position', color=colors)
(pd.DataFrame(df.EMPLOYER_STATE.value_counts(normalize = True)).head(10) * 100).plot(kind='bar', title='Top 10 states with most working position', color=colors)
print(five)
print('------------------------------------------------------------------')
print(five1)
# It is not surprising to see that California and New York contain the top working states and cities for H1-B applications, as they are the technical hubs of the United States.
# # 6) Top 10 job positions and companies hiring filing H1-B visa applications
print('Summary of EMPLOYER_NAME column: ',df.EMPLOYER_NAME.describe())
print('Summary of SOC_NAME column: : ',df.SOC_NAME.describe())
# +
# #Plotting top 10 Job position and companies for Visa petition
# plt.figure(figsize=(10,8))
# plt.subplot(1, 2, 1)
# # df.SOC_NAME.value_counts().head(10).plot(kind='bar',title='Top 10 Job Position', color=colors)
# # plt.subplot(1, 2, 2)
# df.EMPLOYER_NAME.value_counts().head(10).plot(kind='bar',title='Top 10 Job Companies', color=colors)
# plt.tight_layout()
# plt.show()
# -
# # 7) Analyzing outliers on case_status and prevailing wage
# +
#Analysing mean and median to understand outliers
print('Median: ', np.nanmedian(df.PREVAILING_WAGE))
print('Mean: ', np.nanmean(df.PREVAILING_WAGE))
df.PREVAILING_WAGE.describe()
# -
# In the PREVAILING_WAGE column, the minimum salary is 0.0 while the maximum is 6000 million. The median value is 65000.0 but the mean is 142891. There is an extreme difference between the minimum and maximum values, so it is clear that multiple outliers exist in the dataset. The box plot of 500 rows of data shows the existence of outliers. Interestingly, in the case_status vs. wage plot, more outliers are identified among denial cases.
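# A synthetic illustration of why a large mean-median gap flags outliers (the numbers below are invented, not taken from this dataset): a handful of extreme values drags the mean far above the median, which barely moves.

```python
import numpy as np

# 95 typical wages plus 5 extreme ones, mimicking a right-skewed column.
wages = np.array([60000.0] * 95 + [6000000.0] * 5)

print(np.median(wages))  # 60000.0 -- robust to the extremes
print(wages.mean())      # 357000.0 -- dragged up by the top five values
```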
#Analyzing outliers in PREVAILING_WAGE and CASE_STATUS
plt.figure(figsize=(10,8))
fig,ax = plt.subplots(1,2)
sns.boxplot(x=dp.PREVAILING_WAGE.head(500), ax=ax[0])
sns.boxplot(x="CASE_STATUS", y="PREVAILING_WAGE", data=df.head(100), palette="flare", ax=ax[1])
plt.tight_layout()
fig.show()
# # 8) Analyzing Data Scientist and Software Developer Jobs
jobs = ['DATA SCIENTIST', 'DATA ANALYST', 'DATA ENGINEER', 'ML ENGINEER', 'BUSINESS ANALYST']
count = []
# Counting the number of applicants related to each job title.
for var in jobs:
q = dp[dp['JOB_TITLE']==var]['JOB_TITLE'].count()
count.append(q)
plt.figure(figsize=(12,5))
plt.bar(x=jobs, height=count, color=colors)
plt.show()
print()
jobs1 = ['SOFTWARE DEVELOPER', 'PROGRAMMER ANALYST', 'JAVA DEVELOPER', 'SOFTWARE ENGINEER', 'SOFTWARE PROGRAMMER']
count1 = []
# Counting the number of applicants related to each job title.
for var in jobs1:
d = dp[dp['JOB_TITLE']==var]['JOB_TITLE'].count()
count1.append(d)
plt.figure(figsize=(12,5))
plt.bar(x=jobs1, height=count1, color=colors)
plt.show()
print()
# From the above graphs it is observed that programmer analysts and business analysts have a higher approval probability than the other roles.
# # 9) Comparing certified and denied applications
df['CASE_STATUS'].value_counts()[:10].plot(kind='pie', title='Case Status Percentage(Certfied & Denied)',legend=False,autopct='%1.1f%%',explode=(0, 0.1),colors = ['indianred', 'gold'],shadow=True, startangle=0)
# We can infer that the percentage of certified applications in our dataset is more than the denied applications.
# # 10) Top 10 Employers in United states
df['EMPLOYER_NAME'].value_counts()[:10].plot(kind='bar',color=colors)
# From the above graphs it can be inferred that the combination of top 10 job positions and top 10 companies may increase the probability of getting the h1-b application certified.
# # 11) Analyzing wage trend over the years
dp['YEAR'] = pd.DatetimeIndex(dp['CASE_SUBMITTED']).year
df['YEAR'] = pd.DatetimeIndex(df['CASE_SUBMITTED']).year
dp['YEAR'].min()
dp['YEAR'].max()
#Trend of PREVAILING_WAGE from 2010-2016
years=[2010,2011,2012,2013,2014,2015,2016]
for year in years:
subset = dp[dp['YEAR']==year]
sns.distplot(subset['PREVAILING_WAGE'], hist=False, kde=True, kde_kws={'linewidth':1}, label=year)
plt.legend(prop={'size':10},title='Wage Trend')
plt.xlabel('Wage')
plt.ylabel('Density')
plt.xlim(0,200000)
# In the above density plot, it shows the trend of wages over the years. All the patterns are right skewed and indicate a number of outliers.
df.WORKSITE_CITY.describe()
# # 12) Analyzing companies trends of H1-B applications over the years
topEmp = list(dp['EMPLOYER_NAME'][dp['YEAR'] >= 2016].groupby(dp['EMPLOYER_NAME']).count().sort_values(ascending=False).head(10).index)
byEmpYear = dp[['EMPLOYER_NAME', 'YEAR', 'PREVAILING_WAGE']][dp['EMPLOYER_NAME'].isin(topEmp)]
byEmpYear = byEmpYear.groupby([dp['EMPLOYER_NAME'],dp['YEAR']])
fig = plt.figure(figsize=(10,6))
for company in topEmp:
tmp = byEmpYear.count().loc[company]
plt.plot(tmp.index.values, tmp["PREVAILING_WAGE"].values, label=company, linewidth=2)
plt.xlabel("Year")
plt.ylabel("# Applications")
plt.legend()
plt.title('# Applications of Top 10 Applicants')
plt.show()
# It is observed that over the years companies hired H1-B applicants at an increasing rate, but after 2015 the number of applications started decreasing.
# # 13) Analyzing avg salary of companies over the years
fig = plt.figure(figsize=(10,6))
for company in topEmp:
tmp = byEmpYear.mean().loc[company]
plt.plot(tmp.index.values, tmp["PREVAILING_WAGE"].values, label=company, linewidth=2)
plt.xlabel("Year")
plt.ylabel("Average Salary($)")
plt.legend()
plt.title("Average Salary of Top 10 Applicants")
plt.show()
# We can infer from the average salary graph that Google was the highest salary payer from 2011-2016, and that Tata Consultancy, while not paying as much, followed an increasing trend in its wages.
# # 14) Most popular jobs with their avg salary and number of applications
newdf=dp.copy()
PopJobs = newdf[['JOB_TITLE', 'EMPLOYER_NAME', 'PREVAILING_WAGE']][newdf['EMPLOYER_NAME'].isin(topEmp)].groupby(['JOB_TITLE'])
topJobs = list(PopJobs.count().sort_values(by='EMPLOYER_NAME', ascending=False).head(30).index)
newdf = PopJobs.count().loc[topJobs].assign(MEAN_WAGE=PopJobs.mean().loc[topJobs])
fig = plt.figure(figsize=(9,10))
ax1 = fig.add_subplot(111)
ax2 = ax1.twiny()
width = 0.35
newdf.EMPLOYER_NAME.plot(kind='barh', ax=ax1, color='indigo', width=0.4, position=0, label='# of Applications')
newdf.MEAN_WAGE.plot(kind='barh', ax=ax2, color='indianred', width=0.4, position=1, label='Mean Salary')
ax1.set_xlabel('Number of Applications')
ax1.set_ylabel('')
ax1.legend(loc=(0.75,0.55))
ax2.set_xlabel('Average Salary')
ax2.set_ylabel('Job Title')
ax2.legend(loc=(0.75,0.50))
plt.show()
df
# # DATA CLEANING
#Dropping the columns not required
df1=df.drop(['H1B_DEPENDENT','AGENT_ATTORNEY_CITY','AGENT_ATTORNEY_NAME','AGENT_ATTORNEY_STATE','EMPLOYER_PHONE','EMPLOYER_PHONE_EXT','EMPLOYER_POSTAL_CODE','EMPLOYER_PROVINCE','JOB_TITLE','NAICS_CODE','WAGE_RATE_OF_PAY_TO','WAGE_UNIT_OF_PAY','WILLFUL_VIOLATOR','CASE_NUMBER','CASE_SUBMITTED','DECISION_DATE','EMPLOYER_ADDRESS','EMPLOYER_CITY','EMPLOYER_COUNTRY','EMPLOYER_PHONE','EMPLOYER_PHONE_EXT','EMPLOYER_POSTAL_CODE','EMPLOYER_PROVINCE','EMPLOYER_STATE','EMPLOYMENT_END_DATE','EMPLOYMENT_START_DATE','PW_UNIT_OF_PAY','PW_WAGE_SOURCE','PW_WAGE_SOURCE_OTHER','PW_WAGE_SOURCE_YEAR','SOC_CODE','TOTAL_WORKERS','VISA_CLASS','WAGE_RATE_OF_PAY_FROM' ,'WAGE_RATE_OF_PAY_TO','WAGE_UNIT_OF_PAY','WILLFUL_VIOLATOR','WORKSITE_COUNTY','WORKSITE_POSTAL_CODE','WORKSITE_STATE'],axis=1)
#Checking for nan
count_nan = len(df1) - df1.count() # checking number of nan values
print(count_nan)
#Dropping the nan value
df1.dropna(subset=['CASE_STATUS','WORKSITE_CITY','FULL_TIME_POSITION','EMPLOYER_NAME','SOC_NAME','H-1B_DEPENDENT','NAIC_CODE','SOC_NAME','PREVAILING_WAGE'], inplace=True) ## dropping null values
#Checking whether the null value got dropped
count_nan = len(df1) - df1.count()
print(count_nan)
# +
case_status = {
'CASE_STATUS': {
r'CERTIFIED-WITHDRAWN': 'CERTIFIED'}
}
df1.replace(case_status, regex=True, inplace=True)
# drop rows with 'WITHDRAWN' value
indexNames = df1[df1['CASE_STATUS']=='WITHDRAWN'].index
df1.drop(indexNames , inplace=True)
# -
df1.CASE_STATUS.value_counts()
#Down sampling
class_certified, class_denied = df1.CASE_STATUS.value_counts()
#Divide by class
df1_samp = df1[df1.CASE_STATUS=='CERTIFIED']
df1_s_d = df1[df1.CASE_STATUS=='DENIED']
seed=7
df1_samp_under =df1_samp.sample(class_denied,random_state=seed)
df1_down = pd.concat([df1_samp_under, df1_s_d], axis=0)
print('Random under-sampling:')
print(df1_down.CASE_STATUS.value_counts())
# Graph before before downsampling
plt.figure(figsize=(10,5))
plt.subplot(1, 2, 1)
df1['CASE_STATUS'].value_counts().plot(kind='bar', title='Count(CASE_STATUS)- Before Downsample', color=['indigo','goldenrod']);
plt.subplot(1, 2, 2)
# Graph after downsampling
df1_down.CASE_STATUS.value_counts().plot(kind='bar', title='Count(CASE_STATUS)-After Downsample',color=['indigo','goldenrod']);
plt.tight_layout()
plt.show()
# Detecting outlier
q1=df1_down["PREVAILING_WAGE"].quantile(0.25)
q3=df1_down["PREVAILING_WAGE"].quantile(0.75)
IQR=q3-q1
outliers=((df1_down["PREVAILING_WAGE"]<(q1 - 1.5*IQR)) | (df1_down["PREVAILING_WAGE"]>(q3 + 1.5*IQR))).sum()
print('No of outlier:', outliers)
#Removing the outliers
df1_down = df1_down.drop(df1_down[df1_down.PREVAILING_WAGE < (q1 - 1.5*IQR)].index)
df1_down = df1_down.drop(df1_down[df1_down.PREVAILING_WAGE > (q3 + 1.5*IQR)].index)
#Plot density before and after removing the outliers from PREVAILING_WAGE
plt.figure(figsize=(10,8))
fig,ax=plt.subplots(2,1)
plt.title('Distribution of PREVAILING WAGE with and without outliers')
sns.distplot(df['PREVAILING_WAGE'], hist=False, kde=True, color='indigo', kde_kws={'linewidth':4}, ax=ax[0])
sns.distplot(df1_down['PREVAILING_WAGE'], hist=False, kde=True, color='indigo', kde_kws={'linewidth':4}, ax=ax[1])
plt.tight_layout()
fig.show()
# +
#Cleaning the EMPLOYER_NAME column using regular expression
df1_down.EMPLOYER_NAME = df1_down.EMPLOYER_NAME.str.lower()
emp_name = {
'EMPLOYER_NAME': {
        r"[.\-,);\"'(+/]": '',
r'ltd':'limited',
r'(&)|&':'and',r'(.gates corporation.$)':'gates corporation',
r'corp$':'corporation',
r'^europeanamerican':'european american',
r'(.euromarket designs inc.$)':'euro market designs inc',
r'(.eurofins lancaster laboratories$)':'eurofins lancaster laboratories inc',
r'^eurocolletion|^eurocollection':'euro collection',
r'^technosoft':'techno soft',
r'^healthcare':'health care',
r'^healthplan':'health plan',
r'warner university inc':'warner university',
r'grouppc$':'group pc',
r'americasinc$':'americas inc'}
}
df1_down.replace(emp_name, regex=True, inplace=True)
# -
#Remove rows of the employers with less than 4 application
df_dict = df1_down.EMPLOYER_NAME.value_counts().to_dict()
emp_list = [k for k,v in df_dict.items() if v<=4]
len(emp_list)
df1_down = df1_down[~df1_down.EMPLOYER_NAME.isin(emp_list)]
# replace 'CERTIFIED' and 'DENIED' label of 'CASE_STATUS' respectively with '1' and '0'
df1_down['CASE_STATUS'] = df1_down['CASE_STATUS'].replace({'CERTIFIED': 1,'DENIED':0})
df1_down.CASE_STATUS.astype(int)
df1_downtest = df1_down.copy()
#replace into 'low', 'medium' and 'high'
df1_downtest['PREVAILING_WAGE_Group'] = pd.cut(df1_downtest['PREVAILING_WAGE'],3)
#df1_downtest['PREVAILING_WAGE_Group']
bins=[-9110.4, 3036800.0, 6073600.0, 9110400.0]
labels=['Low', 'Medium', 'High']
df1_downtest['PREVAILING_WAGE_Group'] = pd.cut(df1_downtest['PREVAILING_WAGE'], bins, labels=labels)
df1_downtest['PREVAILING_WAGE_Group']
df1_down['FULL_TIME_POSITION']=df1_down['FULL_TIME_POSITION'].replace({'Y': 1, 'N': 0})
df1_down.FULL_TIME_POSITION.astype(int)
categorical_col=['EMPLOYER_NAME','SOC_NAME','WORKSITE_CITY','YEAR','PREVAILING_WAGE']
dummy_df = pd.get_dummies(df1_down[categorical_col])
df1_down =pd.concat([df1_down,dummy_df],axis=1)
df1_down =df1_down.drop(categorical_col,axis=1)
label_encoder = preprocessing.LabelEncoder()
df1_down['FULL_TIME_POSITION']= label_encoder.fit_transform(df1_down['FULL_TIME_POSITION'])
#df1_down['NAIC_CODE']= label_encoder.fit_transform(df1_down['NAIC_CODE'])
df1_down['H-1B_DEPENDENT']= label_encoder.fit_transform(df1_down['H-1B_DEPENDENT'])
# # MODEL
X = df1_down.drop('CASE_STATUS', axis = 1)
y = df1_down['CASE_STATUS']
X.shape
y.shape
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 142)
# DecisionTreeClassifier
DecisionTree = DecisionTreeClassifier()
DecisionTree = DecisionTree.fit(X_train,y_train)
y_pred = DecisionTree.predict(X_test)
DTAcc = accuracy_score(y_test,y_pred)
DTAcc
confusion_matrix(y_test, y_pred)
DTPre=precision_score(y_test, y_pred)
DTPRe=metrics.recall_score(y_test, y_pred)
DTPF1=metrics.f1_score(y_test, y_pred)
predictionDTP=DecisionTree.predict(X_test)
# Gaussian Naive Bayes
GNB = GaussianNB()
GNB = GNB.fit(X_train, y_train)
y_pred1 = GNB.predict(X_test)
GNBAcc = accuracy_score(y_test,y_pred1)
GNBAcc
confusion_matrix(y_test, y_pred1)
GNBPre=precision_score(y_test, y_pred1)
GNBRe=metrics.recall_score(y_test, y_pred1)
GNBF1=metrics.f1_score(y_test, y_pred1)
predictionGNB=GNB.predict(X_test)
# KNN
KNN = KNeighborsClassifier(n_neighbors=3)
KNN = KNN.fit(X_train,y_train)
y_pred2 = KNN.predict(X_test)
KNNAcc = accuracy_score(y_test,y_pred2)
KNNAcc
confusion_matrix(y_test, y_pred2)
KNNPre=precision_score(y_test, y_pred2)
KNNRe=metrics.recall_score(y_test, y_pred2)
KNNF1=metrics.f1_score(y_test, y_pred2)
predictionKNN=KNN.predict(X_test)
# Random Forest
RFC = RandomForestClassifier(n_estimators = 100)
RFC=RFC.fit(X_train, y_train)
y_pred3 = RFC.predict(X_test)
RFCAcc=accuracy_score(y_test, y_pred3)
RFCAcc
confusion_matrix(y_test, y_pred3)
RFCPre=precision_score(y_test, y_pred3)
RFCRe=metrics.recall_score(y_test, y_pred3)
RFCF1=metrics.f1_score(y_test, y_pred3)
predictionRFC=RFC.predict(X_test)
# Logistic Regression
LR = LogisticRegression()
LR=LR.fit(X_train, y_train)
y_pred4 = LR.predict(X_test)
LRAcc=accuracy_score(y_test, y_pred4)
LRAcc
confusion_matrix(y_test, y_pred4)
LRPre=precision_score(y_test, y_pred4)
LRRe=metrics.recall_score(y_test, y_pred4)
LRF1=metrics.f1_score(y_test, y_pred4)
predictionLR=LR.predict(X_test)
model1 = ['DecisionTreeClassifier', 'Gaussian Naive bayes', 'K-Nearest Neighbours', 'Random Forest','Logistic Regression']
score1 = [DTAcc, GNBAcc, KNNAcc, RFCAcc, LRAcc]
compare1 = pd.DataFrame({'Model': model1, 'Accuracy': score1}, index=[1, 2, 3, 4, 5,])
compare1
plt.figure(figsize=(13,5))
sns.pointplot(x='Model', y='Accuracy', data=compare1, color='purple')
plt.title('Accuracy')
plt.xlabel('Model')
plt.ylabel('score')
plt.show()
# # Performance metrics
model1 = ['DecisionTreeClassifier', 'Gaussian Naive bayes', 'K-Nearest Neighbours', 'Random Forest', 'Logistic Regression']
pscore = [DTPre, GNBPre, KNNPre, RFCPre, LRPre]
rscore = [DTPRe, GNBRe, KNNRe, RFCRe, LRRe]
fscore = [DTPF1, GNBF1, KNNF1, RFCF1, LRF1]
compare2 = pd.DataFrame({'Model': model1, 'Precision': pscore, 'Recall': rscore, 'F1-Score': fscore}, index=[1, 2, 3, 4, 5,])
compare2
plt.figure(figsize=(13,5))
sns.pointplot(x='Model', y='Precision', data=compare2, color='firebrick')
plt.title('Precision ')
plt.xlabel('Model')
plt.ylabel('score')
plt.show()
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
def roc_curve_graph(x_test,y_test,model):
    preds = model.predict_proba(x_test)[:, 1]
#Compute Receiver operating characteristic (ROC) curve
fpr, tpr, threshold = roc_curve(y_test, preds)
#ROC Score
roc_auc = auc(fpr, tpr)
plt.title('Receiver Operating Characteristic (ROC)')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc, color='dodgerblue')
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--',color='firebrick')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
from imblearn.metrics import classification_report_imbalanced
def metrics_cal(x_test,y_test,prediction,model):
print("Model Accuracy:",metrics.accuracy_score(y_test, prediction))
probs = model.predict_proba(x_test)
roc_auc = metrics.roc_auc_score(y_test, probs[:,1])
print("ROC:",roc_auc)
y_pred = model.predict(X_test)
print("Confusion Matrix:")
print(confusion_matrix(y_test, y_pred))
return print (classification_report_imbalanced(y_test, y_pred))
# # Metrics Calculation for Random Forest
#Metrics Calculation
metrics_cal(X_test,y_test,predictionRFC,RFC)
#ROC Curve
roc_curve_graph(X_test,y_test,RFC)
# # Metrics Calculation for Decision Tree
#Metrics Calculation
metrics_cal(X_test,y_test,predictionDTP, DecisionTree)
roc_curve_graph(X_test,y_test, DecisionTree)
# # Metrics Calculation for KNN
#Metrics Calculation
metrics_cal(X_test,y_test,predictionKNN, KNN)
roc_curve_graph(X_test,y_test, KNN)
# # Metrics Calculation for Gaussian Naive Bayes
#Metrics Calculation
metrics_cal(X_test,y_test,predictionGNB, GNB)
roc_curve_graph(X_test,y_test, GNB)
# # Metrics Calculation for Logistic Regression
#Metrics Calculation
metrics_cal(X_test,y_test,predictionLR, LR)
roc_curve_graph(X_test,y_test, LR)
# # Software Developer Job: Analysis and Prediction
import scipy
from scipy import optimize
dsj = dp[['JOB_TITLE','YEAR']][dp['JOB_TITLE'] == "SOFTWARE DEVELOPER"].groupby('YEAR').count()['JOB_TITLE']
X = np.array(dsj.index)
Y = dsj.values
def func(x, a, b, c):
return a*np.power(x-2011,b)+c
popt, pcov = optimize.curve_fit(func, X, Y)
X1 = np.linspace(2011,2018,9)
X2 = np.linspace(2016,2018,3)
X3 = np.linspace(2017,2018,2)
fig = plt.figure(figsize=(7,5))
plt.scatter(list(dsj.index), dsj.values, c='indigo', marker='s', s=120, label='Data')
plt.plot(X1, func(X1,*popt), color='indigo', label='')
plt.plot(X2, func(X2,*popt), color='indianred', linewidth=3, marker='o', markersize=1, label='')
plt.plot(X3, func(X3,*popt), color='indianred', marker='o', markersize=10, label='Prediction')
plt.legend()
plt.title('Number of Software Developer Jobs')
plt.xlabel('Year')
plt.show()
# # Conclusion
# To build a model, data pre-processing is a very important stage, as your model's efficiency depends a lot on how you process your data.
#
# 1) According to our analysis, the algorithm that best fits our data is Random Forest: its accuracy is higher than that of the other models, and it also produces the best ROC curve. Its precision, recall and F1-score are satisfactory, so we would go with Random Forest for the case_status prediction.
#
# 2) We can also say that the number of software developer jobs will increase from 2016 to 2017 and will rise again from 2017 to 2018.
#
# 3) One should target companies such as INFOSYS LIMITED, TATA CONSULTANCY SERVICES LIMITED, CAPGEMINI AMERICA INC and IBM INDIA PRIVATE LIMITED. These are huge multi-national companies with good immigration teams that can take care of your visa and status.
#
# 4) An MS student pursuing technical courses in the United States of America should apply to giants like Infosys and Tata (top ten companies) for roles like software developer (top ten job roles), as this combination gives a high chance of getting the visa approved.
# # Contribution
# 15/11/2021 : Methodology <NAME>,<NAME>
# 01/12/2021 : Data Exploration Kulbir, Ishita
# 05/12/2021 : Data Cleaning <NAME>ita
# 07/12/2021 : Models Kulbir, Ishita
# 08/12/2021 : Conclusion Kulbir, Ishita
#
# Source: H1-B_Project.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # What are LightCurve objects?
# `LightCurve` objects are data objects which encapsulate the brightness of a star over time. They provide a series of common operations, for example folding, binning, plotting, etc. There are a range of subclasses of `LightCurve` objects specific to telescopes, including `KeplerLightCurve` for Kepler and K2 data and `TessLightCurve` for TESS data.
#
# Although `lightkurve` was designed with Kepler, K2 and TESS in mind, these objects can be used for a range of astronomy data.
#
# You can create a `LightCurve` object from a `TargetPixelFile` object using Simple Aperture Photometry (see our tutorial for more information on Target Pixel Files [here](http://lightkurve.keplerscience.org/tutorials/1.02-target-pixel-files.html)). Aperture photometry is the simple act of summing up the values of all the pixels in a pre-defined aperture, as a function of time. By carefully choosing the shape of the aperture mask, you can avoid nearby contaminants or improve the strength of the specific signal you are trying to measure relative to the background.
#
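# Under the hood, simple aperture photometry is just a masked sum over the pixel axes at every cadence. A standalone numpy sketch on a made-up flux cube (all shapes and values here are invented for illustration, not lightkurve internals):

```python
import numpy as np

# Toy stand-in for a target pixel file: (n_cadences, n_rows, n_cols).
rng = np.random.default_rng(0)
cube = rng.normal(loc=100.0, scale=1.0, size=(50, 5, 5))

# Boolean aperture mask: True marks pixels included in the photometry.
aperture = np.zeros((5, 5), dtype=bool)
aperture[1:4, 1:4] = True  # a 3x3 box around the target star

# Sum the masked pixels at each cadence -> one flux value per time stamp.
sap_flux = cube[:, aperture].sum(axis=1)
print(sap_flux.shape)  # (50,)
```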
# To demonstrate, let's create a `KeplerLightCurve` from a `KeplerTargetPixelFile`.
# +
from lightkurve import KeplerTargetPixelFile, KeplerLightCurve
#First we open a Target Pixel File from MAST, this one is already cached from our previous tutorial!
tpf = KeplerTargetPixelFile.from_archive(6922244, quarter=4)
#Then we convert the target pixel file into a light curve using the pipeline-defined aperture mask.
lc = tpf.to_lightcurve(aperture_mask=tpf.pipeline_mask)
# -
# We've built a new `KeplerLightCurve` object called `lc`. Note in this case we've passed an **aperture_mask** to the `to_lightcurve` method. The default is to use the *Kepler* pipeline aperture. (You can pass your own aperture, which is a boolean `numpy` array.) By summing all the pixels in the aperture we have created a Simple Aperture Photometry (SAP) lightcurve.
#
# `KeplerLightCurve` has many useful functions that you can use. As with Target Pixel Files you can access the meta data very simply:
lc.mission
lc.quarter
# And you still have access to time and flux attributes. In a light curve, there is only one flux point for every time stamp:
lc.time, lc.flux
# You can also check the "CDPP" noise metric of the lightcurve using the built in method:
lc.cdpp()
# Now we can use the built in `plot` function on the `KeplerLightCurve` object to plot the time series. You can pass `plot` any keywords you would normally pass to `matplotlib.pyplot.plot`.
# %matplotlib inline
lc.plot();
# There are a set of useful functions in `LightCurve` objects which you can use to work with the data. These include:
# * `flatten()`: Remove long term trends using a [Savitzky–Golay filter](https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter)
# * `remove_outliers()`: Remove outliers using simple sigma clipping
# * `remove_nans()`: Remove infinite or NaN values (these can occur during thruster firings)
# * `fold()`: Fold the data at a particular period
# * `bin()`: Reduce the time resolution of the array, taking the average value in each bin.
#
# We can use these simply on a light curve object
flat_lc = lc.flatten(window_length=401)
flat_lc.plot();
folded_lc = flat_lc.fold(period=3.5225)
folded_lc.plot();
binned_lc = folded_lc.bin(binsize=10)
binned_lc.plot();
# Or we can do these all in a single (long) line!
lc.remove_nans().flatten(window_length=401).fold(period=3.5225).bin(binsize=10).plot();
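# For intuition, the three chained operations can be approximated with plain numpy/scipy on a synthetic light curve. This is only a sketch of the idea, not lightkurve's implementation, which also handles gaps, flux errors and metadata:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic light curve: a slow trend plus a 3.5225-day box transit.
time = np.linspace(0, 30, 3000)
in_transit = np.abs(((time + 1.0) % 3.5225) - 1.0) < 0.1
flux = 1.0 + 0.001 * time - 0.01 * in_transit

# flatten(): divide out a long-term Savitzky-Golay trend (odd window).
trend = savgol_filter(flux, window_length=401, polyorder=2)
flat = flux / trend

# fold(): map time onto phase in [-0.5, 0.5) at the candidate period.
period = 3.5225
phase = ((time / period) % 1.0 + 0.5) % 1.0 - 0.5
order = np.argsort(phase)
phase, flat = phase[order], flat[order]

# bin(): average every `binsize` consecutive phased points.
binsize = 10
n = (len(flat) // binsize) * binsize
binned = flat[:n].reshape(-1, binsize).mean(axis=1)
print(binned.shape)  # (300,)
```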
# Source: docs/source/tutorials/1.03-what-are-lightcurves.ipynb